
100+ Free Cisco DCID 300-610 Practice Questions

Pass your Designing Cisco Data Center Infrastructure for Traditional and AI Workloads (300-610 DCID v1.2) exam on the first try — instant access, no signup required.

✓ No registration · ✓ No credit card · ✓ No hidden fees · ✓ Start practicing immediately

Pass Rate: not publicly reported by Cisco · 100+ Questions · 100% Free

Key Facts: Cisco DCID 300-610 Exam

  • Exam Time: 90 min (Cisco 300-610 DCID exam page)
  • Exam Fee: $300 (Cisco / Pearson VUE pricing)
  • Blueprint Version: v1.2 (adds AI workloads)
  • Domain Weights (%): 35/25/20/20 (Network / Compute / Storage / Automation)
  • Validity: 3 yrs (Cisco professional cert)
  • Test Delivery: Pearson VUE (in-person or online proctored)

Cisco 300-610 DCID v1.2 is a 90-minute, roughly 55-65-question, $300 concentration exam for the CCNP Data Center certification. Version 1.2 explicitly adds AI workloads, expanding Network Design to cover RoCEv2, InfiniBand, RDMA, GPUs, DPUs, and AI fabric architecture. It still tests VXLAN EVPN, Cisco ACI, vPC, VRF Lite, lossless Ethernet QoS, UCS-X with Intersight Managed Mode, FC/iSCSI storage, NX-API, Ansible/Terraform, Intersight Cloud Orchestrator, and Nexus Dashboard / NDFC. Cisco does not publish a fixed passing score, and the credential is valid for 3 years.

Sample Cisco DCID 300-610 Practice Questions

Try these sample questions to test your Cisco DCID 300-610 exam readiness. Each question includes a detailed explanation. Start the interactive quiz above for the full 100+ question experience with AI tutoring.

1. Which AI workload phase is characterized by long-running, synchronous, all-reduce traffic patterns between large numbers of GPUs?
A. Inference serving
B. Distributed training
C. Data ingestion
D. Model export
Explanation: Distributed training executes for hours to weeks and uses collective operations such as all-reduce and all-gather to synchronize gradients across every GPU in every step. Tail latency on a single flow can stall the entire job, which is why training fabrics demand lossless, high-bandwidth transport.
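The synchronization semantics behind all-reduce can be sketched in a few lines of plain Python. This is a toy reduce-then-broadcast simulation, not a real NCCL/MPI collective; real training stacks run ring or tree algorithms over RoCEv2 or InfiniBand:

```python
# Toy all-reduce: every "GPU" contributes a gradient vector, and every
# GPU must end up with the identical element-wise sum. Because the step
# completes only when all contributions arrive, one slow flow delays
# every participant -- the tail-latency sensitivity described above.

def all_reduce(gradients_per_gpu):
    """Return the per-GPU copies of the reduced (summed) gradient."""
    num_elems = len(gradients_per_gpu[0])
    reduced = [sum(g[i] for g in gradients_per_gpu) for i in range(num_elems)]
    # Broadcast phase: each GPU receives an identical copy of the result.
    return [list(reduced) for _ in gradients_per_gpu]

grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]  # 4 GPUs, 2 params
results = all_reduce(grads)
# Every GPU now holds [16.0, 20.0].
```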
2. Which statement best describes inference compared with training in an AI data center?
A. Inference uses far more GPUs per job than training
B. Inference is latency-sensitive and often single-node, while training is bandwidth- and synchronization-sensitive
C. Inference always requires InfiniBand while training can use plain Ethernet
D. Inference and training generate identical fabric traffic patterns
Explanation: Inference is dominated by user-facing request/response or batch jobs that prioritize p99 latency and typically run on a single node or a small group of GPUs. Training spans hundreds or thousands of GPUs and is gated by the slowest collective, so it prioritizes lossless, deterministic high bandwidth.
3. Which device offloads networking, security, and storage services from the host CPU to free GPU servers for AI workloads?
A. GPU
B. DPU
C. TPM
D. BMC
Explanation: A Data Processing Unit (DPU) such as NVIDIA BlueField or AMD Pensando offloads networking, storage, and security functions to a programmable NIC, returning host CPU cycles to applications. SmartNICs are a closely related category with similar offload goals.
4. An AI training fabric must deliver lossless, high-throughput RDMA between GPU servers over Ethernet. Which protocol is most commonly chosen?
A. iWARP
B. RoCEv2
C. FCoE
D. TRILL
Explanation: RoCEv2 (RDMA over Converged Ethernet v2) is the de facto choice for AI/ML training fabrics on Ethernet because it is routable over UDP/IP and enables RDMA verbs (read/write/send) when combined with PFC and ECN for lossless behavior.
5. Which two Ethernet features must work together to deliver lossless behavior required by RoCEv2 in an AI fabric? (Choose the best pair.)
A. STP and LACP
B. PFC and ECN
C. BFD and uRPF
D. VRRP and HSRP
Explanation: Priority Flow Control (PFC) pauses a specific 802.1p priority class to prevent buffer overflow and packet drops, while Explicit Congestion Notification (ECN) marks packets early so RoCEv2 endpoints can throttle (DCQCN) before any drop occurs. Together they create the lossless behavior RoCEv2 needs.
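On Nexus switches the PFC + ECN pairing is expressed through QoS policy. The fragment below is an illustrative sketch only: the class and policy names are invented, the CoS/DSCP values and WRED thresholds are common but site-specific choices, and exact syntax varies by platform and NX-OS release, so treat it as a shape to recognize rather than a copy-paste config:

```
! Classify RoCEv2 traffic (commonly CoS 3 / DSCP 24) into its own qos-group
class-map type qos match-all ROCE-CLASS
  match cos 3
policy-map type qos ROCE-MARKING
  class ROCE-CLASS
    set qos-group 3

! Network-QoS: make that qos-group a lossless (PFC no-drop) class
policy-map type network-qos ROCE-NQ
  class type network-qos c-8q-nq3
    pause pfc-cos 3
    mtu 9216
system qos
  service-policy type network-qos ROCE-NQ

! Egress queuing: WRED with ECN marking so DCQCN throttles before drops
policy-map type queuing ROCE-QUEUING
  class type queuing c-out-8q-q3
    random-detect minimum-threshold 300 kbytes maximum-threshold 600 kbytes ecn

interface Ethernet1/1
  priority-flow-control mode on
  service-policy type qos input ROCE-MARKING
```

The design point to internalize: PFC (the `pause` line) is the last-resort backstop, while ECN marking (the `random-detect ... ecn` line) is the primary congestion signal; a healthy fabric marks often and pauses rarely.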
6. Which DCB protocol is used by adjacent switches and adapters to negotiate PFC, ETS, and lossless capabilities automatically?
A. LLDP
B. DCBX
C. VTP
D. CDP
Explanation: Data Center Bridging Exchange (DCBX) runs as TLVs inside LLDP to negotiate PFC priorities, ETS bandwidth allocations, and Application Priority between switch and host adapter. Without DCBX, lossless settings must be manually mirrored on every device.
7. ETS (Enhanced Transmission Selection) provides which capability in a lossless data center fabric?
A. Tags every flow with a unique queue identifier
B. Allocates minimum guaranteed bandwidth shares to traffic classes while allowing unused capacity to be reclaimed
C. Performs IPSec encryption between endpoints
D. Buffers idle ports during topology changes
Explanation: ETS (IEEE 802.1Qaz) allocates a minimum guaranteed bandwidth percentage to each Traffic Class Group (storage, compute, management) and lets idle capacity be reclaimed by other groups. It pairs with PFC to give different traffic classes appropriate behavior on a converged fabric.
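The reclaim behavior is the part candidates most often misread, so here is a toy model of it. This is not an 802.1Qaz implementation, and the weights and demands are hypothetical; it only shows that each class keeps its guarantee while idle capacity flows to busy classes in proportion to their weights:

```python
def ets_allocate(link_gbps, weights, demand_gbps):
    """Toy ETS-style allocation: each class is guaranteed weight% of the
    link, and capacity unused by idle classes is redistributed to the
    remaining busy classes in proportion to their weights."""
    # Phase 1: everyone gets min(demand, guaranteed share).
    alloc = {c: min(demand_gbps[c], link_gbps * w / 100) for c, w in weights.items()}
    leftover = link_gbps - sum(alloc.values())
    # Phase 2: redistribute unused capacity to classes still hungry.
    while leftover > 1e-9:
        hungry = {c: w for c, w in weights.items() if demand_gbps[c] - alloc[c] > 1e-9}
        if not hungry:
            break
        total_w = sum(hungry.values())
        progress = 0.0
        for c, w in hungry.items():
            extra = min(leftover * w / total_w, demand_gbps[c] - alloc[c])
            alloc[c] += extra
            progress += extra
        leftover -= progress
        if progress < 1e-9:
            break
    return alloc

weights = {"storage": 40, "compute": 40, "mgmt": 20}   # ETS shares (%)
demand  = {"storage": 10, "compute": 90, "mgmt": 0}    # offered load (Gbps)
alloc = ets_allocate(100, weights, demand)
# storage keeps its 10 Gbps; compute reclaims the idle storage/mgmt capacity
# and reaches 90 Gbps, well beyond its 40% guarantee.
```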
8. Which AI fabric option natively provides lossless transport, Subnet Manager-based addressing, and credit-based flow control without requiring PFC/ECN tuning?
A. RoCEv2 over Ethernet
B. InfiniBand
C. iSCSI
D. FCoE
Explanation: InfiniBand uses a credit-based link layer that is inherently lossless, with a Subnet Manager (SM) handling addressing (LIDs), routing, and partitioning. NDR runs at 400 Gbps and HDR at 200 Gbps per port. Operators choose IB when they want a turnkey lossless fabric without PFC/ECN tuning.
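Credit-based flow control is worth contrasting with PFC, and a toy model makes the difference concrete. This is a simplification (real InfiniBand tracks credits per virtual lane in hardware): the sender may only transmit while the receiver has advertised free buffer credits, so a full buffer stalls the sender instead of dropping frames:

```python
from collections import deque

class CreditLink:
    """Toy credit-based flow control: the receiver advertises one credit
    per free buffer slot; at zero credits the sender blocks rather than
    transmitting into a full buffer, so nothing is ever dropped."""

    def __init__(self, buffer_slots):
        self.credits = buffer_slots
        self.rx_buffer = deque()

    def send(self, packet):
        if self.credits == 0:
            return False              # sender stalls -- no loss, no retransmit
        self.credits -= 1
        self.rx_buffer.append(packet)
        return True

    def receive(self):
        pkt = self.rx_buffer.popleft()
        self.credits += 1             # freed slot returns a credit to the sender
        return pkt

link = CreditLink(buffer_slots=2)
sent = [link.send(p) for p in ("p1", "p2", "p3")]  # third send stalls
link.receive()                                     # frees one credit
retry_ok = link.send("p3")                         # now succeeds
```

Contrast with Ethernet: a plain Ethernet link drops on buffer overflow unless PFC/ECN are layered on, whereas here losslessness is built into the link protocol itself.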
9. What is the per-port line rate of NDR InfiniBand commonly used in NVIDIA H100 GPU clusters?
A. 100 Gbps
B. 200 Gbps
C. 400 Gbps
D. 800 Gbps
Explanation: NDR (Next Data Rate) InfiniBand operates at 400 Gbps per port and is the generation paired with NVIDIA's Quantum-2 switches and ConnectX-7 adapters used in H100/H200 clusters. HDR is the previous 200 Gbps generation.
10. Which RDMA capability allows a GPU server to write directly into the memory of a remote GPU server without involving either kernel?
A. TCP offload
B. RDMA send/receive and read/write verbs
C. Spanning Tree fast convergence
D. VXLAN bridge domain
Explanation: RDMA verbs (send, receive, read, write) bypass both kernels and copy data directly between user-space buffers on remote hosts via the NIC. This zero-copy path eliminates host CPU overhead and is what makes RoCEv2/InfiniBand attractive for AI training.
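The zero-copy idea can be loosely illustrated in Python with `memoryview`. This is a local analogy only, not RDMA: real one-sided writes go across hosts via the NIC and registered memory regions using the verbs API (e.g. `ibv_post_send` with an RDMA-write work request in libibverbs), with no CPU or kernel involvement on the target:

```python
# Loose analogy for a one-sided RDMA write: the "remote" buffer plays the
# role of a registered memory region, and the writer places bytes directly
# into it with no intermediate staging copy. Over a real fabric the NIC
# performs this placement, bypassing both kernels entirely.

remote_buffer = bytearray(16)        # stands in for a registered memory region
region = memoryview(remote_buffer)   # zero-copy window into that memory

payload = b"gradients"
region[0:len(payload)] = payload     # the "write": bytes land in place

# The "remote" side sees the data without ever having copied it.
```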

About the Cisco DCID 300-610 Exam

Cisco 300-610 DCID v1.2 (Designing Cisco Data Center Infrastructure for Traditional and AI Workloads) is the concentration exam for the CCNP Data Center certification. The v1.2 blueprint, released to align with AI-era data center designs, adds AI/ML concepts (training vs inference, GPUs, DPUs/SmartNICs), high-performance fabrics (RoCEv2 with PFC/ECN/DCBX, InfiniBand, RDMA), and Nexus Dashboard topics on top of the traditional network, compute, storage, and automation domains. Candidates design Layer 2/Layer 3 connectivity, VXLAN EVPN multi-site, ACI multi-pod, UCS-X with Intersight Managed Mode, FC and iSCSI storage, and end-to-end automation across Intersight and Nexus Dashboard.

Assessment

Approximately 55-65 multiple-choice and multi-select questions covering Network Design (35%), Compute Design (25%), Storage Network Design (20%), and Automation Design (20%) for traditional and AI workloads

Time Limit

90 minutes

Passing Score

Cisco does not publish a fixed passing score

Exam Fee

$300 USD (Cisco / Pearson VUE)

Cisco DCID 300-610 Exam Content Outline

35%

Network Design

AI/ML concepts (training vs inference, GPUs, DPUs/SmartNICs, sustainability), high-performance and AI fabrics (RoCEv2, Ethernet, InfiniBand, RDMA), L2 (vPC, LACP, endpoint mobility, services insertion), L3 (graceful restart/NSF, VRF Lite), lossless Ethernet QoS (PFC, ETS, DCBX), VXLAN EVPN DCI, Nexus Dashboard, and ACI segmentation

25%

Compute Design

Ethernet and storage connectivity, FI end-host vs switch mode, Cisco VIC virtualization with service profiles and Ethernet/FC adapter policies, UCS-X 9508 design choices, and compute requirements for AI/ML

20%

Storage Network Design

iSCSI deployment with multipathing and dual-fabric addressing, QoS for FC and iSCSI, Fibre Channel port types and ISL design, oversubscription, and storage deployment for traditional and high-performance networks

20%

Automation Design

Cisco Intersight, NX-API REST (JSON/XML), model-driven programmability, Ansible, Python, Terraform CLI, service-profile templates with vNIC/vHBA templates, global vs local policies, Intersight Cloud Orchestrator workflows, and automated deployment via Nexus Dashboard
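Of the automation topics above, NX-API is the easiest to picture concretely: it accepts CLI commands as a JSON-RPC POST to the switch's `/ins` endpoint. The sketch below only builds the documented payload shape and does not actually send anything; the switch URL in the comment is a placeholder:

```python
import json

def nxapi_cli_payload(commands):
    """Build an NX-API JSON-RPC request body: one 'cli' call per command,
    each with a unique id so responses can be matched to requests."""
    return [
        {"jsonrpc": "2.0",
         "method": "cli",
         "params": {"cmd": cmd, "version": 1},
         "id": i + 1}
        for i, cmd in enumerate(commands)
    ]

body = nxapi_cli_payload(["show version", "show vpc"])
print(json.dumps(body, indent=2))
# To execute, POST this body with Content-Type: application/json-rpc to
# https://<switch>/ins, e.g. requests.post(url, json=body, auth=(user, pw)).
```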

How to Pass the Cisco DCID 300-610 Exam

What You Need to Know

  • Passing score: Cisco does not publish a fixed passing score
  • Assessment: Approximately 55-65 multiple-choice and multi-select questions covering Network Design (35%), Compute Design (25%), Storage Network Design (20%), and Automation Design (20%) for traditional and AI workloads
  • Time limit: 90 minutes
  • Exam fee: $300 USD

Keys to Passing

  • Complete 500+ practice questions
  • Score 80%+ consistently before scheduling
  • Focus on highest-weighted sections
  • Use our AI tutor for tough concepts

Cisco DCID 300-610 Study Tips from Top Performers

1. Memorize the v1.2 AI fabric stack: RoCEv2 needs PFC + ECN, DCBX negotiates, ETS shares bandwidth, and InfiniBand NDR is 400 Gbps with a Subnet Manager
2. Be able to compare training vs inference traffic patterns: long-running synchronous all-reduce/all-gather (training) vs latency-sensitive request/response (inference)
3. Know the Nexus 9300 GX2 platforms used in AI fabrics: 9332D-GX2B (1RU 32x 400G) and 9364D-GX2A (2RU 64x 400G), plus when to choose leaf vs spine roles
4. Master VXLAN EVPN: OSPF/IS-IS underlay, BGP L2VPN EVPN overlay, Multi-Site Border Gateways, and ACI Multi-Pod vs Multi-Site differences
5. Understand UCS-X chassis layout: IFMs multiplex Ethernet to the FI 6500-series in IMM, X-Fabric Modules expose PCIe for GPU resources, server-profile templates with vNIC/vHBA templates manage policy at scale
6. Practice the automation stack end-to-end: NX-API JSON/XML, Ansible, Python, Terraform with the Intersight/ACI providers, ICO workflows, and NDFC Easy Fabric POAP

Frequently Asked Questions

What is the Cisco 300-610 DCID v1.2 exam?

Cisco 300-610 DCID is the Designing Cisco Data Center Infrastructure for Traditional and AI Workloads concentration exam. It is one of the concentration choices for the CCNP Data Center certification (paired with the 350-601 DCCOR core). Version 1.2 added AI/ML workloads, RoCEv2/InfiniBand/RDMA fabrics, and Nexus Dashboard topics.

How long is the Cisco 300-610 exam and how much does it cost?

The 300-610 DCID exam is 90 minutes long and costs $300 USD. It is delivered through Pearson VUE either at a testing center or online with remote proctoring. Cisco typically lists 55-65 questions per delivery but does not publish an exact count.

What is the passing score for the Cisco 300-610 DCID exam?

Cisco does not publish a fixed passing score for the 300-610 exam. Scaled scores are reported as pass or fail, and Cisco recommends preparing with a target of mastery across all four domains rather than chasing a numerical threshold.

What changed in DCID v1.2?

Version 1.2 renames the exam to 'Designing Cisco Data Center Infrastructure for Traditional and AI Workloads' and explicitly adds AI/ML topics: training vs inference, GPUs, DPUs/SmartNICs, AI fabric requirements, RoCEv2, Ethernet, InfiniBand, and RDMA. It also formalizes lossless Ethernet QoS (PFC, ETS, DCBX) and Nexus Dashboard for centralized management.

What topics does the 300-610 exam cover?

The blueprint has four domains: Network Design (35%) covering AI/ML, lossless fabrics, Layer 2/3 connectivity, VXLAN EVPN, and ACI; Compute Design (25%) covering UCS-X, Intersight Managed Mode, VIC adapter policies, and AI compute; Storage Network Design (20%) covering iSCSI multipathing, FC port types, ISLs, and high-performance storage; and Automation Design (20%) covering Intersight, NX-API, Ansible, Python, Terraform, ICO, and Nexus Dashboard.

How long is the 300-610 certification valid?

Cisco professional-level certifications are valid for 3 years. You can recertify by passing a current professional or expert exam, by earning Continuing Education credits through approved Cisco activities, or by a combination of the two before your CCNP Data Center expires.

How should I prepare for Cisco 300-610 DCID v1.2?

Plan 60-100 hours over 6-10 weeks. Start with Cisco's exam topics page and the DCID course. Build hands-on familiarity with Nexus 9000 / VXLAN EVPN, Cisco ACI, UCS-X with Intersight Managed Mode, MDS Fibre Channel, NX-API, Terraform, and Intersight Cloud Orchestrator. Use this 100-question practice set, the Cisco Press DCID guide, and Cisco DevNet sandboxes to lock in the AI workload topics that are new in v1.2.