
100+ Free IBM Cloud Pak for Data v4.7 Architect Practice Questions

Pass your IBM Certified Architect - Cloud Pak for Data V4.7 (C1000-173) exam on the first try — instant access, no signup required.

✓ No registration ✓ No credit card ✓ No hidden fees ✓ Start practicing immediately
100+ Questions
100% Free

Key Facts: IBM Cloud Pak for Data v4.7 Architect Exam

  • Questions: 62 (source: IBM Training)
  • Passing score: 41/62 (source: IBM Training)
  • Exam duration: 90 minutes (source: IBM Training)
  • Exam fee: $200 (source: Pearson VUE)
  • Largest domains: Plan for Implementation and Data Governance, 19% each
  • Content areas: 6 sections (source: IBM prep guide)

The IBM C1000-173 exam has 62 questions in 90 minutes, requiring 41 correct (~66%) to pass. The six official sections are Plan for Implementation 19%, Security 16%, AI Services 17%, Analytic Services 16%, Data Governance 19%, and Data Source Services 13%. The exam fee is $200 USD via Pearson VUE.
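As a quick sanity check, the pass threshold and approximate per-section question counts follow directly from these published figures. The per-section counts below are rounded estimates from the official weights; IBM does not publish exact per-section question counts.

```python
# Published exam figures for IBM C1000-173.
TOTAL_QUESTIONS = 62
PASSING_CORRECT = 41

# Official section weights from the exam outline.
weights = {
    "Plan for Implementation": 0.19,
    "Security": 0.16,
    "AI Services": 0.17,
    "Analytic Services": 0.16,
    "Data Governance": 0.19,
    "Data Source Services": 0.13,
}

passing_pct = PASSING_CORRECT / TOTAL_QUESTIONS
print(f"Passing score: {passing_pct:.1%}")  # 66.1%

# Rounded estimate of questions per section (not published by IBM).
for name, weight in weights.items():
    print(f"{name}: ~{round(weight * TOTAL_QUESTIONS)} questions")
```

Note that the two 19% sections work out to roughly 12 questions each, which is why they deserve the most study time.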

Sample IBM Cloud Pak for Data v4.7 Architect Practice Questions

Try these sample questions to test your IBM Cloud Pak for Data v4.7 Architect exam readiness. Each question includes a detailed explanation. Start the interactive quiz above for the full 100+ question experience with AI tutoring.

1. Which Red Hat platform is required to run IBM Cloud Pak for Data v4.7?
A. Red Hat Enterprise Linux on bare metal only
B. Red Hat OpenShift Container Platform
C. Red Hat Ansible Tower
D. Red Hat Satellite
Explanation: Cloud Pak for Data v4.7 is delivered as a set of operators that run on Red Hat OpenShift Container Platform (OCP). All services, the control plane, and add-ons are packaged as OpenShift operators and require an OCP cluster (managed or self-managed) of a supported version.
2. An architect must choose between IBM Cloud Pak for Data as a Service and self-managed Cloud Pak for Data on customer-owned OpenShift. Which factor most strongly favors the SaaS option?
A. The customer needs to run the platform in an air-gapped data center
B. The customer wants minimum operational overhead and rapid time-to-value
C. The customer must integrate with on-premises mainframe Db2 z/OS through private network only
D. The customer requires fine-grained control over OpenShift node sizing
Explanation: Cloud Pak for Data as a Service is fully hosted by IBM on IBM Cloud. Customers do not patch the cluster, install operators, or size nodes, which minimizes operations and accelerates time-to-value. Air-gapped, private-only, or hardware-tuned scenarios point to self-managed deployments.
3. Which storage option does IBM recommend for production Cloud Pak for Data v4.7 deployments that require ReadWriteMany (RWX) access?
A. hostPath
B. IBM Storage Fusion, OpenShift Data Foundation (OCS), or NFS
C. OpenShift Local Storage Operator with block-only volumes
D. AWS EBS gp3
Explanation: Cloud Pak for Data v4.7 requires a storage class that supports both RWO and RWX persistent volumes. IBM supports IBM Storage Fusion, OpenShift Data Foundation (formerly OCS), Portworx, and NFS for production workloads. EBS gp3 is RWO only, and hostPath/local storage is not suitable for stateful, HA services.
4. What is the smallest deployment topology IBM lists for a Cloud Pak for Data v4.7 production cluster, in addition to the OpenShift control-plane nodes?
A. 1 worker node
B. 2 worker nodes
C. 3 worker nodes
D. 5 worker nodes
Explanation: For production deployments, IBM recommends a minimum of three OpenShift worker nodes plus three control-plane nodes. Three workers allow services that require pod anti-affinity and quorum (etcd-style replicas, Db2 HADR, Watson Studio runtimes) to remain available during a node failure or rolling update.
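The quorum arithmetic behind this answer can be sketched in a few lines of Python. This is a simplified model for intuition only, not an IBM tool; real scheduling also depends on zones, taints, and resource requests.

```python
def survives_node_failure(workers: int, replicas: int) -> bool:
    """Can a quorum-based service with anti-affine replicas survive
    the loss of one worker node? (Simplified illustrative model.)"""
    # Pod anti-affinity puts each replica on a distinct worker, so at
    # most min(workers, replicas) replicas are actually scheduled.
    placed = min(workers, replicas)
    remaining = placed - 1          # one worker (and its replica) is lost
    quorum = replicas // 2 + 1     # majority needed for etcd-style quorum
    return remaining >= quorum

# Three workers keep a 3-replica quorum service available through a
# single node failure; two workers cannot.
print(survives_node_failure(3, 3))  # True
print(survives_node_failure(2, 3))  # False
```

This is why answers A and B fail in practice: with fewer than three workers, a 3-replica service either cannot spread its replicas or loses quorum the moment one node goes down.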
5. Which tool is the primary mechanism for backup and restore of Cloud Pak for Data v4.7 services on a self-managed cluster?
A. OADP (OpenShift API for Data Protection) or IBM Storage Fusion Backup & Restore
B. rsync from each pod's persistent volume
C. etcd snapshots only
D. Db2 online backup using db2 backup db
Explanation: IBM documents OADP (OpenShift API for Data Protection, based on Velero) and IBM Storage Fusion Backup & Restore as the supported tools to back up and restore Cloud Pak for Data namespaces, custom resources, and persistent volumes. They orchestrate quiescing of services and PV snapshots together.
6. Which statement best describes Cloud Pak for Data v4.7 multi-tenancy?
A. Tenants are separated by deploying multiple Cloud Pak for Data instances or by using projects within a single instance
B. Tenants are isolated by network policy only, sharing all OpenShift namespaces
C. Each tenant must be installed in its own OpenShift cluster — no in-cluster multi-tenancy is supported
D. Tenants share a single project but have separate Db2 schemas
Explanation: Cloud Pak for Data supports multi-tenancy at two levels: deploying multiple CP4D instances on the same or different clusters, or creating multiple projects (with role-based access) inside a single instance for logical separation of users, data assets, and runtimes. Strong isolation typically uses separate instances; soft isolation uses projects.
7. When sizing Cloud Pak for Data v4.7 for production, IBM publishes per-service minimum CPU, memory, and storage requirements. Where does the architect normally find these values?
A. Pearson VUE candidate handbook
B. The Cloud Pak for Data system requirements pages in IBM Documentation
C. OpenShift Container Platform release notes
D. Red Hat Customer Portal Knowledge Base only
Explanation: IBM publishes detailed system requirements for Cloud Pak for Data v4.7 in the product documentation under 'System requirements' for each service. These pages list per-service minimum CPU, memory, and persistent storage needs that the architect aggregates to size the cluster.
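The aggregation this explanation describes can be sketched as follows. The per-service figures below are invented placeholders for illustration only; real values must always be taken from IBM's system-requirements pages for the exact services and versions being deployed.

```python
# Sketch of the sizing exercise: sum per-service minima, then add
# headroom for node failures and rolling upgrades.
# WARNING: the numbers below are ILLUSTRATIVE PLACEHOLDERS, not IBM's
# published requirements.
services = {
    # service name: (vCPU, memory in GiB) -- hypothetical values
    "control plane": (12, 48),
    "Watson Studio": (8, 32),
    "Db2 Warehouse": (16, 64),
}

total_cpu = sum(cpu for cpu, _ in services.values())
total_mem = sum(mem for _, mem in services.values())

# Leave ~30% headroom so the cluster tolerates a worker outage.
HEADROOM = 1.3
print(f"Plan for >= {total_cpu * HEADROOM:.0f} vCPU "
      f"and {total_mem * HEADROOM:.0f} GiB RAM across workers")
```

The exam tests the process (find per-service minima, aggregate, add headroom), not memorized numbers, so focus on where the figures come from rather than the figures themselves.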
8. An architect must design Cloud Pak for Data for high availability across an OpenShift cluster with three availability zones. Which design choice BEST supports service-level HA?
A. Spread worker nodes evenly across three zones and rely on service replicas with pod anti-affinity
B. Place all worker nodes in one zone and snapshot persistent volumes nightly
C. Run a single replica of each service to reduce shared-state contention
D. Use ReadWriteOnce storage for every service, including those requiring shared file access
Explanation: HA on Cloud Pak for Data depends on a multi-zone OpenShift cluster with worker nodes balanced across zones and on services configured with multiple replicas plus pod anti-affinity so a zone failure does not take a service offline. Storage classes that survive a zone outage round out the design.
9. A customer is planning a Cloud Pak for Data deployment in a regulated environment with no internet access. Which deployment path applies?
A. Cloud Pak for Data as a Service on IBM Cloud
B. Air-gapped install using a private container image registry mirrored from the IBM Entitled Registry
C. Direct OpenShift OperatorHub install pulling images from quay.io at runtime
D. Helm chart install from artifacthub.io
Explanation: Air-gapped Cloud Pak for Data installs require mirroring the required images from the IBM Entitled Registry into a private registry inside the network and configuring the OpenShift cluster to pull from it. The IBM 'Mirroring images for an air-gapped install' procedure documents the supported flow.
10. Which statement about migrating from an older Cloud Pak for Data version (e.g., 4.0) to v4.7 is accurate?
A. Migration is automatic with no version dependencies
B. An architect must follow IBM's documented upgrade path, often through intermediate versions, and verify service-specific upgrade prerequisites
C. Cloud Pak for Data does not support upgrades; a clean install is mandatory
D. Only the control plane is upgraded; services remain on the original version
Explanation: IBM publishes a supported upgrade path matrix for Cloud Pak for Data. Customers may need to step through intermediate versions to reach v4.7, and each service has its own pre-upgrade checks (e.g., Db2 backups, WKC reindex, Watson Studio runtime updates) that must be completed.

About the IBM Cloud Pak for Data v4.7 Architect Exam

IBM Certified Architect - Cloud Pak for Data V4.7 (C1000-173) validates the ability to design Data and AI solutions on Cloud Pak for Data v4.7 in hybrid and multi-cloud environments. The exam covers planning, security, AI services, analytic services, data governance, and data source services on Red Hat OpenShift.

Questions

62 scored questions

Time Limit

90 minutes

Passing Score

41/62 (~66%)

Exam Fee

$200 (IBM / Pearson VUE)

IBM Cloud Pak for Data v4.7 Architect Exam Content Outline

19%

Plan for a Cloud Pak for Data Implementation

Service selection, cluster sizing, backup and restore, HA/DR, multi-tenancy, migration, storage, SaaS vs software, managed vs self-managed OpenShift, multi-cloud integration, logging/monitoring.

16%

Security Requirements

Certificate management, identity/access/authorization, auditing and audit integration, asset interchange security, API and automation, multi-cloud security, air-gapped environments.

17%

Architect with AI Services

Solutions with Watson Assistant, Watson Discovery, Watson Pipelines, Watson OpenScale, and IBM Match 360 with Watson.

16%

Architect with Analytic Services

Solutions with DataStage (including Remote Engine), Data Refinery, and Db2 Big SQL.

19%

Architect with Data Governance Services

Solutions with Watson Knowledge Catalog (categories, business terms, classifications, data classes), Data Privacy (data protection rules), and Knowledge Accelerators.

13%

Architect with Data Source Services

Solutions with Data Replication (CDC), IBM Data Virtualization (Watson Query), watsonx.data, and Db2 services (OLTP, Warehouse, Big SQL).

How to Pass the IBM Cloud Pak for Data v4.7 Architect Exam

What You Need to Know

  • Passing score: 41/62 (~66%)
  • Exam length: 62 questions
  • Time limit: 90 minutes
  • Exam fee: $200

Keys to Passing

  • Complete 500+ practice questions
  • Score 80%+ consistently before scheduling
  • Focus on highest-weighted sections
  • Use our AI tutor for tough concepts

IBM Cloud Pak for Data v4.7 Architect Study Tips from Top Performers

1. Tied largest domains: master Plan (19%) and Data Governance (19%) — sizing, HA/DR, storage, and WKC artifacts
2. Know the Cloud Pak for Data services map: Watson Studio, WKC, DataStage, Db2 family, Watson Query, watsonx.data, OpenScale
3. Practice OpenShift fundamentals: operators, CRDs, NetworkPolicies, storage classes (Storage Fusion, OCS, NFS, Portworx)
4. Understand SaaS vs software trade-offs and air-gapped install steps including private container registry mirroring
5. Study WKC governance artifacts (categories, business terms, classifications, data classes), data protection rules, and metadata enrichment

Frequently Asked Questions

How many questions are on the IBM C1000-173 exam?

The exam has 62 questions delivered in 90 minutes. You need 41 correct answers (about 66%) to pass and earn the IBM Certified Architect - Cloud Pak for Data V4.7 credential.

How much does the IBM C1000-173 exam cost?

The exam fee is $200 USD when scheduled through Pearson VUE. IBM occasionally offers discounted vouchers via Partner Plus and IBM Training campaigns.

What is the largest domain on C1000-173?

Plan for a Cloud Pak for Data Implementation and Architect with Data Governance Services are tied as the largest domains at 19% each. Focus on sizing, HA/DR, storage, multi-tenancy, and Watson Knowledge Catalog.

Which Cloud Pak for Data v4.7 services are tested?

Watson Studio, Watson Machine Learning, Watson Knowledge Catalog, Data Virtualization (Watson Query), DataStage, Data Refinery, Db2 (OLTP, Warehouse, Big SQL), Watson Assistant, Watson Discovery, Watson Pipelines, Watson OpenScale, IBM Match 360, watsonx.data, Data Replication, and Knowledge Accelerators.

Is hands-on Cloud Pak for Data experience required?

IBM expects experience designing CP4D v4.7 solutions on Red Hat OpenShift. Candidates should have hands-on time with OpenShift, governance setup in WKC, DataStage flows, and Watson Studio/Watson Machine Learning.