All Practice Exams

100+ Free AWS GenAI Developer Pro Practice Questions

Pass your AWS Certified Generative AI Developer — Professional (AIP-C01) exam on the first try — instant access, no signup required.

✓ No registration ✓ No credit card ✓ No hidden fees ✓ Start practicing immediately
~55-65% Pass Rate · 100+ Questions · 100% Free

Key Facts: AWS GenAI Developer Pro Exam

  • Exam Questions: 75 (65 scored + 10 unscored)
  • Passing Score: 750/1000 (scaled)
  • Exam Duration: 180 minutes
  • Exam Fee: $300 USD
  • Largest Domain: Foundation Model Integration, Data Management, and Compliance (31%)
  • Validity: 3 years (AWS recertification)
The AWS AIP-C01 exam has 75 questions (65 scored + 10 unscored) in 180 minutes with a passing score of 750/1000 and a $300 fee. Domains: Foundation Model Integration, Data Management, and Compliance (31%); Implementation and Integration (26%); AI Safety, Security, and Governance (20%); Operational Efficiency and Optimization (12%); and Testing, Validation, and Troubleshooting (11%). General availability followed a beta that ended March 31, 2026. Available at Pearson VUE / PSI testing centers and online proctored. Certification is valid for 3 years.

Sample AWS GenAI Developer Pro Practice Questions

Try these sample questions to test your AWS GenAI Developer Pro exam readiness. Each question includes a detailed explanation. Start the interactive quiz above for the full 100+ question experience with AI tutoring.

1. Which Amazon Bedrock feature provides a fully managed Retrieval Augmented Generation (RAG) workflow that ingests documents from Amazon S3, chunks and embeds them, and stores vectors in a configurable vector database?
A. Bedrock Agents
B. Bedrock Knowledge Bases
C. Bedrock Studio
D. Bedrock Model Evaluation
Explanation: Bedrock Knowledge Bases is the managed RAG service. It connects to Amazon S3 (and other supported sources), automatically chunks and embeds content with a chosen embedding model (Titan Embeddings or Cohere Embed), and writes vectors to OpenSearch Serverless, Aurora pgvector, MongoDB Atlas, Pinecone, or Redis Enterprise. At inference time it retrieves and injects context into the prompt. Agents orchestrate multi-step actions, Studio is a low-code playground, and Model Evaluation benchmarks model quality.
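
A minimal sketch of that call path with boto3 (the knowledge base ID and model ARN below are placeholders): the RetrieveAndGenerate API runs retrieval and answer generation in one request.

    import boto3

    # Runtime client for Knowledge Bases (retrieval + generation)
    client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

    response = client.retrieve_and_generate(
        input={"text": "What is our refund policy?"},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": "KBID123456",  # placeholder ID
                "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/"
                            "anthropic.claude-3-5-sonnet-20240620-v1:0",
            },
        },
    )
    print(response["output"]["text"])   # grounded answer
    print(response["citations"])        # source chunks used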
2. A team is building a low-latency RAG chatbot expected to handle 5,000 queries per second with sub-100 ms vector search. They want a fully managed serverless option that scales automatically. Which Bedrock Knowledge Bases vector store should they choose?
A. Amazon OpenSearch Serverless (vector engine)
B. Amazon Aurora PostgreSQL with pgvector
C. Amazon DocumentDB
D. Amazon Neptune
Explanation: OpenSearch Serverless with the vector engine is fully managed, scales automatically based on OCU usage, and is the default high-throughput option for Bedrock Knowledge Bases. Aurora pgvector is a strong option but requires capacity planning and is provisioned. DocumentDB and Neptune are not supported as Bedrock Knowledge Bases vector stores.
3. Which Amazon Bedrock model family is best suited for highly multilingual generation with strong performance on European and Asian languages while being a first-party AWS model?
A. Anthropic Claude 3.5 Sonnet
B. Meta Llama 3.1
C. Amazon Nova Pro
D. AI21 Jurassic-2
Explanation: Amazon Nova Pro is a first-party AWS multimodal foundation model with strong multilingual capabilities (200+ languages) and is exclusive to Bedrock. Claude 3.5 Sonnet excels at reasoning but is a third-party Anthropic model; Llama 3.1 is Meta's open model; Jurassic-2 has been deprecated. Nova Pro and Nova Lite are part of the Amazon Nova family announced at re:Invent 2024.
4. A developer wants the SAME application code to switch between Anthropic Claude, Meta Llama, and Amazon Nova on Bedrock without rewriting message-handling logic. Which API should they use?
A. InvokeModel
B. Converse
C. RetrieveAndGenerate
D. InvokeAgent
Explanation: The Bedrock Converse API provides a unified, model-agnostic message interface (system, user, assistant turns) with consistent tool-use semantics, removing the need for model-specific request/response shapes that InvokeModel requires. RetrieveAndGenerate is part of Knowledge Bases, and InvokeAgent calls a Bedrock Agent.
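
To make the "same code, different models" point concrete, here is a small sketch using boto3 (model IDs shown are examples; availability varies by region):

    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    def ask(model_id: str, question: str) -> str:
        # One request shape for every provider: role/content messages in,
        # a normalized output message back.
        response = bedrock.converse(
            modelId=model_id,
            messages=[{"role": "user", "content": [{"text": question}]}],
            inferenceConfig={"maxTokens": 512, "temperature": 0.2},
        )
        return response["output"]["message"]["content"][0]["text"]

    # The same function works across Anthropic, Meta, and Amazon models:
    for model_id in [
        "anthropic.claude-3-5-sonnet-20240620-v1:0",
        "meta.llama3-1-70b-instruct-v1:0",
        "amazon.nova-pro-v1:0",
    ]:
        print(ask(model_id, "Summarize RAG in one sentence."))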
5. When configuring a Bedrock Knowledge Base, which chunking strategy preserves complete logical units like paragraphs by using a model to identify natural breakpoints rather than fixed token counts?
A. Fixed-size chunking
B. No chunking
C. Semantic chunking
D. Hierarchical chunking
Explanation: Semantic chunking uses an embedding model to detect topic shifts and create variable-length chunks at natural boundaries, preserving meaning. Fixed-size cuts at token thresholds (may split paragraphs). Hierarchical chunking creates parent/child chunks at multiple granularities and is better for navigation. No chunking treats the whole document as one chunk and rarely fits context windows.
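
Chunking strategy is set per data source at ingestion time. A hedged sketch using the boto3 bedrock-agent client (the knowledge base ID, bucket ARN, and threshold values are illustrative placeholders):

    import boto3

    agent = boto3.client("bedrock-agent", region_name="us-east-1")

    agent.create_data_source(
        knowledgeBaseId="KBID123456",  # placeholder
        name="engineering-docs",
        dataSourceConfiguration={
            "type": "S3",
            "s3Configuration": {"bucketArn": "arn:aws:s3:::example-docs-bucket"},
        },
        vectorIngestionConfiguration={
            "chunkingConfiguration": {
                "chunkingStrategy": "SEMANTIC",
                "semanticChunkingConfiguration": {
                    "maxTokens": 300,                     # cap per chunk
                    "bufferSize": 0,                      # surrounding-sentence context
                    "breakpointPercentileThreshold": 95,  # split on topic shift
                },
            }
        },
    )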
6. Which embedding model is provided by AWS as a first-party option for Bedrock Knowledge Bases and supports 1024, 512, and 256 output dimensions for cost/quality tradeoffs?
A. Titan Text Embeddings V2
B. Cohere Embed English v3
C. OpenAI text-embedding-3-large
D. Amazon Comprehend Topic Modeler
Explanation: Amazon Titan Text Embeddings V2 (released 2024) supports flexible 256/512/1024 dimensions, allowing teams to trade vector storage cost for retrieval quality. It is the AWS first-party embedding model on Bedrock. Cohere Embed v3 is also available but is a third-party model. OpenAI is not on Bedrock. Comprehend Topic Modeler does not produce vector embeddings.
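
The dimension tradeoff is a per-request parameter, not a separate model. A short sketch invoking Titan Text Embeddings V2 for 256-dimensional output (adjust region as needed):

    import boto3
    import json

    runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

    body = json.dumps({
        "inputText": "Vector search for RAG applications",
        "dimensions": 256,   # 256, 512, or 1024
        "normalize": True,
    })
    response = runtime.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=body,
    )
    embedding = json.loads(response["body"].read())["embedding"]
    print(len(embedding))  # 256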
7. A regulated financial services customer must keep their Bedrock prompts and responses entirely within their AWS account and ensure they are not used for AWS service improvement. What guarantee does Bedrock provide by default?
A. Customer data is automatically used to train AWS proprietary models unless opted out
B. Customer prompts and responses are NOT used to train any underlying foundation models or shared with model providers
C. Customer data is shared with the model provider for quality monitoring
D. Customer must purchase Provisioned Throughput to keep data private
Explanation: Amazon Bedrock contractually guarantees that customer data (prompts, completions, embeddings, fine-tuning data) is not used to train any underlying foundation models and is not shared with the third-party model providers. This is a default property of the service across all on-demand and Provisioned Throughput modes — no purchase or opt-out is required.
8. A team has 10,000 unlabeled internal engineering documents and wants to adapt a Bedrock foundation model to better understand their domain vocabulary, without supervised input/output pairs. Which Bedrock customization technique should they choose?
A. Fine-tuning
B. Continued pre-training
C. RAG with Knowledge Bases
D. Prompt engineering only
Explanation: Continued pre-training (also called domain adaptation) consumes UNLABELED domain text and updates model weights to learn domain vocabulary and style. Fine-tuning requires labeled JSONL prompt/completion pairs. RAG augments at inference but does not change weights. Prompt engineering alone won't teach new vocabulary at scale. Continued pre-training is supported on select Bedrock models (e.g., Titan Text, Llama).
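
In API terms, continued pre-training is a model customization job with customizationType set to CONTINUED_PRE_TRAINING. A sketch with placeholder names, role ARN, and S3 paths:

    import boto3

    bedrock = boto3.client("bedrock", region_name="us-east-1")

    bedrock.create_model_customization_job(
        jobName="domain-adapt-engineering",        # placeholder names
        customModelName="titan-engineering-docs",
        roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
        baseModelIdentifier="amazon.titan-text-express-v1",
        customizationType="CONTINUED_PRE_TRAINING",
        # Unlabeled domain text; no prompt/completion pairs required
        trainingDataConfig={"s3Uri": "s3://example-bucket/unlabeled-docs.jsonl"},
        outputDataConfig={"s3Uri": "s3://example-bucket/customization-output/"},
        hyperParameters={"epochCount": "1"},
    )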
9. After fine-tuning a foundation model in Bedrock, what is REQUIRED to invoke the resulting custom model for inference?
A. Nothing extra; on-demand inference is automatic
B. Purchase Provisioned Throughput (model units) for the custom model
C. Deploy the model to a SageMaker endpoint
D. Re-import the model weights via Bedrock Custom Model Import
Explanation: Fine-tuned and continued-pretrained models on Bedrock can ONLY be served via Provisioned Throughput — you must purchase Model Units (no-commitment hourly, or 1-month or 6-month commitments). On-demand pricing is not available for custom models. They run inside Bedrock; SageMaker hosting is not used. Custom Model Import is a separate feature for bringing in externally-trained Llama/Mistral/Flan models.
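
The purchase itself is one API call (or a console action). A sketch with a placeholder custom model ARN; omit commitmentDuration for the no-commitment hourly option:

    import boto3

    bedrock = boto3.client("bedrock", region_name="us-east-1")

    response = bedrock.create_provisioned_model_throughput(
        provisionedModelName="my-custom-model-pt",  # placeholder
        modelId="arn:aws:bedrock:us-east-1:123456789012:custom-model/example",
        modelUnits=1,
        commitmentDuration="OneMonth",  # or "SixMonths"; omit for hourly
    )
    # Invoke by passing the provisioned model ARN as the modelId
    print(response["provisionedModelArn"])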
10. A team needs to ground a Bedrock Knowledge Base on documents that contain confidential personal data subject to GDPR. They require that the embeddings, chunks, and inference happen in eu-central-1 only. Which configuration is correct?
A. Create the Knowledge Base in us-east-1 because Bedrock RAG is global
B. Create the Knowledge Base, embedding model, and vector store all in eu-central-1, and invoke models via the eu-central-1 Bedrock endpoint
C. Bedrock Knowledge Bases automatically pin data residency to the S3 source bucket region
D. Use VPC endpoints to override the regional configuration
Explanation: Bedrock is regional. To meet GDPR data residency, the source S3 bucket, the Knowledge Base, the embedding model invocation, and the vector store (e.g., OpenSearch Serverless collection) must all be created in the desired EU region, and inference must call the regional Bedrock endpoint. Bedrock is not global, S3 region does NOT auto-pin Bedrock processing, and VPC endpoints don't change region.
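
In practice this means every client in the pipeline is constructed against the same region. A trivial but easy-to-miss sketch:

    import boto3

    REGION = "eu-central-1"  # all resources and calls pinned here

    s3 = boto3.client("s3", region_name=REGION)                # source bucket
    agent = boto3.client("bedrock-agent", region_name=REGION)  # KB management
    runtime = boto3.client("bedrock-agent-runtime", region_name=REGION)  # inference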

About the AWS GenAI Developer Pro Exam

The AWS Certified Generative AI Developer — Professional (AIP-C01) validates the skills to design, build, secure, and operate production-grade generative AI applications on AWS. It covers Amazon Bedrock (Anthropic Claude, Meta Llama, Mistral, Amazon Nova, Stability AI), Bedrock Knowledge Bases for RAG with OpenSearch Serverless / Aurora pgvector / MongoDB Atlas / Pinecone / Redis, Bedrock Agents (action groups, advanced prompts, prompt flows), Guardrails for Bedrock (denied topics, content filters, PII redaction, contextual grounding), prompt engineering, fine-tuning and continued pre-training, model evaluation, Amazon Q Business and Q Developer, and SageMaker JumpStart for custom foundation models.

  • Questions: 75 (65 scored + 10 unscored)
  • Time Limit: 180 minutes
  • Passing Score: 750/1000 (scaled)
  • Exam Fee: $300 (Amazon Web Services)

AWS GenAI Developer Pro Exam Content Outline

Foundation Model Integration, Data Management, and Compliance (31%)

Choose foundation models in Amazon Bedrock (Anthropic Claude, Meta Llama, Mistral, Amazon Nova, Stability AI); design RAG with Bedrock Knowledge Bases over OpenSearch Serverless, Aurora pgvector, MongoDB Atlas, Pinecone, or Redis; manage chunking, embeddings, and vector indexes; ensure data residency, retention, and responsible AI compliance

Implementation and Integration (26%)

Apply prompt engineering (zero-shot, few-shot, chain-of-thought, ReAct); build Bedrock Agents with action groups, knowledge bases, and prompt flows; orchestrate multi-step agentic workflows; integrate Bedrock InvokeModel and Converse APIs into Lambda, Step Functions, AppSync, and API Gateway

AI Safety, Security, and Governance (20%)

Configure Guardrails for Bedrock (denied topics, content filters, PII redaction, contextual grounding, word filters); apply IAM least-privilege for Bedrock; secure with KMS, VPC endpoints (PrivateLink), and cross-account roles; track lineage with Bedrock Model Cards; log invocations to CloudWatch and S3 for audit

Operational Efficiency and Optimization for GenAI Applications (12%)

Choose on-demand vs Provisioned Throughput; apply prompt caching, response streaming, and batch inference; optimize cost with smaller models, distillation, and Bedrock Intelligent Prompt Routing; monitor latency, token usage, and throttling with CloudWatch and Bedrock invocation logs

Testing, Validation, and Troubleshooting (11%)

Run Bedrock Model Evaluation (automatic and human); design offline and online evaluation pipelines; troubleshoot hallucinations, prompt injections, and RAG retrieval failures; validate fine-tuned and continued pre-trained models; debug Agents traces and prompt flow execution logs

How to Pass the AWS GenAI Developer Pro Exam

What You Need to Know

  • Passing score: 750/1000 (scaled)
  • Exam length: 75 questions
  • Time limit: 180 minutes
  • Exam fee: $300

Keys to Passing

  • Complete 500+ practice questions
  • Score 80%+ consistently before scheduling
  • Focus on highest-weighted sections
  • Use our AI tutor for tough concepts

AWS GenAI Developer Pro Study Tips from Top Performers

1. Master Amazon Bedrock end-to-end: model providers (Anthropic, Meta, Mistral, Amazon, Stability), Converse API, Knowledge Bases, Agents, Guardrails, Model Evaluation, and Provisioned Throughput
2. Know Bedrock Knowledge Bases vector store options — OpenSearch Serverless, Aurora pgvector, MongoDB Atlas, Pinecone, Redis Enterprise — and when to choose each based on scale, latency, and cost
3. Practice all four Guardrails policies — denied topics, content filters, PII redaction (BLOCK vs ANONYMIZE), and contextual grounding — and the 2026 word-filter and image-content additions (see the guardrail sketch after this list)
4. Understand fine-tuning options on Bedrock: continued pre-training (unlabeled data), fine-tuning (labeled JSONL), and the Provisioned Throughput requirement for hosting custom models
5. Build a Bedrock Agent with action groups (Lambda + OpenAPI), knowledge bases, advanced prompts (orchestration vs pre/post-processing), and prompt flows; review trace logs for debugging
6. Compare on-demand pricing (per token), Provisioned Throughput (model units), Batch inference (50% discount), and Bedrock Intelligent Prompt Routing for cost optimization
7. Distinguish Amazon Q Business (RAG over enterprise data with built-in connectors) from Amazon Q Developer (code generation in IDE / CLI / Console) — both appear on the exam
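
For tip 3, attaching a guardrail at inference time is a single extra parameter on the Converse call. A sketch with a hypothetical guardrail ID and version:

    import boto3

    runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = runtime.converse(
        modelId="amazon.nova-pro-v1:0",
        messages=[{"role": "user",
                   "content": [{"text": "My SSN is 123-45-6789. Store it for me."}]}],
        guardrailConfig={
            "guardrailIdentifier": "gr0abcd1234",  # hypothetical guardrail ID
            "guardrailVersion": "1",
        },
    )
    # "guardrail_intervened" indicates a policy (e.g., the PII filter) fired
    print(response["stopReason"])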

Frequently Asked Questions

What is the AWS AIP-C01 exam?

The AWS Certified Generative AI Developer — Professional (AIP-C01) is a professional-level AWS certification that validates skills to design, build, secure, and operate production-grade generative AI applications on AWS. It became generally available in 2026 after the beta period ended March 31, 2026, and covers Amazon Bedrock, Bedrock Knowledge Bases, Bedrock Agents, Guardrails, Amazon Q, and SageMaker JumpStart.

How many questions are on the AIP-C01 exam?

The AIP-C01 exam contains 75 questions (65 scored and 10 unscored) delivered in 180 minutes. Question types are multiple choice (one correct answer) and multiple response (two or more correct answers). The passing score is 750 on a 100-1000 scaled score range.

What is the AIP-C01 exam fee?

The exam fee is $300 USD, the standard AWS Professional-level price. The exam is delivered at Pearson VUE or PSI testing centers and via online proctoring. AWS-certified candidates receive a 50% retake voucher; the certification is valid for 3 years.

What experience does AWS recommend for AIP-C01?

AWS recommends 2+ years building production applications on AWS, plus 1+ year of hands-on experience implementing generative AI solutions, plus general AI/ML or data engineering background. Familiarity with AWS compute, storage, networking, security (IAM, KMS, VPC), IaC (CDK/CloudFormation), and observability (CloudWatch) is expected.

What is the largest domain on the AIP-C01 exam?

Foundation Model Integration, Data Management, and Compliance is the largest domain at 31%, followed by Implementation and Integration at 26%. Together they cover 57% of the exam, so deep expertise in Bedrock model selection, RAG with Bedrock Knowledge Bases, prompt engineering, and Bedrock Agents is essential.

How should I prepare for the AIP-C01 exam?

Plan 80-120 hours of study over 8-12 weeks. Use the official AWS Skill Builder AIP-C01 exam-prep plan, build hands-on with Bedrock playgrounds, deploy a RAG app with Bedrock Knowledge Bases and OpenSearch Serverless, configure Guardrails, build an Agent with action groups and prompt flows, run Bedrock Model Evaluation jobs, and complete 100+ practice questions. Aim for 75%+ on practice tests.