Technology · 32 min read

AWS Certified AI Practitioner (AIF-C01) Exam Guide 2026: FREE Study Plan, Bedrock Deep Dive, Pass Rate

FREE 2026 AWS Certified AI Practitioner (AIF-C01) study guide. Cost $100, 90 min, 65 questions, 700/1000 passing score. Bedrock deep dive, 5 domains weighted, RAG vs fine-tuning, responsible AI, 4-week plan.

Ran Chen, EA, CFP® · April 21, 2026

Key Facts

  • The AWS Certified AI Practitioner (AIF-C01) exam costs $100 USD, half the price of AWS Associate-level exams.
  • The AIF-C01 exam has 65 questions in 90 minutes: 50 scored plus 15 unscored pretest items used to validate future questions.
  • The AIF-C01 passing score is 700 on a 100-1000 scaled range; AWS uses a compensatory model, so no per-domain pass is required.
  • Domain 3 (Applications of Foundation Models) is the heaviest AIF-C01 section at 28% of scored content, with Amazon Bedrock and prompt engineering dominating.
  • AIF-C01 certification is valid for 3 years and automatically recertifies when a candidate passes AWS Certified Machine Learning Engineer - Associate (MLA-C01).
  • AIF-C01 has zero formal prerequisites and no coding is required to sit the exam (AWS recommends up to 6 months of AI/ML exposure).
  • AIF-C01 is delivered at Pearson VUE test centers and via PSI online proctoring, with identical exam content and scoring.
  • AWS uses a 3-month inclusion rule for AIF-C01: new AWS services must be generally available for 3+ months before appearing on the exam, half the standard 6-month window.
  • Amazon Bedrock expanded from approximately 60 to nearly 100 foundation models in 2026 via Bedrock Marketplace additions from Google, NVIDIA, OpenAI, Qwen, and others.

AWS Certified AI Practitioner (AIF-C01): The Foundational AI Cert That Actually Matters in 2026

Generative AI isn't a side project anymore. It's showing up in every board deck, product roadmap, and hiring plan — and your employer is quietly sorting people into two piles: the ones who can talk about Bedrock, RAG, fine-tuning, and responsible AI without flinching, and the ones who can't.

The AWS Certified AI Practitioner (AIF-C01) is how you move into the first pile — fast. It's the cheapest, shortest, lowest-friction AI credential AWS has ever released, and it's the only foundational AI cert that's actually built around generative AI (not 2019-era ML trivia).

This guide — built for 2026 — covers the full AIF-C01 content outline verified against the AWS official exam guide, the five scored domains and their exact weights, a Bedrock deep dive, a 2–4 week study plan that beats the Tutorials Dojo and Whizlabs plans, salary data, and a head-to-head comparison with Azure AI-900, Google Cloud Generative AI Leader, and AWS Cloud Practitioner.

AIF-C01 at a Glance (2026)

| Attribute | Value |
|---|---|
| Exam code | AIF-C01 |
| Category | Foundational |
| Cost | $100 USD |
| Duration | 90 minutes |
| Total questions | 65 (50 scored + 15 unscored pretest) |
| Question types | Multiple choice, multiple response, ordering, matching |
| Passing score | 700 / 1000 (scaled, compensatory) |
| Delivery | Pearson VUE test centers + PSI online proctoring |
| Languages | English, Japanese, Korean, Simplified Chinese, Portuguese (Brazil) |
| Validity | 3 years |
| Prerequisites | None |
| Recommended experience | Up to 6 months exposure to AI/ML on AWS |
| Retake wait | 14 days |
| Recertification | Retake AIF-C01, or pass MLA-C01 (auto-recertifies) |

Everything in that table was verified against the AWS Certified AI Practitioner landing page (aws.amazon.com/certification/certified-ai-practitioner) and the AIF-C01 Exam Guide PDF (docs.aws.amazon.com) at the time of writing — if competitor blogs still show "$150," "85 minutes," "$75," "4 domains," or 25%/30%/20%/15%/10% weights, they're out of date.

The Special 3-Month "New Service" Rule for AIF-C01

AWS has a little-known but exam-critical policy: for AIF-C01, MLA-C01, AIP-C01 (Beta), and MLS-C01, a new AWS product, service, or feature must be generally available for only 3 months before it can appear on the exam — half the normal 6-month window that applies to every other AWS certification (per the AWS Certification FAQ).

Why this matters for 2026 candidates: Bedrock features shipped in Q4 2025 or Q1 2026 — AgentCore policy controls (GA March 2026), expanded Bedrock Marketplace models, contextual grounding checks, Nova 2 Omni, and guardrails for coding use cases — are fair game on the exam. Do not skip recent AWS blog posts in the Generative AI category just because they look "too new."


Start FREE AIF-C01 Practice in the Next 60 Seconds

Before you drop a dime on Tutorials Dojo, Whizlabs, or a Udemy course, burn through our adaptive question bank — it's 100% free, no credit card, no upsell.

  • 250+ AIF-C01 practice questions mapped to the official 5-domain outline
  • AI tutor explanations powered by Google Gemini (explains why the wrong answers are wrong)
  • Weakness heatmap that tells you which domain to study next
  • Full mock exam that mirrors the 50 scored + 15 unscored format
Start FREE AWS AI Practitioner practice → Practice questions with detailed explanations

What Is the AWS Certified AI Practitioner, and Why Does It Matter in 2026?

AIF-C01 is AWS's foundational-tier AI certification. "Foundational" in AWS taxonomy means three things:

  1. No prerequisites. No associate-level exam, no work experience requirement, no coding gate.
  2. Concept-first. The exam tests recognition and reasoning, not building.
  3. Business-context framing. Every scenario is written from a "you are advising a business stakeholder" angle.

AWS launched AIF-C01 in October 2024 as part of a deliberate shift — Cloud Practitioner (CLF-C02) had grown into a generic cloud-literacy exam, and AWS needed a parallel track for the wave of non-engineers entering AI-adjacent roles. Bedrock had just hit general availability. Amazon Q was rolling out across the console. ChatGPT had trained every executive to ask "what's our AI strategy?" AIF-C01 is the cheap, fast, credible answer for the humans in those meetings.

The 2026 Generative AI Landscape (Why Employers Are Paying Attention)

Three forces make AIF-C01 more valuable in 2026 than it was on launch day:

  1. Bedrock went from "new" to "default." Amazon Bedrock is now the production inference platform for Anthropic Claude, Amazon Titan and Nova, Meta Llama, Mistral, Cohere Command, Stability AI, and AI21 Jamba — all behind a single API, with Knowledge Bases, Guardrails, and Agents layered on top. If your company uses AWS, Bedrock is the AI platform, period.
  2. Regulatory pressure is real. The EU AI Act, NIST AI Risk Management Framework, and HIPAA guidance for AI workloads all made 2025 the year "responsible AI" stopped being a slide and started being a compliance requirement. Domains 4 and 5 of the exam — 28% combined — reflect that reality.
  3. Non-engineer demand exploded. PMs, analysts, consultants, and solution managers need vocabulary fluent enough to scope projects, estimate costs, and challenge vendor claims. AIF-C01 is the only AWS cert that was designed from day one for those roles.

Who Should Take AIF-C01 (and Who Should Skip It)?

Take it if you are:

  • A product manager scoping AI features — you need to challenge your engineer when they say "let's just fine-tune it"
  • A program or project manager running a Bedrock or SageMaker initiative
  • A business or data analyst whose dashboards are about to get AI-generated narratives
  • A solutions consultant / sales engineer at AWS, an SI, or an ISV selling AI-adjacent products
  • A technical account manager (TAM) at AWS or a partner
  • A career changer moving from adjacent tech roles (support, SDR, ops) into AI/cloud
  • A cloud practitioner-certified generalist who wants a second AWS credential that signals specialization without committing to an Associate-level exam
  • A manager or director who needs to sound credible when vendors pitch you

Skip it if you are:

  • A practicing ML engineer — go straight to AWS Certified Machine Learning Engineer – Associate (MLA-C01) or Machine Learning Specialty (MLS-C01)
  • A data scientist building production models — MLA-C01 or MLS-C01 is the right signal
  • A senior software engineer already shipping Bedrock integrations — consider the new AWS Certified AI Implementation Professional (AIP-C01) when it stabilizes
  • A student looking for an academic credential — AIF-C01 is professional, not academic

The CLF-C02 + AIF-C01 Combo (Highly Recommended)

For anyone entering an AI-adjacent cloud role with no prior AWS credential, the two-cert foundation is extremely effective:

| Cert | Cost | Duration | What It Signals |
|---|---|---|---|
| AWS Cloud Practitioner (CLF-C02) | $100 | 90 min | "I understand AWS as a platform" |
| AWS AI Practitioner (AIF-C01) | $100 | 90 min | "I understand AI on AWS" |
| Combined cost | $200 | | Credible two-cert foundation in under 8 weeks |

Many hiring managers in 2026 weight the combo above a single Associate-level cert for non-engineering roles, because it demonstrates both cloud context and AI specialization.


Prerequisites: None (But Here's What Actually Helps)

AWS lists zero formal prerequisites for AIF-C01. You can register today without taking another exam, holding a degree, or proving work experience.

That said, candidates who pass on the first try typically have one or more of the following:

  • A working mental model of supervised vs. unsupervised vs. reinforcement learning (you can study this in a day from AWS Skill Builder)
  • Comfort with basic AWS navigation (S3, IAM, what an "AWS Region" means)
  • Hands-on time in the Amazon Bedrock console — even 30 minutes clicking through the playground pays off disproportionately
  • Familiarity with the vocabulary of modern LLMs (tokens, embeddings, context window, hallucination)

If any of those sound alien, budget an extra 1–2 weeks on your study plan.


The Five AIF-C01 Domains (2026 Official Weights)

The exam guide PDF (docs.aws.amazon.com/pdfs/aws-certification/latest/ai-practitioner-01/ai-practitioner-01.pdf) lists these five content domains and weightings. If a competitor blog shows different percentages or four domains, it is outdated — weights have been stable since launch.

| # | Domain | Weight | ~Scored Qs |
|---|---|---|---|
| 1 | Fundamentals of AI and ML | 20% | ~10 |
| 2 | Fundamentals of Generative AI | 24% | ~12 |
| 3 | Applications of Foundation Models | 28% | ~14 |
| 4 | Guidelines for Responsible AI | 14% | ~7 |
| 5 | Security, Compliance, and Governance for AI Solutions | 14% | ~7 |

Domain 3 is the heaviest. If your study time is constrained, triple down on Bedrock, prompt engineering, RAG, and fine-tuning before anything else.

Each domain contains task statements — sub-objectives listed in the official Exam Guide PDF. Domain 3 has 4 task statements (the most of any domain); Domains 1 and 2 have 3 each; Domains 4 and 5 have 2 each. When AWS announces exam updates, they usually revise task statements, not domain weights.

Let's walk through each.

Domain 1: Fundamentals of AI and ML (20%)

What it tests: whether you can distinguish AI, ML, and deep learning; classify learning paradigms (supervised, unsupervised, reinforcement, self-supervised); and describe the ML development lifecycle.

Must-know concepts:

  • AI vs. ML vs. Deep Learning vs. Generative AI — the Venn diagram. AI is the umbrella. ML is a subset. Deep learning is a subset of ML using neural networks. Generative AI is a category of deep learning that produces new content.
  • Learning types: supervised (labeled data → classification, regression), unsupervised (no labels → clustering, dimensionality reduction, anomaly detection), reinforcement (agent + environment + reward), self-supervised (predict parts of input from other parts — how LLMs are pretrained).
  • ML lifecycle: business problem framing → data collection → data preparation → feature engineering → model training → evaluation → deployment → monitoring → retraining.
  • Data types: structured (tabular, SQL), unstructured (text, images, audio, video), semi-structured (JSON, logs).
  • Model evaluation metrics: accuracy, precision, recall, F1 (classification); RMSE, MAE, R² (regression); BLEU, ROUGE, BERTScore, perplexity (generation); AUC-ROC.
  • Bias vs. variance: underfitting (high bias, too simple) vs. overfitting (high variance, memorizes training data).
  • AWS services for traditional ML: SageMaker (full platform), SageMaker Canvas (no-code for business analysts), SageMaker JumpStart (pretrained models), SageMaker Data Wrangler (data prep), SageMaker Ground Truth (data labeling), SageMaker Clarify (bias & explainability), SageMaker Model Monitor (drift detection).

Typical question shape: "A company wants to predict which customers will churn using historical labeled data. Which type of machine learning is most appropriate?" (Answer: supervised.)
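The classification metrics above are easiest to internalize with a tiny worked example. A minimal sketch in plain Python (the churn labels are invented for illustration):

```python
# Toy illustration of the Domain 1 classification metrics (precision,
# recall, F1) for a binary churn classifier. Labels are made up.

def classification_metrics(y_true, y_pred):
    """Compute precision, recall, and F1 for the positive class (1 = churn)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# 1 = customer churned, 0 = customer stayed
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
precision, recall, f1 = classification_metrics(y_true, y_pred)  # all 0.75
```

Here one loyal customer was flagged as churn (a false positive, hurting precision) and one churner was missed (a false negative, hurting recall), so precision = recall = F1 = 0.75.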

Domain 2: Fundamentals of Generative AI (24%)

What it tests: the mechanics of generative AI — foundation models, LLMs, transformers, tokens, embeddings, context windows — plus AWS's generative services at a conceptual level.

Must-know concepts:

  • Foundation models (FMs): large deep-learning models trained on broad data at scale, adaptable to many downstream tasks. Includes LLMs (text), vision-language models, diffusion models (images).
  • Transformer architecture basics: self-attention, encoder-only (BERT), decoder-only (GPT, Claude, Titan Text, Llama), encoder-decoder (T5).
  • Tokens: the subword units an LLM actually processes. Tokens ≈ 0.75 words for English. Cost and context limits are measured in tokens.
  • Embeddings: dense vector representations of text, images, or other data. The foundation of semantic search and RAG.
  • Context window: the maximum number of tokens the model can "see" in a single request. Claude 3.5 Sonnet = 200K, Titan Text Premier = 32K, etc.
  • Hallucination: the model generates plausible-sounding but factually incorrect output. A known limitation, mitigated by RAG and prompt grounding.
  • Generative model families: GANs (generator + discriminator), VAEs (variational autoencoders), diffusion models (Stable Diffusion, Titan Image Generator), autoregressive transformers (LLMs).
  • AWS generative AI stack:
    • Amazon Bedrock — fully managed API for foundation models from Anthropic, Amazon, Cohere, AI21, Meta, Mistral, Stability AI
    • Amazon Q Business — enterprise generative AI assistant for internal knowledge
    • Amazon Q Developer — AI coding assistant (formerly CodeWhisperer)
    • Amazon SageMaker JumpStart — pretrained models and fine-tuning
    • Amazon SageMaker Canvas — no-code generative AI for business users

Typical question shape: "Which AWS service provides access to multiple foundation models from different providers through a single API?" (Answer: Amazon Bedrock.)
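The "tokens ≈ 0.75 words" rule of thumb above is worth practicing, because cost questions hinge on it. A back-of-envelope sketch in Python; the per-1K-token prices are invented placeholders, not real Bedrock pricing (always check the current pricing page):

```python
# Rough token and cost estimate using the rule of thumb
# 1 token ≈ 0.75 English words. Prices below are ILLUSTRATIVE
# ASSUMPTIONS, not actual Amazon Bedrock rates.

def estimate_tokens(word_count: int) -> int:
    """Approximate token count from a word count (tokens ≈ words / 0.75)."""
    return round(word_count / 0.75)

def estimate_cost(input_words, output_words,
                  price_in_per_1k=0.003, price_out_per_1k=0.015):
    """Estimated dollar cost of one request under assumed per-1K-token prices."""
    tokens_in = estimate_tokens(input_words)
    tokens_out = estimate_tokens(output_words)
    return tokens_in / 1000 * price_in_per_1k + tokens_out / 1000 * price_out_per_1k

tokens = estimate_tokens(1500)   # a 1,500-word prompt ≈ 2,000 tokens
cost = estimate_cost(1500, 300)  # 2,000 in + 400 out under assumed prices
```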

Domain 3: Applications of Foundation Models (28% — The Big One)

What it tests: how to actually apply foundation models — prompt engineering, inference parameters, RAG, fine-tuning, continued pretraining, evaluation, and the AWS services that make it all work. This is the domain where most candidates lose points.

Must-know concepts:

  • Prompt engineering techniques:
    • Zero-shot prompting — ask with no examples
    • Few-shot prompting — ask with examples in the prompt
    • Chain-of-thought (CoT) — ask the model to reason step by step
    • Role prompting — "You are an expert radiologist..."
    • Prompt templates — parameterized, reusable prompts
    • Negative prompts (image models) — what NOT to include
  • Prompt injection and jailbreaking — adversarial prompts that override system instructions. Mitigated by input validation and Bedrock Guardrails.
  • Inference parameters (expect 2–4 questions directly on these):
    • Temperature (0–1): randomness. Low = deterministic, high = creative.
    • Top-p (nucleus sampling): cumulative probability cutoff for token selection. Lower = more focused.
    • Top-k: select only from the top K most likely tokens.
    • Stop sequences: tokens that end generation.
    • Max tokens / response length.
    • Presence / frequency penalty (some models): discourage repetition.
  • Temperature vs. Top-p (common trap question): temperature rescales the full probability distribution before sampling; top-p truncates the distribution to the smallest set whose cumulative probability exceeds p, then samples from that set. Both control randomness but via different mechanisms. AWS typically recommends tuning ONE, not both.
  • Retrieval-Augmented Generation (RAG):
    • Problem it solves: foundation models have frozen knowledge and hallucinate on proprietary data.
    • How it works: (1) embed your documents into a vector store, (2) at query time, retrieve the top-k relevant chunks, (3) stuff them into the prompt as context, (4) let the model answer grounded in the retrieved text.
    • AWS implementation: Amazon Bedrock Knowledge Bases (managed RAG), Amazon OpenSearch Serverless (vector store), Amazon Kendra (enterprise search), Amazon Aurora PostgreSQL with pgvector.
  • Fine-tuning vs. Continued Pretraining vs. RAG (THE classic trap):
| Technique | What Changes | When to Use | AWS Service |
|---|---|---|---|
| Prompt engineering | Nothing in the model | Fastest, cheapest, default first | Bedrock (any model) |
| RAG | Nothing in the model — you add context at query time | Proprietary / frequently-updated knowledge | Bedrock Knowledge Bases |
| Fine-tuning | Model weights adjusted on labeled task data | Task specialization (tone, format, domain classification) | Bedrock custom models, SageMaker JumpStart |
| Continued pretraining | Model weights further trained on unlabeled domain text | Deep domain adaptation (legal, medical, industry jargon) | Bedrock continued pretraining, SageMaker |
| Train from scratch | All weights from zero | Almost never for practitioners | SageMaker training jobs |

Exam rule of thumb: if the scenario says "the company has a large library of internal documents that change frequently," the answer is RAG, not fine-tuning. If it says "the model needs to respond in a specific tone or format based on 500 labeled examples," the answer is fine-tuning. If it says "the model needs deep understanding of medical terminology from a large unlabeled corpus," the answer is continued pretraining.
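The four RAG steps (embed, retrieve, stuff, answer) can be sketched end-to-end in a few lines. The bag-of-words vectors below are a toy stand-in for a real embedding model such as Titan Embeddings, and the in-memory list stands in for a vector store like OpenSearch Serverless:

```python
# Minimal RAG retrieval sketch. Toy bag-of-words "embeddings" replace a
# real embedding model; an in-memory list replaces a vector store.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words vector (a real system uses a model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "Refunds are processed within 14 days of the return request.",
    "Our headquarters are located in Seattle, Washington.",
    "Passwords must be rotated every 90 days per security policy.",
]
index = [(doc, embed(doc)) for doc in docs]           # (1) embed documents

def retrieve(query: str, k: int = 1):
    q = embed(query)                                   # (2) embed the query,
    ranked = sorted(index, key=lambda d: cosine(q, d[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]              #     keep top-k chunks

def grounded_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))               # (3) stuff context in
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

prompt = grounded_prompt("How long do refunds take?")  # (4) model answers grounded
```

Nothing in the model changed; the proprietary knowledge arrived at query time, which is exactly why RAG suits frequently-updated document libraries.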

  • Agents and orchestration:
    • Amazon Bedrock Agents — orchestrate multi-step workflows, call APIs and Lambda functions, use Knowledge Bases for grounding
    • Action groups — the "tools" an agent can call
    • Agent memory and multi-agent collaboration (new in 2025)
  • Evaluating generative models:
    • Automatic metrics: BLEU (translation), ROUGE (summarization), BERTScore (semantic similarity), perplexity.
    • Human evaluation: rubric-based rating, Amazon Augmented AI (Amazon A2I) for human review loops.
    • Bedrock model evaluation jobs — automatic and human-based evaluation built into Bedrock.
    • Judge LLMs — using one LLM to grade another's output.
  • Cost management on Bedrock:
    • On-demand — pay per token, no commitment, good for variable workloads.
    • Provisioned throughput — pay for dedicated capacity (model units) for predictable, high-volume workloads. Required for some custom fine-tuned models.
    • Batch inference — cheaper per-token for large async jobs.

Typical question shape: "A customer-service team wants a chatbot that answers questions using the company's 10,000-page internal policy manual. Which approach minimizes cost and maintenance while ensuring answers reflect the latest policy updates?" (Answer: RAG via Bedrock Knowledge Bases — not fine-tuning.)
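The temperature vs. top-p trap from the inference-parameter list is easiest to see on a toy next-token distribution (the probabilities below are invented):

```python
# Toy demonstration: temperature RESCALES the whole distribution;
# top-p TRUNCATES it to the smallest set whose cumulative probability
# reaches p. Probabilities are made up for illustration.
import math

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "xylophone": 0.05}

def apply_temperature(p, temperature):
    """Rescale every probability via exp(log(p) / T), then renormalize."""
    scaled = {t: math.exp(math.log(v) / temperature) for t, v in p.items()}
    total = sum(scaled.values())
    return {t: v / total for t, v in scaled.items()}

def apply_top_p(p, top_p):
    """Keep the smallest prefix of tokens whose cumulative prob reaches top_p."""
    kept, cum = {}, 0.0
    for t, v in sorted(p.items(), key=lambda kv: kv[1], reverse=True):
        kept[t] = v
        cum += v
        if cum >= top_p:
            break
    total = sum(kept.values())
    return {t: v / total for t, v in kept.items()}

flat = apply_temperature(probs, 2.0)  # flatter distribution; all 4 tokens survive
focused = apply_top_p(probs, 0.8)     # truncated; only "the" and "a" remain
```

High temperature keeps every token possible but evens out the odds; top-p removes the unlikely tail entirely, which is why AWS guidance is to tune one knob, not both.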

Domain 4: Guidelines for Responsible AI (14%)

What it tests: AWS's responsible AI pillars, bias detection, explainability, and the specific tools that implement those principles on AWS.

Must-know responsible AI pillars (AWS's seven):

  1. Fairness — models should not systematically disadvantage protected groups.
  2. Explainability / Interpretability — humans can understand why the model made a decision.
  3. Privacy and security — training data and inferences are protected.
  4. Transparency — stakeholders know an AI system is in use and how it works.
  5. Veracity and robustness — the model produces truthful outputs and handles adversarial inputs gracefully.
  6. Governance — policies, processes, and ownership for AI across the org.
  7. Safety — the system operates without causing harm (physical, reputational, financial).

AWS-specific responsible AI tools:

| Tool | Pillar(s) | What It Does |
|---|---|---|
| Amazon SageMaker Clarify | Fairness, Explainability | Detects pre-training and post-training bias; SHAP-based feature importance for model predictions. |
| Amazon SageMaker Model Cards | Transparency, Governance | Standardized documentation for models: intended use, limitations, training data, evaluation metrics. |
| Amazon SageMaker Model Monitor | Veracity, Robustness | Detects data drift, feature drift, model quality degradation in production. |
| Amazon Augmented AI (A2I) | Safety, Veracity | Human-in-the-loop review for low-confidence predictions. |
| Amazon Bedrock Guardrails | Safety, Privacy, Veracity | Content filters (hate, insults, sexual, violence, misconduct), denied topics, word filters, PII redaction, contextual grounding checks (new), prompt-attack filter. |
| Amazon Bedrock Model Evaluation | Veracity, Fairness | Automatic and human-based evaluation of foundation model outputs. |
| AWS AI Service Cards | Transparency | Public documentation of AWS AI services' intended use, limitations, and responsible design. |

Exam trap: confusing SageMaker Model Monitor (production drift) with SageMaker Clarify (bias + explainability). Monitor watches over time; Clarify analyzes a point-in-time model or dataset.

Another trap: confusing Bedrock Guardrails (safety at inference time) with SageMaker Clarify (fairness analysis at training/evaluation time). Different stages, different tools.

Domain 5: Security, Compliance, and Governance for AI Solutions (14%)

What it tests: how you secure AI workloads — data protection, IAM, network isolation, compliance frameworks, and governance patterns unique to AI.

Must-know concepts:

  • IAM for AI: least privilege for Bedrock model invocation, SageMaker execution roles, resource-based policies on S3 training buckets.
  • Encryption:
    • AWS KMS — customer-managed keys (CMKs) for training data, model artifacts, and Bedrock custom model outputs.
    • Encryption in transit (TLS) and at rest (S3 SSE, EBS encryption).
  • Network isolation:
    • Amazon Bedrock VPC endpoints (via AWS PrivateLink) — keep inference traffic off the public internet.
    • SageMaker VPC-only mode — training and endpoints in your VPC.
  • PII and sensitive data handling:
    • Amazon Macie — discovers and classifies PII in S3.
    • Amazon Comprehend — detects and redacts PII in text.
    • Bedrock Guardrails PII filter — redacts PII from prompts and responses at inference.
  • Compliance frameworks covered for AI workloads:
    • HIPAA — multiple Bedrock models and SageMaker are HIPAA-eligible (check the AWS HIPAA eligibility list).
    • SOC 1/2/3, ISO 27001, PCI DSS, FedRAMP (GovCloud), GDPR, EU AI Act.
  • Data residency and sovereignty: Bedrock is Region-scoped. Model invocations do not cross Regions unless you use cross-Region inference (opt-in). Data stays in the Region where you call the API.
  • Governance artifacts:
    • Model Cards (SageMaker)
    • AWS AI Service Cards (for AWS-managed services)
    • AWS CloudTrail — logs all Bedrock and SageMaker API calls
    • AWS Config — tracks configuration changes
    • AWS Audit Manager — maps AWS controls to compliance frameworks
  • Data governance for training data: AWS Glue Data Catalog, AWS Lake Formation (fine-grained access), S3 Object Lambda (redact on read).

Exam rule of thumb: if the question asks "how do we keep inference traffic off the public internet?" the answer is VPC endpoints / PrivateLink. If it asks "how do we prove to auditors which user invoked which model when?" the answer is CloudTrail. If it asks "how do we redact PII from user prompts before they hit the model?" the answer is Bedrock Guardrails.
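The least-privilege pattern from the IAM bullet can be sketched as an identity policy, shown here as a Python dict. The Region and model ARN are placeholders; treat this as an illustration of the shape of the pattern, not a production-ready policy:

```python
# Illustrative least-privilege policy for Bedrock invocation. The model
# ARN and Region are PLACEHOLDERS -- adapt them to your own account.
import json

bedrock_invoke_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowInvokeOneModel",
            "Effect": "Allow",
            # Only the two runtime invocation actions; every other
            # Bedrock action stays implicitly denied.
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            # Scoped to a single foundation model (placeholder ARN).
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0",
        }
    ],
}

policy_json = json.dumps(bedrock_invoke_policy, indent=2)
```

The key exam idea: grant only `bedrock:InvokeModel` on only the models a role actually needs, rather than `bedrock:*` on `*`.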


Amazon Bedrock Deep Dive (the 28% Domain, Expanded)

If you master one AWS service for this exam, make it Amazon Bedrock. Expect 8–14 direct or indirect Bedrock questions across Domains 2, 3, 4, and 5.

The Bedrock Model Catalog (2026)

Bedrock provides API access to foundation models from multiple providers. You do not need to memorize every model version, but you should recognize which provider owns which family.

| Provider | Model Families on Bedrock | Best Known For |
|---|---|---|
| Anthropic | Claude 3.5 Sonnet, Claude 3.5 Haiku, Claude 3 Opus | High-quality reasoning, long context (200K) |
| Amazon | Titan Text, Titan Embeddings, Titan Image Generator, Nova Micro/Lite/Pro/Premier, Nova 2 Omni (multimodal, 2026) | AWS-native, competitive pricing, multimodal |
| Meta | Llama 3.1, Llama 3.2 | Open weights, cost-effective |
| Mistral | Mistral 7B, Mixtral 8x7B, Mistral Large | European, efficient MoE |
| Cohere | Command R, Command R+, Embed | Enterprise RAG, multilingual |
| AI21 Labs | Jamba 1.5 | Long context, hybrid architecture |
| Stability AI | Stable Diffusion, Stable Image Ultra | Image generation |

Bedrock Core Capabilities

  1. Model Invocation API — synchronous, streaming, and batch inference across all supported models with a unified API signature (InvokeModel, Converse, ConverseStream).
  2. Amazon Bedrock Knowledge Bases — fully managed RAG. You point it at S3; Bedrock handles chunking, embedding, vector storage (OpenSearch Serverless, Aurora PostgreSQL, Pinecone, Redis), and retrieval.
  3. Amazon Bedrock Agents — orchestrate multi-step tasks. Agents can use Knowledge Bases for grounding, call Lambda functions as tools, and chain actions.
  4. Amazon Bedrock Guardrails — policy layer independent of the model. Apply content filters, denied topics, word filters, PII redaction, contextual grounding checks, and prompt-attack detection to any model.
  5. Custom models — fine-tuning and continued pretraining on top of base models (currently Titan, Llama, Cohere Command).
  6. Model evaluation — automatic and human-based evaluation jobs.
  7. Prompt management — versioned prompts as first-class resources.
  8. Provisioned throughput — reserved model capacity in "model units" for predictable, high-volume or fine-tuned-model workloads.
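A sketch of what the unified Converse call shape looks like from boto3. The request is assembled locally so it can be inspected without AWS credentials; the model ID and inference parameters are illustrative, and the actual call (commented out) requires credentials plus model access enabled in your account:

```python
# Sketch of a Bedrock Converse request. Built locally so it runs without
# AWS credentials; model ID and parameter values are illustrative.

def build_converse_request(model_id, user_text, temperature=0.2, max_tokens=512):
    """Assemble the keyword arguments for bedrock_runtime.converse()."""
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": user_text}]},
        ],
        "inferenceConfig": {
            "temperature": temperature,
            "maxTokens": max_tokens,
        },
    }

request = build_converse_request(
    "anthropic.claude-3-5-haiku-20241022-v1:0",  # placeholder model ID
    "Summarize the difference between RAG and fine-tuning in two sentences.",
)

# With credentials and model access enabled, the call would look like:
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.converse(**request)
# print(response["output"]["message"]["content"][0]["text"])
```

The point of Converse is that the same request shape works across providers; swapping `modelId` is enough to move from Claude to, say, a Llama or Nova model.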

On-Demand vs. Provisioned Throughput (Common Exam Trap)

| Mode | Billing | When to Use |
|---|---|---|
| On-demand | Per 1,000 input/output tokens | Variable workloads, prototyping, spiky traffic |
| Provisioned throughput | Hourly per model unit (MU), 1-month or 6-month commitment | Predictable high-volume production; required for most custom fine-tuned models |
| Batch inference | Discounted per-token for async jobs | Large offline workloads (document processing, bulk summarization) |

Rule of thumb: if the question mentions "unpredictable volume" or "prototyping," on-demand. If it mentions "production, consistent traffic, guaranteed latency, or custom fine-tuned model," provisioned throughput.
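A quick back-of-envelope comparison shows why this rule of thumb holds. Every price below is invented for illustration; real Bedrock pricing varies by model and Region:

```python
# Break-even sketch: on-demand (per-token) vs. provisioned throughput
# (hourly per model unit). ALL PRICES ARE INVENTED for illustration.

def monthly_on_demand_cost(tokens_per_month, price_per_1k_tokens=0.01):
    """On-demand: pay only for tokens actually processed."""
    return tokens_per_month / 1000 * price_per_1k_tokens

def monthly_provisioned_cost(model_units=1, price_per_mu_hour=20.0, hours=730):
    """Provisioned: pay for reserved capacity whether or not it is used."""
    return model_units * price_per_mu_hour * hours

light = monthly_on_demand_cost(50_000_000)     # 50M tokens/month, prototyping
heavy = monthly_on_demand_cost(5_000_000_000)  # 5B tokens/month, production
reserved = monthly_provisioned_cost()          # 1 MU reserved all month
```

Under these assumed prices, the light workload is far cheaper on-demand while the heavy one is far cheaper reserved, which is the intuition the exam scenarios are testing.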

The Bedrock Security Stack

  • IAM policies control who can invoke which model.
  • KMS encrypts prompts, responses, and custom model artifacts.
  • VPC endpoints (PrivateLink) keep Bedrock traffic on the AWS network.
  • CloudTrail logs every InvokeModel call.
  • Guardrails apply content and PII policies at inference.
  • Data-training isolation guarantee: AWS contractually commits that your Bedrock prompts and responses are never used to train the underlying foundation models. Expect 1–2 exam questions that probe whether you know this.

Bedrock AgentCore (2026 — Now In Scope)

Bedrock AgentCore went from preview to production in early 2026, and because of AIF-C01's 3-month inclusion rule, it is fully testable. Expect 1–2 questions that probe your high-level understanding.

  • Policy controls (GA March 2026) — enforce exactly which actions, tools, and data sources an agent can reach. Policies are verified outside the agent's reasoning loop so that a prompt-injected LLM cannot self-authorize an unsafe action.
  • Stateful MCP (Model Context Protocol) server support — agents can now maintain context across sessions via managed MCP servers, a 2026 interoperability standard.
  • Memory streaming notifications — agents receive real-time updates when long-term memory changes.
  • Multi-agent collaboration — a supervisor agent coordinates specialist agents (e.g., a "research agent" + "writing agent" + "fact-check agent"), each with narrower tools and guardrails.

Exam shorthand: if a scenario asks "how do we make sure the agent never calls the payments API even if the user tricks the LLM?" → AgentCore policy controls (out-of-loop enforcement), not prompt engineering.

Bedrock Cross-Region Inference (2026 — Availability Pattern)

Cross-Region Inference automatically routes Bedrock invocation traffic to a secondary AWS Region if the primary Region experiences capacity throttling or an outage. It is opt-in and uses pre-defined Region pairs (e.g., us-east-1 ↔ us-west-2).

Exam traps:

  • It does improve availability and reliability without custom failover code.
  • It does not violate data residency unless you enable it — Bedrock is Region-scoped by default.
  • It is not the same as multi-Region replication. The model call simply fails over; data at rest (S3 training data, Knowledge Base vectors) does not move.

Rule of thumb: if the scenario says "ensure continuous inference during regional capacity events" → Cross-Region Inference. If it says "data must never leave Region X" → do not enable Cross-Region Inference, and restrict IAM accordingly.

Contextual Grounding Check (Guardrails Feature)

One of the most commonly missed Q1 2026 Guardrails features: contextual grounding check measures whether a model's response is actually grounded in the retrieved context (for RAG flows) and in the user's query — and blocks or flags ungrounded ("hallucinated") outputs. Expect a direct question on this as the responsible-AI mitigation for hallucinations in Bedrock Knowledge Bases.
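Conceptually, a grounding check scores how well an answer is supported by the retrieved context. The word-overlap heuristic below is only a toy illustration of that idea; the real Guardrails feature uses model-based scoring, not lexical overlap:

```python
# TOY illustration of the idea behind a contextual grounding check:
# flag answers with little lexical overlap with the retrieved context.
# The actual Bedrock Guardrails feature uses model-based scoring.

def grounding_score(answer: str, context: str) -> float:
    """Fraction of answer words that also appear in the context."""
    ctx = set(context.lower().split())
    ans = answer.lower().split()
    if not ans:
        return 0.0
    return sum(1 for w in ans if w in ctx) / len(ans)

def passes_grounding(answer, context, threshold=0.7):
    """Block/flag answers whose grounding score falls below a threshold."""
    return grounding_score(answer, context) >= threshold

context = "refunds are processed within 14 days of the return request"
grounded = grounding_score("refunds are processed within 14 days", context)
ungrounded = grounding_score("refunds take 3 business hours maybe", context)
```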


The Seven Responsible AI Pillars and How AWS Implements Each

This table is exam gold. Memorize it.

| Pillar | Definition | Primary AWS Tool(s) |
|---|---|---|
| Fairness | No systematic disadvantage to protected groups | SageMaker Clarify (bias detection) |
| Explainability | Humans understand model decisions | SageMaker Clarify (SHAP), Model Cards |
| Privacy & Security | Training data and inferences are protected | KMS, Bedrock Guardrails PII filter, Macie, VPC endpoints |
| Transparency | Stakeholders know AI is in use | Model Cards, AWS AI Service Cards |
| Veracity & Robustness | Truthful outputs, graceful on adversarial inputs | Bedrock Guardrails, Contextual Grounding Check, Model Monitor |
| Governance | Policies, ownership, audit trails | AWS CloudTrail, Audit Manager, Model Cards |
| Safety | No physical, reputational, or financial harm | Bedrock Guardrails, Amazon A2I (human-in-the-loop) |

Security, Compliance, and Governance Cheat Sheet

Memorize this mapping for Domain 5.

| Goal | AWS Service / Feature |
|---|---|
| Encrypt training data in S3 | KMS + S3 SSE-KMS |
| Encrypt model artifacts | Customer-managed KMS keys |
| Keep Bedrock traffic off the public internet | VPC endpoint (PrivateLink) |
| Deploy SageMaker endpoints privately | VPC-only mode |
| Find PII in S3 buckets | Amazon Macie |
| Redact PII from user prompts | Bedrock Guardrails PII filter |
| Detect PII in customer text | Amazon Comprehend |
| Log every model invocation | AWS CloudTrail |
| Track configuration drift | AWS Config |
| Map controls to compliance frameworks | AWS Audit Manager |
| Fine-grained access to data lake tables | AWS Lake Formation |
| Run models on HIPAA-regulated data | HIPAA-eligible services (Bedrock, SageMaker) under a BAA |
| Prove data never left a Region | Region-scoped Bedrock + CloudTrail |

AIF-C01 Pass Rate and Difficulty (2026)

AWS does not publish official pass rates. The estimates below aggregate community data through Q1 2026 from Reddit's r/AWSCertifications and r/learnmachinelearning, Tutorials Dojo community forums, Whizlabs post-exam surveys, and weekly exam-result threads:

| Metric | Estimate |
|---|---|
| First-attempt pass rate | 72–78% |
| Pass rate after 2+ practice exams scoring 80%+ | 90%+ |
| Average study hours (prior AWS experience) | 20–30 hrs |
| Average study hours (zero AWS experience) | 40–60 hrs |
| Most common failure point | Domain 3 (RAG vs. fine-tuning confusion, inference parameters) |
| Second most common failure point | Domain 4 (SageMaker Clarify vs. Model Monitor vs. Bedrock Guardrails) |

How hard is it, really? Easier than AWS Solutions Architect Associate (pass rate ~65–70%) and easier than AWS Developer Associate (~68%). Slightly harder than AWS Cloud Practitioner (~78–82%). On par with Azure AI-900 (~78%).

The exam rewards pattern recognition over memorization. If you can read a business scenario and immediately think "RAG," "fine-tuning," "Guardrails," or "Clarify" — you pass.


Ready to Test Yourself? Free Practice Exam Inside

Reading about AIF-C01 is not the same as answering AIF-C01 questions under a timer.

  • 250+ questions aligned to the 5 official domains
  • Timed full-length mock exams (65 questions in 90 minutes)
  • AI tutor explains every wrong answer
  • No signup required to start
Start FREE AWS AI Practitioner practice now → Practice questions with detailed explanations

The 2–4 Week AIF-C01 Study Plan (Beats Tutorials Dojo + Whizlabs)

Tutorials Dojo recommends 4–6 weeks. Whizlabs recommends 10–12 days. Both are wrong for most candidates. Here is a realistic, domain-weighted plan.

Fast Track: 2 Weeks (if you already use AWS)

Week 1 — Concepts and Bedrock

  • Days 1–2: Read the official AIF-C01 Exam Guide PDF end-to-end. Skim AWS Skill Builder "Standard Exam Prep Plan: AWS Certified AI Practitioner."
  • Days 3–4: Domain 1 (AI/ML fundamentals) + Domain 2 (Generative AI). Watch the AWS Skill Builder video series. Click through the Bedrock console in the AWS Free Tier playground. Invoke 3 different models.
  • Days 5–7: Domain 3 deep dive — prompt engineering, RAG, fine-tuning vs. continued pretraining, inference parameters. Build a tiny Bedrock Knowledge Base with 3 PDFs in S3 to internalize RAG.
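
To internalize what a Knowledge Base does before you build one, it helps to see the retrieve-then-augment loop stripped to its bones. This is a deliberately toy sketch — word overlap stands in for the real embedding search, and the documents are made up for illustration; Bedrock Knowledge Bases runs the same flow with vector embeddings over your S3 files:

```python
def retrieve(chunks, question, k=1):
    """Rank chunks by shared question words (a stand-in for vector search)."""
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(chunks, question):
    """RAG in one line: retrieved context is pasted into the prompt."""
    context = "\n".join(retrieve(chunks, question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday to Friday.",
]
prompt = build_rag_prompt(docs, "How long do refunds take?")
print(prompt)
```

The key exam takeaway is visible in the output: the model's knowledge is updated by swapping documents, not by retraining — which is exactly why "frequently updated knowledge" scenarios resolve to RAG.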

Week 2 — Responsible AI, Security, and Practice

  • Days 8–9: Domain 4 (Responsible AI) + Domain 5 (Security). Read the AWS AI Service Cards. Play with Bedrock Guardrails in the console.
  • Day 10: First full 65-question mock exam (timed). Aim 70%+.
  • Days 11–12: Review every missed question. Re-study weak domains.
  • Day 13: Second full mock exam. Aim 80%+.
  • Day 14: Final review of cheat sheets, sleep, exam.

Standard Track: 4 Weeks (if you are new to AI)

Week 1 — AI/ML literacy (Domain 1)

  • AWS Skill Builder "Standard Exam Prep Plan: AWS Certified AI Practitioner" (official, free)
  • SageMaker Canvas hands-on lab (build one no-code model)
  • Read: supervised vs. unsupervised, model evaluation metrics, ML lifecycle
  • End-of-week: 20-question Domain 1 drill

Week 2 — Generative AI (Domain 2) + Bedrock basics

  • Bedrock console hands-on: invoke Claude, Titan, and Llama from the playground
  • Watch: Stephane Maarek or Tutorials Dojo AIF-C01 Bedrock module
  • Read: transformers at a high level, tokens, embeddings, context windows
  • End-of-week: 20-question Domain 2 drill

Week 3 — Applications of Foundation Models (Domain 3 — 28%, spend extra time)

  • Deep dive: prompt engineering (zero-shot, few-shot, CoT), inference parameters
  • Build: a Bedrock Knowledge Base with 5–10 S3 documents (RAG)
  • Read: fine-tuning vs. continued pretraining vs. RAG (the AWS blog has great diagrams)
  • Experiment: temperature 0.1 vs. 0.9, top-p 0.5 vs. 0.95 — feel the difference
  • End-of-week: 30-question Domain 3 drill + first full mock exam
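
The temperature/top-p experiment can also be run offline to see the mechanics. This sketch uses an assumed four-token vocabulary with made-up logits: temperature rescales the whole distribution, while top-p truncates it to the smallest high-probability set:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities; low temperature sharpens the peak."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_set(probs, p):
    """Indices of the smallest set of tokens whose cumulative probability >= p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:
            break
    return kept

logits = [2.0, 1.0, 0.5, 0.1]        # made-up next-token logits
cold = softmax_with_temperature(logits, 0.1)  # near-deterministic
hot = softmax_with_temperature(logits, 0.9)   # flatter, more random
print(len(top_p_set(hot, 0.5)), len(top_p_set(hot, 0.95)))  # → 1 4
```

At temperature 0.1 the top token absorbs nearly all probability mass; at 0.9 the distribution flattens, and raising top-p from 0.5 to 0.95 widens the candidate set from 1 token to all 4 — the same behavior you'll feel in the Bedrock playground.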

Week 4 — Responsible AI, Security, and Practice

  • Domain 4: SageMaker Clarify, Model Cards, Bedrock Guardrails hands-on
  • Domain 5: IAM for Bedrock, VPC endpoints, CloudTrail, KMS, HIPAA eligibility
  • Two full timed mock exams. Aim 80%+ on both.
  • Day before exam: light review of cheat sheets only. Sleep 8 hours.

Recommended Resources (Free-First)

Free (and enough to pass)

  1. AWS Skill Builder — "Standard Exam Prep Plan: AWS Certified AI Practitioner" (official, free). The single most high-leverage free resource. Includes the Official Practice Question Set and a curated learning path.
  2. Official AIF-C01 Exam Guide PDF (docs.aws.amazon.com). Print it. Mark it up.
  3. AWS AI & ML documentation — Bedrock, SageMaker, Comprehend, Rekognition, Kendra, Q Business.
  4. AWS Blog — Generative AI category — real-world scenario questions often reference patterns from these posts.
  5. OpenExamPrep free AIF-C01 practice bank — 250+ questions, adaptive, AI-tutor explanations. Start here.
  6. AWS AI Service Cards — the source of responsible AI questions.

Paid (optional, for belt-and-suspenders candidates)

| Resource | Format | Approx. Cost | Worth It? |
|---|---|---|---|
| Stephane Maarek — Udemy AIF-C01 Course | Video, 12 hrs | $15–25 on sale | Yes — best paid video course |
| Tutorials Dojo — AIF-C01 Practice Exams | 6 full mocks | $15 | Yes — closest to exam feel |
| Whizlabs — AIF-C01 Course + Labs | Video + labs | $20–30 | Optional — good hands-on labs |
| AWS Skill Builder Individual subscription | Official labs + AWS Jam | $29/month | Optional — only if you want AWS Builder Labs |
| A Cloud Guru / Pluralsight AIF-C01 | Video | Subscription | Optional |

Do not buy braindumps. AWS voids certifications for candidates who use them.


Exam-Day Strategy

Registration

Register at aws.amazon.com/certification → AWS Certified AI Practitioner. You can choose:

  • Pearson VUE test center — in-person, proctored. Lower technical-glitch risk.
  • PSI online proctoring — at home. Requires webcam, clean desk, stable internet, and a quiet room.

Exam content and scoring are identical across both. Choose what's convenient. Most first-timers prefer a test center for less anxiety.

Before exam day

  • Register 1–2 weeks ahead. AIF-C01 has better availability than Associate-level exams, but popular slots fill fast.
  • If online: test your webcam, test your ID, clear your desk, pre-download the PSI Bridge app.
  • Sleep. Skip the all-nighter. Fatigue costs more points than one extra study session earns.

During the exam

  • 90 minutes, 65 questions ≈ 83 seconds per question. You have time.
  • Use the flag-for-review feature aggressively. If a question takes more than 90 seconds, flag and move on. Return at the end.
  • Read the last sentence of the question first. It usually defines what's being asked (cost, security, fastest, etc.).
  • Eliminate distractors. Two options are usually obviously wrong. The trap is between the two plausible ones.
  • For "choose the best" questions, pick the most specific AWS service. If you see "store documents for RAG," choose Bedrock Knowledge Bases, not "S3 + custom code."
  • For multiple response questions, the number of correct answers is stated. Don't guess — if it says "select TWO," select exactly two.
  • Trust your first instinct on AWS-service-naming questions. Second-guessing is where most points are lost.

After you submit

You'll see a preliminary pass/fail on screen. The official score report arrives in your AWS Certification Account within 5 business days, and your digital badge lands on Credly within 1–2 days.


Cost, Retakes, and Recertification

Cost breakdown

| Item | Cost |
|---|---|
| Exam fee | $100 USD |
| Retake fee | $100 USD (no discount) |
| Retake wait | 14 days |
| Typical study material | $0–$50 (free plan: $0) |
| Total realistic outlay | $100–$150 |

50% off voucher: AWS sometimes awards a 50% discount voucher when you pass an Official Practice Exam through AWS Skill Builder. Always check your AWS Certification Account before registering.

Retake policy

  • Fail → wait 14 days → pay $100 → retake.
  • No limit on retakes, but each costs $100.
  • If you fail, the score report shows domain-level performance — use it to target your weak domains.

Recertification (3-year cycle)

To keep AIF-C01 active past 3 years, do one of:

  1. Retake AIF-C01 (or its current version).
  2. Pass AWS Certified Machine Learning Engineer – Associate (MLA-C01) — automatically recertifies AIF-C01.
  3. Pass AWS Certified Machine Learning – Specialty (MLS-C01) (while it remains available).

AWS sends email reminders at 12, 6, and 3 months before expiration.


Salary and Career Impact (2026)

AIF-C01 is not a senior engineering cert — it will not, by itself, move you from $90K to $180K. But it does two valuable things:

  1. Unlocks AI-adjacent roles. Recruiters filter on "AWS + AI" keywords. AIF-C01 is the cheapest way to land in that filter bucket.
  2. Carries a Bedrock skill premium. Professionals who can credibly discuss Bedrock, Knowledge Bases, Guardrails, and Agents in an interview command 8–15% higher offers than identical candidates without AI fluency, per 2026 Dice and Built In compensation surveys.

Typical 2026 salary ranges by role (US, AIF-C01 certified)

| Role | Entry | Mid | Senior |
|---|---|---|---|
| Technical PM (AI/ML) | $110K | $140K | $180K+ |
| AI Solutions Consultant | $95K | $130K | $170K |
| Business Analyst (AI-focused) | $80K | $105K | $135K |
| Technical Account Manager (AWS partner) | $100K | $135K | $170K |
| AI Program Manager | $115K | $145K | $185K |
| Pre-sales / Sales Engineer (AI) | $110K | $155K (+ variable) | $200K+ (OTE) |
| Cloud Engineer (AI-adjacent) | $95K | $125K | $160K |
| Customer Success (AI products) | $85K | $110K | $140K |

The Bedrock skill premium is real. Candidates who can answer "walk me through how you would build a RAG chatbot on AWS" in an interview get stronger offers, even for non-engineering roles.

Career path after AIF-C01

Two natural next steps depending on your trajectory:

  • Business / consulting track: AIF-C01 → AWS Cloud Practitioner (CLF-C02) → AWS Certified Solutions Architect Associate (SAA-C03).
  • Engineering track: AIF-C01 → AWS Certified Solutions Architect Associate (SAA-C03) → AWS Certified Machine Learning Engineer – Associate (MLA-C01) → AWS Certified Machine Learning – Specialty (MLS-C01).

Common Mistakes (What Costs Candidates Points)

Based on Q1 2026 post-exam reports in r/AWSCertifications and Tutorials Dojo forums:

  1. Confusing RAG with fine-tuning. The exam throws at least 2–3 scenarios at you. Rule: dynamic/frequently-updated knowledge → RAG. Style/format/tone specialization → fine-tuning. Deep domain vocabulary from unlabeled text → continued pretraining.
  2. Mixing up temperature and top-p. Both control randomness. Temperature rescales the probability distribution; top-p truncates it. AWS recommends tuning one, not both. If a question asks which parameter controls "the set of tokens considered," it's top-p (or top-k), not temperature.
  3. Choosing SageMaker Clarify for inference-time safety. Clarify is for training/evaluation-time bias and explainability. For inference-time content safety, the answer is Bedrock Guardrails.
  4. Choosing SageMaker Model Monitor for bias detection. Model Monitor watches for drift and data quality in production. For bias, use Clarify.
  5. Picking Amazon Q Developer for business-user chatbots. Q Developer is the coding assistant (formerly CodeWhisperer). Amazon Q Business is the enterprise chatbot.
  6. Assuming Bedrock requires fine-tuning for every use case. Most business questions have "prompt engineering or RAG" as the correct answer — fine-tuning is a last resort because of cost and complexity.
  7. Choosing "train from scratch" as an answer. Almost never correct for a practitioner-level exam. If you see it, treat it as a distractor.
  8. Forgetting that Bedrock is Region-scoped. Data residency questions usually resolve to "data stays in the Region where the API is called."
  9. Using SageMaker for simple pretrained-image tasks. If the scenario is "detect objects in images," the answer is Amazon Rekognition, not SageMaker.
  10. Choosing Amazon Kendra when the scenario screams RAG. Kendra is enterprise search. Bedrock Knowledge Bases is the managed RAG service. When in doubt for generative AI + documents, pick Knowledge Bases.
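
The rule in mistake #1 (plus the "prompt engineering first" default from mistake #6) can be drilled as a tiny decision helper. The keyword lists here are illustrative, not from AWS — the point is the priority order, not the vocabulary:

```python
def pick_customization(scenario):
    """Map exam-scenario keywords to the customization technique AWS expects."""
    s = scenario.lower()
    # Dynamic or frequently updated knowledge → RAG
    if any(w in s for w in ("frequently updated", "latest", "company documents", "knowledge base")):
        return "RAG"
    # Style / format / tone specialization → fine-tuning
    if any(w in s for w in ("tone", "style", "format", "brand voice")):
        return "fine-tuning"
    # Deep domain vocabulary from unlabeled text → continued pretraining
    if any(w in s for w in ("domain vocabulary", "unlabeled", "medical corpus")):
        return "continued pretraining"
    # Default: the cheapest answer almost always wins on this exam
    return "prompt engineering"

print(pick_customization("Chatbot must answer from frequently updated policy documents"))
# → RAG
```

Note the ordering: RAG and prompt engineering bracket the list because fine-tuning and continued pretraining are the expensive options, and the exam rewards the cheapest technique that satisfies the scenario.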

AIF-C01 vs. Other Foundational AI Certs

If you're weighing AIF-C01 against alternatives, here's the straight comparison.

| Attribute | AWS AI Practitioner (AIF-C01) | AWS Cloud Practitioner (CLF-C02) | Azure AI Fundamentals (AI-900) | Google Cloud Generative AI Leader |
|---|---|---|---|---|
| Cost (USD) | $100 | $100 | $99 | $99 |
| Duration | 90 min | 90 min | 45–60 min | 90 min |
| Questions | 65 (50 scored + 15 unscored) | 65 (50 scored + 15 unscored) | 40–60 | 50–60 |
| Passing score | 700 / 1000 | 700 / 1000 | 700 / 1000 | Pass/fail (~70%) |
| Validity | 3 years | 3 years | Never expires | 3 years |
| Focus | AI/ML + GenAI + responsible AI + AWS AI services | AWS platform breadth | AI/ML on Azure (less GenAI) | GenAI strategy + Google Cloud GenAI |
| GenAI coverage | Deep (24% + 28% = 52% of exam) | Minimal (one task statement) | Moderate | High |
| Prerequisites | None | None | None | None |
| Best for | Non-engineer AI literacy, Bedrock fluency | Broad AWS literacy | Azure-shop analysts/PMs | Google Cloud AI-shop leaders |
| Recertification | MLA-C01 auto-recertifies | Any Associate cert auto-recertifies | N/A (never expires) | Retake or higher cert |

Bottom line:

  • If your company uses AWS → AIF-C01 wins.
  • If your company uses Azure and you want AI literacy → AI-900.
  • If your company uses Google Cloud → Generative AI Leader.
  • If you want broad AWS literacy without AI depth → CLF-C02 first, AIF-C01 second.
  • If you want both AWS platform and AI → CLF-C02 + AIF-C01 combo ($200 total).

Next Steps After AIF-C01

AIF-C01 is the start, not the end. Your logical next AWS credentials, depending on role:

| Goal | Next Certification | Why |
|---|---|---|
| Credible AWS generalist foundation | AWS Cloud Practitioner (CLF-C02) | Rounds out non-AI AWS knowledge |
| Full AWS architecture fluency | AWS Certified Solutions Architect – Associate (SAA-C03) | The most-valued AWS Associate cert |
| Production ML engineering | AWS Certified Machine Learning Engineer – Associate (MLA-C01) | Auto-recertifies AIF-C01; covers real ML engineering |
| Deepest ML specialization | AWS Certified Machine Learning – Specialty (MLS-C01) | Classic deep ML cert; being phased toward the MLA successor |
| Implementation-level AI skills | AWS Certified AI Implementation Professional (AIP-C01, when stable) | Hands-on Bedrock, RAG, Agents in production |
| DevOps for AI workloads | AWS Certified DevOps Engineer – Professional | Operationalize AI pipelines |
| Security for AI workloads | AWS Certified Security – Specialty | Secure AI at scale |
Most common sensible path for PMs/analysts: AIF-C01 → CLF-C02 → SAA-C03.

Most common sensible path for engineers: AIF-C01 → SAA-C03 → MLA-C01.


Ready? Go Pass AIF-C01 This Month.

You now have the weighted domain breakdown, the Bedrock deep dive, the RAG-vs-fine-tuning rule, the responsible AI pillar map, the security cheat sheet, the common mistakes, the comparison table, and the study plan.

The only thing left is practice.

Certifications are not about knowing everything. They are about finishing. Set the date. Do the reps. Pass once. Move on.


Official Sources

All exam logistics in this guide were verified against AWS's primary sources at publication. Cross-check these before exam day in case AWS updates anything:

  • AWS Certified AI Practitioner landing page: aws.amazon.com/certification/certified-ai-practitioner
  • AIF-C01 Exam Guide (HTML): docs.aws.amazon.com/aws-certification/latest/ai-practitioner-01/ai-practitioner-01.html
  • AIF-C01 Exam Guide (PDF): docs.aws.amazon.com/pdfs/aws-certification/latest/ai-practitioner-01/ai-practitioner-01.pdf
  • AWS Skill Builder exam prep: explore.skillbuilder.aws (search "AWS Certified AI Practitioner")
  • AWS AI Service Cards: aws.amazon.com/machine-learning/responsible-ai/resources
  • Amazon Bedrock documentation: docs.aws.amazon.com/bedrock
  • Amazon SageMaker documentation: docs.aws.amazon.com/sagemaker
  • AWS Recertification policy: aws.amazon.com/certification/recertification
  • Pearson VUE scheduling: home.pearsonvue.com/aws
  • PSI online proctoring: aws-certification.psionline.com

Good luck. See you on the other side of AIF-C01.
