1.5 Domain Weights and Study Prioritization
Key Takeaways
- Applications of Foundation Models is the largest AIF-C01 domain at 28 percent.
- Fundamentals of GenAI is 24 percent, and Fundamentals of AI and ML is 20 percent.
- Guidelines for Responsible AI and Security, Compliance, and Governance for AI Solutions are each 14 percent.
- Weights guide study allocation, but exam scenarios often combine topics across domains, such as Bedrock, responsible AI, IAM, cost, and data readiness.
Weighted study map
A candidate who studies every domain for the same amount of time may feel organized while ignoring the official weighting signal. The AIF-C01 exam guide lists five domains with different weights. Applications of Foundation Models is the largest at 28 percent, followed by Fundamentals of GenAI at 24 percent, Fundamentals of AI and ML at 20 percent, Guidelines for Responsible AI at 14 percent, and Security, Compliance, and Governance for AI Solutions at 14 percent.
Weights are not an exact question count. A 28 percent domain does not tell you the order of questions or the exact number of items you will see; it tells you where AWS expects more exam emphasis. Use the percentages to plan study time, practice review, and hands-on exploration, then adjust based on your error log.
| Domain | Weight | Study priority | Scenario cue |
|---|---|---|---|
| Fundamentals of AI and ML | 20% | Build the core language of models, data, lifecycle, metrics, and use-case fit. | Is AI appropriate, and what kind of learning or data is involved? |
| Fundamentals of GenAI | 24% | Learn foundation models, LLMs, tokens, embeddings, prompts, inference, and customization concepts. | Can a generative model solve the task, and what risks follow? |
| Applications of Foundation Models | 28% | Spend the most time on Bedrock, Amazon Q, RAG, agents, guardrails, evaluation, and service selection. | Which AWS foundation model path fits the business requirement? |
| Guidelines for Responsible AI | 14% | Study fairness, explainability, privacy, safety, transparency, accountability, and human review. | What could go wrong for people, users, or the organization? |
| Security, Compliance, and Governance | 14% | Study IAM, encryption, logging, monitoring, privacy, policies, and governance lifecycle. | Who can access what, where data goes, and how controls are verified? |
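To make the weights concrete, the sketch below converts the official percentages into hours for a fixed weekly budget. The 10-hour budget and the helper function are illustrative assumptions for planning, not anything from the exam guide.

```python
# Illustrative planning sketch: split a weekly study budget in proportion
# to the official AIF-C01 domain weights. The 10-hour budget is an assumed
# example value, not an AWS recommendation.
WEIGHTS = {
    "Fundamentals of AI and ML": 0.20,
    "Fundamentals of GenAI": 0.24,
    "Applications of Foundation Models": 0.28,
    "Guidelines for Responsible AI": 0.14,
    "Security, Compliance, and Governance": 0.14,
}

def hours_per_domain(weekly_hours: float) -> dict:
    """Allocate study hours to each domain in proportion to its weight."""
    return {domain: round(weekly_hours * weight, 1) for domain, weight in WEIGHTS.items()}

if __name__ == "__main__":
    for domain, hours in hours_per_domain(10).items():
        print(f"{domain}: {hours} h")
```

With a 10-hour week, this gives 2.8 hours to the largest domain and 1.4 hours each to the two 14 percent domains, which is the baseline that the study allocation table below then adjusts.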
Why the largest domain needs scenario depth
Applications of Foundation Models is the largest domain because it sits closest to current business demand. Teams want copilots, knowledge assistants, summarizers, content generators, search experiences, and task automation. On AWS, that pulls in Amazon Bedrock, model selection, Amazon Q, SageMaker AI, SageMaker Canvas, Bedrock Knowledge Bases, Agents, and Guardrails, embeddings, vector search, RAG, model evaluation, and cost and performance tradeoffs.
A shallow plan would memorize that Bedrock is associated with foundation models. A serious plan asks which model capability is needed, whether retrieval from enterprise content is required, whether a vector store or embeddings are part of the design, whether an agent should call tools, whether guardrails should constrain outputs, how to evaluate quality, and how latency and cost affect the choice. That is the level of practitioner decision-making the domain weighting deserves.
Do not starve the 14 percent domains
Responsible AI and Security, Compliance, and Governance are each 14 percent, but they are not small in real projects. A solution can be technically correct and still unacceptable if it exposes sensitive prompts, lacks human review, produces biased recommendations, ignores transparency, or grants broad permissions. These domains often appear as constraints inside larger foundation model scenarios.
For example, a human resources team wants AI to screen resumes. The AI/ML domain asks whether the use case is appropriate and what data exists. The GenAI domain asks whether a foundation model is suitable. The foundation model application domain asks whether to use a managed service, RAG, guardrails, or evaluation. Responsible AI asks about fairness, explainability, and human oversight. Security and governance ask about IAM, data privacy, logging, retention, and compliance review.
Study allocation model
| Weekly study block | Suggested share | Work product |
|---|---|---|
| Foundation model applications | 30% | Service-selection table, Bedrock/RAG/agent/guardrail scenarios, cost notes. |
| GenAI fundamentals | 25% | Prompting, tokens, embeddings, inference parameters, model customization map. |
| AI/ML fundamentals | 20% | Use-case fit matrix, lifecycle notes, metrics and data type drills. |
| Responsible AI | 12.5% | Risk checklist for fairness, privacy, safety, transparency, and human review. |
| Security and governance | 12.5% | IAM/shared responsibility/logging/encryption/governance checklist. |
This allocation tracks the official weights, with small adjustments for practical scheduling. After each practice set, update the allocation. If you repeatedly miss responsible AI questions, increase responsible AI time even though its official weight is lower. If you know AI/ML vocabulary but miss Bedrock service boundaries, move time toward foundation model applications.
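One way to run that feedback loop is to blend the planned shares with the share of recent misses from your error log and renormalize, as in the sketch below. The miss counts and the 0.5 blending factor are assumed example values, not official guidance.

```python
# Illustrative sketch: nudge planned study shares toward domains you keep
# missing in practice sets. Planned shares mirror the table above; miss
# counts and the blend factor are assumed example values.
PLANNED = {
    "Foundation model applications": 0.30,
    "GenAI fundamentals": 0.25,
    "AI/ML fundamentals": 0.20,
    "Responsible AI": 0.125,
    "Security and governance": 0.125,
}

def adjust_shares(planned, miss_counts, blend=0.5):
    """Blend each planned share with its share of recent misses, then renormalize."""
    total_misses = sum(miss_counts.values()) or 1.0
    raw = {
        domain: (1 - blend) * share + blend * (miss_counts.get(domain, 0) / total_misses)
        for domain, share in planned.items()
    }
    scale = sum(raw.values())
    return {domain: round(value / scale, 3) for domain, value in raw.items()}

# Example: repeated responsible AI misses pull time toward that block.
print(adjust_shares(PLANNED, {"Responsible AI": 6, "GenAI fundamentals": 2}))
```

The result keeps every block funded but shifts hours toward the weak spots, which is exactly the adjustment rule described above.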
Cross-domain decision tree
- Identify the business outcome: classify, summarize, generate, forecast, recommend, extract, translate, converse, or search.
- Decide whether AI is suitable: check data quality, deterministic needs, risk, value, and governance readiness.
- Pick the concept family: traditional ML, managed AI service, generative AI, foundation model application, or no-AI path.
- Map to AWS services at practitioner depth: Bedrock, Amazon Q, SageMaker AI, SageMaker Canvas, or a managed AI service as appropriate.
- Add responsible AI controls: fairness, explainability, safety, privacy, transparency, accountability, and human review.
- Add security and governance controls: IAM, encryption, logging, monitoring, data retention, compliance evidence, and cost tracking.
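A compact way to internalize this flow is to write it down as a small function, as in the sketch below. The scenario fields and the service suggestions are simplified study illustrations, not an official selection algorithm.

```python
# Illustrative sketch of the cross-domain decision flow: from business
# outcome and constraints to a concept family and candidate AWS services.
# Field names and mappings are simplified assumptions for study purposes.
from dataclasses import dataclass

@dataclass
class Scenario:
    outcome: str                    # e.g. "summarize", "forecast", "classify"
    has_quality_data: bool          # data readiness check from the AI/ML domain
    needs_generation: bool          # does the task require generated content?
    needs_enterprise_context: bool  # retrieval from internal documents (RAG)?

def pick_path(s: Scenario) -> str:
    """Map a simplified scenario to a concept family and a candidate service path."""
    if not s.has_quality_data:
        return "Pause: address data readiness and governance before choosing a model path."
    if s.needs_generation and s.needs_enterprise_context:
        return "Foundation model application: Bedrock with a knowledge base (RAG) plus guardrails."
    if s.needs_generation:
        return "Generative AI: Bedrock or Amazon Q, with responsible AI and IAM controls."
    return "Traditional ML or a managed AI service: SageMaker AI or SageMaker Canvas."

print(pick_path(Scenario("summarize", True, True, True)))
```

Whatever path the function returns, the last two steps of the list still apply: responsible AI controls and security and governance controls are added to every branch.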
Priority traps
The first trap is studying GenAI as if it replaced all AI/ML foundations. You still need supervised, unsupervised, and reinforcement learning concepts, data types, model lifecycle, evaluation metrics, and when AI is not appropriate. The second trap is studying security as generic cloud trivia. In AIF-C01, security must be tied to AI prompts, model access, data privacy, logging, monitoring, and governance.
The third trap is ignoring business value. The exam is not only asking which service can do a task. It can ask whether the task should be automated, whether the organization has suitable data, whether human review is needed, or whether a managed AWS service avoids unnecessary build complexity. Use domain weights to decide volume, but use scenarios to decide quality.
Review questions
A learner has only one extra evening before review week. Which official AIF-C01 domain weight suggests the highest-priority area for additional scenario practice?
A resume-screening AI scenario includes model fit, fairness, human review, IAM, and logging. What does this show about domain weights?
Which pair of domains are each weighted 14 percent in the official AIF-C01 exam guide?