10.7 Full AIF-C01 Business Simulation

Key Takeaways

  • A full business simulation should connect AI fundamentals, generative AI, foundation model applications, responsible AI, and security governance in one decision packet.
  • The practitioner role is to identify fit, ask evidence-based questions, choose managed AWS services where appropriate, and avoid overbuilding when a simpler workflow works.
  • The best answer in scenario work often depends on constraints: data quality, risk level, latency, cost, permissions, human review, and current AWS service availability.
  • Original practice questions should build reasoning skill without claiming to reproduce real exam items.
  • A final review habit is to document assumptions, state tradeoffs, and choose the lowest-risk path that meets the business goal.
Last updated: May 2026

Full business simulation: AI steering committee packet

You are advising an AI steering committee at a regional healthcare services company. The committee has four proposals: an employee knowledge assistant, a patient contact-center summarizer, a document intake workflow for insurance forms, and a predictive dashboard for appointment no-shows. The organization uses AWS for core applications, has a small cloud operations team, and has business analysts who can use managed tools but no dedicated ML engineering team for the first release. The committee wants a practical recommendation, not a technology showcase.

This simulation combines the five AIF-C01 domains. You must identify whether AI is useful, decide whether generative AI or traditional ML fits, choose AWS services at a practitioner level, describe responsible AI concerns, and request security and governance evidence. Do not treat this as a real exam question set. Treat it as a business lab where every answer needs a reason, an assumption, and a failure-mode check.

| Proposal | Likely AI pattern | AWS services to consider | Main risk question |
| --- | --- | --- | --- |
| Employee knowledge assistant | Enterprise search or RAG | Amazon Q Business, Amazon Kendra, Bedrock Knowledge Bases, S3, IAM Identity Center | Can permissions and authoritative sources be enforced? |
| Contact-center summarizer | GenAI summarization and classification | Amazon Transcribe, Amazon Bedrock, Guardrails, Comprehend, CloudWatch | Are transcripts redacted, reviewed, and monitored for unsafe summaries? |
| Insurance document intake | Document extraction and assisted review | Textract, Bedrock, Amazon A2I, S3, KMS, Macie, Audit Manager | Does a human approve high-impact or low-confidence outputs? |
| No-show prediction dashboard | Traditional ML or no-code prediction | SageMaker Canvas, SageMaker AI, QuickSight, S3, Glue | Are labels valid, features appropriate, and actions fair to patients? |

Start with the employee knowledge assistant. The problem is finding trusted internal information. If the organization wants a managed employee assistant with connectors and permission-aware answers, Amazon Q Business may be a strong first path. If builders need a custom application experience with generated answers from a specific corpus, Bedrock Knowledge Bases may fit. If the main need is ranked enterprise search rather than generated answers, Amazon Kendra can be considered. The committee should not approve indexing every shared folder until owners remove stale and restricted content.
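
If the Bedrock Knowledge Bases path is chosen, the retrieve-and-generate pattern can be sketched as below. This is a minimal illustration, not a production design: the knowledge-base ID and model ARN are placeholders, and the request is built in a helper so the payload shape can be checked without credentials.

```python
def build_kb_query(question: str, kb_id: str, model_arn: str) -> dict:
    """Assemble a retrieve-and-generate request for a Bedrock knowledge base.

    The IDs here are placeholders; real values come from the Bedrock console
    or infrastructure-as-code outputs.
    """
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }


# With AWS credentials configured, the call would look roughly like:
#   import boto3
#   client = boto3.client("bedrock-agent-runtime")
#   response = client.retrieve_and_generate(
#       **build_kb_query("What is the parental leave policy?",
#                        "KB-ID-PLACEHOLDER", "MODEL-ARN-PLACEHOLDER"))
#   print(response["output"]["text"])

payload = build_kb_query("What is the parental leave policy?",
                         "KB-ID-PLACEHOLDER", "MODEL-ARN-PLACEHOLDER")
print(payload["input"]["text"])
```

Note that the knowledge base only answers from its indexed corpus, which is exactly why the committee's precondition matters: stale or restricted documents in the index become stale or restricted answers.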

For the contact-center summarizer, the flow may start with Amazon Transcribe to convert calls to text, then Bedrock to summarize, classify reason codes, and draft follow-up notes. Amazon Bedrock Guardrails can help block unsafe content, detect prompt attacks, and enforce sensitive-information rules. The summary should be agent-reviewed before it becomes part of a customer record. The committee should ask what happens when audio quality is poor, when a customer gives protected information, when a summary omits a complaint, or when the model suggests an action the agent is not allowed to take.
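
The summarization step can be sketched with the Bedrock Converse request shape. This is illustrative only: the model ID is a placeholder, the redaction instruction in the prompt supplements (never replaces) guardrail and Comprehend PII controls, and the request is built in a helper so it can be inspected without making the call.

```python
def build_summary_request(transcript: str, model_id: str) -> dict:
    """Build a Bedrock Converse-style request that asks for a redacted,
    complaint-aware summary of a support-call transcript.

    The prompt instruction is a supplement to, not a substitute for,
    guardrails and PII detection on the pipeline itself.
    """
    instructions = (
        "Summarize this support call for the agent's notes. "
        "Do not include names, phone numbers, or other personal identifiers. "
        "Explicitly note any unresolved complaint or promised follow-up."
    )
    return {
        "modelId": model_id,  # placeholder; a real Bedrock model ID goes here
        "messages": [{
            "role": "user",
            "content": [{"text": f"{instructions}\n\nTranscript:\n{transcript}"}],
        }],
    }


# With credentials configured, the call would look roughly like:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   reply = client.converse(**build_summary_request(call_text, "MODEL-ID-PLACEHOLDER"))

req = build_summary_request("Customer asked about a billing error.", "MODEL-ID-PLACEHOLDER")
print(req["messages"][0]["role"])
```

Whatever the model returns, the design point from the paragraph above still holds: the summary is agent-assist output and should never be committed to a customer record without agent review.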

For insurance document intake, Textract is the likely extraction service for forms and tables, while Bedrock can help summarize extracted text and flag missing fields. Amazon A2I or an internal review queue can route low-confidence or high-impact cases to humans. S3 with KMS, Macie, IAM, CloudTrail, and retention controls matter because insurance documents can contain sensitive personal information. The committee should reject any proposal where generated summaries become final determinations without review evidence.
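
The human-review routing rule can be made concrete with a small gating function. The thresholds and field names below are invented for illustration; a real deployment would tune them against sample documents and route flagged cases through Amazon A2I or an internal queue.

```python
def needs_human_review(min_field_confidence: float,
                       claim_amount: float,
                       required_fields_present: bool) -> bool:
    """Decide whether an extracted insurance form must go to a reviewer.

    Thresholds are illustrative. Textract reports per-field confidences on
    a 0-100 scale; the claim-amount cutoff defines "high-impact" here.
    """
    CONFIDENCE_FLOOR = 90.0      # any field below this triggers review
    HIGH_IMPACT_AMOUNT = 5000.0  # large claims always get a reviewer

    return (min_field_confidence < CONFIDENCE_FLOOR
            or not required_fields_present
            or claim_amount >= HIGH_IMPACT_AMOUNT)


# A confident, complete, low-value form can proceed to assisted review;
# everything else is routed to a human first.
print(needs_human_review(97.5, 250.0, True))    # confident small claim
print(needs_human_review(82.0, 250.0, True))    # low confidence
print(needs_human_review(97.5, 12000.0, True))  # high-impact amount
```

The important property is the asymmetry: the function can only add review, never waive it for high-impact or low-confidence cases, which matches the committee's rejection criterion.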

For appointment no-show prediction, a traditional ML pattern may be more appropriate than generative AI. The business wants a risk score or dashboard, not creative text. SageMaker Canvas can let analysts experiment with a no-code prediction approach if the data is prepared. SageMaker AI can support a custom path later if builders need control. The fairness review is essential: using features that proxy for sensitive status can create harmful outreach patterns. The action should be supportive, such as reminder options, not punitive denial of care.
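
The shape of this pattern can be shown with a deliberately hand-written risk score standing in for a trained model (for example, one an analyst would build in SageMaker Canvas). The weights, features, and thresholds are entirely made up; the point is that inputs are operational features, sensitive attributes are excluded by design, and every output maps to a supportive action.

```python
def no_show_risk(prior_no_shows: int, days_since_booking: int,
                 reminder_opt_in: bool) -> float:
    """Toy stand-in for a trained no-show model; weights are illustrative.

    Features are operational only. Sensitive attributes, and features that
    proxy for them, are deliberately excluded per the fairness review.
    """
    score = 0.15 * prior_no_shows + 0.01 * days_since_booking
    if not reminder_opt_in:
        score += 0.1
    return min(score, 1.0)


def outreach_action(risk: float) -> str:
    """Map risk to a supportive action; no punitive path exists."""
    if risk >= 0.5:
        return "call_reminder"
    if risk >= 0.25:
        return "sms_reminder"
    return "standard_reminder"


risk = no_show_risk(prior_no_shows=3, days_since_booking=10, reminder_opt_in=False)
print(risk, outreach_action(risk))
```

Note that `outreach_action` has no branch that denies or deprioritizes care; the worst outcome of a high score is an extra phone call, which keeps the model's failure modes low-stakes.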

Committee decision workflow:

  1. State the business outcome and non-AI baseline for each proposal.
  2. Classify the AI type: search, RAG, generation, extraction, recommendation, forecasting, classification, or decision support.
  3. Check data readiness, data permissions, and source ownership.
  4. Choose the simplest AWS service path that meets the need and skill level.
  5. Define human review, refusal behavior, and escalation for high-risk outputs.
  6. Estimate cost and performance drivers before launch.
  7. Require monitoring for quality, safety, usage, drift, errors, and business outcome.
  8. Approve one or two bounded pilots instead of funding every idea at full scale.
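
The workflow above can be sketched as a gating function the committee applies to each proposal. The field names are invented for illustration; the useful property is that a proposal missing any step is deferred with the gaps named, rather than approved on enthusiasm.

```python
def pilot_decision(proposal: dict) -> str:
    """Gate a proposal on the committee checklist; field names illustrative.

    Every checklist step must be satisfied (truthy) before a bounded pilot
    is approved; otherwise the decision names what is missing.
    """
    required = [
        "business_outcome",      # step 1: outcome and non-AI baseline stated
        "baseline",
        "ai_type",               # step 2: pattern classified
        "data_ready",            # step 3: data readiness and ownership
        "service_path",          # step 4: simplest AWS service path chosen
        "human_review_defined",  # step 5: review, refusal, escalation
        "cost_estimate",         # step 6: cost and performance drivers
        "monitoring_plan",       # step 7: quality, safety, drift, outcome
    ]
    missing = [k for k in required if not proposal.get(k)]
    if missing:
        return "defer: missing " + ", ".join(missing)
    return "approve bounded pilot"


complete = {k: True for k in [
    "business_outcome", "baseline", "ai_type", "data_ready", "service_path",
    "human_review_defined", "cost_estimate", "monitoring_plan"]}
print(pilot_decision(complete))
print(pilot_decision({"business_outcome": True, "ai_type": "RAG"}))
```

Step 8 lives outside the function: even among proposals that pass, the committee funds only one or two bounded pilots at a time.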

A strong recommendation might approve the employee knowledge assistant for a narrow HR and IT corpus, approve the contact-center summarizer as agent-assist only, defer the document intake workflow until sample-document handling and reviewer evidence are designed, and require a fairness review before the no-show model pilot. This is not anti-AI. It is disciplined sequencing. The organization learns safely, proves value, and avoids turning unmanaged experiments into production obligations.

Simulation failure modes:

  • The committee picks Bedrock for every problem, including cases where search, Textract, Personalize, SageMaker Canvas, or a rules-based workflow is better.
  • The team assumes managed AWS services remove responsibility for IAM, privacy, logging, human review, and data quality.
  • The pilot measures demo quality but not business outcome.
  • Sensitive data appears in prompts, responses, feedback, or logs without retention and access review.
  • A model is changed without rerunning the evaluation set.
  • A prediction dashboard influences customers or employees without appeal, explanation, or fairness review.
  • Costs rise because pilots lack budgets, tags, owners, and shutdown criteria.
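
The last failure mode is cheap to guard against in code review or CI. A minimal sketch, with invented field names, might refuse to provision any pilot that lacks an owner, a budget, cost-allocation tags, and shutdown criteria:

```python
def pilot_is_fundable(pilot: dict) -> bool:
    """Reject pilots missing basic cost governance; field names illustrative.

    Mirrors the failure mode above: no owner, budget, tags, or shutdown
    criteria means no funding, regardless of how promising the demo is.
    """
    return all([
        bool(pilot.get("owner")),
        pilot.get("monthly_budget_usd", 0) > 0,
        bool(pilot.get("cost_tags")),
        bool(pilot.get("shutdown_criteria")),
    ])


good = {"owner": "analytics-team", "monthly_budget_usd": 2000,
        "cost_tags": {"project": "kb-pilot"},
        "shutdown_criteria": "no measurable lift after 90 days"}
print(pilot_is_fundable(good))
print(pilot_is_fundable({"owner": "analytics-team"}))
```

In practice the same checks map onto AWS cost-allocation tags and budget alarms, but the governance rule itself is just this: every pilot has a named owner, a spending cap, and a defined end.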

Review prompts before the quiz:

  • Which proposal is the safest first pilot and why?
  • Which proposal needs the most human review before production?
  • Which proposal is generative AI, and which is traditional prediction?
  • Which AWS managed service reduces build effort without removing governance responsibility?
  • What assumption would change your recommendation if it turned out to be false?
Test Your Knowledge

A steering committee wants generated answers from HR and IT documents, but only after content owners remove stale and restricted files. What is the best practitioner recommendation?

Test Your Knowledge

A proposal predicts appointment no-show risk so staff can send extra reminders. Which AI pattern is most likely?

Test Your Knowledge

Which statement best captures the final simulation mindset?
