1.4 Target Candidate Boundaries and Out-of-Scope Tasks

Key Takeaways

  • The target candidate has up to 6 months of exposure to AI/ML technologies on AWS and uses, but does not necessarily build, AI/ML solutions.
  • AWS lists business analyst, IT support, marketing, product/project manager, line-of-business or IT manager, and sales roles as examples.
  • Recommended AWS knowledge includes core services, Amazon Bedrock, Amazon SageMaker AI, IAM, shared responsibility, and pricing models.
  • Out-of-scope tasks include coding models, data engineering implementation, feature engineering implementation, hyperparameter tuning, pipeline deployment, mathematical model analysis, and implementing security or governance frameworks.
Last updated: May 2026

Candidate boundary scenarios

A line-of-business manager is asked to approve an AI feature that summarizes customer emails. An IT support analyst is asked whether a chatbot should connect to internal knowledge articles. A sales professional needs to explain why a customer might choose a managed foundation model service instead of building a model from scratch. These are AIF-C01-shaped situations. The candidate uses AI/ML solutions and participates in decisions, but does not necessarily build the solution.

The AWS exam guide describes the target candidate as someone with up to 6 months of exposure to AI/ML technologies on AWS. That phrase sets both a floor and a ceiling. The floor is that you need real familiarity with AWS AI/ML terms, services, and business applications. The ceiling is that the exam does not expect you to be the person whose main job is coding algorithms, engineering features, tuning model hyperparameters, or deploying ML pipelines.

| Role example from AWS | Typical AIF-C01 responsibility | What not to overclaim |
| --- | --- | --- |
| Business analyst | Connect AI use cases to measurable business outcomes and data readiness. | Owning model architecture or training code. |
| IT support | Understand access, support paths, user impact, and escalation signals. | Designing custom ML infrastructure. |
| Marketing professional | Evaluate content generation, personalization, brand risk, and review workflows. | Performing statistical model analysis. |
| Product or project manager | Define requirements, tradeoffs, risks, and success metrics. | Tuning model hyperparameters. |
| Line-of-business or IT manager | Approve governance, budget, security, and service fit. | Implementing governance frameworks alone. |
| Sales professional | Explain AWS AI/ML value at a responsible conceptual level. | Claiming guaranteed outcomes or live exam knowledge. |

Recommended AWS knowledge

The exam guide recommends knowledge of core AWS services such as Amazon EC2, Amazon S3, AWS Lambda, Amazon Bedrock, and Amazon SageMaker AI. It also recommends the AWS shared responsibility model, IAM for securing and controlling access, and AWS service pricing models. This mix is a strong clue about exam tone. You need cloud fluency because AI workloads run inside cloud cost, access, data, and security boundaries.

For example, a generative AI use case may be about Amazon Bedrock, but the scenario can still depend on S3 data, IAM permissions, logging, encryption, or cost. A recommendation-system scenario may not require you to code the algorithm, but it may require you to understand whether the organization has suitable data, whether the output should be monitored, and whether a managed AWS service or a custom ML path is more appropriate.
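The dependency questions above can be sketched as a simple readiness check. This is an illustrative sketch only, not AWS code: the field names (`data_location`, `access_model`, and so on) are hypothetical labels for the practitioner questions, chosen here for discussion.

```python
# Illustrative sketch: a practitioner-level readiness check for a
# generative AI use case. Field names are hypothetical, not an AWS API.

def readiness_gaps(use_case: dict) -> list[str]:
    """Return open questions a practitioner should raise before approval."""
    gaps = []
    if not use_case.get("data_location"):      # e.g. data in an S3 bucket
        gaps.append("Where does the input data live, and who owns it?")
    if not use_case.get("access_model"):       # e.g. IAM roles and policies
        gaps.append("Which IAM roles and permissions scope access?")
    if not use_case.get("monitoring"):         # logging and human review
        gaps.append("How are outputs logged, monitored, and reviewed?")
    if not use_case.get("cost_estimate"):      # service pricing model
        gaps.append("What is the expected cost under the pricing model?")
    return gaps

# A proposal that has named its data and access story, but not
# monitoring or cost, still has two open questions.
proposal = {"data_location": "s3://example-bucket/emails/",
            "access_model": "scoped IAM role"}
print(readiness_gaps(proposal))
```

The point of the sketch is that an "Amazon Bedrock question" is rarely only about Bedrock; the practitioner's value is in surfacing the surrounding data, access, monitoring, and cost questions.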

In scope versus out of scope

| Task area | Practitioner-level in scope | Out of scope for the target candidate |
| --- | --- | --- |
| Model concepts | Explain training, inference, model evaluation, bias, and fit at a high level. | Developing or coding AI/ML models or algorithms. |
| Data | Recognize data quality, labeled data, privacy, and suitability issues. | Implementing data engineering or feature engineering. |
| Optimization | Know that tuning affects performance, cost, and accuracy. | Hyperparameter tuning or model optimization as a builder task. |
| Deployment | Understand lifecycle stages and monitoring questions. | Building and deploying AI/ML pipelines or infrastructure. |
| Statistics | Interpret common evaluation and business metrics conceptually. | Conducting mathematical or statistical model analysis. |
| Governance | Identify policy, review, privacy, and compliance concerns. | Implementing security, compliance, or governance frameworks. |

The approval conversation

A practitioner should be able to run an approval conversation. Start with the business problem. Is the goal classification, summarization, forecasting, recommendation, search, extraction, translation, speech, image analysis, or decision support? Then ask whether AI is actually useful. If the process requires a deterministic answer every time, has poor data, has weak business value, or carries high unmanaged risk, AI may not be appropriate.
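The "is AI actually useful?" screen can be written down as a short checklist. A minimal sketch follows; the flag names and wording are assumptions made for illustration, mirroring the four red flags in the text.

```python
# Illustrative sketch of the "is AI actually useful?" screen.
# Flag names are hypothetical; the red flags mirror the text above.

RED_FLAGS = {
    "needs_deterministic_answer": "Process requires the same answer every time.",
    "poor_data": "Data is missing, low quality, or unsuitable.",
    "weak_business_value": "Expected value does not justify the effort.",
    "high_unmanaged_risk": "Risk has no owner, review, or mitigation.",
}

def reasons_against_ai(flags: set[str]) -> list[str]:
    """Return the reasons, if any, that argue against an AI solution."""
    return [RED_FLAGS[f] for f in sorted(flags) if f in RED_FLAGS]
```

Usage: `reasons_against_ai({"poor_data"})` returns one reason; an empty set returns no reasons, meaning the conversation can move on to service fit.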

Next, ask which AWS path is plausible. A managed AI service may fit speech, text extraction, translation, or image recognition. Amazon Bedrock may fit foundation model and generative AI use cases. Amazon SageMaker AI may be relevant when the team needs broader ML development capability. The exam does not require you to deploy every option, but it does expect you to recognize the decision boundary.
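That decision boundary can be summarized as a small mapping. This is a deliberately simplified sketch of the reasoning in the paragraph above, not a real selection tool: actual service selection also needs the data, cost, and governance review discussed next.

```python
# Illustrative sketch of the decision boundary described above.
# Deliberately simplified; not a substitute for a full evaluation.

MANAGED_AI_FITS = {"speech", "text extraction", "translation",
                   "image recognition"}

def plausible_path(problem: str, needs_foundation_model: bool,
                   needs_custom_ml: bool) -> str:
    """Suggest which AWS path is plausible for a given problem type."""
    if problem in MANAGED_AI_FITS:
        return "managed AI service"
    if needs_foundation_model:
        return "Amazon Bedrock"
    if needs_custom_ml:
        return "Amazon SageMaker AI"
    return "needs further scoping"
```

For example, a translation task maps to a managed AI service, while an email-summarization feature built on a foundation model maps to Amazon Bedrock.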

Then ask who owns risk. IAM, shared responsibility, pricing, data privacy, monitoring, human review, content safety, and governance are not separate from AI. A model output can be wrong, biased, unsafe, or too expensive. A prompt can include sensitive data. A user can have too much access. A service can be technically impressive and still fail the business requirement.

Study boundary checklist

  • Learn service selection at the level of business problem, data type, managed service fit, and governance risk.
  • Learn model lifecycle terms without turning them into coding tasks.
  • Learn IAM, shared responsibility, and pricing enough to evaluate AI use cases responsibly.
  • Practice explaining when AI is not appropriate.
  • Do not spend core AIF-C01 study time on algorithm implementation, feature engineering code, hyperparameter tuning labs, or deep mathematical derivations.
  • Do not ignore builder terms entirely; know what they mean and when to involve a specialist.

A strong AIF-C01 candidate is not shallow. The depth is in judgment, not implementation ownership. You should be able to sit between a business sponsor, an AWS builder, a security reviewer, and an end user, then ask enough precise questions to keep the solution aligned with AWS capabilities and organizational risk.

Test Your Knowledge

A product manager is asked to evaluate whether a proposed AI summarization feature should use AWS services and what risks need review. How does that task fit AIF-C01?


Which task is explicitly out of scope for the AIF-C01 target candidate in the source brief?


Which AWS knowledge area is recommended for AIF-C01 candidates?

D