9.1 Shared Responsibility, IAM, and Least Privilege for AI

Key Takeaways

  • The AWS shared responsibility model still applies to AI solutions: AWS secures the cloud infrastructure, while customers govern identities, data, prompts, application behavior, and allowed use.
  • Least privilege for AI includes human users, application roles, service roles, model access, data stores, action tools, logging destinations, and administrative workflows.
  • IAM policies, resource policies, permissions boundaries, Service Control Policies, and approval processes can prevent broad model, data, or tool access before an application reaches production.
  • Practitioners should ask who can invoke a model, which data can be included, which actions can be triggered, and how risky outputs are reviewed.
  • Managed AI services reduce operational burden, but they do not remove customer responsibility for governance, privacy, and business risk.

Last updated: May 2026

Shared responsibility for AI workloads

The shared responsibility model is a core AWS concept, and it becomes more important when an AI solution touches sensitive data or business decisions. AWS is responsible for security of the cloud, such as the facilities, physical infrastructure, managed service infrastructure, and foundational service operations. The customer is responsible for security in the cloud, including identities, permissions, data classification, application design, prompts, outputs, and governance choices.

For a managed service such as Amazon Bedrock, AWS removes the need to manage model-serving infrastructure, but the organization still decides who may use a model, what data may be sent, which Regions are approved, and how outputs are reviewed. For a broader ML platform such as Amazon SageMaker AI, the customer may have more control over notebooks, training jobs, model artifacts, endpoints, and data pipelines, which also means more configuration decisions to govern.

Shared responsibility by area:

  • Managed service infrastructure — AWS: operates and protects the underlying AWS service infrastructure. Customer: selects the service and configures access, Regions, logging, and data use.
  • Identity and access — AWS: provides IAM, AWS Organizations, and service integration points. Customer: grants least privilege to users, applications, and service roles.
  • Data — AWS: provides storage, encryption features, and security services. Customer: classifies data, limits prompt content, controls retention, and protects datasets.
  • Model use — AWS: provides model access paths and service controls. Customer: chooses approved models, evaluates outputs, and defines acceptable use.
  • Application actions — AWS: provides services such as Lambda, API Gateway, and Bedrock Agents. Customer: restricts tools, validates inputs, and requires approval for risky actions.
  • Monitoring — AWS: provides services such as CloudTrail, CloudWatch, and AWS Config. Customer: enables appropriate logs, reviews findings, and responds to incidents.

A practitioner should treat every AI workflow as a chain of permissions. A user or application invokes a model. The model may retrieve data from Amazon S3, a knowledge base, or another source. An agent may call an action group backed by AWS Lambda or an API. Logs may be written to CloudWatch Logs or S3. Each link needs a clear owner and an access decision. Broad access at any link can turn a small assistant into a large data exposure.

Least privilege means granting only the access needed for a defined task. A customer support assistant that answers shipping policy questions does not need permission to read payroll documents. A summarization service does not need administrator access to the AWS account. A Bedrock agent that creates support tickets should not have permission to cancel orders unless that action is part of the approved workflow and has confirmation controls.

IAM policy design is usually owned by builders and administrators, but practitioners should understand the decision pattern. Human users may need console permissions for experimentation in a sandbox. Runtime applications need roles that can invoke only approved models and read only approved data sources. Service roles for SageMaker AI, Lambda, or Bedrock Agents should be scoped to the resources they actually use. Administrative permissions should be separated from day-to-day use.
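To make the runtime-role pattern concrete, here is a minimal sketch of a least-privilege policy document for an application role that can invoke only one approved Bedrock model and read only one approved data prefix. The model ID, Region, account, and bucket names are hypothetical placeholders, not values from this document.

```python
import json

# Sketch of a least-privilege policy for a runtime application role.
# The model ARN and S3 bucket/prefix below are illustrative assumptions.
runtime_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InvokeApprovedModelOnly",
            "Effect": "Allow",
            "Action": "bedrock:InvokeModel",
            # One approved foundation model, not bedrock:* on Resource "*".
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
        },
        {
            "Sid": "ReadApprovedDataOnly",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            # Only the prefix this assistant actually needs.
            "Resource": "arn:aws:s3:::example-assistant-data/shipping-policies/*",
        },
    ],
}

print(json.dumps(runtime_policy, indent=2))
```

Note what is absent: no `s3:*`, no wildcard model resource, and no administrative actions. Widening any statement is a governance decision, not a convenience fix.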

AWS Organizations and Service Control Policies can add guardrails above individual accounts. For example, a company might restrict unapproved Regions, prevent use of certain services in production accounts, or block broad administrative permissions except for a security team. These controls are not a substitute for IAM policies inside the account, but they help keep experiments from drifting into unsupported production behavior.
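As one sketch of such a guardrail, the following Service Control Policy denies requests outside an approved Region list while exempting global services. The Region list and exempted service prefixes are illustrative assumptions; real SCPs should follow the organization's own approved list.

```python
import json

# Sketch of an SCP that blocks activity outside approved Regions.
# Region values and the NotAction exemptions (global services) are examples.
region_guardrail_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            # Exempt global services that are not Region-scoped.
            "NotAction": ["iam:*", "organizations:*", "sts:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
                }
            },
        }
    ],
}

print(json.dumps(region_guardrail_scp, indent=2))
```

Because SCPs only filter permissions, the IAM policies inside each account still decide what is actually allowed; the SCP just prevents drift past the organizational boundary.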

Model access is also an access decision. A user who can open the AWS console may still be blocked from invoking a specific model if IAM, model access requirements, AWS Marketplace permissions, Region availability, or organizational controls do not allow it. That is useful in governance scenarios. The correct answer is not always to give broader access; it may be to create an approved model catalog, a sandbox account, or a request process.

The same logic applies to data. A team may build a RAG assistant with Amazon Bedrock Knowledge Bases, but indexing a document does not mean every employee should retrieve it. Access boundaries can be enforced through source separation, application authorization, metadata filters, IAM, or separate knowledge bases. The practitioner question is simple: does the assistant preserve the same data boundaries the organization already expects?
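A metadata filter is one way to enforce such a boundary at retrieval time. The sketch below builds a Retrieve request for the Bedrock Agents runtime API that only returns chunks tagged for the caller's department; the knowledge base ID, metadata key, and values are hypothetical, and the request shape assumes the documented `retrievalConfiguration` filter structure.

```python
def build_retrieve_request(kb_id: str, query: str, department: str) -> dict:
    """Build a Knowledge Base Retrieve request scoped by a metadata filter.

    Assumes source documents were ingested with a 'department' metadata
    attribute; only chunks matching the caller's department are returned.
    """
    return {
        "knowledgeBaseId": kb_id,
        "retrievalQuery": {"text": query},
        "retrievalConfiguration": {
            "vectorSearchConfiguration": {
                "numberOfResults": 5,
                "filter": {
                    "equals": {"key": "department", "value": department}
                },
            }
        },
    }

# A boto3 call would then look like:
#   client = boto3.client("bedrock-agent-runtime")
#   client.retrieve(**build_retrieve_request("KB123EXAMPLE", "Shipping policy?", "support"))
request = build_retrieve_request("KB123EXAMPLE", "What is the shipping policy?", "support")
print(request["retrievalConfiguration"]["vectorSearchConfiguration"]["filter"])
```

The key point is that the filter value comes from the application's own authorization of the caller, not from anything the user types into the prompt.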

Least privilege review checklist:

  • Identify every principal: end users, administrators, runtime roles, service roles, and automation roles.
  • Identify every resource: models, S3 buckets, knowledge bases, vector stores, Lambda functions, APIs, logs, and keys.
  • Separate sandbox exploration from production usage.
  • Limit model invocation to approved models and Regions where policy requires it.
  • Restrict data access by business need, not by convenience.
  • Require confirmation or human review for financial, legal, safety, or customer-visible actions.
  • Review permissions on a cadence because AI use cases often expand after a pilot.

Scenario judgment matters. If a marketing team wants an AI copy generator for public campaign drafts, the IAM risk may be modest, but brand and review risk still matter. If a legal team wants contract summarization, sensitive data and retention controls matter more. If an IT team wants an agent to reset access, least privilege and approval workflow become central. The same AWS concepts apply, but the risk level changes with the use case.

For AWS Skill Builder or console practice, focus on reading access patterns rather than writing complex policies from memory. Notice which principal is calling the service, which resource is touched, and which log would show the action. That practitioner habit helps you spot whether a proposed AI solution has a reasonable ownership model.
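The "which log would show the action" habit can be practiced with CloudTrail. The sketch below assembles the parameters for a CloudTrail `LookupEvents` call that surfaces recent `InvokeModel` management events; CloudTrail must be enabled in the account, and the event name and result count are assumptions for illustration.

```python
# Sketch of CloudTrail lookup parameters for auditing model invocations.
# With boto3 this would be passed as:
#   boto3.client("cloudtrail").lookup_events(**lookup_params)
lookup_params = {
    "LookupAttributes": [
        {"AttributeKey": "EventName", "AttributeValue": "InvokeModel"}
    ],
    "MaxResults": 10,
}

print(lookup_params)
```

Each returned event identifies the calling principal, which closes the loop on the practitioner questions above: who called, what was touched, and where the evidence lives.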

Test Your Knowledge

A team uses Amazon Bedrock for a managed internal assistant. Which responsibility still belongs to the customer?

Test Your Knowledge

A Bedrock agent creates support tickets through a Lambda function. What is the best least privilege approach?

Test Your Knowledge

A practitioner reviews a proposed RAG assistant that indexes HR, legal, and product documents. What question should be asked early?
