
100+ Free AI-103 Practice Questions

Pass your Microsoft Certified: Azure AI App and Agent Developer Associate (AI-103) exam on the first try — instant access, no signup required.

✓ No registration ✓ No credit card ✓ No hidden fees ✓ Start practicing immediately
100+ Questions
100% Free

Key Facts: AI-103 Exam

  • Passing Score: 700/1000 (Microsoft)
  • Typical Questions: 40-60 (Microsoft)
  • Exam Duration: 100 min (Microsoft)
  • US Exam Fee: $165 (Microsoft)
  • Skills Areas: 6 domains (inferred from AI-103T00 + AI-102 lineage)
  • Free Renewal: Annual (Microsoft Learn)

AI-103 is the 2026 successor/companion to AI-102 for the Azure AI App and Agent Developer Associate credential. Expect roughly 40-60 questions in 100 minutes, a 700/1000 passing score, and a strong emphasis on Microsoft Foundry, the Foundry Agent Service, Microsoft Agent Framework, multi-agent orchestration, RAG, evaluation, content safety, and the modern Azure AI SDKs (azure-ai-projects, azure-ai-inference).

Sample AI-103 Practice Questions

Try these sample questions to test your AI-103 exam readiness. Each question includes a detailed explanation. Start the interactive quiz above for the full 100+ question experience with AI tutoring.

1. Which Azure resource is the top-level container in Microsoft Foundry that holds shared infrastructure (storage, key vault, container registry, AI service connections) for one or more AI projects?
A. Microsoft Foundry hub
B. Azure Cognitive Services account
C. Azure Resource Group
D. Azure Subscription
Explanation: A Microsoft Foundry hub is the top-level Azure resource that provides shared infrastructure (storage, key vault, container registry, and connections to AI services) for one or more Foundry projects. Projects are where teams actually build and deploy AI apps and agents.
2. You are building a generative AI app in a Foundry project. Which authentication mechanism is recommended for production code calling Azure OpenAI from Azure-hosted compute?
A. API key embedded in source
B. Microsoft Entra ID with managed identity and the Cognitive Services OpenAI User role
C. Anonymous access
D. Shared admin password
Explanation: For production, use Microsoft Entra ID with a managed identity assigned to the compute (App Service, Container Apps, Functions, AKS) and grant it the Cognitive Services OpenAI User data-plane role. This implements keyless, RBAC-based access and avoids embedding secrets.
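As a sketch of the RBAC step described above, the data-plane role can be granted to the compute's managed identity with the Azure CLI. The principal ID and resource path below are placeholders for your environment, not real values:

```
# Grant keyless, data-plane access to the Azure OpenAI resource.
# <principal-id>, <sub-id>, <rg>, and <account> are placeholders.
az role assignment create \
  --assignee <principal-id> \
  --role "Cognitive Services OpenAI User" \
  --scope /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.CognitiveServices/accounts/<account>
```

With this assignment in place, application code authenticates via the managed identity (for example, `DefaultAzureCredential` in the Azure SDKs) instead of an API key.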
3. Which Azure OpenAI model family is best suited for fast, low-cost summarization of short customer messages where reasoning depth is not required?
A. o3 reasoning models
B. GPT-4.1 mini or gpt-4o-mini
C. DALL-E 3
D. Whisper
Explanation: GPT-4.1 mini (or gpt-4o-mini) delivers low latency and low cost per token for routine generation tasks like short summarization, where deep multistep reasoning isn't required. The o-series reasoning models are slower and more expensive, and better suited for complex multi-step problems.
4. Which Azure OpenAI model class is optimized for complex multistep reasoning, math, code, and chain-of-thought workloads?
A. Embedding models (text-embedding-3-large)
B. o-series reasoning models (e.g., o1, o3)
C. Whisper
D. TTS
Explanation: Azure OpenAI o-series reasoning models (o1, o3 family) are designed for complex multistep reasoning, math, code, and STEM problems. They use additional inference-time compute and produce higher-quality reasoning at the cost of latency and price.
5. Which Foundry feature provides a low-code visual designer for building, debugging, and evaluating sequences of LLM calls, Python steps, and tool nodes?
A. Azure Logic Apps
B. Prompt flow
C. Azure Data Factory
D. Azure Bastion
Explanation: Foundry prompt flow lets you visually design, debug, and evaluate flows that combine LLM nodes, Python tools, and other connectors. It supports variants, batch evaluation, and tracing, and stores its definition in flow.dag.yaml for source control.
6. What does RAG (retrieval-augmented generation) primarily improve in a generative AI application?
A. Image generation latency
B. Grounding answers in current, organization-specific knowledge that the base model wasn't trained on
C. GPU utilization on training clusters
D. Database backup speed
Explanation: RAG retrieves relevant chunks from a knowledge store at query time and supplies them to the LLM as context. This grounds responses in current, organization-specific data that the model never saw during training, reducing hallucinations and enabling accurate domain answers without fine-tuning.
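The retrieve-then-ground loop can be sketched end to end without any Azure services. This toy example swaps a real embedding model (such as text-embedding-3) for a bag-of-words vector, and the document chunks are made up, but the query-time flow is the same: embed the query, rank chunks by similarity, and prepend the top chunk to the prompt as grounding context.

```python
from math import sqrt

# Toy "embedding": a bag-of-words count vector standing in for a real
# embedding model. Illustrative only.
def embed(text: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    for word in text.lower().split():
        word = word.strip(".,?!")
        if word:
            counts[word] = counts.get(word, 0) + 1
    return counts

def cosine(a: dict[str, int], b: dict[str, int]) -> float:
    dot = sum(a[w] * b.get(w, 0) for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Knowledge store: pre-chunked, pre-embedded documents (hypothetical content).
chunks = [
    "Refunds are processed within 5 business days of approval.",
    "Our headquarters relocated to Oslo in 2024.",
    "Premium support is available 24/7 via chat.",
]
index = [(c, embed(c)) for c in chunks]

def retrieve(query: str, k: int = 1) -> list[str]:
    qv = embed(query)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

# Query time: the retrieved chunk grounds the LLM prompt in data the
# base model never saw during training.
question = "How fast are refunds processed?"
context = retrieve(question)
prompt = f"Answer using only this context:\n{context[0]}\n\nQ: {question}"
```

In a production Foundry app, the in-memory list would be an Azure AI Search index and the prompt would go to an Azure OpenAI chat deployment.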
7. Which Azure service is the most common managed vector store used as the retrieval layer for Foundry RAG applications?
A. Azure SQL Database
B. Azure AI Search with vector profiles and HNSW indexes
C. Azure Files
D. Azure NetApp Files
Explanation: Azure AI Search is the first-class managed vector store for Foundry RAG. It supports HNSW vector indexes, hybrid (keyword + vector) search, semantic ranking, security trimming, and integrated vectorization, making it well-suited for production RAG.
8. You want Azure AI Search to automatically chunk documents and generate embeddings during indexing. Which feature should you enable?
A. Manual ingestion only
B. Integrated vectorization (skillsets with embedding skill and split skill)
C. Geo-replication
D. Full-text only
Explanation: Azure AI Search integrated vectorization configures a skillset that splits documents into chunks and calls an embedding model (e.g., Azure OpenAI text-embedding-3) to populate vector fields automatically during indexing. This eliminates the need to write custom ingestion code.
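The split step is easy to illustrate locally. The sketch below is a generic fixed-size chunker with overlap, the same idea the split skill applies before the embedding skill vectorizes each chunk; the sizes and sample document are illustrative, not the skill's defaults.

```python
def split_text(text: str, max_chars: int = 40, overlap: int = 10) -> list[str]:
    # Fixed-size chunking with overlap between consecutive chunks, so
    # sentences cut at a boundary still appear whole in one chunk.
    chunks: list[str] = []
    step = max_chars - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + max_chars]
        if chunk:
            chunks.append(chunk)
    return chunks

doc = ("Azure AI Search integrated vectorization chunks documents "
       "and embeds each chunk during indexing.")
chunks = split_text(doc)
```

In the managed pipeline, each resulting chunk is then passed to the embedding skill to populate the index's vector fields.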
9. Which retrieval strategy in Azure AI Search combines BM25 keyword search and vector similarity, then re-ranks the merged results using a Microsoft cross-encoder model?
A. Pure keyword
B. Hybrid search with semantic ranker
C. Geo-spatial search
D. Faceted search
Explanation: Hybrid search runs BM25 and vector queries in parallel and fuses results. When semantic ranking is enabled, the top results are re-ordered by a Microsoft cross-encoder model. This combination typically outperforms keyword-only or vector-only retrieval for RAG.
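The fusion step uses Reciprocal Rank Fusion (RRF): each document earns 1/(k + rank) from every result list it appears in, and the sums decide the merged order. A minimal sketch with made-up document IDs:

```python
# Reciprocal Rank Fusion (RRF), the algorithm Azure AI Search uses to
# merge the BM25 and vector result lists in a hybrid query.
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            # A document appearing high in either list accumulates score.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical top-3 results from each retrieval mode.
bm25 = ["doc_a", "doc_b", "doc_c"]
vector = ["doc_b", "doc_d", "doc_a"]
fused = rrf([bm25, vector])
```

Here doc_b ranks first because it scores well in both lists. When semantic ranking is enabled, the cross-encoder then re-orders the top fused results.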
10. Which Azure OpenAI capability lets a model decide to invoke a developer-defined function (tool) and have the host app execute it before returning a final answer?
A. Function calling (tool calling)
B. Embeddings
C. Whisper
D. Content filters
Explanation: Function calling (also called tool calling) lets you describe one or more functions to the model. The model can then return a structured request to invoke a function with arguments. The host app executes the function and returns the result so the model can produce the final answer.
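The round trip can be sketched without a live service call. The tool schema below follows the Chat Completions "tools" format; the function name and the model's tool-call response are simulated (hypothetical), but the host-side dispatch pattern is the real one:

```python
import json

# Tool schema in the Chat Completions "tools" format shown to the model.
tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",  # hypothetical business function
        "description": "Look up the shipping status of an order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

# The host app owns the real implementation; the model never executes it.
def get_order_status(order_id: str) -> dict:
    return {"order_id": order_id, "status": "shipped"}

# Simulated model turn: instead of text, the model returns a structured
# request naming a function plus JSON-encoded arguments.
simulated_tool_call = {"name": "get_order_status",
                       "arguments": json.dumps({"order_id": "A-123"})}

# Host-side dispatch: execute the requested function, then hand the
# result back as a "tool" message so the model can write the final answer.
registry = {"get_order_status": get_order_status}
args = json.loads(simulated_tool_call["arguments"])
result = registry[simulated_tool_call["name"]](**args)
tool_message = {"role": "tool", "content": json.dumps(result)}
```

With a real deployment, the simulated turn is replaced by the tool calls the chat completions API returns on the assistant message.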

About the AI-103 Exam

The AI-103 exam validates the skills needed to plan, build, deploy, and operate generative AI apps and production-ready agents on Microsoft Foundry, integrating Azure OpenAI, Azure AI Search, Document Intelligence, Speech, Language, Content Safety, and responsible AI controls.

Questions

About 50 scored questions (typically 40-60)

Time Limit

100 minutes

Passing Score

700/1000

Exam Fee

$165 USD (Microsoft / Pearson VUE)

AI-103 Exam Content Outline

20-25%

Plan and Manage an Azure AI Solution

Provision Foundry hubs and projects, choose models and deployment options (standard, PTU, managed compute), configure managed identities and RBAC, secure with private endpoints, configure content safety filters, and instrument with Azure Monitor and tracing.

20-25%

Implement Generative AI Solutions

Build Foundry projects and prompt flows, use Azure OpenAI chat completions (function calling, structured outputs, streaming, multimodal), implement RAG, evaluate quality with built-in and custom evaluators, and operationalize generative deployments.

10-15%

Implement an Agentic Solution

Build agents with Microsoft Foundry Agent Service and the Microsoft Agent Framework, configure built-in tools (Azure AI Search, file search, code interpreter, Bing/web grounding, OpenAPI), orchestrate multi-agent workflows, and protect agents from prompt injection.

10-15%

Implement Computer Vision Solutions

Use Azure AI Vision for OCR (Read API), image and object analysis, custom Image Analysis models, spatial analysis, video indexing with Azure AI Video Indexer, and (under Limited Access) Azure AI Face for recognition scenarios.

15-20%

Implement Natural Language Processing Solutions

Build with Azure AI Language (CLU, Question Answering, PII detection, summarization, key phrase extraction, sentiment), Translator and Custom Translator, Azure AI Speech (speech-to-text, text-to-speech with SSML, custom speech, custom neural voice, Voice Live).

15-20%

Implement Knowledge Mining and Information Extraction Solutions

Build Azure AI Search indexes with vector profiles, integrated vectorization, hybrid search, semantic ranking, and security trimming. Use Azure AI Document Intelligence prebuilt and custom models, and Azure AI Content Understanding for multi-modal extraction.

How to Pass the AI-103 Exam

What You Need to Know

  • Passing score: 700/1000
  • Exam length: about 50 questions
  • Time limit: 100 minutes
  • Exam fee: $165 USD

Keys to Passing

  • Complete 500+ practice questions
  • Score 80%+ consistently before scheduling
  • Focus on highest-weighted sections
  • Use our AI tutor for tough concepts

AI-103 Study Tips from Top Performers

1. Build at least one end-to-end Foundry agent that uses Azure OpenAI gpt-4o or gpt-4.1, Azure AI Search file search, function calling, and a content filter so the studio UX, agent definitions, and tools become muscle memory.
2. Master both the unified inference SDK (azure-ai-inference) for chat/embeddings and the project SDK (azure-ai-projects) for agents, deployments, and evaluations.
3. Memorize when to use OCR (Read), Document Intelligence prebuilt vs custom, Content Understanding, and Azure AI Search; the exam will test choosing the right service per scenario.
4. Get fluent with hybrid search + semantic ranker in Azure AI Search and with Azure OpenAI On Your Data, including how citations are returned.
5. Practice configuring content filters per deployment, enabling Prompt Shields and protected material detection, and integrating risk and safety evaluators in evaluations.
6. Understand the Foundry Agent Service primitives: agents, threads, runs, tools (file search, code interpreter, Azure AI Search, web grounding, OpenAPI), and how Microsoft Agent Framework orchestrates multi-agent workflows.

Frequently Asked Questions

What is the AI-103 exam?

AI-103 is the Microsoft exam for the Azure AI App and Agent Developer Associate credential introduced in 2026. It validates the skills needed to plan, build, deploy, and operate generative AI apps and production-ready agents on Microsoft Foundry, integrating Azure OpenAI, Azure AI Search, Document Intelligence, Speech, Language, and Content Safety with responsible AI practices.

How is AI-103 different from AI-102?

AI-103 supersedes/supplements AI-102 (which retires June 30, 2026) and is purpose-built for the agentic and Foundry-centric AI engineering work that has become the dominant pattern. Compared to AI-102, AI-103 places much heavier weight on the Foundry Agent Service, Microsoft Agent Framework, multi-agent orchestration, modern Azure AI SDKs (azure-ai-projects, azure-ai-inference), and Foundry evaluations and observability.

How many questions are on AI-103 and how long do you get?

Microsoft role-based associate exams typically deliver about 40-60 questions. For AI-103, plan for a 100-minute exam duration and a 700 out of 1000 passing score, with possible interactive (case-study or scenario) item types as Microsoft typically uses on associate exams.

Is AI-103 still in beta?

AI-103 is the 2026 generation of the Azure AI Engineer associate certification. The associated AI-103T00-A training course is published on Microsoft Learn, and the exam was scheduled to enter beta in April 2026 with general availability targeted for June 2026. Always check the official Microsoft Learn page for the latest beta or live status before scheduling.

How long should I study for AI-103?

Plan for about 80-140 hours over 6-10 weeks depending on prior Azure AI experience. Effective preparation includes building at least one production-style Foundry agent that uses Azure OpenAI, Azure AI Search, function calling, and content safety, with evaluations and tracing enabled, plus 100+ practice questions.

Does AI-103 certification expire?

Yes. Microsoft associate certifications expire 12 months after you earn them. You can renew at no cost by passing a free online renewal assessment on Microsoft Learn before the expiration date.