6.2 Azure Container Instances and Container Apps
Key Takeaways
- Azure Container Instances is best for simple, fast, per-container execution without orchestrator management.
- Azure Container Apps is better for microservices, revisions, ingress, autoscaling, and event-driven container workloads.
- Both services can pull from ACR, but identity, registry credentials, and network reachability must be configured correctly.
- ACI container groups share lifecycle, network identity, and resource allocation boundaries.
- AZ-104 questions often hinge on whether the workload needs simple execution or platform features such as scaling and revisions.
Two serverless container choices
Azure Container Instances, or ACI, runs containers directly in Azure with minimal platform decisions. You define an image, CPU, memory, ports, environment variables, restart policy, and optional volume mounts. ACI is useful for batch jobs, simple APIs, scheduled administrative tasks, test containers, and troubleshooting tools that do not need a full orchestrator.
Azure Container Apps is a higher-level application platform for containerized services. It adds managed environments, revisions, ingress, traffic splitting, scale rules, secrets, identities, and integration patterns suited to microservices and event-driven apps. It is not AKS: you do not manage Kubernetes nodes, but the service provides more application-level behavior than ACI.
| Requirement | Better answer | Reason |
|---|---|---|
| Run a single container quickly for a short job | Azure Container Instances | Fast deployment and simple lifecycle. |
| Run a multi-container group with shared localhost and shared lifecycle | Azure Container Instances | ACI container groups package containers together. |
| Expose a containerized API with revisions and traffic splitting | Azure Container Apps | Revisions and ingress are built into the app model. |
| Scale to zero based on HTTP or event demand | Azure Container Apps | Container Apps supports scale rules and minimum replica settings. |
| Need full Kubernetes control plane and node pools | AKS | Neither ACI nor Container Apps is the right answer. |
Portal path for ACI: Azure portal > Container instances > Create. Portal path for Container Apps: Azure portal > Container Apps > Create, usually selecting or creating a Container Apps environment. For the exam, know that the environment is a boundary for networking and shared configuration around Container Apps.
Deploying Azure Container Instances
An ACI deployment can be created from the portal, Azure CLI, ARM, or Bicep. The minimum administrator inputs are resource group, name, image, OS type, CPU, memory, and networking exposure. If the image is private, provide registry credentials or configure identity-based access where supported.
az container create \
--resource-group rg-compute \
--name aci-report-worker \
--image examacr104.azurecr.io/jobs/report:v1 \
--registry-login-server examacr104.azurecr.io \
--registry-username <registry-user> \
--registry-password <registry-password> \
--cpu 1 \
--memory 2 \
--restart-policy OnFailure
Restart policy is exam-relevant. Always is common for services that should keep running. OnFailure is useful for jobs that should retry when the process exits unsuccessfully. Never is useful when the container should run once and stop even if it fails. If a question describes a one-time task that keeps restarting after completion, suspect the restart policy.
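As a sketch of the run-once scenario (the cleanup image name is hypothetical), Never prevents the completed task from being restarted, and the terminal state can be confirmed afterward:

```shell
# One-time task: run once and stop, even if the process fails.
# --restart-policy Never avoids the "completed job keeps restarting" symptom.
az container create \
  --resource-group rg-compute \
  --name aci-onetime-task \
  --image examacr104.azurecr.io/jobs/cleanup:v1 \
  --cpu 1 \
  --memory 1 \
  --restart-policy Never

# Check the current state of the first container in the group.
az container show \
  --resource-group rg-compute \
  --name aci-onetime-task \
  --query containers[0].instanceView.currentState.state
```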
ACI supports container groups. Containers in the same group share a host, a network namespace, and a lifecycle, and they can communicate over localhost. That pattern is useful for a main app plus a sidecar helper. It is not the same as independent scaling of each container. Because the restart policy is set at the group level, a failing container in a tightly coupled group affects the whole group's lifecycle.
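The single-image form of az container create cannot define multiple containers; multi-container groups are typically deployed from a YAML definition. A minimal sketch, assuming a hypothetical web app plus log-forwarder sidecar:

```shell
# Write a two-container group definition, then deploy it with --file.
cat > group.yaml <<'EOF'
apiVersion: '2021-10-01'
location: eastus
name: aci-app-with-sidecar
type: Microsoft.ContainerInstance/containerGroups
properties:
  osType: Linux
  restartPolicy: Always
  containers:
  - name: app
    properties:
      image: examacr104.azurecr.io/apps/web:v1
      ports:
      - port: 80
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
  - name: log-forwarder
    properties:
      image: examacr104.azurecr.io/tools/logfwd:v1
      resources:
        requests:
          cpu: 0.5
          memoryInGB: 0.5
  ipAddress:
    type: Public
    ports:
    - protocol: TCP
      port: 80
EOF

az container create --resource-group rg-compute --file group.yaml
```

Both containers share the group's IP and can reach each other on localhost; the CPU and memory requests count against the group's total allocation.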
Common ACI checks:
az container show -g rg-compute -n aci-report-worker --query instanceView.state
az container logs -g rg-compute -n aci-report-worker
az container attach -g rg-compute -n aci-report-worker
Deploying Azure Container Apps
Container Apps uses a managed environment. The app definition includes container image, resources, ingress, secrets, environment variables, registry authentication, and scale settings. The service creates revisions when revision-scope properties change, such as the container image. You can run single revision mode for simple deployments or multiple revision mode when you want traffic splitting and gradual rollout.
az containerapp env create \
--name cae-prod \
--resource-group rg-compute \
--location eastus
az containerapp create \
--name orders-api \
--resource-group rg-compute \
--environment cae-prod \
--image examacr104.azurecr.io/apps/orders:v1 \
--target-port 8080 \
--ingress external \
--min-replicas 1 \
--max-replicas 5
Ingress can be external or internal. External ingress exposes the app publicly through the managed endpoint. Internal ingress restricts access to callers inside the Container Apps environment or its connected virtual network. If the scenario requires no public access, external ingress is the wrong choice even if NSG language appears elsewhere in the question.
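As a sketch, switching an existing app from public to internal ingress and confirming the resulting endpoint (flag names follow current az containerapp syntax; verify against your CLI version):

```shell
# Change the ingress mode so the app is reachable only inside the
# environment or its connected network.
az containerapp ingress update \
  --name orders-api \
  --resource-group rg-compute \
  --type internal

# Inspect the FQDN; an internal app's name resolves only inside the
# environment's network design.
az containerapp show \
  --name orders-api \
  --resource-group rg-compute \
  --query properties.configuration.ingress.fqdn
```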
Container Apps scaling is based on replicas. Minimum replicas control whether the app can scale to zero. Maximum replicas cap scale-out. HTTP concurrency and event sources can drive scale decisions. If the question says the app must process queue messages only when messages exist and cost should be minimized when idle, Container Apps is more likely than ACI.
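The queue-driven, scale-to-zero pattern can be sketched with a scale rule; the storage account, queue name, and connection secret below are assumptions, not values from the scenario:

```shell
# Queue processor that scales to zero when the queue is empty.
# min-replicas 0 enables scale to zero; the azure-queue rule drives scale-out.
az containerapp create \
  --name queue-worker \
  --resource-group rg-compute \
  --environment cae-prod \
  --image examacr104.azurecr.io/jobs/queue-worker:v1 \
  --min-replicas 0 \
  --max-replicas 10 \
  --secrets queue-connection=<storage-connection-string> \
  --scale-rule-name queue-depth \
  --scale-rule-type azure-queue \
  --scale-rule-metadata queueName=orders queueLength=20 accountName=examstorage \
  --scale-rule-auth connection=queue-connection
```

The scale rule's auth entry maps the trigger's connection parameter to the app secret, so the connection string never appears in plain configuration.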
Secrets, environment variables, and registry settings
Do not hard-code database passwords in container images or plain environment variables. Container Apps has a secrets feature, and environment variables can reference secrets. ACI also supports secure environment variables in deployment definitions. The administrator should treat image configuration, secrets, and runtime scale settings as deploy-time platform settings, not as values baked into the Dockerfile.
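A minimal sketch of both approaches, assuming a hypothetical db-password secret (replace the placeholder values before running):

```shell
# Container Apps: store the secret, then reference it from an env var.
az containerapp secret set \
  --name orders-api \
  --resource-group rg-compute \
  --secrets db-password=<database-password>

az containerapp update \
  --name orders-api \
  --resource-group rg-compute \
  --set-env-vars DB_PASSWORD=secretref:db-password

# ACI: secure environment variables are not shown in "az container show"
# output or the portal properties, unlike plain --environment-variables.
az container create \
  --resource-group rg-compute \
  --name aci-report-worker \
  --image examacr104.azurecr.io/jobs/report:v1 \
  --secure-environment-variables DB_PASSWORD=<database-password>
```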
For private ACR pulls, choose managed identity where possible. Assign the identity AcrPull, then configure the app to use that identity for the registry. If a question describes rotating registry passwords as an operational burden, identity-based pull is the stronger answer.
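A sketch of identity-based pull with a user-assigned identity (the identity name id-orders-pull is an assumption; flag support varies by CLI version):

```shell
# Create a user-assigned identity and grant it AcrPull on the registry.
az identity create --name id-orders-pull --resource-group rg-compute
IDENTITY_ID=$(az identity show --name id-orders-pull \
  --resource-group rg-compute --query id --output tsv)
PRINCIPAL_ID=$(az identity show --name id-orders-pull \
  --resource-group rg-compute --query principalId --output tsv)
ACR_ID=$(az acr show --name examacr104 --query id --output tsv)

az role assignment create \
  --assignee $PRINCIPAL_ID \
  --role AcrPull \
  --scope $ACR_ID

# Deploy the app, pulling with the identity instead of a username/password.
az containerapp create \
  --name orders-api \
  --resource-group rg-compute \
  --environment cae-prod \
  --image examacr104.azurecr.io/apps/orders:v1 \
  --user-assigned $IDENTITY_ID \
  --registry-server examacr104.azurecr.io \
  --registry-identity $IDENTITY_ID
```

Because no registry password exists in the app configuration, there is nothing to rotate, which addresses the operational-burden scenario directly.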
Troubleshooting and exam traps
ACI is simpler, but simpler also means fewer application platform features. It does not provide Container Apps revisions or traffic splitting. Container Apps provides those features, but it requires an environment and more configuration. Do not pick Container Apps just because the word container appears; match the workload behavior.
When a container starts and immediately stops, inspect logs and restart policy. When a container cannot pull the image, inspect ACR permissions, registry name, image tag, and network restrictions. When a Container App is reachable internally but not from the internet, check ingress mode. When a new image is pushed but the app still runs old code, update the app image so a new revision is created.
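For the stale-image case above, a sketch of the missing step: updating the app's image is what creates the new revision, and the revision list confirms where traffic flows:

```shell
# Pushing v2 to ACR alone does nothing; updating the app image creates
# a new revision that actually runs the new code.
az containerapp update \
  --name orders-api \
  --resource-group rg-compute \
  --image examacr104.azurecr.io/apps/orders:v2

# Confirm which revisions exist, which are active, and their traffic share.
az containerapp revision list \
  --name orders-api \
  --resource-group rg-compute \
  --output table
```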
A common case study pattern gives three workloads: a nightly import job, an event-driven API, and a custom container web application. ACI is a good fit for the import job. Container Apps is a good fit for the event-driven API with scale rules. App Service for Containers may be a good fit for the custom web app when the team wants familiar App Service features such as deployment slots, custom domains, certificates, backups, and plan-based scaling.
Review questions
- A containerized queue processor should scale to zero when no messages exist and scale out when work arrives. Which service best matches this requirement?
- Several containers must share localhost communication and the same lifecycle in a simple serverless container deployment. Which ACI concept applies?
- A new image tag is pushed to ACR, but a Container App continues serving the old version. What is the likely missing step?