4.7 Storage Case Lab

Key Takeaways

  • Case questions require ordering storage decisions across account design, network access, identity, data protection, and operations.
  • Least privilege in storage usually means combining RBAC, SAS scoping, firewall rules, private endpoints, and short-lived secrets.
  • Recovery requirements must be translated into specific features such as versioning, soft delete, snapshots, backup, redundancy, or account failover.
  • Troubleshooting storage failures starts by classifying the symptom as DNS, network, authentication, authorization, feature compatibility, or data state.
  • CLI and portal fluency matters because AZ-104 questions often describe implementation tasks rather than definitions.
Last updated: May 2026

Case lab: Contoso storage modernization

Contoso is moving several storage workloads to Azure. The application team needs blob storage for customer documents. The finance team needs a secure file share for monthly reports. A vendor needs temporary upload access for migration files. Security requires private access where practical, no anonymous blob access, customer-managed keys for production data, and recovery from accidental deletion or overwrite. Operations wants repeatable CLI commands and a troubleshooting checklist.

Start with workload separation. Put production customer documents in a dedicated StorageV2 account because the data has specific encryption, network, lifecycle, and recovery requirements. Put the finance Azure Files share in a storage account designed for file shares: premium (FileStorage) if performance demands it, standard if ordinary departmental documents are enough. Keep migration landing data in its own account because vendor access is temporary and riskier. Separate accounts make firewall rules, key rotation, private endpoints, cost reporting, and deletion policies easier to reason about.

| Workload | Account or service choice | Access model | Protection model |
| --- | --- | --- | --- |
| Customer documents | StorageV2 blob account | RBAC and private endpoint | GZRS if regional recovery needed, versioning, soft delete, CMK |
| Finance reports | Azure Files share | Identity-based SMB | Snapshots or Azure Backup, NTFS ACLs, private endpoint |
| Vendor uploads | Landing container | Short-lived SAS | Soft delete, limited lifecycle retention |
| Archive exports | Blob container | RBAC for admins | Lifecycle to cool, cold, or archive |

Create the production blob account with secure defaults. Disable public blob access, require TLS 1.2 or later, choose redundancy based on the recovery requirement, and configure customer-managed keys through Key Vault if required. If reads from the paired secondary region must remain available during a regional outage, use RA-GRS or RA-GZRS, depending on whether zone redundancy is also required. If the requirement only says survive a zone failure in the primary region, ZRS may be enough and is cheaper than geo-redundancy.

Example CLI skeleton:

# Resource group for the Contoso storage resources
az group create --name rg-contoso-storage --location eastus

# Production blob account: geo-zone-redundant, TLS 1.2 minimum,
# no anonymous blob access, firewall default deny (allow networks later)
az storage account create \
  --name stcontosodocs01 \
  --resource-group rg-contoso-storage \
  --location eastus \
  --kind StorageV2 \
  --sku Standard_GZRS \
  --min-tls-version TLS1_2 \
  --allow-blob-public-access false \
  --default-action Deny

# Create the container with Microsoft Entra credentials, not the account key
az storage container create \
  --account-name stcontosodocs01 \
  --name customer-docs \
  --auth-mode login

Then add data protection. Enable blob soft delete for accidental deletion, and container soft delete if container deletion is a realistic risk. Enable versioning for overwrite recovery. Add lifecycle rules to move older documents to cooler tiers as access drops, but never delete data before the retention requirement expires. If compliance says data cannot be modified or deleted for a fixed period, add an immutable storage policy; soft delete alone does not satisfy that requirement.
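As a sketch, assuming the account created above, these protections can be enabled from the CLI; the retention periods are illustrative and should match your actual requirement:

# Blob soft delete (14 days), container soft delete (7 days), and versioning
az storage account blob-service-properties update \
  --account-name stcontosodocs01 \
  --resource-group rg-contoso-storage \
  --enable-delete-retention true \
  --delete-retention-days 14 \
  --enable-container-delete-retention true \
  --container-delete-retention-days 7 \
  --enable-versioning true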

Network design comes next. For application subnets in Azure, choose private endpoints if the security requirement says private IP access or public network access must be disabled. Create private endpoints for the blob subresource, and for dfs as well if Data Lake Storage Gen2 style access is used by tools. Link the proper private DNS zones to VNets. For Azure Files, create a private endpoint for the file subresource, not blob. If using service endpoints instead, remember the endpoint remains public and the storage firewall authorizes the subnet.
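A private endpoint sketch for the blob subresource, assuming a VNet named vnet-app with subnet snet-app already exists (all resource names here are placeholders):

# Look up the storage account resource ID
ACCOUNT_ID=$(az storage account show \
  --name stcontosodocs01 \
  --resource-group rg-contoso-storage \
  --query id --output tsv)

# Private endpoint for the blob subresource (use file for Azure Files, dfs for ADLS Gen2)
az network private-endpoint create \
  --name pe-contosodocs-blob \
  --resource-group rg-contoso-storage \
  --vnet-name vnet-app \
  --subnet snet-app \
  --private-connection-resource-id "$ACCOUNT_ID" \
  --group-id blob \
  --connection-name pe-conn-blob

# Private DNS zone for blob, linked to the VNet, plus the DNS zone group
az network private-dns zone create \
  --resource-group rg-contoso-storage \
  --name privatelink.blob.core.windows.net

az network private-dns link vnet create \
  --resource-group rg-contoso-storage \
  --zone-name privatelink.blob.core.windows.net \
  --name link-vnet-app \
  --virtual-network vnet-app \
  --registration-enabled false

az network private-endpoint dns-zone-group create \
  --resource-group rg-contoso-storage \
  --endpoint-name pe-contosodocs-blob \
  --name default \
  --private-dns-zone privatelink.blob.core.windows.net \
  --zone-name blob

Without the DNS zone group, the account name keeps resolving to the public endpoint and clients never reach the private IP.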

Access-control choice table:

| Requirement | Do this | Avoid this |
| --- | --- | --- |
| App runs in Azure and needs ongoing blob access | Managed identity plus Storage Blob Data Contributor at container scope | Account key in app settings |
| Vendor needs upload for two days | Short-lived SAS with create/write, HTTPS-only, optional IP range | Subscription Contributor or account key |
| Finance users need a mapped drive | Identity-based Azure Files, share-level RBAC, NTFS ACLs | Shared account key distributed to users |
| Break-glass storage administration | Privileged role through a just-in-time process | Permanent broad Owner for many users |
| Revoke many vendor SAS tokens early | Service SAS tied to a stored access policy where supported | Long-lived unmanaged SAS tokens |
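For the vendor scenario, a user delegation SAS sketch; the container name vendor-uploads is illustrative, and the expiry calculation assumes GNU date:

# Two days of create/write access to one container, signed with Entra
# credentials (--as-user) rather than the account key, so it also works
# when shared key access is disallowed on the account
EXPIRY=$(date -u -d "+2 days" +%Y-%m-%dT%H:%MZ)

az storage container generate-sas \
  --account-name stcontosodocs01 \
  --name vendor-uploads \
  --permissions cw \
  --expiry "$EXPIRY" \
  --https-only \
  --auth-mode login \
  --as-user \
  --output tsv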

Portal implementation path:

  • Create the storage account from Storage accounts > Create.
  • Under Networking, choose selected networks, or disable public network access and add private endpoint connections.
  • Under Encryption, configure customer-managed keys.
  • Under Data protection, enable soft delete and versioning.
  • Under Data management, configure Lifecycle management rules.
  • Assign data-plane RBAC through Access Control (IAM) at account, container, or share scope.
  • For Azure Files identity, configure the directory service integration, then manage NTFS ACLs from a mounted client.
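The same data-plane role assignment can be scripted; the principal ID and subscription ID below are placeholders for your environment:

# Assumption: $APP_PRINCIPAL_ID holds the object ID of the app's managed identity.
# Scope the role to one container rather than the whole account.
SCOPE="/subscriptions/<sub-id>/resourceGroups/rg-contoso-storage/providers/Microsoft.Storage/storageAccounts/stcontosodocs01/blobServices/default/containers/customer-docs"

az role assignment create \
  --assignee "$APP_PRINCIPAL_ID" \
  --role "Storage Blob Data Contributor" \
  --scope "$SCOPE"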

Troubleshooting case 1: the application receives 403 from a VM after the firewall is enabled. Check whether the VM subnet is allowed through service endpoint rules or whether DNS resolves to a private endpoint. Confirm the managed identity still has the right data-plane role. A 403 can mean network denial or authorization denial, so inspect the error details and logs instead of assuming RBAC.
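Two quick checks from the VM narrow this down; a sketch, assuming the account name from earlier:

# Does the account name resolve to a private endpoint IP (e.g. 10.x.x.x)
# or still to a public IP? Misconfigured private DNS shows up here first.
nslookup stcontosodocs01.blob.core.windows.net

# Inspect the effective firewall configuration on the account
az storage account show \
  --name stcontosodocs01 \
  --resource-group rg-contoso-storage \
  --query networkRuleSet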

Troubleshooting case 2: the vendor upload SAS fails immediately. Check token start time and clock skew, expiry, permissions, signed resource, HTTPS requirement, IP restriction, and whether the storage firewall allows the vendor source. If the account disallows shared key access and the SAS is key-signed, switch to an allowed authorization method, such as a user delegation SAS for Blob Storage.
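The expiry check can be triaged locally by pulling the se parameter out of the token; the token below is a made-up example using the standard SAS parameter keys, with the signature redacted:

```shell
# Hypothetical SAS query string; keys (sv, sp, spr, se, sig) are the standard
# SAS fields, but the values are illustrative only
sas='sv=2022-11-02&sp=cw&spr=https&se=2024-01-01T00:00:00Z&sig=REDACTED'

# Extract the expiry (se); ISO 8601 UTC timestamps compare correctly as strings
expiry=$(printf '%s\n' "$sas" | tr '&' '\n' | sed -n 's/^se=//p')
now=$(date -u +%Y-%m-%dT%H:%M:%SZ)

if [[ "$now" > "$expiry" ]]; then
  msg="SAS expired at $expiry"
else
  msg="SAS valid until $expiry"
fi
echo "$msg"
```

If the expiry is fine, move on to permissions, signed resource, and the storage firewall.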

Troubleshooting case 3: finance users can mount the share but cannot edit a folder. That points past DNS and port 445 because the mount worked. Check Storage File Data SMB Share Contributor or Elevated Contributor assignment, then check NTFS ACLs on the folder. If key-based mount works but identity mount fails, investigate identity integration rather than share quota.

Troubleshooting case 4: lifecycle policy did not archive old blobs. Confirm the rule is enabled, prefix matches, blob type is block blob, age condition is met, and the action applies to base blobs rather than only versions. If versioning is enabled, old versions may remain and continue billing even when the current blob moves tier.
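When verifying those conditions, it helps to see the rule shape. A minimal policy sketch, with the rule name, prefix, and day threshold all illustrative:

# Block blobs under archive/ move to the archive tier 180 days after
# last modification; the action targets base blobs, not versions
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "archive-old-exports",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "archive/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToArchive": { "daysAfterModificationGreaterThan": 180 }
          }
        }
      }
    }
  ]
}
EOF

az storage account management-policy create \
  --account-name stcontosodocs01 \
  --resource-group rg-contoso-storage \
  --policy @policy.json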

A strong exam answer usually names both the feature and the reason. Do not simply say use private endpoint; say use a private endpoint for the correct storage subresource and configure private DNS. Do not simply say use SAS; say short-lived SAS with only required permissions, HTTPS-only, and stored access policy if revocation is needed. Do not simply say enable backup; match the recovery problem to soft delete, versioning, share snapshots, Azure Backup, or redundancy. That level of precision is what separates AZ-104 implementation knowledge from entry-level cloud vocabulary.

Test Your Knowledge

A vendor needs temporary upload access to one blob container and must not receive ongoing account-wide permission. Which option best fits?

Test Your Knowledge

Users can mount an Azure file share but cannot modify files in one folder. Which area is most likely responsible?

Test Your Knowledge

Security requires private IP access to Blob Storage from a VNet and public access should be disabled. What must be included?
