4.1 Storage Account Design, Redundancy, and Encryption
Key Takeaways
- Choose storage account kind, performance, replication, and access tier from workload requirements before creating dependent services.
- Replication choices protect against different failure scopes, so LRS, ZRS, GRS, GZRS, RA-GRS, and RA-GZRS are not interchangeable.
- Azure Storage encrypts data at rest by default, and administrators must know when Microsoft-managed keys, customer-managed keys, infrastructure encryption, and encryption scopes apply.
- Some settings, such as hierarchical namespace and redundancy conversions, affect compatibility, migration planning, and recovery behavior.
- Exam scenarios often combine cost, region availability, failover, read access, compliance, and performance constraints.
Design decisions that matter in production
A storage account is a management boundary for blobs, files, queues, and tables, but it is also a security, networking, redundancy, billing, and encryption boundary. When an exam question asks where to put data, do not jump straight to a blob container or file share. First decide whether a separate storage account is needed for isolation, replication, network rules, private endpoints, customer-managed keys, lifecycle policy, diagnostics, or ownership.
The most common general-purpose account for new workloads is StorageV2. It supports Blob Storage, Azure Files, queues, tables, standard performance, access tiers, lifecycle management, and most current data protection features. Premium accounts are selected for specific performance profiles, such as premium block blobs, premium file shares, or premium page blobs. A premium file share account is not just a faster version of a standard account; it has different performance provisioning behavior and is often used when low-latency SMB or NFS file workloads need predictable throughput.
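For the premium file share case, the account kind itself differs, not just the SKU. A minimal sketch with an illustrative account name:

```shell
# Premium file shares require the FileStorage account kind with a premium SKU.
# The account name is an illustrative placeholder.
az storage account create \
  --name stfilesprem01 \
  --resource-group rg-storage-prod \
  --location eastus \
  --kind FileStorage \
  --sku Premium_LRS
```

A StorageV2 account with a premium SKU does not provide premium file shares; the FileStorage kind is what enables the provisioned file share model.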
| Requirement | Likely storage design choice | Why it matters |
|---|---|---|
| General blob and file workload | General-purpose v2, standard | Broad feature support and normal cost profile |
| Low-latency file workload | Premium file shares | Provisioned performance and SSD-backed shares |
| Analytics namespace with directories and ACLs | StorageV2 with hierarchical namespace | Enables Azure Data Lake Storage Gen2 behavior |
| Cheapest locally durable blob storage | Standard account with LRS | Three local copies in one region, no cross-region protection |
| Read from secondary region during outage | RA-GRS or RA-GZRS | Secondary endpoint can be used for read operations |
| Zone failure tolerance in primary region | ZRS or GZRS | Data is synchronously replicated across availability zones |
Redundancy is a scenario skill. Locally redundant storage, or LRS, keeps three copies in one data center or storage scale unit within a region. It is usually the lowest-cost option, but it does not protect against a zonal or regional outage. Zone-redundant storage, or ZRS, synchronously copies data across availability zones in a supported region. It is a strong fit when the application must survive a zone failure without cross-region replication.
Geo-redundant storage, or GRS, adds asynchronous replication to a paired secondary region. Because replication is asynchronous, a regional disaster can include some data loss within the recovery point objective. Geo-zone-redundant storage, or GZRS, combines ZRS in the primary region with asynchronous replication to the secondary region. Read-access variants, RA-GRS and RA-GZRS, expose a secondary read endpoint. That does not mean writes automatically move to the secondary region; account failover is a separate operation, and failover can be disruptive.
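Where the target SKU supports it, an account's redundancy can be changed after creation with an update. A sketch, reusing the account name from the CLI example in this section:

```shell
# Change an existing account's replication, for example from GZRS to RA-GZRS.
# Note: conversions that add or remove zone redundancy (such as LRS to ZRS)
# may require a customer-initiated conversion rather than a direct SKU update.
az storage account update \
  --name staz104prod01 \
  --resource-group rg-storage-prod \
  --sku Standard_RAGZRS
```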
| Option | Primary copies | Secondary region | Read secondary | Typical exam clue |
|---|---|---|---|---|
| LRS | Three local copies | No | No | Lowest cost, local durability only |
| ZRS | Three zones | No | No | Survive datacenter or zone failure |
| GRS | Local copies plus async secondary | Yes | No | Regional disaster protection, no normal secondary reads |
| RA-GRS | Local copies plus async secondary | Yes | Yes | App must read from paired region during outage |
| GZRS | Zone copies plus async secondary | Yes | No | Zone and regional protection |
| RA-GZRS | Zone copies plus async secondary | Yes | Yes | Highest standard redundancy with secondary reads |
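For the read-access variants in the table above, the secondary read endpoint can be inspected on the account. A sketch reusing the account name from the CLI example in this section; `secondaryEndpoints` is populated only when read access to the secondary is enabled:

```shell
# Show the primary and secondary blob endpoints for an RA-GRS or RA-GZRS account.
# The secondary endpoint is the account name with a "-secondary" suffix.
az storage account show \
  --name staz104prod01 \
  --resource-group rg-storage-prod \
  --query '{primary:primaryEndpoints.blob, secondary:secondaryEndpoints.blob}'
```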
Encryption at rest is automatic for Azure Storage. By default, Microsoft-managed keys protect the data, and this is enough for many workloads. Customer-managed keys use Azure Key Vault or managed HSM when the organization requires key ownership, rotation control, or separation of duties. Infrastructure encryption adds a second encryption layer for supported account types. Encryption scopes allow different containers or blobs to use distinct encryption settings, which is useful when tenants, departments, or compliance groups share an account but need key separation.
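These encryption settings can be scripted. A hedged sketch: the Key Vault URL, key name, and scope name are illustrative placeholders, and the account is assumed to already have an identity with the required key permissions on the vault:

```shell
# Point the account's encryption at a customer-managed key in Key Vault
# (vault URL and key name are illustrative).
az storage account update \
  --name staz104prod01 \
  --resource-group rg-storage-prod \
  --encryption-key-source Microsoft.Keyvault \
  --encryption-key-vault https://kv-example.vault.azure.net \
  --encryption-key-name storage-cmk

# Create an encryption scope that stays on Microsoft-managed keys,
# for containers that do not need the customer-managed key.
az storage account encryption-scope create \
  --account-name staz104prod01 \
  --resource-group rg-storage-prod \
  --name scope-shared-tenant \
  --key-source Microsoft.Storage
```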
Portal path example: Storage accounts > Create > Basics for subscription, resource group, name, region, performance, and redundancy. Then review Advanced for hierarchical namespace, SFTP, NFS, TLS minimum version, shared key access, and blob public access. After creation, use Settings > Configuration for account-level security defaults, Data management > Redundancy for replication details, and Security + networking > Encryption for key settings.
CLI example, creating the account and then verifying its settings:

```shell
# Create a general-purpose v2 account with geo-zone-redundant storage,
# a TLS 1.2 minimum, and public blob access disabled.
az storage account create \
  --name staz104prod01 \
  --resource-group rg-storage-prod \
  --location eastus \
  --sku Standard_GZRS \
  --kind StorageV2 \
  --min-tls-version TLS1_2 \
  --allow-blob-public-access false

# Verify the SKU, kind, TLS minimum, and public blob access setting.
az storage account show \
  --name staz104prod01 \
  --resource-group rg-storage-prod \
  --query '{sku:sku.name,kind:kind,tls:minimumTlsVersion,publicBlob:allowBlobPublicAccess}'
```
Troubleshooting often starts with the error scope. If a client cannot create a container, check identity and account-level permissions before network rules. If a client can reach the account but receives an encryption or key error, inspect Key Vault access, key state, purge protection, managed identity assignment, and whether the storage account can reach the key. If a workload fails after enabling hierarchical namespace, confirm that the application and SDK support Data Lake Storage Gen2 semantics. Some older tools assume flat blob namespace behavior.
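The settings named above can be surfaced in one query. A sketch reusing the account name from the CLI example in this section:

```shell
# Surface the settings that most often explain access and encryption failures:
# network default action, encryption key source, hierarchical namespace,
# and whether shared key access is allowed.
az storage account show \
  --name staz104prod01 \
  --resource-group rg-storage-prod \
  --query '{networkDefault:networkRuleSet.defaultAction, keySource:encryption.keySource, hns:isHnsEnabled, sharedKey:allowSharedKeyAccess}'
```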
A practical design sequence is: identify service type, performance tier, region, redundancy, networking, identity, encryption, data protection, and monitoring. This order prevents common mistakes. For example, choosing LRS for a compliance workload that requires regional recovery cannot be repaired later by enabling blob soft delete. Enabling customer-managed keys without a Key Vault recovery plan creates an availability dependency: if the key becomes inaccessible, the data becomes unreadable. Putting unrelated production and test workloads in one account can make firewall rules and key rotation harder than necessary.
Case scenario: a finance team stores quarterly exports in blobs. The data must stay available during a single zone failure, must not be publicly accessible, must be encrypted with organization-managed keys, and must be recoverable if a user deletes a file. The base design is a StorageV2 account with ZRS or GZRS depending on regional disaster requirements, public blob access disabled, customer-managed keys in Key Vault, blob soft delete, versioning, and role-based access for administrators. If the question adds read access from a paired region during a regional outage, move from GRS or GZRS to RA-GRS or RA-GZRS.
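The base design for this scenario can be sketched in CLI form; account, resource group, vault, key, and retention values are illustrative assumptions:

```shell
# Zone-redundant StorageV2 account, public blob access disabled
# (names are illustrative placeholders).
az storage account create \
  --name stfinanceexports01 \
  --resource-group rg-finance \
  --location eastus \
  --kind StorageV2 \
  --sku Standard_ZRS \
  --min-tls-version TLS1_2 \
  --allow-blob-public-access false

# Customer-managed key in Key Vault (vault URL and key name illustrative;
# the account identity needs key permissions on the vault).
az storage account update \
  --name stfinanceexports01 \
  --resource-group rg-finance \
  --encryption-key-source Microsoft.Keyvault \
  --encryption-key-vault https://kv-finance.vault.azure.net \
  --encryption-key-name finance-cmk

# Soft delete and versioning so deleted or overwritten exports are recoverable.
az storage account blob-service-properties update \
  --account-name stfinanceexports01 \
  --resource-group rg-finance \
  --enable-versioning true \
  --enable-delete-retention true \
  --delete-retention-days 30
```

Swapping `Standard_ZRS` for `Standard_GZRS` or `Standard_RAGZRS` covers the regional-disaster and secondary-read variants of the scenario.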
Review questions
A workload must survive an availability zone failure in the primary region without relying on a paired region. Which redundancy option best matches the requirement?
An organization requires control over key rotation for data at rest in a storage account. What should you configure?
Which design choice enables directory and ACL behavior for Azure Data Lake Storage Gen2 workloads?