8.3 Backend Pools, Health Probes, and Load Balancing Rules
Key Takeaways
- Backend pools define the instances that can receive traffic from a load balancer rule.
- Health probes determine whether a backend instance remains eligible for new flows.
- Load balancing rules bind frontend IP, frontend port, backend pool, backend port, protocol, and probe.
- Inbound NAT rules are for management or direct instance access, not application load distribution.
- Probe failures are one of the fastest ways to identify why a backend is not receiving traffic.
The packet path
A load-balanced connection follows a simple path: the client connects to the frontend IP and port, the load balancing rule selects a backend from the pool and forwards to the backend port, and the backend instance responds. Health probes run separately to decide whether each backend is eligible for new flows. If any object in that chain is misconfigured, the service may appear down even though one or more VMs are running.
Think of the objects as a contract. The frontend IP and port describe what clients use. The backend pool describes possible destinations. The backend port describes where the application listens on each destination. The health probe describes how Azure decides whether the destination is healthy. The NSG and route table decide whether packets are allowed and delivered.
| Object | Purpose | Common failure |
|---|---|---|
| Frontend IP configuration | Address clients connect to | DNS points elsewhere or wrong frontend selected. |
| Backend pool | Targets that can receive traffic | NIC, VMSS, or IP not associated with pool. |
| Health probe | Tests backend health | App does not listen on probe port or returns wrong status. |
| Load balancing rule | Maps frontend traffic to backend pool and port | Protocol or port mismatch. |
| NSG rule | Allows or denies traffic to backend | Missing inbound allow for app or probe. |
| Route table | Controls next hop | UDR sends traffic to wrong appliance or blackhole. |
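To inspect most of this chain for one load balancer at a glance, a quick sketch (using the resource names that appear later in this section; output key names can vary slightly by CLI version):
az network lb show -g rg-network -n lb-web-public \
  --query "{frontends: frontendIpConfigurations[].name, pools: backendAddressPools[].name, probes: probes[].name, rules: loadBalancingRules[].name}"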
Backend pools
A backend pool can reference VM NICs, VM Scale Set instances, or IP addresses depending on the load balancer configuration. For administrators, the key is membership. A VM is not automatically in the pool just because it is in the same subnet. It must be associated with the backend pool, or the load balancer has no reason to send traffic to it.
VM Scale Sets pair naturally with backend pools because new instances can join the pool as the scale set grows. For standalone VMs, verify the NIC configuration. If the backend pool is IP-based, verify that the IP address is correct and still assigned to the intended instance.
Useful commands:
az network lb address-pool list -g rg-network --lb-name lb-web-public -o table
az network nic ip-config show \
-g rg-compute \
--nic-name vm1-nic \
-n ipconfig1 \
--query loadBalancerBackendAddressPools
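If the NIC turns out not to be associated, one way to add it to the pool (a sketch reusing the names above; the pool ID lookup handles the load balancer living in a different resource group than the NIC):
az network nic ip-config address-pool add \
  -g rg-compute \
  --nic-name vm1-nic \
  --ip-config-name ipconfig1 \
  --address-pool $(az network lb address-pool show \
    -g rg-network --lb-name lb-web-public -n be-web --query id -o tsv)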
Health probes
A health probe is not a user request. It is Azure's periodic test to decide whether a backend should receive new flows. Probes can use TCP, HTTP, or HTTPS (HTTPS probes require the Standard SKU). A TCP probe succeeds when the backend accepts a TCP connection on the probe port. An HTTP or HTTPS probe expects an HTTP 200 response from the configured path.
Probe design should match application truth. If the app listens on port 8080 but the probe checks port 80, the backend may be marked unhealthy. If the app requires authentication on /, an HTTP probe to / may fail. Use a lightweight health endpoint such as /health that returns success only when the app is ready to serve traffic.
Probe traffic also needs to be allowed. Probes originate from the Azure platform address 168.63.129.16, represented by the AzureLoadBalancer service tag in NSG rules. If an NSG blocks the probe, the backend is marked unhealthy even when the app is listening.
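A sketch of an NSG rule that admits probe traffic (the NSG name and priority are placeholders):
az network nsg rule create \
  -g rg-network \
  --nsg-name nsg-web \
  -n allow-lb-probe \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes AzureLoadBalancer \
  --destination-port-ranges 8080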
Probe troubleshooting tree:
Backend not receiving traffic
|-- Is the backend in the backend pool?
| |-- No: add the NIC, VMSS, or IP to the pool.
| |-- Yes: continue.
|-- Is the health probe showing healthy?
| |-- No: test listener, path, port, NSG, and app response.
| |-- Yes: continue.
|-- Does the load balancing rule reference this pool and probe?
| |-- No: correct the rule binding.
| |-- Yes: continue.
|-- Can client traffic reach the frontend and pass NSG rules?
| |-- No: fix DNS, public IP, NSG, route, or firewall.
| |-- Yes: inspect application logs and return path.
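For the first branch, membership can also be confirmed from the pool side rather than the NIC side (a sketch; the output key name can vary slightly by CLI version):
az network lb address-pool show \
  -g rg-network \
  --lb-name lb-web-public \
  -n be-web \
  --query "backendIpConfigurations[].id"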
Load balancing rules
A rule binds the frontend to the backend. The protocol can be TCP or UDP. The frontend port is what clients connect to, and the backend port is where Azure sends traffic on the backend instance. They can be the same or different. The rule also references a health probe; only healthy backend instances are used for new flows.
Session persistence, also called distribution mode or affinity in some interfaces, controls whether related flows from a client tend to go to the same backend. Use it only when the application needs it. Better application designs store session state outside individual VMs, but exam scenarios sometimes describe legacy apps that require client IP affinity.
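When affinity is genuinely required, it is set as the rule's distribution mode (a sketch; the rule name matches the example created below):
az network lb rule update \
  -g rg-network \
  --lb-name lb-web-public \
  -n rule-web-https \
  --load-distribution SourceIP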
Floating IP supports specific high availability patterns, such as SQL Server Always On availability group listeners and certain network virtual appliance designs. It is not a generic setting to turn on for web servers. HA ports rules can load balance all ports on an internal Standard load balancer, mainly for network virtual appliances, and must be used deliberately.
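An HA ports rule is expressed as protocol All with frontend and backend ports of 0 (a sketch; lb-nva-internal and its pool and probe names are hypothetical):
az network lb rule create \
  -g rg-network \
  --lb-name lb-nva-internal \
  -n rule-ha-ports \
  --protocol All \
  --frontend-port 0 \
  --backend-port 0 \
  --frontend-ip-name LoadBalancerFrontEnd \
  --backend-pool-name be-nva \
  --probe-name hp-nva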
Example rule creation:
az network lb probe create \
-g rg-network \
--lb-name lb-web-public \
-n hp-web \
--protocol Http \
--port 8080 \
--path /health
az network lb rule create \
-g rg-network \
--lb-name lb-web-public \
-n rule-web-https \
--protocol Tcp \
--frontend-port 443 \
--backend-port 8080 \
--frontend-ip-name LoadBalancerFrontEnd \
--backend-pool-name be-web \
--probe-name hp-web
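Note that this rule forwards TCP 443 to 8080 without terminating TLS. Azure Load Balancer is a layer 4 pass-through, so the backend must handle TLS itself on port 8080.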
Inbound NAT rules
Inbound NAT rules map a frontend port to a specific backend instance and port. They are useful for direct administration or unique per-instance access, such as connecting to VM1 on a translated management port. They do not distribute traffic across a pool. If the requirement says distribute application traffic across two VMs, use a load balancing rule, not inbound NAT.
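A sketch of a NAT rule granting direct SSH to one instance, then attaching it to that instance's NIC (ports and names are illustrative; if the load balancer lives in a different resource group than the NIC, pass the NAT rule's full resource ID instead of its name):
az network lb inbound-nat-rule create \
  -g rg-network \
  --lb-name lb-web-public \
  -n nat-vm1-ssh \
  --protocol Tcp \
  --frontend-port 50001 \
  --backend-port 22 \
  --frontend-ip-name LoadBalancerFrontEnd
az network nic ip-config inbound-nat-rule add \
  -g rg-compute \
  --nic-name vm1-nic \
  --ip-config-name ipconfig1 \
  --lb-name lb-web-public \
  --inbound-nat-rule nat-vm1-ssh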
In production, direct RDP or SSH from the internet is often avoided in favor of Azure Bastion, just-in-time access, or private management paths. For exam purposes, recognize the function: NAT rules target a specific instance; load balancing rules target a pool.
Practical diagnostics
Start at the backend. On Linux, use ss -lntp or curl localhost:8080/health. On Windows, use netstat -ano, Test-NetConnection, and the application logs. Then test from another VM in the same VNet or subnet. After that, test through the frontend IP.
Commands:
az network lb probe show -g rg-network --lb-name lb-web-public -n hp-web
az network lb rule show -g rg-network --lb-name lb-web-public -n rule-web-https
az network watcher test-ip-flow \
-g rg-network \
--direction Inbound \
--protocol TCP \
--local 10.20.2.4:8080 \
--remote 203.0.113.10:51515 \
--vm vm-web-1
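Effective NSG rules on the backend NIC often explain a blocked probe faster than reading each NSG by hand (the VM must be running for this to return results):
az network nic list-effective-nsg -g rg-compute -n vm1-nic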
Exam traps
Do not point the probe at a port that only some instances use. Do not forget that a backend can be running but excluded because the probe fails. Do not use DNS round robin when health-based distribution is required. Do not select inbound NAT when the requirement says load balance across the backend pool.
A final scenario pattern: a team deploys two VMs, adds both to a backend pool, and configures a frontend rule, but only one VM receives traffic. If the probe path returns 200 on one VM and 404 or 500 on the other, the load balancer is working correctly by excluding the unhealthy backend.
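On a Standard load balancer, per-backend probe status is surfaced by the DipAvailability metric, which makes this scenario visible without logging into either VM (a sketch; values are averaged over each interval):
az monitor metrics list \
  --resource $(az network lb show -g rg-network -n lb-web-public --query id -o tsv) \
  --metric DipAvailability \
  --interval PT1M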
Review questions
- A VM is in the correct subnet but never receives traffic from the load balancer. What must be verified first?
- An HTTP health probe checks / on port 80, but the app listens on port 8080 and exposes /health. What should you change?
- Which load balancer feature maps a frontend port to one specific backend VM for direct access rather than distributing across a pool?