8.2 Public and Internal Load Balancer Design

Key Takeaways

  • Azure Load Balancer is a Layer 4 service that distributes TCP and UDP traffic using frontend IP configurations, load balancing rules, backend pools, and health probes.
  • A public load balancer exposes a public frontend IP, while an internal load balancer uses a private frontend IP inside a VNet.
  • Standard Load Balancer is the normal production choice and requires explicit network security rules for traffic flow.
  • Load balancer design must match client location, protocol, port, availability zone, and backend placement.
  • Do not confuse Azure Load Balancer with Application Gateway, Traffic Manager, Front Door, or DNS round robin.
Last updated: May 2026

What Azure Load Balancer does

Azure Load Balancer distributes inbound and outbound TCP and UDP flows at Layer 4. It does not inspect HTTP paths, cookies, host headers, or TLS certificates. The administrator defines a frontend IP, a backend pool, a health probe, and one or more load balancing rules. When a client connects to the frontend IP and port, Azure distributes the flow to healthy backend instances based on the rule.

A public load balancer has a public frontend IP address. It is used when clients on the internet, or external clients routed through a public path, must reach backend resources. An internal load balancer has a private frontend IP address in a subnet. It is used when clients inside a VNet, peered VNet, VPN, or ExpressRoute-connected network need a stable private service endpoint.

| Design question | Public Load Balancer | Internal Load Balancer |
| --- | --- | --- |
| Frontend IP type | Public IP address | Private IP address from a subnet |
| Typical clients | Internet clients or public ingress paths | VNet, peered VNet, VPN, ExpressRoute clients |
| Common use | Public TCP or UDP services, inbound NAT, outbound SNAT | Private application tiers, internal appliances, database listener patterns |
| DNS record type | Public DNS A record points to public IP | Private DNS A record points to private frontend IP |
| Security focus | Public exposure, NSGs, allowed source ranges | Internal segmentation, routes, NSGs, hybrid access |

Standard versus Basic

Standard Load Balancer is the expected production answer. It supports zone-aware and zone-redundant designs, larger scale, a 99.99 percent availability SLA when two or more healthy backend instances are deployed, HA ports, and integration with Standard public IP addresses. Basic Load Balancer is legacy and limited. In new AZ-104 scenarios, choose Standard unless a question explicitly describes an existing Basic deployment.

Standard Load Balancer is secure by default in an important way: inbound traffic requires explicit NSG allowance to the backend. If a backend VM is healthy but clients cannot connect, check NSG rules before assuming the load balancer rule is broken. Public IP SKU must also match. Standard Load Balancer uses Standard public IP addresses, not Basic public IP addresses.
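The NSG requirement above is the most common cause of "load balancer configured, clients still blocked." A minimal sketch of an allow rule for HTTPS, assuming placeholder resource names (rg-network, nsg-web) that are not from this chapter:

```shell
# Sketch: Standard Load Balancer does not open traffic by itself.
# An NSG associated with the backend subnet or NIC must allow the
# frontend port from the expected source range.
az network nsg rule create \
  -g rg-network \
  --nsg-name nsg-web \
  -n allow-https-inbound \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes Internet \
  --destination-port-ranges 443
```

For an internal design, the same idea applies, but the source range would typically be the client subnets rather than Internet.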

Choosing public or internal

Start with the client. If the client is a public user connecting to a TCP service on the internet, a public load balancer may fit. If the client is an application tier in another subnet, an on-premises client over VPN, or a VM in a peered VNet, use an internal load balancer when the service should remain private.

Scenario examples:

| Scenario | Likely design | Why |
| --- | --- | --- |
| Two Linux VMs host a public TCP service on port 443 | Public Standard Load Balancer | Internet clients need a public frontend. |
| Web tier in subnet A calls an API tier in subnet B | Internal Standard Load Balancer | The API endpoint should stay private. |
| On-premises clients over ExpressRoute call a private application | Internal Standard Load Balancer plus private DNS | Hybrid clients should resolve a private name to a private frontend IP. |
| Need URL path routing between /api and /images | Application Gateway or Front Door, not Load Balancer | Azure Load Balancer is Layer 4, not HTTP Layer 7. |
| Need global DNS-based endpoint selection | Traffic Manager or Front Door, not regional Load Balancer | Azure Load Balancer is regional. |

Frontend IP and backend placement

The frontend IP is the address clients connect to. A public load balancer frontend uses a public IP resource. An internal load balancer frontend uses an IP from a subnet. Backend pools can contain VM NICs or IP addresses depending on the configuration. VM Scale Sets are common backend targets because they provide multiple identical instances.

Zone design matters. A zone-redundant Standard public IP can survive a single zone failure for the frontend. Backends should also be distributed across availability zones or availability sets according to the workload requirement. A zone-redundant frontend does not magically make a single backend VM highly available.
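Creating a zone-redundant frontend is a one-flag decision at public IP creation time. A sketch, with a placeholder name (pip-web):

```shell
# Sketch: a Standard public IP spanning zones 1, 2, and 3 survives
# a single-zone failure; it can then be attached as the LB frontend.
az network public-ip create \
  -g rg-network \
  -n pip-web \
  --sku Standard \
  --zone 1 2 3
```

The zone property cannot be changed after creation, so the zone decision belongs in the initial design, not in later remediation.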

The load balancer does not install or start the application. Each backend must listen on the backend port and respond to the health probe. If the frontend listens on port 80 and the backend port is 8080, the application must listen on 8080 unless a local proxy maps it.
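The frontend-to-backend port mapping described above is expressed entirely in the rule and the probe. A sketch of the 80-to-8080 case, assuming placeholder names (lb-web, pool-web):

```shell
# Sketch: probe and rule for a backend listening on 8080 behind frontend port 80.
# The probe must target the port the application actually listens on.
az network lb probe create \
  -g rg-network --lb-name lb-web \
  -n probe-app --protocol Tcp --port 8080

az network lb rule create \
  -g rg-network --lb-name lb-web \
  -n rule-web --protocol Tcp \
  --frontend-port 80 --backend-port 8080 \
  --backend-pool-name pool-web --probe-name probe-app
```

If the probe targets a port nothing listens on, every backend is marked unhealthy and the frontend stops answering, even though the rule itself is valid.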

DNS for load-balanced services

DNS should point users to the frontend, not to individual backend VMs. For public services, create a public DNS A record that points to the public IP address or use a label on the public IP if appropriate. For internal services, create a private DNS A record that points to the internal load balancer frontend IP.
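For the internal case, the record lives in a private DNS zone linked to the VNet. A sketch, assuming a hypothetical zone name (contoso.internal) and the internal frontend IP from the CLI example later in this section:

```shell
# Sketch: a private DNS A record pointing a service name at the
# internal load balancer frontend, not at any individual backend VM.
az network private-dns record-set a add-record \
  -g rg-network \
  -z contoso.internal \
  -n api \
  -a 10.20.2.10
```

The zone must already exist and be linked to the VNet (az network private-dns link vnet create) before clients in that VNet can resolve the name.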

Do not create multiple A records for backend VMs as a replacement for a load balancer when the requirement includes health-based distribution. DNS round robin does not know whether the application on a VM is healthy. Azure Load Balancer uses health probes to remove unhealthy instances from rotation.

Design troubleshooting tree

Need to choose a load balancing service
|-- Is the traffic HTTP or HTTPS and needs URL, host, cookie, or TLS decisions?
|   |-- Yes: consider Application Gateway or Front Door.
|   |-- No: continue.
|-- Is the endpoint regional TCP or UDP?
|   |-- Yes: Azure Load Balancer fits.
|   |-- No: consider DNS/global service options.
|-- Are clients on the internet?
|   |-- Yes: public Load Balancer with public frontend IP.
|   |-- No: internal Load Balancer with private frontend IP.
|-- Must backend health affect traffic distribution?
|   |-- Yes: configure a health probe and rules.
|   |-- No: simple DNS may be enough, but rarely for production app tiers.

Portal and CLI flow

Portal path: Azure portal > Load balancers > Create. Choose SKU, type, frontend IP configuration, backend pool, inbound rules, and probes. For a public design, create or select a Standard public IP. For an internal design, choose the VNet, subnet, and private frontend address.

CLI outline (resource names are examples):

```shell
# Public Standard Load Balancer with a Standard public IP frontend
az network lb create \
  -g rg-network \
  -n lb-web-public \
  --sku Standard \
  --public-ip-address pip-web

# Internal Standard Load Balancer with a private frontend IP in subnet-app
az network lb create \
  -g rg-network \
  -n lb-api-internal \
  --sku Standard \
  --vnet-name vnet-prod \
  --subnet subnet-app \
  --frontend-ip-name fe-api \
  --private-ip-address 10.20.2.10
```
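Creating the load balancer provisions only the resource and its frontend. A backend pool must still be created and populated; a sketch for the internal example, assuming a hypothetical VM NIC name (nic-api-vm1):

```shell
# Sketch: create the backend pool, then attach an existing VM NIC
# ip-configuration to it so the VM receives load-balanced traffic.
az network lb address-pool create \
  -g rg-network --lb-name lb-api-internal -n pool-api

az network nic ip-config address-pool add \
  -g rg-network \
  --nic-name nic-api-vm1 \
  --ip-config-name ipconfig1 \
  --lb-name lb-api-internal \
  --address-pool pool-api
```

Probes and rules are added the same way (az network lb probe create, az network lb rule create); until a rule references the pool and a probe, no traffic flows.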

Exam traps

If the scenario says the load balancer should route based on URL path, Azure Load Balancer is not the answer. If it says TCP or UDP and regional distribution, it often is. If public users cannot connect to a Standard public load balancer, remember NSGs must allow traffic to the backends. If internal users cannot connect by name, check private DNS before changing the load balancer type.

A final practical distinction: an internal load balancer is not a security boundary by itself. It has a private IP, but NSGs, route tables, service firewalls, and segmentation still control which clients can reach it.

Test Your Knowledge

An application tier in a VNet must expose a private TCP endpoint to web servers in another subnet. Which design best fits?

Test Your Knowledge

A service must route HTTPS requests to different backend pools based on URL path. Which Azure service is more appropriate than Azure Load Balancer?

Test Your Knowledge

A public Standard Load Balancer is configured correctly, but inbound client traffic is still blocked. What Azure control should you check on the backend subnet or NIC?
