3.2 Azure AI Face Service
Key Takeaways
- Azure AI Face provides face detection, face verification (1:1), face identification (1:N), face grouping, and liveness detection.
- Face detection returns face rectangles, landmarks (27 points), and attributes such as head pose, glasses, blur, occlusion, noise, and mask (emotion and age/gender attributes have been retired).
- Face verification compares two faces and returns a confidence score indicating whether they belong to the same person (1:1 matching).
- Face identification matches an unknown face against a PersonGroup or LargePersonGroup of known faces (1:N matching).
- Access to face identification and verification is restricted — you must apply for and receive Microsoft approval before using these features.
Azure AI Face Service
Quick Answer: Azure AI Face provides face detection (locate faces and attributes), verification (1:1 — are these the same person?), identification (1:N — who is this person?), and liveness detection (is this a live person vs. a photo/video?). Identification and verification require Microsoft approval for access.
Face Detection
Face detection locates faces in an image and returns face rectangles, landmarks, and optional attributes:
```python
from azure.ai.vision.face import FaceClient
from azure.ai.vision.face.models import (
    FaceDetectionModel,
    FaceRecognitionModel,
    FaceAttributeTypeDetection03,
    FaceAttributeTypeRecognition04,
)
from azure.core.credentials import AzureKeyCredential

client = FaceClient(
    endpoint="https://my-face.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Detect faces with attributes
with open("photo.jpg", "rb") as f:
    detected_faces = client.detect(
        image_content=f.read(),
        detection_model=FaceDetectionModel.DETECTION_03,
        recognition_model=FaceRecognitionModel.RECOGNITION_04,
        return_face_id=True,
        return_face_landmarks=True,
        return_face_attributes=[
            FaceAttributeTypeDetection03.HEAD_POSE,
            FaceAttributeTypeDetection03.BLUR,
            FaceAttributeTypeDetection03.MASK,
            # Quality attribute available with recognition_03/04
            FaceAttributeTypeRecognition04.QUALITY_FOR_RECOGNITION,
        ],
    )

for face in detected_faces:
    rect = face.face_rectangle
    print(f"Face at: ({rect.left}, {rect.top}), "
          f"size: {rect.width}x{rect.height}")
    if face.face_attributes:
        print(f"  Head pose: {face.face_attributes.head_pose}")
```
Detection Models
| Model | Features | Speed | Use Case |
|---|---|---|---|
| detection_01 | Legacy model, face attributes | Fast | Backward compatibility |
| detection_02 | Improved accuracy for small/side faces | Medium | General detection |
| detection_03 | Best accuracy, blur/mask/occlusion attributes | Medium | Production use |
Recognition Models
| Model | Accuracy | Use Case |
|---|---|---|
| recognition_03 | High | General face recognition |
| recognition_04 | Highest (latest) | Best accuracy for verification and identification |
On the Exam: Always pair detection_03 with recognition_04 for the best results. Questions may ask which model combination provides the highest accuracy.
Face Verification (1:1)
Face verification compares two faces to determine if they belong to the same person:
```python
# Detect faces in two images — both must use the same recognition model
face1 = client.detect(
    image_content=image1_bytes,
    detection_model=FaceDetectionModel.DETECTION_03,
    recognition_model=FaceRecognitionModel.RECOGNITION_04,
    return_face_id=True,
)[0]
face2 = client.detect(
    image_content=image2_bytes,
    detection_model=FaceDetectionModel.DETECTION_03,
    recognition_model=FaceRecognitionModel.RECOGNITION_04,
    return_face_id=True,
)[0]

# Verify whether they are the same person
result = client.verify_face_to_face(
    face_id1=face1.face_id,
    face_id2=face2.face_id,
)

print(f"Same person: {result.is_identical}")
print(f"Confidence: {result.confidence:.2f}")
```
Use cases: Identity verification (ID card photo vs. selfie), access control, KYC (Know Your Customer) processes.
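Applications typically layer their own decision threshold on top of the returned confidence score rather than relying only on `is_identical`. A minimal, illustrative sketch of that logic (the function name and threshold values are assumptions, not part of the Azure SDK — tune thresholds against your own data):

```python
def decide_verification(confidence: float,
                        accept_threshold: float = 0.7,
                        review_threshold: float = 0.5) -> str:
    """Map a verification confidence score to a business decision.

    Thresholds are illustrative defaults, not SDK values.
    """
    if confidence >= accept_threshold:
        return "accept"         # treat as the same person
    if confidence >= review_threshold:
        return "manual-review"  # borderline: route to a human
    return "reject"             # treat as different people

# Example: a borderline score goes to manual review
print(decide_verification(0.62))  # manual-review
```

A three-way outcome like this is common in KYC flows: high-confidence matches pass automatically, borderline scores go to a human reviewer, and low scores are rejected.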
Face Identification (1:N)
Face identification matches an unknown face against a group of known faces:
Step 1: Create a PersonGroup
```python
# Create a person group that uses the recognition_04 model
client.create_person_group(
    person_group_id="employees",
    name="Company Employees",
    recognition_model=FaceRecognitionModel.RECOGNITION_04,
)
```
Step 2: Add People and Their Face Images
```python
# Add a person to the group
person = client.create_person_group_person(
    person_group_id="employees",
    name="Jane Doe",
)

# Add face images for the person (multiple images improve accuracy)
with open("jane_photo1.jpg", "rb") as f:
    client.add_person_group_person_face(
        person_group_id="employees",
        person_id=person.person_id,
        image_content=f.read(),
    )
```
Step 3: Train the PersonGroup
```python
# Train the person group (required after adding/removing faces)
client.train_person_group(person_group_id="employees")

# Check training status
status = client.get_person_group_training_status(
    person_group_id="employees",
)
print(f"Training status: {status.status}")
```
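Training is asynchronous, so production code usually polls until a terminal state is reached before attempting identification. A hedged sketch of that loop, with the status-fetching call abstracted into a parameter so it can stand in for `get_person_group_training_status` (the function name, timeout, and interval are assumptions):

```python
import time

def wait_for_training(get_status, timeout_s: float = 60.0,
                      poll_interval_s: float = 1.0) -> str:
    """Poll `get_status()` until it returns a terminal state.

    `get_status` stands in for a call like
    client.get_person_group_training_status(...).status.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("succeeded", "failed"):
            return status
        time.sleep(poll_interval_s)
    raise TimeoutError("training did not finish in time")

# Example with a stubbed status sequence instead of a live service
states = iter(["running", "running", "succeeded"])
print(wait_for_training(lambda: next(states), poll_interval_s=0.01))
```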
Step 4: Identify Unknown Faces
```python
# Detect a face in a new image — the recognition model must match
# the one the person group was created with
unknown_face = client.detect(
    image_content=unknown_photo_bytes,
    detection_model=FaceDetectionModel.DETECTION_03,
    recognition_model=FaceRecognitionModel.RECOGNITION_04,
    return_face_id=True,
)[0]

# Identify against the person group
results = client.identify(
    face_ids=[unknown_face.face_id],
    person_group_id="employees",
)

for result in results:
    if result.candidates:
        best_match = result.candidates[0]
        print(f"Match: person {best_match.person_id}")
        print(f"Confidence: {best_match.confidence:.2f}")
    else:
        print("No match found")
```
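Candidates come back ordered by confidence, and it is common to enforce a minimum confidence before treating the top candidate as a match. A small illustrative sketch of that filtering step (the data shape and 0.6 cutoff are assumptions, not SDK behavior):

```python
def best_match(candidates, min_confidence: float = 0.6):
    """Return (person_id, confidence) for the top candidate, or None.

    `candidates` is a list of (person_id, confidence) pairs already
    sorted by descending confidence, mirroring the identify response.
    """
    if not candidates:
        return None  # no candidates at all
    person_id, confidence = candidates[0]
    if confidence < min_confidence:
        return None  # top candidate is too uncertain: treat as unknown
    return person_id, confidence

print(best_match([("jane-id", 0.82), ("john-id", 0.41)]))
```

Returning `None` for low-confidence results means an unfamiliar visitor is reported as unknown rather than mislabeled as the closest enrolled person.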
Liveness Detection
Liveness detection verifies that the face input is from a live person (not a photo, video, or deepfake):
| Check | Description |
|---|---|
| Passive liveness | Analyzes a single image for signs of spoofing (photo/screen artifacts) |
| Active liveness | Requires the user to perform an action (turn head, blink) to prove liveness |
| Liveness with verification | Combines liveness detection with face verification against a reference photo |
On the Exam: Liveness detection is a Responsible AI safeguard. Questions may describe a scenario where an attacker holds up a photo to bypass face verification — the answer is to add liveness detection.
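The "liveness with verification" mode grants access only when both checks pass. That combination can be sketched as plain decision logic (the field names and threshold are illustrative; the actual Azure liveness flow runs through client-SDK sessions, not this data class):

```python
from dataclasses import dataclass

@dataclass
class LivenessWithVerifyResult:
    is_live: bool             # liveness check outcome (illustrative field)
    is_identical: bool        # 1:1 verification outcome
    verify_confidence: float  # verification confidence score

def grant_access(result: LivenessWithVerifyResult,
                 min_confidence: float = 0.7) -> bool:
    """Grant access only if the subject is live AND verified."""
    return (result.is_live
            and result.is_identical
            and result.verify_confidence >= min_confidence)

# A printed photo fails the liveness check even if the face matches
spoof = LivenessWithVerifyResult(is_live=False, is_identical=True,
                                 verify_confidence=0.95)
print(grant_access(spoof))  # False
```

This is exactly the exam scenario above: the spoofed request has a high verification score, but the failed liveness check still denies access.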
Responsible Use Restrictions
Microsoft restricts access to certain Face API features:
| Feature | Access Level | Application Required |
|---|---|---|
| Face detection | Open | No |
| Face verification | Restricted | Yes — must describe use case |
| Face identification | Restricted | Yes — must describe use case |
| Emotion recognition | Retired | No longer available |
| Age/gender attributes | Retired | No longer available |
Important: Microsoft retired emotion recognition and demographic attributes (age, gender) from the Face API in 2023 as part of their Responsible AI commitments. The AI-102 exam may test whether you know these features are no longer available.
Review Questions
1. What is the difference between face verification and face identification?
2. What must you do after adding new face images to a PersonGroup before you can use it for identification?
3. An attacker holds up a printed photo to bypass a face verification system. Which feature should you implement to prevent this?
4. Which Face API features has Microsoft retired as part of Responsible AI commitments?