Detection Use Cases and False-Positive Handling
Key Takeaways
- Detection use cases describe the behavior, data sources, logic, severity, and response for a threat scenario.
- False-positive handling improves signal quality without hiding real attacks.
- Tuning should be based on evidence, not analyst fatigue alone.
- Useful detections map to realistic behaviors such as impossible travel, suspicious PowerShell, data exfiltration, or privilege escalation.
- Detection engineering should include testing, documentation, ownership, and periodic review.
A detection use case is a documented security monitoring scenario. It explains what behavior the team wants to detect, which data sources are needed, how the logic works, why it matters, and what an analyst should do when the alert fires.
Detection Use Case Template
| Field | Example |
|---|---|
| Use case name | Suspicious PowerShell from Office process |
| Threat behavior | Malicious document launches script interpreter |
| Data sources | EDR process logs, user identity, file reputation |
| Logic | Parent process is winword.exe and child is powershell.exe with encoded command |
| Severity | High when host is managed and command is encoded |
| Triage steps | Review command, parent document, user activity, network connections |
| Response | Isolate host if malicious behavior is confirmed |
| Tuning notes | Exclude approved macro automation signed by internal team |
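The logic row above can be sketched as a small matching function. The event field names (`parent_image`, `image`, `command_line`) and the set of Office parents below are illustrative assumptions, not a specific EDR schema.

```python
def is_suspicious_office_powershell(event: dict) -> bool:
    """Return True when an Office parent spawns PowerShell with an encoded command.

    Field names are illustrative, not tied to any particular EDR product.
    """
    parent = event.get("parent_image", "").lower()
    child = event.get("image", "").lower()
    cmdline = event.get("command_line", "").lower()

    office_parents = {"winword.exe", "excel.exe", "powerpnt.exe"}  # assumed list
    encoded_flags = ("-enc", "-encodedcommand")

    return (
        parent in office_parents
        and child == "powershell.exe"
        and any(flag in cmdline for flag in encoded_flags)
    )
```

A real rule would normally live in the SIEM or EDR query language; the point of the sketch is that the logic, like the template row, names an explicit parent, child, and command-line condition that can be tested.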
Example Use Cases
| Use case | Useful data | Possible false-positive source |
|---|---|---|
| Impossible travel | Identity provider login logs | VPN egress location changes |
| Suspicious PowerShell | EDR process telemetry | Approved admin scripts |
| Data exfiltration | Proxy, firewall, DLP, storage logs | Large approved backup |
| Privilege escalation | Directory and cloud audit logs | Approved access change ticket |
| Malware beaconing | DNS, proxy, EDR network events | Software update check-in |
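The impossible-travel row can be made concrete with an implied-speed check between two logins. The 900 km/h airliner-speed threshold and the login record shape are illustrative assumptions; note that a VPN egress change, as the table warns, would look exactly like this to the rule.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def is_impossible_travel(login_a: dict, login_b: dict, max_speed_kmh: float = 900) -> bool:
    """Flag consecutive logins whose implied travel speed exceeds max_speed_kmh.

    Each login is assumed to carry a Unix timestamp "ts" and "lat"/"lon" fields.
    """
    hours = abs(login_b["ts"] - login_a["ts"]) / 3600
    if hours == 0:
        return True  # simultaneous logins from two places: always worth a look
    distance = haversine_km(login_a["lat"], login_a["lon"], login_b["lat"], login_b["lon"])
    return distance / hours > max_speed_kmh
```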
False-Positive Handling
A false positive occurs when a detection fires but the activity is not actually suspicious or the logic matched the wrong condition. The goal is not to silence alerts as quickly as possible. The goal is to improve accuracy while preserving the ability to catch real attacks.
Example alert:
2026-04-29T01:15:00Z alert name="Large outbound transfer" host=FIN-SQL-02 bytes_out=48000000000 dst=203.0.113.210 severity=high
Investigation:
2026-04-29T01:00:00Z backup job=quarterly_finance_archive host=FIN-SQL-02 dst=203.0.113.210 ticket=CHG-771 approved=true
Depending on how the detection is worded, this is either a benign true positive (the rule matched exactly the behavior it describes, and that behavior was approved) or a false positive (the rule claimed something the activity was not). If the rule claims "possible exfiltration" and the transfer was an approved backup, the analyst should document the reason and tune the rule. Good tuning might check for approved backup job names, known backup destinations, change-ticket windows, and expected service accounts.
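The correlation described above can be sketched as a pre-alert enrichment step: look for an approved backup job on the same host and destination inside a time window around the transfer. The record shapes and the one-hour window are illustrative assumptions.

```python
def is_approved_backup(alert: dict, backup_jobs: list, window_s: int = 3600) -> bool:
    """Return True when the alert matches an approved backup job on the same
    host and destination within window_s seconds of the transfer.

    Alert and job records are assumed to carry Unix "ts", "host", and "dst"
    fields; jobs additionally carry an "approved" flag from the change ticket.
    """
    for job in backup_jobs:
        if (
            job.get("approved")
            and job["host"] == alert["host"]
            and job["dst"] == alert["dst"]
            and abs(alert["ts"] - job["ts"]) <= window_s
        ):
            return True
    return False
```

A match would let the rule annotate the alert with the ticket number rather than fire at high severity, which preserves the audit trail instead of silencing the signal.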
Tuning Methods
| Method | Risk to keep in mind |
|---|---|
| Threshold adjustment | A higher threshold may miss smaller attacks |
| Allowlist | Attackers may abuse trusted tools or destinations |
| Time-window suppression | Attacks can happen during maintenance windows |
| Asset-based severity | Low-value assets can still be entry points |
| User or role context | Privileged users can be compromised too |
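Several of the methods above can adjust severity rather than suppress the alert outright, so tuned-down events remain reviewable. A minimal sketch, assuming four severity levels and two context signals (both names illustrative):

```python
def adjust_severity(base: str, asset_critical: bool, in_change_window: bool) -> str:
    """Promote severity on critical assets; demote (never drop) inside a
    documented change window. Levels and signals are illustrative."""
    levels = ["low", "medium", "high", "critical"]
    i = levels.index(base)
    if asset_critical:
        i = min(i + 1, len(levels) - 1)
    if in_change_window:
        i = max(i - 1, 0)  # demote, but never silence the alert entirely
    return levels[i]
```

The design choice matters: because the floor is "low" rather than "suppressed", a change window on a low-value asset still produces an alert, which addresses the risks listed in the table.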
Detection Quality Questions
- Does the detection map to a realistic threat behavior?
- Are required logs actually available and reliable?
- Does the alert include enough context for triage?
- Are exceptions documented and reviewed?
- Is there a test event or simulation to confirm the rule still works?
- Is the response action proportional to the confidence and impact?
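The test-event question in the list above can be made concrete with a small regression check: replay one synthetic known-bad event and one known-good baseline event through the rule, and fail if its behavior has drifted. The callable-rule interface is an illustrative assumption.

```python
def verify_detection(rule, true_positive: dict, true_negative: dict) -> bool:
    """Return True only if the rule still fires on the simulated attack
    event and stays quiet on the benign baseline event.

    `rule` is any callable taking an event dict and returning a bool.
    """
    return bool(rule(true_positive)) and not rule(true_negative)
```

Running this after every change to log sources, agent policies, or the rule itself catches silent breakage, which is one of the common traps noted below.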
Common Traps
- Disabling a noisy rule without understanding what it was meant to catch.
- Creating permanent allowlists for entire administrator groups.
- Treating every false positive as a reason to reduce logging.
- Measuring detection success only by alert volume.
- Forgetting to retest detections after changing log sources or agent policies.
What is the best first step before tuning a noisy detection?
A large outbound transfer alert is caused by an approved backup job during a documented change window. What should the analyst do?
Which items should be documented for a detection use case? Select three.