7.3 Log Review, Code Review, Misuse Cases, and Compliance Checks
Key Takeaways
- Log review evaluates whether events are collected, protected, correlated, and acted on in time to support detection and accountability.
- Code review and misuse case testing find design and implementation weaknesses before attackers or users can abuse them.
- Compliance checks verify adherence to required criteria, but compliance evidence should not be confused with complete security assurance.
- These techniques work best when they are mapped to threat scenarios, data sensitivity, and control objectives.
Reviewing Evidence, Design, and Abuse Paths
Log review is more than opening a dashboard after something goes wrong. It is the planned examination of event records to determine whether control-relevant activity is visible, accurate, timely, protected, and reviewed. Logs support detection, investigation, accountability, audit evidence, and operational troubleshooting. They are only useful if the right events are collected and someone is responsible for acting on them.
A log strategy should start with use cases. Administrative privilege changes, failed authentication bursts, new MFA enrollment, data exports, firewall policy changes, malware alerts, cloud role assumption, database changes, and disabled logging are examples of events that may deserve review. Collecting everything without purpose can overwhelm storage and analysts. Collecting too little leaves investigators blind.
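The use-case-first approach above can be sketched as a simple collection plan that records, for each event type, why it is collected and how quickly it must be reviewed. The event names, purposes, and cadence tiers here are illustrative assumptions, not a standard taxonomy:

```python
# Use-case-driven log collection plan. Each event type carries its purpose
# and review cadence so nothing is collected without an owner and a reason.
# Event names and tiers are examples, not a required schema.
LOG_USE_CASES = {
    "admin_privilege_change": {"purpose": "accountability", "review": "daily"},
    "failed_auth_burst":      {"purpose": "detection",      "review": "real-time"},
    "mfa_enrollment":         {"purpose": "detection",      "review": "daily"},
    "data_export":            {"purpose": "investigation",  "review": "daily"},
    "logging_disabled":       {"purpose": "integrity",      "review": "real-time"},
}

def events_for_review(tier):
    """Return event types whose review cadence matches the given tier."""
    return sorted(e for e, meta in LOG_USE_CASES.items()
                  if meta["review"] == tier)
```

A plan like this makes the trade-off explicit: anything not in the table needs a justification before it is collected, and anything in the table has a defined review cadence.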
Log quality matters. Events need synchronized time, source identity, asset identity, action, result, source address, affected object, and enough context to reconstruct the story. Logs should be protected from tampering, retained according to legal and business requirements, and monitored for gaps. If administrators can erase their own audit trails, accountability is weak no matter how many events are collected.
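The field list above can be enforced with a completeness check at ingestion time, so gaps are detected before an investigation needs the missing context. This is a minimal sketch; the field names mirror the list in the paragraph but are not a required schema:

```python
# Completeness check for individual log events, based on the quality fields
# discussed above. Field names are illustrative assumptions.
REQUIRED_FIELDS = {"timestamp", "source_identity", "asset", "action",
                   "result", "source_address", "affected_object"}

def missing_fields(event: dict) -> set:
    """Return the required fields an event record lacks."""
    return REQUIRED_FIELDS - event.keys()
```

Running a check like this against a sample of production events is a quick way to test whether investigators could actually reconstruct the story from what is being collected.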
Code review examines source code, configuration as code, scripts, infrastructure templates, and sometimes database logic for security flaws. It may be manual, automated, peer-based, tool-assisted, or part of a secure development lifecycle. Automated tools are good at repeatable patterns, but manual review is often needed for authorization logic, business rules, cryptographic misuse, input validation assumptions, and trust boundary mistakes.
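The gap between automated and manual review can be shown with a flaw that pattern-matching tools typically miss because the code is syntactically clean. The record store and handlers below are invented for illustration:

```python
# Illustrative only: an authorization flaw a scanner is unlikely to flag.
# There is no injection, no crash, no dangerous API call -- the code simply
# never asks whether the requester owns the record.
RECORDS = {"inv-1001": {"owner": "alice", "amount": 250}}

def get_invoice_insecure(session_user: str, invoice_id: str):
    # Looks fine to a static tool; a manual reviewer asks:
    # where is the ownership check?
    return RECORDS[invoice_id]

def get_invoice_secure(session_user: str, invoice_id: str):
    record = RECORDS[invoice_id]
    if record["owner"] != session_user:
        # The trust-boundary check the automated scan never demanded.
        raise PermissionError("not your invoice")
    return record
```

The insecure version passes every functional test written from the requirements, which is exactly why authorization logic and trust boundaries need human review.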
Misuse case testing asks how a system could be intentionally abused. Instead of only proving the expected workflow works, it challenges the system with hostile or improper behavior. Could a customer change another customer's record by modifying an identifier? Could a user approve a transaction they created? Could an API be called out of order? Could a low-privilege user force an export through a forgotten endpoint? Misuse cases connect threat modeling to practical testing.
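One of those questions, changing another customer's record by modifying an identifier, can be written directly as a negative test. The toy update function and invoice store below are assumptions for illustration; the point is the shape of the test, which asserts that abuse fails rather than that the happy path succeeds:

```python
# A misuse case expressed as a negative test: tampering with an identifier
# must be rejected and must leave the victim's data unchanged.
INVOICES = {"inv-1": {"owner": "alice", "status": "draft"},
            "inv-2": {"owner": "bob",   "status": "draft"}}

def update_status(user, invoice_id, status):
    invoice = INVOICES[invoice_id]
    if invoice["owner"] != user:
        raise PermissionError("forbidden")
    invoice["status"] = status
    return invoice

def test_idor_misuse_case():
    # Misuse case: alice swaps the identifier to target bob's invoice.
    try:
        update_status("alice", "inv-2", "approved")
        assert False, "tampered identifier should be rejected"
    except PermissionError:
        pass
    # Abuse must also be harmless: bob's invoice is untouched.
    assert INVOICES["inv-2"]["status"] == "draft"
```

Functional suites rarely contain tests like this because requirements describe what should work, not what should be impossible.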
Compliance checks compare controls against required criteria such as laws, regulations, contractual terms, internal policy, standards, or audit frameworks. They may verify encryption settings, retention rules, access reviews, segregation of duties, data residency, change approvals, or incident notification procedures. Compliance checks are necessary in many environments, but they are bounded by the criteria. Passing a checklist does not prove the organization is secure against every relevant threat.
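A simple automated compliance check compares observed settings against the required criteria and reports each gap as a finding. The criteria values below are illustrative, not drawn from any specific regulation:

```python
# Compare observed configuration against required compliance criteria.
# Requirement names and values are examples only.
REQUIRED = {"tls_min_version": 1.2, "log_retention_days": 365,
            "mfa_required": True}

def _meets(actual, required):
    """True if an observed value satisfies a required criterion."""
    if actual is None:
        return False
    if isinstance(required, bool):
        return actual == required   # flags need an exact match
    return actual >= required       # numeric criteria are minimums

def compliance_findings(observed: dict) -> list:
    """Return the requirements the observed configuration fails to meet."""
    return [key for key, required in REQUIRED.items()
            if not _meets(observed.get(key), required)]
```

Note what this check cannot see: it proves the settings match the criteria, not that the criteria cover every relevant threat, which is exactly the limitation the paragraph describes.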
| Technique | Looks at | Strong for | Weak if |
|---|---|---|---|
| Log review | Events and monitoring evidence | Detection, accountability, investigation | Logs are noisy, incomplete, or not acted on |
| Code review | Design and implementation | Preventing flaws before deployment | Review ignores business logic and authorization paths |
| Misuse case testing | Abusive scenarios and negative paths | Finding practical abuse cases | Testers only repeat happy-path requirements |
| Compliance check | Required control criteria | Regulatory and contractual evidence | Treated as full risk assurance |
The best results come from combining techniques. Suppose a healthcare portal allows clinicians to view patient records. Code review can inspect authorization checks. Misuse case testing can attempt access to a patient outside the clinician's relationship. Log review can confirm unauthorized attempts are recorded and alerted. Compliance checks can verify access review and privacy requirements. Each method covers a different question.
Log review programs need escalation criteria. A failed login from one employee may be routine. Hundreds of failures across many accounts, followed by a successful login from a new geography, is a different matter. A cloud storage bucket permission change may be normal during deployment, but the same change outside an approved change window may require investigation. Review procedures should define thresholds, ownership, and response paths.
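The escalation logic just described can be sketched as a triage rule: isolated failures stay routine, but a burst of failures across many accounts followed by a success from an unfamiliar geography escalates. The thresholds, field names, and geography check are assumptions for illustration:

```python
# Triage sketch for authentication events. Thresholds and event fields are
# illustrative; real rules would be tuned to the environment.
def triage_auth_events(events, fail_threshold=100, account_threshold=10):
    """Return 'escalate' or 'routine' for a window of auth events."""
    fails = [e for e in events if e["result"] == "fail"]
    accounts = {e["account"] for e in fails}
    burst = len(fails) >= fail_threshold and len(accounts) >= account_threshold
    for e in events:
        if (burst and e["result"] == "success"
                and e.get("geo") not in e.get("known_geos", ())):
            # Burst plus success from an unfamiliar geography: investigate.
            return "escalate"
    return "routine"
```

The value of writing the rule down is that thresholds, ownership, and the response path stop being tribal knowledge and become reviewable artifacts.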
Code review should be risk-based. A small internal display change does not need the same depth as a new payment approval workflow. High-risk code includes authentication, authorization, cryptography, session management, input parsing, deserialization, file upload, administrative functions, data export, and integrations. Security champions, secure coding standards, peer review, static analysis, and threat model updates can make review repeatable.
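Risk-based triage of this kind can be made repeatable with a simple rule that selects review depth from what a change touches. The path keywords and depth labels below are illustrative assumptions, not a prescribed policy:

```python
# Risk-based review triage: changes touching high-risk areas get deeper
# review. Keywords and depth labels are examples only.
HIGH_RISK = ("auth", "crypto", "session", "deserialize", "upload",
             "admin", "export", "payment")

def review_depth(changed_paths):
    """Pick a review depth from the file paths a change touches."""
    if any(k in path for path in changed_paths for k in HIGH_RISK):
        return "manual review + static analysis + threat model update"
    return "peer review + static analysis"
```

Even a crude rule like this prevents the common failure mode where a payment workflow change and a label change get the same five-minute review.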
Review Planning Checklist
- Define the control objective or threat scenario before choosing evidence.
- Confirm logs include identity, asset, action, result, time, and affected object where practical.
- Protect audit trails from unauthorized alteration and monitor logging failures.
- Use automated code checks for repeatable defects and manual review for design and business logic.
- Build misuse cases from threat models, fraud scenarios, prior incidents, and data sensitivity.
- Treat compliance checks as required evidence, then add risk-based testing where checklist scope is narrow.
Misuse cases are especially valuable because many damaging failures occur in allowed features used the wrong way. A user may not exploit memory corruption; they may abuse weak workflow design. A partner may not break encryption; they may call an API at a higher volume than intended. A finance employee may not bypass login; they may combine permissions that should be separated. Testing must include these business abuse paths.
Compliance checks should be traceable. A reviewer should identify the requirement, control, evidence, sample, date, system, owner, and conclusion. Weak evidence such as an old screenshot with no system context may not support the conclusion. Strong evidence is current, relevant, complete, and tied to the control period. Where compliance and risk diverge, management should document both the requirement status and the residual risk.
At the manager level, the goal is integrated assurance. Logs show what happened, code review reduces what could go wrong, misuse cases test how harm might occur, and compliance checks show whether required obligations are met. None is enough alone. Together they create a defensible view of prevention, detection, accountability, and governance.
Review Questions
- A web application passes normal functional tests, but security wants to test whether a user can modify another user's invoice by changing an ID in the URL. What technique is being applied?
- Which log review weakness most directly undermines accountability for privileged actions?
- What is the main limitation of relying only on compliance checks for security assurance?