9.3 Secure Coding, Code Review, and Application Testing

Key Takeaways

  • Secure coding standards should address common defect classes and the approved patterns teams must use to prevent them.
  • Code review is a control for logic, authorization, secrets, error handling, and maintainability, not only syntax quality.
  • Application testing should combine automated tools with risk-based manual analysis because no single test type sees every weakness.
  • Security findings need severity, exploitability, business impact, owner assignment, retesting, and exception handling.
Last updated: May 2026

Building and Verifying Secure Behavior

Secure coding is the practice of using approved patterns that prevent common weaknesses before they enter the codebase. It should be supported by standards, training, libraries, code review, automated checks, and defect management. A manager should not expect every developer to become a security specialist, but the organization should make secure defaults easy and unsafe patterns visible.

Secure coding standards normally address input validation, output encoding, authentication, authorization, session management, cryptography use, secret handling, error handling, logging, file handling, serialization, memory safety, API security, and dependency use. Standards should tell teams what to do in their local technology stack. A standard that merely says "validate input" is less useful than one that names approved validation libraries, central authorization helpers, and logging rules.
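As a minimal sketch of what "name the approved validation library" might look like in practice, the helper below applies allow-list validation with a central, reusable rule. The function name and pattern are illustrative, not from any specific library:

```python
import re

# Hypothetical central validation helper a coding standard might point teams to.
# The allow-list pattern is an illustrative assumption.
_USERNAME_RE = re.compile(r"^[a-z0-9_]{3,32}$")

def validate_username(value: str) -> str:
    """Allow-list validation: reject anything outside the approved pattern."""
    if not _USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value

print(validate_username("alice_01"))              # accepted
try:
    validate_username("alice; DROP TABLE users")  # rejected
except ValueError as exc:
    print("rejected:", exc)
```

Centralizing the rule means reviewers check one pattern once, rather than auditing ad hoc validation scattered across the codebase.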

Many application flaws are not exotic. Injection occurs when untrusted input is interpreted as commands or queries. Broken authorization occurs when users can act on objects or functions they should not access. Cross-site scripting occurs when untrusted content is rendered in a browser without safe handling. Insecure deserialization, weak session handling, unsafe redirects, sensitive data exposure, and missing rate controls remain common because teams often optimize for a feature's success path rather than its failure and abuse paths.
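The injection point can be shown in a few lines. The sketch below uses Python's standard sqlite3 module to contrast string concatenation, where input becomes part of the query, with parameter binding, where input stays data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Unsafe: the input is interpreted as part of the SQL statement.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: parameter binding keeps the input as a literal value, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # every row returned: the injection succeeded
print(safe)    # no rows returned: the input was treated as data
```

The same principle, keep untrusted input out of the interpreted channel, underlies defenses against command injection and cross-site scripting as well.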

Test or review type | What it sees well | What it may miss
Peer code review | Logic, maintainability, authorization intent, risky changes | Runtime configuration and some dependency issues
SAST | Source patterns, tainted input paths, insecure API use | Business logic and runtime-only behavior
DAST | Running application behavior from outside | Hidden code paths and source-level context
IAST | Runtime behavior with code context | Coverage gaps when tests do not exercise paths
SCA | Vulnerable or risky third-party components | Custom logic flaws and insecure integration
Fuzzing | Input handling crashes and unexpected parser behavior | Authorization and business workflow abuse
Penetration testing | Chained exploitation and realistic attack paths | Full code coverage and routine regression control

Code review should be risk-aware. A change to text styling does not need the same scrutiny as a change to authentication, payment logic, authorization middleware, cryptographic handling, or administrative functions. The review process should require extra attention for security-sensitive files, new external integrations, data model changes, and changes that alter trust boundaries. Pull request templates can prompt reviewers to consider data exposure, secrets, logging, and rollback.
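One way to operationalize risk-aware review is to flag changed paths that touch security-sensitive areas so the pull request is routed for extra scrutiny. The path prefixes below are assumptions for illustration, not a standard layout:

```python
# Hypothetical reviewer aid: flag changed files in security-sensitive areas.
# The directory prefixes are illustrative assumptions about the repo layout.
SENSITIVE_PREFIXES = ("auth/", "payments/", "middleware/authz", "crypto/", "admin/")

def needs_security_review(changed_files: list[str]) -> list[str]:
    """Return the subset of changed paths that warrant extra review attention."""
    return [f for f in changed_files if f.startswith(SENSITIVE_PREFIXES)]

changed = ["ui/styles.css", "auth/session.py", "payments/refund.py"]
print(needs_security_review(changed))  # ['auth/session.py', 'payments/refund.py']
```

In practice this kind of rule often lives in a CI check or code-owners configuration rather than application code, but the logic is the same: sensitive areas get a mandatory second look.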

A common management failure is treating code review as proof that a system is secure. Reviewers may be rushed, may lack context, or may miss runtime effects. Automated scanners may also create false positives and false negatives. A strong program layers controls: secure coding standards, branch protection, peer review, static analysis, dependency scanning, test coverage, dynamic testing, and targeted manual assessment for high-risk workflows.

Application testing should match risk and lifecycle stage. Unit tests can verify authorization functions and input validation. Integration tests can verify service-to-service identity and policy enforcement. Static analysis can run early in development. Dynamic scans can run against deployed test environments. Fuzzing may be valuable for parsers, file processors, protocol handlers, and APIs. Penetration testing is best used for higher-risk systems or major releases where chained attacks and business logic matter.
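A unit test for an authorization function can be very small. The sketch below assumes a simple role-plus-region model; the function and its rules are hypothetical:

```python
# Minimal sketch of unit-testing an authorization function.
# The role/region model and function name are illustrative assumptions.
def can_export(user_role: str, user_region: str, account_region: str) -> bool:
    """Support staff may export only accounts in their own region."""
    return user_role == "support" and user_region == account_region

# Verify intended success and, just as importantly, intended denial.
assert can_export("support", "EU", "EU") is True
assert can_export("support", "EU", "US") is False   # wrong region: denied
assert can_export("customer", "EU", "EU") is False  # wrong role: denied
print("authorization unit tests passed")
```

Because the check is a pure function, it can run on every commit at near-zero cost, long before a dynamic scan or penetration test would catch the same gap.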

Misuse and abuse-case testing are especially important for CISSP thinking. Normal user stories may say a customer can update an address. Abuse cases ask whether a customer can update another customer's address, bypass approval, replay a request, automate refund attempts, or force error messages to reveal internal state. Security testing must verify intended denial as well as intended success.
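An abuse-case test makes denial explicit: it attempts the forbidden action and asserts both the refusal and the unchanged state. The address store and update function below are hypothetical:

```python
# Abuse-case sketch: verify intended denial, not only intended success.
# The in-memory store and function are illustrative assumptions.
addresses = {"cust-1": "1 Main St", "cust-2": "2 Oak Ave"}

def update_address(actor_id: str, target_id: str, new_address: str) -> bool:
    # Object-level check: a customer may change only their own record.
    if actor_id != target_id:
        return False
    addresses[target_id] = new_address
    return True

assert update_address("cust-1", "cust-1", "9 Elm Rd") is True   # normal user story
assert update_address("cust-1", "cust-2", "attacker") is False  # abuse case: denied
assert addresses["cust-2"] == "2 Oak Ave"                       # record unchanged
print("abuse-case checks passed")
```

Note the third assertion: an abuse-case test should confirm the side effect did not happen, not merely that an error was returned.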

Findings need governance. A critical authorization flaw on an internet-facing application should not be buried in a general defect backlog. Findings should include severity, affected assets, exploitability, business impact, evidence, owner, target remediation date, and retest status. If the business accepts a finding temporarily, the exception should state compensating controls, expiration, and the authority who accepted the residual risk.

Checklist for reviewing a security-sensitive change:

  • Does the change cross a trust boundary or affect privileged behavior?
  • Are authentication and authorization enforced on the server side?
  • Are object-level permissions verified, not inferred from user interface controls?
  • Are inputs validated and outputs encoded in the correct context?
  • Are secrets excluded from code, logs, images, and test data?
  • Are errors safe for users but useful for operators?
  • Are logs sufficient for investigation without exposing sensitive values?
  • Are tests included for misuse, denied access, and edge cases?

Scenario: a team adds a bulk export function for support staff. The feature passes normal tests, but code review shows that access is checked only on the page that displays the export button. A direct API call can request exports for accounts outside the staff member's region. The correct fix is server-side authorization at the export action and object scope. Hiding buttons is not a security control by itself.
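The fix described in this scenario can be sketched as follows: the authorization and object-scope check sit inside the export action itself, so a direct API call hits the same control as the button. The data and helper names are hypothetical:

```python
# Sketch of the fix: enforce authorization at the export action, scoped to
# the object, instead of only hiding the UI button. Data is an illustrative
# assumption.
ACCOUNTS = {"acct-1": {"region": "EU"}, "acct-2": {"region": "US"}}

def export_account(staff_region: str, account_id: str) -> str:
    account = ACCOUNTS.get(account_id)
    if account is None or account["region"] != staff_region:
        # Deny at the action, regardless of what the UI showed.
        raise PermissionError("export not permitted for this account")
    return f"export of {account_id} complete"

print(export_account("EU", "acct-1"))  # in-region request: allowed
try:
    export_account("EU", "acct-2")     # out-of-region direct API call: denied
except PermissionError as exc:
    print("denied:", exc)
```

Note that an unknown account id and an out-of-scope account produce the same denial, which also avoids leaking whether a given account exists.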

Scenario: a static analysis tool flags hard-coded test credentials in a repository. The manager should ensure the secret is revoked or rotated, repository history exposure is assessed, affected environments are checked, and the coding standard is updated if needed. Closing the finding by deleting only the current line may leave the credential active and exposed in version history.
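Pattern-based secret detection of the kind that produced this finding can be sketched with a few regular expressions. The patterns below are rough illustrations, not a complete detector, and a real remediation must also cover version history and key rotation, as the scenario notes:

```python
import re

# Illustrative secret-pattern scan over file contents. The patterns are
# rough examples and assumptions, not an exhaustive ruleset.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                   # AWS access key id shape
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
]

def find_secrets(text: str) -> list[str]:
    """Return every substring that matches a known secret pattern."""
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

sample = 'db_password = "hunter2"\nkey = "AKIAABCDEFGHIJKLMNOP"'
print(find_secrets(sample))
```

Scanning only the current working tree is exactly the trap the scenario warns about: the credential may still live in earlier commits, so history review and rotation remain mandatory even after the match disappears.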

The CISSP emphasis is not that every tool must be used everywhere. It is that the organization chooses a testing strategy proportionate to risk, uses results to drive remediation, and prevents unresolved critical defects from quietly becoming production risk. Testing is evidence for a decision, not a substitute for ownership.

Test Your Knowledge

A web application hides an export button from unauthorized users, but the API still allows direct export requests. What is the core security issue?

Test Your Knowledge

Which testing combination best supports a high-risk release?

Test Your Knowledge

A hard-coded credential is found in source control. What should the manager require besides deleting the line?
