
5.6 Defect Management

Key Takeaways

  • Defect management defines how anomalies are logged, analyzed, classified, handled, tracked, and closed.
  • A reported anomaly may become a confirmed defect, false positive, duplicate, change request, or rejected report.
  • A useful defect report gives enough context for analysis, reproduction, prioritization, fixing, retesting, and process improvement.
  • Severity describes impact; priority describes urgency or business order for fixing.
  • Static testing findings should be handled with a similar level of discipline when appropriate.
Last updated: May 2026

Purpose of Defect Management

One major objective of testing is to find defects, but finding an anomaly is only the start. The team needs an agreed process for logging, analyzing, classifying, deciding what to do, tracking progress, confirming fixes, and closing reports. Without that process, defects get lost, duplicated, argued about, or fixed without evidence.

The word "defect" is often used broadly in test work, but a reported anomaly may later be classified differently. It may be a true defect, a false positive, a duplicate, a documentation issue, an environment problem, a test data problem, a user misunderstanding, or a change request. The defect management workflow resolves that classification.

Typical Defect Lifecycle

A simple lifecycle may include new, open, analyzed, assigned, fixed, ready for retest, retested, closed, reopened, deferred, duplicate, rejected, or awaiting information. Organizations use different names, but the principle is the same: each report has a visible status and a controlled path from discovery to closure.

A typical flow is:

  1. Log the anomaly with evidence.
  2. Triage and classify it.
  3. Decide whether to fix, defer, reject, accept, or convert it.
  4. Assign ownership if work is needed.
  5. Fix and build the change.
  6. Confirm the fix through retesting and regression testing as needed.
  7. Close the report or reopen it if the issue remains.
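The controlled path above can be sketched as a small state machine. This is a minimal, hypothetical sketch: the state names and allowed transitions are illustrative choices based on the lifecycle states listed in this section, not a standard workflow that every tool enforces.

```python
# Hypothetical defect lifecycle as a state machine.
# State names and transitions are illustrative, not prescriptive.
ALLOWED_TRANSITIONS = {
    "new": {"open", "rejected", "duplicate"},
    "open": {"analyzed", "deferred", "rejected", "duplicate"},
    "analyzed": {"assigned", "deferred", "rejected"},
    "assigned": {"fixed"},
    "fixed": {"ready for retest"},
    "ready for retest": {"retested"},
    "retested": {"closed", "reopened"},
    "reopened": {"assigned"},
    "deferred": {"assigned"},
    "closed": set(),       # terminal states have no outgoing transitions
    "rejected": set(),
    "duplicate": set(),
}

def transition(current: str, target: str) -> str:
    """Move a report to a new status, enforcing the controlled path."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current!r} -> {target!r}")
    return target
```

The point of modeling the lifecycle this way is that a report can never jump silently from "new" to "closed": every status change is visible and must follow an agreed edge in the workflow.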

The process should be followed by all relevant stakeholders. Developers, testers, product owners, support staff, analysts, and managers may all interact with defect reports. The process should also define classification rules so severity, priority, duplicates, rejected reports, and deferred defects are handled consistently.

Defect Report Objectives

A good defect report helps the people responsible for resolution understand the issue. It also supports quality tracking and process improvement. If many high-severity defects cluster around one module, one supplier, one requirement type, or one test level, the team can improve its development and test process.

A poor report says, "Checkout broken." A useful report says what was tested, what version was used, what steps were taken, what data was entered, what was expected, what happened instead, how often it happens, what evidence exists, and why it matters. The report should reduce investigation time, not transfer confusion to someone else.

Typical Defect Report Fields

  • Unique identifier: gives the report a stable reference
  • Title: summarizes the anomaly clearly
  • Date, author, organization, role: shows who observed it and when
  • Test object and environment: identifies the product version and conditions
  • Context: links test case, activity, SDLC phase, technique, data, or checklist
  • Steps and evidence: supports reproduction and analysis
  • Expected and actual results: states the failure clearly
  • Severity: shows degree of impact
  • Priority: shows urgency or order for fixing
  • Status: shows workflow state
  • References: links to requirements, tests, logs, screenshots, recordings, or dumps
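The fields above map naturally onto a record type. The sketch below is one illustrative way to represent a defect report; the field names are assumptions chosen to mirror this section, not the schema of any particular tool.

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    identifier: str                  # unique, stable reference
    title: str                       # clear summary of the anomaly
    author: str                      # who observed it
    date: str                        # when it was observed
    test_object: str                 # product and version
    environment: str                 # conditions under which it was seen
    steps: list[str]                 # reproduction steps
    expected_result: str
    actual_result: str
    severity: str                    # degree of impact
    priority: str                    # urgency / order for fixing
    status: str = "new"              # workflow state, starts at "new"
    references: list[str] = field(default_factory=list)  # requirements, logs, screenshots

# Example report, reusing the invoice scenario from this section:
report = DefectReport(
    identifier="D-2041",
    title="Invoice total ignores tax",
    author="A. Tester",
    date="2026-05-10",
    test_object="Shop web app v3.2.1",        # hypothetical version
    environment="Chrome 124 / staging",       # hypothetical environment
    steps=["Log in as customer", "Add item priced $100", "Open invoice page"],
    expected_result="The invoice total should be $108.25 after tax",
    actual_result="The page displays $18.25",
    severity="high",
    priority="high",
)
```

Note that every field answers a question a resolver would otherwise have to ask: which build, which steps, what was expected, what happened instead.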

Severity and priority are not the same. Severity is the impact on stakeholders or requirements. Priority is how soon the team should address it. A typo on a legal disclosure page may be low technical severity but high priority before release. A crash in an obsolete admin screen may be high severity but lower priority if the screen is disabled and replacement is scheduled.
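The severity/priority distinction can be made concrete with the two examples above: in a fix queue, priority decides the order, even when it disagrees with severity. The `PRIORITY_RANK` mapping and defect records below are illustrative assumptions, not standard values.

```python
# Illustrative only: priority, not severity, drives the fix order.
PRIORITY_RANK = {"high": 0, "medium": 1, "low": 2}

defects = [
    {"id": "D-101", "title": "Crash in obsolete admin screen",
     "severity": "high", "priority": "low"},    # screen disabled, replacement scheduled
    {"id": "D-102", "title": "Typo on legal disclosure page",
     "severity": "low", "priority": "high"},    # must be fixed before release
]

fix_queue = sorted(defects, key=lambda d: PRIORITY_RANK[d["priority"]])
print([d["id"] for d in fix_queue])  # the typo (D-102) is fixed first despite low severity
```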

Reproduction and Evidence

Reproduction steps should be specific enough that another person can follow them. Include preconditions, input data, user role, environment, configuration, and any timing requirements. For intermittent failures, include frequency, timestamps, logs, screenshots, recordings, or monitoring data.

Expected and actual results should be concrete. "Should work" is too vague. "The invoice total should be $108.25 after tax" is an actionable expected result; "The page displays $18.25" is an actionable actual result. The difference between the two is the failure being reported.

Static Testing Findings

Findings from reviews and static analysis can be handled through the same kind of disciplined process. A review comment about an ambiguous requirement or a static analyzer warning may not require the same fields as a dynamic test failure, but it should still be logged, classified, assigned, and closed when the process requires evidence.

The exam trap is assuming every anomaly should be fixed immediately. Triage may decide to fix, defer, reject, mark duplicate, accept the risk, improve the test, or raise a change request. The important point is that the decision is visible and controlled.
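The triage outcomes listed above can be captured explicitly, so the decision is recorded rather than implied. This is a minimal sketch; the enum values and the `record_triage` helper are hypothetical names for illustration.

```python
from enum import Enum

class TriageDecision(Enum):
    FIX = "fix"
    DEFER = "defer"
    REJECT = "reject"
    DUPLICATE = "duplicate"
    ACCEPT_RISK = "accept the risk"
    IMPROVE_TEST = "improve the test"
    CHANGE_REQUEST = "raise a change request"

def record_triage(report_id: str, decision: TriageDecision, rationale: str) -> dict:
    """Make the triage decision visible and auditable rather than implicit."""
    return {"report": report_id, "decision": decision.value, "rationale": rationale}

entry = record_triage("D-7", TriageDecision.DEFER,
                      "workaround exists; deadline is months away")
```

Recording the rationale alongside the decision is what makes a deferral or rejection defensible later, instead of looking like a lost report.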

Test Your Knowledge

A failure prevents all users from submitting tax forms, but the deadline is months away and a workaround exists. Which defect field describes the business urgency for fixing it?

Test Your Knowledge (Multi-Select)

Which details belong in a strong dynamic-testing defect report?

Select all that apply

Test object version and test environment
Steps to reproduce, expected result, and actual result
Severity, priority, status, and references to related testware
Only the tester's opinion that the product is poor
No date or author because tools always infer all context