5.6 Defect Management
Key Takeaways
- Defect management defines how anomalies are logged, analyzed, classified, handled, tracked, and closed.
- A reported anomaly may become a confirmed defect, false positive, duplicate, change request, or rejected report.
- A useful defect report gives enough context for analysis, reproduction, prioritization, fixing, retesting, and process improvement.
- Severity describes impact; priority describes urgency or business order for fixing.
- Static testing findings should be handled with a similar level of discipline when appropriate.
Purpose of Defect Management
One major objective of testing is to find defects, but finding an anomaly is only the start. The team needs an agreed process for logging, analyzing, classifying, deciding what to do, tracking progress, confirming fixes, and closing reports. Without that process, defects get lost, duplicated, argued about, or fixed without evidence.
The word "defect" is often used broadly in test work, but a reported anomaly may later be classified differently. It may be a true defect, a false positive, a duplicate, a documentation issue, an environment problem, a test data problem, a user misunderstanding, or a change request. The defect management workflow resolves that classification.
Typical Defect Lifecycle
A simple lifecycle may include new, open, analyzed, assigned, fixed, ready for retest, retested, closed, reopened, deferred, duplicate, rejected, or awaiting information. Organizations use different names, but the principle is the same: each report has a visible status and a controlled path from discovery to closure.
A typical flow is (a minimal code sketch follows this list):
- Log the anomaly with evidence.
- Triage and classify it.
- Decide whether to fix, defer, reject, accept, or convert it.
- Assign ownership if work is needed.
- Fix and build the change.
- Confirm the fix through retesting and regression testing as needed.
- Close the report or reopen it if the issue remains.
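As one illustration, here is a minimal sketch of such a lifecycle expressed as a status enum with allowed transitions. The status names and transitions are assumptions for illustration, not a prescribed standard; real tools and teams name and route states differently.

```python
from enum import Enum

class Status(Enum):
    NEW = "new"
    TRIAGED = "triaged"
    ASSIGNED = "assigned"
    FIXED = "fixed"
    RETESTED = "retested"
    CLOSED = "closed"
    REOPENED = "reopened"
    DEFERRED = "deferred"
    REJECTED = "rejected"
    DUPLICATE = "duplicate"

# Allowed transitions: each report has a visible status and a controlled
# path from discovery to closure.
TRANSITIONS = {
    Status.NEW: {Status.TRIAGED},
    Status.TRIAGED: {Status.ASSIGNED, Status.DEFERRED, Status.REJECTED, Status.DUPLICATE},
    Status.ASSIGNED: {Status.FIXED},
    Status.FIXED: {Status.RETESTED},
    Status.RETESTED: {Status.CLOSED, Status.REOPENED},
    Status.REOPENED: {Status.ASSIGNED},
    Status.DEFERRED: {Status.TRIAGED},
    Status.REJECTED: set(),
    Status.DUPLICATE: set(),
    Status.CLOSED: {Status.REOPENED},
}

def move(current: Status, target: Status) -> Status:
    """Change status only along an allowed path; otherwise reject the move."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current.value} -> {target.value}")
    return target
```

The point of the sketch is the control, not the exact names: a report cannot jump from new to closed without passing through analysis, resolution, and confirmation.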
All relevant stakeholders should follow the process: developers, testers, product owners, support staff, analysts, and managers may all interact with defect reports. The process should also define classification rules so that severity, priority, duplicates, rejected reports, and deferred defects are handled consistently.
Defect Report Objectives
A good defect report helps the people responsible for resolution understand the issue. It also supports quality tracking and process improvement. If many high-severity defects cluster around one module, one supplier, one requirement type, or one test level, the team can improve its development and test process.
A poor report says, "Checkout broken." A useful report says what was tested, what version was used, what steps were taken, what data was entered, what was expected, what happened instead, how often it happens, what evidence exists, and why it matters. The report should reduce investigation time, not transfer confusion to someone else.
Typical Defect Report Fields
| Field | Purpose |
|---|---|
| Unique identifier | Gives the report a stable reference |
| Title | Summarizes the anomaly clearly |
| Date, author, organization, role | Shows who observed it and when |
| Test object and environment | Identifies the product version and conditions |
| Context | Links test case, activity, SDLC phase, technique, data, or checklist |
| Steps and evidence | Supports reproduction and analysis |
| Expected and actual results | States the failure clearly |
| Severity | Shows degree of impact |
| Priority | Shows urgency or order for fixing |
| Status | Shows workflow state |
| References | Links to requirements, tests, logs, screenshots, recordings, or dumps |
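A hedged sketch of how these fields might be captured as a structured record follows. The field names and enum values are illustrative assumptions, not mandated by any particular tool or standard.

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):   # degree of impact on stakeholders or requirements
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

class Priority(Enum):   # urgency or business order for fixing
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    URGENT = 4

@dataclass
class DefectReport:
    identifier: str                # unique, stable reference
    title: str                     # clear summary of the anomaly
    author: str
    date: str
    test_object: str               # product and version under test
    environment: str               # conditions under which it was observed
    steps: list[str]               # reproduction steps, data, preconditions
    expected_result: str
    actual_result: str
    severity: Severity
    priority: Priority
    status: str = "new"            # workflow state
    references: list[str] = field(default_factory=list)  # requirements, logs, screenshots
```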
Severity and priority are not the same. Severity is the impact on stakeholders or requirements. Priority is how soon the team should address it. A typo on a legal disclosure page may be low technical severity but high priority before release. A crash in an obsolete admin screen may be high severity but lower priority if the screen is disabled and replacement is scheduled.
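To make the distinction concrete, here is a small assumed example of a triage queue worked in priority order, where a low-severity item can legitimately come first. The titles and rankings are illustrative only.

```python
# Severity and priority are assigned independently at triage.
defects = [
    {"title": "Typo on legal disclosure page", "severity": "low", "priority": "high"},
    {"title": "Crash in disabled legacy admin screen", "severity": "high", "priority": "low"},
]

rank = {"low": 1, "medium": 2, "high": 3}

# The team works the queue by priority (urgency), not severity (impact),
# so the legal typo is addressed before the crash in the disabled screen.
for d in sorted(defects, key=lambda d: rank[d["priority"]], reverse=True):
    print(d["title"])
```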
Reproduction and Evidence
Reproduction steps should be specific enough that another person can follow them. Include preconditions, input data, user role, environment, configuration, and any timing requirements. For intermittent failures, include frequency, timestamps, logs, screenshots, recordings, or monitoring data.
Expected and actual results should be concrete. "Should work" is too vague. "The invoice total should be $108.25 after tax" is an actionable expected result; "The page displays $18.25" is an actionable actual result. The difference between expected and actual results is the failure being reported.
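As an assumed illustration, the same point expressed as a structured report fragment: concrete values plus enough context to reproduce the failure. The build number, SKU, and file names are hypothetical; only the dollar amounts come from the example above.

```python
# Hypothetical report fragment: concrete expected and actual values, not "should work".
report = {
    "steps": [
        "Log in as a standard customer on build 2.14.1 (staging)",
        "Add item SKU-4411 (price $100.00) to the cart",
        "Proceed to checkout with a shipping address in a taxed region",
    ],
    "expected_result": "Invoice total is $108.25 after tax",
    "actual_result": "Page displays $18.25",
    "frequency": "Reproduced 5 of 5 attempts",
    "evidence": ["checkout_total.png", "server.log (14:02-14:05 UTC)"],
}

# The gap between expected and actual is the failure being reported.
assert report["expected_result"] != report["actual_result"]
```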
Static Testing Findings
Findings from reviews and static analysis can be handled through the same kind of disciplined process. A review comment about an ambiguous requirement or a static analyzer warning may not require the same fields as a dynamic test failure, but it should still be logged, classified, assigned, and closed when the process requires evidence.
The exam trap is assuming every anomaly should be fixed immediately. Triage may decide to fix, defer, reject, mark duplicate, accept the risk, improve the test, or raise a change request. The important point is that the decision is visible and controlled.
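A minimal sketch (names assumed) of recording a triage decision, so the outcome, whatever it is, stays visible and controlled rather than being fixed or dropped silently:

```python
from dataclasses import dataclass
from enum import Enum

class TriageDecision(Enum):
    FIX = "fix"
    DEFER = "defer"
    REJECT = "reject"
    DUPLICATE = "duplicate"
    ACCEPT_RISK = "accept risk"
    IMPROVE_TEST = "improve test"
    CHANGE_REQUEST = "raise change request"

@dataclass
class TriageRecord:
    report_id: str
    decision: TriageDecision
    rationale: str    # why this outcome was chosen
    decided_by: str   # who made the call, so the decision stays visible

# Example: a static analysis warning judged a false positive is not fixed,
# but the decision and its reasoning are still logged.
record = TriageRecord(
    report_id="DEF-1042",
    decision=TriageDecision.REJECT,
    rationale="Analyzer warning is a confirmed false positive for generated code.",
    decided_by="triage board",
)
```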
Practice Questions
- A failure prevents all users from submitting tax forms, but the deadline is months away and a workaround exists. Which defect report field describes the business urgency of fixing it?
- Which details belong in a strong dynamic-testing defect report? (Select all that apply.)