
5.4 Monitoring, Control, and Reporting

Key Takeaways

  • Test monitoring gathers information about progress, quality, risks, coverage, cost, and exit criteria.
  • Test control uses monitoring information to guide corrective actions and keep testing effective and efficient.
  • Metrics should support decisions, not just display activity counts.
  • Test progress reports support ongoing control; test completion reports summarize a completed activity, level, cycle, iteration, or project.
  • Test status communication should be tailored to stakeholder needs, context, formality, and timing.
Last updated: May 2026

Monitoring vs Control

Test monitoring gathers information about testing. It compares actual progress with the test plan, checks whether exit criteria or related tasks are being met, and shows the current quality picture. Monitoring asks, "What is happening, and what does the evidence say?"

Test control uses monitoring information to guide action. It asks, "What should we change now?" Control actions may include reprioritizing tests, adjusting the schedule, adding resources, changing entry or exit decisions after rework, removing blockers, or escalating risks. Monitoring without control produces reports but no correction.

A simple example shows the difference. A dashboard shows that only 40 percent of high-risk test cases have run and the test environment is unstable. That is monitoring. The test manager moves low-risk cosmetic tests later, asks operations to stabilize the environment, and informs stakeholders of risk to the release date. That is control.
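The dashboard-to-decision flow above can be sketched in code. This is a minimal, hypothetical illustration: the `Snapshot` fields, thresholds, and action strings are invented for this example, not taken from any standard or tool.

```python
# Hypothetical sketch: monitoring data (a snapshot) driving control actions.
# Field names, the 50% threshold, and action wording are illustrative only.
from dataclasses import dataclass

@dataclass
class Snapshot:
    high_risk_executed_pct: float  # % of high-risk test cases already run
    environment_stable: bool

def decide_actions(s: Snapshot) -> list[str]:
    """Turn monitoring information into control actions (test control)."""
    actions = []
    if s.high_risk_executed_pct < 50.0:
        actions.append("Reprioritize: run high-risk tests before cosmetic ones")
    if not s.environment_stable:
        actions.append("Ask operations to stabilize the test environment")
        actions.append("Inform stakeholders of risk to the release date")
    return actions

# The scenario from the text: 40% high-risk execution, unstable environment.
for action in decide_actions(Snapshot(high_risk_executed_pct=40.0,
                                      environment_stable=False)):
    print(action)
```

The point of the split is that `Snapshot` is pure monitoring (facts, no judgment), while `decide_actions` is control (judgment applied to the facts).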

Metrics Used in Testing

| Metric category  | Examples                                                    | Decision supported                         |
| ---------------- | ----------------------------------------------------------- | ------------------------------------------ |
| Project progress | Task completion, resource use, effort spent                 | Are we on schedule and budget?             |
| Test progress    | Cases designed, run, not run, passed, failed, execution time | What testing remains?                     |
| Product quality  | Availability, response time, mean time to failure           | Is the product meeting quality expectations? |
| Defects          | Open defects, fixed defects, severity, priority, density    | Where is quality weak?                     |
| Risk             | Residual risk level, high-risk coverage                     | Are key risks still exposed?               |
| Coverage         | Requirements, code, acceptance criteria, product risks      | What has been exercised?                   |
| Cost             | Cost of testing, cost of quality                            | Is the effort economically justified?      |

Metrics need interpretation. A large number of passed tests may hide the fact that high-risk features were not tested. A falling defect discovery rate may mean quality is improving, or it may mean testing is shallow, blocked, or repeating the same checks. A high automation count may not matter if the automated checks do not cover important risks.
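A small worked example makes the first pitfall concrete. The test data below is invented: a suite where most executed tests are low-risk, so the headline execution percentage looks healthy while high-risk coverage is poor.

```python
# Hypothetical data: why a raw execution percentage can mislead.
# Risk labels and counts are illustrative only.
tests = [
    {"risk": "high", "executed": False},
    {"risk": "high", "executed": False},
    {"risk": "high", "executed": True},
    {"risk": "low",  "executed": True},
    {"risk": "low",  "executed": True},
    {"risk": "low",  "executed": True},
    {"risk": "low",  "executed": True},
]

executed = [t for t in tests if t["executed"]]
overall = len(executed) / len(tests)            # 5 of 7 tests run

high = [t for t in tests if t["risk"] == "high"]
high_cov = sum(t["executed"] for t in high) / len(high)  # 1 of 3 run

print(f"Overall execution: {overall:.0%}")      # prints 71%
print(f"High-risk coverage: {high_cov:.0%}")    # prints 33%
```

Reported alone, "71% executed" suggests testing is well along; segmented by risk, two thirds of the high-risk work has not started. The same segmentation argument applies to defect discovery rates and automation counts.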

Test Progress Reports

Test progress reports are created during testing, often daily, weekly, or by iteration. They support ongoing control and give stakeholders enough information to adjust the test schedule, resources, scope, or plan. The audience may include the team, managers, product owners, customers, or compliance staff.

A useful progress report includes the reporting period, progress against plan, notable deviations, impediments and workarounds, relevant metrics, new or changed risks, and testing planned for the next period. It should call out decisions needed from stakeholders, such as whether to defer a feature, extend testing, or accept a residual risk.

Progress reports should be tailored. A development team may need a task board, failed build list, and defect links. Executives may need release confidence, top residual risks, schedule impact, and decisions needed. Auditors may need evidence that required testing activities occurred and that deviations were approved.
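Tailoring can mean rendering the same monitoring data differently per audience rather than collecting it twice. The sketch below is hypothetical: the field names, the reporting-period label, and the two view functions are invented for illustration.

```python
# Hypothetical sketch: one set of monitoring data, two tailored views.
# All field names and example values are illustrative only.
data = {
    "period": "2026-W20",
    "planned": 120, "executed": 95, "passed": 88, "failed": 7,
    "blockers": ["Payment sandbox down since Tuesday"],
    "top_risks": ["Checkout latency regression unresolved"],
    "decisions_needed": ["Defer loyalty feature or extend testing one week?"],
}

def team_view(d: dict) -> str:
    """Detail the team needs: counts, failures, concrete blockers."""
    return (f"{d['period']}: {d['executed']}/{d['planned']} executed, "
            f"{d['failed']} failed. Blockers: {'; '.join(d['blockers'])}")

def executive_view(d: dict) -> str:
    """What executives need: top risk and the decision being requested."""
    return (f"{d['period']}: top risk: {d['top_risks'][0]} "
            f"Decision needed: {d['decisions_needed'][0]}")

print(team_view(data))
print(executive_view(data))
```

Keeping one data source with multiple views also helps auditors: every tailored report traces back to the same recorded numbers.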

Test Completion Reports

A test completion report is prepared when a project, test level, test type, test cycle, iteration, release, or maintenance activity is complete or stopped. It summarizes what was done, evaluates testing and product quality against the original plan, and provides information for later testing.

Typical content includes a test summary, evaluation against test objectives and exit criteria, deviations from the plan, impediments and workarounds, metrics from progress reports, unmitigated risks, defects not fixed, and lessons learned. The report should make residual risk visible, not bury it under pass counts.

The completion report differs from a progress report in timing and purpose. A progress report helps steer active testing. A completion report records the result of a completed or ended activity and supports release decisions, audits, maintenance planning, and future process improvement.
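One recurring piece of a completion report, the evaluation against exit criteria, can be expressed as a simple checklist evaluation. The criteria names, thresholds, and actual values below are invented for illustration, not drawn from any standard.

```python
# Hypothetical sketch: evaluating exit criteria for a completion report.
# Each entry maps a criterion name to (check function, actual measured value).
# Names, thresholds, and values are illustrative only.
exit_criteria = {
    "high_risk_coverage_pct": (lambda v: v >= 100.0, 100.0),
    "open_critical_defects":  (lambda v: v == 0,     2),
    "pass_rate_pct":          (lambda v: v >= 95.0,  96.5),
}

results = {name: check(actual)
           for name, (check, actual) in exit_criteria.items()}
unmet = [name for name, ok in results.items() if not ok]

if unmet:
    print(f"Unmet exit criteria: {unmet}")   # residual risk made visible
else:
    print("All exit criteria met")
```

The output makes the residual risk explicit (here, two open critical defects) instead of letting it hide behind the otherwise-green pass rate, which is exactly what the report should do.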

Communicating Status

Status may be communicated verbally or in writing: through dashboards, email, chat, online documentation, task boards, burn-down charts, CI/CD dashboards, or formal reports. The best channel depends on the team, its distribution, regulatory needs, stakeholder preference, and urgency.

A common exam trap is believing there is one universally best reporting format. A co-located Agile team may use a daily standup and board for frequent informal updates. A distributed regulated project may need formal written reports, signed approvals, and versioned evidence. The key is useful, timely, audience-appropriate information.

Test Your Knowledge

A dashboard shows that defect fixes are delayed, and the test lead responds by changing the execution order to cover high-risk areas first. Which activity does the test lead's response represent?

Test Your Knowledge (Multi-Select)

Which items are typical contents of a test completion report?

Select all that apply

Evaluation against test objectives and exit criteria
Deviations from the test plan
Unmitigated risks, defects not fixed, and lessons learned
A daily task list for tomorrow's active test execution
An instruction to ignore all remaining defects