1.1 What Testing Does
Key Takeaways
- Testing evaluates both software and related work products, not only executable code.
- Testing can reveal failures, expose defects, and provide information for decisions, but it cannot prove that no defects remain.
- A human error can create a defect, a defect can cause a failure, and a root cause explains why the problem happened.
- Static and dynamic testing complement each other by finding different kinds of problems at different times.
- CTFL questions often test the purpose and limits of testing rather than tool commands or project-specific practices.
Testing Is Evaluation
In CTFL v4.0.1, testing is broader than running a finished application and seeing whether it crashes. Testing evaluates work products such as requirements, user stories, designs, source code, testware, configuration, data, and the running software itself. That evaluation can use static techniques, dynamic execution, or both.
A common exam trap is to equate testing with execution only. Dynamic testing executes software and observes actual behavior. Static testing examines a work product without executing code. A review of a requirement for ambiguity is testing even though no program runs.
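The point that static testing evaluates a work product without running the system can be made concrete. The sketch below is purely illustrative (the `review_requirement` helper and the word list are invented for this example): it "tests" a requirement sentence by flagging ambiguous wording, which finds a defect even though no program under test ever executes.

```python
# Hypothetical illustration of static testing: evaluating a requirement
# (a non-executable work product) without running any system code.
AMBIGUOUS_WORDS = {"fast", "user-friendly", "appropriate", "etc"}

def review_requirement(text: str) -> list[str]:
    """Return ambiguous words found in a requirement sentence."""
    words = {w.strip(".,").lower() for w in text.split()}
    return sorted(words & AMBIGUOUS_WORDS)

findings = review_requirement("The page must load fast and look appropriate.")
print(findings)  # ['appropriate', 'fast'] -- defects found with zero execution
```

A dynamic test of the same requirement would instead execute the finished page and measure its actual load time, showing how the two approaches find different problems at different times.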
Testing has several purposes. It can find defects, reduce risk, provide information about quality, build confidence, check whether requirements are met, and support decisions about release or further work. The exact purpose depends on context. Safety-critical medical software, an internal reporting script, and a mobile game do not need the same evidence.
| Term | Exam meaning |
|---|---|
| Error | A human action that produces an incorrect result |
| Defect | A flaw in a work product that may cause a problem |
| Failure | Observed behavior that differs from expected behavior |
| Root cause | The underlying reason the error or defect occurred |
The chain is important. A developer may misunderstand a requirement. That human error may lead to a defect in code or design. When the software is executed under the right conditions, the defect may cause a failure that a tester, user, or monitor observes. Root cause analysis asks why the misunderstanding or process weakness happened.
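The chain above can be sketched in code. This is an invented example (the discount rule and `discounted_total` function are assumptions for illustration): a human misreading of a requirement becomes a defect in code, and the defect produces a failure only when execution reaches it with the right input.

```python
# Hypothetical example of the error -> defect -> failure chain.
# Requirement (as intended): a 10% discount applies to orders OVER 100.
# Human error: the developer reads "over 100" as "100 or more".

def discounted_total(total: float) -> float:
    # Defect: the comparison should be `total > 100`.
    if total >= 100:
        return total * 0.9
    return total

assert discounted_total(150) == 135.0  # passes: defect not triggered
assert discounted_total(50) == 50      # passes: defect not triggered
print(discounted_total(100))           # prints 90.0, expected 100: a failure
```

Root cause analysis would then ask why the misreading happened, for example an ambiguous requirement or a missing review step, rather than stopping at the one-character fix.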
Testing cannot prove the absence of defects. Passing tests provide evidence of correct behavior only under the conditions tested; they do not show that every possible input, state, environment, timing sequence, and integration is defect-free. Exam options claiming that testing guarantees perfection, or that it finds all defects, are usually wrong.
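A minimal sketch makes this limit visible. The `median` function and its tiny suite are invented for illustration: every test passes, yet a defect remains in a condition the suite never exercises.

```python
# Hypothetical example: a passing test suite that does NOT prove
# the absence of defects.

def median(values):
    # Defect: for even-length lists this returns the upper of the two
    # middle values instead of their average.
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

# All tests pass, but only odd-length lists were exercised.
assert median([3, 1, 2]) == 2
assert median([5]) == 5

# Untested condition: median([1, 2, 3, 4]) returns 3, expected 2.5.
print(median([1, 2, 3, 4]))
```

The green suite is genuine evidence about odd-length inputs and says nothing about even-length ones, which is exactly the distinction between evidence and proof that CTFL questions probe.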
Good testing also produces information. A test report may show severe open defects, low coverage of risky areas, or strong evidence that critical acceptance criteria work. Stakeholders use this information to make business and technical decisions. The tester is not just a defect finder; the tester helps make quality and risk visible.
When answering CTFL-style questions, look for wording such as evaluate, work product, defect, failure, confidence, information, and risk. Be cautious with absolute choices such as all defects, complete proof, or only execution. The foundation idea is balanced: testing is powerful, but it is evidence-gathering, not a mathematical guarantee of perfect software.