4.7 Experience-Based Testing
Key Takeaways
- Experience-based techniques use tester skill, domain knowledge, and defect history.
- Exploratory testing combines learning, test design, execution, and interpretation.
- Error guessing targets likely defects and failure modes.
- Checklist-based testing uses curated prompts to guide coverage without prescribing exact tests.
- Experience-based testing complements systematic black-box and white-box techniques.
Core Idea
Experience-based testing relies on the tester's knowledge, skill, and intuition. It is not random clicking. Good experience-based testing is deliberate, observant, and informed by risks, previous defects, domain rules, similar products, and user behavior.
These techniques are useful when requirements are incomplete, time is limited, the product is changing quickly, or known defect patterns deserve targeted attention. They also find issues that formal models may miss, such as confusing workflows, inconsistent messages, poor recovery, and surprising interactions.
Exploratory Testing
Exploratory testing combines learning, test design, test execution, and evaluation of results, all performed simultaneously. The tester investigates the product, observes its behavior, adapts the next tests accordingly, and records findings.
A common structure is session-based exploratory testing, in which a test charter defines the mission, scope, risks, and time box for each session.
Example charter:
| Element | Example |
|---|---|
| Mission | Explore checkout error handling |
| Area | Payment, address validation, retry flow |
| Risks | Duplicate charges, unclear errors, lost cart |
| Time box | 60 minutes |
| Notes | Record data used, defects, questions, and coverage gaps |
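Teams that run many sessions often keep charters as structured records rather than free-form notes. The sketch below is a minimal, hypothetical Python representation of the charter above; the `Charter` class and its field names are assumptions for illustration, not a standard tool's API:

```python
from dataclasses import dataclass, field

@dataclass
class Charter:
    """One session-based exploratory testing charter."""
    mission: str
    areas: list[str]
    risks: list[str]
    time_box_minutes: int = 60
    notes: list[str] = field(default_factory=list)  # data used, defects, questions, gaps

    def log(self, note: str) -> None:
        """Append an observation so the session leaves evidence behind."""
        self.notes.append(note)

checkout = Charter(
    mission="Explore checkout error handling",
    areas=["payment", "address validation", "retry flow"],
    risks=["duplicate charges", "unclear errors", "lost cart"],
)
checkout.log("Retry after a declined card preserved the cart.")
```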
Exploratory testing is especially strong for learning a new feature and discovering unknown risks. It should still produce evidence. Notes, screenshots, logs, charters, and defect reports make the work repeatable enough for follow-up.
Error Guessing
Error guessing uses knowledge of likely failures to design tests. The tester asks: where have systems like this failed before?
Examples for a date field:
- Leap day such as February 29.
- End of month such as April 30 and May 31.
- Time zone crossing midnight.
- Empty value when the field is required.
- Invalid format such as 2026/31/12.
- Past date when future date is required.
Error guessing is not a replacement for BVA or decision tables. It catches practical problems that may not be explicit in the model. For example, a BVA set for a date range might not include daylight saving time behavior, but an experienced tester might add it.
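Most of these guesses translate directly into executable checks. Below is a minimal pytest sketch; `parse_booking_date` is a hypothetical validator invented for illustration, and the fixed `TODAY` keeps the past-date case deterministic. The time-zone guess would need a datetime-aware version and is omitted for brevity:

```python
import datetime
import pytest

TODAY = datetime.date(2026, 1, 1)  # fixed reference date for the "future only" rule

def parse_booking_date(raw: str) -> datetime.date:
    """Hypothetical validator: ISO-format dates only, and only future dates."""
    if not raw:
        raise ValueError("date is required")
    parsed = datetime.date.fromisoformat(raw)  # rejects formats like 2026/31/12
    if parsed <= TODAY:
        raise ValueError("date must be in the future")
    return parsed

# Each case is an error guess from the list above, not a boundary from a spec.
@pytest.mark.parametrize("raw, should_pass", [
    ("2028-02-29", True),   # leap day in a leap year
    ("2026-02-29", False),  # leap day in a non-leap year
    ("2026-04-30", True),   # end of a 30-day month
    ("2026-05-31", True),   # end of a 31-day month
    ("", False),            # empty value in a required field
    ("2026/31/12", False),  # invalid format
    ("2020-01-01", False),  # past date where a future date is required
])
def test_date_error_guesses(raw: str, should_pass: bool) -> None:
    if should_pass:
        assert parse_booking_date(raw) > TODAY
    else:
        with pytest.raises(ValueError):
            parse_booking_date(raw)
```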
Checklist-Based Testing
Checklist-based testing uses a list of checks, risks, quality characteristics, or common failures. The checklist guides testing while leaving room for judgment.
A checklist for a file upload feature might include:
| Checklist item | Example test idea |
|---|---|
| File size limits | Try just under and over the maximum |
| File type validation | Try allowed, blocked, and renamed extensions |
| Network interruption | Drop connection during upload |
| Duplicate names | Upload same file twice |
| Malware handling | Confirm blocked or scanned behavior |
| Accessibility | Keyboard-only upload flow |
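Because a checklist is a set of curated prompts rather than scripted tests, it can live in version control as plain data, which keeps it consistent across testers. A minimal sketch, assuming a hypothetical two-column format of item and test idea:

```python
UPLOAD_CHECKLIST = [
    ("File size limits", "Try just under and over the maximum"),
    ("File type validation", "Try allowed, blocked, and renamed extensions"),
    ("Network interruption", "Drop connection during upload"),
    ("Duplicate names", "Upload the same file twice"),
    ("Malware handling", "Confirm blocked or scanned behavior"),
    ("Accessibility", "Keyboard-only upload flow"),
]

def report(results: dict[str, str]) -> None:
    """Print every item with its recorded status; untouched items stand out."""
    for item, idea in UPLOAD_CHECKLIST:
        status = results.get(item, "NOT COVERED")
        print(f"[{status:>11}] {item}: {idea}")

# A partially completed session makes coverage gaps visible at a glance.
report({"File size limits": "pass", "Network interruption": "defect"})
```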
Checklists are useful for consistency across testers. They also preserve organizational knowledge from previous defects and production incidents.
Strengths and Limits
Experience-based testing is fast to start and effective at finding real-world issues. It works well for usability, reliability, security observations, integration risks, and poorly specified behavior.
Its weakness is uneven coverage. Two testers may explore different paths. Without notes, charters, or checklists, it can be hard to know what was tested. That is why experience-based techniques are best combined with measurable techniques.
Technique Choice Example
Suppose a team is testing a new refund screen. EP and BVA cover refund amount ranges. A decision table covers eligibility rules. State transition testing covers requested, approved, paid, and rejected states. White-box coverage checks new code paths.
Experience-based testing adds realistic pressure: duplicate clicks on Approve, browser back after payment, refunding an already refunded order, currency rounding, stale permissions, and unclear audit messages. These tests come from judgment and defect history rather than directly from a neat rule table.
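Some of those judgment-driven ideas can still be pinned down as automated checks. The sketch below assumes a hypothetical `Refund` state machine built around the four states named above; the duplicate-Approve and double-refund guesses become assertions that a repeated event is rejected:

```python
class Refund:
    """Hypothetical refund moving through requested, approved, paid, rejected."""
    TRANSITIONS = {
        ("requested", "approve"): "approved",
        ("requested", "reject"): "rejected",
        ("approved", "pay"): "paid",
    }

    def __init__(self) -> None:
        self.state = "requested"

    def apply(self, event: str) -> None:
        """Move to the next state, or fail loudly on an invalid transition."""
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise RuntimeError(f"cannot {event} a {self.state} refund")
        self.state = self.TRANSITIONS[key]

refund = Refund()
refund.apply("approve")
try:
    refund.apply("approve")  # error guess: duplicate click on Approve
except RuntimeError as err:
    print(err)  # -> cannot approve an approved refund
```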
On the exam, look for words such as learning, simultaneous design and execution, charters, tester experience, common mistakes, defect history, and checklists. Those usually point to experience-based techniques.
Practice Questions
- A tester uses a one-hour charter to investigate payment retry behavior, adapting each test based on the previous observation. Which technique is this? (Session-based exploratory testing.)
- Which of the date-field ideas listed earlier are examples of error guessing? (All of them; each targets a likely defect drawn from experience rather than a boundary stated in a specification.)