Computer Adaptive Testing
Key Takeaways
- The MLS/MLS(ASCPi) exam uses computer adaptive testing.
- CAT should be understood through official ASCP BOC scoring facts, not third-party practice difficulty scores.
- There is no set number correct required to pass.
- There is no set percentage required to pass.
What CAT Changes And What It Does Not
The brief states that the MLS/MLS(ASCPi) exam uses computer adaptive testing. It does not provide the internal algorithm, and a study guide should not invent one. The safe approach is to explain only the official consequences stated in the brief.
The most important consequence is about passing interpretation. CAT means there is no set number of questions one must answer correctly to pass. It also means there is no set percentage one must achieve to pass. These statements directly block raw-score myths.
CAT does not erase the fixed format facts in the brief. The exam is still 100 multiple-choice questions, the time limit is still 2 hours 30 minutes, and every question still has one best answer. The adaptive model and the fixed exam format coexist; an accurate explanation holds both at once.
CAT also does not make third-party scoring official. The guardrails say not to treat third-party adaptive practice difficulty scores as ASCP BOC scoring. A practice platform may help a candidate identify weak domains, but it cannot define the official scaled score or claim an outcome.
Use this list to keep CAT claims accurate:
- Say the exam uses computer adaptive testing.
- Say there is no set number correct required to pass.
- Say there is no set raw percentage required to pass.
- Say ASCP BOC uses a scaled score range of 100 to 999.
- Say the minimum passing scaled score is 400.
- Do not say 400 maps to a raw percent.
- Do not predict passing from practice-test percentages.
- Do not treat third-party adaptive difficulty scores as official scoring.
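The scoring facts in the list above can be captured in a small illustrative helper. This is a hypothetical self-study sketch, not an official ASCP BOC tool; the only grounded facts it encodes are the 100 to 999 scaled score range and the 400 minimum passing score, and the function name and wording are assumptions.

```python
# Hypothetical helper for reading an official ASCP BOC scaled score.
# Grounded facts: scaled scores run 100-999; 400 is the minimum passing score.
# Everything else here (names, phrasing) is illustrative, not official.

SCALED_MIN, SCALED_MAX = 100, 999
PASSING_SCALED_SCORE = 400

def interpret_scaled_score(score: int) -> str:
    """Return a plain-language reading of an official scaled score."""
    if not SCALED_MIN <= score <= SCALED_MAX:
        raise ValueError(f"Scaled scores range from {SCALED_MIN} to {SCALED_MAX}.")
    return "pass" if score >= PASSING_SCALED_SCORE else "fail"

# Note what the function deliberately cannot accept: a raw number correct
# or a raw percentage, because neither maps to the scaled score.
```

The deliberate absence of any raw-score parameter mirrors the checklist: there is no set number correct and no set percentage to convert from.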
A CAT-aware candidate should focus on official content coverage and reasoning quality. The official domains give the study map. The question-style facts give the thinking map. The scoring facts give the outcome map. None of those maps requires guessing the adaptive algorithm.
Theoretical questions measure applying knowledge, calculating results, and correlating patient results to disease states. Procedural questions measure performing lab techniques and following quality assurance protocols. Those official categories are more useful for preparation than speculation about item selection.
When reviewing practice, classify missed items by content area and by reasoning type. Was the miss about chemistry, hematology, blood banking, microbiology, immunology, urinalysis and other body fluids, or laboratory operations? Was it a calculation, correlation, technique, or quality assurance issue? That keeps the review connected to official facts.
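The review step above can be sketched as a simple tally. The domain and reasoning labels come from the official categories named in the text; the data shape, function name, and sample misses are hypothetical study aids, not part of any official tool.

```python
from collections import Counter

# Official content areas and reasoning types named in the text.
# The set/tuple representation is an illustrative assumption.
DOMAINS = {
    "chemistry", "hematology", "blood banking", "microbiology",
    "immunology", "urinalysis and other body fluids", "laboratory operations",
}
REASONING_TYPES = {"calculation", "correlation", "technique", "quality assurance"}

def tally_misses(missed_items):
    """Count missed practice items by (domain, reasoning type) pair."""
    tally = Counter()
    for domain, reasoning in missed_items:
        if domain not in DOMAINS or reasoning not in REASONING_TYPES:
            raise ValueError(f"Unrecognized label: {(domain, reasoning)}")
        tally[(domain, reasoning)] += 1
    return tally

# Hypothetical review data: each miss tagged by content area and reasoning type.
misses = [
    ("chemistry", "calculation"),
    ("hematology", "correlation"),
    ("chemistry", "calculation"),
]
print(tally_misses(misses).most_common(1))  # weakest (domain, reasoning) pair
```

Sorting the tally surfaces the weakest official domain and reasoning combination, which keeps review anchored to the official study map rather than to any guess about item selection.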
The brief does not provide a pass-rate statistic, raw-score cutoff, or fixed answer-count cutoff, and a clean CAT explanation should stop there. The candidate's goal is to meet the official standard as reported on the scaled score system, not to chase an invented raw threshold. That makes official wording the safest scoring guide.
Review Questions
Which CAT statement is supported by the brief?
How should third-party adaptive practice difficulty scores be treated?
Which CAT claim should not appear in the draft?