11.5 Systems Practice, Program Evaluation, and Quality Improvement
Key Takeaways
- Systems practice asks psychologists to understand how policies, workflows, incentives, culture, and resources affect client outcomes.
- Program evaluation should connect a clear question to appropriate indicators, data sources, stakeholder input, and ethical reporting.
- Quality improvement is strongest when recommendations are feasible, measurable, culturally responsive, and tied to client welfare.
- Part 2 scenarios may require system-level thinking even when the presenting problem looks like an individual case.
Seeing the System Around the Client
Systems practice means recognizing that client outcomes are shaped by more than individual symptoms or clinician technique. Policies, referral pathways, staffing patterns, language access, transportation, insurance rules, waitlists, documentation systems, school discipline practices, and community trust can all affect whether psychological services work. EPPP Part 2-Skills may frame this as consultation, supervision, or program evaluation, but the tested skill is the same: understand the system before recommending change.
A psychologist engaged in systems work should define the level of intervention. Some problems are clinical, such as a client needing a different treatment plan. Some are supervisory, such as inconsistent risk assessment by trainees. Some are organizational, such as a clinic that cannot track missed appointments. Some are community-level, such as a program that fails to reach a linguistic minority group. A good answer targets the level that the facts support.
| Systems question | Data to consider | Risk if ignored |
|---|---|---|
| Who is not being served? | Demographics, referral sources, waitlists, language needs | Inequitable access and missed need |
| Are services effective? | Symptom measures, functioning, retention, client goals | Continuing ineffective practices |
| Are procedures followed? | Chart audits, supervision notes, incident reports | Safety failures and poor continuity |
| Do stakeholders trust the program? | Client feedback, staff input, community advisory data | Low engagement and poor implementation |
| What resources constrain change? | Staffing, training, budget, technology, policy | Recommendations that cannot be sustained |
Program evaluation begins with an answerable question. Instead of asking whether a program is good, ask whether a new intake process reduced time to first appointment, whether a trauma group improved functioning for participants who completed it, or whether supervision changes improved documentation of risk assessment. The narrower question guides the indicators and prevents overclaiming.
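A narrow question like "did the new intake process reduce time to first appointment?" also maps directly onto a simple descriptive comparison. The sketch below is purely illustrative: the wait-time values are invented, and the comparison is descriptive, not causal.

```python
from statistics import median

# Hypothetical wait times (days from referral to first appointment);
# the values are invented for illustration, not real clinic data.
before = [21, 34, 18, 40, 27, 33, 25, 30]   # prior intake process
after = [14, 19, 11, 22, 16, 20, 13, 17]    # new intake process

def median_wait(days):
    """Median is less sensitive than the mean to a few very long waits."""
    return median(days)

reduction = median_wait(before) - median_wait(after)
print(f"Median wait before: {median_wait(before)} days")
print(f"Median wait after:  {median_wait(after)} days")
print(f"Reduction in median wait: {reduction} days")
# A descriptive reduction does not prove the intake change caused it;
# staffing, seasonality, or referral mix may also have shifted.
```

Reporting the comparison alongside its limitations, as the comments note, is what keeps the evaluation from overclaiming.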
Ethical program evaluation requires attention to consent, privacy, data security, conflict of interest, and fair interpretation. If data are collected for clinical operations, clients should not be misled into thinking they are receiving individualized assessment results when they are not. If findings could harm a group or staff member, the psychologist should report accurately while protecting confidentiality and avoiding blame that the data do not support.
Quality improvement recommendations should be practical. A clinic with no bilingual staff may need interpreter protocols, hiring goals, translated materials, referral partnerships, and staff training. A hospital service with inconsistent discharge communication may need a structured checklist, role assignment, electronic health record prompts, and audit feedback. An agency with repeated supervisee documentation problems may need onboarding, templates, direct observation, and supervisor calibration.
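Audit-and-feedback recommendations like these can also be made measurable. As a minimal sketch with hypothetical chart-audit data (all names and results invented), a per-clinician compliance rate helps distinguish one supervisee's competence problem from a clinic-wide template or policy problem:

```python
from collections import defaultdict

# Hypothetical chart-audit results: (clinician, risk_documentation_complete).
audits = [
    ("Trainee A", True), ("Trainee A", False), ("Trainee A", True),
    ("Trainee B", False), ("Trainee B", False), ("Trainee B", True),
    ("Trainee C", True), ("Trainee C", True), ("Trainee C", False),
]

def compliance_rates(records):
    """Share of audited charts per clinician with complete risk documentation."""
    totals = defaultdict(int)
    complete = defaultdict(int)
    for clinician, ok in records:
        totals[clinician] += 1
        if ok:
            complete[clinician] += 1
    return {c: complete[c] / totals[c] for c in totals}

for clinician, rate in sorted(compliance_rates(audits).items()):
    print(f"{clinician}: {rate:.0%} of audited charts complete")
# If every clinician's rate is low, the pattern points at templates or
# policy (a system issue) rather than at one supervisee.
```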
Use this evaluation workflow for Part 2 scenarios:
- Define the system problem and the decision the evaluation should support.
- Identify stakeholders, including clients or communities affected by the program.
- Select indicators that match the question and are feasible to collect.
- Protect confidentiality, consent, and data security.
- Analyze patterns without overstating causation.
- Report findings in accessible language with limitations.
- Recommend measurable changes and a follow-up plan.
Systems questions can hide inside individual cases. A supervisee who repeatedly misses risk documentation may reflect a personal competence issue, but it may also reflect poor clinic templates and unclear policy. A client who misses sessions may be ambivalent, but may also face transportation, disability, language, or scheduling barriers. The strongest exam answer considers both individual and system explanations before deciding.
The orientation of ASPPB's Part 2-Skills is applied decision-making. In systems practice, the defensible decision is rarely to blame one person quickly; it is to define the problem, gather relevant data, include stakeholders, protect ethics, and recommend changes proportionate to the evidence.
Practice Questions
- A clinic asks whether its new intake process improved access. Which evaluation question is most useful?
- Which recommendation best reflects systems practice after repeated discharge-communication failures?
- A psychologist evaluating a program finds improvement among clients who completed treatment but high dropout among one language group. What is the best interpretation?