4.5 Fidelity, Adaptation, and Quality Monitoring
Key Takeaways
- Fidelity means delivering the core components of the program as intended.
- Adaptation can be appropriate when it improves fit without undermining essential elements.
- Quality monitoring uses tools such as checklists, observations, facilitator notes, and participant feedback.
- Implementation data help explain whether outcomes reflect the program theory or the way the program was actually delivered.
Preserving what matters while responding to context
Fidelity is the degree to which a program is delivered as planned. It matters because the objectives, theory, activities, dose, and evaluation measures were designed to work together. If implementation changes the core components, the program may no longer test the original idea. A CHES professional should know what must be preserved and what can be adjusted.
Core components are the elements believed to drive the program's effect. In a skill-building program, core components might include demonstration, participant practice, feedback, and a minimum number of sessions. In a peer education program, core components might include peer facilitator training, structured discussion, and referral information. In a reminder intervention, timing and message content may be core. Planning should identify these elements before delivery begins.
Adaptation is not automatically bad. It may be necessary for cultural relevance, language access, disability access, schedule fit, or setting constraints. Examples of reasonable adaptations include changing examples to local foods, offering sessions at different times, translating and reviewing materials, adding captions, or using a community site instead of an agency office. These changes can improve reach and effectiveness when they preserve the mechanism of change.
Adaptation becomes risky when it removes or weakens the active ingredient. If a curriculum relies on role play to build negotiation skill, replacing role play with a lecture undermines the core method. If an intervention requires three coaching contacts, delivering only one contact changes dose. If peer leaders are central to credibility, replacing them with an unknown outside speaker may reduce fit. The implementation team should review such changes before proceeding.
Fidelity monitoring should be planned and respectful. Tools may include session checklists, facilitator self-reports, observation forms, audio or video review when appropriate, participant feedback, attendance logs, and supervision meetings. The goal is to support quality, not punish facilitators. Monitoring should focus on the agreed elements: content accuracy, sequence, time, participation, practice opportunities, referral steps, and safety procedures.
Quality monitoring also includes responsiveness. If participants are confused, attendance declines, or a partner reports logistical problems, the team should investigate and adjust. The adjustment should be documented with the reason, date, person responsible, and expected effect. Documentation helps later evaluation because it shows what was actually delivered.
Fidelity data are critical when outcomes are weak. If the program did not improve the targeted skill, was it because the theory was wrong, the objective was unrealistic, or facilitators skipped the practice activities? Without implementation data, the team cannot distinguish among these explanations. Process evaluation and fidelity monitoring explain the delivery side of the story.
The CHES exam may present a scenario where a facilitator changes activities on the spot. The best answer depends on whether the change improves fit while preserving the core elements. Changing a local example usually fits. Dropping the only practice activity for a skill objective usually does not. When in doubt, look for the answer that consults the implementation plan, documents the change, and monitors its effect.
Fidelity also supports fairness. If different sites receive very different versions of a program, participants may not have equal access to the intended benefit. Some variation is normal, but major differences should be justified. Multi-site implementation often requires facilitator training, standard materials, coaching, and shared monitoring tools. The table below classifies common mid-delivery changes and the recommended response to each.
| Change during delivery | Likely classification | Recommended action |
|---|---|---|
| Localizing food examples | Acceptable adaptation | Document and continue |
| Removing all practice | Fidelity concern | Review and correct |
| Changing session time | Access adaptation | Monitor attendance |
| Cutting required sessions | Dose fidelity concern | Reassess feasibility |
| Adding captions | Accessibility improvement | Document and continue |
Practice Questions
1. A facilitator replaces the only hands-on CPR practice activity with a lecture because it is faster. What is the main concern?
2. Which change is most likely an appropriate adaptation?
3. Why should fidelity data be collected during implementation?