5.1 Evaluation Purpose and Types

Key Takeaways

  • Area IV is 12% of the current CHES content outline and tests how entry-level specialists judge program quality and results.
  • Formative, process, outcome, and impact evaluation answer different questions and should not be used interchangeably.
  • Good evaluation plans connect questions, indicators, data sources, timing, and intended decisions before data collection begins.
  • Evaluation findings should be used for improvement, accountability, communication, and ethical stewardship of resources.
Last updated: May 2026

Why Area IV Matters

Area IV represents 12% of the current CHES exam outline, so evaluation and research should be studied as an applied decision process. The exam is not asking whether evaluation is good in the abstract. It asks what a health education specialist should measure, when to measure it, and how to use findings responsibly.

Evaluation begins with purpose. A formative evaluation improves a program before or during development. A process evaluation checks whether implementation happened as planned. An outcome evaluation looks for shorter-term changes such as knowledge, skills, attitudes, intentions, behaviors, or environmental supports. An impact evaluation examines broader, longer-term changes such as morbidity, quality of life, policy adoption, or community conditions.

A common exam trap is choosing an outcome measure when the scenario describes a delivery problem. If attendance was low, facilitators skipped activities, or recruitment materials did not reach the priority population, the best next evaluation step is usually process-oriented. If the program was delivered as intended and the question is whether participants changed, outcome evaluation is more appropriate.

Evaluation questions should be specific enough to guide evidence. A weak question asks, "Did the program work?" A stronger question asks, "Did participating ninth-grade students increase correct condom-use knowledge from baseline to immediate posttest?" Another asks, "Were at least four of six planned parent sessions delivered with the approved curriculum and trained facilitators?" Of the two stronger questions, the first is outcome focused and the second is process focused.

Use the program logic model to keep evaluation anchored. Inputs and activities often lead to process indicators. Outputs such as number reached, sessions completed, materials distributed, and referrals made show dose and reach. Short-term outcomes may include awareness, self-efficacy, and skills. Intermediate outcomes often involve behavior or policy adoption. Long-term impacts usually require more time, larger samples, or surveillance data.

Evaluation is also an ethical responsibility. Programs use public trust, participant time, staff labor, and often grant funding. Collecting data that will never be used can burden communities. Reporting only favorable findings can mislead partners. A CHES-level response should balance practical constraints with transparency, cultural humility, confidentiality, and usefulness.

For exam scenarios, identify the decision maker and the decision. A funder may need accountability evidence. A program manager may need fidelity data. A coalition may need findings for priority setting. Participants may need accessible results that show respect for their contribution. The best evaluation choice fits the decision rather than simply using the most rigorous-sounding design.

When reading practice scenarios, first locate the program stage. If materials are being drafted, think formative. If staff are delivering sessions, think process. If participants are expected to change knowledge, skill, or behavior, think outcome. If the question reaches population health, policy effects, or long-term conditions, think impact. This simple sorting step prevents overreaching and helps you choose measures that match the decision.

Scenario Review Checklist

  • Identify the relevant CHES Area of Responsibility.
  • Locate the program stage in the scenario.
  • Match the answer to evidence, stakeholders, and ethics.
  • Reject choices that are premature, unsupported, or outside scope.
Test Your Knowledge

1. A curriculum is being pilot tested before full rollout. Staff want to learn whether examples are understandable to the priority population. Which evaluation type fits best?

2. A grant manager asks whether all planned workshops were delivered with the approved lesson plan. What should be measured first?

3. Which question is written most clearly for an outcome evaluation?