5.5 Analysis, Interpretation, and Data Use

Key Takeaways

  • Analysis should match the question, design, measurement level, sample size, and intended audience.
  • Descriptive statistics summarize what happened; inferential statistics help judge whether observed patterns reflect more than chance.
  • Qualitative analysis identifies patterns and meaning through systematic coding, comparison, and theme development.
  • Interpretation must address limitations, practical significance, equity implications, and next-step decisions.

Turning Data Into Decisions

Analysis should be planned before data collection begins. If the evaluation question asks whether knowledge scores improved from pretest to posttest, the evaluator needs matched participant data and a plan for comparing scores. If the question asks why attendance declined, open-ended comments or interviews may need systematic coding. Data collection without an analysis plan often produces information that is difficult to use.

Descriptive statistics summarize data. Frequencies and percentages can describe attendance, completion, demographic categories, correct responses, or reported behaviors. Means and medians summarize scores or counts. Ranges and standard deviations describe spread. For many CHES-level decisions, clear descriptive statistics are essential and more useful than complicated analysis.
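
For evaluators who summarize results in a scripting tool rather than a spreadsheet, a minimal Python sketch of these descriptive summaries might look like the following. The scores and attendance values are hypothetical and only illustrate the calculations.

    from collections import Counter
    from statistics import mean, median, stdev

    # Hypothetical posttest scores for 12 participants
    scores = [7, 9, 6, 8, 10, 7, 8, 9, 5, 8, 7, 9]

    print("Mean:", round(mean(scores), 2))      # central tendency
    print("Median:", median(scores))            # less sensitive to outliers
    print("Range:", max(scores) - min(scores))  # spread
    print("SD:", round(stdev(scores), 2))       # spread

    # Frequencies and percentages for a categorical item (hypothetical attendance data)
    attendance = ["attended", "attended", "missed", "attended", "missed", "attended"]
    for category, n in Counter(attendance).items():
        print(f"{category}: {n} ({n / len(attendance):.0%})")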

Inferential statistics help judge whether observed differences or associations are likely to reflect more than random variation in sampled data. A paired comparison may be used when the same participants complete a pretest and posttest. A comparison between intervention and comparison groups typically calls for a different procedure, chosen to fit the measurement level and distribution of the data. The exam is unlikely to require advanced calculation, but it may ask which analysis fits the design.
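
As one illustration of matching the procedure to the design, the sketch below uses SciPy's paired t-test (scipy.stats.ttest_rel) for matched pretest and posttest scores and an independent-samples t-test (scipy.stats.ttest_ind) for an intervention-versus-comparison contrast. The numbers are hypothetical, and other procedures (for example, nonparametric tests) may fit better depending on the data.

    from scipy import stats

    # Hypothetical matched scores for the same participants (paired design)
    pretest  = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]
    posttest = [7, 8, 6, 8, 7, 7, 6, 6, 8, 7]

    # Paired comparison: same participants measured twice
    paired = stats.ttest_rel(posttest, pretest)
    print(f"Paired t = {paired.statistic:.2f}, p = {paired.pvalue:.3f}")

    # Independent-groups comparison: intervention vs. comparison group (hypothetical data)
    intervention = [7, 8, 6, 9, 7, 8]
    comparison   = [5, 6, 5, 7, 6, 5]
    independent = stats.ttest_ind(intervention, comparison)
    print(f"Independent t = {independent.statistic:.2f}, p = {independent.pvalue:.3f}")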

Practical significance matters. A statistically significant change may be too small to justify program costs. A non-significant finding in a small pilot may still suggest useful improvement if the pattern is consistent and supported by participant feedback. Interpretation should consider sample size, measure quality, implementation fidelity, context, and whether the change matters to the priority population.
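
One common way to gauge magnitude alongside a p-value is an effect size. The sketch below computes Cohen's d for paired data (mean change divided by the standard deviation of the change scores); the scores are hypothetical, and the conventional benchmarks of roughly 0.2 (small), 0.5 (medium), and 0.8 (large) are a starting point, not a substitute for judging whether the change matters to the priority population.

    from statistics import mean, stdev

    # Hypothetical matched pretest/posttest scores
    pretest  = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]
    posttest = [7, 8, 6, 8, 7, 7, 6, 6, 8, 7]

    # Change score for each participant
    changes = [post - pre for pre, post in zip(pretest, posttest)]

    # Cohen's d for paired data: mean change divided by SD of the changes
    d = mean(changes) / stdev(changes)
    print(f"Mean change: {mean(changes):.2f} points; Cohen's d = {d:.2f}")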

Qualitative analysis should be systematic. The evaluator reads transcripts or notes, develops codes, compares responses, identifies themes, and uses quotations sparingly to illustrate meaning. Strong qualitative reporting explains how data were collected, who participated, how themes were developed, and what limitations apply. It does not treat the loudest comment as the whole story.

Equity-focused interpretation asks who benefited, who was not reached, and whether averages hide differences. A program may improve overall knowledge while failing to reach participants with limited English proficiency. A coalition may meet its attendance target while excluding people who work evenings. CHES candidates should look beyond overall success claims and ask whether findings point to needed adaptations.
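
Disaggregation is the practical step behind those questions. A minimal pandas sketch, with hypothetical column names and values, shows how an overall average can mask a subgroup that did not benefit:

    import pandas as pd

    # Hypothetical participant-level results; column names are illustrative
    df = pd.DataFrame({
        "language_group": ["English", "English", "LEP", "English", "LEP", "LEP"],
        "pretest":  [5, 6, 5, 7, 4, 5],
        "posttest": [8, 9, 5, 9, 5, 5],
    })
    df["change"] = df["posttest"] - df["pretest"]

    print("Overall mean change:", round(df["change"].mean(), 2))
    # Break the same result out by subgroup before drawing conclusions
    print(df.groupby("language_group")["change"].agg(["count", "mean"]))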

Data use closes the evaluation loop. Findings may support program improvement, funding decisions, staff training, partner communication, policy advocacy, or discontinuation of ineffective strategies. A good report does not merely present numbers. It connects findings to recommendations that are feasible, ethical, and grounded in the evidence.

Interpretation should return to the original objective and evaluation question. If the objective was modest, avoid turning findings into broad claims. If the findings reveal a delivery problem, recommend implementation changes before judging outcomes. If data quality is weak, explain what can still be learned and what should be measured next. This discipline keeps evaluation useful instead of decorative.

Scenario Review Checklist

  • Identify the relevant CHES Area of Responsibility.
  • Locate the program stage in the scenario.
  • Match the answer to evidence, stakeholders, and ethics.
  • Reject choices that are premature, unsupported, or outside scope.

Test Your Knowledge

The same 40 participants complete a pretest and posttest knowledge scale. Which analysis logic best fits the design?

Test Your Knowledge

What is practical significance?

Test Your Knowledge

An average score improved, but participants with limited English proficiency showed little change. What is the best interpretation step?
