
How generalizable is the What Works Clearinghouse evidence?

Recent cuts in educational research funding have underscored the importance of reliable evidence on what works in education. Since 2002, the What Works Clearinghouse (WWC) has provided such evidence by reviewing causal research on educational interventions and assessing its quality. However, the WWC has prioritized internal validity (whether changes in outcomes are attributable to the intervention) over external validity (whether findings generalize to other contexts, populations, or outcomes). While this approach strengthens causal claims, it may limit the applicability of findings beyond the studied settings. To better understand how generalizable WWC evidence is across contexts, a 2025 study by Betsy Wolf examined the student populations and settings represented in WWC-reviewed research.

To explore this, Wolf created an evidence gap map (EGM), a tool that visualizes the distribution of existing research, highlighting areas with abundant evidence and areas needing more high-quality studies. An EGM typically organizes study data into a grid, where point size reflects the number of studies and color indicates study quality, offering insights to guide future research and policy decisions. The EGM revealed disparities in how school types, grade levels, and student demographics are represented in the WWC evidence base. Specifically, WWC-reviewed studies underrepresent private schools and early childhood grades, while public schools in coastal and urban areas are overrepresented. Although student samples generally align with U.S. demographics, some groups are over- or underrepresented.
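To make the grid idea concrete, here is a minimal sketch of how such a map might be rendered in Python with matplotlib. The domains, grade bands, counts, and quality shares below are invented for illustration (they are not Wolf's data); bubble size encodes the number of studies and color encodes the share rated high quality.

```python
# Toy evidence gap map -- all numbers are invented for illustration.
import matplotlib.pyplot as plt

rows = ["Mathematics", "Literacy", "Science", "Social-emotional"]
cols = ["Early childhood", "Elementary", "Middle", "High school"]

# (row index, col index) -> (study count, share rated high quality)
cells = {
    (0, 1): (40, 0.7), (0, 2): (25, 0.6), (0, 3): (15, 0.5),
    (1, 0): (10, 0.4), (1, 1): (55, 0.8), (1, 2): (20, 0.6),
    (2, 2): (5, 0.3),  (3, 1): (8, 0.5),
}

fig, ax = plt.subplots(figsize=(7, 4))
for (r, c), (n_studies, quality) in cells.items():
    ax.scatter(c, r, s=n_studies * 20, c=[quality],
               cmap="RdYlGn", vmin=0, vmax=1, edgecolors="black")

ax.set_xticks(range(len(cols)))
ax.set_xticklabels(cols)
ax.set_yticks(range(len(rows)))
ax.set_yticklabels(rows)
ax.set_xlim(-0.5, len(cols) - 0.5)
ax.set_ylim(len(rows) - 0.5, -0.5)   # first row at the top, like a table
ax.set_title("Toy evidence gap map: empty cells are research gaps")
fig.colorbar(ax.collections[0], ax=ax, label="Share of high-quality studies")
plt.tight_layout()
plt.show()
```

The empty cells are the "gaps": intersections of population and outcome where little or no high-quality causal evidence exists.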

The WWC evidence base is strongest in mathematics and literacy, with less coverage of other subjects such as science, social-emotional learning, and educator outcomes. Wolf also noted that missing data on student and setting characteristics limits the generalizability of findings. The WWC’s method of assigning evidence tiers favors narrow outcome domains with researcher-created measures over broader domains with standardized measures, raising doubts about whether reported effects would replicate. That said, the WWC remains one of the most widely recognized resources for educational evidence. This article highlights critical gaps in the research landscape and emphasizes the need for broader, more representative studies.


Source: Wolf, B. (2025). What works for whom: Exploring the students, settings, and outcomes in What Works Clearinghouse study data. Journal of Research on Educational Effectiveness, 0(0), 1–26. https://doi.org/10.1080/19345747.2024.2427762


A guide for conducting implementation research in education intervention studies

The U.S. Department of Education’s Institute of Education Sciences (IES) has published a guide to help researchers plan, execute, and report findings from implementation research, contributing to the evidence base for improving student outcomes. It outlines four key areas: (1) formulating research questions; (2) developing plans for data collection and analysis; (3) detailing the intervention, its implementation, its contexts, and the contrast condition; and (4) analyzing and reporting the details of the intervention and its implementation.

To support researchers in this process, the guide emphasizes starting with research questions that address the intervention’s components, variation in its implementation, its contexts, and the contrast condition. Plans for data collection should specify research goals, measures, data sources, and hypotheses. To describe an intervention comprehensively, researchers should examine both its core activities and supporting strategies, drawing on a logic model, documentation, and expert input. They should systematically consider content, quantity, mode, and quality, ensuring fidelity to the intervention design. Findings should link intervention details to impacts, providing insights for future research and practice.


Source (Open Access): Hill, C. J., Scher, L., Haimson, J., & Granito, K. (2023). Conducting implementation research in impact studies of education interventions: A guide for researchers (NCEE 2023-005). National Center for Education Evaluation and Regional Assistance. https://ies.ed.gov/ncee/pubs/2023005/index.asp


Why adjust effect sizes for baseline covariates?

The standardized mean difference (SMD) is the effect size typically used to compare a treatment and a control group on a continuous outcome. However, the data that primary studies report for calculating effect sizes vary; sometimes multiple options are available, and it is not always clear which is best to use.
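For reference, the standardized mean difference divides the difference in group means by a standard deviation, most often the standard deviation pooled across the two groups. A minimal sketch from summary statistics (the function names and numbers are mine, for illustration):

```python
import numpy as np

def pooled_sd(sd_t, n_t, sd_c, n_c):
    """Standard deviation pooled across treatment and control groups."""
    return np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                   / (n_t + n_c - 2))

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Cohen's d) from summary statistics."""
    return (mean_t - mean_c) / pooled_sd(sd_t, n_t, sd_c, n_c)

# Posttest means of 52 (treatment) vs. 48 (control), both SDs 10, n = 100 each:
print(smd(52, 10, 100, 48, 10, 100))  # 0.4
```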

In a 2022 article, Joseph Taylor and colleagues provided guidelines for reporting data in primary studies so that effect sizes can be calculated, along with recommendations on which data meta-analysts should prioritize.

The authors’ key recommendations for meta-analysts are:

  • Use effect sizes that adjust for baseline covariates, at the very least the pretest scores of the outcome measure, and possibly demographic variables as well. This produces an effect size estimate that is more interpretable and precise (see the sketch after this list).
  • Avoid using unadjusted means when covariate-adjusted means are available, because effect sizes from unadjusted means introduce imprecision and artificially increase effect size heterogeneity in meta-analyses.
  • In cluster studies that assigned schools or classes, adjust effect sizes and variances for baseline covariates and for clustering.
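As a sketch of the first two recommendations: the covariate-adjusted effect size puts the ANCOVA-adjusted mean difference in the numerator but keeps the unadjusted pooled standard deviation in the denominator, so the metric remains comparable across studies. All numbers below are hypothetical:

```python
# Covariate-adjusted effect size: ANCOVA-adjusted mean difference in the
# numerator, UNADJUSTED pooled SD in the denominator so the metric stays
# comparable across studies. All numbers are hypothetical.
adj_mean_t, adj_mean_c = 51.5, 48.5   # covariate-adjusted posttest means
sd_t, sd_c = 10.0, 9.0                # unadjusted standard deviations
n_t, n_c = 120, 115

sd_pooled = (((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
             / (n_t + n_c - 2)) ** 0.5
d_adjusted = (adj_mean_t - adj_mean_c) / sd_pooled
print(round(d_adjusted, 3))  # ~0.315
```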

Following these recommendations requires better reporting of the necessary data in primary studies. For individual-level studies, this includes covariate-adjusted means, unadjusted standard deviations, and the standard error of the adjusted mean difference. For cluster studies, it also requires reporting the intraclass correlation coefficient (ICC) and the standard error of the adjusted mean difference from a model that accounts for clustering. The authors provide an online tool that performs all of these calculations.
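For the cluster case, here is a deliberately simplified sketch of the kind of adjustment involved, loosely following Hedges’ (2007) correction for the point estimate. The variance keeps only the leading design-effect term (the full formula adds a smaller term in d²), equal cluster sizes are assumed, and all numbers are hypothetical:

```python
# Simplified cluster adjustment for an effect size from a study that
# randomized intact schools or classes (after Hedges, 2007). The variance
# below keeps only the leading design-effect term. Numbers are hypothetical.
def cluster_adjusted_es(d, icc, n_bar, n_t, n_c):
    """Adjust a naive effect size d for clustering.

    d      : effect size computed ignoring clustering
    icc    : intraclass correlation of the outcome
    n_bar  : average cluster (school/class) size
    n_t/n_c: total students in treatment / control
    """
    N = n_t + n_c
    d_adj = d * (1 - 2 * (n_bar - 1) * icc / (N - 2)) ** 0.5
    # Leading variance term: the usual variance inflated by the design effect
    var_adj = (1 / n_t + 1 / n_c) * (1 + (n_bar - 1) * icc)
    return d_adj, var_adj

d_adj, var_adj = cluster_adjusted_es(d=0.40, icc=0.20, n_bar=25,
                                     n_t=500, n_c=500)
print(round(d_adj, 3), round(var_adj ** 0.5, 3))  # adjusted d and its SE
```

The point to notice is how much the ICC inflates the standard error: ignoring clustering would overstate the precision of the study considerably.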


Source: Taylor, J. A., Pigott, T., & Williams, R. (2022). Promoting knowledge accumulation about intervention effects: Exploring strategies for standardizing statistical approaches and effect size reporting. Educational Researcher, 51(1), 72–80. https://doi.org/10.3102/0013189X211051319