卓越實證概述 Best Evidence in Brief
Learning-by-teaching enhances research question generation

Asking good questions is essential for knowledge construction and scientific learning. Wong and colleagues conducted two experiments to investigate the impact of learning-by-teaching on generating research questions, comparing it with two other generative learning techniques: retrieval practice and concept mapping. Research questions correspond to the "create" level of Bloom's taxonomy (remember, understand, apply, analyze, evaluate, create), the highest level, at which learners generate new knowledge through novel research inquiries.

A total of 152 undergraduate students from the National University of Singapore participated in the two experiments. They were instructed on generating create-level research questions, given a scientific text, and randomly assigned to one of three learning methods: (a) constructing a concept map, (b) retrieval practice with study and retrieval intervals, or (c) teaching the text by preparing notes, delivering a video lecture, and answering preset questions. In Experiment 1, participants were tested on their ability to generate create-level questions and to recall the text content immediately after the study session. In Experiment 2, all three groups answered preset study questions during the study period, and the tests were conducted after a 48-hour delay. The findings were as follows:

  • Learning-by-teaching generated more create-level research questions than concept maps or retrieval practice in both experiments.
  • Learning-by-teaching led to better content recall than concept mapping in both experiments.
  • Compared with retrieval practice, learning-by-teaching produced better recall on the immediate content recall test, whereas retrieval practice produced better recall on the 48-hour delayed test.

The results indicate that mere acquisition of factual knowledge is inadequate for higher-order research question generation. Teaching involves organizing material and generating elaborations and inferences to aid the audience's understanding. The authors therefore suggested that learners who engage in teaching undergo deeper generative processing, enabling them to generate new ideas and research questions.

 

Source: Wong, S. S. H., Lim, K. Y. L., & Lim, S. W. H. (2023). To ask better questions, teach: Learning-by-teaching enhances research question generation more than retrieval practice and concept-mapping. Journal of Educational Psychology. https://doi.org/10.1037/edu0000802

Reading comprehension strategies for students with reading difficulties

Reading comprehension is an essential skill, so it is crucial to identify effective strategies that support children in developing it, particularly children who struggle with reading. In a recent Bayesian network meta-analysis (BNMA), researchers examined the effectiveness of various combinations of text comprehension strategies in interventions for students with reading difficulties across grades 3 to 12.

The meta-analysis included 52 studies and focused on commonly used strategies: main idea, text structure, retell, self-monitoring, graphic organizers, inference, and prediction. Among the 35 possible combinations of strategies examined, the main idea-text structure-retell combination was the most effective at improving reading comprehension (standardized mean difference, SMD = 1.72). Close behind was the main idea-text structure-self-monitoring-graphic organizers combination (SMD = 1.13), followed by the main idea strategy alone (SMD = 1.07). These combinations and individual strategies showed a significant positive impact on reading comprehension skills.
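For readers less familiar with the metric, an SMD expresses the treatment-control difference in pooled standard-deviation units. The summary does not state which estimator the BNMA used (Cohen's d and Hedges' g are the common variants), so the generic definition below is offered only as orientation:

```latex
\mathrm{SMD} \;=\; \frac{\bar{X}_{\mathrm{treatment}} - \bar{X}_{\mathrm{control}}}{SD_{\mathrm{pooled}}},
\qquad
SD_{\mathrm{pooled}} \;=\; \sqrt{\frac{(n_t - 1)\,SD_t^{2} + (n_c - 1)\,SD_c^{2}}{n_t + n_c - 2}}
```

On this scale, an SMD of 1.72 means students receiving the intervention scored, on average, 1.72 pooled standard deviations higher than comparison students.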

By contrast, the least effective combinations were inference-text structure (SMD = -0.61), retell-graphic organizers (SMD = -0.03), and main idea-inference-text structure-prediction-self-monitoring (SMD = 0.05), showing negative or negligible effects on reading comprehension. Beyond strategy effectiveness, the study highlighted the moderating effect of background knowledge instruction, which significantly enhanced the overall effects of the strategies; the authors emphasized that background knowledge instruction can reduce cognitive load and facilitate knowledge retrieval when comprehension strategies are implemented. Overall, these findings challenge the notion of a single most important strategy and point instead to how strategies interact and combine.

 

Source (Open Access): Peng, P., Wang, W., Filderman, M. J., Zhang, W., & Lin, L. (2023). The Active Ingredient in Reading Comprehension Strategy Intervention for Struggling Readers: A Bayesian Network Meta-analysis. Review of Educational Research. https://doi.org/10.3102/00346543231171345

What matters when considering effect-size benchmarks?

For consumers of educational research, effect sizes play a key role in understanding which strategies and interventions are likely to have the biggest impact on learning and student achievement. In a recent article in Educational Researcher, Kraft replicated his earlier analysis using a larger data set of effect sizes to establish realistic benchmarks for what constitutes small, medium, and large effects of educational interventions. He argued for reorienting how we interpret effect-size benchmarks and, more generally, how we measure success in the education sector. Central to his approach is the recognition that many education interventions fail to produce substantial impacts on student outcomes; rather than being dismissed, these results should be integral to interpreting the policy relevance of effect-size benchmarks and to setting realistic expectations for what counts as meaningful impact.

While research designs and contexts differ in ways that make comparing effect sizes across studies not entirely straightforward, effect sizes remain useful for synthesizing large amounts of data, discovering patterns, and making broader inferences. The analysis included 973 studies and 3,426 effect sizes and replicated the earlier findings: the 30th, 50th, and 70th percentiles of the effect-size distribution were +0.02, +0.10, and +0.21, respectively. Further breakdowns showed that 36% of effect sizes from standardized achievement measures in randomized controlled trials were smaller than +0.05. Anchoring effect-size benchmarks in the reality that a large portion of interventions do not significantly increase student achievement contributes to a better understanding of realistic growth.
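To make the benchmark logic concrete, here is a minimal sketch of how percentile-based benchmarks are derived from a collection of effect sizes. The values are illustrative placeholders, not Kraft's data; the percentile cut-points simply mirror the ones reported above:

```python
import numpy as np

# Illustrative effect sizes (in standard-deviation units); placeholder
# values only, not drawn from Kraft's 3,426-effect-size data set.
effect_sizes = np.array([-0.08, -0.02, 0.00, 0.02, 0.05, 0.08,
                         0.10, 0.14, 0.18, 0.21, 0.30, 0.52])

# Empirical benchmarks: percentiles of the observed distribution, the
# anchoring Kraft proposes instead of fixed "small/medium/large" labels.
for p in (30, 50, 70):
    print(f"{p}th percentile: {np.percentile(effect_sizes, p):+.2f}")

# Share of effects below a threshold such as +0.05.
print(f"Below +0.05: {np.mean(effect_sizes < 0.05):.0%}")
```

Benchmarks built this way describe where an intervention's effect falls among the effects researchers actually observe, rather than against an arbitrary rule of thumb.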

Kraft's article underscores the need for nuanced interpretation of effect-size benchmarks, which are meant to frame evidence-based policymaking and are intended to be coupled with information about statistical significance, since both the size and the precision of effect-size estimates matter. A singular focus on magnitude can lead the education community to overlook less eye-catching interventions that produce incremental improvement. Overall, understanding effect sizes across the educational research landscape supports a more informed interpretation of the impact of educational interventions and facilitates evidence-based decision-making in educational policy and practice.

 

Source: Kraft, M. A. (2023). The Effect-Size Benchmark That Matters Most: Education Interventions Often Fail. Educational Researcher, 52(3), 183–187. https://doi.org/10.3102/0013189X231155154

How to choose a tool for screening in systematic reviews

Title and abstract screening involves rapidly examining the titles and abstracts of the records identified by a search strategy to determine their relevance to a review. It is one of the most time-consuming tasks in a systematic review.

A recent study conducted by Zhang and Neitzel from the Center for Research and Reform in Education (Johns Hopkins University) reviewed the tools developed to support the screening process. The authors conducted a review of the tools used in systematic reviews published in Review of Educational Research and assessed their features.

Results showed that only 4% of the identified studies reported using a screening tool, such as DistillerAI, EPPI-Reviewer, or Covidence. Based on the authors' feature analysis, Covidence, DistillerAI, and EPPI-Reviewer were the top-performing tools. The authors concluded by proposing a decision tree to help researchers choose the right tool for their review based on relevant features, such as a full-text review function, cost, and machine learning support; a toy sketch of that kind of feature-based selection appears below.
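The following is a minimal illustration of what a feature-based selection aid can look like in practice. The branching order and recommendations are hypothetical, chosen only to show the shape of such a decision aid; they are not Zhang and Neitzel's published decision tree:

```python
def suggest_screening_tool(needs_machine_learning: bool,
                           needs_full_text_review: bool,
                           budget_limited: bool) -> str:
    """Toy feature-based selector. The branching order and advice are
    hypothetical illustrations, not Zhang & Neitzel's actual tree;
    check each tool's current feature list before choosing."""
    if needs_machine_learning:
        return "Shortlist tools offering ML-assisted screening"
    if needs_full_text_review:
        return "Shortlist tools with a built-in full-text review stage"
    if budget_limited:
        return "Compare free tiers and institutional licenses first"
    return "Any of the top-performing tools may fit"

# Example: a team that wants machine-learning support for prioritization.
print(suggest_screening_tool(True, False, False))
```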

 

Source: Zhang, Q., & Neitzel, A. (2023). Choosing the Right Tool for the Job: Screening Tools for Systematic Reviews in Education. Journal of Research on Educational Effectiveness, 0(0), 1–27. https://doi.org/10.1080/19345747.2023.2209079

Six recommendations on how to effectively use feedback to improve students’ learning

Offering valuable feedback is essential for educators seeking to encourage student progress and enrich learning. Effective feedback helps tackle misconceptions and narrow the gap between a student's current level and desired goals; poorly delivered feedback, however, can have adverse consequences and impede progress. Teacher feedback is critical for enhancing student achievement, but identifying the most effective forms of feedback remains a challenge.

The Education Endowment Foundation published a report containing six recommendations for teachers on supporting students' learning through feedback. These recommendations integrate empirical research findings with the expertise of academics and practitioners. Each recommendation opens with a vignette illustrating common challenges teachers face, includes case studies of current feedback practice, and suggests techniques and ideas that might work based on the evidence and the panel's expertise.

The first three recommendations act as the main guiding principles: (1) lay the foundation for effective feedback through high-quality instruction and formative assessment; (2) provide well-timed feedback that emphasizes progress in learning; and (3) plan for students to receive and apply feedback, including time and opportunities to use it. Two further recommendations suggest teachers carefully consider the delivery method, choosing between (4) written and (5) verbal feedback according to purpose and time-efficiency. The final recommendation is to (6) develop a school feedback policy that emphasizes and illustrates the principles of effective feedback.

The report can be highly valuable for teachers, offering them a guide on how to provide feedback in ways that are most likely to have a positive impact on students.

 

Source (Open Access): Teacher Feedback to Improve Pupil Learning. (2021, October 27). EEF. https://educationendowmentfoundation.org.uk/education-evidence/guidance-reports/feedback