Best Evidence in Brief
Examining computer access for students across the U.S.

A new report from the National Center for Education Statistics examines students’ access to computers at home and in school, and their use of computers for classroom learning in grades 4, 8, and 12. The report also examines associations between students’ computer access and use and their performance on the 2015 NAEP mathematics and reading assessments.

NAEP is given to a representative sample of students across the country, and results are reported for groups of students with similar characteristics. As part of the 2015 NAEP assessments, students answered a survey question about their access to computers at home, and teachers answered a survey question about the availability of computers for them and their students in school. Students and teachers also answered questions about their use of computers for classroom learning and instruction.

Key findings from the report included:

  • Computer access is divided along socioeconomic lines. A smaller percentage of lower-income students reported having computer access at home than did middle- and higher-income students.
  • Lower- and higher-performing students differ in how often they use computers for practicing and building academic skills in the classroom. For example, compared to higher-performing fourth-grade students, larger percentages of lower-performing students had teachers who reported that they never or hardly ever used computers in class to practice and review mathematics topics or to extend their mathematics learning with enrichment activities.
  • The percentage of students who used computers once or twice a week increased by as much as 5 percentage points in mathematics classes and 6 percentage points in reading classes between 2013 and 2015.

The report helps the public understand how students’ use of computers is related to academic performance. However, the 2015 survey did not ask about students’ use of other digital devices, such as tablets and smartphones.

 

Source (Open Access): The Nation’s Report Card (2018). 2015 survey questionnaire results - Students’ computer access and use. Retrieved from https://www.nationsreportcard.gov/sq_computer/

Surprise rewards for good attendance had a surprising consequence

A working paper by Carly Robinson and colleagues, published by the Harvard Kennedy School, reports on an experiment to measure the impact of attendance rewards on students.

The trial included 15,629 sixth- through twelfth-grade students from 14 school districts in California. All of the students had previously achieved perfect attendance in at least one month in the fall. The students were randomly allocated to one of three groups:

  • “Prospective Award” students received a letter telling them they would receive a certificate if they achieved perfect attendance in February (the following month).
  • “Retrospective Award” students received a letter and certificate telling them they had earned an award for perfect attendance during one month in the fall term.
  • Control students received no communication.

The researchers collected data on the students’ attendance in the following month (February). They found that:

  • Offering the prospective award had no impact on subsequent attendance.
  • Offering the retrospective award resulted in students attending less school in February. Absences among this group increased by 8% (an average of 0.06 days per student).

The researchers suggest that the retrospective awards may have sent unintended signals to the students, telling them that they were performing better than the descriptive social norm of their peers, and exceeding the institutional expectations for the awarded behavior.

 

Source (Open Access): Robinson, C. D., Gallus, J., Lee, M. G., & Rogers, T. (2018). The demotivating effect (and unintended message) of retrospective awards. HKS Faculty Research Working Paper Series RWP18-020. Retrieved from https://research.hks.harvard.edu/publications/getFile.aspx?Id=1681

How much is enough?

There have now been many controlled studies of preventive mental health interventions for young people. For these studies to be useful, practitioners need to know whether the effects shown for a particular intervention are modest, moderate, or large.

Emily Tanner-Smith and colleagues summarized more than 400 mean effect size estimates from 74 meta-analyses that synthesized findings from many trials. All the trials were of programs aimed at preventing problematic behavior or emotional problems for young people aged 5-18. The results, published in Prevention Science, indicate that:

  • With few exceptions, the median of the mean effect sizes for the various outcomes fell within the range of +0.07 to +0.16.
  • Prevention programs yielded larger effects on knowledge than on actual behavior.
  • Providing information to increase knowledge (e.g., about the risks of drug use) is an important component of many programs, but knowledge does not always correlate strongly with actual behavior.

The authors advise that the effect sizes indicate the level of improvement that has been achieved to date and can serve as a benchmark for assessing the value of new findings.

 

Source: Tanner-Smith, E. E., Durlak, J. A., & Marx, R. A. (2018). Empirically based mean effect size distributions for universal prevention programs targeting school-aged youth: A review of meta-analyses. Prevention Science. Advance online publication. doi: 10.1007/s11121-018-0942-1

Evidence supports The BSCS Inquiry Approach

With the increasing interest in STEM (science, technology, engineering, and math) curricula comes the need for evidence backing these programs. One such science program in the U.S. is The BSCS Inquiry Approach, a comprehensive high school science approach based on three key concepts: constructivism, coherence, and cohesiveness. The materials are built around the 5E process (engage, explore, explain, elaborate, and evaluate). Teaching focuses on evaluating students' current understanding and using inquiry methods to move them toward deeper understanding. Each of the science disciplines (physical science, life science, earth science, and science and society) is composed of four chapters that repeat common themes, which advance over a three-year period. Designing and carrying out experiments in small groups is important in all topics. Teachers receive seven days of professional development each year, including a three-day summer institute and four one-day sessions, which allow teachers to share experiences and introduce new content over time.

To determine the effects of The BSCS Inquiry Approach on student achievement, BSCS conducted a two-year cluster-randomized study of the intervention that compared students in grades 10-11 in nine experimental (n=1,509 students) and nine control high schools (n=1,543 students) in Washington State. A total of 45% of students qualified for free or reduced-price lunches. The evaluation findings were:

  • At the end of two years, the BSCS students scored higher than controls (effect size = +0.09, p < .05) on the Washington State Science Assessments.
  • Class observations identified that BSCS classrooms engaged students more frequently in activities that aligned with the framework for the Next Generation Science Standards.

In addition, the authors suggested that the findings demonstrate that the combination of research-based curriculum materials and curriculum-based professional development was effective.

 

Source (Open Access): Taylor, J. A., Getty, S. R., Kowalski, S. M., Wilson, C. D., Carlson, J., & Van Scotter, P. (2015). An efficacy trial of research-based curriculum materials with curriculum-based professional development. CO: BSCS Science Learning.

Grouping students by achievement

The Education Endowment Foundation in the UK has published an evaluation of two trials of programs developed by the University College London (UCL) Institute of Education investigating approaches to grouping students: Best Practice in Setting and Best Practice in Mixed Attainment Grouping.

The main trial, "Best Practice in Setting," tested an intervention that aimed to get schools to improve their setting practice (grouping students in classes by their current achievement levels). A total of 127 schools took part in the trial, which ran over the course of two academic years. Teachers were randomly allocated to sets to prevent "lower" sets from being disproportionately assigned less-experienced teachers, while students in Years 7 and 8 were assigned to sets based on independent measures of achievement, rather than more subjective judgments such as behavior and peer interactions. There were opportunities throughout the year to re-assign students to different sets based on their current level of achievement. The evaluation showed:

  • No evidence was found that the intervention improves outcomes in math (effect size = -0.01) or English (effect size = -0.08).
  • Also, no conclusive evidence was found that the intervention improves students’ self-confidence in either subject.
  • The process evaluation revealed mixed views from participants, and many interviewees thought that what they were being asked to do represented little change from what they already do.

The researchers noted that because school and teacher buy-in was low, half of the schools in the math trial and more than half of the schools in the English trial stopped the intervention before follow-up. This, combined with high attrition rates for follow-up testing, makes it difficult to draw firm conclusions about the impact of Best Practice in Setting.

 

Source (Open Access): Roy, P., Styles, B., Walker, M., Morrison, J., Nelson, J., & Kettlewell, K. (2018). Best Practice in Grouping Students Intervention A: Best Practice in Setting: Evaluation report and executive summary. London: The Education Endowment Foundation.

Source (Open Access): Roy, P., Styles, B., Walker, M., Morrison, J., Nelson, J., & Kettlewell, K. (2018). Best Practice in Grouping Students Intervention B: Mixed Attainment Grouping: Pilot report and executive summary. London: The Education Endowment Foundation.