June 23, 2014
Data mining demonstrates outcomes in blended schools in California
Keeping Pace 2011 included this statement:
“[W]e need to better understand the conditions under which online learning works. When we hear the term “research,” however, we think of multi-year studies, with large numbers of randomly selected students using specific content or technology that is being tested. The students are then assessed and results compared to a control group of students who did not use the content, technology, or teaching method…Such research takes many years and a high level of funding, and when complete, it is often limited in the results that can be reported. Even when results are statistically significant, they are—or should be—confined to the specific set of content, technologies, or teaching strategies being tested.
“With online schools in existence for more than a decade in some states, and with students having taken millions of online courses and full-time online students having taken hundreds of thousands of state assessments, mining existing data represents a more powerful and less expensive approach to determining what works.”
That statement was about online learning and online schools, but much the same applies to blended learning. There is clear value in the types of research referenced in the quote above, as we have noted in posts about the recent SRI study of Khan Academy and the Dell Foundation study of blended charter schools. Yet studies based on data mining, which are lower-cost and faster to conduct, remain relatively scarce.
In this latter category, Princeton University student Laura Du recently published a study of test scores of blended schools in California, using data from the California Standardized Testing and Reporting (STAR) Program.
The full study is not yet available online, but Du summarizes the findings:
“…blended schools appear to perform as well or better on California statewide standardized tests than do non-blended schools serving demographically similar student populations. These findings are based on a study of 35 rotation-model blended learning charter and district-run public schools in California.”
These findings are important in two distinct ways. First, they add to the body of knowledge suggesting that blended learning can improve outcomes. Second, they complement the research done by SRI, Rand, and others: data mining such as Du’s shows whether blended learning is improving outcomes, while larger research studies can help describe how implementations can be most successful.
Du’s study contains important caveats and is not definitive: “[I]n the absence of a more controlled research setting, it is difficult to determine exactly how much of the achievement gains can be attributed simply to “blended learning,” an umbrella term that encapsulates a wide range of practices and resources, both human and material.”
A second caveat is equally important: “As the number of blended schools has increased sharply in the past few years, so too has the variation in test scores. As a result, the average test score performance of blended schools has decreased.”
These cautions are important, and they suggest the need for more studies of this type. As Alan Krueger, former White House Chief Economist and Bendheim Professor of Economics and Public Affairs at Princeton University, says, “It is critically important that studies like this gather and analyze evidence to empirically evaluate the efficacy of blended learning and that we build on what works.”