New set of Proof Points profiles shows success in blended learning programs

Today we and the Clayton Christensen Institute for Disruptive Innovation released the final set of case studies in the series Proof Points: Blended Learning Success in School Districts, which examines blended learning efforts in traditional school districts and the improved student outcomes associated with those efforts.

Regular readers may recall that we originally released six profiles in April, and another three in June. The final three are:


  • Hamilton County Community Unit School District, McLeansboro, Ill., established blended learning in both of its elementary schools. Students in blended learning classrooms are outperforming those in traditional classrooms on benchmark assessments.
  • Washington County School District, St. George, Utah, launched blended learning programs by leveraging digital assets from its Utah Online School to boost graduation rates.

The full set of profiles together examined 12 districts with diverse geography, demographics, district sizes, and assessment demands. Despite the unique circumstances of each district, the case studies show that blended learning has had a positive impact on student achievement using a number of measures, including graduation rates, benchmark test scores and state assessments.

The results have received considerable media attention, and we have heard of quite a few districts using the profiles in a variety of ways as they promote their own efforts to improve student outcomes. We are grateful to all of the educators who have contributed their time to help us understand, explore, and explain their programs and their key elements of success.

Upcoming blog posts will review some of the findings across the profiles, examining why these particular districts have been successful and what others, particularly those in early stages of program development, can learn from them.


The myth that “students are comfortable with technology” is prevalent and problematic

Among the common misconceptions in education is that students are comfortable with technology, and therefore implementing blended learning doesn’t require helping students make the transition. For example, a 90-second Google search returned this quote about students and technology:

“Many students have grown up around technology and feel comfortable with it. Don’t be embarrassed that they may know more about technology than you do. Welcome opportunities to learn from them.”

In reality, students often have to become comfortable learning in new ways—as much as teachers have to become comfortable with new methods of instruction.

This point is made by Diane Tavenner, the founder and CEO of Summit Public Schools, in an interview with The Hechinger Report.

“Kids are literally making decisions about how they are going to learn and when they are going to learn. They are so engaged that it is spilling into the rest of their life and it [learning] is 24/7. We are about three years in, and it’s taken that long for kids to really break free of that old model.”

Students may generally be more comfortable with laptops, tablets, and smartphones than some adults. But that doesn’t mean that all students are equally comfortable with these devices, and it doesn’t mean that most students are comfortable with the educationally appropriate use of them. It’s one thing for a student to know how to watch a video on her tablet, but a very different task to watch an animation explaining a science concept, analyze it, perhaps annotate it, and learn from it. In addition, many educators have found that although students eventually learn to take control of their learning as they gain agency over time and path, that process is gradual. In the meantime, students need direction and support as they become comfortable with the new blended learning model.

In other words, students first have to become comfortable with the technology, second they have to understand its educational use, and third they have to adjust to new instructional methods that rely on it. Expecting that they will quickly and easily make the transition to blended learning is a recipe for failure.





The difference between research and evaluation

An earlier post reviewed the recently published District Guide to Blended Learning Measurement from The Learning Accelerator, and promised a follow-up post regarding the distinction that the guide makes between research and evaluation.

This distinction is important, for reasons explained by Richard Culatta and Katrina Stevens of the US Department of Education:

“Every app sounds world-changing in its app store description, but how do we know if an app really makes a difference for teaching and learning?

In the past, we’ve used traditional multi-year, one-shot research studies. These studies go something like this: one group of students gets to use the app (treatment group) while another group of students doesn’t (control group). Other variables are controlled for as best as possible. After a year or so, both groups of students are tested and compared. If the group that used the app did better on the assessment than the group that didn’t, we know with some degree of confidence that the app makes a difference. This traditional approach is appropriate in many circumstances, but just does not work well in the rapidly changing world of educational technology for a variety of reasons.”
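To make the study design described in that quote concrete, here is a minimal sketch of the treatment-versus-control comparison in Python. The scores and group sizes are invented placeholders, and the use of a simple t-test is an assumption for illustration; it is not a substitute for a properly designed study.

```python
# Minimal sketch of a traditional treatment-vs-control comparison.
# All scores below are invented placeholders, not data from any real study.
from statistics import mean
from scipy import stats

treatment_scores = [78, 85, 90, 72, 88, 95, 81, 79]  # students who used the app
control_scores = [74, 80, 83, 70, 86, 77, 75, 82]    # students who did not

# Welch's two-sample t-test: is the difference in mean scores larger than
# chance variation alone would explain?
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores, equal_var=False)

print(f"Treatment mean: {mean(treatment_scores):.1f}")
print(f"Control mean:   {mean(control_scores):.1f}")
print(f"p-value: {p_value:.3f}")
```

A low p-value would give some confidence that the app made a difference, which is exactly the kind of conclusion the authors say takes too long and costs too much to reach this way in a fast-moving field.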

The reasons this approach often doesn’t work well are that research 1) takes too long, 2) is too expensive, and 3) is usually one-and-done rather than iterative. Instead, the authors write, “There is a pressing need for low-cost, quick turnaround evaluations.”

That is a good explanation of the shortcomings of formal research. The TLA guide takes the thinking even further with a table explaining research versus evaluation, which for simplicity I’m reproducing in full here:

[Table: evaluation versus research, from the TLA District Guide to Blended Learning Measurement]

As the table shows, compared to research, evaluations tend to be shorter, less expensive, and more closely tied to specific practices and outcomes. As the writers from the US Department of Education suggest, in the rapidly changing world of education technology, there is a greater need for short studies focused on specific online and blended learning programs.

The key question about technology in education is not “does it work?” The key questions are “can it work, and if so, under what circumstances?” This is because blended learning and other technology implementations in education operate under many highly variable conditions related to teachers, students, instructional methods, and so forth. The technology being used is going to support a particular instructional approach, and variability in these other factors will almost certainly have a larger effect than the technology alone. But the technology is often important, even critical, when it allows a certain pedagogical approach to be employed.

The Learning Accelerator helps districts think about how to measure blended learning

The Learning Accelerator has just released its District Guide to Blended Learning Measurement, which provides a useful framework to districts thinking about how to determine whether their digital learning efforts are yielding results.

Educators and blended learning advocates are increasingly stressing the fact that technology should be implemented only with clear educational goals in mind, and that these goals should be well-defined and measurable. In most cases, the technology should not be considered until educational goals are established. Schools that put technology first all too often find themselves with tablets in search of a problem to solve.

But identifying the educational goal is just a start. From there, the district has to figure out what to measure and how to measure it. This is where the Guide provides useful direction under several categories, including:

When to measure

TLA points out that inputs and activities, and not just outcomes, should be measured. This is a worthwhile insight because measuring inputs and activities will help the district determine whether the blended learning program was implemented with fidelity to the plan. If results fall short of expectations, knowing whether the shortfall stems from a flawed plan or from incomplete implementation is necessary to correct the problem.

The Guide shows the elements to be measured over time:

Inputs -> Activities -> Outputs -> Outcomes -> Impacts
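As a purely illustrative example, a district might fill in that chain with measures like the ones below. The specific entries are hypothetical and are not taken from the TLA guide; they simply echo the kinds of measures discussed elsewhere on this blog (benchmark assessments, graduation rates).

```python
# Hypothetical examples of measures a district might track at each stage of
# the Inputs -> Activities -> Outputs -> Outcomes -> Impacts chain.
# These entries are illustrative placeholders, not recommendations from the TLA guide.
logic_model = {
    "Inputs":     ["devices and software licenses purchased", "professional development budget"],
    "Activities": ["teacher training sessions delivered", "blended lessons taught per week"],
    "Outputs":    ["student hours logged in the online curriculum", "lessons completed"],
    "Outcomes":   ["benchmark assessment growth (e.g., NWEA MAP)", "course pass rates"],
    "Impacts":    ["graduation rates", "state assessment proficiency over several years"],
}

for stage, measures in logic_model.items():
    print(f"{stage:>10}: {'; '.join(measures)}")
```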

What to measure

The guide stresses that “you may not even need to add many measures if you notice that you’re already measuring some of the outcomes or impacts that you’re interested in. For example, your district is probably already tracking graduation rates, and may even be measuring student engagement or school climate, for all schools across your district.” This approach is similar to the one Evergreen and the Christensen Institute have followed with the Proof Points project, which has focused on measures that districts are already implementing.

Whom to measure

TLA makes the evident point that “you measure those who are participating in your blended learning initiative,” but stresses that while “often these are students…measuring educators (teachers and others), administrators, families, and community members may also align with your original objectives for implementation.” The guide also explores the advantages, and challenges, of including a comparison group or data set in the measurement.

How to measure

Finally, the last section of the guide explores reliability and validity in measurement tools. Reliability is related to consistency in measurement, and validity refers to whether the tools are measuring what they purport to measure.
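As a rough illustration of the reliability idea only, a district might administer the same survey or assessment to the same students twice and check how consistent the scores are. The sketch below does this in Python with invented scores; the numbers and the test-retest approach are assumptions for illustration, and no such check can establish validity, which requires comparing the tool against an outside criterion or expert judgment.

```python
# Rough, illustrative test-retest reliability check using invented scores.
# A high correlation suggests the tool measures consistently; it says nothing
# about validity (whether the tool measures what it claims to measure).
from scipy import stats

first_administration = [62, 75, 81, 58, 90, 70, 66]   # hypothetical scores, round 1
second_administration = [65, 73, 84, 55, 92, 68, 70]  # same students, a few weeks later

r, _ = stats.pearsonr(first_administration, second_administration)
print(f"Test-retest correlation: {r:.2f}")
```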

The guide’s insights are constructive, and they help frame the actual measurement tools and practices that districts are using. As we have seen in the Proof Points project, districts are gauging blended learning success either based on existing measures (e.g., graduation rates, state or district assessments) or on newly implemented tools (e.g., NWEA MAP). Our discussions with district leaders suggest that using tools such as MAP results in a far richer picture of what is happening with students and schools, but that such tools require a significant level of investment.

In addition to these findings for educators, the guide also has a very useful table exploring differences and similarities between research and evaluation—which is a distinction that’s not well understood by funders, policymakers, and advocates. That section of the guide alone is worth another blog post.

New York Times reports HS students taking “more” online courses, but the article includes no data

The headline is attention-grabbing: How High Schoolers Spent Their Summer: Online, Taking More Courses.

But the article is disappointing because it has no data; instead it consists of a series of anecdotes about New York-area students taking massive open online courses (MOOCs) in order to bolster their college applications—and often not completing them. Stories such as the one about the student taking online courses while traveling with his family around Italy are mildly interesting but would have been more noteworthy five or ten years ago than they are today.

The Times report tells us that of the millions of students taking MOOCs, “an untold number” are “teenagers looking for courses their high schools do not offer” and seeking to “nab one more exploit that might impress the college of their dreams.” One college admissions director reports that “more and more students who apply to us mention they’ve taken online courses of various kinds.” But also, “admissions officials cautioned that MOOCs are not necessary for already overburdened students, and that the number of applicants listing them at this point is still relatively small.”

The risk is that this story will be reported as if it is based on data instead of a few anecdotes.
