September 18, 2014
Efficacy, data, and whether or not to take your vitamins
A recent blog post explored how Pearson is moving toward using efficacy as a key measure of its products, and as an element in company performance as it considers acquisitions. In that post I was supportive of the ideas behind using efficacy in this way.
But it is important to note that figuring out which products or instructional practices are effective is not easy, and it becomes far harder once one accounts for the fact that what works for one student may not work for another. In particular, a product deemed ineffective for large numbers of students may still be effective for particular students, but that effect can be lost within large data sets.
I’ll use a personal example from health care to illustrate why this is the case. A recent article on FiveThirtyEight titled Don’t Take Your Vitamins reviews numerous studies about the efficacy of taking vitamins and ultimately concludes that the “bottom line is that there is simply very little evidence that these supplements matter.” The reasoning is that while some small studies have shown positive results, randomized controlled trials, the gold standard in research, indicate that the positive results of the smaller studies arise because people who take supplements tend to be wealthier and healthier than average to begin with. These gold-standard studies suggest that money spent on vitamins is money wasted. If supplements were reimbursed by health insurance, these studies would make a strong case that they should not be. If an individual wants to pay for vitamins, nobody should stop him, but neither should a health plan pay for them.
Whether or not this finding is true for most people, it doesn’t apply to me. The reason is that I have a very specific situation: an intestinal disease that often renders patients deficient in certain vitamins. Small-scale, non-randomized studies demonstrate that vitamins can help patients like me, and my own experience confirms it.
Given that I understand my health situation better than anyone else, and that I buy my own vitamins, nobody is telling me I can’t use them. So the large-scale studies showing that vitamins don’t help pose no problem for me.
But let’s pivot back to education. Imagine a student with a specific and somewhat uncommon reason that she is having trouble mastering math. For example, perhaps she is an English Language Learner and her native language is Vietnamese. Imagine further that her teacher knows this, and that the teacher selects a digital math instruction product that the teacher believes will help this student because it minimizes reliance on any reading or spoken words, and emphasizes videos demonstrating how to work through math problems.
Now imagine that the latest study is released and shows that this particular product doesn’t improve achievement in large randomized controlled studies, and the school board cancels the contract. The problem is that the study isn’t refined enough to show that the product can help the ELL student. The teacher knows it can help in some situations, but may not be given the latitude to use a product that hasn’t shown improvement among 10,000 students, even though it is likely to produce the best result for one particular student.
Studies and data are valuable and necessary. But they should not be used to discount the experience and expertise of the teachers and administrators using the tools. It’s not an either/or situation; both must be taken into account.