Distinguishing Good Science From Bad In Education

SYNOPSIS

When can you trust the experts? A conversation with Dan Willingham on how you can distinguish good science from bad in education.

“If it disagrees with experiment, it’s wrong.  In that simple statement is the key to science.  It doesn’t make a difference how beautiful your guess is, it doesn’t make a difference how smart you are, who made the guess, or what his name is.  If it disagrees with experiment, it’s wrong.” – Richard Feynman

Feynman was a physicist, a field in which this statement uniformly applies.  However, standards differ across fields, and education is one field in which this statement probably does not uniformly apply.  There are certainly some education ideas that are backed by solid science, but there are many others with no scientific backing at all that, for whatever reason, are still immensely popular.

Daniel T. Willingham is a professor of psychology at the University of Virginia.  He writes the “Ask the Cognitive Scientist” column for American Educator magazine, contributes to RealClearEducation, and has a popular science and education blog.  I recently had the opportunity to read his excellent book When Can You Trust the Experts? How to Tell Good Science from Bad in Education, which I think is important reading for anyone interested in which ideas in education are based on scientific evidence, which are not, and how to tell the difference.  I decided to ask him some questions.

JON: I think we often don’t take the time or want to put in the effort to vet the research—or as you say, “we are often on autopilot.”  So we go to sources that seem trustworthy enough for a shortcut, such as the media and public intellectuals who seem confident in what they are talking about.  Do you think this is a mistake?  What are the key reasons why we should not take the mental shortcut?

DAN: In certain cases this strategy works out well. If you can reliably identify who is really an expert, and if the experts mostly agree that there is a settled truth, then you’re in pretty good shape. The experts might be wrong, of course, but you’re getting the best knowledge available. You run into trouble if one or both of those conditions is absent. Sometimes it’s hard to tell who is an expert, because people who want to persuade you are good at snatching the earmarks of expertise: academic degrees, for example, or merely having been asked for their expert opinion before (e.g., on TV). And in some fields (for example, education) expert opinions differ.

JON: You write: “Historians have pointed out that there is a pattern of education theories being tried, found wanting, and then reappearing under a different name a decade or two later.”  First, how big a problem is the relabeling of the same old ineffective ideas?  Second, do you think part of the reason many people don’t consult the true experts for evidence is that they are simply unsure whom to trust?

DAN: I think the “old wine in new bottles” problem is pretty severe in education. For example, the “whole word” method of reading instruction was found to be less effective than teaching letter-sound correspondences (the “phonics” method) in the mid-1960s. Then the theory resurfaced in the mid-1980s, now called “whole language.” Another example: putting greater emphasis on group learning and on projects, and less emphasis on didactic instruction, has gone by many different names since the 1920s, the most recent being “21st century skills.” It’s not that this idea is a bad one; it’s just hard to make this sort of instruction effective, and by pretending it’s something new, we don’t learn from experience.

I think there’s more than one reason people don’t consult true experts for evidence. One is that it may not be obvious who the expert is: several people saying different things may all seem equally well credentialed. Second, people usually find it hard to accept evidence that contradicts their beliefs, and that’s doubly true when the issue is emotional. Third, experts themselves are sometimes to blame when they offer opinions on educational matters that are actually outside their area of expertise.

JON: You write: “Politicians don’t persuade with statistics.  They persuade with emotions.”  If emotional stories are used to such great effect by popular writers, activists, and many other parties, should scientists learn to persuade with emotions rather than statistics?  What are the positives and negatives of taking such an approach in your view?

DAN: As a way of making scientific truths more understandable, and their importance easier to comprehend, sure. We do need to be careful, though, I think, not to shade the truth in the interests of a good story, even if we’re telling ourselves that we’re serving some greater truth in so doing. That’s a judgment call where it gets pretty easy to fool yourself.

JON: What specific audience did you write this book for?  Many of the techniques and standards you advocate for evaluating the soundness of an educational approach are not even practiced consistently among education researchers.  Do you think the first audience that really needs to read your book is education researchers?

DAN: In my experience, all scientists have moments where they are not at their scientific best. Science is hard. Most of the education researchers I’ve worked with are no different in this regard; they are good scientists, but a big part of the reason we have scientific methods and institutions in place is to guard against human mistakes.

© 2014 by Jonathan Wai

You can follow me on Twitter, Facebook, or G+. For more of Finding the Next Einstein: Why Smart is Relative go here.

Note: This article originally appeared on Psychology Today.
