Here we go again…

The New York Times is undertaking a series of stories evaluating educational technology called “Grading the Digital School.” Today’s installment is entitled, “Inflating the Software Report Card.”

My previous piece, responding to an earlier story in this series, pointed out that the article overlooked much of the case for educational technology and seemed too quick to dismiss it when there wasn’t a direct link to test score improvements.

My beef with today’s piece is along the same lines.

Once again, the NY Times makes it seem that test scores are the only metric worth consulting when deciding the import of technology in the classroom. Today’s installment was lengthy, but only in the last quarter of the article did it even suggest that we might consider something other than test scores when assessing the value of classroom technology.

And it was only one tiny paragraph:

Karen Cator, a former Apple executive who directs the Office of Educational Technology at the Department of Education, said the clearinghouse reports on software should be “taken with a grain of salt” because they rely on standardized test scores. Those tests, Ms. Cator said, cannot gauge some skills that technology teaches, like collaboration, multimedia and research.

I would love to see the NY Times explore how technology teaches these other important skills that are not easily assessed on standardized achievement tests, and how it can teach them in ways that are not possible without the technology. As it stands now, millions of NY Times readers are being led to believe that classroom technology only matters if it raises test scores, and that test scores are the only thing we should care about when it comes to education.

And I would also like to see the NY Times discuss other classroom technologies besides learning software. Today’s piece focused almost entirely on Carnegie Learning’s Cognitive Tutor, a software application that costs roughly three times as much as textbooks. The most powerful technology applications I use in my classroom cost nothing, yet the NY Times is letting readers think that classroom technology is prohibitively expensive. And in the case of the very expensive Cognitive Tutor, of course people are scrutinizing it to decide whether it’s worth the extra cost. Carnegie and its opponents are swapping “research” studies to defend or attack the software, to justify its expense or to show why the old-fashioned textbooks sitting on the shelves will do just fine.

There are all kinds of problems with this. For one, people have a poor understanding of what educational research can and cannot tell us. The What Works Clearinghouse “[found in 2010 that] Cognitive Tutor did not raise test scores more than textbooks.” Opponents throw this result at Carnegie Learning again and again, and Carnegie Learning has responded with research studies that show different outcomes. But the What Works Clearinghouse has dismissed this research because

the gold standard of education research is a field trial in which similar groups of students are randomly assigned to classes where one uses the curriculum and the other does not.

This makes it sound as though experimental research with absolutely valid, fully generalizable results is possible in the realm of education. The best we can do in education is quasi-experimental research. Classrooms cannot be tightly controlled the way test tubes and petri dishes can. Randomly assigning students to different classes where one “uses the curriculum and the other does not” does not control for all the possible variables that might affect the results in either classroom. Educational research does attempt to control for variables (such as the gender, socio-economic status, language, or cultural background of students), but even these results must be considered very carefully and cannot be generalized beyond the specific study that produced them.

Randomly assigning students to a “treatment” classroom and a “control” classroom does not guarantee that the two classrooms operate under exactly the same conditions apart from the curriculum being assessed. Each classroom contains different students and a different teacher; the rooms are not clones of one another. A result could have just as much to do with some social dynamic that exists in one classroom but not in the other. Good educational researchers know this, so they report their results guardedly and disclose everything they can think of that might also have affected them.

And beyond the fact that classrooms are not petri dishes, consider the ethics of experimental research on actual children. If I have a “treatment” that I think might benefit students, is it ethical for me to withhold it from one group just so that they can serve as the “control” and create the illusion of experimental research in an educational setting?

People are so quick to dismiss educational research that isn’t of this “experimental” type, but what they don’t realize is that case studies and qualitative approaches are the only way to collect the stories of what happened in classrooms, the stories that help us understand what the numbers mean. And just because the data a methodology seeks aren’t numbers doesn’t mean the study lacks rigor, isn’t valid, or isn’t worth considering when we make decisions about something as important as classroom technology.

This was going to be a quick post because I have paper rewrites to look at and some other work to do, so I’ll wrap it up now.

In short:

  • Classrooms are not petri dishes that can be carefully controlled in a lab setting. Educational research cannot have the same kinds of controls in place as experimental research.
  • Decision makers need to broaden their scope when looking for research to inform their decisions about classrooms: the “experimental field study” may not be as valid as it purports to be (see the first bullet), and case studies and other qualitative approaches are critically important if you want to actually understand the complex landscape of a classroom.
  • Yes, educational software packages like Carnegie Learning’s Cognitive Tutor may be prohibitively expensive. But that does not mean that all educational technology is. Some of the best stuff is free.
  • Instructional software is not the end point of educational technology. There is much more out there to consider!
  • The NY Times is giving its readers a limited view of the issues of technology in the classroom. So far, what I’ve learned from “Grading the Digital School” is that test scores are the bottom line for any educational technology, that classroom technology means instructional software only, and that classroom technology is prohibitively expensive.

Now, off to those rewrites I need to take a look at (such is the Sunday afternoon in the life of a Language Arts teacher…)

 
