Cross-posted at HASTAC.
I generally find Jaron Lanier a bit too reductionist, a bit too either/or, for my tastes. His recent New York Times column arguing for a return to innovative, creative educational approaches and a turn away from the problematic assumptions inherent in algorithmic approaches to assessment (“Does the Digital Classroom Enfeeble the Mind?,” Sept. 16, 2010) is characteristically both reductionist and either/or. This worries me, because the piece is also–characteristically–poetic and moving, which means we education-y types have been slinging it around like Tea Party candidates sling xenophobia and hate. It took me a little while to realize that Lanier’s message should worry us; I sent it on to my Twitter followers and drafted a glowing review of the piece to post here before realizing that it makes its own problematic assumptions about education and technologies and therefore calls for a much more critical read.
Lanier’s biggest concern, one with which I sympathize, is that turning issues of educational accountability over to computers and computer-scored tests is a double-edged sword, pushing both the most creative teachers and the most unimaginative teachers out of the classroom. Reflecting on his father’s decision, in middle age, to become an elementary school teacher, Lanier writes that he
would have been unable to “teach to the test.” He once complained about errors in a sixth-grade math textbook, so he had the class learn math by designing a spaceship. My father would have been spat out by today’s test-driven educational regime.
But this is not the whole story…. It’s a romantic notion, the magic of teaching, but magic always has a dark side. Trusting teachers too much also has its perils. For every good teacher who is too creative to survive in the era of “no child left behind,” there’s probably another tenacious, horrid teacher who might be dethroned only because of unquestionably bad outcomes on objective tests.
No matter where you stand on NCLB and the use of standardized tests, you have to admit that Lanier has a point. Using standardized testing statistics to make decisions, at a distance, about the quality of a teacher may very well help us push the terrible educators out of the classroom, but it’s likely also to push out the most innovative teachers, the ones whose creativity and ability to foster deep, lifelong commitments to learning don’t show up in test scores.
The problem, though, is that Lanier connects this real, worrisome concern to the windmill he’s been tilting at for some time: his conviction that internet technologies dehumanize us.
Lanier argues that while algorithmic, predictive approaches to some human experiences are “heartless,” they’re at least better than the alternative. As an example, he describes his frustration with algorithms that predict what sort of music he’d be interested in hearing, based on his previous musical selections. Lanier, a musician himself, writes that
(n)othing kills music for me as much as having some algorithm calculate what music I will want to hear. That seems to miss the whole point. Inventing your musical taste is the point, isn’t it? Bringing computers into the middle of that is like paying someone to program a robot to have sex on your behalf so you don’t have to.
And yet it seems we benefit from shining an objectifying digital light to disinfect our funky, lying selves once in a while. It’s heartless to have music chosen by digital algorithms. But at least there are fewer people held hostage to the tastes of bad radio D.J.’s than there once were. The trick is being ambidextrous, holding one hand to the heart while counting on the digits of the other.
Of course, this argument ignores the fact that “bad D.J.’s” are often themselves the products of a different set of algorithms–numbers calculated by music producers, radio conglomerates, and the FCC. In fact, as most of us know (or at least suspect), a pretty significant proportion of our daily experience is managed by algorithms–by computers. When we need to learn quickly about an event, a term, a date, or a location, we Google it. We don’t go to Yahoo or About.com or Ask. How come? Because Google’s algorithms produce better, easier-to-navigate search results. When we add a new friend on Facebook, algorithms point us to other people we might know–and often these suggestions help us broaden our social circles in useful, productive ways. Certainly we should worry about net neutrality and the dominance of Google, Facebook, and similar algorithmically driven tools; but in my view net neutrality is a political concern, not a strictly algorithmic one.
That’s the first bone I have to pick with Lanier. The second is with what he lists as the deeper concern: what he thinks is the underlying message of algorithmic, statistically driven tools. He writes that
(s)ome of the top digital designs of the moment, both in school and in the rest of life, embed the underlying message that we understand the brain and its workings. That is false. We don’t know how information is represented in the brain. We don’t know how reason is accomplished by neurons.
I don’t think he’s quite accurate in this assessment. It seems to me that the real message is not “we understand how the brain works” but “we understand how people behave.” In other words, the algorithms used by Google, Facebook, Twitter, Pandora, and the like couldn’t care less about how our brains are wired; what matters to them, what makes for “good,” useful results, is making sense of the social operations that drive our participation online. Pandora’s algorithm, for example, relies on the Music Genome Project, but the good folks at Pandora assume that musical tastes are about much more than DNA. Based on my musical preferences in the channel I call “Ani DiFranco Radio,” it’s entirely possible that I might be offered a Britney Spears song. I don’t like Britney Spears, and she certainly doesn’t belong on Ani DiFranco Radio. Why? Not because the musical structures of a Britney Spears song are opposed to my expressed musical tastes but because I don’t like Britney Spears. Pandora lets me register a “thumbs down” and thus makes it less likely that I will be offered another Britney Spears song.
Likewise, when people argue that, for example, the SAT is a more accurate predictor of first-year college success than extracurricular involvement, parents’ education levels, or other benchmarks, they’re not arguing that the SAT understands how the brain works. They’re making an argument about validity–basically, that the SAT accurately measures what it’s intended to measure.
It’s fairly well established that if you want to do well on the SAT, you should do your best to be rich, white or East Asian, and male. It also turns out that being rich, white or East Asian, and male makes you more likely to succeed in your first year of college. In this respect, the SAT is making a perfectly valid prediction of college success. The issue, then, is not with the SAT itself but with the assumptions about what “counts” as learning–assumptions that lead to gender, racial, and class biases in both the SAT and in institutions of higher education.
Lanier is right that we should worry about the use of standardized tests to make accountability decisions, but it’s not because the algorithms behind these tests erroneously claim to know how our brains work. It’s because those algorithms erroneously claim to know beyond a doubt what “counts” as good learning, what “counts” as good teaching, and what “counts” as success. These social claims are far more dangerous, far more potentially destructive, than any biological or neuroscientific claims could ever be.