Digitizing Truth

1979: Alma College Professor of Religion Dr. Ron Massanari teaches class outside – from the Alma College archives.

My first year of college, I took the obligatory Introduction to Western Thought class at 8:00am Tuesdays and Thursdays. This class taught me a couple of things: I hate morning classes, and I don’t know much about anything. In some ways, Dr. Massanari’s class fundamentally changed the way I think about truth and knowledge. I think he’d be happy to know that I walked away from his class questioning how we “know” things. His course encouraged my interest in language and how language constructs the way we view the world. Through him, I found Foucault.

So, when The Atlantic published Friedersdorf’s “Should Google Always Tell the Truth?” I was perplexed. The article doesn’t quite hit the mark. Friedersdorf asks, “When should it [a search engine] direct searchers as neutrally as possible to the Web pages that they’re seeking?” While this question is interesting, I don’t think it’s the most relevant one. A better question might be, “How does Google determine what is neutral information?” or “Who determines what is true?” These questions have real consequences for how we prosume information online.

As I’ve pointed out in the past, sites like Wikipedia that are collaboratively written, edited, and cross-checked still have issues with truth and neutrality. In particular, the voices of women are underrepresented on Wikipedia and other collaborative writing spaces. In other words, truth is always situated in culture. We should be concerned that search engines like Google have so much power over how we know things.

Or, as Dr. Massanari might say, “Clarify and define the basic assumptional claims of Google.”

Plagues, Viruses, and the Internet

One of my ongoing research interests is how metaphors shape the way we think about the interwebs. For example, we often describe technological changes as revolutions, breakthroughs, or cutting edge. These kinds of metaphors may mask the ways in which the interwebs are not revolutionary and instead recapitulate existing inequalities.

Nevertheless, one of my favorite metaphors is the virus/plague/meme metaphor. In sixteenth-century England, authors often compared the spread of unregulated books to the spread of disease. Similarly, we describe unwanted programs as viruses that infect our machines. This metaphor makes sense to us because plagues and viruses spread without our control.

Like viruses, images, news stories, and memes proliferate on the internet without our control. For example, historian Monica Green from Arizona State University recently found that a Medieval illustration is often used online to depict the Black Death. However, the image actually shows clerks with leprosy being taught by a bishop. Green and her colleagues traced the error back to a catalog mistake at the British Library: “While art historians have long known what this image portrays, it was mislabeled as a plague image when the British Library’s digitization process removed it from its original textual context” (Jones). In other words, the error is replicated online over and over because one librarian made a simple mistake.

Errors are not new to publishing. After all, just looking at the Hamlet Quarto Project shows you how easily errors, changes, and typos can appear in printing. Or, in the infamous Wicked Bible (1631), the printers dropped the word “not” and the Seventh Commandment suddenly read, “Thou shalt commit adultery.” Mistakes happen. However, digital publishing can make these errors widespread. What’s more, when an image or error is uploaded to Wikipedia, the mistake is perceived as truth.

At the same time, the affordances of digital media allow us to fix these errors. A mislabeled image on Wikipedia can be easily removed or relabeled by anyone with an internet connection and the know-how. Conversely, an error may circulate for years because those sixteenth-century writers were right – information is like a virus.