[Un]Critical Digital Literacy
A cautionary tale
Fake news: a good crisis
Do you think we would be talking about “fake news” if the results of the past US election had been different? Think about it. Even if Trump had not made fake news a buzzword, the deceptive sites would still be there. Do you think French media will pay as much lip service to the idea of fake news as the American media has, now that Emmanuel Macron — fortunately — won the French elections?
Propaganda has been around for a long time and, in my opinion — from a strictly educational point of view — any attempt to make a distinction between propaganda and fake news feels like a stretch. One of the most common arguments about the alleged difference between the two is that fake news has a financial motivation behind it, although political gain has also been mentioned. I understand financial gain as a reason to engage in the dissemination of deceptive media, but how does financial gain change the mechanisms of deception? How is deceptive media fundamentally different because of financial gain? Despite early suggestions of the influence of fake news on the US election — mostly disseminated through social media — research shows that fake news had comparatively little to do with its outcome, and that its impact may have been severely overrepresented in popular media. In fact, in “Fake news is a red herring”, Ethan Zuckerman notes that “[fake news] impact and visibility comes mostly from mainstream news reporting about fake news.” Maybe the questions to ask are: Why is it worth rebranding propaganda as fake news? Who benefits from it? Considering its ongoing credibility crisis, mass media might have very good reasons to maintain the buzz. After all, as Winston Churchill allegedly said: “Never let a good crisis go to waste”.
If that were the case, mass media would not be the only ones benefitting from this “good crisis”. Fake news became the perfect opportunity for scholars to engage with media, something that has been not only suggested but actively encouraged by universities and funding agencies. One of the most recent examples around the fake news topic is the case of Dr. Melissa Zimdars, Assistant Professor of Communication in the Department of Communication and Media at Merrimack College and head of OpenSources: “a curated resource for assessing online information sources, available for public use” that went viral recently. OpenSources and B.S. Detector (a plugin powered by the now popular site) have been presented as digital solutions for a digital problem, and have been implicitly promoted as educational tools. But again, we know that deceptive media is not a product of the digital age; in fact, a Stanford study even found that “social media have become an important but not dominant source of political news and information. Television remains more important by a large margin”.
So I wonder, are we educators inadvertently tipping the scale in favour of mass media by making deception a “digital literacy” problem?
“An algorithm is a procedure or formula for solving a problem, based on conducting a sequence of specified actions.” Algorithms are at least partially responsible for the association of deceptive media with digital technology. These shapeless digital monsters have been blamed for controlling our news and media consumption. And yet, algorithms are a perfect model of human bias. In a nutshell, when it comes to news feeds, algorithms use our preferences to curate the content we access and to suggest more; they only give us more of what we like, just as we do ourselves. If anything, the problem is the way we select our sources and feed our own confirmation biases. It is a well-known fact that we tend to reject facts that contradict our core beliefs. In fact, when we are confronted with such facts, our amygdala reacts (the same area of the brain that detects and responds to physical threats). This would help explain the very modest effect that fake news actually had on the U.S. election. The phenomenon is known as the backfire effect, which occurs when “in the face of contradictory evidence, established attitudes do not change but actually get stronger”. (David McRaney created a great podcast series on this topic, and Matthew Inman from The Oatmeal made an awesome comic about it. McRaney recently updated his series to elaborate on the difference between beliefs and attitudes.) The good news is that it is possible to overcome the backfire effect; the bad news is that it is really hard. In The Affective Tipping Point: Do Motivated Reasoners Ever “Get It”?, David Redlawsk, Andrew Civettini and Karen M. Emmerson find that there is a point at which an overflow of contradictory information can make people change their minds.
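To make the feedback loop concrete, here is a minimal sketch of preference-based feed ranking. Everything in it is hypothetical (the topics, the ranking rule, the click model are mine, not any real platform’s), but it shows the mechanism described above: each click reinforces the preference that ranked the item, so an initial lean compounds.

```python
from collections import Counter

def rank_feed(articles, preferences):
    """Rank articles by how often the user engaged with each topic.

    `articles` is a list of (title, topic) pairs; `preferences` is a
    Counter of past clicks per topic. A toy model, not a real ranker.
    """
    return sorted(articles, key=lambda a: preferences[a[1]], reverse=True)

def click(preferences, topic):
    """Each click reinforces the preference that surfaced the item."""
    preferences[topic] += 1

# A reader who starts with a slight lean toward one topic...
prefs = Counter({"politics_left": 2, "politics_right": 1, "science": 1})
feed = [("Story A", "politics_left"),
        ("Story B", "politics_right"),
        ("Story C", "science")]

# ...keeps clicking whatever ranks first; the lean compounds.
for _ in range(3):
    top_title, top_topic = rank_feed(feed, prefs)[0]
    click(prefs, top_topic)

print(prefs)  # the initial lean has grown, the other topics never surface
```

The point is not the code but the loop: the ranker never lies, it only optimizes for what we already prefer, which is exactly how we feed our own confirmation biases.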
Algorithms make it nearly impossible to accumulate enough information contrary to your beliefs to change your mind about something. In that sense, curation can be an enemy of critical thinking. But you know what else makes it difficult to access other points of view? “A curated resource for assessing online information sources, available for public use” in which not only fake news is flagged, but also entire websites with editorialized content for conservative audiences, for example. You might be thinking, “sure, but we are the good guys”, and I totally understand that. As a parent, I want my children to have views that align with mine. I want them to share my confirmation biases with me. As an educator, I would love to have only “good people” in my classroom. But that is not critical thinking. I mean, learning how to use online curated lists might be a form of digital literacy, but it is definitely not a critical one. Now, I am not saying that Dr. Zimdars’ list is evil or anything like that; in fact, I think it is a valuable resource. But it is very clearly stated on opensources.co that its mission is “to empower people to find reliable information online”, not to provide a tool for critical literacy. Moreover, Dr. Zimdars cannot answer for the individuals (or individual) who curate the site. And that is precisely the problem. Programmers and scientists have been trying to train neural networks to detect individual instances of fake news (instead of flagging outlets altogether), and it is really hard. So far, we can only rely on people to curate these sites.
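The coarseness of outlet-level curation is easy to see in code. Below is a sketch of how a domain-blocklist plugin works in principle; the blocklist entries are invented examples, not OpenSources’ actual classifications, and the function is mine, not B.S. Detector’s implementation.

```python
from urllib.parse import urlparse

# Invented example domains, labeled the way curated lists label outlets.
BLOCKLIST = {
    "example-rumors.net": "rumor",
    "example-satire.org": "satire",
}

def flag(url):
    """Return the outlet-level warning tag for a URL, or None.

    The check never looks at the article itself: an accurate story on
    a flagged domain and a false story on an unlisted domain are both
    invisible to it. That is the trade-off of curating outlets instead
    of detecting individual instances of fake news.
    """
    host = urlparse(url).hostname or ""
    if host.startswith("www."):
        host = host[len("www."):]
    return BLOCKLIST.get(host)

print(flag("https://www.example-rumors.net/true-story"))  # flagged anyway
print(flag("https://unlisted-outlet.com/false-story"))    # not flagged
```

Whether a domain ends up in such a dictionary is entirely a human editorial decision, which is where the curators’ biases enter.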
But we don’t know the biases of these people. Who are they? Why should we trust them? At least with algorithms we know what to expect.
But first, a disclaimer: I am not pro-Wikileaks nor anti-Hillary (if I were an American citizen, I would have undoubtedly voted for Hillary in the past U.S. election). That being said, the next paragraphs are not going to be pretty. You can always skip to “Pre-digital solutions”, of course.
- In late July 2016, Wikileaks released several batches of emails leaked from the U.S. Democratic National Committee (DNC). These leaks showed clear bias within the DNC toward Hillary Clinton’s campaign over Bernie Sanders’s, a bias that was initially denied and that cost Debbie Wasserman Schultz the chair of the DNC. Donna Brazile was chosen to replace Wasserman Schultz.
- In October 2016, almost one month before U.S. election day, journalist Jordan Chariton revealed that Donna Brazile had leaked to Clinton’s team a question that was going to be given to the candidate the next day, during a town hall session hosted by CNN. Despite the evidence, Brazile denied the accusations until very recently, when she expressed regret about her actions.
- In a recent interview (May 2, 2017) Clinton said that she “was on the way to winning until a combination of Jim Comey’s letter on October 28 and Russian WikiLeaks raised doubts in the minds of people who were inclined to vote for me and got scared off”. Earlier analyses of the impact of these events on Clinton’s campaign dispute these declarations.
In each one of these paragraphs, there is some form of lie or misrepresentation. CNN has been directly involved in at least one, but it is not flagged by B.S. Detector (nor listed in OpenSources). There is no reason to flag CNN, because — I assume — even if they editorialize their content or make mistakes while reporting particular events, it would not be fair to conclude that the outlet itself is biased or unreliable. However, since the first release of the OpenSources repository, Wikileaks has been listed as an unreliable source. In the beginning, only the tag “rumours” was provided as a rationale for the flagging, but since April 1, a link and the legend “Increasingly Wikileaks is being accused of spreading misinformation” have been added to the descriptors. The link is an opinion piece published in The New York Times titled “The truth about the Wikileaks C.I.A. cache”. It is important to note that The New York Times openly endorsed Hillary Clinton’s campaign, but the inclusion of such a link as a rationale for adding Wikileaks to the list of unreliable sources is obviously not seen as bias. On the other hand, The Intercept (an outlet that is not flagged by B.S. Detector) showcases an article by Glenn Greenwald stating that Wikileaks has a “perfect, long-standing record of only publishing authentic documents”. These inconsistencies are problematic to say the least, but totally understandable. They are the product of people’s biases.
It was not pretty, I know… If you are an educator and a proponent of these resources, I know how you are feeling. That is the backfire effect kicking in.
As academics and educators we should — at the very minimum — be able to practice what we preach and be truly critical of what we promote and recommend, even if it goes against our own biases. Critical literacy is not about curating content for the “right” side; it is exactly the opposite. It “encourages readers to actively analyze texts and offers strategies for what proponents describe as uncovering underlying messages”, whatever those underlying messages are and wherever they come from. If we are serious about critical literacy and are not just using the situation as a “good crisis”, we have to be as honest and transparent as we can and put our own biases up front.
Deceptive media has existed for a long time, and attempts to frame it as a matter of “digital literacy” ignore not only that long history but also previous efforts to deal with it. In my opinion, curating information is not the same as “fighting fake news”. Curation avoids the fight. We need to facilitate the analysis of deceptive media for our students, and for that, we need to make deceptive media available in a relatively controlled environment. I would even recommend letting students experiment by creating their own deceptive media artifacts. In my opinion, the exercise of producing “fake news” demands at least a fundamental understanding of the genre, which is crucial for deep analysis. We even have some solid frameworks to perform these analyses! Rhetoric has been around for literally millennia (talk about pre-digital) and is still used extensively in the production of persuasive media. Check out this TedEd video by Camille A. Langston.
Another great resource is the Debunking Handbook by John Cook and Stephan Lewandowsky, in which the authors provide a systematic approach to persuasive argumentation.
It is indisputable that the buzz about fake news is bringing attention to issues of media deception. This is not a bad thing at all: it is an opportunity for all educators to tackle the issue in a critical way, particularly if we manage to become more reflexive about our methods and open about our biases. It is also a great opportunity to pass these dispositions on to our students. Sounds like a potentially good crisis to me.