
Learning the Landscape of Digital Literacy

Cory Collins
Kate Shuster

The internet and digital technology have not only changed our day-to-day lives—they have changed the boundaries of education. Most educators embrace the opportunities (and responsibilities) presented by new media and increased access to information.

With a global library of resources at their fingertips, students and educators can research more broadly and deeply than ever before. Social platforms allow for personal and professional connections, regardless of location. Networks of people connected by a common cause have expanded the definition of activism and collective action. And though access to digital resources remains an important equity issue, information has never been more widespread, allowing many students and educators to reach beyond the limits presented by their locations, budgets or other circumstances.

But as the digital landscape becomes more complex and expansive, it is also becoming more difficult to navigate and easier to manipulate, as high-profile reports about the influence of “fake news” and Twitter bots reveal. The ability to navigate this landscape effectively without succumbing to the pitfalls of media manipulation requires a multi-faceted skill set often referred to under the umbrella term digital literacy.

Digital literacy is more than the ability to identify misinformation or avoid bad guys online; it means being able to participate meaningfully in online communities, interpret the changing digital landscape, and unlock the power of the internet for good. Digital literacy, in the modern United States, is fundamental to civic literacy.

Why Is Digital Literacy Important?

The need for digital literacy extends into multiple areas of life, including life away from the keyboard.

Privacy concerns. As identity, personal information and accounts become more entangled with the web, the stakes become higher with regard to privacy. Hacking and doxxing (the purposeful and often malicious reveal of someone’s personal information or images) are weaponized more frequently, and more of us are vulnerable as a result. Even legitimate commercial entities can legally share or sell personal information under certain circumstances, increasing the need for vigilance. Personal security now requires knowing how to protect against these vulnerabilities.

Digital footprints. Screenshots, check-ins, selfies and tagging are part of daily life for many students and adults. With such intense, sometimes involuntary, documentation flooding the digital landscape, online users must understand the consequences of what they share (or what is shared about them), often referred to as their digital footprint. Deleting a post or untagging a photo doesn’t erase online activity. Once words and images go online, they can have a lasting impact on everyone involved.

Uncivil online behavior. Some online communities have moderators or guidelines for participation, but many don’t. Online users need the tools to counter uncivil speech and behavior, and to understand the consequences of engaging in uncivil acts online, including bullying and hate speech.

Fake news. Online content producers are very good at figuring out what kinds of stories get clicks, allowing them to both capture public attention and sell profitable ad space. This ability to manipulate user behavior on the web has led to the spread of false and misleading information. Sometimes this information is posted accidentally, sometimes deliberately for profit, sometimes for attention, and sometimes it is used to promote a political agenda. Users need the skills to think critically about online sources and evaluate their reliability.

Internet scams. Just as they need to evaluate online messaging and calls to action, users need to know enough to be wary of offers designed to exploit their financial trust or breach their personal security.

Echo chambers. The market economy of the internet pushes people into increasingly partisan and divided corners by exposing users to content that reinforces their existing beliefs. Users need skills to help them find different viewpoints and perspectives and to evaluate how their online behavior influences the information they receive. This skill is one of the most powerful tools we have to counter political polarization.

Legitimacy concerns. The online marketplace rewards popularity. News and content providers, search engines, social media sites and advertisers all want high numbers of views, and stories with an extreme partisan bent, humor or memes tend to spread or “go viral.” Content that gets clicked on, however, isn’t always accurate or high quality. Users need the skills to differentiate between the attention an item receives and the item’s actual merit.

The rise of the alt-right. The so-called alt-right’s recruitment strategy exposes a need to understand—and, particularly, to teach young people—how to recognize propaganda and hate speech, even when it is shrouded in humor, irony or pseudoscience. Other extremist groups that purposefully peddle false messages have already adopted the tactics of the alt-right, so the need to recognize its recruitment strategy is greater than ever.

Online radicalization. With extreme ideologies readily present on the internet, users need to be equipped with critical thinking and research skills that will keep them from falling down a rabbit hole of increasingly extreme and isolating content online. Students unable to distinguish good from bad information remain more vulnerable to recruitment by hate groups.

Challenges to Digital Literacy

Volume

Understanding the goals and the importance of digital literacy is one thing, but actually achieving it is no small task. That’s because the sheer volume of information available online is unprecedented—and can be overwhelming. We are exposed to more information than ever before, and the speed at which it comes to us can outstrip our capacity to process that information carefully.

Did you know that worldwide internet users now number nearly half the population of Earth? Moreover, in the last five years, more than a billion people have joined some form of social media. As internet use has grown, so has the online marketplace of news and content. According to the business intelligence experts at the software company DOMO, every minute 4.1 million users watch YouTube videos, 456,000 tweets are sent, 46,740 photos are posted to Instagram, 3.6 million Google searches are conducted and more than 100 million spam emails go out.

Across these platforms, organizations and individuals very often present their content as news or as fact. Accuracy is not always a prerequisite. The truth is that people are more likely to share false or exaggerated stories than fact-checked stories because: (a) factual reporting is time-consuming and expensive; and (b) the benefits of publishing inaccurate news (clicks, ad buys, political capital) often outweigh the consequences. In a fight for attention, quick, “sexy” stories present a profitable opportunity. PolitiFact identified more than 200 sites and Facebook pages deliberately sharing “fake news,” and Google has punished more than 300 sites for publishing fake content. These lists do not include sites that purposefully stretch or distort truth to promote propaganda or partisan messaging. In other words, falsehoods are not merely present on the internet—they are pervasive. And while many internet users may struggle to distinguish fact from fiction, students in particular have trouble telling the difference.

The competition among all these sources for our eyeballs creates an attention economy. Think of this like a carnival, with content providers as the carnival barkers aggressively calling for your attention and (by extension) your money. Once you pay to play their game, the quality and fairness of the game no longer matter. Your click has been recorded, and the exchange has been made. Content providers operate similarly, often sharing different versions of the same story at the same time in a digital space that values page or video views more than accuracy.

Young people—the superusers of the internet—are the ultimate target audience for this economy. More than 90 percent of U.S. teens go online daily; about a quarter of them say they are online “almost constantly.”

Internet culture and social media make it more difficult to assess the credibility of stories for three reasons: the abundance of sources, the speed at which they make a story go viral and the presence of filter bubbles—where like-minded people gather online, often unexposed to varying viewpoints and perspectives. This means that people mostly see stories that confirm their own beliefs and biases. It also means that when fake news spreads within those filter bubbles, fact-checked stories often fail to reach the audience that needs them most.

These bubbles are strengthened by the science behind search engines, social media sites and even our brains. This heavily influences what internet users see on a daily basis—whether they know it or not.

Illusion of choice

Even if the internet offered easy access to a cross-section of information and news, it would still be a challenge to navigate. Media literacy has always required a good eye for reliable sources and the willingness to dive deep into a source and determine what is true and what isn’t.

But the internet isn’t an open library. It’s a predetermined experience.

One reason for this is that the algorithms powering search engines and social media timelines often choose what we see, tailoring our browsing experience based on our search histories, interests, posts we like, what we buy, our location, personal data, etc. Imagine if a restaurant rearranged its buffet to put the food you frequently eat and enjoy at the front end of the table so you’d see it first, or if the restaurant even removed what you don’t like from the table entirely. Algorithms work in a similar way. Algorithms not only serve the purpose of showing us content or ads that closely align to our interests; they also give us the illusion of comfort and belonging within a platform, increasing the likelihood that we’ll continue to use it.
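To make the idea concrete, here is a minimal sketch in Python of how a personalization algorithm might rank a feed. It is purely illustrative: the personalize function, the topic tags and the interest weights are assumptions invented for this example, not a description of any real platform's system.

# A toy, illustrative ranking function; NOT any real platform's algorithm.
# Assumptions: each item carries a list of topic tags, and the platform stores
# a per-topic "interest" weight for the user based on past behavior.
from typing import Dict, List


def personalize(items: List[Dict], interests: Dict[str, float], limit: int = 3) -> List[Dict]:
    """Rank items by how closely their topics match the user's recorded interests."""

    def score(item: Dict) -> float:
        # Add up the user's interest weight for every topic attached to the item.
        return sum(interests.get(topic, 0.0) for topic in item["topics"])

    # Familiar, interest-matching content floats to the top; everything else falls off the feed.
    return sorted(items, key=score, reverse=True)[:limit]


feed = [
    {"title": "Local election results", "topics": ["politics"]},
    {"title": "New bike trail opens downtown", "topics": ["cycling", "local"]},
    {"title": "Grape soda study questioned", "topics": ["health"]},
]
user_interests = {"cycling": 0.9, "local": 0.6, "politics": 0.1}

for item in personalize(feed, user_interests):
    print(item["title"])  # The cycling story leads; the health story may never be seen.

Even in this toy version, content the user has shown no interest in quietly drops out of view, which is the mechanism behind the buffet analogy above.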

Algorithms are not the only way that content is pushed to us without our knowledge. Content creators hoping to make money, spread political propaganda or both boost certain kinds of content, either by gaming algorithms or causing something to “trend” through widespread sharing. (There is actually a cottage industry devoted to this type of signal boosting.) Bots can take this a step further, creating an army of fake electronic messengers whose high levels of engagement create the illusion that a topic or piece of content is popular or important.
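As a simplified illustration, the short Python sketch below shows how automated accounts can push a fringe topic past a popularity cutoff. The post counts and the trending threshold are invented for the example; real platforms use far more complex (and undisclosed) signals.

# A toy illustration of engagement-count manipulation; the numbers and the
# trending threshold are invented for the example, not taken from any platform.
from collections import Counter

organic_posts = ["#LocalNews"] * 40 + ["#FringeClaim"] * 5   # what real users actually posted
bot_posts = ["#FringeClaim"] * 200                            # coordinated automated posting

engagement = Counter(organic_posts + bot_posts)

TRENDING_THRESHOLD = 100  # hypothetical cutoff for what gets surfaced as "trending"
trending = [tag for tag, count in engagement.items() if count >= TRENDING_THRESHOLD]

print(trending)  # ['#FringeClaim'] -- the manipulated topic now looks far more popular than it is

The point is not the specific numbers but the pattern: raw engagement counts cannot distinguish genuine interest from coordinated amplification.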

People who know how to manipulate search engine algorithms or organize an army of social media users—real or robotic—have a lot of power in this new attention economy. Their methods often influence what trends (or appears popular) and, therefore, what the talking points of the day become. News outlets desperate for headlines soon follow suit, figuring that if enough social media users (real or fake) talk about a subject to get it trending on Twitter, Facebook or Google, it will likely make for a well-trafficked story. This is called media manipulation or media hacking.

Media consumers have been conditioned to believe that topics appear in the headlines organically—that these are the subjects that matter most, no matter their origin. That’s what makes media hacking so dangerous: misinformation is legitimized before it is caught. According to a report by the technology-focused research institute Data and Society, “The media’s dependence on social media, analytics and metrics, sensationalism, novelty over newsworthiness, and clickbait makes them vulnerable to such media manipulation.”

Media hacking increases the visibility of ideas, stories and movements that once existed only on the fringes or were never real to begin with. Journalists pressured to deliver breaking news may get caught up in the speed of online media and share misinformation unknowingly. Well-followed influencers tricked by a post may share something hurtful or fake without a thoughtful evaluation of the content. Well-intended users—like many of us—may see a story cloaked in what seems like legitimate stats, graphics or science and then may unknowingly participate in the hack by sharing misinformation.

These falsehoods rise to (and often above) the level of legitimate news stories in the attention economy. And, unfortunately, we do not possess the natural capability to easily find and remove bias and lies from our news feed.

Cognitive shortcuts

Part of understanding digital literacy means understanding the science of how we think. Our brains use shortcuts (often called heuristics) to cut through confusing masses of information—such as the overwhelming number of stories that we encounter on social media every day. Here are just a few examples of brain tendencies we have to overcome if we want to remain open to multiple viewpoints, address our biases and resist misleading content:

Confirmation bias: The tendency to be more willing or likely to believe information that supports what we already believe to be true.

Example: John’s favorite drink is grape soda, and he thinks his mother’s concerns about his health are overblown. When he sees a misleading headline, “Study suggests drinking grape soda improves health and happiness in teens,” he shares it on his social media without reading the text of the article, which points out the study’s poor methodology and the fact that it was sponsored by the soda industry. He uses this study to dismiss his mom’s worries.

While this is a humorous example of confirmation bias, the story won’t always be about grape soda. Confirmation bias makes it more difficult to fact-check false narratives that go viral, especially for readers who find that the false narrative fits their pre-existing beliefs. This bias also makes it more difficult for people to listen to perspectives and voices outside of their culture (or their filter bubble) in a way that is empathetic and open-minded.

Illusion of explanatory depth: The tendency to believe we know more than we truly do.

Example: Most people feel confident, when asked, that they understand how a bike works. We were all kids once, and many of us rode bicycles. But in a 2006 University of Liverpool study, people asked to draw a bicycle often failed miserably, misplacing the pedals, chain, wheels and frame.

This tendency to trust our knowledge—even when we shouldn’t—can inhibit our ability to fact-check, make connections or research a topic we think we understand. If we don’t think we need to look further into something, we won’t. This leads to limited understanding. It also makes it easier for media manipulators to peddle cleverly designed misinformation.

Dunning-Kruger effect: A cognitive bias that leads people of limited skills or knowledge to mistakenly believe their abilities are greater than they actually are.

Example: Claudia considers herself a “grammar nerd” and tells everyone that she sweats over the improper use of semicolons. She did, after all, major in English. When an editor returns her academic paper and notes several misused commas and run-on sentences, she is flummoxed and feels it is the editor who must be wrong.

Most people believe they are equipped to handle information overflow on the internet and overestimate their ability to sniff out a fake story or misleading headline. Students and adults alike feel more digitally literate than their online behavior actually indicates. This means part of the challenge of teaching digital literacy is convincing people that they need the information or that they are part of the problem.

Illusion of comprehension: A cognitive bias that occurs when people mistake familiarity or awareness for actual understanding. Also called the “familiarity effect.”

Example: Trey studied for the upcoming history test by looking at flash cards for hours. But he still got a bad grade on the test and didn’t understand why. Trey had become familiar with all of the dates and people he was going to be tested on, but he couldn’t remember how all the pieces connected, nor could he describe the bigger picture. Memorization didn’t work on a test that required a demonstration of deeper understanding.

This cognitive bias has two major implications for internet users. First, it means people often mistake a surface-level awareness for deeper understanding, making them less likely to look closer at a topic they’ve seen discussed repeatedly online. This often leads to people taking strong positions on topics they hear about a lot, but of which they actually know little, such as climate science. Second, it leads consumers in the attention economy to accept repeated or familiar misinformation as factual information. The more a conspiracy theory or fake story gets repeated by content providers, the more that theory or story becomes familiar to online users, increasing the likelihood that their brains will accept it as fact.

Together, these cognitive shortcuts leave us vulnerable to disinformation in the digital media world. Even fact-based checks on fake stories are met with resistance from a brain that has already accepted another reality (a phenomenon known as belief perseverance). This is why it’s so important to be able to identify and resist misinformation in the first place; reactive fact-checking rarely works. It’s also why it’s dangerous when fake stories spread too widely: Repeated exposure to false information may induce people to believe that information is true, even if there is evidence to the contrary (a phenomenon known as the illusory truth effect).

Loss of trust in the media

Democracy depends upon a free press and trust in the information it provides. Loss of trust in the media has consequences. According to the Data and Society report, more manipulation of the attention economy and the media means “decreased trust of mainstream media, increased misinformation, and further radicalization.”

Already, we know that young people lack trust when it comes to traditional news and rely heavily on social media as their primary source of news, leaving them open to misinformation campaigns. Moreover, if young people increasingly reject journalistic institutions, they will not seek out as much high-quality, investigative reporting. Journalism has historically served to hold those in power accountable and expose truth. But if people stop seeing reporting as truth, who is held responsible?

Another negative consequence is that lack of trust in institutions will make self-government difficult, as people will be less likely to learn about and take part in civic engagement. Will today’s students understand the power of the vote and the power of representation if they do not, first, believe in the legitimacy of voting and government? And the less we participate in self-governance, the less we understand how government works and how to hold public officials accountable. This lack of engagement can lead to less resistance and more corruption, making trust issues even worse.

We also know young people can become less empathetic in an internet culture that prizes humor and viral memes (sometimes referred to as meme culture) more than genuine human connection. Groupthink only makes this worse, and can cause insensitive or even cruel behavior to happen en masse. The normalization of trolling, shaming and exploiting others’ insecurities for the likes or “lulz” has made the internet an often-uncivil place. And radical groups use these tactics to appear youthful, edgy and fun while disguising their hateful messages as humorous and normal. In a worst-case scenario, students more susceptible to misinformation and meme culture also become more susceptible to radicalization and recruitment from extremist groups that promise a sense of belonging, appeal to a teen’s need to rebel and find new identity, and campaign against certain groups of people with misinformation and troll tactics.

While these consequences may not seem individually threatening, when combined, they can profoundly alter what internet users believe and how they behave—socially, politically and economically. Over time, this loss of trust even has the power to destabilize our democracy.

Toward a More Digitally Literate Society

It may seem like a daunting problem, but there are steps we as individuals and as members of institutions like schools, clubs and professional associations can take to become more digitally literate and to encourage the digital literacy of others. Begin by exploring the lessons, videos and professional development materials Learning for Justice has created to help internet users of all ages become more savvy and self-aware as they navigate the online world. Try a few activities, like taking steps to balance your media diet, learning the language of digital literacy and watching a short video on how “fake news” becomes just “news.” You’ll soon begin to notice cues you may have previously ignored—maybe tipping you off to a source that isn’t reliable, an online offer that seems too good to be true, or to the habits of your own mind. It may require work, but it’s work we must undertake. And the more familiar we become with the problem, the more easily and capably we can become part of the solution.

Vocabulary

Algorithm: A procedure used to locate specific data within a collection of information. Also called a search algorithm.

Attention economy: The idea that one of the driving forces of online interactions is the exchange of attention, rather than goods or money.

Belief perseverance: The tendency to continue believing something even after learning that the foundation of the belief is false.

Bot: An automated online program; short for web robot.

Digital literacy: The ability to participate safely, critically, meaningfully and justly in the production and consumption of content online. 

Digital footprint: The information about a person that can be found online as a result of their internet activity.

Filter bubble: The limited perspective that can result from personalized search algorithms.

Groupthink: A group’s practice of thinking or making decisions in such a way that promotes harmony and conformity within the group at the expense of creativity or individual responsibility.

Heuristic: A cognitive shortcut, rule or method that helps people solve problems in less time than it would take to think the problem all the way through.

Illusory truth effect: A cognitive bias that occurs when people confuse repetition with truth. Repeated exposure to false information may induce people to believe that this information is true, even when they know better.

Lulz: Laughter and enjoyment, usually at someone else’s expense.

Media hacking: The manipulation of electronic and online media, especially social media, to shape a particular narrative.

Meme culture: Internet culture centered around the creation and distribution of memes: images, videos, phrases, symbols or other brief texts meant to be funny and shared widely online, often with slight changes.

This publication was written by Cory Collins based on research by Kate Shuster. Adrienne van der Valk and Maureen Costello contributed editorial support.
