Search results aren’t neutral. Sometimes they can lead us to misleading and even hateful parts of the internet. Safiya Noble and Heidi Beirich explain how this happens and what we can do about it.
Earn professional development credit for this episode!
Fill out a short form featuring an episode-specific question to receive a certificate.
Please note that because Learning for Justice is not a credit-granting agency, we encourage you to check with your administration to determine if your participation will count toward continuing education requirements.
Resources and Readings
Departments of Information Studies and African American Studies, UCLA
- Safiya Noble: Algorithms of Oppression: How Search Engines Reinforce Racism
- Google: AdWords
- Sarah T. Roberts: Behind the Screen: Content Moderators as the Internet’s Invisible Gatekeepers
- Ida B. Wells: Anti-Lynching Activism
- Tressie McMillan Cottom: Lower Ed: The Troubling Rise of For-Profit Colleges in the New Economy
- Ars Technica: "Blackbird browser reaches out to African American community"
Intelligence Report, Southern Poverty Law Center
- SPLC: "Judge: Neo-Nazi must pay more than $14 million to Jewish woman targeted by harassment campaign"
- Lawfare: "Facebook’s Role in the Genocide in Myanmar: New Reporting Complicates the Narrative"
- Anti-Defamation League: Fighting Anti-Semitism and Hate
- Data & Society
Monita Bell: It was the summer of 2010, and I had recently heard about an organization dedicated to the empowerment of black girls. The work was relevant to my personal and academic interests, and I wanted to learn more, so I decided to Google it, as one does. Not remembering the exact name of the organization but recalling the phrase “black girls” in it, I used those two words as my search term. Y’all … these search results? I just can’t. In the top seven of more than 276 million results, only one was not related to porn. I was dumbfounded, I was hurt, I was angry. When searching the world’s most influential search engine for information on black girls, girls like me, like my sisters and cousins and nieces and now my child, how did it happen that we were most associated with pornographic sex?
After my initial reactions subsided, my knowledge of history took over. That association of black girls and women with lewd and even pathological sexuality is so old. There’s pseudoscience around it from at least the 19th century, if not earlier. We know that enslavers used it to justify the sexual exploitation of enslaved women and that views of black women as hyper-sexual made their way into novels and films and so on. How does this history connect to Google? You’re about to find out. Just so you know, it turns out the organization I was searching for is called “Black Girls Rock.” Imagine if that was the message that rose to the top.
I’m Monita Bell, your host for The Mind Online. This podcast comes to you from Teaching Tolerance, a project of the Southern Poverty Law Center. In each episode, we’ll explore an aspect of the digital literacy world, what educators and students alike need to know, and how educators can guide students to be safe, informed digital citizens. In this episode, I speak with Safiya Noble, an assistant professor at the University of Southern California’s Annenberg School [for] Communication. It just so happens that she and I performed identical Google searches around the same time, give or take a year or so. That search led her on a path of research that includes her 2018 book, Algorithms of Oppression: How Search Engines Reinforce Racism.
We talk a great deal about that research. Beyond intersections of racism and sexism in search results when it concerns women and girls of color, as was the case with mine and Safiya’s searches, there’s also the element of search engines allowing content from white supremacist organizations to rise to the top of searches about black Americans and other marginalized groups. That high ranking indicates legitimacy and credibility and can influence people, as in the case of Dylann Roof, who murdered nine black people in Charleston, South Carolina, in June 2015. To dig into this issue, I speak with Heidi Beirich, director of the Southern Poverty Law Center’s Intelligence Project, which tracks and reports on hate groups and other extremists throughout the United States.
When we ask each other, “Did you Google it?” we need a better sense of what might come with that. First, my chat with Safiya Noble about how search engines reinforce racism, and how educators can approach this issue. Let’s get into it.
Safiya, thank you so much for talking with me today on a topic that is very near and dear to my heart. Before we get into it, will you introduce yourself and tell us a little about what you do?
Safiya Noble: Sure. My name is Safiya Noble, and I am a professor at the University of Southern California, in the Annenberg School for Communication. I am the author of a new book called Algorithms of Oppression: How Search Engines Reinforce Racism.
Monita Bell: Thank you, and this is near and dear to my heart because I didn’t tell you this before, but you and I actually performed identical Google searches around the same time.
Safiya Noble: Did we?
Monita Bell: Yes, “black girls,” and for me, it was around the summer of 2010. I experienced what you experienced, just the, What is this?
Safiya Noble: Is this for real? It might also be a question that we ask. How could this be—
Monita Bell: Yes, I absolutely did. How could it happen, how is this even a thing? At the time, it was 2010 for me.
Safiya Noble: Yes.
Monita Bell: As you note in your book, Google has become synonymous with Internet search, pretty much. It’s a commonly held belief that it’s a neutral, objective repository of information, and also that algorithms themselves are neutral. Yet Google is, first and foremost, a commercial platform. You speak at length about this in the book, but can you explain why that commercial element matters, and how we approach Google as a source of knowledge-gathering?
Safiya Noble: Sure. One of the things that I think is really important to understand about the commercial nature of the Internet, and I say this as someone who has been on the Internet since, oh, now I’m going to date myself, but easily since 1989, back in the dial-up days: as we’ve watched platforms come into existence of a commercial nature, many of the large platforms we engage with are completely dependent upon advertising as their financial model. They financialize data of all types, including search engines, which are commonly thought of as a portal into all the world’s knowledge, the place we go when we don’t have answers, that we can ask questions of.
Truly, Google Search is an advertising platform. It optimizes content based on its clients, who pay the company to help create more visibility for their content. It’s also optimized by people who have an incredible amount of technical skill; these might be search engine optimization companies. In my former life, before I was an academic, I was in advertising and marketing for 15 years. As I was leaving that industry, we were spending all of our time trying to figure out how to optimize getting our clients to that first page of search results through our media buying and public relations activities.
It’s a system that, while on one hand is incredibly reliable for certain kinds of queries, like, Where can I find the closest coffee shop to me? it is terrible for asking questions of a more complex nature, like keywords or questions around identity, trying to make sense of complex social phenomena, or just, quite frankly, not correcting the kinds of misinformed questions that we might ask of it.
Monita Bell: Of course, this goes to the algorithm. A lot of people think that “Oh, well search results are just ranked according to popularity, or whatever words or terms are linked the most or clicked the most, for a given set of keywords.” Can you explain, as you do in the book, why that’s not the case? What’s actually the case, when it comes to what’s getting prioritized?
Safiya Noble: Sure. I can tell you that those of us who study Google and companies that are highly reliant upon algorithms and artificial intelligence are often trying to deduce how they work from the factors that are visible to us. No one except people who are under NDA at Google actually knows the specifics of how its search algorithm works, so let me just give that bit of a disclaimer.
Monita Bell: Got you.
Safiya Noble: However, we definitely know that there are multiple factors that go into optimized content in its platform. Certainly, popularity is the first line of organization. In the early days of Google, there was the process of hyperlinking among websites as a way to evaluate legitimacy. For example, if you had a website and I linked to it, and thousands of other people did too, let’s say 10,000 of us linked to SPLC’s website, we would give it legitimacy. That is a form of pointing or popularity or legitimacy-making in a search engine. There are also other factors, like Google’s own AdWords product, which allows people, 24/7, to enter into a live auction where they can offer to pay X amount of money for optimization of the keywords that they associate with their content.
This is a more difficult place for all of the commercial search engine companies because, as we know, if we look, for example, to the 2016 U.S. presidential election, many different actors from around the world with all kinds of nefarious or malicious intent might engage in optimizing disinformation. That part of the monetization of content is much harder to regulate or manage. There are certainly algorithms that are deployed, and they are often updated. Google itself says it has over 200 factors that go into its algorithm, which does this automated decision-making about what we find on the first page. And of course, in the book, I really focus on what comes up on the first page of results, because the majority of people who use search engines don’t go past the first page.
It really stands as the truth, so to speak, in terms of how people think of what shows up there. There’s a third invisible force that’s playing a role, and these are what my colleague at UCLA, Sarah Roberts, has written a new book about, called Behind the Screen: the armies of commercial content moderators. These are people who have to manually filter out or curate out content that might be illegal: for example, child sexual exploitation material, or anti-Semitic content that might show up in Germany or France; any content that would be illegal in the place where a search engine is doing business. That’s the trifecta, I think, of important factors that influence what the algorithmic output is.
Monita Bell: Got you. Why do you think it’s important for really all of us, but certainly young people who are learning the ins and outs of doing online searches, why do they need to know those different elements of what factors into the algorithm?
Safiya Noble: The process of becoming educated is complex, for sure, and I think that we’re all educated by a variety of different factors: the media, our families, teachers, professors if we’re fortunate, and Internet search engines, for sure, are playing a role. I think young people are increasingly asking search engines complex questions that might be better suited or pointed toward, say, a reference librarian or a trusted, learned adult who might know, or maybe organizations that have been working in an area for a long time who have bodies of evidence and research that could help inform a point of view.
In the past, we’ve always relied upon teachers and schools and textbooks and librarians, literature, art, to help us develop our education and become knowledgeable. I think we cannot substitute those multiple inputs with a search engine. This is the main thing that I certainly stress to my own students, which is, a Google search or any kind of commercial search engine might be a starting place, but it certainly should not be the end. Questions about complex social phenomena are rarely answered well by search engines. This is a place where I think figuring out what’s appropriate for the right place, and the right knowledge sector, is really important. I don’t think of search engines myself as places that lead us to knowledge, but you might get some helpful information.
There certainly is a difference between information, knowledge and then wisdom. That wisdom might take a much more contemplative set of practices. The other thing that I think people need to think about, young people in particular, is that expecting to ask complex questions and getting an answer in .03 seconds is really antithetical to everything we know about being deeply educated, well-educated and having a depth of knowledge. There are some things you cannot know in .03 seconds. We should not socialize ourselves to thinking that there are instant answers to complex questions.
Monita Bell: That actually reminds me, I had a conversation earlier with Heidi Beirich, who is over our Intelligence Project here at the Southern Poverty Law Center. We were talking, in particular, about the autofill that pops up when you begin a Google search, and this very idea that even if you go in looking for something in particular, just by typing one word, you’re getting these suggestions for other things that may lead down these terrible rabbit holes.
Safiya Noble: It’s true.
Monita Bell: I think it gets to the point you’re making here, and definitely in the book, that when we look to search engines to replace libraries and replace what experts can tell us in these more informed ways, then that’s when we get into trouble.
Safiya Noble: It’s really true. Auto-suggestion is something that I’ve also been very interested in. One of the ways that I’ve looked at this, for example, is when I did queries on Mike Brown after he was killed in Ferguson, Missouri, and saw that the auto-suggestions for Mike Brown led to his autopsy, video of his death. Similarly, I wrote a paper about Trayvon Martin and George Zimmerman, looking at the searches there, where the auto-suggestions for Trayvon Martin were, “Was a thug,” “Is a bad person.” These really negative framings of Trayvon Martin, while George Zimmerman was framed up in autocomplete with, “Is a hero.”
These narratives are really potent in shaping and framing how someone who might have zero knowledge of, for example, George Zimmerman’s murder of Trayvon Martin, for which he was acquitted, we know, might be oriented to thinking about both of these people. I think this ... demonstrates again, the complexity and the loss of what it means ... for communities and families to narrate the tragedies that befall them or other kinds of experiences. When I think about other moments in history, like, for example, Ida B. Wells using photographs of lynchings of African Americans to go around the country: those photographs on their own were in some people’s homes as memorabilia, celebrating participation in these heinous, horrific acts.
She was able to take those photographs and use them in service of ending lynching, or organizing people in service of ending lynching, legalized lynching. What we can’t do in something like Google is organize the narrative to frame what these suggestions might be. When you go and you look for Mike Brown and the first thing you see, or Eric Garner, is a video of them being murdered or killed at the hands of law enforcement, there is no way to narrate or frame what’s happening there. It’s just left there. I think that what we’re seeing is that these images are fomenting PTSD and trauma for people who view them, and certainly the families have lost control of the ability to take those things down.
Monita Bell: The concept of control that you’re talking about—the ability for a group of people or a community to control the narrative about themselves—is an important part of your book, I know. And it’s related to this concept that you mentioned called “technological redlining.” Can you just explain what that is?
Safiya Noble: Sure. I’ve been concerned with thinking about other phenomena, illegal practices like redlining, which we know has been about fostering discrimination in our society, mostly racial discrimination. When I talk about technological redlining, I really mean, how is it that digital data is used to foster discrimination? That could be maybe using our digital identities, and our activities to bolster inequality and oppression. Sometimes, it’s enacted without our knowledge, just through our digital engagements, which become part of an algorithmic, automated or artificially intelligent sorting mechanism. These mechanisms can either target us or exclude us.
One of the ways that you see this, for example, are how people’s past traces of places that they’ve been or identity, demographic information that they have populated into all kinds of websites that they have visited, might be used without their knowledge to profile them into certain kinds of categories. Let’s say they are profiled into being given higher-premium insurance quotes, or maybe they are not advertised to, with certain kinds of premium products. There’s a long history of digital platforms really trying to profile the people who are visiting those sites, and then marketing or selling their information or their identities to companies that then target them.
What we often see replicated online, I think, is captured in the work of Tressie McMillan Cottom, who wrote a great book called Lower Ed about the way that black women, for example, are targeted for these kinds of predatory, for-profit colleges when they’re trying to go online and better themselves with some type of educational opportunity. They’re targeted with fraudulent educational opportunities like DeVry or Trump University. This is what I mean about the digital redlining that happens.
Part of that happens because these systems are collecting information about our race, our credit, our gender, the ZIP codes that we live in and so forth. This is something that I think to me is really important when we talk about the distribution of goods and services, in our society, like education or housing or other civil rights. The technological means to do this type of redlining is something that we really have not paid close enough attention to, and something that I think we certainly need greater public policy around.
Monita Bell: Getting back to this idea of women and girls in particular, especially women and girls of color, being targeted in unique ways: for young people who are learning about the Internet, how to use Internet search and how knowledge comes to bear on that platform, why is it important for them to know the way that women and girls fare in search results?
Safiya Noble: It’s such an important question, which is, Why do we need to understand that the platforms that we’re engaging with are not neutral? This is something that’s important, because if I think about when I was a young woman, the dominant forms of media were television, radio, magazines, let’s say, for me when I was young. I knew that these were unidirectional. I knew that programs were made in Hollywood, and that’s how they showed up on TV. You know what I’m saying? I understood a little bit, even as a young person, about the logic of it: that some producers or directors somewhere were deciding what I was going to come up against, or consume.
Some of it was for me, some of it was against me, right, in my identity. But I had a better sense of that. Young people right now, I think, go on the Internet and in many ways, especially when they go through something like Google, they think this is going to bring them a truth, or impartial kinds of information. What does it mean? I think this is why you and I both might have been a bit jarred when we did our searches on “black girls” years ago, and found that pornography and hyper-sexualized material dominated the first page of results. That’s really different than seeing sexually explicit or eroticized advertising in a magazine that might be targeted toward me.
I can, of course, be harmed by consuming those images, but I don’t think the magazine is a teller of truth in the way that people relate to Google or search engines as being these fair, neutral, objective, credible spaces. That’s the huge difference in the media landscape right now, and this is one of the reasons why people like me, I think, are trying to impress upon the public to handle these technologies with caution, but also, try to raise awareness with these companies that they have ... If they’re going to claim to be democratic and neutral and objective, then they have a responsibility to fairly represent people in that way, or they should come clean and just declare themselves advertising platforms that are catering to their clients or to people who want maybe some of the most base or derogatory content you can get.
Monita Bell: Right. To be honest about what the aim is. So, going back to what you were saying about the magazine, going in, you know the context and the aim of what that ad in a magazine is doing, versus something appearing in a Google search, or results in another search engine, and being presented as truth.
Safiya Noble: Yes, because in a magazine, or in a newspaper, or on television or radio, there’s the content, and then there’s the ad. There’s a clearer line of demarcation. When we talk about a commercial search engine, or even social media, quite frankly, those lines of demarcation are not entirely clear. The public often doesn’t understand that optimized content may or may not be advertising-related. They may not even ... The logics. One of the things I say in the book, for example, is that in a library, we have stacks and stacks and stacks of books. No one part of the library, often, is seen as more valuable or better than some other stack, or row of books. It’s just all there, and you can browse those shelves a bit equally, under the best circumstances.
When we talk about presenting information in a rank-order fashion, where we’re going from one to a million or more hits, the cultural logics in the U.S. and in the west are that if it’s number one, what? It’s the best. We wear a big number one foamy finger at the football game for a reason. We don’t wear 1,300,000 on our foamy finger. The cultural logics of rank order, for example, confer a particular type of legitimacy. These are the kinds of things that I really try to tease out in the book about what are the things we’re not really noticing about these kinds of information environments, that have, again, a certain type of legitimacy that we might call into question a bit?
Monita Bell: If we are approaching digital literacy instruction, at least from the standpoint of using search engines in a way that okay, now you know that biases are inherent in algorithms, and that search results don’t necessarily represent the most relevant, or the most credible information you can find, how do you think that awareness also speaks to the need for young people to think of themselves as digital producers and creators?
Safiya Noble: I think it’s really great that we’re finally in a moment where we’re starting to think about the biases that get built into all of the digital technologies that we’re engaging with. I think that’s really important. When I started writing the book, it was incredibly controversial to a number of people who I talked to. They just didn’t want to believe that it could be possible. Journalists like yourself are doing a really important service to the public, in calling into question whether these platforms can be fair and neutral.
Having said that, I think that the Internet has also been an amazing place for creative expression, for sharing ideas, for finding other people, for both exacerbating social isolation but also creating connections with people. It’s a complex, psychological space, I’ll say, at a minimum. For young people, one of the things that I worry about is that just because we think it and we put it on the Internet doesn’t mean that it’s true. There is a really important skill set that we all need, called “critical thinking.” It’s tied to our depths of education and knowledge. I think one thing we should not give in to is the idea that just because somebody writes it and puts it on the Internet, it’s true, that we should believe it, that we should internalize it, that we should let it hurt us.
Evidence, and other ways of knowing things that we experience in life as young people, are really important, and we should never give up on wisdom, and a deep education. I think some of that comes from long-form reading in books. Obviously, I come out of the field of library science, so I’m a big fan of libraries and books because I think they help us think in very complex ways about the complex worlds we live in. The Internet sometimes truncates and shortens our patience for complex thinking. I guess I would say, as people are creating, don’t give up on the many forms of media that are available for us, including books and art and classes and conversations.
Monita Bell: That also reminds me of the section in your book when you talk about alternative search engines that have sprung up to reveal more representative narratives about a particular community. Would you say that there are search engines out there that maybe people don’t know about that perhaps do a better job than Google when it comes to identity-based searches, and things related to that?
Safiya Noble: It’s a good question. There was the great experiment of Blackbird, which was a browser that was really intended to help African Americans and black people find more relevant content. One of the things we know from research, for example, is that for people of color in the U.S., particularly African Americans and Latinos who are not online, many of them report that it’s because the content they find is not relevant to them. I think that we certainly ... I certainly argue for more search engines, and more pathways to knowledge, not just information, rather than fewer. Rather than having one. Google has become synonymous with Internet search in such a way that people don’t even think of any other pathway.
I think there should be many pathways, and one of the things that I often talk to librarians about is, how amazing would it be to have librarians and professors and teachers, people who are subject matter experts, curating content in these various knowledge spheres that are more open to the public. Think about academic libraries, where we can go and get some of the best research available in the world; that’s only if you’re in a university, though, so it’s not particularly helpful to the public, the majority of whom are not in a university. What happens, right, for the rest of us, if we’re not in those spaces?
I think that there are a lot of information professionals and librarians and those that I mentioned who could help us navigate the Web better, and make clearer the values. Like, “We are intending to send you to the Southern Poverty Law Center when you need to understand how hate works in America, and which organizations you should know about, and become literate and educate yourself.” Rather than leaving it to Stormfront to potentially pop up when you want to know about hate groups, and let them have the microphone of educating young people. These are the kinds of things… Certainly someone like me, I would know the difference between Stormfront and the SPLC.
I think that kind of knowledge, we have hundreds of thousands of people in this country who are great experts, who could lead us to knowledge, and they’re called “librarians and teachers and professors and journalists, educators.” I think that we should center them more in the conversations about access to good education and good knowledge, rather than leaving it to advertising companies. That’s really the point that I try to make: We should diversify the information ecosystem with more search engines and more ways, more pathways to knowledge, rather than fewer.
Monita Bell: Just a moment ago, you made a distinction between information and knowledge, and I just wanted to have you talk about that a little bit. How do you distinguish the two?
Safiya Noble: I think of information as being ... It can have a function, and be not necessarily connected to fact. It may or may not be connected to understanding or sense-making. There’s a lot of information that floats around us all the time. It may or may not deepen our understanding of the social practices that are at play in building a healthy society or democracy, or making better humans, so to speak, or a more peaceful coexistence. There’s lots of ways you can think of knowledge, or just even deepening your skill set and your abilities to do things in your life. Knowledge really helps us to point our intentions in ways that are well-informed, we should say, that take in multiple factors and some of those might be science, history, processes of learning from the past.
Knowledge, I think, is important, and of course, maybe I put my own premium on wisdom, which I think is really being able to learn vicariously from other experiences that we don’t have to go through ourselves, taking the best of what knowledge has produced, and applying it in a way that really improves our own individual and collective experiences, being alive, living in a society, living in a community, a neighborhood. Information may not necessarily make you wise. It certainly is ... It’s here, and there’s many kinds of information. There might be information that’s factual. There might be misinformation where people are misleading us away from the facts, and there might be disinformation or propaganda, which is purely intended to ill-inform us. That is harder and harder for people to discern. The Internet, I think, is a place where it’s also difficult to discern.
Monita Bell: I really want to highlight this because as I began consuming your book, which just had me rapt, you make this point about the fact that algorithms are created by human beings, which is why, in and of themselves, they can’t be neutral because human beings are not neutral or objective. Specifically, you talk about the fact that, generally speaking, for the most part, the folks who are creating these algorithms in Silicon Valley and in these tech spaces, are white, male, from very similar backgrounds that don’t reflect the diversity of who we are as a society.
Not only do we have the biases that they come in with as human beings, but we also have the biases from our history and from old media, that you talk a great deal about, pouring into these algorithms, and thus, the search results we get. All of that to say, are there ways that we can reduce or push down the biases showing up in our results? Assuming that we can actually recognize them.
Safiya Noble: It’s a great question about reducing bias, and that’s really a hot topic right now. I feel like there’s actually no such thing as an unbiased place, certainly not when it comes to social history. Maybe not even in science: things that we have thought of as fact sometimes get new interpretations as we do more experiments. This is, again, because we generate more knowledge about a particular phenomenon. I think this is one of the ways in which calls for unbiased technology are probably woefully insufficient. For sure, we know that algorithms, and certainly AI and predictive technologies, use data sets that are made by human beings.
I’ll tell you as a researcher, I know a lot about how to make a data set, and I know a lot about how data sets get made and constructed. They’re made up, on some level. Categories are made that rely upon the decision-making and thinking of the researchers who make those data sets. Those data sets are already flawed, and they already drop out a lot of nuance. They’re about generalizations, not about specifics. And then those data sets get used as a baseline of training for predictive technologies or artificial intelligence, or automated decision-making systems.
This is where we have, of course, new attention on things like predictive policing, or criminal sentencing software. What the data sets that are training those AI systems don’t account for is, for example, a history of over-policing in poor communities, in black communities, in Latino communities. They don’t account for over-arrests that are also made. They don’t account for the racist or discriminatory pattern of policing that we have had since the invention of policing in the United States.
Those become a baseline of training data, used by people who may or may not even understand that history of bias and discrimination in our society, and they use that as a baseline of truth. Now, future predictions are made on past discriminatory practices. What’s particularly concerning for me is that as we move rapidly toward deep machine learning and AI, human beings have a hard time processing the amount of data that a computer can process. New predictive patterns are being developed, and decisions are being made, that human beings can’t even necessarily intervene upon.
Because the amount of data that’s being processed by the computer is something that a human brain can’t process. To me, that’s the future of AI as a human rights issue and as a civil rights issue, issues that are going to confront us soon, because we will lose our ability to intervene upon these complex decision-making systems that are made from so much data that human beings can’t fix them.
Monita Bell: I was struck by your use, throughout the book and now in this conversation, of these terms “decision-making processes” and “decision-making tools”: systems that we allow to do our thinking for us, so to speak. I think it goes back to what you were saying earlier about this emphasis that we must have on critical thinking. “Critical thinking” is a term people use all the time, especially in education spaces, but we need to actually think about what it means in a world where we are increasingly relying on technology to make our decisions.
Safiya Noble: Yes, I will say that I teach my students that critical thinking means being able to understand how power is operating in any particular situation or project. Because if they understand and can recognize and trace how power is being used for or against communities or individuals in any given scenario, then they are thinking critically about the consequences or affordances of a technology, of a campaign, of a particular media, a film, a text, quite frankly. Critical thinking is who wins and who loses in a particular situation, and why. What questions are we not asking? What biases or past discriminations or past wrongs might we still be operating under, and how might it be different?
Those are, to me, the really important dimensions of thinking critically about something. Again, any way that you can deepen your knowledge will help you to be a more critical thinker. Someone who’s got a Ph.D. in sociology or information studies, like me, might have spent years thinking about all of the various ways that power is operating in our fields, or in society, or around knowledge. For example, in one of the chapters in the book, I focus on the role of gatekeeping in keeping people of color and women and poor people out of knowledge production. This has been a central issue for librarianship and academia for a long time.
I find it interesting that at the moment in history where you have more voices of color, more previously poor people who have become educated, who have access to creating knowledge, to publishing books, to having their research in play, we step away and abandon that and say, “Well, we don’t need universities,” or “We don’t need to be intellectuals,” or we vilify academia. Those are, to me, again, really important things we should be thinking about. What does it mean that just as our institutions of knowledge-making start to become democratic and have fair representation, we divest from them? And I’ll tell you as someone who works in it: we’re not quite there yet, but just as that’s happening, we divest from academia and we divest from education.
When we have a full generation of underrepresented teachers, this is when we abandon public education? What critical thinking allows us to do is say, “This is interesting. Why are we stepping away from the possibilities of using that knowledge in education to create a more fair and just society?” Instead, we’re stepping away from the very institutions that could help us the most.
Monita Bell: That is really powerful, and I would say especially for our community of anti-bias educators, so folks who are dedicated to making sure they’re addressing issues of social justice and inclusion and equity, that when we talk about critical thinking, in any arena, that we need to be talking about power. Thank you for that.
Safiya Noble: Absolutely.
Monita Bell: I will say, just to wrap up then, for educators who will listen to this conversation, whether they’re classroom teachers or librarians or media specialists or some other position in an educational space, what do you want them to take away from your work? How do you hope it will inform their practice?
Safiya Noble: I hope that ... I see myself as a partner with all of those people that you mentioned, and I hope that the work that I do will be an asset to them. Certainly, I think there are lots of exercises that all of those educators can do, like have your students do Google searches. Have them look for news stories about the topics of the day. See what kinds of things they’re finding, and then put them into alternate environments. Send them to the library and have them not just look in the library database, which again, is another constraining system, but have them walk the stacks. Have them go and interview people and talk to people, older people in their communities, about those phenomena.
Have them engage in multiple ways of learning and knowing, and then thinking about “Who benefits and who loses, with some of the things that I’ve been able to find? Who’s in control of a particular narrative?” I think that level of critical, digital media literacy is so important. I don’t put the onus on educators and parents and students to figure all of this out. I do think that the products that get put out into the public, whether it’s a search engine or pharmaceuticals or cars, need to have regulation, and consumer safety needs to be at the forefront, including hate speech and disinformation and things that can provoke social harm.
Those things need to be dealt with in a regulatory framework. I think on the ground, where people are interacting with students: challenge students to think about the kinds of things they find on the Internet, and then expose them to alternatives so that they know that the beginning and end of information-seeking doesn’t have to just be through the Internet.
Monita Bell: I think that is the perfect way to wrap up. Thank you so, so much. I think folks are going to find this incredibly useful, and insightful, and thought-provoking, and challenging. We have to do the work.
Safiya Noble: Awesome, let’s do it. We’re all in this together.
Monita Bell: That was Safiya Noble, an assistant professor at the University of Southern California’s Annenberg School [for] Communication. Find out more about her research, and her book, Algorithms of Oppression: How Search Engines Reinforce Racism.
Next, I talk with Heidi Beirich about how hateful actors have used the Web to their advantage, and what we can do to counter the spread of hate online.
Did you know that Teaching Tolerance has other podcasts? We’ve got Teaching Hard History, which builds on our framework, Teaching Hard History, American Slavery. Listen as our host history professor, Hasan Kwame Jeffries, brings us the lessons we should have learned in school, through the voices of leading scholars and educators. It’s good advice for teachers and good information for everybody. We’ve also got Queer America, hosted by professors Leila Rupp and John D’Emilio. Joined by scholars and educators, they take us on a journey that spans from Harlem to the frontier West, revealing stories of LGBTQ life that belong in our consciousness and in our classrooms. Find both podcasts at tolerance.org/podcasts, and use them to help you build a more robust and inclusive curriculum.
Heidi, thank you so much for joining me today to talk about a very important subject, something that educators in particular need to know. First, will you start by just introducing yourself and telling us a little about what you do?
Heidi Beirich: Thanks, Monita. It’s really great to be here and talk with you and your audience. My name is Heidi Beirich, I’m the director of the Intelligence Project here at the Southern Poverty Law Center. That’s basically the part of the law center that tracks hate groups and extremists, and reports on them. That’s our main job.
Monita Bell: Given what you’ve learned from your work in the world of hate, what does the public need to know about the ways hate proliferates and is cultivated online?
Heidi Beirich: Unfortunately, with all the benefits that the Web and technology brought us, it also brought a terrible downside, which is this: social media and forums have created an environment in which it’s very easy for neo-Nazis, Klansmen, and other racial extremists to interact in a way that just really wasn’t possible before. If you think about the 1990s, if you had these ugly inclinations, the only way you were going to connect with somebody else who shared your views was by getting on a fax list or a mailing list, or somehow getting in contact with someone in a hate group who could invite you to an event. In other words, it was really hard, because chances were that your neighbors didn’t share these ideas, and how do you get connected to these people?
But what happened with the rise of the Web, in particular, forums and social media, is that problem was essentially solved. You could go online, type something terrible into Google, something anti-Semitic, racist, whatever the case might be, and within seconds, you’re going to find fellow travelers. Nowadays, you’re going to find those fellow travelers on mainstream platforms, like Facebook or Twitter, or you can find them on bona fide hate sites like stormfront.org. And that’s the problem with the Web: It has this really dark side that allows people with really ugly ideas and plans to connect and build movements. We see great movement-building, for example, throughout the Middle East, and the uprisings against authoritarian governments, but at the same time, neo-Nazis now have a tool to organize themselves, and that is the problem that we have with the online space.
Monita Bell: Since we’re talking about these hate groups and extremists, their ability to really grab more people and pull them in, can you talk a little bit about young people’s vulnerability to that?
Heidi Beirich: This is the scariest thing, I think, about the Web: the vulnerability of young people to being exposed to propaganda that they have no sense about. They have no ability to judge it or see why it’s wrong; they don’t have the level of digital literacy that’s necessary. And one of the problems that you find online is not only do you get this one-to-one touch between individuals, where they share hate propaganda, but you also have problems with algorithms: when you start searching for something that may lead you to a line of racist websites, there’s nothing to veer you off that path.
So it’s really easy to get sucked in, and when you have fragile-minded people, young people, they can get lured in by older racists who start teaching them why they should hate, giving them a worldview that’s toxic, and frankly just wrong. Or the way that the algorithms themselves work on the mainstream platforms can basically suck you down a rabbit hole of hate.
Monita Bell: I think that’s a great segue into talking about Dylann Roof, and how this very thing—doing a search for black-on-white crime—sucked him in to one racist, white supremacist website after another. Can you talk more about that process, of how he became radicalized, basically?
Heidi Beirich: Dylann Roof is, in many ways, a very, very tragic situation, of course, because of the people he shot, but also because of the way a classic kid who was isolated, had some mental health issues, and was very much fragile-minded was essentially manipulated by the Web. Because he told this story in his manifesto, we know exactly what happened with Dylann Roof: he was watching the news during the Trayvon Martin situation, when Trayvon Martin was killed. He was curious about what was going on there. He gets on Google, he says, and he types in “black-on-white crime.”
Instead of pointing him to legitimate statistics about crime, which would show that the idea that black-on-white crime is happening at some high level is basically a lie (it doesn’t exist; it’s fantasy), Google pointed him to the hate sites that proliferate this myth. It’s one of the most powerful propaganda points on hate sites today: this idea that white people need to arm themselves and protect themselves violently against black people, because black people are basically carrying on an unreported, secret war against whites, literally killing them in the streets. Of course, this is all ridiculous and not true.
What happened to Dylann is he typed that in, and the next thing he knows, he’s on the Council of Conservative [Citizens] website, that has pages and pages of so-called black-on-white murders. They’re not real, but they’re presented that way. He bounces from there to other sites that share the same ideology, like American Renaissance, and eventually, he makes his way to stormfront.org, where he begins posting. He continues his curiosity. He ends up on the neo-Nazi website Daily Stormer, where he’s posting.
What he’s basically done, from that flawed initial search that pushed just hate sites to the top of the first page of Google results, is make his way all the way through the movement. He has sucked in all this false data and wrong ideology, come to the position that he needs to kill black people to protect white people from some scourge of violence that doesn’t exist, and literally become a member of the neo-Nazi movement because he’s posting on all these websites. That’s what happened to him, and it literally started with Google’s algorithm.
Monita Bell: I want to touch on that algorithm in just a second, but just for people who may not know, can you talk about the importance, or I would say the stature of the Daily Stormer and Stormfront, for those who don’t know how big they are in the hate world?
Heidi Beirich: Yes. The Daily Stormer is a neo-Nazi website, and Stormfront is just a forum, but it’s an agglomeration of everybody from the Klan to neo-Nazis to people who describe themselves as white nationalists. It’s more of a widespread racist and anti-Semitic community. Those are the two biggest places for the propagation of hate, and I say this because that’s what the data shows. Stormfront, for example, has more than 330,000 registered users who are active on the site. What’s interesting about Stormfront is it very much grew as a backlash to Obama’s election. In 2008, there were 150,000 registered users on Stormfront.
Monita Bell: Wow.
Heidi Beirich: The other thing that’s interesting about Stormfront: It’s the first hate site that was put up on the Web. It came up in 1995, just a few months after the Oklahoma City bombing, and a former Klansman named Don Black runs the site. He said, he predicted at the time, “This is our way around the mainstream media. This is the way we connect,” which is exactly what’s happened with the Web and white supremacists. Daily Stormer, a neo-Nazi site that’s run by a guy named Andrew Anglin, has more than 400,000 unique page views a month.
It’s also a website that we at the Southern Poverty Law Center have a particular interest in because we are suing the owner of the site, Andrew Anglin, for conducting a troll storm against a particular Jewish woman in Whitefish, Montana. When I say “troll storm,” I mean he picked her out as an anti-racist activist, and then a massive amount of harassment came down on her family, including phone calls, postcards, right up in your face—this isn’t just about online activities.
And we’re suing the group and this particular neo-Nazi to hold them accountable for that harassment. This is another thing you have to understand about these websites. They don’t just exist on the online world. They don’t just radicalize the Dylann Roofs of the world. They create real harm for the rest of us out here; especially if you speak up against racism, you can become the target.
Monita Bell: Right. We see these real-world implications from what’s happening online. Going back to Google’s algorithm, I know, because I work with you, that the SPLC has actually been in conversation quite a bit with Google about its algorithm. Can you talk a bit about what you’ve learned in terms of why this hateful content has risen to the top, as opposed to something legitimate?
Heidi Beirich: Yes. Here is what happened to Google. When the site was originally conceived, PageRank determined how you got to the top of the first page. And incidentally, “PageRank” is not named for pages on websites; it’s named after Larry Page, one of the Google founders.
Monita Bell: I did not know that.
Heidi Beirich: I know, it’s really interesting. PageRank, for a long time, and initially, worked basically the way academic citation does: an article rises by how much it is cited by other articles, and a web page rises by how much it is linked to by other pages. That was the original conception. There was some sort of logic and authoritativeness to being ranked number one when you searched for something, or ranked number 10, or landing on the second page.
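The citation-style ranking Beirich describes here is Google’s original PageRank algorithm. A minimal sketch of the idea (using a hypothetical four-page link graph and the commonly cited 0.85 damping factor, not Google’s actual implementation) looks like this:

```python
# PageRank sketch: a page's score is the share of score flowing in from
# the pages that link to it, so pages linked by many well-linked pages rise.
# The link graph below is hypothetical, purely for illustration.

links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

damping = 0.85                      # standard PageRank damping factor
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start with equal scores

for _ in range(50):                 # iterate until the scores stabilize
    new_rank = {}
    for p in pages:
        # rank flowing into p from every page q that links to p,
        # split evenly among q's outgoing links
        incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

# "C" is linked by three of the four pages, so it ends up ranked highest
top = max(rank, key=rank.get)
```

In this toy graph, page “D” links out but is never linked to, so it sinks to the bottom, which is the “logic and authoritativeness” Beirich is pointing at: rank comes from being cited, not from wanting to be seen.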
Monita Bell: It was actually about credibility.
Heidi Beirich: It was totally about credibility. But then in 2009, Google decided to start monetizing its content. What that means is they wanted to sell stuff to you through ads, and then they take a cut from the ads. That’s how they make money; this is how they’re profitable. Instead of giving you authoritative data, like giving Dylann Roof the FBI’s statistics on crime, they started giving you more and more of what you want. This is a little bit like when you go on Amazon and you buy, in my case, some terrible hate book because I’m watching these people, and then my list of what I should buy next is a little scary because it’s more hate material.
Heidi Beirich: That’s one thing. It’s a totally different thing when it’s a generic search on an important topic. Google monetized this thing, and that’s how Dylann Roof ended up bouncing from one hate site to another, and not to real data on crime. A few years ago, I think in 2015, we went and visited with them and pointed this out to them, showed them that this problem existed.
Heidi Beirich: We made a video at one point that laid out all the problems with the algorithm, using Dylann Roof as an example, and saying, “Look, Google. You have got to push hate material off the front page. You cannot treat it as though it’s as ‘trustworthy and authoritative,’” and those are the words they use for what gets on the first page, “as, for example, crime statistics from the Department of Justice. You have a terrible, terrible flaw here. You didn’t handle the monetization of these algorithms in an appropriate manner, and you got to start fixing things now, or we’re going to have more Dylann Roofs.”
Monita Bell: Has anything come of those conversations with Google?
Heidi Beirich: It’s really interesting: the first time we visited with them at their headquarters in Mountain View, they basically told us there was nothing they could do about the algorithm. It was almost like they were describing a Jenga game, and if you removed one piece, everything was going to cascade down. Literally, we had some of the top data scientists at Google in the room with us saying, “There’s nothing we can do about this.”
Heidi Beirich: We had walked them down the path that Dylann Roof had gone down. But there was a moment in there where we switched to another subject. I had this PowerPoint, and I said, “You know, when you type ‘Martin Luther King’ into this, our great civil rights icon, not just for the U.S. but for the world, the first hit you get, and most of the hits on the first page, are hate sites.” In fact, the top link at the time was martinlutherking.org, which is owned by Stormfront, the hate site.
Monita Bell: A dot-org—wow!
Heidi Beirich: And a dot-org, right, because as we all know, a dot-org doesn’t necessarily mean that something is legitimate, even though we all tend to think that.
Monita Bell: Right.
Heidi Beirich: I put that up on the screen, and I was saying to them, “You’re teaching the world. Kids all around the world who are looking into civil rights are Googling ‘Martin Luther King,’ and this is what they’re getting. This is what you are teaching the billions of users of your site.” They looked a little horrified, but they continued to insist that their system was perfect and they couldn’t make any changes. But about two or three weeks later, someone here at the Southern Poverty Law Center, I don’t think it was me, Googled this, and all of a sudden, Stormfront’s Martin Luther King site had gone off the front page, along with some other hate sites.
Heidi Beirich: So apparently, it wasn’t quite as complex as they laid it out. Frankly, since the time that we criticized them, and others have done so as well, they have been continually changing the algorithm to make it more trustworthy; to make the things that end up on the first page more reliable and correct. There are still a lot of problems with the system, but I feel like it was a seminal moment for them to admit that the monetizing road they’d gone down had some really huge flaws, and that they had to take responsibility for what this algorithm spits out, and not have it spitting out hate sites for Martin Luther King or for anything else.
Monita Bell: Right, and I think you make such an important point that what’s coming up through their search engine is helping to shape people’s knowledge-gathering. A young person who may not know better might see martinlutherking.org and trust it. Can you talk some about the content that was on the site? It wasn’t necessarily obviously hateful, right? It was kind of subtle.
Heidi Beirich: The information was subtle. In fact, the creepiest part of that site, actually, was at the top, it said something like, “Hey students, take a quiz on Martin Luther King, Junior.” And then you click on that link, and what it is, is all this negative propaganda about him. But it sounds authoritative; it sounds like somebody legitimate could have written this. So they’re trying to suck you in. They’re trying to suck you in into a false narrative about Martin Luther King.
Heidi Beirich: I actually think the bigger issue here is this: There are so many billions of people now who learn everything from Google, all over the world, that Google has essentially become our library—the world’s library. So it can’t just be treated as an amoral enterprise to make money, and you can’t just lie and say you’re somehow presenting information in some sort of a technical, non-moral way. Google, or anybody else who runs these algorithms, has a moral responsibility to present valid information, because otherwise you’re going to set people down the road of wrong thinking. You’re going to essentially disinform them. We named our video “The Miseducation of Dylann Roof.” You’re not just miseducating Dylann Roof, you’re miseducating billions if you do this—and you can’t do that. You cannot proclaim to be the library of the world and not take the responsibility that the librarian has: to make sure the books that you’re highlighting are legit.
Monita Bell: Absolutely, well said. Thank you for that. I don’t know if you know the name Safiya Noble? She came out with a book in January called Algorithms of Oppression, and she’s talking about this very thing, except where it concerns Google searches of, say, “black girls,” and what comes up. It’s the same thing. Google has basically become synonymous with Internet search, and so there’s a responsibility there. I think that’s something for our listeners too, that we want them to be aware of when they’re teaching young people: that what you see isn’t necessarily 100 percent authoritative or credible. We need to know how these algorithms work.
Heidi Beirich: There is no question that educators actually have to school up on this particular issue. The fact of the matter is that Google, and most of these places, tend to reflect back to us the horrific biases we already have in society. When you crowdsource page ranks, or you crowdsource photos, or you crowdsource anything, you’re going to get just what’s deep-seated in our society: racism, white supremacy, bigotry.
Heidi Beirich: There have been situations where photographs of African-American people have been labeled as animals, because the systems learn from people who have biases. And so you can’t really teach about the online world without teaching about how it reflects back to us, in many ways, everything that’s wrong with us. There are, in some ways, opportunities there to highlight how deep biases are: they can be reflected simply as people type stuff into an algorithm.
Monita Bell: Yes. That reminds me, when you’re typing keywords in and you get those auto—gosh, they’re awful.
Heidi Beirich: The autofill feature on Google has been a cesspool, and they’ve been working on it, but for a while there, if you typed in ... Say you were going to ask a question about Muslims, so you type in “Muslim” and then it would say, “All Muslims are terrorists.”
Monita Bell: Right.
Heidi Beirich: It had the same kinds of problems with every single minority population. Forget it, it’s not just about minorities, it’s also about women. Terrible, misogynistic material. You have to be careful with all that, and students today have to know how to navigate this. Teachers have to help them understand how flawed this stuff is.
Monita Bell: When Dylann Roof went into his search for black-on-white crime, he already had a very loaded premise of what that was going to mean and what he was going to find, basically. I think the Martin Luther King example is a good one, but are there other seemingly harmless or benign search terms that actually do lead to hateful sites, ones that folks might want to think about, or just good examples?
Heidi Beirich: I think probably the most problematic thing here is actually the autofill feature. Because what happens is, even if you have a generic interest in something, say, perhaps, Judaism (it could be anything), the autofill actually tells you what question you want answered. It suggests to you, in that way, what material you should be looking at. I think that that is far more dangerous, actually, than Dylann Roof typing in “black-on-white crime.” If you type in “black” and it suggests, “Are blacks more violent than whites?” that’s really problematic, because Google is telling you what you should be looking for.
Monita Bell: Right. Something that you probably weren’t even thinking about, it’s like, “Oh, well, are they? I should look at that.”
Heidi Beirich: Exactly, and the autofill is based on all those racial biases and bigotry and everything that we have in our society. It’s not something we addressed in our video, but in many ways, it’s more important. I’m glad they’ve started cleaning it up, but they’ve got work to do.
Monita Bell: Is there anything we can do to counter the hate we see online?
Heidi Beirich: Yes. There’s a few things we can do. One, we need to get literate about the online space. That’s a basic responsibility for everybody, and I think kids today just need to be taught differently. They have to be really suspicious and understand flags in the online environment, and so on. The other thing that all people can do is tell the Facebooks and the Googles of the world: “I don’t want to see this. You need to change this.” I think too often, people in their busy lives don’t realize that when they do that, it’s like writing a letter to your Congressman.
Heidi Beirich: But in this case, writing it to the tech companies or sending them an email saying, “I’m opposed to this. I don’t want to see your little autofill fill things out like this. I don’t want to see your facial recognition be racist. I don’t want to see Google Maps being manipulated by white supremacists to say something racist about the White House.” I’m thinking about when the Obamas were there.
Heidi Beirich: In other words, “You people in Silicon Valley need to learn more about civil rights, white supremacy, the history of colonialism,” because we’re not just talking about the United States. We’re talking about imbalances of power. “You need to learn about genocide.” Facebook is one of the reasons that we had the Rohingya genocide; it was orchestrated off Facebook. These are problematic things across the world that lead to mass spasms of racial and other types of violence between in-groups and out-groups. And we the people need to let these people know that we’re not going to have it.
Monita Bell: I think that speaks to it, once again: Google has, over the years, contested that. “Oh, well, we can’t change our algorithm.” And then we’ve seen what happens when they get public pressure about it: they actually do make tweaks. So we know that they can be affected.
Heidi Beirich: The fact of the matter is, our original strategy in dealing with the tech companies was to approach them quietly behind the scenes, to say, “Hey, you have these problems. You might want to work on them.” And it was a completely ineffective strategy. It was only by going public that things changed, and this isn’t just about the Southern Poverty Law Center: allies of ours like the Anti-Defamation League, groups like Data & Society, and a lot of other folks who care about this have also helped on this front. But it has taken the public pressure for them to take this stuff seriously. Frankly, from my position, the more the better. I think educators can play a very powerful role here, because you know what your kids are seeing; you know how they’re being impacted. Google may feel it doesn’t have such a moral responsibility to the adult population, but it has to, to those kids.
Monita Bell: For educators, you mentioned earlier that educators need to study up on all of this stuff, too. What are some particular things you think they need to be doing or learning so that they can better teach their kids to navigate the online world, basically?
Heidi Beirich: I think you need to have a rudimentary understanding of where the most ugly material comes from on the Web. Those are forums like 4chan and 8chan and places like Reddit, although Reddit has been trying to clean itself up. Really, the online space is not as big as you would think. Most people are on Facebook, Twitter, the sites I just mentioned, Instagram, some places like that. You need to know what those forums are about, how they function, and how they’ve been connected to spreading bias. I think that’s a baseline, and there have been some good books like the one you mentioned recently that dive into that. You need to know that.
Heidi Beirich: You need to know a little bit about what hate speech looks like online, how it functions, and at least the basics of the flaws in algorithms. This isn't something that takes massive amounts of time to figure out, and there are a ton of resources on it, including from us here at the Southern Poverty Law Center. That, I think, is the baseline. You need to know about the flaws so that you can convey them to your students. And each site is a little different. On Facebook, it's the algorithm and the News Feed that are problematic, or the Messenger app; that's how the Rohingya situation in Myanmar was manipulated.
Monita Bell: Wow.
Heidi Beirich: When it’s Twitter, it’s actually about just letting really vile people continue to have public accounts.
Monita Bell: Just spew and spew and spew.
Heidi Beirich: It's a place of harassment, where you've had these terrible harassment campaigns against journalists and all kinds of stuff. And then knowing about places like 4chan and 8chan is to understand that these are unregulated areas of the Internet where kids end up getting radicalized. You can't control them, they're not going to have a hate speech policy, they don't have terms of service. If a kid gets there, it's the Wild West, and they're going to be exposed to a lot of bad stuff, including criminal stuff. It's not just about hate speech.
Heidi Beirich: You need to know about those things, and I’ll just add one other thing that I am no expert on, but is important. The Charlottesville rally in 2017 that led to Heather Heyer’s death, that was organized on a gaming app called Discord. There are some gaming apps, including Discord and Steam, where children are very active and white supremacists are very knowledgeable about the fact that children are active there. So both, I guess in this case, parents and teachers need to be aware of what their kids are doing in those gaming apps because bad people are trying to take advantage of them in that space.
Monita Bell: Thank you for adding that about gaming, and that’s something we definitely want to get into in another episode, because that’s so relevant right now, as you’re saying, kids are really into these gaming apps and can be sucked into these really hateful worlds.
Heidi Beirich: Absolutely, and in the gaming apps, someone can go at them one-on-one, they can speak to them, and that makes it even more intense than a forum where you might be writing back and forth with people. That’s good, I’m glad you’re going to address it. That’s important.
Monita Bell: Thank you. Once again, tell us who you are and what you do for the Southern Poverty Law Center.
Heidi Beirich: Sure. I’m Heidi Beirich, I run the SPLC’s Intelligence Project, which is the part of the Southern Poverty Law Center that monitors hate groups and extremists.
Monita Bell: Thank you, Heidi.
Heidi Beirich: Thanks for having me.
Monita Bell: That was Heidi Beirich, director of the Intelligence Project for the Southern Poverty Law Center. You can learn more about the Intelligence Project at splcenter.org. That’s splcenter.org. Thanks for tuning in to The Mind Online, a podcast of Teaching Tolerance, which is a project of the Southern Poverty Law Center. I’m your host, Monita Bell, senior editor for Teaching Tolerance. This podcast was inspired by Teaching Tolerance’s Digital Literacy Framework, which offers seven key areas in which students need support developing digital and civic literacy skills, and features lessons for kindergarten through 12th grade classrooms.
Monita Bell: Each lesson is designed in a way that can be used by educators with little to no technology in their classrooms. The Digital Literacy Framework and all its related resources, including a series of student-friendly videos and a professional development webinar, can be found online at tolerance.org/diglit. That’s tolerance.org/diglit. This episode of The Mind Online was produced by Jasmin López. Production was supervised by Kate Shuster, and we’d like to give special thanks to our guests, Safiya Noble and Heidi Beirich, and to Teaching Tolerance senior writer Cory Collins.
Monita Bell: Like what you heard? Share this podcast with your friends and colleagues, and remember to rate and review us on iTunes, or wherever you listen.