
Critical Information Literacy


Critical Information Literacy (CIL) helps information-seekers evaluate and use credible sources while keeping the power dynamics of knowledge production in mind. The university’s CIL committee (CILC), made up of antiracist university librarians and Writing Studies Program professors, encourages the AU community to think about knowledge in terms of who creates it, how it comes to be validated as knowledge, who has access to it, and how aspects of identity, including race, class, gender, and sexuality, affect it all. For more on the committee and the field of CIL, please read the CILC Mission Statement.


Antiracist Praxis

This year, the ARPC and CILC are collaborating to revise the library’s open-access Antiracist Praxis Subject Guide, with a focus on critical media literacies. The revision is led by our doctoral fellow in collaboration with CILC professors and librarians. Updates will roll out in batches in February, March, and April and will be presented at our April 8th ARPC Colloquium. Save the date!

Critical AI

The ARPC is creating critical AI tools and partnering with CILC to promote them. Our perspective aligns with that of antiracist members of the Responsible AI Use movement, such as NYU data journalism professor Meredith Broussard, who writes:

Digital technology is wonderful and world-changing; it is also racist, sexist, and ableist. For many years, we have focused on the positives about technology, pretending that the problems are only glitches. Calling something a glitch means it’s a temporary blip, something unexpected but inconsequential. A glitch can be fixed. The biases embedded in technology are more than mere glitches; they’re baked in from the beginning. They are structural biases, and they can’t be addressed with a quick code update.

 



Critical Praxis with Artificial Intelligence: Recommendations

In support of critical, ethical, and mindfully antiracist use of AI, we surveyed the critical AI literature and fact-checking best practices to compile the key points below. Our compilation of best practices and things to know about AI is open access.

We ask only that users credit us as follows: American University Antiracist Research and Policy Center (2026, Feb. 2), “Critical Information Literacy” page, ARPC website, american.edu/centers/antiracism/critical-information-literacy.cfm.
 

  • Technochauvinism: a mindset that regards everything technological as superior to what is fully human. Holders of this mindset erroneously treat technology as neutral and forget that technology is an extension of things human, including our fallibility, prejudices, and other biases.
  • Glitch: a minor problem in a digital tool that does minimal damage and can be somewhat easily fixed or sidestepped. 
  • Bug: a much more significant problem in a digital tool than a glitch. Bugs are built into systems and can be hard to detect and correct. Antiracist generative AI users should be aware that the racism and sexism built into digital tools are bugs, not glitches, and can have very serious consequences in the real world.
  • Hallucination: textual output from a generative AI prompt that is false or misleading but sounds convincing.
  • LLM training: the process by which large language models ingest and synthesize large bodies of information, detecting patterns that let them predict the next words or ideas in a sequence. Those predictions become the user’s “answer” to a prompt. (A toy illustration appears after this list.)
  • Training data: the body of information (e.g., a data set, a database, or the internet at large) that generative AI tools and chatbots mine for their outputs. Antiracist AI and chatbot users should be aware that all of the biases and racialized mis- and disinformation in training materials inform the outcomes of user searches.
  • Model collapse: a phenomenon in which generative AI quality and correctness degrade over time: LLMs train on imperfect data, their flawed outputs are put back into the same cyberspaces, and the models then train on that material again. Each iteration becomes more flawed. (The sketch below mimics this feedback loop in miniature.)
  • Deepfake: a pernicious practice in which harmful audio, visual, or textual content is created with digital tools, made to look real, and then leveraged against its target for the purpose of doing harm. This content is often sexualized toward misogynist ends or weaponized to incite racial conflict.
  • Black-box model: a term describing the lack of industry transparency about how AI tools work. Because the general public often doesn’t understand the technology, people remain unaware of its shortcomings and biases.
  • Filter bubbles: a term for the algorithmic echo chambers in which digital tools use individual preferences to return content that affirms users’ likes and excludes their dislikes. This is a built-in feature of many digital systems, meant to increase engagement, but it also reinforces users’ prejudices and biases and prevents exposure to ideas that differ from their own.
  • Cognitive offloading: the result of AI overuse in which users do little of their own thinking. They learn less and, according to recent research, may even become less intelligent.
  • Know what generative AI is and what Large Language Models actually do.
  • Question the rhetoric of ads and explanations. It often uses human terms that signify true intelligence and the ability to reason, and it almost always suggests that technological computation is superior to human thought. It downplays or erases the biases and flaws inherent in the tools.
  • Monitor your own responses to AI fear, excitement, and other emotions.
  • Notice the “rabbit holes” that you may be led down. Ask if these might be radicalizing you or exposing you to dangerous ideas that are harmful to yourself or others.
  • Know what data and information your AI technology is trained on, and evaluate its credibility.
  • Keep in mind that AI is fallible and inherently biased toward the White-centering perspectives of coders and the creators of training data. Counterbalance those biases.
  • Never rely on AI technologies alone for things that impact other human beings’ lives or quality of life. 
  • Use AI information searches for brainstorming that informs your actual research.

Adapted from the Critical Information Literacy Committee (2026, Jan. 8), “Critical Use of AI,” Ann Ferren Conference presentation, American University, Washington, DC.
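
For readers who want to see “pattern detection and next-word prediction” made concrete, here is a minimal, hypothetical sketch in Python. It is a toy bigram counter, nothing like a production large language model; the tiny corpus and all names are invented for illustration. The retraining loop at the end mimics, in miniature, the feedback dynamic described under “model collapse” above.

```python
# A toy illustration (not how production LLMs are built): a bigram
# "language model" that predicts the next word from counted patterns,
# then is repeatedly retrained on its own output to mimic model collapse.
import random
from collections import Counter, defaultdict

def train(corpus):
    """Count which word follows which: the 'patterns' the model learns."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model[a][b] += 1
    return model

def generate(model, start, length=8):
    """Predict a likely next word, over and over, to build an 'answer'."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        # sample the next word in proportion to how often it was observed
        words, counts = zip(*followers.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

corpus = [
    "the library evaluates sources for credibility",
    "the library checks sources for bias",
    "critical readers evaluate sources for bias and credibility",
]

model = train(corpus)
print("generation 0:", generate(model, "the"))

# Model collapse, in miniature: retrain each generation only on the
# previous generation's output; variety in the "training data" shrinks.
for gen in range(1, 4):
    corpus = [generate(model, "the") for _ in range(3)]
    model = train(corpus)
    print(f"generation {gen}:", generate(model, "the"))
```

Because each generation trains only on the previous generation’s output, the toy model’s vocabulary and variety shrink round by round, which is the degradation the “model collapse” entry describes.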

 

  • Pay attention to critiques of the environmental and socioeconomic costs of AI and to the intellectual property debates surrounding AI training data. Be aware of the industry’s tendency to exploit labor (e.g., in India and Kenya).
  • Keep abreast of the lawsuits against companies misusing AI. Note the number of challenges that are based on racism and sexism.
  • Look for AI competency frameworks and literature on critical race technological literacies. Consider using them to educate yourself before diving into the tools you would like to use.
  • Familiarize yourself with the responsible AI movement (e.g., the Algorithmic Justice League; the Ethics in AI Institute; Data for Black Lives; the Explainable AI Movement; and Algorithm Watch). Take particular note of the scholarship of Broussard, Buolamwini, Noble, and Bates as well as Emily Bender, Ruha Benjamin, and Timnit Gebru. Take their critiques seriously. Most people involved in that movement are tech experts offering legitimate insights, especially into the racial and gender biases being rapidly proliferated by digital tools.



Unmasking AI by Joy Buolamwini, The New Age of Sexism by Laura Bates, Algorithms of Oppression by Safiya Umoja Noble, and More Than a Glitch by Meredith Broussard

  • Treat generative AI skeptically. Use it for brainstorming, but run its output through rigorous validation.
  • Read laterally to determine whether your AI summary is accurate. (See the work of the Digital Inquiry Group for lateral reading pedagogies and praxes.)
  • Make sure your lateral reading includes materials on and by people of color, as AI biases lean heavily toward the White heteronormative mainstream.
  • Find credible sources that you can trust. (See, for example, the range of American Library Association guidelines on evaluating information.) Make sure these sources have adequate racial and gender representation. Consider bookmarking them for use in your intertextual comparisons of AI-generated text.
  • Do not circulate AI-generated text that has not been fact-checked. In a pinch, check the claims you question through credible fact-checking sources. Fact-checker Edwin Mallin (2025) recommends Reuters, Snopes, PolitiFact, and Health Feedback/Science Feedback for a range of fact-validation tasks.

  • Let’s be unafraid to be skeptical of Big Tech.
  • Let’s rekindle our love affair with our own minds.
  • Let’s embrace the grey and get comfortable with not knowing whether we can believe a thing we’ve read.
  • Let’s be patient and realize that, with time, effort, and credible sources, we will come to understand things we might otherwise rush through with uncritical AI use. It’s okay to stay in the lane of our expertise and not form opinions solely through generative AI.
  • Let’s not use generative AI for anything that has a bearing on people’s lives or well-being. We should always remember the flaws of this technology, including the racial and other biases baked into it through coding and training on imperfect data sources.

Adapted from Sarah Trembath (2026, Jan. 26), “Appraising Information in the Cybersphere, Pt. II,” The Critical Reader, washingtonindependentreviewofbooks.com.

Navigating the Truth in the Age of Misinformation by Viet-Phong La, The Fact-Checker's Bible by Sarah Harrison Smith, The Chicago Guide to Fact-Checking by Brooke Borel, and Think Before You Believe by Edwin Mallin