AI and algorithmic bias refer to the repeatable, systematic outcomes or errors produced by a computer system or program that disadvantage certain people or content. Algorithmic bias can arise from flawed data sets, program design, human error, or any combination of these.
Since 2018, researchers have shown that AI facial detection algorithms discriminate on the basis of race and gender (Buolamwini & Gebru, 2018). More recent studies indicate this is still a problem (Dooley et al., 2022). Some AI companies are actively trying to suppress research on this topic (Chin, 2020). The CEO of Proctorio, a company that provides AI-powered test proctoring, tried to have a peer-reviewed article retracted because it raised the possibility that these programs may "discriminate against students based on their bodies and behaviors" (Swauger, 2020).
AI and algorithmic bias are the products of multiple biases. The National Institute of Standards and Technology (NIST) created a graphic in its report on AI bias to help conceptualize how these biases interact. This can be a difficult concept because many people assume that data is impartial, and that algorithms or AI built on that data must therefore also be unbiased. Or, as NIST put it in their report, "bias in AI systems is often seen as a technical problem." However, the NIST report found that "a great deal of AI bias stems from human biases and systemic, institutionalized biases," and that algorithms and AI can amplify those biases (Schwartz et al., 2022).
While this is a complex and nuanced problem, overall, it arises because algorithms and AI don't exist in a vacuum — they exist and are used within societies and specific contexts.
These programs are created by humans operating within particular cultures, rules, and norms. Algorithms replicate human tasks or actions, using datasets and rules derived from human behavior.
Some of that behavior was biased, and those biases are now embedded in the core programming. New iterations of the program then build on top of this core, continuing and sometimes compounding the bias. Even when people work to avoid adding more bias to the system, the effects of past bias remain in the dataset and resurface in the AI's output, as the sketch below illustrates.
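The following minimal sketch (in Python, with an invented "zip_code" feature and made-up hiring decisions, not any real system) shows how a rule learned from biased historical data reproduces that bias even though no one explicitly programmed it to discriminate.

```python
# Hypothetical example: a "model" learned from past hiring decisions.
# Here the model is just a per-feature approval rate; real systems are far
# more complex, but the propagation mechanism is the same.

from collections import defaultdict

# Invented historical decisions: applicants with zip code "B" were hired
# less often, and zip code acts as a proxy for group membership.
history = [
    {"zip_code": "A", "hired": True},
    {"zip_code": "A", "hired": True},
    {"zip_code": "A", "hired": False},
    {"zip_code": "B", "hired": False},
    {"zip_code": "B", "hired": False},
    {"zip_code": "B", "hired": True},
]

# "Training": learn the historical hire rate for each zip code.
counts = defaultdict(lambda: [0, 0])  # zip_code -> [hired, total]
for record in history:
    counts[record["zip_code"]][0] += record["hired"]
    counts[record["zip_code"]][1] += 1

def predicted_hire_rate(zip_code: str) -> float:
    hired, total = counts[zip_code]
    return hired / total if total else 0.0

# The learned rule reproduces the historical disparity:
# group A scores about 0.67, group B about 0.33.
print(predicted_hire_rate("A"), predicted_hire_rate("B"))
```

Any later system trained on the decisions this rule produces inherits the same skew, which is how bias can compound across iterations even when no new bias is added.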
Cognitive biases already make evaluating information sources a challenge, and algorithmic bias can make it even harder for you as a researcher to break free of those biases. One of the most insidious problems with algorithmic bias is that you may never know what you are not seeing. For example, a study by Datta et al. (2015) found that Google showed women fewer ads for high-paying jobs.
It also turns out that the bias built into algorithms offering "personalization" can make our tendency to seek out echo chambers even worse. Shin and Jitkajornwanich (2024) found that TikTok's algorithm pushed people past echo chambers and into radicalization, and that the increased extremism and polarization resulted from the algorithm's ability to "slot users into specific categories and reinforce their ideas" (p. 1020). In other words, the algorithm amplifies bias all the way into extremism.
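As a rough illustration of that "slot and reinforce" dynamic, the hypothetical sketch below (invented category names and scores, not TikTok's actual recommender) shows how a personalization loop that keeps serving whatever a user already favors turns a slight initial lean into a heavily one-sided feed.

```python
# Hypothetical reinforcement loop: recommend the category the user currently
# favors, and let engagement with what is shown deepen that preference.

def recommend(interest: dict) -> str:
    # Greedy "personalization": show the category with the highest score.
    return max(interest, key=interest.get)

# Start with only a slight lean toward "fringe" content.
user_interest = {"mainstream": 0.48, "fringe": 0.52}

for step in range(50):
    shown = recommend(user_interest)
    # Engaging with what is shown nudges interest further toward it.
    user_interest[shown] += 0.05

total = sum(user_interest.values())
share = {category: round(score / total, 2) for category, score in user_interest.items()}
print(share)  # {'mainstream': 0.14, 'fringe': 0.86}: the small lean has hardened
```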
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 77–91. https://proceedings.mlr.press/v81/buolamwini18a.html
Chang, X. (2023). Gender Bias in Hiring: An Analysis of the Impact of Amazon’s Recruiting Algorithm. Advances in Economics, Management and Political Sciences, 23, 134–140. https://doi.org/10.54254/2754-1169/23/20230367
Chin, M. (2020, October 22). An ed-tech specialist spoke out about remote testing software—And now he’s being sued. The Verge. https://www.theverge.com/2020/10/22/21526792/proctorio-online-test-proctoring-lawsuit-universities-students-coronavirus
Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination. Proceedings on Privacy Enhancing Technologies, 2015(1), 92–112. https://doi.org/10.1515/popets-2015-0007
Dooley, S., Wei, G. Z., Goldstein, T., & Dickerson, J. (2022). Robustness Disparities in Face Detection. Advances in Neural Information Processing Systems, 35, 38245–38259. https://proceedings.neurips.cc/paper_files/paper/2022/hash/f9faef4e1b4dbbd48ef60056ffe14c90-Abstract-Datasets_and_Benchmarks.html
Feathers, T. (2019, August 20). Flawed Algorithms Are Grading Millions of Students’ Essays. VICE. https://www.vice.com/en/article/flawed-algorithms-are-grading-millions-of-students-essays/
Harwell, D. (2020, April 1). Mass school closures in the wake of the coronavirus are driving a new wave of student surveillance. Washington Post. https://go.gale.com/ps/i.do?p=STND&u=iastu_main&id=GALE|A619199748&v=2.1&it=r&sid=bookmark-STND&asid=a83e8fe0
Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A., & Hall, P. (2022). Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.1270
Shin, D., & Jitkajornwanich, K. (2024). How Algorithms Promote Self-Radicalization: Audit of TikTok’s Algorithm Using a Reverse Engineering Method. Social Science Computer Review, 42(4). https://doi.org/10.1177/08944393231225547
Swauger, S. (2020, April 2). Our bodies encoded: Algorithmic test proctoring in higher education. Hybrid Pedagogy. https://hybridpedagogy.org/our-bodies-encoded-algorithmic-test-proctoring-in-higher-education/
Turnitin. (2024, July 23). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Turnitin&oldid=1236120068#cite_note-34