The ABCs of AI: Artificial Intelligence, Biases, And Concerns for Black Students

Artificial intelligence (AI) technology is all the rage these days. People are using AI for all kinds of tasks, including crafting resumes, analyzing data, and checking out shoppers at supermarkets. And for all that AI has already contributed, there’s even more it could do.

According to McKinsey’s research, we have barely scratched the surface: the firm estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually to the global economy. For capitalists, that means there’s more money to be made with newer technologies. For the rank and file, it means a modicum of new conveniences, along with more big brother surveillance.

For Black people, sadly but predictably, it means more racism.

AI consists of artificially reproduced intelligence sourced from pre-existing, human-generated content, which may include flawed data sets; that challenges the notion that it is entirely artificial, as its name suggests. In other words, AI is created by biased and/or racist individuals, in spaces where Black people are grossly underrepresented, to function in historically racist systems and institutions.

This explains how systemic racism can be built into intelligence designed to help society, absent any regulation at this point.

Proponents of expanding the use of AI often point to its potential to stimulate economic growth: increased productivity at lower costs, a higher GDP per capita, and job creation have all been touted as possible benefits. But AI is not objective or unbiased; instead, it mirrors systemic racism, white supremacy, and other forms of oppression in our world. For example, according to Vox Technology:

“Facial recognition software has a long history of failing to recognize Black faces. Researchers and users have identified anti-Black biases in AI applications ranging from hiring to robots to loans. AI systems can determine whether you find public housing or whether a landlord rents to you. Generative AI technology is being pitched as a cure for the paperwork onslaught that contributes to medical professional burnout.”

With the realities of systemic racism baked into AI, bringing the technology into schools makes little sense given the racism already embedded in public education. However, AI has made its way into public schools… and it’s harming Black students. Programs like Gaggle, Go Guardian, and Proctorio are examples of AI gone wrong for Black students.

Gaggle and Go Guardian are known to falsely identify verbiage used by Black students as dangerous and in violation of school policy. Both are connected to police, and the parents of students found in “violation” are contacted. This adds insult to the injury of Black students already being disproportionately suspended and expelled.

Proctorio is a remote proctoring platform that uses AI to detect perceived behavioral abnormalities in test takers in real time:

“Because the platform employs facial detection systems that fail to recognize Black faces more than half of the time, Black students have an exceedingly hard time completing their exams without triggering the faulty detection systems, which results in locked exams, failing grades, and disciplinary action.”

Now, the Los Angeles Unified School District (LAUSD) is employing a system called “Ed” to replace student advisors for students with individualized education programs (IEPs), in the hope of supporting Black students, who are disproportionately represented among students with IEPs in the district. School district officials may have good intentions concerning this program. But the road to hell is paved with good intentions, and Black children have had enough hell in schools.

School districts interested in implementing AI programming in their schools must do their homework. AI can help districts deliver key services to students, but they must do their due diligence to ensure they’re not adding any more racism for Black students to navigate.

Districts must investigate prospective AI tools to ensure Black students experience minimal to no harm. In addition, districts must hire and retain Black IT employees with the skills to use AI platforms and the ability to engage AI platform representatives to address any racism found in the programs. Districts must also stay in communication with Black students and their families to evaluate these programs and decide whether an AI program should be swapped out for another, updated with the help of tech support, or left as is.

In light of the failures of AI creation and implementation where Black people are concerned, it is important that school districts not become another space where Black people are harmed by technology. Black youth have been harmed enough. It’s long overdue for Black students to be protected by the adults who claim to care about them.

So, when new technologies employ old racism, school leaders must reinvent themselves wholeheartedly, because old dogs rarely learn new tricks.
