Department of Education Issues Guidance on Racial Bias in AI Systems Used in Schools
On Tuesday, the Department of Education’s Office for Civil Rights (OCR) released guidance warning of the potential racial bias in AI systems used in schools. The guidance, which stems from President Biden’s executive order last year, aims to ensure responsible and non-discriminatory use of AI in educational institutions, specifically addressing the impact on vulnerable and underserved communities.

“The growing use of AI in schools, including for instructional and school safety purposes, and AI’s ability to operate on a mass scale can create or contribute to discrimination,” states the Education Department’s guidance. It offers examples of scenarios where AI systems could discriminate against students based on race, color, national origin, or sex.

One example provided in the guidance highlights the potentially discriminatory effect of a plagiarism checker that uses generative AI and has a high error rate when evaluating essays written by non-native English speakers. If a school continues to use such a tool despite complaints from students and parents, it could face a federal civil rights investigation.

Another scenario discussed in the guidance addresses the use of AI to determine disciplinary procedures for students. The guidance points out that significant racial disparities in a school’s past disciplinary practices can produce biased outcomes: if AI software is trained on historical data reflecting those discriminatory practices, it will perpetuate them.

The guidance also addresses the potential for AI to discriminate based on sex. It presents a scenario in which facial recognition software used to check students into school misidentifies students who do not conform to traditional gender norms, improperly flagging them as security risks. This could amount to a Title IX violation if administrators are aware of the problem but continue to use the screening software.

The implications of discriminatory AI systems in schools are significant: false flags can embarrass students and disrupt their class time. To address these concerns, the Biden administration has been vocal about challenging discrimination in AI. Leaders from several agencies, including the Justice Department, the Federal Trade Commission, and the Equal Employment Opportunity Commission, have pledged to use existing civil rights and consumer protection laws to crack down on discriminatory AI systems.

This recent guidance from the Department of Education adds another layer of commitment to fighting discrimination in AI within the education system. By laying out specific examples and potential violations, the guidance serves as a tool for educators and administrators to ensure that students, particularly those from marginalized communities, are not unfairly targeted or excluded by AI systems.

As AI continues to play an increasingly significant role in our lives, it is crucial to address its potential biases and discriminatory effects. The Biden administration’s dedication to combating discrimination in AI reflects a growing recognition of the importance of ethical and responsible AI development. By holding educational institutions accountable for the use of AI, we move closer to a more inclusive and equitable future.
Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.