Artificial intelligence (AI) has been making waves lately, particularly with the recent release of ChatGPT. Behind the scenes, AI has quietly become a part of everyday life, from screening job resumes to determining medical care. However, a significant issue arises with these systems: bias. Many AI systems have been found to discriminate, favoring certain races, genders, or income levels. Surprisingly, there is minimal government oversight in this area, but lawmakers in at least seven states are stepping in to regulate bias in artificial intelligence.
These legislative proposals mark the first steps in a long-standing discussion about the balance between the benefits of AI and the risks it brings. Suresh Venkatasubramanian, a professor at Brown University who co-authored the White House’s Blueprint for an AI Bill of Rights, emphasized the impact of AI on everyday life. “AI does in fact affect every part of your life whether you know it or not. Now, you wouldn’t care if they all worked fine. But they don’t.”
The success or failure of these regulatory efforts will depend on lawmakers navigating complex problems while negotiating with an industry worth hundreds of billions of dollars. In 2023, only a small number of AI-related bills were passed into law out of the nearly 200 introduced in statehouses. Most of these bills targeted specific areas of AI, such as deepfakes or chatbots. However, the focus is now shifting to the larger issue of AI discrimination, which is being debated across industries from California to Connecticut.
Experts studying AI’s tendency to discriminate argue that states are already behind in establishing regulations. AI is commonly used to make consequential decisions, such as hiring employees, yet the majority of Americans are unaware that these tools are being used, and most have no knowledge of whether the systems are biased. Bias in AI can occur because the algorithms are trained on historical data that may contain discriminatory patterns. For example, Amazon discontinued its hiring algorithm project because it was found to favor male applicants due to the historical data it learned from.
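The mechanism behind the Amazon case can be illustrated with a toy sketch (entirely hypothetical data, not Amazon's actual system): a model that scores new applicants by how similar past applicants fared will simply reproduce whatever disparity exists in the historical record.

```python
# Hypothetical historical hiring records: (resume keyword, was_hired).
# In this made-up data, past hires skew toward one keyword -- a proxy
# that happened to correlate with male applicants, echoing the reported
# behavior of Amazon's discontinued tool.
history = [
    ("chess club", True), ("chess club", True), ("chess club", True),
    ("chess club", False),
    ("women's chess club", True),
    ("women's chess club", False), ("women's chess club", False),
    ("women's chess club", False),
]

def hire_rate(keyword):
    """Fraction of past applicants with this keyword who were hired."""
    outcomes = [hired for kw, hired in history if kw == keyword]
    return sum(outcomes) / len(outcomes)

# A naive model that scores applicants by historical hire rate
# inherits the historical disparity unchanged.
print(hire_rate("chess club"))          # 0.75
print(hire_rate("women's chess club"))  # 0.25
```

No explicit rule about gender appears anywhere in the code; the disparity enters purely through the training data, which is what makes this kind of bias hard to spot without deliberate testing.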
The lack of transparency and accountability in the use of AI is a major concern that these bills aim to address. Under the proposed legislation, companies using automated decision tools would be required to conduct impact assessments that analyze the risks of discrimination and provide explanations of the company’s safeguards. Some bills also aim to inform customers when AI is used in decision-making and give them the option to opt out, under certain conditions.
While the industry lobbying group BSA supports some of these proposed steps, such as impact assessments, the progress of legislation has been slow. Bills in Washington state and California have already encountered obstacles. However, there is hope for the future as lawmakers in other states, including Colorado, Rhode Island, Illinois, Connecticut, Virginia, and Vermont, prepare to introduce similar bills.
Suresh Venkatasubramanian believes that the impact assessments outlined in the bills are a step in the right direction, but he highlights the need for greater access to these reports to determine whether individuals have been discriminated against by AI. Additionally, Venkatasubramanian suggests that bias audits should be conducted to identify discrimination accurately and make the results public. However, the industry argues against routine testing of AI systems, as it could expose trade secrets.
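The kind of bias audit Venkatasubramanian describes can be sketched minimally: compare selection rates across groups and compute a "disparate impact" ratio, which U.S. employment guidance (the four-fifths rule) flags when it falls below 0.8. The data below is hypothetical, for illustration only.

```python
def selection_rate(decisions):
    """Fraction of applicants in a group that the system selected."""
    return sum(decisions) / len(decisions)

# Hypothetical automated-decision outcomes per group (1 = selected).
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 80% selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 30% selected

# Disparate impact ratio: selection rate of the disadvantaged group
# divided by that of the advantaged group.
ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.38 -> below the 0.8 threshold
```

An audit of this shape requires access to the system's decisions broken down by group, which is exactly why the debate over transparency and trade secrets matters: without that access, neither regulators nor affected individuals can run the calculation.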
Despite the challenges, the introduction of these bills reflects a growing awareness of how pervasive AI has become in society. As the technology spreads into ever more decisions, it is crucial for lawmakers and voters to grapple with these issues. Venkatasubramanian emphasizes the importance of caring about AI’s impact, stating, “It covers everything in your life. Just by virtue of that, you should care.”