Connecticut Senate Passes Bill to Regulate AI and Protect Individuals from Harm

The Connecticut Senate has taken a significant step toward regulating artificial intelligence (AI) by passing a bill aimed at curbing bias in automated decision-making and protecting individuals from harms caused by technologies such as deepfakes. Despite concerns that the legislation could stifle innovation and burden small businesses, the bill passed 24-12 after a lengthy debate. The vote follows two years of task force meetings in Connecticut and a year of collaboration among bipartisan legislators from multiple states who are working to prevent a fragmented, state-by-state approach to regulation in the absence of federal action.

Democratic Senator James Maroney, the bill’s key author, emphasized its significance, stating, “I think that this is a very important bill for the state of Connecticut. It’s very important I think also for the country as a first step to get a bill like this. Even if it were not to come and get passed into law this year, we worked together as states.” Legislators from Connecticut, Colorado, Texas, Alaska, Georgia, and Virginia have been collaborating on the issue and now find themselves in the midst of a national debate between civil rights groups and industry over the core components of the legislation.

During a news conference last week, Maroney and several other legislators stressed the need for regulation and described how they had worked with industry experts, academics, and advocates to craft proposed rules for safe and trustworthy AI. Not all lawmakers support the bill, however. Senate Minority Leader Stephen Harding expressed concern that senators were being rushed to vote on a complex piece of legislation without sufficient consideration of unintended consequences that could harm the state’s businesses and residents.

Some key Democrats, including Governor Ned Lamont, have also expressed reservations about the bill’s potential impact on the emerging AI industry. Lamont, a former cable TV entrepreneur, urged caution, stating, “We need to make sure we do this right and don’t stymie innovation.” Despite these concerns, the bill includes provisions aimed at protecting consumers, tenants, and employees from AI-driven discrimination based on race, age, religion, disability, and other protected classes. It also criminalizes the dissemination of deepfake pornography and deceptive AI-generated media in political campaigns, and it requires digital watermarks on AI-generated images for transparency.

Furthermore, certain AI users will be required to develop policies and programs to eliminate the risks of AI discrimination. The legislation also establishes an online AI Academy where Connecticut residents can take classes in AI, and it ensures that AI training is incorporated into state workforce development initiatives and other training programs. Some advocates, however, argue that the bill does not go far enough and have called for restoring a requirement that companies disclose more information to consumers before using AI to make decisions about them.

With the bill now awaiting action in the House of Representatives, Connecticut is poised to become a pioneer in regulating AI, taking steps to address biases and protect individuals from the potential harms of deepfakes and other AI-generated content. As the state moves forward with this legislation, other states and the federal government will likely be watching closely to determine the effectiveness and impact of these regulations.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.