In a move to ensure public safety in the realm of artificial intelligence (AI), the White House has announced new rules that federal agencies must adhere to. The guidelines state that agencies must show that their AI tools do not pose a threat to the rights and safety of the American people, or else cease using them altogether. This directive is part of a broader AI executive order signed by President Joe Biden in October 2023, which aims to safeguard both government and commercial AI systems.
Vice President Kamala Harris explained that each agency must have concrete safeguards in place by December of this year. These safeguards will cover a wide range of applications, from facial recognition screenings at airports to AI tools that control the electric grid or determine mortgages and home insurance. Harris offered an example: if the Veterans Administration wishes to use AI in hospitals to assist with diagnoses, it must first demonstrate that the AI system does not produce racially biased diagnoses or discriminate against certain populations.
The new policy directive also includes two other binding requirements. The first is that federal agencies must hire a chief AI officer with the necessary experience and expertise to oversee their AI technologies. This individual will have the authority to ensure that AI is used responsibly and effectively within the agency. The second requirement is that agencies must annually make public an inventory of their AI systems, along with an assessment of the associated risks.
It is important to note that the rules exempt intelligence agencies and the Department of Defense from some requirements; those entities are engaged in separate discussions about the use of autonomous weapons and AI in their operations. Shalanda Young, the director of the Office of Management and Budget, emphasized that the new requirements aim to strengthen the positive use of AI by the U.S. government. Responsible and well-managed AI systems have the potential to reduce wait times for critical government services, improve accuracy, and expand access to essential public services.
The new rules unveiled by the White House highlight the growing concerns surrounding the use of AI and the need to ensure its responsible implementation. By requiring agencies to verify that their AI tools are safe and unbiased, the government is taking a proactive approach to protecting the rights and well-being of the American people. The appointment of chief AI officers will further strengthen oversight and help ensure that AI technologies are used wisely and ethically. Making agencies' AI inventories and risk assessments public fosters transparency and accountability.
This move by the White House sets an important precedent for the responsible adoption of AI across various sectors. As AI continues to play an increasingly prominent role in decision-making processes, safeguarding public safety and rights becomes paramount. The new rules not only address immediate concerns but also lay the foundation for a framework that promotes the positive and beneficial use of AI in government services.
In the words of Vice President Harris, “When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people.” With these new rules in place, the United States is taking a significant step towards ensuring the responsible and ethical integration of AI into government operations.