Setting Standards for AI Safety: The Role of NIST

No technology since nuclear fission will shape our collective future quite like artificial intelligence, so it’s paramount that AI systems be safe, secure, trustworthy, and socially responsible. But unlike the atom bomb, this paradigm shift has been driven almost entirely by the private tech sector, which has been resistant to regulation, to say the least. Billions of dollars are at stake, making the Biden administration’s task of setting standards for AI safety a major challenge.

To define the parameters, the Biden administration has tapped a small federal agency, the National Institute of Standards and Technology (NIST). NIST’s tools and measures underpin products and services ranging from atomic clocks to election-security technology and nanomaterials. At the helm of the agency’s AI efforts is Elham Tabassi, NIST’s chief AI advisor. She shepherded the AI Risk Management Framework, published 12 months ago, which laid the groundwork for Biden’s Oct. 30 AI executive order. It cataloged risks such as bias against non-whites and threats to privacy.

“I think it is crucial that we have a shared understanding and language when it comes to AI,” says Tabassi. “A single term can mean different things to different people, and this can lead to misunderstandings and disagreements. This is particularly common in interdisciplinary fields such as AI.”

Setting standards for AI safety requires input from a diverse range of experts. Tabassi emphasizes the importance of involving not just computer scientists and engineers but also attorneys, psychologists, and philosophers. AI systems are inherently socio-technical, shaped by their environments and conditions of use. Testing them under real-world conditions, and drawing on cognitive scientists, social scientists, and philosophers to understand their risks and impacts, is therefore crucial to the process.

Despite being a small agency, NIST has a history of engaging with broad communities. Tabassi highlights the extensive public input they received during the creation of the AI risk framework. “In quality of output and impact, we don’t seem small. We have more than a dozen people on the team and are expanding,” says Tabassi.

While the task of setting AI standards is a challenging one, Tabassi is confident in the team’s abilities to meet the July deadline set by the executive order. “It’s not like we are starting from scratch,” she explains. “In June we put together a public working group focused on four different sets of guidelines, including for authenticating synthetic content.”

Amid concerns about transparency, Tabassi says NIST is committed to maintaining scientific independence. The agency is exploring options for a competitive process to support cooperative research opportunities, and she emphasizes that NIST will be the ultimate author of whatever it produces; the work will not be delegated to someone else.

When it comes to red-teaming AI models for risks and vulnerabilities, NIST will help develop the measurement science and standards needed for the work, but it will not be directly involved in deciding which models get red-teamed. “Our job is to help industry develop technically sound, scientifically valid standards. We are a non-regulatory agency, neutral and objective,” says Tabassi.

Accurately assessing and identifying AI risks requires addressing trustworthiness during design, development, and deployment. NIST’s AI Risk Management Framework includes guidelines for regular monitoring and evaluation throughout the lifecycle of AI systems. Tabassi stresses addressing these risks as early as possible, since fixing AI systems after they are deployed is costly and challenging. Context also plays a significant role in determining the tradeoffs among convenience, security, bias, and privacy in AI systems.

As NIST works toward a toolset for ensuring AI safety and trustworthiness, the agency is tackling the complex challenges of regulating and standardizing a rapidly advancing technology. With a diverse team of experts and a commitment to transparency and scientific independence, it aims to lay the foundation for a responsible and trustworthy AI future.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.