Lack of Transparency in AI Model Development

The development of artificial intelligence (AI) has brought tremendous advances across many fields, but it has also raised concerns about transparency. A recent study by Stanford HAI (the Stanford Institute for Human-Centered Artificial Intelligence) finds that major AI model developers, including OpenAI and Google, are becoming less transparent about their operations.

Stanford HAI released its Foundation Model Transparency Index (FMTI), which evaluates transparency on 100 indicators covering how companies build their foundation models, how those models work, and how they are used by others. Researchers from Stanford, MIT, and Princeton applied the index to 10 major foundation model developers, and the results point to substantial room for improvement.
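The article does not spell out the FMTI's scoring rubric, but since each developer receives a score out of 100 across 100 indicators, one simple reading is that each satisfied indicator contributes a point. The sketch below illustrates that interpretation; it is an assumption for clarity, and the indicator names and data in it are hypothetical, not taken from the FMTI.

```python
# Illustrative sketch only: assumes a score out of 100 is the count of
# satisfied indicators, one point each. Indicator names are hypothetical,
# not the FMTI's actual indicators.

INDICATORS = [
    "training-data-sources-disclosed",
    "compute-usage-disclosed",
    "labor-practices-disclosed",
]  # ... the real index uses 100 indicators

def transparency_score(satisfied: set[str], indicators: list[str]) -> int:
    """Count how many indicators a developer satisfies (one point each)."""
    return sum(1 for indicator in indicators if indicator in satisfied)

# Hypothetical developer satisfying 2 of the 3 sample indicators.
example = {"training-data-sources-disclosed", "compute-usage-disclosed"}
print(transparency_score(example, INDICATORS))  # -> 2 (out of len(INDICATORS))
```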

The scores in the FMTI ranged from a low of 12 to a high of 54. “This is a pretty clear indication of how these companies compare to their competitors, and we hope [it] will motivate them to improve their transparency,” says Rishi Bommasani, Society Lead at the Center for Research on Foundation Models (CRFM) within Stanford HAI.

Transparency issues have long plagued the digital technology industry, from deceptive ads to unclear wage practices to opaque content moderation systems on social media platforms. However, in the realm of AI, transparency becomes even more crucial. As AI technologies continue to evolve and be adopted across industries, it is important for journalists, scientists, and policymakers to understand their designs and the data that powers them.

Shayne Longpre, a PhD candidate at MIT, emphasizes the significance of transparency. “If you don’t have transparency, regulators can’t even pose the right questions, let alone take action in these areas,” he says. Foundation models not only have implications for energy use and intellectual property but also raise important questions about bias and labor practices.

Among the 10 foundation model developers evaluated, Meta scored highest with its Llama 2 model, receiving a score of 54. BigScience, the research collaboration behind BLOOMZ, came in second with 53, followed by OpenAI's GPT-4 with a score of 48. Despite these rankings, the researchers believe that none of these scores is truly satisfactory.

Rishi Bommasani emphasizes that Meta’s score should not be seen as the goal for other companies to reach. Rather, he believes that all companies should strive for higher levels of transparency, aiming for scores of 80, 90, or even 100. Transparency is essential for building trust with users, allowing regulators to ask the right questions, and ensuring that AI is used responsibly and ethically.

The findings of this study shed light on the current state of transparency in the development of AI models. In an era where AI touches various aspects of our lives, understanding the inner workings of these models is imperative. As technology continues to advance, it is crucial that companies prioritize transparency to address societal concerns and foster responsible AI development.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.