The Central Intelligence Agency (CIA) is rapidly adopting generative AI to augment its work and counter the threat posed by AI-generated deepfakes from adversaries. Nand Mulchandani, the agency’s first chief technology officer, is overseeing several projects, including a generative AI application similar to ChatGPT. The application, which draws on open-source data, is already used by thousands of analysts across the U.S. intelligence community.
However, Mulchandani acknowledges the limitations of generative AI. He describes these systems as “probabilistic in nature” and compares them to a “crazy, drunk friend” that can occasionally offer out-of-the-box thinking but is unsuited to precision tasks such as math or engineering. He also raises concerns about bias and narrow focus, which he calls the “rabbit hole” problem.
The CIA’s current use of large language models (LLMs) is primarily focused on summarization. Given the vast amount of information the agency collects each day, LLMs help analysts gain broader insight into sentiment and global trends. The agency still relies on human analysts, however, to interpret and explain the data and to make critical judgments.
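As a rough illustration only (not a description of the CIA’s actual tooling), a summarization workflow over open-source text could look like the following sketch, which assumes the publicly available Hugging Face transformers library and the facebook/bart-large-cnn model:

```python
# Illustrative sketch only: summarizing open-source text with an off-the-shelf model.
# Assumes the Hugging Face `transformers` package and the public
# `facebook/bart-large-cnn` summarization model; it does not reflect any
# actual CIA system or data.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

open_source_report = (
    "Regional media outlets reported increased shipping activity at several ports "
    "this week, alongside public statements from officials about new trade agreements. "
    "Social media sentiment in the area was broadly positive, though some commentators "
    "questioned the timeline for the announced infrastructure projects."
)

# Produce a short summary an analyst could scan before reading the full source.
summary = summarizer(open_source_report, max_length=60, min_length=15, do_sample=False)
print(summary[0]["summary_text"])
```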
The integration of generative AI presents challenges for the CIA, particularly in terms of information compartmentalization and system building. While there is little internal resistance to adopting AI technologies, the agency must navigate legal constraints and ensure that encryption and privacy controls are maintained when combining data from various sources.
Mulchandani emphasizes that human analysts remain essential and that generative AI is a powerful tool for brainstorming and boosting productivity, capable of surfacing new ideas and valuable insights. Careful implementation and thoughtful consideration of its use, however, are crucial to avoid negative consequences.
In terms of partnerships with large language model providers, the CIA is not tied to any specific vendor. It evaluates and uses a range of commercial-grade and open-source LLMs, and the rapid evolution and competition in that market make it advantageous for the agency to stay flexible and adapt to new releases.
The CIA is committed to scaling up its use of generative AI and views it as a critical technology. While it has already made significant progress, the agency recognizes that much more remains to be done to integrate the technology into its applications and systems, and it is dedicated to staying at the forefront of these advancements to protect U.S. interests in an increasingly AI-driven world.