
The U.S. National Institute of Standards and Technology (NIST) has issued four draft publications to address the challenges associated with artificial intelligence (AI) and to promote the safety, security, and reliability of AI systems. The drafts, issued in response to the October 2023 AI Executive Order, are open for public comment until June 2, 2024. One area of focus is the security threats arising from generative AI technologies, for which NIST provides a framework of more than 400 potential risk-management actions. The drafts also address securing the data used to train AI systems and encouraging transparency in AI-generated content. In addition, NIST is developing a Global Engagement Plan on AI Standards to promote international cooperation and coordination, and it has introduced NIST GenAI, a tool for evaluating and measuring generative AI technologies and guiding the ethical dimensions of content creation. Together, these efforts aim to establish a safe and trustworthy AI ecosystem through the involvement of key stakeholders and the formulation of preferred practices.
Alleviating generative AI risks
A main area of concern in NIST's draft publications is the security threats arising from generative AI technologies. The Generative AI Profile of the AI Risk Management Framework (AI RMF) identifies 12 risks, ranging from easier access to sensitive information to the propagation of hate speech and malicious content. Addressing these risks has been a key focus for NIST, which has identified more than 400 potential risk-management actions that organizations can consider. The framework gives developers a structure to follow and a way to align their goals and priorities.
Minimizing training data risks
Another key topic in the drafts is securing the data used to train AI systems. The draft publication on Secure Software Development Practices for Generative AI and Dual-Use Foundation Models, which extends NIST's existing guidance, is intended to safeguard the integrity of AI systems amid concerns about malicious training data. NIST recommends practices for writing secure code and addresses data-related risks across collection and use, hardening AI systems against potential threats.
Encouraging transparency in AI-created content
In response to the rapid proliferation of synthetic digital content, NIST's forthcoming report on Reducing Risks Posed by Synthetic Content outlines measures to mitigate the risks such material poses. Through digital watermarking and metadata recording, NIST aims to make altered media traceable and identifiable, helping to curb harms such as the distribution of non-consensual intimate images and child sexual abuse material.
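To make the watermarking idea concrete, here is a deliberately minimal sketch (not any scheme NIST endorses): a provenance tag is hidden in the least significant bits of an image's pixel values, then recovered later. The function names and the `AI:GEN` tag are illustrative assumptions; production watermarks are designed to survive compression and editing, which this toy version does not.

```python
def embed_tag(pixels, tag):
    """Embed each bit of `tag` (bytes) into the LSB of successive pixel values."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the tag")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_tag(pixels, n_bytes):
    """Read `n_bytes` back out of the pixels' least significant bits."""
    tag = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[b * 8 + i] & 1)
        tag.append(byte)
    return bytes(tag)

# Illustrative use: mark fake 8-bit grayscale pixel data with a provenance tag.
image = [120, 130, 140, 150] * 20          # stand-in pixel data
marked = embed_tag(image, b"AI:GEN")
assert extract_tag(marked, 6) == b"AI:GEN"
```

Because only the lowest bit of each pixel changes, the marked image is visually indistinguishable from the original, which is the basic trade-off watermarking schemes exploit: imperceptible to viewers, recoverable by detectors.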
Driving global engagement on AI standards
Recognizing that international cooperation is key to establishing AI-related standards, NIST has produced a draft Global Engagement Plan on AI Standards. The plan aims to encourage cooperation and coordination among international allies, standards-developing organizations, and the private sector to accelerate technology standards for AI. By prioritizing content provenance and testing methods, NIST aims to build a strong regime that ensures AI technologies operate safely and ethically around the globe.
Initiating NIST GenAI
Moreover, the institute has created NIST GenAI, a program that assesses and quantifies the capabilities of generative AI tools. Through NIST GenAI, the U.S. AI Safety Institute at NIST will issue challenge problems and pilot evaluations to help distinguish AI-generated content from human-produced content. The initiative's main goal is to promote information reliability and to provide guidance on the ethical dimensions of content creation in the AI era.
NIST’s announcement of these draft reports and the launch of NIST GenAI mark a proactive effort to address the challenges AI poses to society without stifling innovation. By soliciting input from key stakeholders, such as companies that develop or deploy AI technologies, NIST lets those most affected shape its AI safety and standards guidelines. Active involvement in this process helps establish preferred practices and an industry-standard approach, ultimately leading to a safe and trustworthy AI ecosystem.
With these measures and initiatives, NIST is taking a proactive approach to the challenges and risks associated with AI technologies. By providing guidance on security threats, minimizing training data risks, encouraging transparency in AI-created content, driving global engagement on AI standards, and launching the NIST GenAI program, NIST aims to ensure the safety, security, and reliability of AI systems while promoting responsible innovation and ethical content creation, working toward an AI ecosystem that benefits society as a whole.