U.S. AI Safety Institute Partners with Anthropic and OpenAI for Collaborative Research

The U.S. AI Safety Institute, part of NIST, has partnered with Anthropic and OpenAI to enhance AI safety through collaborative research and model evaluation. These agreements aim to address safety risks and promote responsible AI development, marking a significant milestone in AI governance.


The U.S. Artificial Intelligence Safety Institute, part of the Department of Commerce’s National Institute of Standards and Technology (NIST), has announced significant agreements with both Anthropic and OpenAI aimed at enhancing AI safety research, testing, and evaluation. These partnerships mark a pivotal step in ensuring the responsible development and deployment of artificial intelligence technologies.

Framework for Collaboration

Under the newly established Memoranda of Understanding (MoUs), the U.S. AI Safety Institute will gain access to cutting-edge models from both companies prior to their public release. This access will facilitate collaborative research focused on evaluating AI capabilities and identifying potential safety risks. The partnerships also aim to develop methodologies to mitigate these risks effectively.

“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety.”

— Elizabeth Kelly, Director of the U.S. AI Safety Institute

Collaborative Feedback and Global Engagement

In addition to its direct collaboration with Anthropic and OpenAI, the U.S. AI Safety Institute plans to work closely with the U.K. AI Safety Institute to provide actionable feedback on potential safety improvements to the models developed by the two companies. This international collaboration underscores a shared global commitment to AI safety.

The U.S. AI Safety Institute builds upon NIST’s extensive history of more than 120 years in advancing measurement science, technology, and standards. The evaluations conducted through these agreements will help further NIST’s mission by fostering in-depth collaboration and exploratory research on advanced AI systems across various risk domains.

Supporting Safe AI Development

The evaluations and research initiatives will play a vital role in promoting the safe, secure, and trustworthy development of AI technologies. This effort aligns with the Biden-Harris administration’s Executive Order on AI and the voluntary commitments made by leading AI developers to prioritize safety in their innovations.

As the landscape of artificial intelligence continues to evolve, these agreements highlight the importance of collaborative efforts in ensuring that AI technologies are developed with safety and ethical considerations at the forefront.

Anika V
