News

Anthropic, Google, Microsoft and OpenAI collaborate on responsible development of frontier AI models

Anthropic, Google, Microsoft and OpenAI have announced a new industry body dedicated to the “safe and responsible” development of frontier AI models, with the aim of facilitating collaboration, identifying best practices, and advancing AI safety research.

The Frontier Model Forum “will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem, such as through advancing technical evaluations and benchmarks, and developing a public library of solutions to support industry best practices and standards”.

The collaboration has four core objectives: advancing AI safety research by promoting “responsible development” of frontier models and enabling standardised evaluations of capability and safety; identifying best practices for the responsible development and deployment of frontier models, and helping the public understand the nature and capabilities of the technology; collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks; and supporting efforts to develop applications that help meet challenges such as early cancer detection and climate change.

Membership is open to organisations which develop and deploy frontier models, demonstrate a strong commitment to frontier model safety, and are willing to advance the Forum’s work through participation in joint initiatives.

Anna Makanju, OpenAI’s VP of Global Affairs, says: “Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance. It is vital that AI companies–especially those working on the most powerful models–align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.”

In May, we wrote about the £100 million in funding announced by the UK government to establish an AI taskforce tasked with ensuring the “safe and reliable use of this pivotal artificial intelligence across the economy”.

In June, we looked at the CDEI’s publication of its portfolio of AI assurance techniques, designed to build and maintain trust in order to realise the benefits of AI technology.

Also in June, we covered NICE’s new one-stop shop for AI and digital regulations for health and social care, intended to help the wider health and care system “adopt and make use of new digital and artificial intelligence”.