OpenAI may ‘adjust’ its safeguards if rivals release ‘high-risk’ AI

OpenAI has updated its Preparedness Framework so that it may adjust its safety requirements if a rival AI lab releases a 'high-risk' system without comparable safeguards. The change comes amid growing competitive pressure in the AI industry. OpenAI has faced criticism for allegedly prioritizing speed over safety, including from former employees who filed a brief in Elon Musk's lawsuit against the company. While acknowledging that it may adjust its policies, OpenAI insists any changes would be made cautiously and that it remains committed to protective safeguards.
The updated framework also leans more heavily on automated evaluations to speed up development, though reports suggest this may come at the cost of thoroughness in safety checks. Critics argue that OpenAI's testing timelines have been compressed and that some tests are run on earlier versions of models rather than the versions released to the public. The new framework categorizes models as having 'high' or 'critical' capability, each tier carrying specific safeguard requirements. These changes mark the first update to the Preparedness Framework since its introduction in 2023.
RATING
The news story provides a largely accurate and timely overview of OpenAI's updates to its Preparedness Framework, reflecting the company's approach to AI safety amidst competitive pressures. It effectively communicates OpenAI's intentions and the context for potential policy adjustments, making it accessible to a broad audience. However, the article could benefit from more detailed sourcing and transparency regarding some claims, particularly those involving external reports and criticisms. While the story maintains a neutral tone and clear structure, it lacks a diversity of perspectives that could enhance its balance and depth. Overall, the article is a valuable contribution to ongoing discussions about AI safety and ethics, but it could be strengthened by incorporating a wider range of viewpoints and more explicit sourcing.
RATING DETAILS
The news story is largely accurate in its portrayal of OpenAI's updates to its Preparedness Framework. It correctly identifies OpenAI's intention to possibly adjust its safety requirements in response to competitive pressure from rival AI labs, and it accurately reflects OpenAI's stance that any adjustments would be made only after careful consideration and confirmation that the risk landscape has changed. However, the claim that OpenAI has lowered safety standards to speed up releases rests on external reports and allegations and requires further verification. The story also mentions a brief filed by ex-employees, which would need corroboration from the filing itself or from statements by those involved. Overall, the factual content aligns well with OpenAI's public statements, but several claims depend on external sources that have not been independently confirmed.
The article presents a balanced view by discussing both OpenAI's official statements and the criticisms it faces. It includes OpenAI's perspective on maintaining safety standards and its rationale for potential adjustments. However, the story would benefit from the viewpoints of independent experts or other stakeholders in the AI community, which would give a broader perspective on the implications of these changes. While it mentions criticism from ex-employees and external reports, it does not delve deeply into those opposing viewpoints; doing so would strengthen the article's balance.
The article is well-structured and uses clear language to explain complex topics related to AI safety and policy adjustments. It logically presents OpenAI's updates and the surrounding context, making it accessible to readers with varying levels of familiarity with AI technologies. The tone remains neutral throughout, focusing on factual reporting rather than opinion. However, some technical terms and concepts could be further simplified or explained for a general audience to ensure full comprehension.
The story relies on OpenAI's official blog post and statements, which are credible primary sources for understanding the company's policy changes. However, the article also references reports from unnamed sources and allegations from ex-employees without providing direct citations or evidence, which weakens the source quality. Including more detailed references to these reports or statements from the ex-employees would improve the credibility and reliability of the information presented.
The article provides a clear explanation of OpenAI's updated framework and the context for potential adjustments in safety requirements. However, it lacks transparency in detailing the sources of certain claims, such as the accusations against OpenAI and the brief filed by ex-employees. The story would benefit from more explicit attribution and explanation of the methodology behind these claims to enhance transparency. Additionally, it does not disclose any potential conflicts of interest or biases in the reporting, which is important for maintaining reader trust.
Sources
- https://openai.com/index/updating-our-preparedness-framework/
- https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbddebcd/preparedness-framework-v2.pdf
- https://techcrunch.com/2025/04/15/openai-says-it-may-adjust-its-safety-requirements-if-a-rival-lab-releases-high-risk-ai/
- https://www.rdworldonline.com/openai-framework-ai-now-on-the-cusp-of-doing-new-science/
- https://www.axios.com/2025/04/15/openai-risks-frameworks-changes