Meta, the tech giant led by Mark Zuckerberg, has announced a policy shift under which it may halt the development of AI systems it deems too risky. This comes despite Zuckerberg's previous commitment to making artificial general intelligence (AGI) publicly available.
In its newly published Frontier AI Framework, Meta lays out a classification scheme that sorts AI models into "high risk" and "critical risk" categories. These designations apply to models that could aid in cybersecurity breaches or in chemical or biological attacks. The crucial distinction is severity: a high-risk model may make such an attack easier to carry out, whereas a critical-risk model could produce a catastrophic outcome that cannot be contained.

Meta's assessment of AI risk doesn't rely on a single empirical test. Instead, risk classification is based on reviews from both internal and external researchers, with senior executives making the final decision. The company acknowledges that current scientific approaches for AI risk assessment lack definite quantitative indicators, making subjective expert input necessary.
To contain these risks, Meta says it will restrict internal access to high-risk AI models and postpone their release until mitigations bring the risk down to moderate levels. Critical-risk models will face even stronger safeguards, including security measures to prevent the models from being exfiltrated and a complete halt to development until they can be made safer.

This policy adjustment highlights the tensions in Meta's open AI strategy. While the company has championed an open-access approach with its Llama models, that openness has sometimes backfired: according to some reports, at least one US adversary has used Meta's AI to develop a defense chatbot.
Meta intends to differentiate itself from competitors like OpenAI and Chinese firm DeepSeek by using the Frontier AI Framework, which balances AI innovation with safety. The company insists its goal is to maximize AI's benefits while reducing potential harm, indicating a more cautious approach in the evolving landscape of artificial intelligence.