
Meta CEO Mark Zuckerberg has pledged to make artificial general intelligence (AGI) openly available, but the company's new Frontier AI Framework outlines scenarios in which it may withhold highly capable AI systems because of the risks they pose. Meta categorizes AI threats as "high-risk" or "critical-risk," with the latter capable of catastrophic outcomes such as cyber or biological attacks. Rather than relying on empirical tests, Meta draws on internal and external experts to assess these risks. If a system is deemed high-risk, Meta will limit access until mitigations are in place; for critical-risk systems, it may halt development entirely. The policy arrives as Meta faces scrutiny over its open AI approach, especially with its Llama models reportedly being misused. Meta maintains that weighing AI's benefits against its risks is key to safe deployment, distinguishing itself from companies with fewer safeguards.