Moving on to Meta, which continues to push the boundaries of AI innovation. The Meta FAIR team recently introduced a collection of new research models and datasets, including an upgraded image segmentation tool, a cross-modal language model, solutions to accelerate large language model performance, and more.
One of the highlights is Spirit LM, an open-source multimodal language model that integrates speech and text to generate more natural-sounding and expressive speech. Meta's SAM 2.1 update offers improved image and video segmentation over its popular predecessor, which saw over 700,000 downloads in just 11 weeks. Another innovation is Layer Skip, an end-to-end solution that speeds up language model generation to nearly twice as fast without requiring specialized hardware. Other releases include SALSA for security testing, Meta Lingua for language model training, and a synthetic data generation tool.
So, why is this significant? Meta continues to raise the bar in AI with major releases across a range of areas. Given the strength of the company's open-source systems, it's becoming hard to envision a future where closed models and tools hold a significant advantage. The gap between open- and closed-source seems to narrow with each new release.
Now, let's talk about AI safety. Anthropic has released a set of new evaluations aimed at detecting potential sabotage capabilities in advanced AI systems. The goal is to identify risks that could arise if models attempt to circumvent human oversight or influence decision-making.
They developed four new evaluations: human decision sabotage, code sabotage, sandbagging (hiding capabilities), and undermining oversight. These evaluations use mock scenarios to test models' ability to manipulate and deceive humans, insert bugs into code, and evade monitoring systems. Tests run on the Claude 3 Opus and Claude 3.5 Sonnet models did not flag concerning results, though both showed some capability to sabotage.
Anthropic is open-sourcing these evaluations and emphasizes that stronger mitigations will be needed as AI continues to improve. This research shows that while AI isn't very good at sabotaging humans yet, the capabilities are there to some extent. If AI model development continues at this pace, it's crucial to anticipate and mitigate these potential risks.
And now, some quick hits in the AI world:
- The Enterprise Deployment Playbook, hosted with Section's CEO and COO, offers steps for meaningful AI adoption and internal AI ROI; you can RSVP for free.
- Perplexity is discussing a new fundraising round that would double the company's valuation to over $8 billion, according to the Wall Street Journal.
- Apple internally believes that its AI technology is over two years behind industry leaders, according to Bloomberg's Apple insider Mark Gurman.
- Midjourney will release a new tool this week allowing users to edit uploaded images using its AI model, along with new retexturing capabilities—initially limited to a smaller group for testing.
- AI and quantum tech startup SandboxAQ is seeking new funding at a valuation over $5 billion, backed by former Google CEO Eric Schmidt and Salesforce CEO Marc Benioff.
- Publisher Penguin Random House has revised its global copyright notice to include a statement explicitly prohibiting the use of its texts for AI training; the notice will now appear on all its titles.
That's all for today's edition of The Daily AI Briefing. Thank you for tuning in and staying up-to-date with the latest in artificial intelligence. Don't forget to subscribe and share this podcast with your colleagues and friends interested in AI.
We'll see you tomorrow for another roundup of the latest AI news. Until then, take care and stay curious!