Anthropic CEO's AGI Prediction

Good morning. It’s Wednesday, November 13th.

Did you know? On this day in 2006, Google completed its acquisition of YouTube for $1.65 billion.

In today’s email:

  • Anthropic CEO on Lex Fridman

  • OpenAI’s “Predicted Outputs”

  • DeepMind open-sources AlphaFold 3

  • Sutskever predicts a new AI "age of discovery"

  • 4 New AI Tools

  • Latest AI Research Papers

You read. We listen. Let us know what you think by replying to this email.

In partnership with HUBSPOT

Unlock the full potential of your workday with cutting-edge AI strategies and actionable insights, empowering you to achieve unparalleled excellence in the future of work. Download the free guide today!

Today’s trending AI news stories

Anthropic CEO Dario Amodei Predicts AGI Arrival by 2026, Warns of Growing AI Risks

In a recent interview with Lex Fridman, a valued follower of AI Breakfast on 𝕏, Anthropic CEO Dario Amodei discussed the rapid progress toward Artificial General Intelligence (AGI), predicting its arrival by 2026-2027, with internal data suggesting it could happen even sooner. While OpenAI focuses on being first, Anthropic prioritizes safety, particularly in light of the existential risks posed by increasingly powerful AI systems. Chief among these concerns are the potential for catastrophic misuse, such as in cyber or biological weapons, and the challenge of managing AI systems that could soon exceed human control.

Amodei also discussed AI Safety Levels (ASL) at length: the industry is currently at ASL-2 and is expected to reach ASL-3 by 2025, a turning point at which AI models could meaningfully enhance the capabilities of malicious actors.

Anthropic's approach is grounded in the view that AI systems evolve much like biological ones, an outlook that has led to interpretability discoveries such as the emergence of a "Donald Trump neuron" in large language models. With models progressing from high-school-level toward human-level capabilities by 2025, Amodei stressed the critical need for meaningful AI regulation by the end of 2025 to mitigate the associated risks.

OpenAI introduced Predicted Outputs to reduce latency on GPT-4o and GPT-4o-mini models

OpenAI's Predicted Outputs feature, now available in the chat completions API, significantly reduces latency for GPT-4o and GPT-4o-mini models by letting developers supply a reference string for output the model is expected to largely reproduce. This speeds up tasks such as updating blog posts, iterating on prior responses, and rewriting code in existing files.

Factory AI tested this feature, reporting 2-4x faster response times compared to previous models, while maintaining high accuracy. Large file edits, previously taking about 70 seconds, now complete in roughly 20 seconds. Early access testing showed sub-30s response times and performance on par with other state-of-the-art models, even on files ranging from 100 to 3000+ lines. This breakthrough, powered by techniques like Speculative Decoding, enables faster feedback loops and opens up new possibilities for AI-driven software engineering. Read more.
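In practice, the feature is exercised by passing the mostly-unchanged text as a `prediction` in a chat completions request. A minimal sketch (the model name and refactoring task below are illustrative placeholders, not from the announcement):

```python
# Sketch: using OpenAI's Predicted Outputs in a chat completions request.
# The original file content is passed as the prediction; output tokens that
# match it can be accepted quickly instead of being generated from scratch.

original_code = """class User:
    first_name: str  # rename this field to username
"""

# Request payload as it would be sent with the official openai SDK:
#   client.chat.completions.create(**request)
request = {
    "model": "gpt-4o-mini",
    "messages": [
        {
            "role": "user",
            "content": "Rename first_name to username. "
                       "Return only the full, updated code.",
        },
        {"role": "user", "content": original_code},
    ],
    # Most of the file will be unchanged, so it doubles as the prediction.
    "prediction": {"type": "content", "content": original_code},
}
```

With the official `openai` Python SDK, this dictionary would be unpacked into `client.chat.completions.create(**request)`; the larger the unchanged portion of the output, the bigger the latency win.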

Google DeepMind open-sources AlphaFold 3, ushering in a new era for drug discovery and molecular biology

Google DeepMind has open-sourced AlphaFold 3, giving academic researchers access to its source code under a Creative Commons license, though the model weights still require explicit permission. This iteration builds on its predecessor by modeling the intricate interactions between proteins, DNA, RNA, and small molecules, a capability essential for accelerating drug discovery and molecular biology while reducing dependence on prohibitively costly and time-consuming laboratory experiments. Read more.

OpenAI co-founder Sutskever predicts a new AI "age of discovery" as LLM scaling hits a wall  

Ilya Sutskever suggests that the AI industry is shifting from scaling large language models (LLMs) toward "test-time compute," as massive pre-training runs grow ever more costly and hit diminishing returns. Companies like OpenAI, Anthropic, and Google DeepMind are adopting this approach, letting models generate multiple candidate solutions and select the best one, which improves accuracy on tasks like mathematical problem-solving. The shift could recalibrate Nvidia's hardware dominance by creating demand for specialized inference chips, though Nvidia's products remain viable for test-time compute. Read more.
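One simple form of test-time compute is best-of-N sampling: draw several candidate answers and keep the one a scoring function prefers. A toy sketch of the idea (the random "model" and arithmetic scorer below are stand-ins for a real LLM and verifier, not any lab's actual method):

```python
import random

def toy_model(prompt: str) -> int:
    """Stand-in for an LLM: answers "17 + 25" with some sampling noise."""
    return 42 + random.choice([-2, -1, 0, 0, 0, 1, 2])

def score(answer: int) -> float:
    """Stand-in verifier: higher score for answers closer to the true sum."""
    return -abs(answer - (17 + 25))

def best_of_n(prompt: str, n: int = 8) -> int:
    """Test-time compute: sample n candidates, return the best-scoring one."""
    candidates = [toy_model(prompt) for _ in range(n)]
    return max(candidates, key=score)

random.seed(0)
# Spending more samples at inference time makes the correct answer likelier.
answer = best_of_n("What is 17 + 25?", n=8)
```

The same pattern scales up when the scorer is a learned reward model or a formal checker; accuracy is bought with extra inference, not extra training.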

AI Hardware and Infrastructure Advancements

AI Innovations in Robotics and Physical Modeling

AI in Language, Reasoning, and Information Processing

Corporate AI Strategy and Industry Leadership

4 new AI-powered tools from around the web

arXiv is a free online library where researchers share pre-publication papers.

Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.

Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on X!