OpenAI Signs $300B Contract with Oracle

Good morning. It’s Friday, September 12th.

On this day in tech history: In 2006, Fei-Fei Li and colleagues quietly launched the Caltech 101 Object Categories dataset. While ImageNet (2009) gets all the glory, Caltech 101 (and later Caltech 256) were among the first large-ish, curated benchmarks for object recognition. They seeded the “benchmark-driven” culture of computer vision that made ImageNet possible.

In today’s email:

  • OpenAI Signs $300B Contract with Oracle

  • Quantum Leap with Google’s Willow Processor

  • Anthropic Adds Memory/Incognito Mode

  • Stable Audio 2.5 Cuts Production Steps

  • 5 New AI Tools

  • Latest AI Research Papers

You read. We listen. Let us know what you think by replying to this email.

In partnership with Atla

Building agents?

Atla is the improvement engine for agents. The platform helps agent teams find and fix critical failures in hours, not days. Key features:

  • “Patterns” automatically identify recurring failures across runs

  • Span-level annotations point to specific reasons for errors

  • “Compare” allows users to test changes (models, prompts, etc.) and measure improvements

Fast-growing agent startups use Atla to debug faster by getting to root causes immediately, as well as to ship better by making tickets from the issues that matter most. Atla integrates with Python and TypeScript, and supports popular agent frameworks.

Thank you for supporting our sponsors!

Today’s trending AI news stories

$300B Oracle cloud pact and $100B+ nonprofit stake put OpenAI in rare territory 

OpenAI has signed a $300B, five-year cloud contract with Oracle starting in 2027, potentially the largest cloud deal in history. The deal accelerates OpenAI’s shift away from Azure exclusivity and builds on its $500B Stargate data center project with SoftBank. Oracle, riding the wave, forecast $144B in annual cloud revenue by 2030 and reported a record $455B backlog, with AI contracts from OpenAI, Meta, and xAI driving a 42% stock surge.


On governance, OpenAI’s nonprofit will retain oversight and a $100B+ equity stake as the for-profit arm transitions into a public benefit corporation. A nonbinding truce with Microsoft secures cloud integration and tech access, but regulatory reviews in California and Delaware remain. The structure positions the nonprofit as one of the most capitalized philanthropic entities worldwide.

At the AI Infra Summit, hardware chief Richard Ho outlined OpenAI’s global-scale compute vision: stateful compute for long-lived agents, racks drawing 600 kW+, optical interconnects beyond copper, and silicon-level safety features like kill switches and secure enclaves.

ChatGPT’s new Developer Mode also unlocked full Model Context Protocol (MCP) write access this week. Developers can now wire connectors to update Jira tickets, trigger Zapier workflows, or run incident-response automations directly from chat, powered by OAuth, HTTP streaming, and SSE. OpenAI cautions that prompt injection, data exposure, and unintended writes remain serious risks, with all actions requiring explicit confirmation.
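To make that concrete, here is a minimal sketch of a write-capable connector, assuming a recent version of the official `mcp` Python SDK (FastMCP); the server name, the ticket tool, and its fields are hypothetical placeholders rather than OpenAI’s or any tracker’s actual API.

```python
# Minimal MCP connector sketch with one hypothetical write tool.
# ChatGPT's Developer Mode would list this tool and, because it writes,
# ask the user to confirm each call before executing it.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ticket-connector")  # hypothetical connector name

@mcp.tool()
def update_ticket(ticket_id: str, status: str) -> str:
    """Move a ticket to a new status (a write action, not a read)."""
    # Placeholder: call your real issue tracker's API here.
    return f"Ticket {ticket_id} moved to '{status}'"

if __name__ == "__main__":
    # Streamable HTTP transport so a remote client can reach the server;
    # in a real deployment, OAuth would sit in front of this endpoint.
    mcp.run(transport="streamable-http")
```

The confirmation step is the point: every write the model proposes still has to be approved explicitly before the connector executes it.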

Geopolitics is also still in play. After stops at Mar-a-Lago, the White House, and the Middle East, tech leaders are now headed to the UK with Donald Trump. Sam Altman, Jensen Huang, and Tim Cook are confirmed attendees; Bloomberg reports the trip will coincide with major announcements, including billions in data center investments from OpenAI and Nvidia. Read more.

AI Ads, Global AI Max, and a Quantum Leap with Google’s Willow Processor

Google’s 58-qubit quantum processor Willow has achieved a physics first: imaging a Floquet topologically ordered state, a highly entangled, non-equilibrium phase of matter that had previously only been theorized. Researchers from TU Munich, Princeton, and Google Quantum AI used a custom interferometric algorithm to map its topological structure and observe exotic particle “transmutations” in real time. These phenomena are effectively impossible to simulate on classical supercomputers, demonstrating that quantum processors can serve as experimental labs for matter beyond conventional physics. The work, published in Nature, opens the door to next-generation quantum simulations and materials research.

Meanwhile, Google is expanding its AI advertising footprint globally. Its AI Max engine now embeds context-specific ads directly into AI-generated answers. Queries like “how to fix low water pressure” can display AI explanations alongside relevant service ads. Over 60% of shopping searches are conversational, and Google’s “AI Mode,” live in 180+ countries, handles bookings and purchases directly, potentially becoming the default search experience.

And more is coming fast. As Logan Kilpatrick, Google’s AI developer relations lead, put it: the team is “locked in for the next few months” with launches stacked. Read more.

Anthropic adds memory to Claude Team and Enterprise, incognito mode for all users 

Anthropic just gave Claude AI a serious upgrade for workplace users. Team and Enterprise plans now get project-based memory, letting Claude remember workflows, client needs, and ongoing projects; users can export those memories per project or even migrate them to ChatGPT or Google Gemini. Users stay in control with a memory summary to view, edit, or delete stored info, and admins can shut it off org-wide. On top of that, Anthropic is also introducing an Incognito mode for all users.

Chats in this mode bypass memory and conversation history, providing context-free, confidential exchanges. Messages remain stored for at least 30 days for safety and legal compliance but are excluded from memory.

The rollout hit a real-world snag on Wednesday when Claude.ai, Claude Code, and the API all went down for about 30 minutes, leaving developers joking about “coding like cavemen.” Read more.

Stable Audio 2.5 cuts generation steps from 50 to 8 with ARC

Stability AI just supercharged audio generation with Stable Audio 2.5. Thanks to the new Adversarial Relativistic-Contrastive (ARC) post-training method, the model slashes inference steps from 50 to 8, cranking out three-minute tracks in under two seconds on Nvidia H100 GPUs. ARC skips heavy teacher models and classifier-free guidance, optimizing quality directly. Enterprise users get more than speed: audio inpainting lets you extend or tweak clips, deployment works via API or on-prem, and all datasets are fully licensed for commercial use.
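For teams taking the API route, generation is a single HTTPS request; the sketch below uses Python’s `requests`, and the endpoint path, form fields, and output handling are assumptions for illustration only, so check Stability AI’s current API reference rather than treating this as the documented Stable Audio 2.5 interface.

```python
# Hedged sketch of a text-to-audio request against Stability AI's REST API.
# The endpoint path and field names are assumptions for illustration only;
# consult the official docs for the real Stable Audio 2.5 parameters.
import requests

API_KEY = "sk-..."  # your Stability AI API key (placeholder)
ENDPOINT = "https://api.stability.ai/v2beta/audio/stable-audio-2/text-to-audio"  # assumed path

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}", "Accept": "audio/*"},
    data={
        "prompt": "uplifting sonic logo, bright synths",  # assumed field name
        "duration": "10",                                 # seconds (assumed field name)
        "output_format": "mp3",                           # assumed field name
    },
    files={"none": ""},  # force multipart/form-data, mirroring other v2beta endpoints (assumed)
    timeout=120,
)
response.raise_for_status()

# Save the returned audio bytes (assumes the API streams the file back directly).
with open("sonic_logo.mp3", "wb") as f:
    f.write(response.content)
```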

Stability is also teaming up with WPP’s Amp to integrate sonic branding into enterprise workflows, promising rights-safe, scalable, and highly customizable branded audio in minutes rather than weeks. Read more.

5 new AI-powered tools from around the web

Latest AI research papers

arXiv is a free online library where researchers share pre-publication papers.

Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.

Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!