Claude Opus 4.5 Goes Live

In partnership with Deepgram

Good morning. It’s Wednesday, November 26th.

On this day in tech history: In 1959, Arthur Samuel’s checkers program made headlines after learning to beat its creator using “rote learning” and self-play. Running on an IBM 704, it was one of the first systems to improve from experience rather than explicit rules. Today’s reinforcement learning giants like AlphaGo and MuZero are echoes of that primitive setup, a mainframe teaching itself strategy move by move, decades before GPUs or deep nets existed.

In today’s email:

  • Claude Opus 4.5 goes live

  • OpenAI brings voice chat and global data residency to ChatGPT

  • Gemini 3, Aluminium OS, and TPUs put Nvidia and OpenAI on notice

  • 5 New AI Tools

  • Latest AI Research Papers

You read. We listen. Let us know what you think by replying to this email.

From Hype to Production: Voice AI in 2025

Voice AI has crossed into production. Deepgram’s 2025 State of Voice AI Report with Opus Research quantifies how 400 senior leaders - many at $100M+ enterprises - are budgeting, shipping, and measuring results.

Adoption is near-universal (97%) and budgets are rising (84%), yet only 21% of leaders are very satisfied with legacy agents. That gap is the opportunity: human-like agents that handle real tasks, reduce wait times, and lift CSAT.

Get benchmarks to compare your roadmap, the first use cases breaking through (customer service, order capture, task automation), and the capabilities that separate leaders from laggards - latency, accuracy, tooling, and integration. Use the findings to prioritize quick wins now and build a scalable plan for 2026.

Today’s trending AI news stories

Claude Opus 4.5 goes live, cutting token bloat and leveling up agent workflows

Anthropic has rolled out Claude Opus 4.5, pushing AI performance in coding, reasoning, and agentic workflows to new levels. The model nailed 80.9 percent on SWE-Bench Verified and outperformed human engineers on a two-hour take-home exam, showing advanced problem-solving across seven programming languages. Developers can now trade compute for depth with a new Effort parameter, hitting top-tier results while cutting token use by up to 76 percent. Memory compaction and expanded integrations across Chrome, Excel, and desktop make long-context workflows smoother. Pricing drops by two-thirds, making high-powered AI more accessible.
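
For developers who want to try the new knob, here’s a minimal sketch using the Anthropic Python SDK. The effort field name, its allowed values, and the model identifier below are assumptions for illustration, so check the current Messages API documentation for the exact parameters.

    # Minimal sketch: requesting Claude Opus 4.5 with a reduced "effort" setting
    # to trade some depth for fewer output tokens. The parameter name, its values,
    # and the model identifier are assumptions, not confirmed API fields.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-opus-4-5",          # assumed model alias
        max_tokens=2048,
        extra_body={"effort": "medium"},  # hypothetical knob: "low" | "medium" | "high"
        messages=[
            {"role": "user", "content": "Refactor this function and explain the trade-offs briefly."}
        ],
    )
    print(response.content[0].text)

Passing the setting through extra_body keeps the sketch agnostic about where the final field actually lives in the request.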

Security is tighter but not perfect. Gray Swan testing shows a single strong prompt injection can bypass Opus 4.5 four percent of the time, rising to 63 percent over 100 attempts. That’s still far better than competitors like Gemini 3 Pro and GPT-5.1, which reach roughly 92 percent under the same repeated attacks. Reinforcement learning, classifiers, and human red-teaming work together to keep agents in line, though browser-based tasks still carry some risk.

Anthropic also crunched the numbers on productivity. Using 100,000 privacy-protected Claude interactions, the team estimates AI could nearly double U.S. labor productivity growth, adding 1.8 percentage points to annual gains and 1.1 points to total factor productivity growth over a decade, assuming full adoption.
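
For a sense of scale, here’s a back-of-envelope compounding calculation (our own illustration, not part of Anthropic’s analysis) of what an extra 1.8 percentage points of annual productivity growth would add up to over ten years.

    # Back-of-envelope: cumulative effect of an extra 1.8 percentage points of
    # annual productivity growth over a decade, assuming simple compounding.
    extra_growth = 0.018   # +1.8 percentage points per year (from the Anthropic estimate)
    years = 10

    cumulative_lift = (1 + extra_growth) ** years - 1
    print(f"Extra productivity level after {years} years: {cumulative_lift:.1%}")
    # -> roughly 19.5%, i.e. about a fifth more output per hour than the baseline path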

Enterprise deployment gets a boost through Arcade.dev’s URL Elicitation for the Model Context Protocol. Using OAuth 2.0, AI agents can securely access Gmail, Slack, and Stripe without touching credentials, with fine-grained permissions and a production-ready framework. Enterprises can now deploy agentic AI that actually interacts with real systems safely. Read more.
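
The core of that pattern is that the agent never touches a password or long-lived API key: it hands the user an authorization URL and later receives only a scoped token. The sketch below illustrates the idea with a plain OAuth 2.0 authorization-code URL in Python; it is a generic illustration rather than Arcade.dev’s actual SDK, and the endpoint, client ID, and scopes are placeholders.

    # Generic illustration of credential-free agent access: build an OAuth 2.0
    # authorization URL for the user to approve, so the agent only ever sees a
    # scoped token afterward. Endpoint, client ID, and scopes are placeholders.
    from urllib.parse import urlencode

    AUTH_ENDPOINT = "https://accounts.example.com/o/oauth2/auth"  # placeholder provider

    def build_authorization_url(client_id: str, redirect_uri: str, scopes: list[str], state: str) -> str:
        """Return the URL the user visits to grant the agent narrowly scoped access."""
        params = {
            "response_type": "code",    # standard authorization-code flow
            "client_id": client_id,
            "redirect_uri": redirect_uri,
            "scope": " ".join(scopes),  # fine-grained permissions, e.g. read-only mail
            "state": state,             # anti-CSRF value, verified on the callback
        }
        return f"{AUTH_ENDPOINT}?{urlencode(params)}"

    # The agent surfaces this link to the user instead of ever handling credentials.
    print(build_authorization_url(
        client_id="example-client-id",
        redirect_uri="https://agent.example.com/oauth/callback",
        scopes=["gmail.readonly"],
        state="opaque-random-state",
    ))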

OpenAI brings voice chat and global data residency to ChatGPT

OpenAI’s new Shopping Research feature, built on a fine-tuned GPT-5-Thinking mini, lets users run detailed queries, such as finding the quietest cordless vacuum or the best commuter e-bike under $1,200, and asks follow-up questions about budget, features, and intended use. It produces visual buyer guides with specs, pros and cons, and images; draws on past conversations if memory is enabled; and lets users refine results in real time with “More like this” or “Not interested.”

For enterprise clients, data residency has been expanded to 12 regions covering conversations, uploaded files, custom GPTs, and image-generation artifacts. Inference processing remains U.S.-only, but the change gives companies the regulatory flexibility to comply with GDPR and local data laws. New workspaces and API projects can now be tied to specific regions, easing global deployments.

ChatGPT’s voice mode is now fully integrated into the main chat interface, letting users speak and see responses, images, maps, and earlier messages in real time without switching screens. Standalone voice mode remains optional.

On hardware, OpenAI says it finally has a working prototype of the AI device it is developing with former Apple designer Jony Ive. Expected within two years, the pocket-sized, screenless device is designed to offer a calm, context-aware experience, filtering information and surfacing tasks at the optimal time. Ive describes it as pairing a “naive simplicity” with sophisticated engineering; capabilities remain under wraps. Meanwhile, a temporary restraining order bars OpenAI from using the name “Cameo” in its Sora video app, highlighting the legal and IP risks as the company expands into personalized content. Read more.

Google strikes back: Gemini 3, Aluminium OS, and TPUs put Nvidia and OpenAI on notice

Google, long seen as a sleeping giant in AI, is now fully awake and moving to widen its product moat across software, hardware, and cloud. The company is preparing Aluminium OS, a unified platform merging ChromeOS and Android for PCs, powered by Gemini large language models to handle demanding on-device AI tasks across CPU, GPU, and NPU. The rollout will cover entry-level laptops through premium detachables, tablets, and mini-PCs, with optional migration paths for legacy ChromeOS devices, and a planned launch in 2026 alongside Android 17.

The company is closing the gap with OpenAI in consumer AI. Gemini 3, its latest AI stack, pairs strong technical performance with refined product design, improving on Bard’s early missteps. Its Dynamic View turns text into interactive visual outputs in under a minute with scene-specific buttons, making complex topics from physics to finance more intuitive.

Nano Banana Pro is now available on all paid tiers in Flow, with 2K and 4K resolution support coming soon. NotebookLM has been upgraded to turn notes into polished slide decks using Nano Banana Pro, with custom styles, personas of up to 5,000 characters, and a streamlined create-notebook workflow; decks export to PDF today, with Google Slides and PowerPoint support coming soon. Heavy TPU usage underscores the feature’s processing intensity.

On hardware, Google is commercializing its Tensor Processing Units with the TPU@Premises program, enabling companies like Meta to deploy TPUs directly in their data centers starting in 2027, with cloud rental options in 2026. Executives estimate the program could capture up to 10 percent of Nvidia’s annual revenue, positioning Google as a serious competitor in AI infrastructure. Nvidia reacts to Google’s AI revival with a defensive 𝕏 post.

Additional initiatives include agentic calling in the U.S., letting AI contact local businesses to gather product information, and the Google AI Futures Fund supporting Indian startups with funding, compute credits, and early API access. Alphabet’s market value has surged by nearly $1 trillion since mid-October, reflecting investor confidence in a full-stack AI strategy. Read more.

5 new AI-powered tools from around the web

Latest AI research papers

arXiv is a free online library where researchers share pre-publication papers.

Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.

Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!