
Good morning. It’s Wednesday, May 13th.
There’s a lot of discussion right now around major AI companies eventually going public. OpenAI has openly explored an IPO path, Anthropic recently surpassed OpenAI’s implied valuation on some secondary markets, and rumors around SpaceX’s $2T+ valuation continue to swirl.
At the same time, a handful of publicly traded venture funds offering indirect exposure to companies like Anthropic and OpenAI have gone on massive runs, in some cases doubling within days.
Just be careful. Some of these funds are trading at 2 to 3x the implied value of the underlying private shares they hold. The companies themselves may end up being incredible long-term investments, but retail investors can get burned when hype pushes these exposure vehicles far beyond the actual value of their holdings.
Historically, if you truly believe in a company long term, buying after a direct IPO and holding patiently has often been the cleaner strategy. Not financial advice, just a trend worth paying attention to.
-Jeff
AI Breakfast
You read. We listen. Let us know what you think by replying to this email.
Your prompts are leaving out 80% of what you're thinking.
When you type a prompt, you summarize. When you speak one, you explain. Wispr Flow captures your full reasoning — constraints, edge cases, examples, tone — and turns it into clean, structured text you paste into ChatGPT, Claude, or any AI tool. The difference shows up immediately. More context in, fewer follow-ups out.
89% of messages sent with zero edits. Used by teams at OpenAI, Vercel, and Clay. Try Wispr Flow free — works on Mac, Windows, and iPhone.

Googlebook merges Android and ChromeOS to build a proactive agent computer
Google is rebuilding its software stack around Gemini Intelligence, a proactive layer that turns Android into a platform for AI agents.
The centerpiece of this strategy is the launch of Googlebook, a high-end laptop category created in partnership with Acer, ASUS, Dell, HP, and Lenovo. These devices merge Android and ChromeOS into a single interface, featuring the Magic Pointer, a cursor-based tool for contextual tasks like scheduling, and generative widgets that pull live data from Gmail and Calendar. This architecture allows Gemini to execute multi-step automation, such as building grocery carts or processing travel plans, across mobile, automotive, and wearable interfaces.
On Android, Gemini Intelligence is becoming a proactive automation layer, capable of executing multi-step tasks like building shopping carts or booking travel directly from images. The system integrates with Chrome while voice input allows users to refine messages and generate custom widgets on demand.
Google also surfaced Gemini Omni, a video model focused on in-chat editing rather than pure generation. It supports remixing, object swaps, and template-based editing, though early results suggest stronger editing than raw video quality.
Separately, Google Threat Intelligence detected the first confirmed AI-assisted zero-day exploit, which weaponized a semantic logic flaw to bypass 2FA in a mass cyberattack. To counter these threats, Google is deploying Big Sleep and CodeMender, AI-powered defensive agents designed to find and patch software flaws before they can be exploited.
The company is reportedly in talks with SpaceX for Project Suncatcher. Read more.
Watch: Introducing Googlebook | Introducing Gemini Intelligence
OpenAI clears IPO path and employees cash in
OpenAI is aggressively restructuring its business for a potential 2026 IPO, reportedly capping Microsoft’s revenue-share at $38 billion. This revised agreement preserves Microsoft as a primary partner through 2032 while allowing OpenAI to diversify infrastructure across Amazon and Google. The company’s $852 billion valuation was recently validated by a secondary share sale that minted 75 employee multimillionaires, each cashing out up to $30 million.
Simultaneously, OpenAI is moving into the enterprise security market with Daybreak. Powered by GPT-5.5 and the Codex Security agent, the platform automates threat modeling and verified patching to combat "triage fatigue." Customers already include Cisco, Cloudflare, and Oracle, putting OpenAI in more direct competition with Anthropic in defensive cybersecurity tooling. Read more.
Related: GPT-5.5 finds critical failures in elite mathematics evaluation sets
New /goal command in Claude Code enables autonomous task completion
Anthropic expanded Claude into a legal workflow layer with 20+ MCP connectors and 12 practice-area plugins, integrating into Microsoft Word, Outlook, iManage, NetDocuments, Ironclad, DocuSign, Box, and e-discovery systems. It supports drafting, redlining, clause comparison, document retrieval, and legal triage with permission controls and audit logs.
Plugins specialize Claude for corporate, litigation, privacy, IP, employment, and regulatory work, embedding firm playbooks and compliance rules. The system enables end-to-end legal workflows and is extensible via partners like Thomson Reuters and Harvey through open protocols.
Claude Code adds a fast mode for Claude Opus 4.7 in API and IDE use, reducing latency for coding, debugging, and agent workflows; a wider default rollout is planned.
New controls include /goal for completion-driven execution, plus /loop for iterative refactoring, /schedule for recurring tasks, stop hooks for CI gating, and auto mode for fewer interruptions. A new agent view centralizes parallel sessions, letting developers launch, monitor, and switch between running agents with inline responses and background execution.
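To make the new controls concrete, here is a sketch of how they might look in a Claude Code session. Only the command names come from the announcement; the argument syntax and the example tasks below are illustrative assumptions, not documented usage.

```
# Hypothetical Claude Code session (argument syntax is assumed)
/goal "make the failing unit tests pass"        # completion-driven: keeps working until the goal is met
/loop "refactor utils.py for readability"       # iterative refinement over repeated passes
/schedule daily "refresh the dependency audit"  # recurring task
# A stop hook could then gate CI, blocking completion until checks succeed.
```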
Additionally, the Claude Platform on AWS is now generally available, offering full API feature parity with native AWS security and billing. Read more.
Watch: New agents for legal professionals | Claude Cowork
Mira Murati’s Thinking Machines Lab debuts real-time interaction model
Thinking Machines Lab, the startup led by former OpenAI CTO Mira Murati, has released its first research preview centered on a new "interaction model" architecture. Moving away from traditional turn-based AI, the system processes raw audio, video, and text in 200-millisecond micro-turns. This allows the AI to listen and speak simultaneously, handling interruptions and backchannel cues with a 0.40s latency that outpaces current leaders like GPT-Realtime-2 and Gemini Live.
The technical stack features TML-Interaction-Small, a 276B Mixture-of-Experts (MoE) model. To solve the speed-versus-depth trade-off, Thinking Machines uses a dual-engine approach: a fast-path model manages the live conversation while a separate asynchronous background model handles complex reasoning and tool execution. For the enterprise, this architecture promises more reliable real-time translation and collaborative assistants that function like human coworkers rather than reactive chatbots.
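The dual-engine idea above can be sketched with plain `asyncio`: a low-latency fast path answers immediately while a heavier background task reasons in parallel. Everything here is an illustrative assumption; the function names and sleep durations stand in for model calls and do not reflect Thinking Machines' actual implementation.

```python
import asyncio

async def fast_path(utterance: str) -> str:
    # Stand-in for the low-latency conversational model (~tens of ms).
    await asyncio.sleep(0.05)
    return f"Got it, looking into: {utterance}"

async def background_reasoner(utterance: str) -> str:
    # Stand-in for the slower asynchronous model that handles deep
    # reasoning and tool execution.
    await asyncio.sleep(0.5)
    return f"Detailed answer for: {utterance}"

async def handle_turn(utterance: str) -> tuple[str, str]:
    # Start deep reasoning immediately, but reply as soon as the
    # fast path finishes, so the conversation never stalls.
    deep = asyncio.create_task(background_reasoner(utterance))
    quick = await fast_path(utterance)  # user hears this right away
    detail = await deep                 # folded in once it is ready
    return quick, detail

quick, detail = asyncio.run(handle_turn("translate this contract"))
print(quick)
print(detail)
```

The design point is the decoupling: the fast path never blocks on the reasoner, which is what lets the system keep sub-second conversational latency while still doing multi-second work in the background.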
While the system's raw reasoning still trails the largest frontier models, its focus on "real-time co-presence" is seen as a shift toward AI that can finally keep up with the pace of live business operations. Read more.
Watch: Introducing interaction models | Thinking Machines Lab

Models & Research
Products & Features
Robotics
Industry & Business
Geopolitics & Policy
Security
Infrastructure & Energy
Industry Drama
Careers & How-To

Kelviq unifies real-time metering, global tax compliance, and dynamic entitlements into one merchant of record system.
Parsebridge converts complex PDFs into structured Markdown using parallel parsing engines and an enterprise-scale document extraction API.
Jotform Claude App creates and manages forms conversationally through natural language prompts and integrated MCP data analysis.
Graphbit PRFlow provides autonomous pull request reviews by identifying critical security vulnerabilities through adaptive learning and usage-based pricing.
OpenJobs AI leverages the Mira agent to automate end-to-end sourcing, personalized outreach, and preliminary candidate screening around the clock.

Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on X!
Thinking of starting your own newsletter? AI Breakfast readers who sign up with Beehiiv receive a 14-day free trial and 20% off for 3 months.



