
Good morning. It’s Friday, April 17th.
There’s a lot of chatter about an AI bubble, but it doesn’t quite fit the classic setup. Bubbles need widespread, liquid overvaluation that can unwind fast. Right now, Nvidia is actually earning into its valuation, and most of the other major AI players are private, so there’s no easy exit valve for a sudden collapse.
What does feel real is the second-order effect: AI quietly compressing the value of traditional SaaS. The market’s already hinting at it, with public software companies sliding since AI tools started eating into their core use cases.
It may not look like the dot-com bubble. Less of a pop, more of a slow re-rating.
What would actually have to break for this to turn into a real crash?
-Jeff
AI Breakfast
You read. We listen. Let us know what you think by replying to this email.
In partnership with Adobe
Meet the Adobe Firefly AI Assistant
Today marks a shift in Adobe's AI offerings with a new agentic assistant that makes for one of the most robust (if not THE most robust) all-in-one creative AI studios. Adobe’s approach to agentic creativity puts creators in control: they provide the vision, judgment, and creative direction, while the assistant handles the orchestration and execution. Firefly AI Assistant builds on Adobe’s investment in assistive, conversational, and generative AI, extending that foundation to power a new era of agentic creativity.
Firefly AI Assistant enables creators to describe the outcome they want in their own words while the assistant orchestrates and executes complex, multi-step workflows across Adobe’s Creative Cloud apps, including Firefly, Photoshop, Premiere, Lightroom, Express, Illustrator, and more.
Adobe also significantly expanded Firefly's video and image editing capabilities, introducing new features in Firefly Video Editor including studio-quality sound, advanced color adjustments and Adobe Stock integration, as well as new precision image editing capabilities such as Precision Flow and AI Markup. Firefly’s roster of more than 30 top industry AI models now includes Kling 3.0 and Kling 3.0 Omni, joining Google’s Nano Banana 2 and Veo 3.1, Runway’s Gen-4.5, ElevenLabs’ Multilingual v2 and more, to offer creators unmatched choice and flexibility in how they create.
Thank you for supporting our sponsors!

Claude Opus 4.7 gains native design tools that could challenge Figma
Anthropic just released its latest flagship, Claude Opus 4.7, a massive step toward autonomous coding: it hits 64.3% on SWE-bench Pro, currently beating GPT-5.4 while still playing catch-up to Anthropic's own unreleased internal giant, Mythos. Vision resolution jumped to ~3.75MP, making the model much better at parsing dense diagrams. Interestingly, even as it gets smarter, Anthropic is intentionally nerfing its cyber capabilities through Project Glasswing to prevent misuse, though Mythos is already pushing DeFi vulnerability discovery to machine speeds.
CPO Mike Krieger ditched his seat on Figma’s board as Opus 4.7 starts rolling out native design tools that could eat Figma’s lunch. This "SaaSpocalypse" energy comes as Anthropic eyes an $800B valuation and plans a massive 800-person London hub.
Anthropic is breaking ranks with the industry by opposing an Illinois liability shield, signaling it's ready to take accountability as capabilities scale. Read more.
OpenAI turns Codex into a cursor-controlling background agent
The Codex desktop app is evolving into an always-on agent, now capable of background computer use: controlling your cursor and interacting with macOS and Windows apps autonomously. It adds an in-app browser, gpt-image-1.5, and 100+ plugins (GitLab, CircleCI), enabling parallel tasks, long-running automations, and persistent workflow memory, pushing the tech into full OS-level execution.
On the specialized front, the lab introduced GPT-Rosalind, a life sciences–focused reasoning model that acts as an orchestration layer across drug discovery and genomics workflows. Rather than targeting single tasks like protein folding, it integrates signals from 50+ biological databases and already exceeds expert baselines in RNA prediction. Access is limited to a U.S.-only Trusted Access program.
Underpinning all of this is a new Agents SDK with built-in sandboxing and Model Context Protocol (MCP) support, letting agents edit code and run long tasks in secure environments. To support it, OpenAI is reportedly spending $20B with Cerebras for ultra-fast inference, enough that internal teams are already chewing through a billion tokens a week.
The user base is changing too. ChatGPT usage has flipped from its early ~80/20 male skew to a slight female majority, marking a noticeable demographic shift as the product moves further into mainstream adoption. Read more.
Google brings Gemini app to Mac, adds split-screen to Chrome AI Mode
Google is making a serious move into your OS with the Gemini app for Mac. By hitting Option + Space, you get a native assistant that can actually see your screen and local files. This feeds into the new Personal Intelligence feature, which uses your own Google Photos to generate custom images of you or your pets without the prompt-engineering headache. Google also launched Gemini 3.1 Flash TTS, giving developers far more control over speech tone and pacing across dozens of languages, all with SynthID watermarking to keep things traceable.
Chrome's AI Mode just got a split-screen upgrade, so you can research across multiple tabs and open links side-by-side with your chat to maintain context. For the power users, NotebookLM now supports custom banners and descriptions, while Android developers get a new CLI and "Android Skills" repo that slashes token usage by 70% for coding agents.
Perplexity Max users get first dibs on new 24/7 persistent AI workflows
Perplexity has launched Personal Computer, a major expansion of its agentic ecosystem that brings multi-model orchestration directly to your local machine. By pressing both CMD keys on a Mac, the system can autonomously work across local files, native apps like iMessage or Email, and the web to execute, not just read, your to-do list.
The system is designed to handle messy, repetitive workflows, such as restructuring a cluttered Downloads folder or comparing local documents against real-time web data.
To ensure security, it operates within a secure sandbox where all actions are auditable and reversible. While optimized for 24/7 persistence on hardware like the Mac mini, the feature is currently rolling out exclusively to Perplexity Max subscribers. Read more.


MindMarks turns scattered AI chats into organized, searchable knowledge with easy navigation, smarter prompts, and cross-platform workflow.
Runsight helps you design, run, and manage AI agent workflows with YAML pipelines, cost tracking, testing tools, and on-premise control.
X-Pilot helps you turn documents into accurate, narrated video courses you can edit just by typing instructions.
Fathom 3.0 helps you capture meetings without bots, with AI notes, summaries, insights, and ChatGPT and Claude integrations.
Fluq helps you keep an eye on AI agents with real-time visibility, cost tracking, and controls to manage behavior and spending.

Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on X!
Thinking of starting your own newsletter? AI Breakfast readers who sign up with Beehiiv receive a 14-day free trial and 20% off for 3 months.