
Good morning. It’s Wednesday, April 29th.
I wrote a guide on how to set up OpenClaw (for non-technical people).
21 pages, ~30 minutes, no advanced skills required.
I walk you through the install, model selection (where most people overspend by 5x), Telegram setup, safe email integration, and 20+ real use cases.
(it’s free, just enter $0 at checkout. Leave a review if you found it helpful!)
-Jeff
AI Breakfast
You read. We listen. Let us know what you think by replying to this email.

Your favorite creative tools just got Claude superpowers
Anthropic is starting to meter intelligence more explicitly. Its highest-compute model, Claude Opus, now sits behind an opt-in usage layer even for Pro users, effectively separating baseline access from expensive long-context reasoning and heavy inference workloads. At the same time, Claude Code introduces /model and --model flags, letting developers dynamically route tasks across models based on latency, cost, or capability.
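The routing decision those flags enable can be sketched in a few lines of Python. This is an illustrative assumption, not Anthropic's actual tiering: the model names and thresholds below are made up, and the comment showing a `claude --model` invocation only echoes the flag named above.

```python
# Hypothetical sketch: route a task to a model tier based on its
# estimated context size and latency budget. Model names and
# thresholds are illustrative assumptions, not real Anthropic tiers.

def pick_model(context_tokens: int, latency_budget_ms: int) -> str:
    """Cheap, fast model for small latency-sensitive tasks; the
    heavy long-context model only when the job justifies the cost."""
    if latency_budget_ms < 2_000 and context_tokens < 8_000:
        return "claude-haiku"      # low latency, low cost
    if context_tokens < 100_000:
        return "claude-sonnet"     # general-purpose default
    return "claude-opus"           # long-context, heavy reasoning

# A caller could then pass the choice through the CLI flag, e.g.:
#   claude --model claude-sonnet "summarize this diff"
print(pick_model(4_000, 1_000))     # small + urgent -> claude-haiku
print(pick_model(250_000, 60_000))  # long-context job -> claude-opus
```

The point is that cost control moves into ordinary application logic: the dispatcher, not the model, decides when expensive inference is warranted.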
This resource management coincides with the launch of Claude for Creative Work. New connectors for Blender, Autodesk, Adobe, and Ableton expose APIs and internal data layers to Claude, enabling natural-language control. Blender’s connector, built on its Python API and MCP, allows scene inspection and procedural edits via generated scripts.
Adobe’s integration extends this further, connecting Claude to 50+ Creative Cloud tools to execute multi-step production tasks across apps like Photoshop and Premiere through natural language.
Claude Code gains Remote Control push notifications, allowing users to step away from the terminal while being alerted when long tasks finish or input is needed, provided sessions remain active locally.
Anthropic is also scaling globally, including a new Sydney office and integrations with Canva and Xero. Read more.
OpenAI wants to replace all apps with agents
OpenAI is reportedly launching a 2028 smartphone project with MediaTek and Qualcomm designed to kill the app era. Instead of clicking icons, users interact with agents via gpt-realtime-1.5, which uses low-latency voice to control app state more naturally. This hardware push extends to speakers and wearables, all powered by custom silicon and manufactured by Luxshare.
To fuel this, OpenAI recently gutted its Microsoft partnership, removing exclusivity and AGI clauses while allowing deployment across any cloud provider. Its new partnership with Amazon Web Services now brings GPT-5.5, Codex, and Managed Agents into Amazon Bedrock. Codex handles code generation through APIs, while Managed Agents are built for multi-step, production-grade automation.
But the tension lies in the math. Reports of missed revenue and user targets triggered a massive selloff for infrastructure anchors like Oracle and Nvidia, raising questions about whether OpenAI can afford its $300 billion compute commitments.
Still, the company’s internal engineering is moving at a rapid pace. Using an open-source tool called Symphony, they’ve turned project trackers into autonomous control planes for Codex agents. By letting AI workers handle tasks in parallel rather than waiting for human prompts, they’ve spiked their internal pull request volume by 500 percent. Read more.
Google cedes operational veto rights in classified Pentagon AI deal
The Pentagon has secured a classified deal to use Google’s AI models for any lawful purpose, effectively removing Google's ability to veto specific operational deployments. This agreement places Gemini in the defense supply chain alongside OpenAI and xAI, though it includes technical guardrails against domestic surveillance and fully autonomous target selection.
DeepMind researchers are questioning the fundamental nature of the tech. A new paper by Alexander Lerchner argues that LLMs are structurally incapable of consciousness, defining them as mapmaker-dependent simulators that lack the physical embodiment required for genuine experience.
At the product layer, Google just rolled out Ask YouTube, a conversational layer for Premium users that replaces standard video results with synthesized itineraries and text-based summaries. Read more.
Mistral launches ‘Workflows’ for production-grade AI orchestration
Mistral is moving to solve the "production gap" where fragile AI demos die when they hit the real world. They just launched Workflows, a Python-based orchestration layer that replaces messy, ad-hoc prompting with deterministic business logic. By building on the Temporal engine, the same tech Netflix uses to survive massive infrastructure failures, Mistral ensures these AI pipelines are "durable." If a server crashes mid-task, the workflow doesn’t just break; it resumes exactly where it left off.
Unlike "black box" agents that make their own rules, Workflows lets developers code rigid guardrails. A single line of code, wait_for_input(), can pause a massive logistics operation or a banking KYC check to wait for a human signature, consuming zero compute while it sits idle. With native Model Context Protocol (MCP) support, these workflows can pull data from GitHub or Stripe, while keeping the actual data processing inside the customer’s own environment to satisfy strict sovereignty laws. Read more.
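That pause-and-resume pattern can be sketched in plain Python. Only the name `wait_for_input` comes from Mistral's announcement; everything else below is a hypothetical stand-in, and a toy `threading.Event` only approximates what a Temporal-backed runtime actually provides (real durability survives process crashes, not just blocking).

```python
# Toy sketch of the pause-for-a-human pattern. Only wait_for_input()
# is named in Mistral's announcement; the rest is an illustrative
# stand-in for what a durable, Temporal-backed engine would do.
import threading

class Workflow:
    def __init__(self):
        self._signal = threading.Event()
        self._payload = None
        self.log = []

    def wait_for_input(self, prompt: str):
        """Block the workflow until a human responds. In a durable
        engine this step consumes no compute while idle."""
        self.log.append(f"waiting: {prompt}")
        self._signal.wait()
        return self._payload

    def provide_input(self, payload):
        """Called from outside (e.g. a webhook) to resume the workflow."""
        self._payload = payload
        self._signal.set()

def kyc_check(wf: Workflow, applicant: str):
    wf.log.append(f"collected documents for {applicant}")
    decision = wf.wait_for_input("human review required")
    wf.log.append(f"decision: {decision}")

wf = Workflow()
worker = threading.Thread(target=kyc_check, args=(wf, "ACME Ltd"))
worker.start()
wf.provide_input("approved")   # the "human signature" arriving later
worker.join()
print(wf.log[-1])              # -> decision: approved
```

The design choice worth noticing is that the guardrail is ordinary control flow: the workflow halts at a line of code, not inside an opaque agent loop, so auditing where a pipeline can stop is as simple as reading the function.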


Odyssey-2 Max is a causal autoregressive world model delivering physically accurate, real-time simulation via next-state prediction scaling.
SureThing is an always-on AI team that executes work across apps with shared memory and no silos.
ProdSift instantly extracts complete Shopify or WooCommerce catalogs into import-ready CSV files without any setup.
GitBar is a macOS menubar app centralizing GitHub, GitLab, and Azure pull requests with real-time status badges.
Actian VectorAI DB is a high-performance portable vector database for low-latency local AI search on edge or hybrid systems.

Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on X!
Thinking of starting your own newsletter? AI Breakfast readers who sign up with Beehiiv receive a 14-day free trial and 20% off for 3 months.

