OpenAI's Agent Builder
Good morning. It’s Monday, October 6th.
On this day in tech history: In 1983, Kunihiko Fukushima published a paper detailing how the neocognitron actually worked. It covered layer-by-layer receptive fields, unsupervised competition between the “S” and “C” cells, and how the model handled shift-invariant pattern recognition. It’s one of the clearest early ancestors of modern CNNs, with actual training and implementation details included.
In today’s email:
OpenAI’s DevDay 2025
Tesla Optimus Knows Kung Fu
Gamer builds 5M-parameter ChatGPT model inside Minecraft
5 New AI Tools
Latest AI Research Papers
You read. We listen. Let us know what you think by replying to this email.
In partnership with WorkOS

As your app grows, managing “who can do what” becomes complex. Hard-coded roles and scattered permissions slow you down and fail to meet enterprise demands for fine-grained access.
WorkOS RBAC is the fastest way to implement structured, scalable permissions. Define roles, group permissions, and update access for entire user groups in a single step. With developer-friendly APIs, a powerful dashboard, and native integrations for SSO and Directory Sync, WorkOS gives you enterprise-grade access control out of the box.
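The core idea behind RBAC — roles group permissions, and changing a role updates access for everyone holding it — can be sketched in a few lines. This is a generic illustration of the pattern, not the WorkOS API; all names here are hypothetical.

```python
# Minimal RBAC sketch: roles bundle permissions, users hold roles.
# Generic illustration only — not the WorkOS API.

ROLES = {
    "viewer": {"doc:read"},
    "editor": {"doc:read", "doc:write"},
    "admin":  {"doc:read", "doc:write", "user:manage"},
}

USER_ROLES = {"alice": {"editor"}, "bob": {"viewer"}}

def has_permission(user: str, permission: str) -> bool:
    """A user holds a permission if any of their roles grants it."""
    return any(permission in ROLES[role] for role in USER_ROLES.get(user, ()))

# Updating one role's permission set changes access for every
# user in that group in a single step.
ROLES["viewer"].add("doc:comment")
```

Centralizing the role-to-permission mapping is what makes the "update access for entire user groups in a single step" claim possible: no per-user permission lists to touch.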
Thank you for supporting our sponsors!

Today’s trending AI news stories
DevDay 2025 opens as OpenAI rolls out Agent Builder while Sora enters IP lockdown
OpenAI’s third DevDay on October 6 in San Francisco is pulling in more than 1,500 developers as Sam Altman opens with a slate of product drops centered on new model capabilities, API updates, and agent infrastructure.
New ships.
@sama keynote streaming live.
DevDay [2025] starts tomorrow.
— OpenAI (@OpenAI)
11:00 PM • Oct 5, 2025
The marquee reveal is Agent Builder, a visual orchestration tool that lets developers drag together entire AI workflows with blocks for logic, loops, approvals, data transforms, file search, MCP connectors, and ChatKit modules. It’s built to handle both quick prototyping and production deployment. The rest of the agenda covers model behavior research, speech tooling, platform integrations, and Sora showcases, including interactive demos.

Image: TestingCatalog
But Sora is also the company’s biggest liability at the moment. After users pumped out viral videos riffing on South Park and other protected IP, OpenAI is abandoning its “opt-out and deal with it later” stance and shifting to full opt-in for IP owners. Altman says studios and rights holders will get granular controls modeled after the system Sora already uses to block unauthorized biometric cameos. A potential revenue-sharing setup is on the table, though no one seems to know what that looks like yet. Enforcement is going to be messy. Style mixing, overlapping ownership claims, and the still-missing Media Manager tool mean OpenAI is plugging leaks without a real framework. Legal pressure is clearly dictating the timeline.
This Sora 2 copyright guardrail bs is just so sad. I had fun seeing familiar IPs having videos made of them. SpongeBob and Rick and Morty and all that. I don’t understand it. These tiny fan made videos benefit these companies. I haven’t thought about Rick and Morty in forever.
— Dan (@Dan60784096)
7:16 AM • Oct 4, 2025
And yet, the tech continues to advance: in a benchmark test from Epoch AI, Sora 2 answered GPQA science questions by generating short videos of a professor holding handwritten responses. It scored 55 percent, short of GPT-5’s 72 percent via text, but enough to show video generation is starting to double as reasoning output.
Sora 2 can solve questions from LLM benchmarks, despite being a video model.
We tested Sora 2 on a small subset of GPQA questions, and it scored 55%, compared to GPT-5’s score of 72%.
— Epoch AI (@EpochAIResearch)
6:00 PM • Oct 3, 2025
On hardware, the Jony Ive collaboration is hitting delays that may push it past the 2026 target. Sources say the team is still stuck on core technical problems: how the assistant should sound and behave, how to protect privacy on a device that’s always listening, and how to afford the compute needed for low-latency inference in a tiny, screenless form factor. The split between on-device and cloud processing is still being debated, and the hardware needed to make it all work at scale may blow past budget constraints.
OpenAI is also continuing its quiet talent land grab. It just acqui-hired Roi, a personalized finance assistant that lets users define both tone and behavior while tracking assets across crypto, stocks, DeFi, and real estate. The move aligns with OpenAI’s growing focus on consumer-facing, adaptive products. Read more.
Tesla Optimus hits real-time kung fu milestone with AI-driven motion
Tesla’s humanoid robot, Optimus, just got a serious upgrade. A new 36-second clip shows it sparring in kung fu with a human partner—real-time moves, not sped-up footage. Optimus blocks, sidesteps, and even lands a sidekick, showing off improved balance, weight-shifting, and recovery. Footwork is smoother, but hands remain mostly idle, hinting the 22-DOF hands are still in the lab.
Elon Musk confirms the demo is AI-driven, not remote-controlled. Optimus v2.5 is processing inputs and generating responses on the fly, a big step toward robots that can actually interact with humans and handle unpredictable environments. Kung fu isn’t the end goal—it’s a stress test for speed, stability, and adaptability, skills critical for lifting, carrying, and walking over uneven terrain. Tesla plans 5,000 units in 2025, scaling parts production toward 50,000 by 2026. Read more.
Gamer builds 5M-parameter ChatGPT model inside Minecraft using 439M blocks
Sammyuri has built CraftGPT, a 5-million-parameter language model running entirely inside Minecraft using 439 million Redstone blocks. The model has six layers, a 240-dimensional embedding space, a 1920-token vocabulary, and a 64-token context window, with most weights at 8-bit and key embeddings and LayerNorm weights at 18–24 bits.
Trained on TinyChat, it runs on an in-game 1020×260×1656 Redstone computer, processing prompts in hours even at a 40,000× tick speed. Outputs are rough, often off-topic or ungrammatical, but the project is a masterclass in virtual computation, showing how AI logic, memory, and tokenization can be mapped to pure game mechanics. It’s less about usability and more about proving the boundaries of computation in an abstract, fully sandboxed environment. Read more.
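The reported specs hang together: a back-of-the-envelope count under a standard GPT-style layer layout (four attention projections plus a 4× MLP, which is an assumption — only the headline numbers come from the article) lands close to the 5M figure.

```python
# Rough parameter count from CraftGPT's reported specs.
# Layer internals (4 attention projections, 4x MLP) are assumed;
# the article gives only d_model, layer count, and vocab size.

d_model, n_layers, vocab = 240, 6, 1920

embed = vocab * d_model                       # token embedding table
attn_per_layer = 4 * d_model * d_model        # Q, K, V, and output projections
mlp_per_layer = 2 * d_model * (4 * d_model)   # up- and down-projection
total = embed + n_layers * (attn_per_layer + mlp_per_layer)

print(f"{total:,}")  # 4,608,000 — consistent with the ~5M claim
```

Small positional and LayerNorm terms would nudge this upward, which is presumably where the remaining parameters live.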

New antibiotic targets IBD — and AI predicted how it would work before scientists could prove it
New Factory.ai guide details how to build multi-agent software dev teams
EU to unveil new AI strategy to reduce dependence on US and China
Jeff Bezos says AI is in an industrial bubble but society will get 'gigantic' benefits from the tech
This startup wants to put its brain-computer interface in the Apple Vision Pro
Inside the $40,000 a year school where AI shapes every lesson, without teachers
Anthropic paper pushes security teams to experiment with AI and measure impact
New neural network design cuts simulation time from days to minutes
The world pushes ahead on AI safety — with or without the U.S.
Meta's Yann LeCun reportedly clashed with the company over new publication rules
Alibaba's Qwen group has released two new small-scale multimodal models
2-metric-ton advanced nuclear fuel boost planned under US-French collaboration
Chinese tech company develops creepy ultra-lifelike robot face — watch it blink, twitch and nod
Here's JPMorgan Chase's blueprint to become the world’s first fully AI-powered megabank
Precision Strike Missile's tests offer long-range target neutralization

5 new AI-powered tools from around the web

arXiv is a free online library where researchers share pre-publication papers.


Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!