Runway Just Solved AI Video's Biggest Problem
Good morning. It’s Wednesday, April 2nd.
On this day in tech history: 1987: IBM introduced the Personal System/2 (PS/2), a line of personal computers that set new standards for PCs. The PS/2 brought innovations like the PS/2 port for keyboards and mice, which became widely adopted in the industry and influenced PC design for years to come.
In today’s email:
Runway Gen-4 Video
OpenAI’s Image Generator Goes Free
Luma Lab’s Camera Control For AI Video
4 New AI Tools
Latest AI Research Papers
You read. We listen. Let us know what you think by replying to this email.

Today’s trending AI news stories
Runway Gen-4 solves AI video's biggest problem: character consistency across scenes
Runway AI has launched its Gen-4 video generation model, a significant breakthrough in maintaining character and scene consistency across multiple shots. Unlike previous AI video systems, Gen-4 creates continuous visual narratives by preserving objects, characters, and styles across different angles and lighting conditions. The model tackles a major flaw in AI video production by ensuring that characters and settings remain consistent in every frame, a crucial factor for believable storytelling.
Available to paid subscribers and enterprise clients, Gen-4 uses reference images combined with text prompts to generate high-quality, realistic motion in video clips. Runway has demonstrated this model through short films, such as “New York is a Zoo,” showcasing consistent visual effects. As the company pushes forward with a complete digital production pipeline, the release of Gen-4 marks a pivotal moment in AI filmmaking. Read more.
OpenAI's Image Generator Goes Free, $300B Valuation Secured, and Open-Weight Model on the Horizon
OpenAI has made ChatGPT’s latest image generation tool available for free, following an initial delay. CEO Sam Altman confirmed the tool’s availability to free-tier users, albeit with unspecified limits. He also warned that capacity constraints will delay upcoming product releases, citing overwhelming demand. The free-tier rollout of the GPT-4o-powered image model, known for its Studio Ghibli-style recreations, was postponed after ChatGPT gained a million new users in a single hour.
y'all are not ready for images v2...
— Sam Altman (@sama)
11:04 PM • Apr 1, 2025
ChatGPT’s subscriber base now exceeds 20 million paying users—a surge of 4.5 million since last year. That translates to an estimated $415 million in monthly revenue, putting OpenAI on track for an enviable $12.7 billion in annual revenue. The company has also just wrapped up a $40 billion funding round, the largest private tech deal on record, led by SoftBank with Microsoft and others in tow. But SoftBank’s hefty $30 billion stake is contingent on OpenAI hitting its for-profit conversion target by year’s end.
OpenAI’s next move is to release an open-weight AI model—its first since GPT-2. It’s a strategic counterpunch to Meta’s Llama, whose license imposes deployment limits. OpenAI is currently gathering feedback through global developer sessions to refine the open model. The company has also launched OpenAI Academy, a free platform for learning AI skills.
we will not do anything silly like saying that you cant use our open model if your service has more than 700 million monthly active users.
we want everyone to use it!
— Sam Altman (@sama)
11:06 PM • Mar 31, 2025
However, OpenAI is no stranger to controversy. A recent report claims its GPT-4o model was trained on O'Reilly’s paywalled books without consent, a move that’s drawing further scrutiny and could ignite new legal fires. With all eyes on its next steps, the company continues to walk an increasingly precarious legal and ethical tightrope. Read more.
Luma Labs released Camera Motion Concepts for its Ray2 video model with 20+ precision-tuned camera motions
Luma Labs has introduced Camera Motion Concepts for its Ray2 video model. The feature lets users direct precise, cinematic camera movements via natural language commands, drawing on more than 20 precision-tuned motions. Now available on Dream Machine, it is built on "Concepts," a novel method for teaching generative models new controls from minimal examples.
Introducing #Ray2 Camera Motion Concepts in #DreamMachine — 20+ precision-tuned camera motions designed for smooth cinematic control and great reliability. Concepts compose with each other making hundreds of impossible new camera moves possible. Available now.
— Luma AI (@LumaLabsAI)
4:59 PM • Mar 31, 2025
Unlike LoRA or finetuning, Concepts compose multiple motions without degrading Ray2's core quality. This enables unique, combinatorial camera moves, including ones that would be physically impossible with a real camera, while preserving Ray2’s high-fidelity output and stylistic versatility. Read more.

Manus AI rolls out paid plans, mobile plans, and a new AI backend
Google Slides Adds Imagen 3 for Image Generation and New Features
Microsoft Copilot Experiments Reveal Deep Research, Avatars, and Podcast Creation
Nova Act is Amazon's foray into agentic AI that navigates your browser
Watch: Unitree debuts Dex5 dexterous hand with 20 degrees of freedom and advanced touch sensitivity
Watch: PUDU Robotics unveils FlashBot Arm with 7-DoF arm and 40kg lifting capacity
D-Wave and Japan Tobacco use quantum to build a better AI model for drug discovery
Watch: BMW and Figure advance real-world production with Helix AI and end-to-end autonomy
Alphabet spinout Isomorphic Labs raises $600M for its AI drug design engine
Google Slides now uses Imagen 3 and adds other new visual tools
Apple Intelligence comes to Apple Vision Pro today with visionOS 2.4
Krea AI Launches New 3D Tool, Website Redesign and Discounts
AI is helping scientists decode previously inscrutable proteins
DeepSeek is even more efficient than Nvidia, says analyst, and the industry could copy them
Microsoft expands AI features across Intel and AMD-powered Copilot Plus PCs

4 new AI-powered tools from around the web

arXiv is a free online library where researchers share pre-publication papers.
📄 Exploring the Effect of Reinforcement Learning on Video Understanding: Insights from SEED-Bench-R1


Thank you for reading today’s edition.
Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!