
Will OpenAI Have Its Biggest Release Ever Today?

Good morning. It’s Friday, December 20th.

Did you know: Today is the final day of OpenAI’s "12 Days of OpenAI" (Shipmas) event. Watch the livestream at 10:00 a.m. Pacific.

In today’s email:

  • 4D Generative AI

  • OpenAI’s Shipmas Days 10 & 11

  • Google’s Reasoning Model

  • 3 New AI Tools

  • Latest AI Research Papers

You read. We listen. Let us know what you think by replying to this email.

In partnership with

Writer RAG tool: build production-ready RAG apps in minutes

  • Writer RAG Tool: build production-ready RAG apps in minutes with simple API calls.

  • Knowledge Graph integration for intelligent data retrieval and AI-powered interactions.

  • Streamlined full-stack platform eliminates complex setups for scalable, accurate AI workflows.

Today’s trending AI news stories

Meet Genesis, an Open-Source Universal Physics Engine That Builds 4D Worlds

Genesis, an open-source physics engine, redefines simulation speed and versatility for robotics and embodied AI. Developed through a 24-month collaboration spanning more than 20 research labs, it delivers simulations up to 80 times faster than traditional GPU-accelerated platforms such as Isaac Gym and MuJoCo MJX, all while maintaining precision.

  • 430,000x faster than real-time physics, processes 43M FPS on an RTX 4090

  • Unified framework with multiple physics solvers: Rigid body, MPM, SPH, FEM, PBD, Stable Fluid

  • Easy installation via pip: pip install genesis-world (see the quick-start sketch below)

  • Supports robots: arms, legs, drones, soft robots; compatible with MJCF, URDF, obj, glb

  • Photorealistic ray-tracing rendering

  • Built in Python, 10-80x faster than Isaac Gym GPU solutions

  • Cross-platform: Linux, macOS, and Windows, with CPU, NVIDIA, AMD, and Apple Metal support

  • Upcoming ".generate" method/generative framework

  • 26 seconds to train transferable robot locomotion policies

  • Fully open-sourced physics engine and simulation platform

    Read more.
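
If you want to try it, here is a minimal quick-start sketch based on the project's published hello-world example. The module layout, morph names, and bundled asset path shown here follow the current README and may change between releases, so treat it as illustrative rather than definitive.

```python
# Minimal Genesis quick-start (assumes `pip install genesis-world` and a working
# PyTorch install). Names follow the project's published example and may change.
import genesis as gs

gs.init(backend=gs.cpu)  # or gs.gpu / gs.metal where available

scene = gs.Scene(show_viewer=False)
scene.add_entity(gs.morphs.Plane())  # ground plane
franka = scene.add_entity(
    gs.morphs.MJCF(file="xml/franka_emika_panda/panda.xml")  # bundled MJCF asset
)

scene.build()            # compile the simulation
for _ in range(1000):
    scene.step()         # advance the physics by one timestep
```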

Sam Altman: 10m context window in months, infinite context within several years 

ChatGPT Expands to Phones and macOS in OpenAI’s Penultimate Shipmas Rollout

In the final stretch of OpenAI’s Shipmas rollout, ChatGPT broadens its accessibility and functionality. Day 10 introduces a toll-free ChatGPT phone line in the U.S., offering 15 minutes of free calling per month from mobile phones and landlines. Additionally, a global WhatsApp integration now enables text-based interactions with ChatGPT, with features like image recognition in the pipeline.

For Day 11, OpenAI enhances its macOS desktop app, enabling ChatGPT to read content from applications such as Apple Notes, Xcode, and Git repositories. However, due to the lack of write-back functionality, users must manually transfer responses. These updates highlight OpenAI's continued effort to serve a broad spectrum of users—from those without smartphones to developers—further solidifying its rapid expansion during the Shipmas rollout.

Google releases its own 'reasoning' AI model

Google has introduced its experimental "reasoning" AI model, Gemini 2.0 Flash Thinking Experimental, designed to tackle complex tasks in fields like programming, math, and physics. Hosted on AI Studio, it promises advanced multimodal reasoning, pausing to consider interrelated prompts before generating an answer. While it excels in abstract problem-solving, it falters on simple tasks—like counting letters in a word.

Built on the Gemini 2.0 Flash framework, it shares the self-correcting capabilities of other reasoning models, helping it avoid common pitfalls. However, its heavy computational load raises questions about scalability and long-term performance. Though early results show promise, the real challenge will be maintaining this momentum without exhausting resources. Read more.
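
For developers who want to experiment, below is a minimal sketch using Google's google-generativeai Python SDK. The exact experimental model identifier is an assumption (written here as "gemini-2.0-flash-thinking-exp"), and access requires an AI Studio API key.

```python
# Hedged sketch: calling the experimental reasoning model via the
# google-generativeai SDK (pip install google-generativeai).
# The model ID "gemini-2.0-flash-thinking-exp" is an assumption and may differ.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")
response = model.generate_content(
    "A train leaves at 3:40 pm and arrives at 5:05 pm. How long is the trip?"
)
print(response.text)  # the model works through the problem before answering
```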

5 new AI-powered tools from around the web

Featured Tool:

Writer RAG tool: build production-ready RAG apps in minutes

RAG in just a few lines of code? We’ve launched a predefined RAG tool on our developer platform, making it easy to bring your data into a Knowledge Graph and interact with it using AI. With a single API call, Writer LLMs will intelligently invoke the RAG tool to chat with your data.

Integrated into Writer’s full-stack platform, it eliminates the need for complex vendor RAG setups, making it quick to build scalable, highly accurate AI workflows just by passing a graph ID of your data as a parameter to your RAG tool.
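
As an illustration only, here is a rough sketch of what that single-call, graph-ID-as-parameter pattern could look like with Writer's Python SDK. The package name, model ID, and the shape of the graph tool parameter are assumptions based on vendor documentation and should be checked against Writer's current API reference.

```python
# Hedged sketch of the "pass a graph ID to the RAG tool" pattern described above.
# Package name (writer-sdk / writerai), model ID, and tool schema are assumptions;
# consult Writer's API docs for the authoritative call signature.
import os
from writerai import Writer  # pip install writer-sdk

client = Writer(api_key=os.environ["WRITER_API_KEY"])

response = client.chat.chat(
    model="palmyra-x-004",
    messages=[{"role": "user", "content": "Summarize our Q3 onboarding feedback."}],
    tools=[{
        "type": "graph",
        "function": {"graph_ids": ["<your-knowledge-graph-id>"]},  # your data's graph ID
    }],
    tool_choice="auto",
)
print(response.choices[0].message.content)
```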

arXiv is a free online library where researchers share pre-publication papers.

Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.

Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on X!