
DeepMind AI is officially running Boston Dynamics’ next-gen robots


Good morning. It’s Wednesday, January 7th.

On this day in tech history: In 2011, Aaron Swartz was arrested for bulk-downloading JSTOR articles, fueling the open access debate. His Guerilla Open Access Manifesto argued that knowledge should be freely available, an idea that echoes in AI's reliance on large unlabeled corpora for unsupervised learning, and his case amplified the data-ethics questions now central to generative AI.

In today’s email:

  • DeepMind AI is officially running Boston Dynamics’ next-gen robots

  • The AI stack is going physical—and Nvidia is owning it

  • 40 Million Use ChatGPT for Health Advice, While Only 5% Pay

  • 5 New AI Tools

  • Latest AI Research Papers

You read. We listen. Let us know what you think by replying to this email.

Become An AI Expert In Just 5 Minutes

If you’re a decision maker at your company, you need to be on the bleeding edge of, well, everything. But before you go signing up for seminars, conferences, lunch ‘n learns, and all that jazz, just know there’s a far better (and simpler) way: Subscribing to The Deep View.

This daily newsletter condenses everything you need to know about the latest and greatest AI developments into a 5-minute read. Squeeze it into your morning coffee break and before you know it, you’ll be an expert too.

Subscribe right here. It’s totally free, wildly informative, and trusted by 600,000+ readers at Google, Meta, Microsoft, and beyond.

Today’s trending AI news stories

DeepMind AI is officially running Boston Dynamics’ next-gen robots

At CES 2026, Google DeepMind put Gemini AI at the core of next-gen robots. Boston Dynamics’ Atlas and Spot gain real-time reasoning, object handling, and context-aware decisions. Atlas moves beyond scripted routines, sorting auto parts at Hyundai factories with predictive safety and continuous learning. Gemini Robotics 1.5 handles direct vision-language-action control, while Robotics-ER 1.5 manages planning, tool use, and task evaluation. An on-device variant runs fully offline, making Gemini a general-purpose AI layer across robot platforms.
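For readers curious how a split like this is typically wired together, here is a minimal, purely illustrative Python sketch of a planner-plus-policy loop: a reasoning model decomposes a task and checks progress, while a vision-language-action model turns each step into motor commands. The class names, methods, and robot actions below are hypothetical assumptions for illustration only, not the Gemini Robotics or Boston Dynamics APIs.

```python
# Hypothetical sketch of a two-tier robotics stack: a "reasoner" plans,
# a vision-language-action (VLA) policy acts. No real APIs are used here.
from dataclasses import dataclass


@dataclass
class Observation:
    camera_frame: bytes   # stand-in for the robot's camera input
    gripper_state: str    # e.g. "open" or "closed"


class PlannerModel:
    """Stand-in for a reasoning model handling planning and task evaluation."""

    def plan(self, instruction: str) -> list[str]:
        # A real model would decompose the instruction using vision + language.
        return [f"locate target for: {instruction}",
                f"grasp target for: {instruction}",
                f"place target for: {instruction}"]

    def check(self, step: str, obs: Observation) -> bool:
        # A real model would judge success from the latest observation.
        return True


class VLAPolicy:
    """Stand-in for a VLA model that maps a step + observation to a command."""

    def act(self, step: str, obs: Observation) -> str:
        return f"motor_command<{step}>"


def run_task(instruction: str, planner: PlannerModel, policy: VLAPolicy) -> None:
    obs = Observation(camera_frame=b"", gripper_state="open")
    for step in planner.plan(instruction):      # high-level plan
        command = policy.act(step, obs)         # low-level control
        print(f"{step} -> {command}")
        if not planner.check(step, obs):        # task evaluation / retry hook
            print(f"retrying: {step}")


run_task("sort the auto part into bin A", PlannerModel(), VLAPolicy())
```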

Atlas hit the stage with 56 degrees of freedom, tactile hands, and 360-degree cameras. It lifts 110 pounds, handles repetitive tasks, and learns new actions from a few examples. Training mixes real-world and simulated motion data via Hyundai’s Robot Metaplant Application Center, preparing Atlas for industrial deployment in 2026.

Google is also supercharging Gemini on Google TV. Users can generate AI videos, images, and slideshows from text or Google Photos, control settings with voice commands, and explore interactive narrated “deep dives.” Launch starts on select TCL TVs, rolling out to more devices soon.

Google’s product lead, Logan Kilpatrick, also teased a return to shipping, hinting that new devices will arrive soon. Read more.

The AI stack is going physical—and Nvidia is owning it

Nvidia used CES 2026 to make its position clear: The next phase of AI is physical. Hyperscalers are lining up as Nvidia launches its open physical AI platform.

At the core is Vera Rubin, Nvidia’s next-generation AI supercomputer. The six-part system combines the Vera CPU, Rubin GPU, NVLink, BlueField-4 DPU, ConnectX-9 networking, and Spectrum-X CPO. The Rubin GPU delivers up to 50 petaflops of inference and roughly 5× the training performance of Blackwell, while cutting GPU requirements and token costs by an order of magnitude for large MoE models.

Hyperscalers including AWS, Microsoft, Google Cloud, and OpenAI have already committed, with deployments expected in the second half of 2026.

On top of that compute layer, Nvidia launched Alpamayo, an open-source physical AI stack for autonomous vehicles. Alpamayo 1 is a 10B-parameter vision-language-action model that reasons step by step, explains its decisions, and handles rare driving scenarios without prior examples. The release includes 1,700+ hours of driving data and AlpaSim, an open simulation framework. Automakers like Mercedes-Benz, Lucid, and Jaguar Land Rover, plus Uber, are already testing it.

Nvidia also expanded Cosmos, Isaac GR00T, and Nemotron to deliver generalist perception, planning, multimodal reasoning, and safety monitoring.

Early proof came from Runway, which ported its Gen-4.5 video model and GWM-1 world model to Nvidia’s Rubin platform in a single day, enabling real-time, long-context video generation and physics-aware world modeling. Read more.

40 Million Use ChatGPT for Health Advice, While Only 5% Pay

ChatGPT is quietly becoming a healthcare backstop. More than five percent of all ChatGPT messages now involve medical questions. Over 40 million people use it daily to check symptoms, decode medical jargon, and sort out insurance bills. Nearly two million insurance-related questions hit the system every week, spiking after federal health subsidies expired. Most of this use happens at night or in areas far from hospitals, exposing real gaps in the healthcare system.

But scale doesn’t equal sustainability. Only five percent of ChatGPT’s 900 million weekly users pay. Nearly 90 percent live outside North America, where ad revenue is minimal. OpenAI is betting on ads, cheap plans like the $5 ChatGPT Go tier, and healthcare use to generate $110 billion in revenue from free users by 2030, money needed to fund massive data center expansion.

At the same time, OpenAI is losing core builders. Jerry Tworek, OpenAI’s VP of Research and a key architect of GPT-4, ChatGPT, and the o1 and o3 reasoning models, is leaving after nearly seven years. Tworek said he plans to pursue research that is “hard to do at OpenAI.” Read more.

5 new AI-powered tools from around the web

arXiv is a free online library where researchers share pre-publication papers.

Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.

Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!