NVIDIA just paid $20B in cash for this company
Good morning. It’s Friday, December 26th.
On this day in tech history: In 1995, self-organizing maps (SOMs) gained traction in anomaly-detection and clustering research. Introduced by Teuvo Kohonen, SOMs offered a topology-preserving way to visualize high-dimensional data on 2D grids, influencing early work in unsupervised feature learning and pattern recognition and later inspiring techniques for visualizing deep embeddings.
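For the curious, here is a minimal SOM training loop in NumPy. It is an illustrative sketch only, not Kohonen's original implementation: the grid size, decay schedules, and names like som_train are our own assumptions.

```python
# A minimal self-organizing map (SOM) sketch in NumPy, for illustration only.
import numpy as np

def som_train(data, grid_h=10, grid_w=10, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Fit a 2D SOM grid to high-dimensional data, preserving topology."""
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    # One weight vector per grid node, initialized randomly.
    weights = rng.random((grid_h, grid_w, n_features))
    # Grid coordinates, used to compute neighborhood distances on the 2D map.
    coords = np.stack(
        np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1
    )

    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Decay learning rate and neighborhood radius over time.
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 1e-3
            # Best-matching unit: the grid node whose weights are closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood around the BMU on the 2D grid.
            grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            influence = np.exp(-grid_dist2 / (2 * sigma**2))
            # Pull the BMU and its neighbors toward the input.
            weights += lr * influence[..., None] * (x - weights)
            step += 1
    return weights

# Usage: map 50-dimensional points onto a 10x10 grid.
data = np.random.default_rng(1).random((500, 50))
grid = som_train(data)
print(grid.shape)  # (10, 10, 50)
```

Each input is matched to its best-matching unit, and that unit plus its grid neighbors are nudged toward the input; the shrinking Gaussian neighborhood is what makes nearby grid cells end up representing similar regions of the data, i.e. the topology preservation described above.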
In today’s email:
Nvidia spends $20B to lock down inference future, neutralize Google GPU threat
OpenAI tests ads inside ChatGPT replies
Nano Banana Flash incoming, plus your Gmail freedom
5 New AI Tools
Latest AI Research Papers
You read. We listen. Let us know what you think by replying to this email.
How could AI help your business run smoother?

You don’t need AI everywhere; you need it where it drives decisions. Data Rush helps businesses turn messy data, reports, and disconnected systems into clear, AI-powered analytics dashboards and automated workflows. If you have data you can’t easily analyze, or questions you can’t quickly answer, we’ll help you identify what’s possible and build custom systems that surface insights fast.

Today’s trending AI news stories
Nvidia spends $20B to lock down inference future, neutralize Google GPU threat
In its largest deal ever, Nvidia is paying roughly $20 billion in cash for a non-exclusive license to Groq's breakthrough inference technology, effectively bringing the startup's core IP, designs, and top talent in-house. Officially framed as a licensing agreement (to sidestep regulatory heat), this move pulls Groq founder Jonathan Ross, the original architect of Google's TPU, along with president Sunny Madra and key engineers straight into Nvidia's ranks.
Groq's LPUs (Language Processing Units) aren't built for training massive models; they're designed for ultra-low-latency inference. Groq has claimed speeds up to 10x faster than traditional GPUs at a fraction of the energy use, making it a direct threat in the exploding inference market, where Google's TPUs, Apple's silicon, and in-house efforts from Anthropic, OpenAI, and Meta are all gunning for dominance.
Nvidia is neutralizing a rising challenger founded by ex-Google TPU creators, integrating their deterministic, high-efficiency tech into its "AI factory" architecture. Read more.
OpenAI tests ads inside ChatGPT replies
According to The Information, OpenAI is experimenting with advertising formats that would embed sponsored content directly into ChatGPT responses. Options under discussion include AI-generated answers that surface paid recommendations, contextual ads placed beside chat outputs, and sponsored links triggered only after users ask for deeper detail.

Some concepts would use ChatGPT’s memory feature to personalize ads based on prior conversations, a move that offers technical leverage but carries obvious trust risks. OpenAI says it is exploring monetization without compromising user confidence, even as CEO Sam Altman has previously warned that ad-shaped AI responses, especially those informed by private chat history, could cross into dystopian territory.
At the same time, Altman is projecting a long-term upside to AI-driven disruption. He predicts that within 10 years, college graduates will land exciting, ultra-high-paying jobs exploring the solar system, making today's careers look dull by comparison, and says he envies young people entering the workforce amid AI-driven abundance and new frontiers like space missions. Read more.
Nano Banana Flash incoming, plus your Gmail freedom
Google is on the verge of releasing Nano Banana 2 Flash, the efficient sibling to Nano Banana Pro (internally dubbed Ketchup). Codenamed Mayo, this Flash variant delivers near-Pro image quality at lower costs and blazing speed, ideal for high-volume creators and builders scaling ideas without premium overhead. Early leaks and code dives show outputs rivaling the top tier, seamlessly plugging into Gemini for massive inference runs.
Five years after AlphaFold 2 solved protein folding and revolutionized biology, DeepMind VP Pushmeet Kohli discussed recent advances. AlphaFold 3 now handles DNA, RNA, and small molecules using diffusion models, with strict checks to avoid hallucinations. The key development is the "AI co-scientist," a Gemini-powered multi-agent system that generates hypotheses, debates options, and cuts research time from months to hours. Kohli's goal is to simulate entire human cells for faster biological discoveries.
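For readers wondering what a "multi-agent system that generates hypotheses and debates options" looks like in practice, here is a rough, hypothetical sketch of the general generate-critique-select pattern. DeepMind has not released the co-scientist's code; the llm() stub and the three-role structure below are our own illustrative assumptions.

```python
# Hypothetical sketch of a generate-critique-select agent loop, the general
# pattern behind "co-scientist"-style systems. Not DeepMind's code; llm() is
# a stand-in for a real language-model API call.
def llm(prompt: str) -> str:
    # Placeholder: a real system would call a hosted LLM API here.
    return f"[model output for: {prompt[:60]}...]"

def co_scientist_round(research_question: str, n_candidates: int = 3) -> str:
    # 1. A "generator" agent proposes several candidate hypotheses.
    hypotheses = [
        llm(f"Propose a testable hypothesis for: {research_question}")
        for _ in range(n_candidates)
    ]
    # 2. A "critic" agent reviews each hypothesis for plausibility and novelty.
    critiques = [
        llm(f"Critique this hypothesis and rate it 1-10:\n{h}") for h in hypotheses
    ]
    # 3. A "judge" agent selects the strongest candidate given the critiques.
    bundle = "\n\n".join(
        f"Hypothesis: {h}\nCritique: {c}" for h, c in zip(hypotheses, critiques)
    )
    return llm(f"Pick the single best hypothesis from:\n{bundle}")

print(co_scientist_round("Why do some tumors resist immunotherapy?"))
```

The point of the pattern is that no single model call has to be right: candidates are generated cheaply, argued over, and filtered, which is where the claimed months-to-hours speedup comes from.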
And on the personal front, freedom at last. Google is rolling out the ability to change your @gmail.com address without losing a thing: emails, Drive files, and subscriptions all stay intact, and the old address remains as an alias (mail still flows in, sign-ins still work). The rollout starts in India (spotted in Hindi support docs) and comes with limits: one change per year, three per lifetime. No more being chained to that teenage username. This brings personal accounts closer to Workspace-level flexibility. Read more.

AI cracks Zelda's color-switching puzzle with six-move planning
Alphabet-backed Motive Technologies plans an IPO as AI-driven software hits Wall Street
Turning off training isn’t privacy: your thumbs-up still feeds AI learning pipelines
China shows how a single voice command can hijack humanoid robots
Deep learning just made quantum chemistry exponentially faster
Dwarkesh Patel: AGI isn’t coming next year, and that’s why the hype is missing the real bottleneck
CES 2026 is set to showcase AI wearables, trifold phones, and robots that might finally feel useful
Vanguard, $12T asset manager, bets on AI voicebots for personalized advice to 50M clients
CFOs predict 2026 will be the year AI stops automating and starts transforming finance
Waymo taps Google Gemini to bring a human-like voice to its autonomous vehicle experience
The era of chasing one AI is over: 2025 rewards smart stacks and task-specific models
Humanoids failed to meet expectations, but the work that mattered kept going
People are getting their news from AI, and it's altering their views
2025 was the year AI surprised everyone, even the experts who predicted it
The AI gold rush is getting funded with trillions in corporate bonds
World's first 800V immersion-cooled backup battery targets megawatt AI racks at CES 2026
Australia’s financial watchdog pushes back on AI-generated bank reports
LG’s new home assistant robot can do chores and learn user preferences at CES 2026
Salesforce executives reveal declining trust in LLMs, turning to rule-based systems for Agentforce
AI maps 360,000 hidden DNA knots that control gene switches and fuel cancer growth
Norway's AV1 robot lets sick students attend class and stay socially connected
The silicon ‘iron curtain’ descends as US-China AI rivalry splits the world in 2025
Liquid AI drops 3B open model that beats DeepSeek R1 on key benchmarks
New LangVAE technique makes explainable AI 10x cheaper without touching the model
How AI coding agents work—and what to remember if you use them
Stanford and Harvard paper reveals why agentic AI dazzles in demos but crumbles in real-world tasks

5 new AI-powered tools from around the web

Latest AI Research Papers

arXiv is a free online library where researchers share pre-publication papers.

Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!
