NVIDIA just paid $20B in cash for this company

Good morning. It’s Friday, December 26th.

On this day in tech history: In 1995, self-organizing maps (SOMs) were gaining traction in anomaly detection and clustering research. Introduced by Teuvo Kohonen, SOMs offered a topology-preserving way to project high-dimensional data onto 2D grids, shaping early work in unsupervised feature learning and pattern recognition and later inspiring visualization techniques for deep embeddings.
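For the curious, here is a minimal numpy sketch of the SOM update rule on toy data. The grid size, learning-rate schedule, and neighborhood decay are illustrative choices, not Kohonen's original settings.

```python
# Minimal self-organizing map sketch on toy 3D data (e.g. RGB colors).
# Grid size and decay schedules are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((500, 3))                    # 500 samples, 3 features

grid_h, grid_w, dim = 10, 10, 3
weights = rng.random((grid_h, grid_w, dim))    # one prototype vector per grid cell
coords = np.stack(
    np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1
)

epochs, lr0, sigma0 = 20, 0.5, 3.0
for epoch in range(epochs):
    lr = lr0 * (1 - epoch / epochs)                  # decay learning rate
    sigma = sigma0 * (1 - epoch / epochs) + 0.5      # shrink neighborhood radius
    for x in data:
        # Best-matching unit: the grid cell whose prototype is closest to x
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(dists.argmin(), dists.shape)
        # Neighborhood function: nearby cells are pulled toward x as well,
        # which is what preserves topology in the 2D grid
        grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
        influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
        weights += lr * influence[..., None] * (x - weights)
```

After training, plotting `weights` as a 10x10 image shows similar inputs landing on neighboring cells, which is the visualization trick the blurb above refers to.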

In today’s email:

  • Nvidia spends $20B to lock down inference future, neutralize Google GPU threat

  • OpenAI tests ads inside ChatGPT replies

  • Nano Banana Flash incoming, plus your Gmail freedom

  • 5 New AI Tools

  • Latest AI Research Papers

You read. We listen. Let us know what you think by replying to this email.

How could AI help your business run smoother?

You don’t need AI everywhere; you need it where it drives decisions. Data Rush helps businesses turn messy data, reports, and disconnected systems into clear, AI-powered analytics dashboards and automated workflows. If you have data you can’t easily analyze, or questions you can’t quickly answer, we’ll help you identify what’s possible and build custom systems that surface insights fast.

Today’s trending AI news stories

Nvidia spends $20B to lock down inference future, neutralize Google GPU threat

In its largest deal ever, Nvidia is paying roughly $20 billion in cash for a non-exclusive license to Groq's breakthrough inference technology, effectively bringing the startup's core IP, designs, and top talent in-house. Officially framed as a licensing agreement (to sidestep regulatory heat), this move pulls Groq founder Jonathan Ross, the original architect of Google's TPU, along with president Sunny Madra and key engineers straight into Nvidia's ranks.

Groq's LPUs (Language Processing Units) aren't about training massive models; they're built for ultra-low-latency inference. Groq has claimed speeds up to 10x faster than traditional GPUs at a fraction of the energy use, making it a direct threat in the exploding inference market, where Google TPUs, Apple chips, and in-house efforts at Anthropic, OpenAI, and Meta are all gunning for dominance.
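For a sense of how latency claims like these are usually measured, here's a rough sketch that times time-to-first-token and streaming throughput against an OpenAI-compatible chat endpoint. The base URL and model id below are assumptions for illustration, not details confirmed by the deal coverage.

```python
# Rough sketch: time-to-first-token and streaming throughput against an
# OpenAI-compatible endpoint. The base_url and model id are illustrative
# assumptions only.
import time
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

start = time.perf_counter()
first_token_at = None
chunks = 0

stream = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # hypothetical model id
    messages=[{"role": "user", "content": "Explain inference latency in two sentences."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token_at is None:
            first_token_at = time.perf_counter()
        chunks += 1  # chunk count is a crude proxy for tokens

elapsed = time.perf_counter() - start
print(f"time to first token: {first_token_at - start:.3f}s")
print(f"~{chunks / elapsed:.1f} chunks/sec over {elapsed:.2f}s total")
```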

Nvidia is neutralizing a rising challenger founded by ex-Google TPU creators, integrating their deterministic, high-efficiency tech into its "AI factory" architecture. Read more.

OpenAI tests ads inside ChatGPT replies

According to reports from The Information, OpenAI is experimenting with advertising formats that would embed sponsored content directly into ChatGPT responses. Options under discussion include AI-generated answers that surface paid recommendations, contextual ads placed beside chat outputs, and sponsored links triggered only after users ask for deeper detail.

Some concepts would use ChatGPT’s memory feature to personalize ads based on prior conversations, a move that introduces both technical leverage and trust risk. OpenAI says it is exploring monetization without compromising user confidence, even as CEO Sam Altman has previously warned that ad-shaped AI responses, especially those informed by private chat history, could cross into dystopian territory.

At the same time, Altman is projecting a long-term upside to AI-driven disruption. He predicts that within 10 years, college graduates will land exciting, ultra-high-paying jobs exploring the solar system, making today's careers look dull by comparison, and says he envies young people entering the workforce amid AI-driven abundance and new frontiers like space missions. Read more.

Nano Banana Flash incoming, plus your Gmail freedom

Google is on the verge of releasing Nano Banana 2 Flash, the efficient sibling to Nano Banana Pro (internally dubbed Ketchup). Codenamed Mayo, the Flash variant delivers near-Pro image quality at lower cost and blazing speed, ideal for high-volume creators and builders scaling ideas without premium overhead. Early leaks and code dives show outputs rivaling the top tier, and the model plugs into Gemini for massive inference runs.

Five years after AlphaFold 2 solved protein folding and revolutionized biology, DeepMind VP Pushmeet Kohli discussed recent advances. AlphaFold 3 now handles DNA, RNA, and small molecules using diffusion models, with strict checks to avoid hallucinations. The key development is the "AI co-scientist," a Gemini-powered multi-agent system that generates hypotheses, debates options, and cuts research time from months to hours. Kohli's goal is to simulate entire human cells for faster biological discoveries.
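The co-scientist is described as a generate-debate-rank loop across multiple agents. Below is a bare-bones sketch of that pattern; the `ask_llm` helper is a hypothetical stand-in for whatever chat-model call you use, not DeepMind's actual system.

```python
# Bare-bones generate-critique-rank loop. ask_llm is a hypothetical stand-in
# for any chat model call; this is NOT DeepMind's co-scientist API.
from typing import Callable

def co_scientist_round(question: str, ask_llm: Callable[[str], str],
                       n_hypotheses: int = 3) -> str:
    # 1. Generator agent: propose candidate hypotheses
    hypotheses = [
        ask_llm(f"Propose one testable hypothesis for: {question}")
        for _ in range(n_hypotheses)
    ]
    # 2. Critic agent: debate each hypothesis and point out weaknesses
    critiques = [
        ask_llm(f"Critique this hypothesis, listing flaws and missing controls:\n{h}")
        for h in hypotheses
    ]
    # 3. Ranker agent: pick the most promising hypothesis given the critiques
    bundle = "\n\n".join(
        f"Hypothesis {i + 1}:\n{h}\nCritique:\n{c}"
        for i, (h, c) in enumerate(zip(hypotheses, critiques))
    )
    return ask_llm(f"Given these hypotheses and critiques, return the strongest one:\n{bundle}")
```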

And on the personal front, freedom at last. Google is rolling out the ability to change your @gmail.com address without losing anything: emails, Drive files, and subscriptions all stay intact. The old address remains an alias (mail keeps flowing in, sign-ins still work). The rollout starts in India (spotted in Hindi support docs) and comes with limits: one change per year and three per lifetime. No more being chained to that teenage username. This brings personal accounts closer to Workspace-level flexibility. Read more.

5 new AI-powered tools from around the web

Latest AI Research Papers

arXiv is a free online library where researchers share pre-publication papers.

Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.

Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!