
Google rolls out 10 new AI upgrades to Chrome, including Gemini integration


Good morning. It’s Friday, September 19th.

On this day in tech history: In the mid-1980s, Rodney Brooks’ Nouvelle AI turned AI research on its head, moving the focus from abstract symbols to robots that learn by doing. Rather than relying on huge, pre-coded models, these systems built intelligence through direct sensorimotor interaction with the world. The approach didn’t just influence robotics; it popularized behavior-based control architectures and set the stage for today’s embodied, context-aware AI agents.

In today’s email:

  • Google rolls out 10 new AI upgrades to Chrome, including Gemini integration

  • New ChatGPT Attack Pilfers Inboxes

  • Luma AI Unveils Ray3

  • Everything Announced at Meta Connect

  • 5 New AI Tools

  • Latest AI Research Papers

You read. We listen. Let us know what you think by replying to this email.

The AI Agent Shopify Brands Trust for Q4

Generic chatbots don’t work in ecommerce. They frustrate shoppers, waste traffic, and fail to drive real revenue.

Zipchat.ai is the AI Sales Agent built for Shopify brands like Police, TropicFeel, and Jackery. Designed to sell, Zipchat also:

  • Answers product questions instantly and recommends upsells

  • Converts hesitant shoppers into buyers before they bounce

  • Recovers abandoned carts automatically across web and WhatsApp

  • Automates support 24/7 at scale, cutting tickets and saving money

From 10,000 visitors/month to millions, Zipchat scales with your store — boosting sales and margins while reducing costs. That’s why fast-growing DTC brands and established enterprises alike trust it to handle their busiest season and fully embrace Agentic Commerce.

Setup takes less than 20 minutes with our success manager. And you’re fully covered with 37 days risk-free (7-day free trial + 30-day money-back guarantee).

On top of that, use the NEWSLETTER10 coupon for 10% off forever.

Today’s trending AI news stories

Google rolls out 10 new AI upgrades to Chrome, including Gemini integration

Google's latest Chrome update packs ten AI-driven features, led by Gemini. On Mac and Windows in the US, Gemini can summarize webpages, explain content across multiple tabs, and soon handle tasks autonomously, like booking appointments or placing grocery orders.

Chrome also introduces memory recall, letting you pull up sites you visited weeks ago. Ask “where did I see that walnut desk last week?” and it fetches the page. Tighter Google app integration means Calendar, YouTube, and Maps work without constant tab-switching. AI Mode in the omnibox lets you run long, complex queries, while contextual side-panel overviews deliver instant answers. Security is enhanced with Gemini Nano, which blocks scams, filters spam, and adapts site-permission handling.

Line graph depicting a one-dimensional cross-section of the 2D vorticity (Ω) field, illustrating how singularities grow increasingly unstable over time. | Image: Google

On the research front, Google DeepMind used Physics-Informed Neural Networks (PINNs) to explore fluid dynamics at near-machine precision. They discovered new families of unstable singularities in Incompressible Porous Media and Boussinesq equations, revealing patterns in the blow-up speed parameter lambda (λ) and capturing solutions unreachable by traditional methods, paving the way for computer-assisted proofs and advanced modeling.
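
To make the idea concrete, here is a minimal, hypothetical PINN sketch in PyTorch. It trains a small network on a toy 1D heat equation (u_t = u_xx), not the Boussinesq or porous-media systems DeepMind studied, by penalizing the PDE residual at random collocation points; boundary and initial-condition terms are omitted for brevity.

import torch
import torch.nn as nn

# Toy PINN: learn u(x, t) so the residual of u_t = u_xx vanishes at sampled
# collocation points. Illustrative sketch only, not DeepMind's formulation.
net = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1000):
    xt = torch.rand(256, 2, requires_grad=True)  # columns: x, t
    u = net(xt)
    # First derivatives of u with respect to (x, t) via autograd.
    du = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = du[:, 0:1], du[:, 1:2]
    # Second derivative u_xx, keeping the graph so the loss can backpropagate.
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, 0:1]
    loss = ((u_t - u_xx) ** 2).mean()  # physics residual only
    opt.zero_grad()
    loss.backward()
    opt.step()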

Google also announced that the Gemini app now lets users share custom Gems, personalized AI workflows for writing, planning, or analytics. Sharing works much like it does in Google Drive, turning individual Gems into communal resources and enabling teams, families, or creative groups to reduce repetitive prompts while amplifying productivity and creativity. Read more.

New attack on ChatGPT research agent pilfers secrets from Gmail inboxes

Researchers have uncovered a new AI-targeted attack, ShadowLeak, exploiting OpenAI’s Deep Research agent to silently exfiltrate sensitive Gmail data. Deep Research, designed for multi-step, autonomous research across emails, documents, and web resources, was tricked via a sophisticated prompt injection embedded in seemingly innocuous emails.

The exploit leveraged Deep Research’s core strengths: it can follow complex instructions, open web links, and process content automatically. Radware’s analysis demonstrates how prompt injections weaponize these capabilities, turning autonomy into a vulnerability. OpenAI patched the flaw after private disclosure, adding consent gates and channel restrictions to block similar exfiltration.

The full end-to-end flow of the attack is illustrated in the figure above. | Image: Radware

The incident drives home a hard truth: LLM agents boost productivity, but hooking them up to sensitive data is a minefield, as prompt injections are becoming increasingly subtle, fast-evolving, and nearly impossible to fully prevent. Read more.
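
For readers wondering what a “consent gate” amounts to in practice, here is a minimal, hypothetical sketch; the function names and allow-list are illustrative assumptions, not OpenAI’s implementation. The idea is simply that any outbound request an agent proposes, unless it targets an allow-listed host, is held for explicit user approval before it is sent.

from urllib.parse import urlparse

# Hypothetical allow-list standing in for a "channel restriction".
ALLOWED_HOSTS = {"example.com"}

def consent_gate(proposed_url: str, ask_user) -> bool:
    """Return True only if the request is allow-listed or the user approves it."""
    host = urlparse(proposed_url).hostname or ""
    if host in ALLOWED_HOSTS:
        return True
    # Anything else, e.g. an exfiltration endpoint smuggled into an email via
    # prompt injection, requires an explicit yes from the user.
    return ask_user(f"Agent wants to fetch {proposed_url}. Allow? (y/n) ") == "y"

if __name__ == "__main__":
    approved = consent_gate(
        "https://attacker.invalid/collect?data=secret",
        ask_user=lambda prompt: input(prompt).strip().lower(),
    )
    print("request sent" if approved else "request blocked")

Combined with channel restrictions (the allow-list above), a gate like this makes it much harder for instructions hidden inside an email to quietly route data to an attacker-controlled endpoint.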

Luma AI unveils Ray3 with visual reasoning, intelligent HDR video and Adobe integration 

Luma AI has launched Ray3 in its Dream Machine app, with Adobe Firefly providing early access and integrated distribution for Adobe users. Ray3 generates HDR video in 10-, 12-, and 16-bit color with EXR export, and can convert SDR footage to HDR. It’s plug-and-play for editing and color-grading pipelines, giving creators a fast track to cinematic-quality visuals. Its reasoning engine interprets images and text, follows multi-step instructions, and self-assesses outputs. Want a character to follow a path or a camera to pan precisely? Just sketch it.

Draft Mode churns out rough previews up to five times faster and cheaper, then upgrades them to full 4K HDR “Hi-Fi.” Complex scenes, from crowds to reflections, motion blur, and light interactions, render realistically without repeated takes. Studios and streaming platforms can license Ray3 to slash physical shoots while maintaining cinematic fidelity. Backed by Amazon and a16z, Luma AI positions it as a smart, high-precision challenger to Google Veo. Read more.

Everything announced at Meta Connect 2025 

Meta’s annual Connect conference showed the company’s pivot from metaverse hype to deployable hardware. The star reveal was the Ray-Ban Meta Display, a $799 pair of smart glasses with a microLED lens display. The glasses project floating menus for texts, maps, and media, powered by Meta’s neural wristband.

Using electromyography (EMG), the band picks up faint electrical signals in the wrist, translating micro-movements into instant commands, faster and more discreetly than voice or handheld controllers. The fall lineup expands with the Ray-Ban Meta Gen 2 ($379), doubling battery life to 8 hours and supporting 3K video capture, and the Oakley Meta Vanguard ($499), a wraparound sports model with PRIZM lenses, water resistance, and ruggedized frames.

For Quest, Meta launched Horizon TV, a unified hub for Disney+, Hulu, Prime Video, and Twitch. Hyperscape Capture now lets Quest 3 and 3S users scan physical rooms into photorealistic VR replicas, with multi-user access coming soon. Horizon Worlds also gets a new engine and the Horizon Studio toolkit, folding in generative AI for textures, audio, and NPC tuning. Read more.

5 new AI-powered tools from around the web

arXiv is a free online library where researchers share pre-publication papers.

Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.

Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!