Adobe Launches Inside ChatGPT

Good morning. It’s Wednesday, December 10th.

On this day in tech history: In 1993, the Mosaic 1.0 web browser was released for Windows and Macintosh. It helped turn the web from a text-only academic tool into a mainstream hypermedia platform, without which we’d have no Common Crawl, no billions of scraped webpages, and no foundation models as we know them.

In today’s email:

  • Adobe Brings Photoshop, Express, and Acrobat Directly Into ChatGPT

  • Google’s 2026 smart glasses put AI front and center with Project Aura and Gemini 3 Pro

  • GPT-5.2 and Image-2 models enter early testing as enterprise AI soars

  • 5 New AI Tools

  • Latest AI Research Papers

You read. We listen. Let us know what you think by replying to this email.

Deepfakes Are Now Multimodal. Your Defense Should Be Too.

Introducing Resemble AI’s DETECT-3B Omni — a 3-billion-parameter model that detects fake voice, images, and video through one API.

🏆 #1 on Hugging Face
🏆 #1 on DFBench for speech + image detection

✔ 40+ languages
✔ Replay-attack protection
✔ Detects partial image edits
✔ Works on major commercial AI outputs (GPT-4o, Midjourney, Veo, etc.)

If your organization handles identity, finance, call centers, or video comms, this is production-grade deepfake defense for 2025 threat models.

Today’s trending AI news stories

Adobe Brings Photoshop, Express, and Acrobat Directly Into ChatGPT

Adobe announced a major integration today, bringing three of its flagship applications — Photoshop, Express, and Acrobat — directly into ChatGPT. The rollout delivers Adobe’s creative and productivity tools to more than 800 million ChatGPT users for free, redefining how people edit visuals, design content, and manage documents.

Adobe apps for ChatGPT allow anyone to create, adjust, and transform media using plain language inside the chat. Users can type commands like “Adobe Photoshop, blur the background of this image,” and ChatGPT automatically launches the appropriate Adobe experience, guiding them step by step.

The apps adapt familiar Adobe features to a conversational interface. Users can describe edits or engage more manually through embedded UI controls that appear contextually — such as sliders for brightness or contrast inside Photoshop. The experience surfaces the right tools at the right moment, blending AI assistance with hands-on control.

This launch extends Adobe’s ongoing investment in conversational AI and agent-driven workflows. It follows the introduction of Acrobat Studio, new AI Assistants across Adobe’s creative tools, and Adobe’s work leveraging the Model Context Protocol to unify experiences across applications.

With Adobe apps for ChatGPT, users can:

  • Edit images in Photoshop — adjust exposure, refine areas, and apply effects like Glitch or Glow while retaining image quality.

  • Design content through Express — browse templates, customize text, animate assets, and generate visuals entirely inside chat.

  • Organize documents with Acrobat — edit PDFs, extract data, merge files, convert and compress formats, and redact sensitive material.

Work started in ChatGPT can be continued seamlessly inside Adobe’s native apps for deeper precision. Read more.

Google’s 2026 smart glasses put AI front and center with Project Aura and Gemini 3 Pro

Google’s 2026 smart-glasses roadmap introduces two lines: display-equipped models for navigation, translations, and side-by-side Android and Windows apps, and audio-focused glasses for screen-free, voice-driven AI via Gemini. Project Aura, developed with Xreal, adds a 70-degree optical see-through display, XR Gen 2 Plus tethered compute, and multimodal sensing for contextual understanding.

Image: Google, Xreal

The Android XR ecosystem will also debut “autospatialization,” converting 2D content, including streamed PC games, into interactive 3D experiences. Travel Mode stabilizes visuals in motion, and Likeness avatars mirror expressions for realistic video calls. Developers can use the updated Android XR SDK to build immersive apps.

Chrome is adding a Gemini-powered safety layer that uses a “User Alignment Critic” to block risky actions, limit AI agents to relevant sites, record what they do, and require approval for sensitive steps.
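
For readers who want a concrete picture of that gating pattern, here is a minimal TypeScript sketch of the general idea: a critic reviews each proposed agent action, confines it to allowed sites, logs it, and flags sensitive steps for user approval. The type and class names are illustrative assumptions, not Chrome’s actual internals or any published Google API.

```typescript
// Hypothetical sketch of a "critic gate" for an agentic browser.
// Every proposed action is logged, scoped to allowed sites, and
// escalated to the user when it looks sensitive. Names are illustrative.

type AgentAction = {
  kind: "navigate" | "click" | "submitForm" | "purchase";
  url: string;
  details?: string;
};

type Verdict = { allow: boolean; needsApproval: boolean; reason: string };

class UserAlignmentCritic {
  constructor(
    private allowedHosts: string[],
    private log: AgentAction[] = [],
  ) {}

  review(action: AgentAction): Verdict {
    this.log.push(action); // record everything the agent tries to do

    const host = new URL(action.url).hostname;
    if (!this.allowedHosts.includes(host)) {
      // keep the agent on sites relevant to its task
      return { allow: false, needsApproval: false, reason: `off-task site: ${host}` };
    }
    if (action.kind === "purchase" || action.kind === "submitForm") {
      // sensitive steps run only after explicit user approval
      return { allow: true, needsApproval: true, reason: "sensitive step" };
    }
    return { allow: true, needsApproval: false, reason: "routine step" };
  }
}

// Usage: the agent proposes an action; the critic decides whether to run it,
// block it, or pause for the user's sign-off.
const critic = new UserAlignmentCritic(["www.example-store.com"]);
const verdict = critic.review({
  kind: "purchase",
  url: "https://www.example-store.com/checkout",
});
console.log(verdict); // { allow: true, needsApproval: true, reason: "sensitive step" }
```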

Gemini 3 Pro also posted new top scores on vision tasks like OCR, spatial reasoning, video analysis, and turning messy documents into working code. BNY Mellon is already adopting it to automate complex financial workflows. Google’s Stitch tool can now turn AI-made UI designs straight into usable HTML. Google also reaffirmed that Gemini will stay ad-free.

In a bid to reclaim product discovery from Amazon and social platforms, Google expanded its Doppl virtual try-on app with a TikTok-style discovery feed made entirely from AI-generated video. Every clip links directly to a store, turning Doppl into a full shopping funnel. Read more.

GPT-5.2 and Image-2 models enter early testing as enterprise AI soars

OpenAI is touting big enterprise wins just days after its internal ‘code red.’ According to its 2025 Enterprise Report, usage has grown eightfold since late 2024, with organizations consuming 320 times more reasoning tokens for analytics, coding, and research. Custom GPTs now handle 20 percent of enterprise traffic, with major clients like BBVA running thousands of task-specific agents. A survey of 9,000 workers shows that AI adoption saves 40–60 minutes per day, rising to 80 minutes in technical roles, although adoption still varies across industries.

OpenAI's data shows users who apply AI to more task types report greater time savings. | Image: OpenAI

Denise Dresser, former Slack CEO, joins OpenAI as chief revenue officer to lead the growing enterprise team under COO Brad Lightcap. The company faces structural risks, according to a Financial Times analysis, as the US power grid may fall 19 GW short by 2028 while AI data centers expand, compared with China adding 429 GW in 2024. To cope, companies are using on-site power, gas turbines, and nuclear plants.

The company has also been spotted preparing its next-generation AI models. Image-2 and Image-2-mini, codenamed Chestnut and Hazelnut, are being tested on LM Arena, showing better color accuracy, image structure, and code-in-image features. GPT-5.2, labeled “Olive Oil Cake,” is appearing in Notion as part of early testing before a wider enterprise release.

OpenAI also launched AI certification courses for workers and teachers this week. The program is now being piloted at major companies and universities, with the goal of certifying 10 million Americans by 2030. Read more.

5 new AI-powered tools from around the web

Latest AI Research Papers

arXiv is a free online library where researchers share pre-publication papers.

Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.

Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!