
Midjourney v7 Introduces Enhanced Image Quality and Precision Reference

Plus, a Sneak Peek at DeepSeek’s Next AI Play


Good morning. It’s Friday, May 2nd.

On this day in tech history: In 2000, President Bill Clinton announced that access to accurate GPS signals would no longer be restricted to the U.S. military.

In today’s email:

  • Midjourney V7

  • Meta’s AI Strategy

  • Sneak Peek at DeepSeek’s Next AI Play

  • 3 New AI Tools

  • Latest AI Research Papers

You read. We listen. Let us know what you think by replying to this email.

In partnership with RIME

A startup called Rime just unveiled Arcana, a new spoken language (text-to-speech) model that can capture the “nuances of real human speech,” including laughter, accents, vocal stumbles, breathing, and more, with unprecedented realism.

It's available via API, and you can try it out right in your browser.

Thank you for supporting our sponsors!

Today’s trending AI news stories

Midjourney v7 Introduces Enhanced Image Quality and Omni-Reference for Precision

Midjourney has rolled out v7 of its image generation model, sharpening image quality and refining prompt fidelity. The update improves renderings of hands and bodies—longstanding trouble spots—and adds tools like “Vary,” “Upscale,” and an enhanced image preview for smoother editing. A new --exp parameter lets users fine-tune aesthetics, though cranking it up may trade off some prompt precision. Intelligent segmentation features have also been added to streamline post-generation tweaks.
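As a rough sketch (the prompt is invented and the value illustrative; consult Midjourney’s parameter documentation for the supported --exp range), a v7 prompt using the new parameter might look like:

/imagine prompt: a lighthouse at dusk, oil painting --v 7 --exp 25

Higher values lean harder into the model’s aesthetic experimentation, which is where the prompt-precision trade-off mentioned above comes in.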

The launch of Omni-Reference expands creative control, letting users anchor specific elements—whether characters, objects, or vehicles—within their images. With its “omni-weight” slider (0–1000), creators can dial in how strictly the model follows a reference, from loose style matching to high-fidelity replication.
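In prompt form, that pairing might look like the following sketch (--oref and --ow are the parameter names Midjourney has documented for Omni-Reference and omni-weight; verify the exact syntax against the official docs, and note the URL and weight here are placeholders):

/imagine prompt: the same character riding a motorcycle through Tokyo at night --v 7 --oref https://example.com/character.png --ow 400

A low --ow keeps only a loose stylistic resemblance to the reference, while values toward 1000 push the model to reproduce it closely.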

Image: Omni-Reference examples

While still experimental, Omni-Reference hints at a deeper push toward granular customization in image generation. Read more.

Meta’s AI Strategy Includes Voice Data Collection and Automating Ads

Meta is ramping up its AI agenda with new data collection methods and deeper automation. In the U.S., Ray-Ban smart glasses now default to recording user voices, funneling data into Meta’s AI systems to refine algorithmic performance. Opt-outs require disabling voice control or manually deleting recordings, triggering fresh privacy debates despite Meta’s claims of user control.

Image: Meta

Alongside this, Mark Zuckerberg confirmed a forthcoming paid tier for the Meta AI app, which is approaching 1 billion users, offering enhanced compute and advanced features, mirroring moves by rivals. Meta also announced a sweeping overhaul of its ad business: advertisers now set objectives, and AI handles everything from creative production to targeting and performance tracking. While pitched as “infinite creative,” the model is already stirring backlash over brand safety and trust in ad measurement. Read more.

Sneak Peek at DeepSeek’s Next AI Play

DeepSeek has quietly dropped Prover-V2, a 671B-parameter open-source AI built for formal theorem proving and informal math reasoning. Released on Hugging Face with little fanfare, Prover-V2 clocks 88.9% on the MiniF2F benchmark and uses a ‘cold-start’ method to break down complex proofs into subgoals before formal verification.

The model, based on DeepSeek’s V3 foundation and mixture-of-experts architecture, sharpens mathematical precision within its stack—despite DeepSeek’s limited access to Nvidia’s top-tier chips. Read more.
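For readers who want to poke at the release, here is a minimal sketch of loading it with Hugging Face’s transformers library. The repo id is our best guess from DeepSeek’s naming convention (verify it on Hugging Face), and at 671B parameters the full checkpoint needs a multi-GPU cluster, so treat this as illustrative:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face repo id; confirm on huggingface.co/deepseek-ai before running.
model_id = "deepseek-ai/DeepSeek-Prover-V2-671B"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # shard layers across available GPUs
    trust_remote_code=True,  # DeepSeek checkpoints ship custom modeling code
)

# Prover-V2 targets formal proofs; prompt it to finish a simple Lean 4 theorem.
prompt = (
    "Complete the following Lean 4 proof:\n"
    "theorem add_comm' (a b : Nat) : a + b = b + a := by"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))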

3 new AI-powered tools from around the web

Latest AI Research Papers

arXiv is a free online library where researchers share pre-publication papers.

Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.

Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!