Midjourney v7 Introduces Enhanced Image Quality and Precision Reference
Plus, a Sneak Peek at DeepSeek’s Next AI Play
Good morning. It’s Friday, May 2nd.
On this day in tech history: In 2000, President Bill Clinton announced that accurate GPS access would no longer be restricted to the U.S. military.
In today’s email:
Midjourney V7
Meta’s AI Strategy
Sneak Peek at DeepSeek’s Next AI Play
3 New AI Tools
Latest AI Research Papers
You read. We listen. Let us know what you think by replying to this email.
In partnership with RIME
A startup called Rime just unveiled Arcana, a new spoken language text-to-speech (TTS) model that captures the “nuances of real human speech,” including laughter, accents, vocal stumbles, and breathing, with unprecedented realism.
It's available via API, and you can try it out right in your browser.
Thank you for supporting our sponsors!

Today’s trending AI news stories
Midjourney v7 Introduces Enhanced Image Quality and Omni-Reference for Precision
Midjourney has rolled out v7 of its image generation model, sharpening image quality and refining prompt fidelity. The update improves renderings of hands and bodies—longstanding trouble spots—and adds tools like “Vary,” “Upscale,” and an enhanced image preview for smoother editing. A new --exp parameter lets users fine-tune aesthetics, though cranking it up may trade off some prompt precision. Intelligent segmentation features have also been added to streamline post-generation tweaks.
3 quick updates! We've updated the V7 model with improved image quality and coherence. There's a new lightbox editor that's easier to use. Last we've added a new experimental aesthetics parameter --exp (goes 0 to 100, 0 is default) that pumps up details and creativity. Have fun!
— Midjourney (@midjourney)
11:00 PM • Apr 30, 2025
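For readers who want to try the new parameter, here is a sketch of a Discord prompt (the `--exp` name and its 0–100 range come from Midjourney’s announcement above; the prompt text itself is only an illustration):

```text
/imagine prompt: a lighthouse at dusk, volumetric fog --v 7 --exp 50
```

Higher `--exp` values push detail and stylization, at the possible cost of prompt precision noted above.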
The launch of Omni-Reference expands creative control, letting users anchor specific elements—whether characters, objects, or vehicles—within their images. With its “omni-weight” slider (0–1000), creators can dial in how strictly the model follows a reference, from loose style matching to high-fidelity replication.

Omni-Reference Examples
While still experimental, Omni-Reference hints at a deeper push toward granular customization in image generation. Read more.
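As a sketch of how Omni-Reference is invoked in practice (the `--oref` and `--ow` flag names are Midjourney’s reported v7 syntax; the URL and prompt here are placeholders):

```text
/imagine prompt: a knight riding through a crowded market --v 7 --oref https://example.com/character.png --ow 400
```

Low `--ow` values yield loose style matching; values toward 1000 push near-replication of the referenced element.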
Meta’s AI Strategy Includes Voice Data Collection and Automating Ads
Meta is ramping up its AI agenda with new data collection methods and deeper automation. In the U.S., Ray-Ban smart glasses now default to recording user voices, funneling data into Meta’s AI systems to refine algorithmic performance. Opt-outs require disabling voice control or manually deleting recordings, triggering fresh privacy debates despite Meta’s claims of user control.

Image: Meta
Alongside this, Mark Zuckerberg confirmed a forthcoming paid tier for the Meta AI app—approaching 1 billion users—offering enhanced compute and advanced features, mirroring moves by rivals. Meta also announced a sweeping overhaul of its ad business: advertisers now set objectives, and AI handles everything from creative production to targeting and performance tracking. While pitched as “infinite creative,” the model is already stirring backlash over brand safety and trust in ad measurements. Read more.
Sneak Peek at DeepSeek’s Next AI Play
DeepSeek has quietly dropped Prover-V2, a 671B-parameter open-source AI built for formal theorem proving, bridging informal mathematical reasoning and machine-verifiable proofs. Released on Hugging Face with little fanfare, Prover-V2 scores 88.9% on the MiniF2F benchmark and uses a ‘cold-start’ method to break complex proofs into subgoals before formal verification.
The model, based on DeepSeek’s V3 foundation and mixture-of-experts architecture, sharpens mathematical precision within its stack—despite DeepSeek’s limited access to Nvidia’s top-tier chips. Read more.
We just released DeepSeek-Prover V2.
- Solves nearly 90% of miniF2F problems
- Significantly improves the SoTA performance on the PutnamBench
- Achieves a non-trivial pass rate on AIME 24 & 25 problems in their formal version
Github: github.com/deepseek-ai/De…
— Zhihong Shao (@zhs05232838)
3:23 PM • Apr 30, 2025
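To make the task concrete, here is a toy Lean 4 proof in the spirit of miniF2F-style statements, decomposed into subgoals with `have` steps, loosely mirroring the subgoal decomposition Prover-V2’s cold-start pipeline performs. This example assumes Mathlib and is purely illustrative, not drawn from the benchmark:

```lean
import Mathlib

-- Prove that a sum of squares is nonnegative, split into two subgoals.
theorem sum_sq_nonneg (a b : ℤ) : 0 ≤ a ^ 2 + b ^ 2 := by
  have h1 : 0 ≤ a ^ 2 := sq_nonneg a   -- subgoal 1: first square is nonnegative
  have h2 : 0 ≤ b ^ 2 := sq_nonneg b   -- subgoal 2: second square is nonnegative
  exact add_nonneg h1 h2               -- combine the subgoals
```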

FutureHouse Platform brings super-intelligent AI research tools to scientists via web and API
Ai2's new small AI model outperforms similarly-sized models from Google, Meta
Runway launches Gen-4 References, letting users place characters in any scene
Suno Unveils v4.5: Enhanced Vocals, Genre Fusion, and Expanded AI Music Capabilities
US-built quantum computer outshines world's top supercomputers in key tests
Popular AI benchmark LMArena allegedly favors large providers, study claims
Anthropic Adds Integrations and Research Tools but Token Inflation Clouds Cost Picture
Wikipedia announces new AI strategy to “support human editors”
Visa wants to give artificial intelligence 'agents' your credit card
Watch: World's first wheeled robot dog powers through mud, rubble, mountains
Sam Altman-backed World expands ID tech with Tinder, Visa card, Stripe integration
Google funding electrician training as AI power crunch intensifies
Xiaomi introduces MiMo-7B, a compact model for math and coding tasks
Duolingo said it just doubled its language courses thanks to AI
Brave’s Latest AI Tool Could End Cookie Consent Notices Forever
Benchmark shows AI agents can't yet replace human analysts in finance
'Robotability score' ranks NYC streets for future robot deployment
KREA AI Launches Enhanced Model and Topaz Labs Integration for 22K Upscaling
Ideogram Launches 3.0 Update with Enhanced Realism and New Canvas Editing Features
Kling AI Launches Instant Film Effect for 3D Polaroid-Style Photos

3 new AI-powered tools from around the web

arXiv is a free online library where researchers share pre-publication papers.


Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!