Google nears release of GPT-4 competitor

Plus, royalty-free AI-generated beats

Good morning. It’s Monday, September 18th.

Did you know: The website ThisPersonDoesNotExist shows an endless stream of AI-generated humans that look convincingly real?

In today’s email:

  • AI Language and Content Generation

  • AI in Transportation and Navigation

  • AI in Urban Planning and Design

  • AI Hardware and Industry Movements

  • AI in Art and Creativity

  • Innovative Robotics

  • 5 New AI Tools

  • Latest AI Research Papers

You read. We listen. Let us know what you think of this edition by replying to this email, or DM us on Twitter.

Today’s edition is brought to you by:

The future of artist-friendly music technology

Lemonaide Seeds is the ultimate melodic idea generator.

100% royalty-free, AI-generated MIDI beats that match any key, style, or tempo, made with the click of a button.

Today’s trending AI news stories

AI Language and Content Generation

Google nears release of AI software Gemini. Google has provided select companies with early access to its conversational AI software, aiming to rival OpenAI’s GPT-4 model. Gemini comprises a family of large language models that power chatbots, summarize text, and generate original content. Google is currently offering developers access to a relatively large version of Gemini and plans to make it available through Google Cloud Vertex AI. Gemini is said to be a multimodal language model with a similar number of training parameters as GPT-4.

Salesforce’s "Chain of Density" prompt aims to improve AI summaries by packing more info into fewer words. CoD involves iterative summary creation and revision, resulting in more abstract, coherent, and unbiased summaries compared to generic prompts. Tested on 100 news articles from CNN and DailyMail, summaries created after about three iterations were rated the highest by human reviewers. This complex prompt offers a promising method for improving the quality of AI-generated article summaries, with the potential to advance the field of summarization.
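To make the idea concrete, here is a minimal sketch of a Chain-of-Density-style loop: start with a sparse summary, then repeatedly ask the model which entities it missed and fold them in without letting the word count grow. The `llm` callable and the prompt wording are placeholders for illustration, not the paper’s verbatim prompts.

```python
# Minimal sketch of a Chain-of-Density-style summarization loop.
# `llm(prompt) -> str` is a placeholder for any completion function;
# the prompts are paraphrased for illustration, not the paper's originals.

def chain_of_density(article: str, llm, rounds: int = 3) -> str:
    # Start with a short, entity-sparse summary.
    summary = llm(
        "Write a short (~80 word) summary of the article below.\n\n" + article
    )
    for _ in range(rounds):
        # Ask which informative entities are still missing.
        missing = llm(
            "List 1-3 informative entities from the article that are missing "
            "from the summary.\n\nArticle:\n" + article + "\n\nSummary:\n" + summary
        )
        # Rewrite denser: add the entities, keep the length roughly fixed.
        summary = llm(
            "Rewrite the summary to include these entities while keeping roughly "
            "the same length (compress or fuse existing phrases, do not pad).\n\n"
            "Entities:\n" + missing + "\n\nCurrent summary:\n" + summary
        )
    return summary
```

Per the article, around three densification rounds was the sweet spot that human reviewers rated highest.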

Is ChatGPT a Better Entrepreneur Than Most? In an experiment led by Wharton professor Christian Terwiesch, ChatGPT outperformed MBA students at generating new product ideas, and did so faster and more cheaply. Both the students and ChatGPT were asked to propose products for college students priced under $50; ChatGPT produced more ideas in less time, and its ideas, especially those “seeded” with example concepts, scored higher on purchase probability. Terwiesch advocates using generative AI as a creative co-pilot for innovation and idea generation across domains.

AI in Transportation and Navigation

New Google DeepMind algorithm improves Google Maps routing by up to 24 percent, based on real driving data and user preferences. The 360-million-parameter model employs an “inverse reinforcement learning” (IRL) approach, including a novel IRL algorithm called Receding Horizon Inverse Planning (RHIP). RHIP optimizes route suggestions, improving route-match accuracy by 16 to 24 percent for driving and two-wheeled vehicles. The work addresses the complexity of real-world road networks and suggests a promising future for AI-driven route planning; extensive user testing will determine its real-world effectiveness.
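As a rough intuition for the inverse-RL idea, the toy sketch below learns edge-cost weights from one demonstrated trip using a structured-perceptron update: whenever the planner’s cheapest route disagrees with the route the driver actually took, the weights are nudged until the demonstrated route becomes the cheapest. The graph, features, and update rule are invented for illustration and are far simpler than DeepMind’s RHIP.

```python
# Toy inverse-RL-flavoured route-cost learning (illustrative only).
import networkx as nx
import numpy as np

# Each edge carries features: [length_km, is_highway]
G = nx.DiGraph()
edges = {
    ("A", "B"): [3.0, 0.0], ("B", "D"): [3.0, 0.0],  # longer surface-street route
    ("A", "C"): [2.0, 1.0], ("C", "D"): [2.0, 1.0],  # shorter highway route
}
for (u, v), feat in edges.items():
    G.add_edge(u, v, feat=np.array(feat))

demo_path = ["A", "B", "D"]   # this driver avoids highways despite the detour
w = np.array([1.0, 0.0])      # initial weights: distance only, no highway penalty

def path_features(path):
    return sum(G[u][v]["feat"] for u, v in zip(path, path[1:]))

for _ in range(20):
    for u, v, d in G.edges(data=True):
        d["cost"] = float(w @ d["feat"])          # edge cost under current weights
    pred_path = nx.shortest_path(G, "A", "D", weight="cost")
    if pred_path == demo_path:
        break
    # Nudge weights so the demonstrated route becomes cheaper than the prediction.
    w += 0.1 * (path_features(pred_path) - path_features(demo_path))
    w = np.clip(w, 0.0, None)                     # keep edge costs non-negative for Dijkstra

print("learned weights [distance, highway]:", w)  # a positive highway penalty emerges
```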

Knight Rider Lite: Wayve's Lingo-1 brings human-like reasoning to self-driving cars. Lingo-1 combines machine vision with textual reasoning to enhance self-driving cars’ decision-making. It allows a car to provide textual justifications for its actions, making its decisions more transparent and safer. The model can be trained through text prompts and adapts flexibly. While currently trained only on London driving data, it has shown promising results, reaching about 60% of human-driver answer accuracy. This innovative approach could improve the safety and training of autonomous vehicles.

AI in Urban Planning and Design

AI Can Already Design Better Cities Than Humans, Study Shows. Researchers at Tsinghua University in China developed an AI system that outperformed human designs by 50% in terms of access to services, green spaces, and traffic levels. The AI, trained over two days, swiftly handled tasks that would take human planners up to 100 minutes, enabling urban planners to focus on more complex, human-centric aspects of design. The AI is envisioned as an assistant to human planners, with human experts reviewing and optimizing AI-generated concepts based on community feedback for more efficient, sustainable, and accessible cities.

AI Hardware and Industry Movements

NVIDIA Reportedly Shipping 900 Tons of H100 AI GPUs This Quarter, Amounting to 300,000 Units. The disclosure, from research firm Omdia, highlights NVIDIA’s dominance in the AI space and its progress toward its goal of shipping 1.5 million to 2 million AI GPUs by 2024. NVIDIA’s H100 GPUs have been instrumental in achieving these record-breaking sales figures, giving the company a near-monopoly in the industry and exciting prospects as it continues to lead AI GPU shipments.

AI in Art and Creativity

Canadian startup Acrylic Robots has created an AI-powered robot capable of reproducing artists’ works at scale. The robot, developed by founder Chloe Ryan, employs machine learning and neural networks to recreate existing artwork with small variations, adding a touch of uniqueness. This device offers digital tracking of brushstrokes, making it accessible to artists with a laptop or tablet. The imperfections introduced by the robot’s reproductions appeal to artists and consumers, according to Ryan, who sees it as a creative co-pilot for artists.

Innovative Robotics

Microscale internal combustion engine powers insect robot. This insect-sized robot, featured in Science, can jump 59 centimeters vertically and carry a load 22 times its own weight while walking. The scientists intend to apply this combustion-actuator technology to build stronger and more agile robots with large-scale, variable-recruitment musculature, potentially enabling dexterous, fast-moving land-based hybrid robots.

5 new AI-powered tools from around the web

KBaseBot revolutionizes static content by effortlessly transforming it into dynamic chatbots. This innovation allows you to embed these chatbots on your platform, facilitating real-time dialogues and enhancing knowledge accessibility. KBaseBot offers automated conversations, lead generation, sample embedding, full customization, and GDPR compliance, making it a transformative tool for boosting engagement and interaction.

Snapy.ai is an AI-powered tool that streamlines video and audio content creation. It automates tasks like trimming, editing, and removing silent parts from videos and audio, making content creation more efficient. Users can generate engaging content for platforms like YouTube, Reels, TikTok, podcasts, and more.

NoiseGPT is a decentralized AI platform that promotes freedom of speech by avoiding biases and censorship. It offers hyper-realistic text-to-speech, human-like dialogue bots, and voice cloning from short audio clips. It’s used in content creation, podcasts, advertising, and more. The noiseGPT token supports ecosystem growth.

Parea AI is an engineering platform designed to empower developers in creating, optimizing, and sharing LLM-powered products. It offers a range of features such as side-by-side prompt comparison, CSV test case import, automatic prompt optimization, API access, and analytics. Additionally, Parea provides personalized feature development and dedicated support to enhance the developer experience.

Morph Beta is a cutting-edge, no-code data management tool and “All-in-One Data Studio” that simplifies data storage, analysis, and sharing. Featuring AI-powered no-code capabilities, serverless Postgres, and seamless integrations, it empowers non-developers to engage with data effortlessly.

arXiv is a free online library where scientists share their research papers before they are published. Here are the top AI papers for today.

This research, conducted by Google Research, introduces a novel approach to modeling scene dynamics within image space. It leverages an AI-driven neural stochastic motion texture to predict long-term motion representations from individual images. This technique has practical applications in transforming static images into dynamic videos, creating seamless loops, and simulating object dynamics in response to external forces. The model’s training utilizes latent diffusion, and to address issues related to amplitude distribution, it employs a frequency-adaptive normalization technique. Furthermore, a frequency-coordinated denoising strategy enhances motion prediction accuracy, resulting in the generation of coherent stochastic motion textures across various frequencies, thereby producing more realistic animations.

A study conducted at Stanford University investigated the effectiveness of large language models (LLMs) in clinical summarization tasks. The research covered eight LLMs and four summarization tasks, including radiology reports and patient questions, and found that appropriately adapted LLMs surpassed human experts in the completeness and correctness of clinical text summaries. This suggests that integrating LLMs into clinical workflows could ease clinicians’ documentation burden, freeing them to focus on personalized patient care. The research also highlighted challenges faced by both LLMs and human experts in this context.

This research addresses challenges in selecting effective demonstrations for in-context learning (ICL) with large language models (LLMs). The proposed AMBIG-ICL method mitigates LLMs’ sensitivity to prompts by considering semantic similarity, label ambiguity, and model misclassifications when choosing ICL demonstrations. Experiments on three text classification tasks demonstrate substantial performance gains over baseline methods. By leveraging the LLM’s existing knowledge about the task, this approach provides a more effective strategy for ICL, reducing the need for task-specific fine-tuning while enhancing performance, making it a valuable contribution to the field of natural language processing and machine learning.
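A simplified sketch of the selection idea: rank labelled candidates by similarity to the test input, then keep only those whose gold label falls inside the test point’s ambiguous label set (for example, the model’s top-2 zero-shot guesses). TF-IDF similarity and the `zero_shot_top2` helper are stand-ins introduced here for illustration, not the authors’ implementation.

```python
# Illustrative demonstration selection for in-context learning.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_demos(test_text, pool, zero_shot_top2, k=4):
    """pool: list of (text, label) candidates; zero_shot_top2: returns the
    model's two most likely labels for a text (placeholder helper)."""
    texts = [t for t, _ in pool]
    vec = TfidfVectorizer().fit(texts + [test_text])
    sims = cosine_similarity(vec.transform([test_text]), vec.transform(texts))[0]

    ambiguous = set(zero_shot_top2(test_text))           # labels the model confuses
    ranked = sorted(zip(sims, pool), key=lambda x: -x[0])
    demos = [(t, y) for _, (t, y) in ranked if y in ambiguous][:k]
    # Fall back to plain similarity ranking if the ambiguity filter is too strict.
    return demos or [p for _, p in ranked[:k]]
```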

This research investigates the potential of Large Language Models (LLMs) as evaluators for scaling up multilingual evaluation. It addresses the limitations of current evaluation techniques, particularly for languages beyond the top 20, which lack systemic evaluation. LLM-based evaluators can theoretically cover a wide range of languages without relying on human annotators, but their bias and performance need examination. The study conducts evaluations across three text generation tasks in eight languages, calibrating LLM-based judgments against 20,000 human judgments. Findings suggest that caution is necessary, particularly in low-resource and non-Latin script languages, as LLM-based evaluators may exhibit biases towards higher scores.
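The calibration step can be pictured as a simple per-language comparison of automatic and human scores: measure rank agreement with annotators and check for a systematic upward drift. The scores and threshold below are invented placeholders, not figures from the study.

```python
# Compare LLM-evaluator scores with human judgments (illustrative data).
import numpy as np
from scipy.stats import spearmanr

def calibrate(human_scores, llm_scores, bias_threshold=0.5):
    human = np.asarray(human_scores, dtype=float)
    llm = np.asarray(llm_scores, dtype=float)
    rho, _ = spearmanr(human, llm)        # rank agreement with human annotators
    bias = float(np.mean(llm - human))    # systematic over- or under-scoring
    return {"spearman": rho, "mean_bias": bias, "inflated": bias > bias_threshold}

# e.g. a language where the LLM judge tends to score higher than humans do
print(calibrate([3, 4, 2, 5, 3], [4, 4, 3, 5, 4]))
```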

The paper introduces a novel open-source library designed for creating autonomous language agents. These agents leverage large language models to perform various tasks and interact with humans and environments using natural language interfaces. The AGENTS framework addresses the need for more accessible and customizable language agents by offering features like memory management, tool usage, multi-agent communication, and fine-grained symbolic control. It caters to both non-specialists and researchers, simplifying agent development and deployment. AGENTS also encourages community collaboration through its Agent Hub, fostering the sharing and customization of language agents.
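For readers new to language agents, the sketch below shows the generic reason-act-observe loop such frameworks orchestrate, with tool calls and a running memory. It is a hand-rolled illustration, not the AGENTS library’s actual API; `llm` and the tool registry are placeholders.

```python
# Generic tool-using agent loop (not the AGENTS library's API).
def run_agent(task, llm, tools, max_steps=5):
    memory = [f"Task: {task}"]
    for _ in range(max_steps):
        prompt = "\n".join(memory) + \
            "\nRespond with TOOL:<name>:<input> or FINISH:<answer>."
        reply = llm(prompt)
        if reply.startswith("FINISH:"):
            return reply.removeprefix("FINISH:").strip()
        _, name, arg = reply.split(":", 2)       # parse the tool call
        observation = tools[name](arg.strip())   # execute the tool
        memory.append(f"Called {name}({arg.strip()}) -> {observation}")
    return "No answer within the step budget."
```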

Thank you for reading today’s edition.

Your feedback is valuable.


Respond to this email and tell us how you think we could add more value to this newsletter.