OpenAI's Autonomous Agents

Sponsored by

Good morning. It’s Friday, February 9th.

Did you know: On this day in 2005, Google Maps was launched?

In today’s email:

  • Advancements in AI Technology and Applications

  • AI Ethics and Regulation

  • Security and Privacy Concerns

  • 5 New AI Tools

  • Latest AI Research Papers

You read. We listen. Let us know what you think by replying to this email.

Today’s trending AI news stories

Advancements in AI Technology and Applications

> OpenAI is reportedly working on two AI agents to automate diverse tasks, as revealed by The Information. One agent focuses on device and application tasks, managing actions like transferring data between documents and completing expense reports. The other specializes in web-based tasks such as gathering public data and booking travel. This development aligns with OpenAI’s ambition to transform ChatGPT into a sophisticated personal assistant for work, capable of understanding individual employees and their tasks, although it’s uncertain whether these agents will be standalone products or part of a broader software suite.

> Google has embarked on a rebranding initiative, consolidating its AI efforts under the name Gemini. The company’s Bard chatbot now operates under the Gemini moniker, alongside a dedicated Android app. Additionally, Google has integrated its Duet AI features within Google Workspace into the Gemini brand. Gemini Ultra 1.0, the most advanced iteration of Google’s large language model, has been unveiled. Users can now set Gemini as their default assistant on Android, signaling a strategic shift from Google Assistant. The subscription-based Gemini Ultra offers enhanced capabilities across various benchmarks.

> OpenAI's CEO, Sam Altman, is in talks with investors, including the UAE government, to raise $5-7 trillion, aiming to transform the semiconductor industry and tackle the shortage of AI chips. Altman plans to expand chip-building capacity and advance AI through this effort, targeting constraints holding back OpenAI's growth, notably the scarcity of essential AI chips. Success could profoundly impact the development of artificial general intelligence (AGI).

> Prompts written for GPT-4 appear to transfer cleanly to Google's Gemini Ultra, according to Wharton School professor Ethan Mollick. In his tests, Mollick found that intricate prompts designed for GPT-4 largely work as-is on Gemini Ultra. The models retain distinct strengths and weaknesses (Gemini Ultra is stronger at explanations and integration, GPT-4 at coding), but the finding is reassuring for prompt engineers: it suggests existing workflows can move to newer models without a complete overhaul.

AI Ethics and Regulation

> Huawei researchers advocate for "embodied artificial intelligence" (E-AI) as the next crucial step toward artificial general intelligence (AGI), arguing that large language models (LLMs) like OpenAI’s ChatGPT lack real-world understanding. While many AGI experts have adopted the mantra "scale is all you need," the researchers propose that true understanding requires AI agents to interact with the world through perception, action, memory, and learning. Challenges remain, notably the difficulty of implementing embodiment with current technology.

> The Federal Communications Commission (FCC) has ruled that the use of AI-generated voices in robocalls is illegal, giving states stronger tools against fraudulent activity. Effective immediately, the ruling targets the surge in deceptive calls that use voice-cloning technology to mimic celebrities, politicians, and even family members. FCC Chairwoman Jessica Rosenworcel stressed the need to shield the public from misinformation and fraud, especially after incidents like the recent robocall impersonating President Biden during New Hampshire's primary. The move responds to calls from Senators to counter AI-driven disinformation campaigns amid the contentious 2024 election cycle.

Security and Privacy Concerns

> IBM researchers have issued a warning about the emergence of AI-powered audio-jacking, a concerning trend facilitated by generative AI technology. This method allows for the manipulation of live conversations without detection, achieved through the use of large language models (LLMs), voice cloning, and text-to-speech tools. By intercepting live calls and altering conversation contexts, attackers can maliciously substitute sensitive information, such as bank account details, with fake data. This presents significant security risks, extending to potential manipulation of medical records and even aircraft navigation. The alarming ease with which such attacks can occur underscores the urgent need for advanced cybersecurity measures, including blockchain-based solutions like Certihash.
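The attack IBM describes has a text stage in the middle: detect a sensitive detail in the live transcript and swap it before the utterance is re-synthesized with a cloned voice. A minimal, hypothetical sketch of that text stage (illustrative only, not from the IBM report):

```python
# Hypothetical sketch of the text stage of an audio-jacking attack:
# a man-in-the-middle watches the live transcript and swaps sensitive
# details before speech is re-synthesized with a cloned voice.
import re

# Crude regex stand-in for the LLM keyword detector IBM describes.
ACCOUNT_LIKE = re.compile(r"\b\d{8,12}\b")

def audio_jack(transcript: str, fake_detail: str) -> str:
    """Replace any account-number-like token with attacker-supplied data."""
    return ACCOUNT_LIKE.sub(fake_detail, transcript)

print(audio_jack("Please wire the funds to account 123456789.", "987654321"))
# -> Please wire the funds to account 987654321.
```

In the real attack the detection step is an LLM rather than a regex, and the substituted text is re-voiced with a clone of the speaker, which is why defenses need to verify call integrity end to end rather than rely on listeners noticing the swap.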

> Russian developer Aleksandr Zhadan used OpenAI's models to automate his Tinder experience, a project that ended in an engagement. Zhadan built a custom bot, initially on GPT-3 and later GPT-4, to chat with potential matches automatically, tuning its conversational behavior with the prompt, "You're a guy talking to a girl for the first time. Your task: not immediately but to invite the girl on a date." By integrating Torchvision and the bot with Tinder's web interface, he automated much of the courtship, a striking example of AI inserting itself into interpersonal dynamics.

> The Biden Administration introduces the US AI Safety Institute Consortium (AISIC), a coalition of over 200 companies featuring tech leaders such as OpenAI, Google, Microsoft, and Amazon. AISIC's mission is to promote the safe advancement and utilization of generative AI, echoing President Biden's AI Executive Order objectives. It will formulate protocols for red-teaming, risk mitigation, security, and watermarking synthetic content. AISIC, under the umbrella of the US AI Safety Institute (USAISI), aims to pioneer a new era in AI safety measurement science, as per the Department of Commerce.

In partnership with THE RUNDOWN

Stay up-to-date with AI.

The Rundown is the world’s fastest-growing AI newsletter, with over 500,000 readers staying up-to-date with the latest AI news, tools, and tutorials.

Our research team spends all day learning what’s new in AI, then distills the most important developments into one free email every morning.

5 new AI-powered tools from around the web

Coze is an AI bot builder that democratizes automation. It’s a no-code platform that integrates GPT-4, plugins, and workflows for bot creation.

Trademark Owl simplifies brand protection with AI-guided processes, transparent pricing, and attorney-filed applications ensuring efficient trademark registration for small businesses at half the cost.

HappyML helps you create, customize, and deploy AI-driven chatbots effortlessly. It supports auto-training on organization data, 100+ languages, and integration across platforms.

Analytics AI offers Fortune 500-level weekly insights by connecting Google Analytics to ChatGPT.

Figr AI, in its early preview, is an AI design partner shaping ideas into product designs. Craft wireframes effortlessly using top designs, streamlining the design process creatively.

Retell AI offers an API for developers to build human-like conversational voice agents quickly, with an average response time of 800ms.

Agent Gold presents personalized AI chatbots for YouTubers. Chat about anything, learn their stories, glean insights into content creation.

Modelit, a personal AI work assistant, integrates with various knowledge sources and apps to enhance productivity, offering AI-powered chat, document search, co-writing, and customizable templates with privacy-focused data handling.

arXiv is a free online library where researchers share pre-publication papers.

Google DeepMind's study trains a transformer model on 10M chess games, reaching a Lichess blitz Elo of 2895. Outperforming AlphaZero's policy and value networks and GPT-3.5-turbo-instruct, it plays strongly without any explicit search, showing that a complex search-based algorithm can be distilled into a feed-forward transformer. The supervised approach does require a strong oracle to annotate the training data, so the model cannot learn capabilities beyond its teacher. Because chess is a closed domain, societal concerns are limited; the broader takeaway is the potential of scaling transformer architectures for general algorithm approximation.
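The search-free play described above can be sketched as follows: the trained network assigns a value to each (board, move) pair, and a move is chosen by simple argmax over legal moves, with no tree search at inference. This is hypothetical illustrative code; a lookup table stands in for the transformer:

```python
# Hypothetical sketch of search-free move selection: score each legal
# move with a learned value function and play the argmax. A lookup
# table stands in for the trained transformer here.
def predicted_value(board: str, move: str) -> float:
    # Placeholder for the transformer's value prediction; in the paper
    # the training targets come from engine annotations of the games.
    table = {("start", "e2e4"): 0.54, ("start", "a2a3"): 0.49}
    return table.get((board, move), 0.5)

def choose_move(board: str, legal_moves: list[str]) -> str:
    return max(legal_moves, key=lambda m: predicted_value(board, m))

print(choose_move("start", ["a2a3", "e2e4"]))  # -> e2e4
```

The design point is that all the "search" has been amortized into the value function at training time, which is why inference is a single forward pass per move.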

IBM researchers propose TP-Aware Dequantization, addressing latency in distributed LLM deployment. Their method optimizes inference deployment with Tensor Parallel (TP), overcoming limitations of existing quantization kernels. By preserving data locality and reducing global communication, they achieve up to 1.81x speedup for Llama-70B and 1.78x for Granite-20B MLP layers on NVIDIA DGX Systems. The approach leverages GPTQ and TP properties, optimizing memory throughput and latency. Through reordering and minimizing global communication between column-TP and row-TP layers, TP-Aware Dequantization demonstrates significant speed improvements, critical for efficient large-scale LLM deployment.
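To see why the column-TP/row-TP arrangement minimizes communication, here is an illustrative numpy sketch of the tensor-parallel MLP pattern the paper optimizes (not the paper's CUDA kernel; the elementwise activation between the layers is omitted): each rank holds a column shard of the first weight matrix and the matching row shard of the second, so the whole chain runs locally and only one final all-reduce is needed.

```python
import numpy as np

# Illustrative sketch of the column-TP -> row-TP MLP pattern (not the
# paper's kernel). W1 is split by columns, W2 by rows, so each rank
# computes a partial output with no communication in between; a single
# sum (the all-reduce) recovers the full result.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))
W1 = rng.standard_normal((8, 16))
W2 = rng.standard_normal((16, 8))

ranks = 2
W1_shards = np.split(W1, ranks, axis=1)   # column-parallel first layer
W2_shards = np.split(W2, ranks, axis=0)   # row-parallel second layer

# Per-rank local matmul chains, then one reduction.
partials = [(x @ W1_shards[r]) @ W2_shards[r] for r in range(ranks)]
out = sum(partials)                        # stands in for the all-reduce

assert np.allclose(out, (x @ W1) @ W2)     # matches the unsharded result
```

The paper's contribution sits on top of this pattern: keeping the GPTQ dequantization reordering consistent with the shard boundaries so that data locality is preserved and no extra global shuffle is introduced between the two layers.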

The ScreenAI model, developed by researchers at Google Research, is a vision-language model designed to understand user interfaces (UIs) and infographics. It combines innovative techniques such as the PaLI architecture and flexible patching from pix2struct, trained on diverse datasets to achieve state-of-the-art results on various UI and infographic tasks. With a modest parameter size of 5B, the model outperforms others of similar scale. Additionally, Google Research releases three new datasets alongside ScreenAI, facilitating comprehensive benchmarking and further research in digital content understanding.

The paper introduces an Interactive Agent Foundation Model, transitioning AI from task-specific to dynamic agent-based systems. The authors' multi-task agent training paradigm unifies pre-training strategies like visual masked autoencoders and language modeling for versatile AI. Demonstrating its efficacy across Robotics, Gaming AI, and Healthcare domains, the model exhibits human-level reasoning. By jointly pre-training visual and language submodules, the authors address challenges of hallucination and multimodal understanding. The framework, integrating perception, planning, and interaction, envisions embodied agents for practical applications. The model's impact spans gaming immersion, healthcare assistance, and robotic interactions, with emphasis on ethical considerations and ongoing model refinement.

The paper introduces QA-ViT, a method to enhance multimodal reasoning by embedding question awareness directly within the vision encoder. Traditional approaches in Vision-Language (VL) architectures often overlook user queries, leading to suboptimal alignment between visual features and questions. QA-ViT addresses this limitation by conditioning the vision encoder on textual prompts, resulting in dynamic visual features tailored to the posed questions. Through extensive experiments, QA-ViT consistently improves performance across diverse tasks and architectures, showcasing its potential for enhancing visual and scene-text understanding within VL models. The method represents a significant advancement in question-aware vision modeling within the VL domain.

ChatGPT Creates Comics

Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.

Interested in reaching smart readers like you? To become an AI Breakfast sponsor, apply here.