China's "AGI Chip"


Good morning. It’s Wednesday, October 9th.

Did you know: On this day in 2006, Google announced its purchase of YouTube for $1.65 billion in stock.

In today’s email:

  • Hugging Face’s App-Building Tool

  • Are LLMs Conscious?

  • China’s “AGI Chip”

  • 6 New AI Tools

  • Latest AI Research Papers

You read. We listen. Let us know what you think by replying to this email. Here’s how to upgrade to ad-free and support this newsletter.

Writer RAG tool: build production-ready RAG apps in minutes

RAG in just a few lines of code? We’ve launched a predefined RAG tool on our developer platform, making it easy to bring your data into a Knowledge Graph and interact with it using AI. With a single API call, Writer LLMs will intelligently call the RAG tool to chat with your data.

Integrated into Writer’s full-stack platform, it eliminates the need for a complex vendor RAG setup, making it quick to build scalable, highly accurate AI workflows: just pass the graph ID of your data as a parameter to the RAG tool.
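For flavor, here is a minimal sketch of what that single call might look like. The SDK name, chat method, model name, and graph-tool schema below are assumptions for illustration, not a transcription of Writer’s documented API; consult the developer platform for the real signatures.

```python
# Hypothetical sketch only -- package, method, model, and tool schema are assumptions.
from writerai import Writer  # assumed Python SDK name

client = Writer()  # assumes WRITER_API_KEY is set in the environment

response = client.chat.chat(
    model="palmyra-x-004",  # assumed model name
    messages=[{"role": "user", "content": "What did our Q3 report say about churn?"}],
    # The predefined RAG tool: point it at your Knowledge Graph by graph ID.
    tools=[{"type": "graph", "function": {"graph_ids": ["YOUR_GRAPH_ID"]}}],
    tool_choice="auto",  # let the model decide when to call the RAG tool
)

print(response.choices[0].message.content)
```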

Today’s trending AI news stories

Hugging Face's new tool lets devs build AI-powered web apps with OpenAI in just minutes

Hugging Face's latest tool, openai-gradio, gives developers a streamlined way to build AI-powered web apps. With just a few lines of Python, developers can spin up interactive apps using OpenAI’s language models, slashing development time from months to minutes. By abstracting away the backend complexity, openai-gradio lets small teams focus on innovation rather than infrastructure headaches.

The tool provides flexibility for customization, from input fields to output formats, making it easy to tailor AI interfaces for specific use cases. The simplicity of the setup means startups and enterprises alike can experiment with AI without breaking a sweat—or the bank. Hugging Face has essentially handed developers a fast pass to scalable AI, removing the usual technical roadblocks. Read more.
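In practice the pattern looks roughly like the sketch below: gr.load takes a model name and openai-gradio’s registry as the source, then launches a chat UI. The model name and UI keywords here are illustrative, and an OPENAI_API_KEY must be set in the environment.

```python
# Sketch of the openai-gradio pattern described above (pip install openai-gradio).
# Assumes OPENAI_API_KEY is set; model name and UI keywords are illustrative.
import gradio as gr
import openai_gradio

gr.load(
    name="gpt-4-turbo",            # an OpenAI chat model
    src=openai_gradio.registry,    # openai-gradio supplies the backend wiring
    title="Docs Q&A demo",         # optional UI customization
    examples=["Explain RAG in one paragraph."],
).launch()
```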

LLMs are 'consensus machines' similar to crowdsourcing, Harvard study finds

A recent Harvard study finds that large language models (LLMs) function much like crowdsourcing platforms, generating responses based on the statistical likelihood of word pairings rather than expert knowledge. Researchers Jim Waldo and Soline Boussard observed that while LLMs excel at providing accurate answers on well-trodden topics, they stumble on specific or contentious questions.

The study notes, “A GPT will tell us that grass is green because the words 'grass is' are most commonly followed by 'green.' It has nothing to do with the color of the lawn.” This highlights the models’ tendency to offer consensus-driven responses while glossing over the subtleties of less popular subjects.
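To make the point concrete, here is a toy, self-contained illustration of the word-pairing idea. Real LLMs condition on far richer context, but the statistical principle the researchers describe is the same:

```python
from collections import Counter, defaultdict

# Tiny corpus standing in for web-scale training text.
corpus = (
    "grass is green . grass is soft . grass is green . "
    "the sky is blue . the sky is clear ."
).split()

# Count which word follows each word (a simple bigram model).
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

# The "answer" is just the most frequent continuation seen in training.
print(followers["is"].most_common(1))  # -> [('green', 2)]: consensus, not observation
```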

The authors advocate for a critical lens when interpreting LLM outputs, especially in complex scenarios where consensus may be lacking, reinforcing the notion that “crowdsourced” knowledge isn't infallible. Read more.

China's upgraded light-powered 'AGI chip' is now a million times more efficient than before, researchers say

China's Taichi-II chiplet has elevated light-powered AI processing, achieving a million-fold increase in energy efficiency and a 40% enhancement in classification accuracy over its predecessor. Powered entirely by photons, this chip employs a "fully forward mode" training method, enabling real-time learning without the cumbersome baggage of iterative processing. Its modular design allows for impressive scalability, with multiple chiplets capable of simulating nearly 14 million artificial neurons—outpacing rivals.

Operating at over 160 trillion operations per second per watt, Taichi-II positions itself as a frontrunner in the quest for low-energy, high-performance computing. As it strides toward practical applications, it’s not just a chip; it’s a step closer to the elusive realm of artificial general intelligence, albeit with the usual caveats about its future implications. Read more.

6 new AI-powered tools from around the web

arXiv is a free online library where researchers share pre-publication papers.

Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.

Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on X.