Interview With Harvard Professor on AI in Medicine

Good morning. It’s Monday, April 10th.

We interviewed Harvard Medical School professor Dr. Isaac Kohane about the role of GPT-4 in medicine, and we explore Facebook’s new image segmentation model, which may be the most advanced image recognition model yet.

In today’s email:

  • AI Breakfast Interviews Dr. Kohane on AI in medicine

  • Edge Impulse: Pioneers of Edge AI

  • Facebook’s “Segment Anything” model for images

  • Decoding AI

  • Top 5 AI Research Papers of the week

You read. We listen. Share your feedback by replying to this email, or DM us on Twitter.

Harvard’s Dr. Kohane on AI and Medicine

Dr. Isaac Kohane, a renowned physician, computer scientist, and professor at Harvard Medical School, has recently collaborated with colleagues to evaluate the performance of GPT-4 within a medical context.

The astonishing results are detailed in the upcoming book "The AI Revolution in Medicine," which Kohane co-authored with independent journalist Carey Goldberg and Microsoft's vice president of research, Peter Lee.

The authors reveal that GPT-4 outperforms many doctors in certain areas, answering US medical licensing exam questions with over 90% accuracy.

GPT-4's capabilities extend far beyond impressive test-taking skills and fact-finding. The AI model also excels at translation, converting complex medical jargon into language understandable by a 6th-grade audience and even translating patient discharge information for non-English speakers.

We spoke with Dr. Kohane about the future of AI and medicine.

AI Breakfast: How do you think medical education and training should adapt to better prepare future doctors for working alongside AI technologies like GPT-4?

Dr. Kohane: I teach a third-year medical school class at Harvard Medical School on computationally enabled medicine.

I've already had the students experimenting with large language models (LLMs) such as ChatGPT and GPT-4. In the coming years, I will be explicitly asking the students to leverage skills in using these (and other) large language models to address problems in communicating with patients, medical workflow, diagnosis, therapeutic decision-making, and patient management.

I do not see what else we can do other than embrace this new and important learning tool, all the while repeatedly reminding our students of the importance of double-checking the content to make sure that these large language models have not confabulated their responses.

It's also clear that students are using these tools whether we guide them or not, and furthermore, many of our patients are doing the same. Therefore, embracing and understanding this new player in the patient-doctor relationship seems important and worthwhile.

AI Breakfast: In the context of global health, do you think GPT-4 could play a significant role in addressing healthcare disparities and improving access to medical expertise in underserved regions?

Dr. Kohane: There is an opportunity for GPT-4 to help reduce some of the disparities from a global perspective. This is probably not the case for the most resource-poor countries. Unfortunately, transmitting knowledge where food and water are scarce and medicines are rarely available will not have a large impact. But for those countries which have some resources and a sizable literate fraction of their citizenry, LLMs can augment the ability of healthcare paraprofessionals (e.g., village medics), as well as doctors, to deliver healthcare closer to the state of the art.

This would come through the rapid diffusion of medical knowledge via the LLMs at the point of care, in the local language, with understanding of local resource constraints. However, field testing and educational programs in the use of these LLMs are needed to ensure that these users are just as careful to check for LLM confabulations as their developed-country counterparts.

AI Breakfast: As AI technologies like GPT-4 continue to advance, how can we ensure that they remain transparent and accountable in terms of their decision-making processes, particularly in a high-stakes field like healthcare?

Dr. Kohane: There is an urgent need to define what our society, writ large, expects and needs in terms of transparency of these large language models. Is it in their operation, the data that they use for the training sets, or the reinforcement learning that modifies their outputs? This conversation has to happen soon if we are to avoid unpleasant surprises.

AI Breakfast: How do you think AI can be incorporated into medical practice in a way that enhances the doctor-patient relationship, rather than diminishing the human connection in healthcare?

Dr. Kohane: One important goal is to reduce the paperwork of doctors so that we return to facing the patient rather than our computers, and so that the documentation is mostly written as a result of our interactions with our patients.

AI Breakfast: Looking beyond GPT-4, what developments in the field of AI and medicine do you anticipate over the next 5-10 years, and how might they revolutionize healthcare?

Dr. Kohane: Developments are so rapid at present that the 5-10 year horizon seems opaque. The technology will surprise us, but I do wonder in which ways the medical-payor establishment will slow progress because of the potentially huge financial shifts that these technologies could enable.

Sponsored post

Edge Impulse: A Pioneer in Edge AI

Edge Impulse has streamlined the development and deployment of lightweight machine learning (ML) models on a wide array of devices, serving sectors such as healthcare and industry.

Key features:

  • Edge Impulse Studio Platform: A web-based platform that allows users to create datasets, train models, extract features, and deploy ML models across diverse hardware.

  • Bring Your Own Model (BYOM): Enables users to import their trained models into Edge Impulse for optimization and deployment on edge devices.

  • Python SDK: Provides ML engineers with model profiling, optimization, and C++ library generation within their Python notebooks.

Edge Impulse's innovative features have attracted large enterprises, including NASA, Oura, and Lexmark, along with over 70,000 developers.

The platform empowers ML engineers to seamlessly transition from data to ML model and device deployment, reducing time to market and making edge AI more accessible and efficient for various industries.

Not a developer, but want to know what “Edge AI” refers to?

Edge AI is the process of running AI algorithms on devices located near the data source, or "at the edge" of a network, instead of relying on cloud-based or centralized data centers for processing. These edge devices can include smartphones, IoT devices, sensors, and edge servers.

The primary benefits of Edge AI are:

  • Reduced latency: By processing data locally on edge devices, Edge AI can deliver real-time insights and faster decision-making, which is crucial for applications like autonomous vehicles, robotics, and real-time analytics.

  • Enhanced privacy and security: Since data is processed on the device itself, less data is transmitted over the network, reducing the risk of data breaches and supporting compliance with data privacy regulations.

  • Lower bandwidth requirements: Edge AI reduces the amount of data that needs to be transmitted to and from the cloud, which leads to cost savings and better network efficiency, especially in areas with limited or expensive bandwidth.

  • Improved reliability: Because Edge AI enables devices to function independently, they can continue to operate even when the network connection to the cloud or a central server is lost or compromised.
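In practice, "processing data locally" usually means loading a compact, pre-converted model with a lightweight on-device runtime instead of calling a cloud API. Here is a minimal sketch using the tflite-runtime package and a placeholder model.tflite file (both are assumptions for illustration, not Edge Impulse's own deployment format):

```python
import numpy as np
import tflite_runtime.interpreter as tflite  # lightweight on-device runtime

# Load a model that has already been converted to TensorFlow Lite format.
interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Placeholder sensor reading shaped to match the model's expected input.
sample = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

# Inference happens entirely on the device; the data never leaves the edge.
interpreter.set_tensor(input_details[0]["index"], sample)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction)
```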

Edge AI is gaining popularity across various industries, including manufacturing, healthcare, smart cities, and transportation, as it enables more efficient, responsive, and secure AI-driven applications.

Facebook’s Powerful New Image Model

The Segment Anything Model (SAM)

Facebook’s SAM can detect nearly all objects in complex images

What is Segmentation?

It’s a process in computer vision and image processing that involves dividing an image into distinct regions or segments, each containing pixels that share similar characteristics or features. This helps to identify and differentiate various objects, structures, or areas within the image, making it easier to analyze, interpret, and process the visual information for a variety of applications, such as object recognition, image editing, medical imaging, and autonomous vehicle navigation.
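As a toy illustration of what a "segment" is (not how SAM itself works), a segmentation can be thought of as a per-pixel label map. A minimal NumPy sketch that splits an image into two segments by brightness:

```python
import numpy as np

# Toy grayscale "image": a bright object on a dark background (values 0-255).
image = np.array([
    [ 10,  12,  11, 200, 210],
    [  9,  14, 198, 205, 201],
    [ 11, 199, 202, 204,  13],
], dtype=np.uint8)

# The simplest possible segmentation: group pixels by a shared characteristic
# (here, brightness). True = "object" pixels, False = "background" pixels.
mask = image > 128
print(mask.astype(int))

# Models like SAM produce far richer masks, but each mask has this same form:
# an array the size of the image marking which pixels belong to one segment.
```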

Facebook’s Segment Anything Model (SAM) is an open-source tool that allows users to identify and separate objects within images, making it easier to analyze and process them.

In simpler terms, it helps to recognize different elements in a picture and create individual "masks" for them, which are like digital outlines or layers of the objects.

This model has been trained on a massive dataset of 11 million images and 1.1 billion masks, which enables it to perform extremely well in various object segmentation tasks.

The significance of SAM lies in its ability to process images and identify objects in them with minimal input, making it highly efficient and versatile.

This has a huge potential in various applications, such as:

  • Image editing: SAM can be used in photo editing software to help users easily separate and manipulate different objects within images. For example, changing the color of a specific object or removing it entirely from the image.

  • Surveillance and security: In video monitoring systems, SAM can identify and track different objects or individuals, enhancing security measures and providing real-time insights.

  • Retail and marketing: SAM can be used to analyze shopper behavior by identifying and tracking different products and customers within a store, allowing retailers to optimize store layouts and marketing strategies.

  • Medical imaging: SAM can assist in processing medical images, like X-rays or MRIs, by accurately identifying and isolating different anatomical structures or abnormalities, which can help medical professionals in diagnosis and treatment planning.

  • Robotics and automation: In robotics applications, SAM can help robots recognize and interact with objects in their environment, allowing them to perform tasks more efficiently and adapt to new situations.

A unique aspect of SAM is that Facebook open-sourced the code for it. Making code publicly available appears to be Meta’s competitive approach to AI, and it may well become the company’s greatest strength in the AI race.
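Because the code and model weights are public, SAM can be tried locally in a few lines of Python. A minimal sketch based on the released segment-anything repository (the checkpoint file and image path below are placeholders you download and supply yourself):

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load a pretrained SAM checkpoint (downloaded separately from the repo).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")

# Automatic mode: propose masks for every object SAM can find in the image.
mask_generator = SamAutomaticMaskGenerator(sam)

# SAM expects an RGB image as a NumPy array.
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)

# Each entry is a dict with a boolean "segmentation" array (the mask itself)
# plus metadata such as "area" and "bbox".
print(len(masks), masks[0]["segmentation"].shape)
```

The repository also exposes a SamPredictor class for prompt-driven use, where a point or box indicates which object to segment; that mode is what powers the interactive demo linked below.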

Check out the segmentation demo in the link below to see how it works.

Read more: Overview, demo page

Decoding AI: A Non-Technical Explanation of Artificial Intelligence

Our new book is available April 18th

This was a fun book to write.

Decoding AI breaks down the complexities of AI into digestible concepts, walking you through its history, evolution, and real-world applications.

We'll introduce you to the key players in the AI field, as well as explain the underlying algorithms, data, and machine learning concepts that power AI systems. You'll gain a deeper understanding of deep learning, neural networks, and reinforcement learning, and we'll explore various types of AI, from rule-based systems to probabilistic networks and beyond.

The goal was to make this book an approachable discovery of how AI works.

It discusses a wide range of applications AI has in areas like natural language processing, computer vision, robotics, and predictive analytics. It also delves into the regulatory landscape and policy issues surrounding AI, as well as the potential future developments in AI, such as its applications in healthcare, education, transportation, and even space exploration.

You'll also learn the difference between narrow AI and Artificial General Intelligence (AGI), and how to get started with using AI through tips and resources.

The link below is a 33%-off pre-order discount for readers of the newsletter (Premium Subscribers receive a free copy).

The ebook will be delivered to your email on April 18th. It’s a fun, easy read for the endlessly curious.

5 Most Viewed Papers from the past 7 days

Note: arXiv is a free online library where scientists share their research papers before they are published. These are the 5 most viewed papers in the last week.

3x the information, for less than $2/week

Stay informed, stay ahead: Your premium AI resource.

AI Breakfast Business Premium: a comprehensive analysis of the latest AI news and developments for business leaders and investors.

Email schedule:

Monday: All subscribers
Wednesday: Business Premium
Friday: Business Premium

Business Premium members also receive:

- Discounts on industry conferences like Ai4
- Discounts on AI tools for business (like Jasper)
- Quarterly AI State of the Industry report (June 1st)
- Free digital download of our upcoming book Decoding AI: A Non-Technical Explanation of Artificial Intelligence, available April 18th

Thank you for reading today’s edition.

Your feedback is valuable.

Respond to this email and tell us how you think we could add more value to this newsletter.
