Google’s Med-PaLM 2, an AI tool designed to answer questions about medical information, has been tested at the Mayo Clinic research hospital, among others, since April, The Wall Street Journal reported this morning. Med-PaLM 2 is a variant of PaLM 2, which was announced at Google I/O in May this year. PaLM 2 is the language model behind Google’s Bard.
The Journal’s report also cites research made public by Google in May (pdf) showing that Med-PaLM 2 still suffers from some of the accuracy issues we’re used to seeing in large language models. In that study, physicians found more inaccurate and irrelevant information in answers provided by Google’s Med-PaLM and Med-PaLM 2 than in answers written by other physicians.
However, on almost every other metric, such as showing evidence of reasoning, giving responses supported by consensus, or showing no sign of incorrect comprehension, Med-PaLM 2 performed more or less as well as the actual doctors.
The WSJ reported that customers testing Med-PaLM 2 will retain control of their data, which will be encrypted, and that Google will not have access to it.
According to the WSJ, Google senior research director Greg Corrado said Med-PaLM 2 is still in its early stages. Corrado said that while he wouldn’t want it to be part of his own family’s “health care journey” yet, he believes Med-PaLM 2 “takes the places in health care where AI can be beneficial and expands them by 10-fold.”
We’ve reached out to Google and the Mayo Clinic for more information.