
Medicine in times of artificial intelligence: dubious ally or the definitive tool?
Thursday, April 11, 2024 - 16:59
Source: Pexels

For Jaime De Los Hoyos, head of the Department of Biomedical Informatics at the Clínica Alemana in Santiago, Chile, generative AI has potential as a tool for classifying medical reports, and he also highlighted its ability to assist in diagnoses.

“We have never seen advances like those brought by large language models such as ChatGPT. Historically, chatbots were very primitive, while today's chatbots can answer basic questions the way a person would. It is not surprising that these systems are now capable of passing Turing tests and medical knowledge exams,” says Jaime De Los Hoyos Moreno, head of the Department of Biomedical Informatics at the Clínica Alemana in Santiago, Chile.

With these words, the specialist reflected on the prominence that artificial intelligence (AI) has gained in recent years, after participating in the conference “Artificial Intelligence in Health: present and future perspectives” this Wednesday during the 9th Latin American Congress of Technology and Business América Digital, held in Santiago, Chile.

A surgeon by profession, De Los Hoyos highlights that medicine has wide potential for the use of AI because it traditionally handles a large amount of data that “has been systematized and managed in hospital information systems at public and private medical centers.” On the other hand, although AI is not a recent concept, the emergence of generative AI has marked a technological milestone. The reason is that these models now emulate the human ability to solve problems.

Later, the specialist showed a video of a New Year's celebration in China that turned out to be the creation of Sora, an OpenAI model that creates artificial videos from instructions given by the user. “There has been a rapid evolution of AI systems. We have gone from systems limited to classifying categories to others capable of generating text, video and even music,” declared the speaker.

It should be noted that medicine has a long-standing relationship with AI if we consider the application of “classifier” or machine learning models. In this context, De Los Hoyos mentions bone age prediction systems: models that make it possible to evaluate growth problems in infants by studying their bones and checking whether their biological age coincides with their chronological age.
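To make the idea concrete, here is a minimal, hypothetical sketch in Python of the kind of check such a system supports: a model estimates bone age from an X-ray, and the result is compared with the patient's chronological age. The function name and the two-year threshold are illustrative assumptions, not details of the systems mentioned in the talk.

```python
# Illustrative sketch only: compares a model-estimated bone age with the
# patient's chronological age and flags a large discrepancy. The 2-year
# threshold is an arbitrary assumption chosen for this example, not a
# clinical criterion from the talk.

def flag_growth_concern(predicted_bone_age_years: float,
                        chronological_age_years: float,
                        threshold_years: float = 2.0) -> bool:
    """Return True when bone age and chronological age diverge markedly."""
    return abs(predicted_bone_age_years - chronological_age_years) > threshold_years


if __name__ == "__main__":
    # Example: a 10-year-old whose bones look like those of a 7.5-year-old.
    print(flag_growth_concern(predicted_bone_age_years=7.5,
                              chronological_age_years=10.0))  # True
```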

Later on, machine learning applications extended to the analysis of neuroimages to detect stroke risk in elderly people. “The scans alone did not give you the precision to initiate anti-stroke treatment. The AI informs you whether you are still in the ‘window,’ the period in which it is possible to save the functional capacity of the brain or the patient's life,” the speaker clarifies.

However, these systems operate in an automated manner, following previously established commands. By contrast, generative AI closely resembles the flexibility of human intelligence when creating content. In this way, the ChatGPT image creator can produce photos of a young female doctor as well as of human skin tissue. When the latter is examined, a controversy arises: it looks realistic, but it is not accurate. De Los Hoyos notes that although the image resembles those in medical books, the drawing is not in proportion to the measurements of a real person.

It turns out that ChatGPT is a pre-trained system fed with enormous volumes of textual information. Thus, it can recognize patterns to answer the questions users ask. “But the question arises about resorting to these portals to find out something like the current treatment for pneumonia with certain characteristics. The system will have the capacity to respond, but only up to a certain point,” warns the Clínica Alemana executive.

The main problem with these models stems from their greatest virtue: the breadth of their database. In collecting such a large amount of information, sources of varying quality are gathered. And since generative AI models prioritize that a response be coherent over being precise, the phenomenon of “hallucination” can occur: in the drive to produce coherent responses, even imprecise ones, the system makes up information that is not real.

Incidentally, De Los Hoyos says that the first versions of ChatGPT were prone to changing their position on a topic if the user insisted that an answer was not correct. That is no longer so easy. So, since “hallucination” remains a latent problem, it would not be strange to think that the example of the “false tissue” could be replicated in textual information.

“Large language models are a sophisticated version of the smartphone word predictor. We give them a prompt and the system searches its database for what may come next,” explains the doctor. Another point to take into account is that the models are not specifically trained to address health issues, but rather are generalists.
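The analogy can be illustrated with a toy sketch in Python: a table of word-to-next-word counts built from a tiny corpus, which then suggests the most frequent continuation. This is only the intuition behind the comparison; real large language models are neural networks trained on vast amounts of text, not frequency tables, and the corpus below is invented for the example.

```python
# Toy illustration of the "word predictor" analogy: count which word most
# often follows each word in a tiny invented corpus, then suggest the most
# frequent continuation. Real LLMs are far more complex than this.
from collections import Counter, defaultdict

corpus = ("the patient has pneumonia the patient needs antibiotics "
          "the doctor sees the patient").split()

# Table of next-word counts for every word observed in the corpus.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word that most frequently followed `word`, or None."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))     # 'patient'
print(predict_next("doctor"))  # 'sees'
```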

However, generative AI models can still perform effectively in multiple tasks in medicine. For example, De Los Hoyos mentions summarizing patient records by classifying the most important findings, such as creatinine or glucose values. In addition, they can process a set of medical records to check which patient is at greater risk of developing certain diseases.
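As a rough illustration of that kind of record triage, the sketch below pulls creatinine and glucose values out of a free-text note with regular expressions. The note text and lab names are invented for the example, and the talk describes doing this with generative models rather than hand-written rules.

```python
# Illustrative sketch: extract a few key lab values from a free-text note.
# The note here is invented; a generative model would handle far messier,
# unstructured records than a regular expression can.
import re

note = ("Patient admitted with fatigue. Labs: creatinine 1.4 mg/dL, "
        "glucose 182 mg/dL, sodium 139 mEq/L.")

def extract_lab_values(text):
    """Return a mapping of lab name to numeric value for selected findings."""
    findings = {}
    for lab in ("creatinine", "glucose"):
        match = re.search(rf"{lab}\s+([\d.]+)", text, flags=re.IGNORECASE)
        if match:
            findings[lab] = float(match.group(1))
    return findings

print(extract_lab_values(note))  # {'creatinine': 1.4, 'glucose': 182.0}
```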

Regarding the future of AI in medicine and society, De Los Hoyos is not alarmist. Far from predicting an apocalyptic scenario, the speaker points out that Friedman's Theorem, one of the basic theses of biomedical informatics, will hold: generative AI will not be a replacement for human activity, but rather a complement. Ultimately, human intelligence combined with technology is better than either on its own. “We are going to see much more progress on this issue, but we must be careful, because ultimately, we have human beings at the forefront,” the expert noted.

Authors

Sergio Herrera Deza