
Generative AI: Will New Tech Transform Healthcare?

Artificial intelligence (AI) has found itself at the forefront of the scientific world in recent years. Despite existing as a concept for decades, AI was catapulted into the limelight with the release of the generative AI tool ChatGPT in late 2022.

ChatGPT and similar bots were touted as a breakthrough that would transform the way the world works. But how exactly can this technology be used to further healthcare? In this feature, we explore the potential uses of generative AI in this field, with the help of ChatGPT itself.

AI may have gained notoriety in the 2020s due to its prevalence in our everyday lives, but the term has been around since the mid-20th century. Alan Turing, namesake of the famous Turing test, was one of the first to champion the idea of ‘machine intelligence’ in the 1940s. AI has been used in many forms over the years, often in ways that we do not even notice, like recommendations from websites based on our viewing habits, or suggestions of what to buy based on previous purchases.

But what has really brought AI to the masses in recent years is the rise of generative AI. This refers to a class of AI systems designed to generate content or information that resembles, or is derived from, existing data. These models are trained on large datasets and learn the underlying patterns and structures within that data to produce new content autonomously. This includes, but is not limited to, the production of text, images and even music. Whilst this seems revolutionary, it has been the subject of many scandals in recent times, sparking debate over ethics, copyright and ownership.
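
To make that idea concrete, here is a minimal sketch in Python of the underlying principle: learn statistical patterns from existing data, then sample new content from those patterns. Real generative models are vastly more sophisticated, but this learn-then-generate loop is the same in spirit.

```python
import random
from collections import defaultdict

# Toy "training data" - real models learn from billions of documents.
corpus = "the patient was seen by the doctor and the doctor reviewed the scan"

# Learn a simple pattern: which word tends to follow which.
transitions = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

# Generate new text by sampling from the learned patterns.
def generate(start, length=8):
    out = [start]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("the"))
```

Each run can produce a different sentence from the same learned patterns, which is precisely the ‘generative’ part.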

Perhaps the biggest breakthrough in the generative AI world came in late 2022 with the launch of ChatGPT, an AI chatbot available to use for free online. The chatbot is based on a large language model and is trained on data from the internet, which includes discussions, forum posts, social media interactions and more. The format of the tool is conversational, and the bot learns to generate responses based on the context provided in the chat history. However, as with many other generative AI tools, ChatGPT has found itself at the centre of a number of controversies, notably when it was caught ‘lying’ to users, providing false information alongside fabricated references, and allegedly responding positively to bribery.

WHAT IS GENERATIVE AI?

ChatGPT has been all over the news in the last two years, and many of us will have had a conversation or ten with the bot. But ChatGPT isn’t the only form of generative AI, and you’d be forgiven for not knowing much about how it works. We asked ChatGPT to explain it for us.

Figure 1. Image showing a response from ChatGPT when asked to define generative AI in simple terms. Generated March 2024.

What is generative AI’s place in science?

Despite the controversies, many still believe that generative AI will transform the scientific and healthcare landscapes. With incredible pattern recognition and the ability to quickly sift through databases, generative AI tools are already finding their way into scientific processes like drug discovery.


But what about closer to home? Could ChatGPT or a similar tool one day replace your doctor?

A number of recent studies have tested the chatbot for use in clinical settings, with mixed results. But who better to ask about the use of generative AI in healthcare than a generative AI chatbot itself? Below, we dive into some of the areas that ChatGPT itself states could be enhanced by the use of generative AI.

MEET THE CHATBOTS CHANGING THE WORLD

We’ve all heard of OpenAI’s ChatGPT and Google’s Gemini (formerly known as Bard), two of the AI tools that rocked the world in the last couple of years. But at the time of writing, a newer model was gaining prominence.

Enter Copilot, Microsoft’s generative AI endeavour. Described as ‘your everyday AI companion’, the tool was launched in early 2023. However, it gained wider popularity in early 2024 when it was integrated into the Windows 11 taskbar.

Copilot consists of multiple tailored chatbots, which provide expertise in specific areas such as cooking and travel. It also hosts an image generator. What’s more, the free version of Copilot has access to the internet and can provide references for its answers, addressing one of the flaws of ChatGPT.

Medical imaging

The first suggestion that ChatGPT provided was the use of generative AI to enhance medical imaging procedures. Generative AI models can enhance the resolution of images obtained from modalities such as MRI, CT scans and ultrasounds. By generating high-resolution images from lower-resolution inputs, these models can provide finer details and improve the visibility of anatomical structures, facilitating more accurate diagnoses and treatment planning. There is also the potential for generative AI models to remove noise from images and fill in gaps left by incomplete or low-quality data. Additionally, the use of AI to integrate multimodal data now makes it possible to gain combined insights from sources such as histology imaging and genomics to inform precision healthcare.
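
To illustrate the denoising idea, the sketch below trains a tiny convolutional network to recover a clean image from a noisy copy. It uses random synthetic ‘scans’ purely as stand-in data, and PyTorch is our assumption for tooling – a real system would be trained on large sets of genuine medical images and validated extensively.

```python
import torch
import torch.nn as nn

# Stand-in data: random 32x32 "images"; a real model trains on genuine scans.
clean = torch.rand(64, 1, 32, 32)
noisy = clean + 0.2 * torch.randn_like(clean)  # simulate acquisition noise

# A tiny convolutional denoiser: maps a noisy image towards its clean version.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Train: learn to reproduce the clean image from the noisy input.
for step in range(200):
    optimiser.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    optimiser.step()

# Denoise a new noisy image with the trained model.
with torch.no_grad():
    restored = model(noisy[:1])
print(f"final training loss: {loss.item():.4f}")
```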

But aside from generating high-quality images, generative AI may also be able to analyse imaging data. Some models can detect abnormalities and physical signs of disease more efficiently than the naked eye; one study reported that an AI model could correctly identify the stage of a tumour from a CT scan 82% of the time, whilst trained clinicians only got it right around 44% of the time. Should AI be used in the clinic for this purpose, it could prevent the need for more invasive diagnostic procedures.

However, that is not the end of the story. Unfortunately, these models are not infallible, and are only as good as the data they were trained on. One example from a few years ago demonstrated this problem well. An AI model had been trained to detect malignancies in medical images, but, in the training data, cancerous lesions were more likely to be accompanied by a ruler. The model therefore learned that images containing a ruler were most likely to be malignant. So, whilst the AI often got the answer right, it wasn’t for the right reasons.

Moreover, some models struggle to apply their imaging knowledge to non-white individuals, due to a lack of diversity in training data. This highlights an important concern when investigating these tools for use in the medical field – we must ensure that training datasets are appropriate, unbiased and of a high quality before we can assume that the model is reliable.

Diagnostics and treatment

A second suggestion made by ChatGPT regarded the use of generative AI to aid in diagnostics and the development of treatment plans. Recently, several studies have investigated the use of AI chatbots to analyse patient data. By feeding AI models information from electronic health records, these tools can recognise patterns and compare them against extensive databases, suggesting in moments a diagnosis that a human might take far longer to reach.

In one experiment, ChatGPT was given details of patients who had attended the emergency department of a Dutch hospital in 2022. The information, complemented by laboratory test results such as urine and blood analyses, was presented to both the chatbot and trained clinicians, who each ranked their top five diagnoses. The AI model included the correct diagnosis in its top five over 97% of the time, whilst the clinicians only managed to do so in 87% of cases. Another study showed that the bot was better able to make treatment recommendations for depression due to a supposed lack of bias about gender and social class.
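
For clarity, figures like these are top-five accuracies: a case counts as a success when the correct diagnosis appears anywhere in the ranked shortlist. A minimal sketch of that metric, using made-up case data, might look like this:

```python
# Hypothetical evaluation data: each case has a true diagnosis and a
# ranked top-five list produced by the model (or by clinicians).
cases = [
    {"truth": "appendicitis", "top5": ["gastroenteritis", "appendicitis",
                                       "diverticulitis", "colic", "UTI"]},
    {"truth": "pneumonia",    "top5": ["bronchitis", "asthma", "COPD",
                                       "pleurisy", "heart failure"]},
]

def top5_accuracy(cases):
    """Fraction of cases whose true diagnosis appears in the top-five list."""
    hits = sum(case["truth"] in case["top5"] for case in cases)
    return hits / len(cases)

print(f"top-5 accuracy: {top5_accuracy(cases):.0%}")  # 50% for this toy set
```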

On the other hand, a recent study showed that an AI algorithm trained to predict treatment outcomes in schizophrenia patients failed when analysing data from new individuals, despite having performed well on its training dataset. Ultimately, when presented with the challenge of assessing never-before-seen data, the model was no better than random chance. The results highlighted the need for rigorous testing before this kind of tech is implemented in the clinic, ensuring that models generalise beyond their training sets.

EDUCATING PROFESSIONALS

Another important area that ChatGPT believed could be transformed by generative AI was the continuing education of medical professionals. Beyond simply using tools like online chatbots to obtain information, the tech can be used to generate case studies that may be hard to come by, create anatomical models for research or even realistic simulations for assessment.

What do you think about this? Should medical professionals be trained on generated data, or should all materials be obtained from real patients, no matter how rare the condition?

Patient communications

Another use of generative AI in healthcare that ChatGPT suggested is patient communications. Healthcare jargon can often be confusing, and some believe that using a conversational model to distil complex information into plain language could be helpful.

In a study published in March 2024, researchers used a large language model to summarise patients’ discharge notes. Typically, these records can be filled with technical language, unclear abbreviations and other terms that may not be immediately understandable by the patient. Using generative AI, the notes were translated into a ‘patient-friendly’ format, which trained professionals then assessed for readability and understandability. The reviewers concluded that the rewritten notes were indeed more understandable and, in most cases, presented a complete picture. However, in 18% of cases, concerning omissions and inaccuracies were noted. This highlighted the need for revision by humans, suggesting that AI models are not yet ready to replace trained professionals.
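
As a rough sketch of how such a pipeline might look, the snippet below prompts a large language model to rewrite a jargon-heavy note in plain language. It uses OpenAI’s Python client; the model name, prompt wording and example note are our own illustrative assumptions rather than those used in the study, and, as the researchers stress, any real output would still need clinician review.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative, made-up discharge note full of clinical shorthand.
discharge_note = (
    "Pt adm w/ CAP. Tx IV abx, transitioned to PO. "
    "F/u w/ GP in 2/52. Rpt CXR in 6/52."
)

# Illustrative prompt - a real study would use carefully validated wording,
# and the output would still need review by a clinician before release.
response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice of model
    messages=[
        {"role": "system",
         "content": "Rewrite hospital discharge notes in plain, "
                    "patient-friendly language. Expand all abbreviations."},
        {"role": "user", "content": discharge_note},
    ],
)
print(response.choices[0].message.content)
```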

Additionally, another recent study assessed ChatGPT’s ability to respond to common questions from patients regarding medical imaging. Once again, the algorithm was mostly able to respond accurately and completely, but a few mistakes highlighted a serious need for professional oversight.

What about outside of the clinic? We’ve all been told by doctors not to google our symptoms, but can something like ChatGPT give a more nuanced answer to your burning questions before you consult your primary healthcare professional? Although several studies have assessed the use of the model for this purpose, ultimately even OpenAI itself has cautioned against this approach. We asked ChatGPT to try to diagnose us based on our symptoms, and it gave a very diplomatic answer.

Figure 2. Image showing a response from ChatGPT when asked to give medical advice. Generated March 2024.

COMMUNICATION CONTROVERSY

Generative AI might be changing the world, and many are investigating its utility in science and patient communications. That doesn’t mean it can’t fail, and it certainly doesn’t mean we should publish generated content without giving it a once-over.

This was brought firmly to the world’s attention after the publication of a recent research paper in Frontiers in Cell and Developmental Biology, in which images and text were later found to have been generated using AI.

While it’s true that there is probably a lot of AI-generated content right under our noses, the paper in question was riddled with typos, inaccuracies and shockingly not-safe-for-work images. The article was quickly retracted, but not before sparking outrage over the risks of unchecked AI usage and the integrity of the peer review process.

Where are we going now?

It’s hard to escape conversation about how generative AI will change the world. Something new seems to appear every day – even this article will likely be swiftly outdated.

It’s clear that AI isn’t going away, and every new development brings us closer to a world where AI chatbots, image generators and music makers become more integral in our lives. But when considering the impact of this technology on healthcare and science, we mustn’t jump the gun.

The common theme in all of the above challenges is the lack of a human touch. Whilst this is sometimes a good thing – a bot may have fewer inherent social biases, for example – it ultimately allows misinformation and inaccuracies to trickle through. A 97% diagnostic success rate may seem high, but the remaining 3% of patients could still find themselves in a dangerous position. As for tackling bias, it’s abundantly clear that these models are only as good as the data they are trained on, meaning that there is still the potential for human judgements to seep through.

Despite these flaws, scientists are still pushing ahead to develop tools that harness the power and efficiency of generative AI to improve healthcare. For example, the model GatorTron was developed specifically for healthcare delivery using electronic health records. However, a recent study published by Festival of Genomics & Biodata alumnus Joshua Au Yeung concluded that this kind of chatbot model is not ready for use in a clinical setting. In fact, the study showed that both ChatGPT and Foresight (a model trained specifically for this purpose) had inherent racial biases when dealing with diverse patient groups, among a number of other issues.

But negative study results don’t spell the end for generative AI in the healthcare field. The topic may be ubiquitous, but it’s easy to forget that the rise of these chatbots has only come about in the last two years. The field is very much still in its infancy, and as researchers fine-tune training sets, work to tackle bias and find the right target audiences for their endeavours, it is likely we will begin to see improvements to what already looks like a promising new avenue for healthcare delivery.