Utilizing Large Language Models in Healthcare
There is a lot of talk, hype, and fear around several new artificial intelligence applications, such as OpenAI's ChatGPT, that fall under the categories of "generative AI" and large language models. The following explains these models at a very high level and discusses their potential use in the healthcare setting.
Generative AI is a type of artificial intelligence that can generate new content without explicit instructions from a human programmer. Generative AI takes advantage of machine learning algorithms that are based on artificial neural networks (modeled after the human brain) with a so-called “transformer architecture.” When trained on massively large datasets of unlabeled text, they can generate novel human-like text as an output.
Large language models
These technologies, when trained on text data, are also referred to as "large language models" (LLMs). A prominent example is OpenAI's GPT-3 (Generative Pre-trained Transformer 3), the model behind ChatGPT, which specializes in generating natural-language text. It works by processing massive amounts of input text and using that information to predict the next word or sequence of words in a given context.
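The core idea of next-word prediction can be illustrated with a toy sketch. The snippet below is only a conceptual analogy, not how an LLM is actually built: real models use transformer neural networks with billions of parameters, whereas this simply counts which word follows which in a small made-up text.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# each word in a (made-up) training text, then predict the most
# frequent follower. Real LLMs learn these patterns with neural
# networks over vastly larger corpora.
corpus = (
    "the patient reports chest pain . "
    "the patient reports shortness of breath . "
    "the patient denies chest pain ."
).split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("patient"))  # -> "reports" (seen twice vs. "denies" once)
```

Chaining such predictions word after word is, at a very high level, how these models generate fluent text one token at a time.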
LLMs like GPT-3 have a wide range of applications, from assisting with language translation to generating human-like chatbot conversations. A limitation of any model trained on historical data is the currency of the data it's trained on. GPT-3, the version currently available online through ChatGPT, has a knowledge cutoff of September 2021. Because of competitive concerns, OpenAI has not published the currency of the data in its latest model, GPT-4. Bard, Google's LLM-based chatbot, claims to have up-to-date information but is not yet generally available.
Concerns about these AI solutions
Generative AI solutions raise ethical concerns about the potential misuse of AI-generated content and its impact on human employment in industries that rely on language-based tasks. There are also concerns about whether the data the models are trained on are fully factual and unbiased; when a model is trained on the corpus of the internet, this is certainly a concern. But the proverbial "cat is out of the bag," and despite the current raft of concerns about the dangers of AI, especially these new generative AI tools, expect this area of modern AI to continue to accelerate.
At a recent conference on LLMs, a prominent expert in medical AI was asked what he saw happening with LLMs in 3 years. His response was that he hadn't even heard of ChatGPT until a year ago, and that things are changing so fast he couldn't predict what was likely to happen in 6 months! Buckle up and enjoy the ride.
Using LLMs to address issues in healthcare
How do, or should, we utilize these AI models in a safe and effective way to address issues in healthcare? Instead of the term artificial intelligence (AI), invert the abbreviation and think of intelligence augmentation (IA): how these tools might augment human intelligence to help us care for patients.
One generative model focused on healthcare is Glass AI. While the details are not specifically disclosed, Glass AI is an LLM trained on medical data sources rather than the general internet. When data about a patient is entered into the tool, it returns a differential diagnosis of several possible disease states, suggested treatment options, and references to support these recommendations. This is clearly a "game changer" in augmenting the intelligence of caregivers. With current shortages of healthcare providers at all levels, such a tool will expand the capacity to provide care rather than replace current practitioners, as is the concern in industries outside of healthcare.
ChatGPT and other general-purpose generative models will also have a prominent role in healthcare well into the future, but how best to apply them is still being worked out by academicians as well as individual practitioners experimenting with their value in practice. For example, a generative LLM trained on a health system's own data rather than the general internet could have tremendous value at all levels of the organization. Mundane duties such as preparing notes, discharge summaries, and patient education materials are also prime targets for this technology, and offloading them to an LLM could help prevent clinician burnout.
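As a sketch of how such documentation tasks might be delegated, the snippet below assembles a structured prompt that could be sent to an LLM to draft a discharge summary from chart facts. The function name, field names, and prompt wording are illustrative assumptions, not any vendor's actual API, and any draft the model returns would still require clinician review before use.

```python
# Hypothetical sketch: build a structured prompt asking an LLM to draft
# a discharge summary from chart facts. The prompt instructs the model
# to stick to the supplied facts; a clinician would review any output.
def build_discharge_prompt(patient):
    facts = "\n".join(f"- {key}: {value}" for key, value in patient.items())
    return (
        "Draft a patient-friendly discharge summary from these chart facts.\n"
        "Use only the facts provided; do not invent clinical details.\n"
        f"{facts}"
    )

prompt = build_discharge_prompt({
    "diagnosis": "community-acquired pneumonia",
    "treatment": "amoxicillin 875 mg twice daily for 7 days",
    "follow-up": "primary care visit in 2 weeks",
})
print(prompt)
```

Keeping the model constrained to supplied facts, and keeping a clinician in the loop, are two of the safeguards that make this kind of "augmentation" use case more tractable than fully autonomous clinical AI.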
Everything in healthcare seems to be regulated by state and federal governments. Regulation of these new generative AI tools is coming, but little currently exists outside the federal government. Obviously, HIPAA and privacy rules apply to healthcare and therefore to the use of these models. The FDA is taking the lead at the national level by regulating software as a medical device, and new initiatives are coming at both the national and state levels. Some of the new legislation focuses on the ability of individuals to opt out of the use of AI in their care, especially in the mental health use case. There will also likely be oversight to ensure that AI systems and their applications don't have an inherent bias.
LLMs such as ChatGPT and Glass AI will have a dominant place in our future care of patients. Cautious application of these tools in the healthcare setting is warranted, with a constant eye on their impact on the quality of care delivered and on making sure that the care of patients isn't biased by the data on which the models are trained.