Generative AI in Healthcare

Excitement for generative artificial intelligence (genAI)—a branch of artificial intelligence that uses algorithms to create new videos, images, and text resembling its reference data—is quickly spreading. The technology differs from traditional AI, which uses predictive models to detect patterns, make decisions, gather analytics, and classify data based on existing information; genAI instead produces new content, chat responses, designs, and synthetic data.

Already, it is being leveraged across a handful of industries, from engineering to marketing, and even law. And now, tech mammoths are set on integrating generative artificial intelligence into healthcare. From streamlining medical processes like scribing prescriptions to responding to patient messages, the list of potential use cases is endless. But that excitement hasn't stopped experts from scrutinizing where genAI's shortcomings could set the sector up for failure.

In the last 12 months, we’ve seen a 690% increase in documents mentioning “Gen AI” found within AlphaSense’s collection of aggregated healthcare content.

Noticing this uptick in mentions of genAI within the healthcare segment, we dove deeper into the research to outline the technology's recent developments. From its proposed applications to the products companies have already released to where experts say it will fall short, we've compiled the most crucial insights to keep you up to speed on how this groundbreaking technology is disrupting the healthcare space.

GenAI in Healthcare

According to a new report from Accenture, advances in the large language models that genAI uses could revolutionize the healthcare industry, boosting creativity and productivity for providers and patients. Nearly all healthcare provider executives (98%) and healthcare payer executives (89%) who participated in the study believe these advancements will usher in enterprise intelligence, as 40% of all working hours could be supported or augmented by language-based AI.

The reality is that a large percentage of medical institutions face scarce clinical resources. The biggest potential for generative AI in healthcare, then, is to free overworked and underpaid employees from administrative tasks so they can work at the top of their license. “A strong digital core and investments in people will be the key to reaping the full value of generative AI in a responsible and authentic way,” Accenture writes.

However, to truly realize the full benefits of genAI, upper management will have to remodel the work, and consequently the jobs, done by human beings. Ultimately, generative AI will take over some tasks, but not whole jobs. Those jobs, in turn, will have to be repurposed so that the new "9 to 5" focuses on tasks that cannot be automated, prioritizing human efficiency and effectiveness.

But the work of integrating genAI into healthcare starts with helping both clinicians and patients stay up-to-date with this technology for greater access, better experiences, and improved outcomes. Further, institutions will need to get their proprietary data ready. Foundation models for genAI need vast amounts of curated data to learn, which means organizations need to take a strategic and disciplined approach to acquiring, vetting, safeguarding, and deploying data.

Fine-tuning the pre-trained large language models with organization-specific data will allow for more accurate usage. Organizations need a modern enterprise data platform built on the cloud with a trusted set of data products. As more than half of healthcare organizations plan to use ChatGPT for learning purposes, and a similar share plan pilot cases this year, these steps will be critical to ensuring a positive outcome.
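To make the fine-tuning step above concrete, here is a minimal sketch of the data-preparation side: converting organization-specific records into the prompt/completion JSONL format that many fine-tuning APIs accept. The records, field names, and portal name below are hypothetical, synthetic examples, not real patient or organizational data.

```python
import json

# Hypothetical organization-specific Q&A records (synthetic, for illustration only)
records = [
    {"question": "What is the prior authorization turnaround time?",
     "answer": "Standard requests are processed within 72 hours."},
    {"question": "Which portal handles benefits inquiries?",
     "answer": "Members use the MyBenefits portal."},
]

def to_finetune_examples(records):
    """Convert raw Q&A records into prompt/completion pairs,
    a common shape for fine-tuning datasets."""
    return [
        {"prompt": f"Question: {r['question']}\nAnswer:",
         "completion": " " + r["answer"]}
        for r in records
    ]

examples = to_finetune_examples(records)
# Serialize one JSON object per line (JSONL), ready for upload
jsonl = "\n".join(json.dumps(e) for e in examples)
print(jsonl.splitlines()[0])
```

In practice, any real records would need to be de-identified, vetted, and safeguarded before leaving the organization's secure environment, which is exactly the disciplined data approach described above.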

Below, we’ve outlined other use cases for genAI within healthcare:

  • Clinical Decision Making: Generative AI is already assisting providers in making accurate and informed diagnoses, as it can analyze data from a patient’s medical records, lab results, previous treatments, and medical imaging, such as MRIs and X-rays, to identify potential problem areas and suggest further testing or treatment options.
  • Risk Prediction for Catastrophic Health Events: Generative AI models have emerged as a vital source of insights for scientists studying the effects of catastrophic health events, such as modeling new pandemics and developing preventive measures. Some new generative AI models are currently being trained on large amounts of protein sequences to identify new antibodies which could address infectious diseases and construct outbreak responses.
  • Personalized Medication and Care: Wearable devices collect real-time continuous data on a patient’s health indicators, including heart rate variability, blood oxygen, and blood glucose levels. By leveraging wearables and at-home monitoring devices with generative AI, healthcare providers can move away from traditionally reactive healthcare models to more proactive ones.
  • Improved Drug Discovery and Development: Generative AI has shown promising results in drug discovery and development (e.g., Insilico Medicine, a company that has developed a generative AI platform called GENTRL). With the help of genAI, doctors may no longer have to rely on old-fashioned methods such as manual patient diary entries, faxed records, and findings snail-mailed to regulatory agencies.
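The proactive-monitoring idea in the wearables use case above can be sketched with a toy example: flagging readings that deviate sharply from a patient's rolling baseline so a provider can intervene before an event. This is a minimal illustration assuming synthetic heart-rate readings and an arbitrary z-score threshold, not a clinically validated method.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, z_threshold=2.0):
    """Return indices of readings that deviate from the trailing-window
    baseline by more than z_threshold standard deviations.
    A real system would use clinically validated thresholds."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            alerts.append(i)
    return alerts

# Synthetic heart-rate series with one abrupt spike at index 8
heart_rate = [62, 64, 63, 65, 61, 63, 62, 64, 110, 63]
print(flag_anomalies(heart_rate))  # → [8]
```

Generative models extend this idea by learning a patient-specific picture of "normal" from continuous streams, rather than relying on a fixed statistical rule like the one sketched here.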

Benefits for Patients and Providers

While genAI is relatively new to the market, there's no shortage of companies racing to become the leading maker of genAI-powered healthcare tech.

Already, Microsoft has teamed up with GPT-4 developer OpenAI to understand the technology’s applications. Generative AI has proven beneficial in explaining benefits notices to patients and writing prior authorization request forms.

This past April, Microsoft unveiled plans to embed generative AI into clinical software from Epic, the biggest hospital EHR vendor in the U.S. The two companies produced the first sites integrating GPT into EHR workflows to automatically draft replies to patient messages. The companies are also bringing genAI to Epic’s hospital database, allowing laypeople to ask AI general questions instead of needing a data scientist to query specific data.

Google is also joining the race by creating its large language model, called Med-PaLM 2, which is available to specific customers for use case trials. Unlike most genAI algorithms, Med-PaLM was specifically trained on medical data, allowing it to sift through and make sense of massive amounts of healthcare information. While it still has room for improvement in answering queries, its performance is exceeding expectations. 

Unlike Microsoft’s product, Google’s Med-PaLM won’t be for patient-facing use. Rather, hospitals could use the AI to analyze data to help diagnose complex diseases, fill out records, or as a concierge for patient portals.

Another use case for genAI is streamlining medical note-taking. Physicians can spend roughly six hours a day logging notes in their EHR, which leads to less time with patients and more burnout. Nuance, a documentation company owned by Microsoft, is providing a solution to this problem. 

Last month, the company announced the integration of GPT-4 into its clinical note-taking software. This summer, Nuance is allowing providers to beta test the documentation product, called DAX Express. Between 300 and 500 physicians will test it in a private preview beginning in mid-June, before it becomes generally available in early fall.

Suki, a documentation company partnering with Google, recently launched its genAI-powered "Gen 2," which can generate a clinical note by listening in on a conversation and filling in the note automatically. By the end of the year, doctors will also be able to ask the AI questions and give it commands, such as graphing a patient's A1C levels over a three-month period.

Growing Criticism and Concern 

There’s no shortage of excitement around future use cases for genAI. And yet, some of its current applications have already drawn sharp backlash.

ChatGPT and GPT-4, OpenAI's largely consumer-focused genAI products, are capable of holding conversations, answering questions, and even writing passable high school essays. However, this past February, when the New York Times published its troubling conversations with Bing's chatbot, Sydney, which sent messages ranging from promoting infidelity to expressing a desire to be "alive," speculation arose about where these tools pull their reference data from.

While tech leaders are hyping up genAI’s streamlining capabilities, conversations around automation and the role it will play in our society are running rampant, especially with regards to the healthcare sector. Ultimately, it’s a question of how much we should be relying on genAI and what the consequences of doing so are. 

An artificial intelligence system relies on an algorithm to generate answers or make decisions. But how these results are produced brings into question three areas of ethical concern for society: privacy and surveillance, bias and discrimination, and the role of human judgment. 

Large language models can return responses to queries that are factually incorrect, irrelevant, or frankly disturbing. GPT-4 "still has many known limitations" that OpenAI is working to address, such as social biases, hallucinations, and susceptibility to adversarial prompts.

Bias then becomes another key worry with AI, especially within a healthcare setting. If an algorithm is trained on biased data, or its results are applied in biased ways, it perpetuates those biases. This leads to the question of accountability: since generative AI is relatively new, it’s unclear if the owner, user, or developer is at fault for any negative consequences stemming from its use.

Getting the Answers You Need

In a volatile market that’s producing new developments around genAI every day, it’s challenging to decipher what technology, product, or insight could revolutionize the healthcare industry next. Cutting through the noise to find insights is nearly impossible in the age of information overload. You need a tool that does all of the heavy lifting, so you can focus on leveraging information rather than searching for it. 

Discover how AlphaSense can help you stay on the leading edge by starting your free trial today. 

Tim Hafke
Content Marketing Specialist

Formerly a writer for publications and startups, Tim Hafke is a Content Marketing Specialist at AlphaSense. His prior experience includes developing content for healthcare companies serving marginalized communities.
