
Op-Ed: How Healthcare Can Pioneer Industry-Specific AI For Privacy And Optimization

By Sahar Arshad, co-founder and COO of CloudMedx.

Perhaps the most impactful way that artificial intelligence (AI) can transform lives is through healthcare. Right now, there’s a great deal of theorizing about how AI could deliver solutions to huge problems in the healthcare industry. But for me and my team, the change is already underway. We’re seeing how both predictive and generative AI can lead to concrete changes in healthcare, and how the sector can help forge a path for every industry to follow.

Turning ideas into functional solutions in healthcare is paramount. The challenges facing the sector grow every year. “The U.S. healthcare industry faces demanding conditions in 2023, including recessionary pressure, continuing high inflation rates, labor shortages, and endemic Covid-19,” McKinsey analysts report. Even worse, life expectancy is down and maternal mortality is up. Dramatic improvements to healthcare are needed.

AI solutions should improve healthcare outcomes and even help save lives while simultaneously cutting back on the massive costs in U.S. healthcare. A recent study from McKinsey and Harvard found that AI could save up to $360 billion in U.S. healthcare spending every year. “These opportunities could also lead to non-financial benefits such as improved healthcare quality, increased access, better patient experience, and greater clinician satisfaction,” the researchers wrote.

But healthcare also faces unique pressures. The risk of hallucination is especially concerning. As the Wall Street Journal reported, newly popular AI chatbots such as OpenAI’s ChatGPT and Google’s Bard sometimes “confidently and authoritatively generate statements that are flat-out wrong.” There’s no room for this kind of error in healthcare. Similarly, all data must be kept private; healthcare organizations can’t risk private information leaking through an AI system.

My team has worked to tackle these problems as we build large language model (LLM)-based applications to assist with the administrative burden that’s responsible for up to 30% of healthcare spending. Here are some things we learned along the way that can help guide other technology leaders in developing AI healthcare solutions.

Fine-Tune With Use Case Data

In developing an AI tool, creators can follow different methods. They can build their own foundation model (such as GPT-4 or PaLM), which typically requires a great deal of money and effort. Or they can take an existing foundation model and fine-tune it with rich, domain-specific data. There’s also a third approach, in which creators keep an existing foundation model as it is and use prompting techniques such as zero-shot or few-shot learning to get the desired output, then build a user interface on top. This is sometimes referred to as a “wrapper” around the base LLM.
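To make that third approach concrete, here is a minimal sketch of a few-shot prompting wrapper. The model name, example pairs and helper function are all hypothetical, and the OpenAI Python client stands in for whichever base LLM API you use:

```python
# A minimal sketch of the "wrapper" approach: the base model stays
# untouched, and a handful of in-context examples steer it toward
# domain-specific output. Model name and examples are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Few-shot examples teach the desired output format without any fine-tuning.
FEW_SHOT = [
    {"role": "user", "content": "Expand the clinical note: Pt c/o SOB"},
    {"role": "assistant", "content": "Patient complains of shortness of breath."},
    {"role": "user", "content": "Expand the clinical note: Hx of HTN"},
    {"role": "assistant", "content": "History of hypertension."},
]

def expand_note(note: str) -> str:
    """Wrap the base LLM with domain-specific few-shot prompting."""
    messages = (
        [{"role": "system", "content": "You expand clinical shorthand into plain English."}]
        + FEW_SHOT
        + [{"role": "user", "content": f"Expand the clinical note: {note}"}]
    )
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content

print(expand_note("Pt admitted w/ CP, r/o MI"))
```

The appeal of this route is speed: there’s no training run, and the examples can be updated as quickly as the vocabulary changes.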

In our work, we’ve used both the second and third approaches, depending on the use case we’re solving for. We feed anonymized healthcare datasets into a base model (on private servers), mold the model to our niche applications and incorporate our domain expertise.

For example, our tool needs to understand what shorthand and acronyms refer to in our specific industry. Many doctors, nurses and other healthcare practitioners write “DM” as the code for diabetes (referring to the technical name diabetes mellitus). We needed to teach our AI tool this so it wouldn’t assume DM refers to direct message, direct marketing, database management or other terms. The more industry-specific information you provide and the more defined the scope of the use cases, the less hallucination you’ll see.
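If you take the fine-tuning route instead, that same disambiguation can be baked into the model with supervised examples. Here’s a minimal sketch, assuming a chat-style JSONL fine-tuning format; the file name and the handful of acronyms are illustrative, not our production data:

```python
# A minimal sketch of preparing fine-tuning data that teaches a model
# industry-specific acronyms such as "DM" = diabetes mellitus.
# File name and examples are illustrative placeholders.
import json

ACRONYMS = [
    ("DM", "diabetes mellitus"),
    ("HTN", "hypertension"),
    ("CHF", "congestive heart failure"),
]

with open("clinical_acronyms.jsonl", "w") as f:
    for shorthand, meaning in ACRONYMS:
        record = {
            "messages": [
                {"role": "system", "content": "You are a clinical documentation assistant."},
                {"role": "user", "content": f"In a patient chart, what does {shorthand} stand for?"},
                {"role": "assistant", "content": f"In clinical documentation, {shorthand} stands for {meaning}."},
            ]
        }
        f.write(json.dumps(record) + "\n")
```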

Design For Firewalls With A One-Way Path

For a business to adopt it, the tool must be designed to operate strictly within firewalls. Any stakeholder in the healthcare ecosystem, whether a patient, healthcare provider or payer (such as Medicare or an insurance company), needs to know that outsiders will be unable to access the data. So the tool must not operate with a two-way channel for information flow between the organization and the outside world.

But the AI does need to be able to collect external information. For example, one of our LLM-based agents engages in automatic rules extraction. It searches through publicly available information to find how certain healthcare rules are changing, such as coding guidelines for submitting charges for patients in various demographic categories or rules for creating value-based care reports based on patients’ clinical and/or demographic data.

It should no longer be up to humans to keep track of all the changing rules and then tell the IT team to write new code; all of this should happen on its own. By enabling data controls in your AI solution, you can make sure your tool can bring in external information without risking the exposure of proprietary data.
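One way to picture such a data control is a gateway that lets the agent fetch from an allowlist of public rule sources while screening any outbound payload that looks like protected data. This simplified sketch assumes hypothetical domains, patterns and function names; a real deployment would need far more robust PHI detection:

```python
# A simplified sketch of a one-way data control: the agent may pull in
# public rule updates, but outbound traffic is restricted to an
# allowlist and screened so protected data never leaves the firewall.
# Domains, patterns and function names are illustrative only.
import re
import urllib.request
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"www.cms.gov", "www.cdc.gov"}  # example public rule sources

# Crude patterns standing in for a real PHI/PII detector.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # SSN-like identifiers
    re.compile(r"\bMRN[:#]?\s*\d+\b", re.I),   # medical record numbers
]

def fetch_public_rules(url: str) -> str:
    """Fetch external rule text; refuse any non-allowlisted destination."""
    host = urlparse(url).hostname
    if host not in ALLOWED_DOMAINS:
        raise PermissionError(f"Outbound request to {host} is not allowlisted.")
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def assert_no_phi(outbound_text: str) -> None:
    """Block any outbound payload that appears to contain protected data."""
    for pattern in PHI_PATTERNS:
        if pattern.search(outbound_text):
            raise ValueError("Outbound payload appears to contain PHI; blocked.")
```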

Humans In Charge

None of this puts AI in charge of medical decisions, such as who gets what procedure or who gets discharged from a hospital. Patients, of course, don’t want the healthcare industry to rely on AI for such things.

The key is to design AI tools to handle rote activities, freeing healthcare providers to focus on those crucial medical tasks. (Nurses, for example, have to spend an estimated 26% to 41% of their time on documentation activities.)

As the healthcare sector pioneers ways to use AI within these rules, other sectors will have good reason to follow suit.

For the full article, please click here: https://www.forbes.com/sites/forbestechcouncil/2023/06/20/how-healthcare-can-pioneer-industry-specific-ai-for-privacy-and-optimization