Juandiego Marquez is Shift's Lead Data Scientist for US Healthcare.
Amid a rapidly shifting technological arms race, no industry will be left untouched, and health insurers are no exception. With AI transforming healthcare at a rapid pace, staying on top of changing trends and keeping tabs on the technology's many iterations can help insurers use it to its full potential.
What Do We Mean by Artificial Intelligence?
While AI has become a catch-all term for machine learning, natural language processing, deep learning, and countless other use cases, in the context of FWA (Fraud, Waste, and Abuse), artificial intelligence is all about enhancing the capabilities of an investigator or analyst.
While an expert investigator may have decades of experience and intuition when it comes to fraud, the sheer volume and complexity of cases in the modern environment require a high degree of statistical analysis, a skillset that doesn't always come with the job. AI enables the automation of critical analytical functions, resulting in faster, more accurate decision-making, greater cost savings, and improved patient care.
Benefits of Artificial Intelligence in Healthcare
By adopting AI-based tools, healthcare organizations could save nearly $360 billion every year. Through a combination of real-time analytics, highly efficient resource allocation, and administrative assistance, artificial intelligence has the potential to transform provider processes, reduce healthcare costs, and greatly improve the care patients receive.
That being said, actual adoption of artificial intelligence in the industry lags behind its potential due to a combination of factors, including limited knowledge of the technology and regulatory standards that restrict the use of patient data and PHI (protected health information).
According to the U.S. Department of Health and Human Services, the HIPAA Privacy Rule "establishes national standards to protect individuals' medical records and other individually identifiable health information (collectively defined as 'protected health information') and applies to health plans, healthcare clearinghouses, and those healthcare providers that conduct certain healthcare transactions electronically."
Because of these regulatory standards, readily available tools like ChatGPT cannot be trained on the PHI needed as context to accurately answer questions surrounding FWA. The same goes for generative AI more broadly, which has enormous potential in healthcare, but only if organizations can get security right.
However, the same types of AI models that underlie ChatGPT and similar tools can be used internally by health plans and FWA vendors to detect fraud and other suspicious activities.
Investigators who once had to manually sift through relevant information can now leverage AI to parse data at a much faster rate, saving time and money in the process.
Investigating these cases involves tracking vast amounts of data. AI-enhanced investigations leverage information from diverse sources such as financial records, social media, and emails. By processing this information at scale and using advanced algorithms, AI helps uncover concealed correlations and patterns, significantly shortening the investigation timeline. Investigators no longer have to wait for obvious spikes to prove fraud; they can instead rely on patterns and models derived from automated statistical analysis.
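To make the pattern-detection idea concrete, below is a minimal sketch of unsupervised anomaly detection applied to aggregated claims data. The provider names, features, and use of scikit-learn's IsolationForest are illustrative assumptions, not a description of Shift's production models.

```python
# Minimal sketch: flagging statistically unusual provider billing patterns
# with an unsupervised anomaly detector. Feature names and values are
# illustrative, not actual model inputs.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical per-provider aggregates derived from claims data
claims = pd.DataFrame({
    "claims_per_patient":  [3.1, 2.8, 3.4, 2.9, 14.2, 3.0],
    "avg_billed_amount":   [180, 210, 195, 205, 960, 188],
    "pct_high_cost_codes": [0.05, 0.07, 0.06, 0.04, 0.61, 0.05],
}, index=["prov_A", "prov_B", "prov_C", "prov_D", "prov_E", "prov_F"])

model = IsolationForest(contamination=0.2, random_state=0)
model.fit(claims)

# -1 marks providers whose billing pattern deviates from the rest;
# these feed an investigator's queue for review, not automatic denial.
claims["flag"] = model.predict(claims)
print(claims[claims["flag"] == -1])
```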
Automating the bulk of these processes, including data entry, claims validation, and compliance checks, leads to greater operational efficiency, helping health plans further reduce operational costs and allocate more resources to improving patient care.
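As a simple illustration of what automated claims validation and compliance checks can look like, the sketch below applies a few rule checks to a claim record. The fields, codes, and thresholds are hypothetical, invented for the example rather than drawn from any payer's actual edit set.

```python
# Illustrative sketch of automated claims validation: simple rule checks
# that would otherwise be performed manually. All rules are hypothetical.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    procedure_code: str
    billed_amount: float
    diagnosis_code: str

def validate(claim: Claim) -> list[str]:
    """Return a list of validation issues; an empty list means the claim passes."""
    issues = []
    if not claim.procedure_code:
        issues.append("missing procedure code")
    if claim.billed_amount <= 0:
        issues.append("billed amount must be positive")
    if claim.billed_amount > 50_000:
        issues.append("billed amount exceeds auto-approval threshold")
    if not claim.diagnosis_code:
        issues.append("missing diagnosis code")
    return issues

print(validate(Claim("C-1001", "99213", 75_000.0, "E11.9")))
# ['billed amount exceeds auto-approval threshold']
```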
FWA leaders shouldn't just be thinking about short-term cost savings and quick fraud detection when making their investment decisions, however. The long-term ROI of AI-powered FWA programs lies in continuous improvements to fraud detection that deliver a significant financial impact over time.
The AI solutions in use today must emphasize explainability and highlight the most relevant information, so the user understands which investigative steps are likely to yield positive results and the reasoning behind particular alerts.
Opening the “AI black box” is critical to achieving greater AI adoption in the healthcare industry, where trust and transparency are cornerstone values. Understanding the rationale behind AI-generated alerts and focusing on explainability gives health plans more confidence in the results, removing the fear that every output must be manually verified. Once knowledge, familiarity, and trust are in place, health plans using AI have the potential to revolutionize the industry.
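One common way to open that black box, sketched below, is to pair a fraud-scoring model with feature attributions so an investigator can see which signals drive an alert. The data here is synthetic and the technique (a random forest with built-in feature importances) is a generic illustration, not Shift's specific explainability approach.

```python
# Illustrative sketch: pairing a fraud-scoring model with feature
# attributions so the drivers behind an alert can be inspected.
# Data, features, and labels are synthetic; in practice a library such
# as SHAP could add per-alert (local) explanations as well.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["claims_per_patient", "avg_billed_amount", "pct_high_cost_codes"]

# Synthetic training set: 200 providers, label 1 = previously confirmed FWA
X = rng.normal(size=(200, 3))
y = (X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=200) > 1).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global view: which signals the model leans on most when raising alerts
for name, importance in zip(features, model.feature_importances_):
    print(f"{name}: {importance:.2f}")

# Scoring a new provider: the probability becomes the alert priority
new_provider = [[2.0, 0.3, 2.5]]
print("alert score:", model.predict_proba(new_provider)[0, 1])
```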
Shift uses reinforcement learning to train models that automate the process of deriving meaning from patterns. AI supplements an investigator's skills by not only automating the statistical analysis but also presenting the most relevant information, along with the historical data that informs each analytic result.
At the same time, external data integration automates a typically manual investigation process and sends alerts back to investigation teams. The result for the SIU (Special Investigations Unit) is a decision-making process that is much faster and far more accurate than previous approaches. Additionally, if machine-ready data is not available, Shift's experts can reformat the data so it can be ingested into the AI model.
Leveraging AI internally can help health plans detect fraud and improve investigation efficiency, but achieving greater adoption requires an emphasis on explainability and transparency in AI solutions. While the road to full adoption is long, it will be up to the innovators to take back control and combat the next generation of FWA.
If you are ready to take that step, talk to an expert today.