Organizations across the health care program integrity industry use analytic solutions for fraud detection and prevention that ingest and analyze data to signal the threat of fraud. These solutions can process huge amounts of data from multiple sources and remove much of the human element from fraud detection. With big, sophisticated systems running behind the scenes, it is easy to assume that bias is taken out of the equation. However, fraud detection and prevention systems are still susceptible to bias, which can cause errors in their results.
Statistical bias is the tendency of a statistic to overestimate or underestimate a fact about a population. In health care fraud detection and prevention systems, it is the tendency to overestimate or underestimate the prevalence of fraud or the activity associated with a particular fraud scheme. Statistical bias can be introduced by flawed analytic logic and can cause errors in fraud detection and prevention system results. We can’t always prevent bias, but we can address it if we know it is there. While there are many types of statistical bias, we have found these four to be especially prevalent in the field of health care program integrity: chronological, selection, confirmation, and system.
Types of Bias in Health Care Fraud Detection Systems
- Chronological Bias can be introduced when comparing historical data to current data, or when using current data to predict future patterns. It occurs when models fail to account for events and trends that cause significant differences between the two time frames. For example, if a system or model predicts the future cost of a procedure based on past costs, failing to account for a market or technology change can throw predictions significantly off. Chronological bias is common in models that perform comparisons over time. Be aware of market and policy changes and adjust models accordingly, and be sure to research unexpected findings to determine whether a market or policy change is affecting your model.
- Selection Bias occurs when certain groups are more likely to be included in data collection and analysis than others. This can cause organizations to make incorrect assumptions about a population based on a sample when, in fact, there are key differences. For example, if a system models potentially fraudulent behavior on providers in a certain geographic area, its results are likely to be biased. It may over- or under-identify potential fraud in other geographic areas because the sample used to create the model is not representative of the population. To avoid selection bias, identify trends related to demographics, geography, and other traits before building and deploying a model. Also, review cases your system or model missed, determine whether selection bias played a part, and adjust models to account for differences.
- Confirmation Bias is the tendency to interpret information in a way that confirms our perceptions. It is commonly introduced when reviewing model or system results. If an analyst or investigator expects to see fraud, they may falsely conclude that they do see fraud. Look out for shared perceptions that are not backed by data and for a personal stake in a model or system. A collaborative approach can mitigate confirmation bias. This approach brings in people from a variety of backgrounds to review system hits before taking action.
- Finally, System Bias occurs when organizations stack the deck in favor of a fraud detection and prevention system, ensuring it produces a requisite number of hits. This is tempting because these systems are often expensive, and organizations want users to accept the system and to show a return on investment. System bias can be detected after a system produces results, but it can also be introduced at the system's inception. It can result in artificially inflated success rates, failure to adjust the system's models, and an inability to detect actual fraud cases. Prevent system bias by avoiding quotas for system outcomes: use thresholds based on risk scores rather than a required percentage of hits. Establish regular evaluations of the models and the detection system, assess the system against a variety of criteria, and follow hits to resolution to determine whether they are actionable.
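To make chronological bias concrete, here is a minimal, hypothetical Python sketch. The function name and all dollar figures are invented for illustration: a naive forecast built only on pre-change costs systematically overestimates costs after a fee-schedule cut it never saw.

```python
# Hypothetical sketch of chronological bias: a forecast built on
# pre-change cost data badly misses after a market or policy shift.
# All figures are invented, not drawn from real claims data.

def forecast_next(costs):
    """Naive forecast: project the average of recent costs forward."""
    return sum(costs) / len(costs)

# Average allowed amounts for a hypothetical procedure, three past years
historical_costs = [100.0, 104.0, 108.0]

# A hypothetical fee-schedule change cuts reimbursement ~30% this year
actual_cost = 72.0

prediction = forecast_next(historical_costs)  # 104.0
error = prediction - actual_cost              # +32.0, a large overestimate

print(f"predicted={prediction:.1f} actual={actual_cost:.1f} error={error:+.1f}")
```

A model left unadjusted here would flag every provider billing at the new, correct rate as an outlier, which is why the policy change itself has to be built into the comparison.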
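Selection bias can be sketched the same way. In this hypothetical example (regions, means, and sample sizes are all invented), a baseline calibrated only on providers from one region makes the normal billing pattern of another region look anomalous.

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical population: providers in two regions with different
# (legitimate) average billing levels. All values are invented.
region_a = [random.gauss(10, 2) for _ in range(1000)]  # avg ~10 units/claim
region_b = [random.gauss(14, 2) for _ in range(1000)]  # avg ~14 units/claim
population = region_a + region_b

true_mean = sum(population) / len(population)       # ~12 across all providers

# A model calibrated only on Region A providers...
biased_sample_mean = sum(region_a) / len(region_a)  # ~10

# ...would treat Region B's normal billing (~14) as suspicious,
# over-identifying "fraud" in Region B and under-identifying it in A.
print(f"population mean={true_mean:.1f}, biased sample mean={biased_sample_mean:.1f}")
```

Comparing the sample's demographics and geography against the full population before deployment, as suggested above, is exactly what catches this gap.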
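The quota-versus-threshold point about system bias can also be sketched. The function names, scores, and the 0.9 cutoff below are hypothetical: a fixed quota manufactures hits in a quiet week, while a score threshold lets the number of hits follow the actual risk.

```python
# Hypothetical sketch: score thresholds versus outcome quotas.
# Scores and the 0.9 cutoff are invented for illustration.

def flag_by_threshold(scores, cutoff=0.9):
    """Flag only leads whose risk score clears the cutoff."""
    return [s for s in scores if s >= cutoff]

def flag_by_quota(scores, n=5):
    """Always flag the top n leads, however low their risk scores are."""
    return sorted(scores, reverse=True)[:n]

# A quiet week: only two genuinely high-risk leads
weekly_scores = [0.95, 0.91, 0.40, 0.35, 0.30, 0.22, 0.15]

print(flag_by_threshold(weekly_scores))  # two actionable hits
print(flag_by_quota(weekly_scores))      # five hits, three of them low-risk noise
```

Following the quota's low-score hits to resolution would show few are actionable, which is the tell-tale sign of system bias described above.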
These are just four of the types of statistical bias found in health care fraud detection and prevention systems, along with best practices for avoiding them. Understanding that these systems are not immune to bias allows us to be mindful of its effects and to work toward eliminating it for better results. To recap: researching and understanding the population your model or fraud detection system covers helps you avoid chronological and selection bias; a collaborative approach to decisions on next steps helps avoid confirmation bias; and thorough model assessments and system evaluations help avoid system bias.
Contact IntegrityM’s Experienced Team of Data Analysts
IntegrityM can provide the support needed to enhance your Medicare and Medicaid fraud data analysis. We have experience designing and assessing predictive models and have worked with multiple fraud detection and prevention systems. The increasing complexity of fraud schemes underscores the need for effective, controlled data analysis. IntegrityM’s data analytics team uses proven methodologies to identify potential waste, fraud, and abuse in federal programs. For additional information on how we can assist your organization, contact us online or call (703) 535-1400.