This article is also available as a podcast; click here to listen.
This is a follow-up podcast to one we did earlier this week where we asked whether or not health insurance companies tell the truth when they are sending out denials. In other words, can you trust the information that’s in the remittance coming from the payer? Are they telling the truth or not, and can we utilize data to validate that hypothesis? If they are not telling the truth, we should be able to find the digital trace of their deception.
Most of us who’ve been in this industry a long time will say, “You can’t trust the payers (health insurance companies).” Yet almost universally, we trust the information that comes from them in order to do denials management and to determine what to do to follow up on claims, to resubmit claims, and to do appeals.
Follow-Up Analysis
This is Part 2, where we do follow-on analysis to see whether the payer was telling the truth in a particular remittance advice. In Part 1 we considered denial code CO109. This was an excellent example because the remittance descriptions generally fell into two broad categories: one saying the payer did something and one saying it didn’t. That is, the insurance company either said it had resubmitted or forwarded the claim to the correct insurance company (because the claim had been sent to the wrong payer), or it said, “No, you have to do this yourself.”
We used workflow analytics tools to parse the data and extract the descriptions, since they were buried as text deep within a Notes field. Within those parsed descriptions, identifying the keyword strings they have in common tells us what we believe the insurance company is trying to communicate. In this case, that meant grouping together phrases similar to “Resubmit claim to correct insurer” or “forward claim” (directed back at the provider, telling them to do it themselves) versus “Submitted to correct payer” or “claim forwarded” (indicating the insurance company forwarded the claim to the correct health insurer).
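The keyword-bucketing step above can be sketched in a few lines of pandas. The note texts, column names, and keyword lists here are assumptions for illustration, not the client's actual remittance wording:

```python
import pandas as pd

# Hypothetical remittance notes: in the real data these descriptions were
# buried as free text deep within a Notes field on each denied line item.
denials = pd.DataFrame({
    "note": [
        "CO109: Claim forwarded to correct payer",
        "CO109: Resubmit claim to correct insurer",
        "CO109: Submitted to correct payer",
        "CO109: Please forward claim to the responsible carrier",
    ]
})

# Keyword strings for each category (assumed patterns, not the actual
# remittance wording from this client's payers).
PAYER_SAYS_SENT = ["claim forwarded", "submitted to correct payer"]
PROVIDER_MUST_SEND = ["resubmit claim", "forward claim"]

def classify(note: str) -> str:
    """Bucket a remittance description by the keyword strings it contains."""
    text = note.lower()
    if any(k in text for k in PAYER_SAYS_SENT):
        return "payer_says_sent"
    if any(k in text for k in PROVIDER_MUST_SEND):
        return "provider_must_send"
    return "unclassified"

denials["category"] = denials["note"].apply(classify)
print(denials["category"].tolist())
# → ['payer_says_sent', 'provider_must_send', 'payer_says_sent', 'provider_must_send']
```

In a real dataset you would iterate on the keyword lists until the “unclassified” bucket is small enough to review by hand.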
Looking at those keyword strings and identifying similar patterns, we flagged each claim, grouped the claims into those categories, and determined which payers said they were forwarding claims versus which said the provider had to do it. We then analyzed the mean (average) Insurance Payment and the mean Allowed Amount from the insurance company and compared the two groups to see if there was any significant difference.
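The group comparison is a simple aggregation once claims are flagged. A minimal sketch, with dollar figures that are illustrative only (chosen to mirror the averages reported later in this article):

```python
import pandas as pd

# Flagged claims (illustrative figures, not real client data).
claims = pd.DataFrame({
    "category": ["payer_says_sent", "payer_says_sent",
                 "provider_must_send", "provider_must_send"],
    "insurance_payment": [10.0, 14.0, 28.0, 32.0],
    "allowed_amount": [40.0, 44.0, 70.0, 74.0],
})

# Mean payment and allowed amount per category, then the ratio between groups.
summary = claims.groupby("category")[["insurance_payment", "allowed_amount"]].mean()
ratio = (summary.loc["provider_must_send", "insurance_payment"]
         / summary.loc["payer_says_sent", "insurance_payment"])
print(summary)
print(f"provider_must_send pays {ratio:.0%} of payer_says_sent")  # 250%
```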
Note that the sample size was quite large, so we shouldn’t have an issue with statistical significance: each group contained hundreds of records. We weren’t dealing with small sample sizes. In a statistics class we might go further and quantify exactly how confident we are, but that isn’t really necessary for a difference as obvious as the one we found.
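If we did want to quantify the confidence, one quick approach is a normal-approximation confidence interval on the difference in mean payments. This sketch uses simulated data (the group sizes, means, and spreads are assumptions, not the client's figures):

```python
import math
import random
import statistics

random.seed(0)
# Simulated line-item payments for the two groups; group sizes in the
# hundreds like the real sample, dollar figures made up for illustration.
sent = [random.gauss(12, 6) for _ in range(400)]
must_send = [random.gauss(30, 12) for _ in range(400)]

diff = statistics.mean(must_send) - statistics.mean(sent)
# Normal-approximation standard error of the difference in means.
se = math.sqrt(statistics.variance(must_send) / len(must_send)
               + statistics.variance(sent) / len(sent))
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se
print(f"difference ≈ ${diff:.2f}, 95% CI (${ci_low:.2f}, ${ci_high:.2f})")
```

With hundreds of records per group, a difference this large yields an interval nowhere near zero, which is why a formal test adds little here.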
And what we found was pretty startling. When the payer said that the provider needed to send the claim on to the correct payer, the average payment was 250% of the average payment when the insurance company said it had already sent the claim to the correct payer. This is so significant we’ll repeat it. When the payer said, “We sent it,” this client received on average $12 per line item. When the insurance company said, “You need to send it on to the correct payer,” they received $30 per line item. That’s an extraordinary difference. What would it mean to more than double the average reimbursement received on every denial?
We did a quick QA check (one might call it a gut check) comparing the results of this small subset to the other records in the dataset. Compared to the approximately 10,000 other denials in this sample set, the average when the payer said “You need to forward it on” was still about 250% of the baseline average across all denied line items.
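The gut check itself is just a ratio against the overall baseline. A tiny sketch, with an assumed baseline average (the article gives the subset averages, not the overall one):

```python
# Gut-check arithmetic with illustrative figures: if all denied line items
# averaged about $12 and the "you need to forward it" subset averaged $30,
# that subset sits at roughly 250% of the overall baseline.
baseline_avg = 12.0      # assumed average across all ~10,000 denied line items
must_send_avg = 30.0     # average when the payer said "you forward it"

ratio = must_send_avg / baseline_avg
print(f"must-send subset is {ratio:.0%} of baseline")  # → must-send subset is 250% of baseline
```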
That quick check makes sense here because, among all denials, resubmitting a claim to the correct payer is one of the easier problems to solve. Issues like exceeded benefits cannot easily be overcome and are likely to result in low collections: you’d have to bill the patient, and you’re still not going to get anything from the insurance company. Other denials, like a diagnosis that doesn’t match the procedure and therefore violates a policy, are again harder to correct; if the data was all documented and gathered, one cannot go back and create new diagnoses that don’t exist. Many denials are harder to solve than correcting the insurance, so it makes sense that you would get paid significantly more for solving this problem than for most denials in general.
It is important to do these types of quality assurance checks. Results are often confusing, misleading, or outright incorrect due to some undiscovered data or analysis issue, and making a business decision off of bad information would be unfortunate. Hence the importance of quality checks.
What do we interpret from these results? Perhaps the billing company knew that they needed to do something, that is, to resubmit the claim to the right insurance company, and they did that rather than relying on the payer to act. So they collected a lot more money. That seems like the most likely scenario. If the billing company had submitted a claim to a new payer with new information and voided out the old claim, however, then we might expect the “Sent” category to be lower, which is actually what we saw.
I know that’s getting a little hard to follow. So we’ll put this into a written blog post so it’s easier to see some of this information.
When you’re checking, “Does this make sense?” one of the things to look for is, “Is there some alternate explanation that might change what we should learn from this?” In this case, if the billing company had voided out claims that went to the wrong payer and then created new claims to send to the new insurance, we might expect to see exactly the result we saw: the “Need to send” category would be higher than “Sent.” That’s one possible explanation for why we see the results we do.
That should not be the case here, because when we pulled all of these transactions and looked at the denial data, we excluded records marked “Transaction was undone / correction” (the term this system uses for a void). We filtered out all the voided claims, so that shouldn’t be the explanation.
More Quality Assurance
Typically I expect our team to do QA, so I don’t need to replicate that step. In this case, since I was curious, I did my own QA. I wanted to make sure that we had filtered out all of the voided claims, as one would expect, so I went into the data to verify that voided claims were excluded and found that they had been. That information was stored in a note field that said, “Transaction was undone / correction”, and records with that note had been filtered out.
However, we missed an important nuance. We didn’t see initially that when a claim was voided, the system didn’t write that information into an empty note field; it appended the void text onto the end of the existing note field. That means if there was already a denial or any other comment in that field, the void marker was simply tacked onto the end. Records that were voided but also had other information in the note field were therefore not filtered out. As a result, we included about 65 claims where the claim had been undone, i.e. voided, and also had a denial. That’s going to skew our data a little, but 65 out of approximately 1,000 shouldn’t make that much difference; it certainly would not account for a 250% variance.
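This is the classic exact-match-versus-substring trap. A minimal sketch of the bug and the fix, using hypothetical note text:

```python
import pandas as pd

VOID_MARKER = "Transaction was undone / correction"

# Hypothetical note field: the void marker gets APPENDED to whatever text is
# already there, so an exact-match filter misses voids that also carry a denial.
claims = pd.DataFrame({
    "note": [
        VOID_MARKER,                                                  # caught by exact match
        "CO109: Resubmit claim to correct insurer. " + VOID_MARKER,   # missed by exact match
        "CO109: Claim forwarded to correct payer",                    # real denial, keep
    ]
})

# Buggy filter: only drops rows whose note is EXACTLY the void marker.
exact_filtered = claims[claims["note"] != VOID_MARKER]

# Corrected filter: drops any row whose note CONTAINS the void marker.
substring_filtered = claims[~claims["note"].str.contains(VOID_MARKER, regex=False)]

print(len(exact_filtered), len(substring_filtered))  # → 2 1
```

The exact-match version quietly leaves the appended-note voids in the dataset, which is precisely how the 65 extra claims slipped through.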
Although we found a problem in our analysis that needs to be corrected, and it is important to update these numbers, it won’t change the fact that there is a large difference in results. Quality assurance is clearly vital in revenue cycle analytics. It also often helps to have a different person do the QA than the original analyst, so you don’t bring in your own biases and assumptions about what’s correct and what’s not. In this case, because I didn’t assume the analysis had been done correctly, I happened to catch a nuance that somebody else missed.
The net of this is that we believe there’s a statistically significant difference in financial results between when the insurance company says it forwarded the claim and when it says it did not. The next level of this analysis is to figure out why. Is it truly what we believe, that you shouldn’t rely upon the insurance company because they’re not telling the truth? Was it because the billers trusted the information from the insurance company and therefore didn’t pursue those claims as aggressively, trusting they would be forwarded on? Or is there some other cause that we’re not fully aware of yet?
We’ll keep you updated on our quest to document and definitively prove whether insurance companies tell the truth…or lie.