This article was initially available on our podcast.
We frequently run into situations while doing analyses for clients, or setting up systems or denials management applications, that trigger the thought, "Ah! We should probably share this, because other people will probably run into it too." Even if it's something we've encountered over a long period, we need a trigger to think, "This is something worth sharing."
This is one we just ran into again. A billing manager did an analysis in which they determined that specific CPT codes were being denied at an abnormally high rate. They concluded that alternate CPT codes should be used instead, reasoning that those would be denied less frequently and would therefore get paid. Their position was that the denials for the original codes were high enough that they shouldn't be billed anymore and the practice should switch.
Check the analysis
The challenge came when we looked at their analysis: it showed that, out of more than 1,000 claims, only a few had been paid, amounting to just a couple of thousand dollars in payments, which works out to an average reimbursement of roughly $1 per claim. It didn't look good in the summary table they showed the client.
The problem is that a minimal number of payments, meaning you billed 1,000 claims and only a small number of them were paid for small dollar amounts, suggests a very high denial rate. But it doesn't definitively demonstrate one.
Find the supporting data
There was some other supporting documentation: a denials table and a raw data table. The raw data table had all the transaction information that supported the analysis, yet there were no denials on it. The raw data did not include any denials. One of our questions going back to the billing manager was, "Where are the denials? We don't see any in the raw data. How did you come up with this analysis if the raw data doesn't show any denials?"
The denials summary table (again, it wasn't clear where it came from) listed a count of denials by payer and by CPT code. It showed, for example, that Aetna denied these CPT codes six times for these denial codes, and Blue Cross Blue Shield denied this many times for these CPT codes for these denial codes. Again, we couldn't figure out where that data came from.
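To make this concrete, here is a minimal sketch of how a denials summary like that could be tallied from transaction-level records. The payers, CPT codes, and denial reason codes below are made-up placeholders for illustration, not the client's actual data.

```python
from collections import Counter

# Hypothetical denied-claim records pulled from a transaction table:
# (payer, CPT code, denial reason code). These values are illustrative only.
denied_claims = [
    ("Aetna", "99213", "CO-97"),
    ("Aetna", "99213", "CO-97"),
    ("Blue Cross Blue Shield", "99214", "PR-204"),
]

# Count denials by (payer, CPT, denial code), mirroring the summary table.
summary = Counter(denied_claims)
for (payer, cpt, code), count in sorted(summary.items()):
    print(f"{payer} | CPT {cpt} | {code}: {count}")
```

The point of a summary like this is that every number in it should be traceable back to specific rows in the raw transaction data, which is exactly what was missing here.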
When we summed up that table by CPT, denial code, and payer, the total was 81 denials, which sounds like a lot until you remember that there are 1,277 claims. 81 divided by 1,277 is about 6%, and 6% is not a bad denial rate at all. It's certainly not catastrophic. In fact, it's better than average for billing. We've done other podcasts, and you can read other articles, about what the average denial rate is, but it's above 6%. So not only is this not catastrophically bad, it's actually better than average.
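The arithmetic here is simple enough to check directly, using the two numbers from their own summary table:

```python
# Denial rate implied by the summary table: 81 denials across 1,277 claims.
denials = 81
total_claims = 1277
denial_rate = denials / total_claims
print(f"{denial_rate:.1%}")  # about 6.3%
```

A back-of-the-envelope check like this is often enough to catch a conclusion that doesn't match its own supporting data.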
A lesson to learn
This podcast and article might sound like a cautionary tale about lousy analysis, but that's not what we're focusing on here. They shouldn't have concluded that the codes should be changed based on the data they provided; that much is true. And it's always worth asking, "Is good analysis being performed here?" But there's a bigger problem going on, and that's the one we want to focus on.
You can't reconstruct and verify what they said. Why? You would think it's because there's no raw data, but the raw data has to exist somewhere. So we went back to them and said, "Where's the raw data? Please show us the raw data, and we'll access it." We would then analyze it ourselves and determine whether their analysis was faulty. The problem is, that's not possible.
Almost zero denials
Before we got to why, we ran our own analysis. We took their data, ran the numbers, and found essentially zero denials. We found a total of 150 denials all time for the CPT codes they were analyzing. Of those, approximately 130 were from a single insurance company, Anthem. But we didn't see any of the denials they had listed for the other payers, like Aetna. There were zero. I mean zero-zero, goose egg, nothing, nada.
Our initial conclusion was, "Wow, this is a faulty analysis. This is garbage. It's not just that they had a bad methodology; the data is off. They're not even close." Then we talked to the billing manager, and we were completely surprised at what they told us: the data was essentially gone.
We'll explain why the data disappeared in the next podcast. While we thought this would be a discussion of bad analysis, it suddenly became a massive data problem. You have to be aware of this, because if you don't realize it, it will skew somebody's analysis. You might be flip-flopping and changing things all over the place, not realizing that there either isn't a problem or you've already solved it. There are all kinds of issues tied up in this, so we won't get into more detail now, to keep this from being a 20-minute podcast.