This article is also available as a podcast episode; click here to listen.
In a recent podcast episode, we argued that accounts receivable (AR) aging buckets are antiquated. The primary goal of AR aging buckets is to help billers prioritize which claims they work on, but it's not a very efficient method.
One problem we didn't get a chance to mention was, "How do you keep billers from duplicating each other's efforts?" Suppose you're dividing up the output of an Excel pivot table on the same day. If you give the full list to everyone, and the list is sorted in descending order, they're all going to work the same stuff at the top of the first page. You could cut the list into lots of little pieces and send those out to people, but that's very hard to manage. Or you could randomly assign a number to each record so that each individual biller can filter to their own records. Without some scheme like that, everyone ends up working the same claims, and that's not a particularly great way to do it either.
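That random-assignment idea can be sketched in a few lines. This is a hypothetical illustration: the claim IDs, biller names, and use of Python are all assumptions, not anything a particular billing system provides.

```python
import random

# Hypothetical worklist split: shuffle the shared list, then round-robin it
# so no two billers pick up the same claim. All names/IDs are made up.
claims = [f"CLM-{n:04d}" for n in range(1, 13)]
billers = ["Ana", "Ben", "Cho"]

random.seed(7)          # fixed seed so the day's split is reproducible
random.shuffle(claims)  # randomize order so no one just works "page one"

# Each biller filters to their own slice of the shuffled list.
assignments = {b: claims[i::len(billers)] for i, b in enumerate(billers)}

for biller, work in assignments.items():
    print(biller, work)
```

Because every claim lands in exactly one biller's slice, no two people can duplicate effort on the same record that day.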
Challenges of accounts receivable buckets
One of the challenges is that AR aging pivot tables, with claims grouped into aging buckets, are not particularly productive, because there are many different axes along which you might want to prioritize claims. Secondary claims versus primary claims are a good example, since they may all get lumped in together. If you're aging by date of service, a secondary claim that went out the door three days ago might show as 90+ or 120+ days old and look like it needs to be worked. A collector might go back to that claim again and again, or several people might look at it on consecutive days, under the false impression that it needs attention because it seems old, when it isn't. Yet you don't want to filter out the secondaries entirely, because those do need to get worked.
How do you layer in and prioritize those kinds of considerations? That's very hard to do in a single pivot table. And what about payer behavior? One payer pays faster than another. Medicare pays relatively quickly compared to, say, Anthem or a workers' comp carrier. We have seen payers where, if you contact them at 40 days, they don't even have the claim on file yet, never mind adjudicated. Some intentionally slow down the process: you have to mail the claim to them physically, they take weeks to load it into their system, and then they park it in a queue where they pretend not to recognize it for a while before finally acknowledging it's there. Collectors who contact those payers too early waste a lot of effort. It would be helpful to know that.
If you pivoted payers on one axis of that pivot table, you might have some of your collectors work, for example, Medicare claims that are 30+ days old over a certain dollar threshold, and maybe Anthem claims that are 45+ or 60+ days old. But that's still a very crude method, and it assumes you know which payers adjudicate faster. For the top few payers, people will probably have a rough idea: this one's faster, this one's slower. It's pretty unlikely they have it quantified, like 37 days for this one, 23 days for that one, and 64 days for another, and certainly not for hundreds of payers. That information is not in anybody's head. So the Pareto falls off quickly: beyond those first few payers, nobody has a feel for the numbers, which makes this kind of payer-by-payer prioritization impossible to manage. You end up with tons of wasted effort, with collectors chasing claims that aren't ready to be worked yet, while many other claims that are ready and should be worked sit untouched.
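Even the crude version of this rule can be made explicit. A minimal sketch, assuming made-up payer names, thresholds, and claim records (a real system would pull these from the billing database):

```python
from datetime import date

# Hypothetical payer-specific follow-up thresholds, instead of one global
# aging bucket. All payer names, day counts, and claims are illustrative.
FOLLOW_UP_DAYS = {"Medicare": 30, "Anthem": 60}
DEFAULT_DAYS = 45        # assumed fallback for payers we haven't profiled
DOLLAR_FLOOR = 100.00    # skip balances too small to be worth a call

claims = [
    {"id": "CLM-1", "payer": "Medicare", "billed": date(2024, 1, 2),  "balance": 250.0},
    {"id": "CLM-2", "payer": "Anthem",   "billed": date(2024, 1, 20), "balance": 900.0},
    {"id": "CLM-3", "payer": "Medicare", "billed": date(2024, 2, 25), "balance": 80.0},
]

today = date(2024, 3, 1)

def ready_to_work(claim):
    """A claim is worth touching once its payer's typical window has passed."""
    age = (today - claim["billed"]).days
    threshold = FOLLOW_UP_DAYS.get(claim["payer"], DEFAULT_DAYS)
    return age >= threshold and claim["balance"] >= DOLLAR_FLOOR

worklist = [c["id"] for c in claims if ready_to_work(c)]
print(worklist)  # → ['CLM-1']
```

The weakness the article points out is exactly the `FOLLOW_UP_DAYS` table: somebody has to know those numbers, and beyond the top few payers nobody does.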
Understanding time to adjudicate
Wouldn’t it be great if systems understood the average time to adjudicate, or the response time by payer, and could prioritize for your collectors based on when you should have received a response? Claims with denials and rejections would flow directly into the workflow queues that need follow-up, depending on how the AR team is structured. You might have status checkers, different types of people, different teams, but the work could flow automatically.
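Those per-payer response times don't have to live in anyone's head; they can be computed from the claim history already in the system. A minimal sketch, with made-up payers and dates expressed as day offsets for simplicity:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical history: (payer, day claim was submitted, day a response
# arrived). In practice this would come from the billing system's data.
history = [
    ("Medicare", 2, 20),
    ("Medicare", 10, 26),
    ("Anthem",   5, 62),
    ("Anthem",   7, 71),
]

turnaround = defaultdict(list)
for payer, sent, received in history:
    turnaround[payer].append(received - sent)

# Learned average response time per payer, rather than a guess.
avg_response = {p: mean(days) for p, days in turnaround.items()}
print(avg_response)  # → {'Medicare': 17.0, 'Anthem': 60.5}

# Flag open claims only once they are past their payer's typical window.
open_claims = [("CLM-9", "Medicare", 40), ("CLM-10", "Anthem", 40)]
overdue = [c for c, p, d in open_claims if d > avg_response.get(p, 45)]
print(overdue)  # → ['CLM-9']
```

Both claims are 40 days out, but only the Medicare claim is overdue relative to that payer's learned turnaround; the Anthem claim isn't worth a call yet.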
The problem with AR aging buckets is that even the best analytics consultants will recommend things like having billing staff work claims where Medicare is primary and the claim is between 91 and 120 days old. Great, okay, but how do you know exactly what to work? Experts frequently give vague answers about these AR aging buckets, like, “Oh, an experienced accounts receivable manager will know what to do.” Well, that doesn’t help anybody. Yes, to some degree they’ll know what to do, because if you drop somebody in the middle of a forest without any tools, they will have to do something. But that doesn’t mean it’s the best course of action. It’s easy to say that when no one has any data to prove them correct or incorrect.
Systems to determine what work results in payment
As we’ve said many times before, “In God we trust; all others must bring data.” That’s not our line; it’s often attributed to W. Edwards Deming. What about a system that determined which claims were most likely to result in payment if they were worked or appealed? Some payers are amenable: there was an honest mistake, or some information was missing, and if you work the claim, it gets paid. With others, the denial is just an excuse not to pay, “I’m not going to pay you no matter what,” and you can work that claim 6, 7, 8 times and it still won’t get paid.
Prioritizing on that factor can be helpful as well. Many different factors might come into play: when you should expect a response from a payer (if it’s been 45 days and they typically respond in 30 to 35 days and you haven’t heard anything, that’s a signal), the dollar amount, whether the claim is primary or secondary, and how likely you are to get paid if you work it. There are so many different factors for prioritizing and routing claims and deciding which ones to work on. Doing that well would generate real benefits in how much money you collect from a limited number of resources working claims.
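Combining those factors into one ranking could look something like this. The weights and factors below are illustrative assumptions, not a tuned model; a real implementation would calibrate them against actual payment outcomes:

```python
# Hypothetical multi-factor priority score. Every weight here is a made-up
# assumption for illustration; none comes from a real billing system.
def priority_score(claim, avg_payer_days):
    days_overdue = max(0, claim["age_days"] - avg_payer_days.get(claim["payer"], 45))
    score = 0.0
    score += 2.0 * days_overdue                      # a response should have arrived
    score += 0.01 * claim["balance"]                 # bigger dollars rank higher
    score += 25.0 if claim["is_secondary"] else 0.0  # don't let secondaries hide
    score += 50.0 * claim["pay_probability"]         # will working it actually pay?
    return score

avg_payer_days = {"Medicare": 30, "Anthem": 60}
claims = [
    {"id": "A", "payer": "Medicare", "age_days": 50, "balance": 200.0,
     "is_secondary": False, "pay_probability": 0.8},
    {"id": "B", "payer": "Anthem", "age_days": 45, "balance": 2000.0,
     "is_secondary": False, "pay_probability": 0.3},
]

ranked = sorted(claims, key=lambda c: priority_score(c, avg_payer_days), reverse=True)
print([c["id"] for c in ranked])  # → ['A', 'B']
```

Note what the score does that an aging bucket can't: the smaller Medicare claim outranks the larger Anthem one, because it is overdue relative to its payer's turnaround and more likely to pay if worked.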
I’ve not seen a billing system that can do all of that, but you could build it offline. You could do that type of analysis offline, automatically rank claims in priority order, and then route those ranked claims to individual collectors. It is possible to analyze and calculate time to payment, time to adjudication, or even the time by which you should expect a response from a payer.
How long does it take to get a denial, a confirmation, an ERA from a payer, anything back? That data exists in your system; all of it is there. A rule set could be created that prioritizes these claims based on whatever combination of factors you decide. And there could be a feedback loop that determines, after the fact, whether that prioritization was a good idea, by capturing more data on which claims got paid and which efforts were successful, and feeding that back into the system.
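The feedback loop itself is simple bookkeeping. A hypothetical sketch, where each outcome records whether a worked claim eventually paid, and the per-payer success rate feeds back into the next round of prioritization (all numbers made up):

```python
from collections import defaultdict

# Hypothetical after-the-fact outcomes: (payer, did working the claim pay?)
outcomes = [
    ("Anthem", True),
    ("Anthem", False),
    ("Medicare", True),
    ("Medicare", True),
]

stats = defaultdict(lambda: [0, 0])   # payer -> [claims worked, claims paid]
for payer, paid in outcomes:
    stats[payer][0] += 1
    stats[payer][1] += paid           # True counts as 1

# Success rate per payer; this is what a scoring pass could use as its
# "likelihood that working the claim results in payment" factor next cycle.
pay_rate = {p: paid / worked for p, (worked, paid) in stats.items()}
print(pay_rate)  # → {'Anthem': 0.5, 'Medicare': 1.0}
```

Each cycle of worked claims sharpens the rates, so the prioritization gets better the longer it runs.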
That could all run offline from the billing system, in whatever type of application works for you. There are many ways to do it: an Excel spreadsheet, or a shared file like Google Sheets so you avoid the issue of people duplicating efforts. There are also a ton of workflow collaboration tools now that can be used.
Here’s a critical little secret: if you do that kind of work offline, you don’t have to create it twice, entering information into, say, a spreadsheet and then copying and pasting or manually re-keying it back into the billing system. A lot of the time it can be pushed back into the billing system automatically. It could go through an HL7 interface, and a lot of systems can upload files in various formats, such as XML; some take standard X12-type files. It’s possible to convert all the work you did into a file format the system accepts, load it back up, and it will populate everything in one pass.
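The export step can be as plain as writing a delimited file. A minimal sketch, assuming invented column names and statuses; a real billing system's import tool (or an HL7/X12 interface) would dictate its own layout:

```python
import csv
import io

# Hypothetical offline work notes to be loaded back into the billing
# system. Column names, claim IDs, and statuses are all made up.
worked = [
    {"claim_id": "CLM-1", "status": "APPEALED", "note": "Sent records 3/1"},
    {"claim_id": "CLM-2", "status": "CALLED",   "note": "Payer reprocessing"},
]

buf = io.StringIO()   # stand-in for a real file handle
writer = csv.DictWriter(buf, fieldnames=["claim_id", "status", "note"])
writer.writeheader()
writer.writerows(worked)

print(buf.getvalue())
```

One generated file, one upload, and every worked claim is annotated in the billing system without anyone re-keying a line.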
That’s the secret: there are a lot of ways to skin this cat and make workflow much more productive, so that more revenue is generated per hour worked by the team. But what’s clear is that old-school AR aging buckets are antiquated and not the best way forward.