This article was initially available on our podcast. Click here to listen.
Robots get tired. Sometimes, we don’t have access to a clearinghouse to download remittance files because they are housed in the billing system instead. There are other times when we want or need data (or, I should say, a client wants or needs data) that resides in the billing system. It’s not available in a clearinghouse, and it’s not available via standard reports out of the billing system. So for whatever reason, we need a bot, or robotic process automation (also known as RPA), to get that information out of the billing system.
Guess the billing system
Depending upon the billing system or the practice management system, this can be a massive pain in the butt. So let’s do some math. I like math. First, if the system is an older system (and by “older system” I mean one developed a long time ago)… We still see systems built on AS/400: green-screen applications built on mainframes. Sometimes these systems have been updated a little bit (what I call “putting lipstick on a pig”), with an internet presentation layer or something like that added on top. Or they could be old local systems from before everybody went client-server.
We have one of these going on right now. It’s an older system; it still works overall, but it’s pushing 20 years old now, and it’s a 32-bit system. It takes a long time to download each 835 file because, in this case, the 835s are stored in the billing system. We don’t have access to the clearinghouse.
They’re not stored anywhere else in any accessible way. It can take a long time to navigate through some of these older systems: clicking on a function and then waiting for the new page, feature, whatever it might be, to load. Sometimes that lag is noticeable to a user of the system, and that’s certainly the case with this one. Each function takes a little while for the system to respond and load.
How fast is it?
Even if something seems quick, it might be quite slow when you’re talking about repetitive functions performed over and over again. Something that takes 20 milliseconds, 50 milliseconds, 100 milliseconds, or even half a second may seem relatively fast, except when you multiply it out over the vast number of operations a robot is trying to perform. That can be painfully slow.
With this system, the lag is noticeable to a human user, which means it’s quite slow. Each time you want to download an 835, it takes about 18 seconds. We’re downloading in the range of 100,000 records for this client (or at least that’s the order of magnitude we’re looking at). If you do the math, that comes out to roughly 500 hours of downloading 835s.
This doesn’t include anything else we’re doing in analytics or analysis. It’s purely getting the 835 files: not getting the charge information, not joining data, not building analytics, or anything else. Just getting the 835 files. That 500 hours is approximately three weeks of nonstop downloading.
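The arithmetic above is worth spelling out. A quick sketch, using the figures from this story (about 18 seconds per 835, on the order of 100,000 records):

```python
# Back-of-the-envelope math for the 835 download job.
# Figures are the approximate ones from this story.
seconds_per_file = 18
num_files = 100_000

total_seconds = seconds_per_file * num_files
total_hours = total_seconds / 3600            # 500.0 hours
total_weeks_nonstop = total_hours / (24 * 7)  # ~3 weeks running 24/7

print(f"{total_hours:.0f} hours, ~{total_weeks_nonstop:.1f} weeks nonstop")
```

Note that the three-week figure assumes the bot never stops, which, as it turns out, is exactly the assumption that breaks.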
Wait, it’s worse than that. Because it’s an older system, it runs into memory issues and essentially crashes after some amount of time, which it does frequently. I ran into this when playing around with the system, accessing files and downloading reports. I kept thinking, “Why is this erroring out?” and I tried to find some consistency, a pattern. Sure enough, once we built the bot to go in and do this, we ran into a whole bunch of problems where it kept crashing. That means the robot stops working and needs to be restarted. It isn’t tired; it got shut down, and so it couldn’t work.
The problem is that if the crash happens in the middle of the night (which it often does), we lose all of those night hours when nobody is around to restart it. When I say “night hours,” I mean that if somebody works 8 or 10 hours a day, that leaves another 14 to 16 hours a day when people are not working. If the bot goes down at any point during that window, you lose all those hours, which means the job could stretch to 6 to 10 weeks or more. We’re talking months to download the 835s. Of course, that’s not feasible.
Here come the robots
What do you do? Number one: we built more robots. Number two: we modified the scripts to restart automatically if they get shut out, if the system conks out, which happened a lot. But even restarting fairly quickly and automatically, we were still looking at around three weeks of running nonstop, as I mentioned earlier. That alone didn’t work. So we built more robots and ran them concurrently: all working at the same time, downloading records in parallel.
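The two fixes described above (auto-restart on crash, plus concurrent workers) can be sketched in a few lines. This is only an illustration of the pattern, not the actual bot; `download_batch` is a hypothetical stand-in for the real RPA routine that drives the billing system’s UI:

```python
import time
from concurrent.futures import ThreadPoolExecutor

completed = []  # track finished batches so we can see what got done

def download_batch(batch):
    """Hypothetical stand-in for the real RPA routine that navigates
    the billing system's UI and downloads one batch of 835 files."""
    completed.append(batch)

def run_until_done(batch, max_retries=50):
    # Restart automatically whenever the old system conks out,
    # rather than losing the overnight hours to a dead bot.
    for attempt in range(max_retries):
        try:
            download_batch(batch)
            return
        except Exception:
            time.sleep(30)  # let the system settle, then try again
    raise RuntimeError(f"batch {batch} never finished")

# "Build more robots": several workers pull batches concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(run_until_done, range(10)))
```

In the real job, each “worker” was a separate bot session logged into the billing system, but the shape is the same: wrap the fragile step in a retry loop, then run many of those loops in parallel.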
Even with all of this, it still takes us a week to download the data. So imagine doing this manually. At 8 hours a day, 500 hours of clicking is more than three months of workdays, before you account for a single crash. It’s bonkers, and it makes getting access to the data effectively impossible for a human.
Without robotic process automation, it’s impossible. And even with robotic process automation, it’s been very, very challenging. The moral of the story is that if your robots are getting tired, build more robots.