This article was initially available on our podcast. Click here to listen.
I wanted to talk a bit about eClinicalWorks reporting. It all leads into some dashboard dos and don'ts as well.
About the reporting module
eClinicalWorks has a reporting module called eBO. When you go into this module, it pops up a series of grouped reports: some are grouped under Daily, some under Month-end, and some under Financial and Administrative. As we started to go through them one by one to see what each report was about, we found some things that were kind of interesting. Well, it's frequently more fun and entertaining to point out the negative things. I'd love to say more positive things; it's just that we haven't found a lot of really positive things in these report sets.
For example, the first one is the copay report summary. When we click into it, we have an expected copay column, which is excellent. The structure of this would be very positive, meaning you expected to collect, say, $1,849 from copays today, or over the last two weeks or whatever the period was, for the patients who came to that office. Then we have a column for the copays paid, and we have the other amount and the total collections.
Design vs. data
The interesting thing is that we have some collections and we have the other amount, but nowhere recently is the copay paid column anything greater than zero. So it's hard to tell whether the report design is flawed or the data entry is faulty. My guess is that something about the data entry is flawed. Either way, the report doesn't work and effectively breaks, because it constantly shows a copay percent of zero, meaning that zero of the total you expected in copays was collected.
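To make the failure mode concrete, here is a minimal Python sketch (not eClinicalWorks code; the function name and dollar amounts are made up for illustration) of how a copay percent like this is typically computed, and why a "copay paid" column that is never populated pins the metric at zero:

```python
# Illustrative sketch of the copay percent metric described above.
# The $1,849 figure comes from the article; the $1,200 is hypothetical.

def copay_percent(expected: float, paid: float) -> float:
    """Percent of expected copays actually collected."""
    if expected == 0:
        return 0.0  # avoid division by zero when nothing was expected
    return round(paid / expected * 100, 1)

# With correct data entry, the metric is meaningful:
print(copay_percent(1849.00, 1200.00))  # 64.9

# But if "copay paid" is never recorded, the report is stuck at zero
# no matter how well the office is actually performing:
print(copay_percent(1849.00, 0.00))  # 0.0
```

The point of the sketch is that the formula itself is fine; a metric like this only breaks when its input column is never filled in, which is why it looks like a data-entry problem rather than a design problem.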
The problem with reports that give you insufficient information like this is that they're discouraging. People don't use them, which defeats the purpose. If the goal is to increase the copays collected at the point of care to maximize collections, and the report doesn't work, then of course you don't have a way to track or improve things, and it demoralizes people because nothing works anyway. It's not that we're not performing; it's that the report is broken.
One of the things I found very interesting was that when you're in that report, you have a period you selected, for example a single date (November 15) or a date range (November 1 to November 15). If you want to look at a different period, you can't simply change it. You have to go all the way back out to the homepage, click through to that report again, select the filters, and rerun it, rather than adjusting the report timeframe in place.
Dos and don’ts, good and bad.
Ensure you can apply modifications expediently
Make sure that people can easily change the timeframe so that if they’re looking at the current month, they can change it to the last month or three months or year to date or whatever it might be. Why? Because people frequently will want to stay in that type of report and go through different timeframes to have a better understanding of what’s going on. That’s a pretty critical thing to do.
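As a sketch of what "easily change the timeframe" could look like under the hood, here is a hypothetical Python helper (the preset names and the `report_range` function are my own invention, not anything in eClinicalWorks) that maps a preset to a date range, so the same report could be rerun in place instead of forcing the user back through the homepage:

```python
from datetime import date, timedelta

# Hypothetical helper: one function a dashboard could call with a preset
# name to rerun the current report over a different period in place.

def report_range(preset: str, today: date) -> tuple[date, date]:
    if preset == "current_month":
        return today.replace(day=1), today
    if preset == "last_month":
        # Last day of the previous month, then back to its first day.
        end = today.replace(day=1) - timedelta(days=1)
        return end.replace(day=1), end
    if preset == "year_to_date":
        return today.replace(month=1, day=1), today
    raise ValueError(f"unknown preset: {preset}")

start, end = report_range("last_month", date(2021, 11, 15))
print(start, end)  # 2021-10-01 2021-10-31
```

Switching from "current month" to "last month" or "year to date" is then a one-call change, which is the kind of expedient modification the report should support.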
The next report is encounters without claims. For the particular date selected, November 12, there was one, so it just shows 100%. Not particularly helpful, but there was a strange quirk. With a count of just one, the histogram of physicians with encounters without claims showed repeated labels on the vertical axis, reading 0, 0, 0, 1, 1, 1. It's just a peculiar graphing problem that somebody didn't QA particularly well.
Then, when you look at the summary and the details of encounters without claims, it shows a single line item in the table. It shows a doctor. Okay, that's positive; it's great to see the breakdown by physician. If you take a much more extensive timeframe, you can see which physicians have problems and how old the encounters are. When we go back to year to date, we see 114 encounters without claims.
The overwhelming majority of those (80%) are more than 90 days old. Okay, so now we know how old they are, and now we can see a histogram: a bar chart of which physicians have the most encounters without claims.
Is it organized?
I think the challenge I'm running into is that there are a couple of problems here. One is that the bar chart doesn't have any organization to it. You would think that descending order would make the most sense: who are the worst-offending physicians, start with them, and then go down from there. But it's not that.
It seems to be in ascending order, except that number two, the second in the list, is the biggest. You have the smallest, then the biggest, and then the second smallest. It's just all over the map. It's also not alphabetical, so it's not that either. No idea how it's organized. Very strange.
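The fix is trivial to express. Here is an illustrative Python sketch (the physician names other than Dr. Zhang, and all the counts, are invented) that sorts the counts in descending order before rendering, which is all the chart needed:

```python
# Hypothetical encounters-without-claims counts by physician.
counts = {"Dr. Patel": 9, "Dr. Zhang": 41, "Dr. Okafor": 14, "Dr. Ruiz": 27}

# Sort worst offenders first, then render a crude text bar chart.
ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
for name, n in ranked:
    print(f"{name:<12} {'#' * (n // 2)} ({n})")
```

With a descending sort, the eye lands on the biggest contributor first, which is the whole point of a "who has the most outstanding encounters" chart.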
I think the more significant issue I have is, now what? I understand the bar chart of the bad physicians. Okay, now we know that Dr. Zhang is the most significant contributor in terms of the problem. Let’s find Dr. Zhang and figure out why. But do we know that the problem is with the physician not signing off on the note? Or is it something else? Why was the claim not created?
If we’re looking to solve a particular problem, we’ve got to ensure that we zero in on the right issue. I would want to see not encounters without claims but encounters that hadn’t been signed off to know it was the physician’s problem.
Look at the biller
Once the note has been signed off and sent on, the problem is in revenue cycle management, not with the physician. Therefore, aggregating by physician makes no sense at all. We should be looking at the biller, the billing department, or something else to identify the problem within the billing department, because at that point it no longer has anything to do with the physician. They're missing the causality there. Again, maybe the report does account for this. I don't know.
It’s just that it’s not clear from that report. Indeed, I would want to see the precursor to this showing that breakdown before we dive in and say, “Ah, yes, the problem is that the notes aren’t signed off, and therefore we want to know which physicians.”
On top of that, we have a pie chart showing the age of the claims, where 80% of them are 30 days or older. I can't tell how many are older than 90 days; it's a pie chart without numbers, and it's not very well designed. The point is, now what? Now that I know most of them are older than 90 days and most of them are older than 30 days, how does that help me? Looking at year to date, I would expect the outstanding ones from January and February to be older. Most of this year has elapsed, so most of them are going to be past 90 days. Again, that seems very obvious and not very helpful.
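To show what a more useful view could be, here is a small Python sketch (the claim ages are hypothetical) that replaces the unlabeled pie chart with explicit aging buckets, so "how many are older than 90 days" has an exact answer:

```python
# Hypothetical ages (in days) of outstanding encounters without claims.
ages_in_days = [12, 35, 61, 95, 120, 150, 200, 310, 45, 98]

# Bucket the claims into standard aging bands with explicit counts.
buckets = {"0-30": 0, "31-60": 0, "61-90": 0, "90+": 0}
for age in ages_in_days:
    if age <= 30:
        buckets["0-30"] += 1
    elif age <= 60:
        buckets["31-60"] += 1
    elif age <= 90:
        buckets["61-90"] += 1
    else:
        buckets["90+"] += 1

total = len(ages_in_days)
for label, n in buckets.items():
    print(f"{label:>6}: {n:2d} claims ({n / total:.0%})")
```

A labeled table like this answers the "older than 90 days" question directly, where an unnumbered pie chart only conveys rough proportions.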
Arrive at the solution
What are we trying to solve? What's the question we're trying to answer? How are we getting to a solution here? I don't see the progression. Again, this feels like, "Oh, we're going to give people some data. Maybe it'll help them. Maybe it won't." Some of these things are good; some are not. There's no straightforward design of, "These are the problems we want to solve, and here's how we're going to solve them. Here's how the data breaks it down and makes it very clear, so you can see and isolate: yes, it's this problem and not that problem; here's where the underlying problem is, and here's how to solve it." I don't see them walking us through that.
We'll come back and talk more about some of the eClinicalWorks reports another day, but that's our take on it. Not particularly well designed, as far as I can see. Again, it's not that it's useless or that none of the data is helpful. It's just that it's somewhere between wrong data inputs, bad report design, and a few other things. I would want to check all of these, make sure the data input is correct, and then ask, "Hey, how do we get better reports on this?"