This article originally appeared as an episode of our podcast. Click here to listen.
When doing analytics, most people perform little quality control, and frequently almost none at all beyond looking for basic things that break. For most people, the presumption is that if the analysis runs and produces an output, the output is probably correct.
Perform quality control
In analytics, we must do a lot of quality control, and the reasons are many. Part of it is that the systems are terrible. Healthcare IT systems are overwhelmingly awful. I'm not trying to knock anybody in particular, but if you think about the big vendors in hospital systems, such as Epic, Cerner, and Meditech, they're generally not great at reporting and analysis. They often have confusing data structures, and they're not necessarily designed for analytics. They were designed initially as clinical systems, not financial systems.
Another reason is that reports don't match up; they don't tie out. When you run two different reports that should produce the same number, they don't, which is not only confusing but often completely inexplicable. There are often missing fields or missing records, and it's not clear why that's the case.
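As an illustration of what a basic completeness check can look like, here is a minimal sketch in pandas. The field names and data are hypothetical, not drawn from any particular system; the point is simply to surface missing fields before they surface as wrong numbers downstream.

```python
import pandas as pd

# Hypothetical extract; the column names here are illustrative only.
extract = pd.DataFrame({
    "encounter_id": [101, 102, 103, 104],
    "discharge_date": ["2021-01-05", None, "2021-01-07", None],
    "payer": ["Medicare", "Aetna", None, "Medicaid"],
})

# Count missing values per field, so gaps are found during QC
# rather than after a stakeholder spots a number that looks off.
missing_counts = extract.isna().sum()
print(missing_counts)

# Flag the specific records with any missing field for follow-up.
incomplete = extract[extract.isna().any(axis=1)]
print(incomplete["encounter_id"].tolist())
```

A check like this takes minutes to run on every extract and turns "it's not clear why fields are missing" into a concrete list of records to chase down.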
Identify reporting inconsistencies
I can tell you that we've done a great deal of digging in quality control to identify why reports don't tie out. Often, we end up throwing up our hands and saying, "We don't know. We can't identify or explain why this is the case." We've even gone back to the system administrators and to the companies that built those systems, the OEMs, and asked them why the reports don't tie out. Frequently, they can't figure out what's going on either. We can get into some examples in another podcast.
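Before escalating a tie-out discrepancy to an administrator or vendor, it helps to narrow it down to specific records. The sketch below is a generic, hypothetical example of that process, assuming two extracts that should agree: first compare the headline totals, then use an outer merge to find the records that exist in one report but not the other.

```python
import pandas as pd

# Two hypothetical extracts of the same charges that "should" tie out.
# Field names are illustrative, not from any specific system.
report_a = pd.DataFrame({
    "encounter_id": [101, 102, 103, 104],
    "charge_amount": [250.0, 125.5, 300.0, 80.0],
})
report_b = pd.DataFrame({
    "encounter_id": [101, 102, 104],   # encounter 103 is absent here
    "charge_amount": [250.0, 125.5, 80.0],
})

# Step 1: compare the headline totals. A mismatch tells you the
# reports don't tie out, but not why.
total_a = report_a["charge_amount"].sum()
total_b = report_b["charge_amount"].sum()
print(f"Report A: {total_a}, Report B: {total_b}, diff: {total_a - total_b}")

# Step 2: find the specific records present in only one report,
# using an outer merge with an indicator column.
diff = report_a.merge(report_b, on="encounter_id", how="outer",
                      indicator=True, suffixes=("_a", "_b"))
missing = diff[diff["_merge"] != "both"]
print(missing[["encounter_id", "_merge"]])
```

Even when the root cause stays unexplained, isolating the exact records that differ makes the conversation with the vendor far more productive than "the totals don't match."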
The message here is that quality control is essential. Not only is a lot of quality control necessary, but I would suggest that the ratio of quality control to actual analysis or software development should be far higher than most people assume. If you think about it, there is the "spec'ing out" process, where you identify what you're trying to accomplish: where you're going to get the data, what exactly the output will look like, the questions you're going to answer, and all those kinds of things. Call that the spec part. Then you have the development or analysis part, and then you have the quality control.
I think most people believe that 90% of all of this is doing the analysis; some probably believe it's 100%. In reality, I would suggest it's roughly one third each. You spend a third of your time scoping out the project, understanding the requirements, and drafting them in enough detail that you know exactly what you need. You spell it out: "This is what the output is going to be. Here's the question we're going to answer, here's which fields we're going to use, and here's where the data is going to come from." Then, when it's done, you can say, "We did exactly what we said we were going to do," and somebody is happy with the output. That's part of it.
Then there's the analysis part, and then there's the quality control. Realistically, you should spend a comparable amount of time on quality control as on the actual analysis. I know that sounds crazy.
I remember one time, many years ago, when I was a sponsored bike racer. It wasn't like I was a professional, but they paid for my races, gear, and things like that. I also had a full-time professional job, not nine-to-five but ten hours a day plus commuting, so it was tough to fit in several hours of training on a weekday and between five and seven hours a day on weekends. At a training camp one year, the coach said to us, "If you don't have time to stretch, you don't have time to train. If you don't have time to sleep, you don't have time to train." My response was, "Well, then what the hell am I doing here? I don't even have time to train, and I've got to do these other things too."
Set up adequate time for error mitigation
Understandably, you probably feel like you're in that position: "How in the heck am I going to find time to do quality control and all this extra spec'ing things out, when I don't even have time to do the analysis?" But if you skip those steps, you're wasting your time. You will become frustrated, and end users who try to use the output will become frustrated when somebody finds an error, a problem, or something that doesn't make sense.
When you lose credibility, it is tough to regain. If people don't trust the data, it will be tough to justify putting resources toward analysis, or to defend the value of making decisions based on that data and analysis.
The moral of the story: do massive amounts of quality control. In another session, we'll go through what kinds of checks we're talking about.