The Informonster Podcast

Episode 2: A Guide to Interoperability, Charlie Harp Talks About the Process of Using Interoperability to Pursue Peak Data Quality

October 8, 2019

For this episode, Charlie Harp, CEO of Clinical Architecture, gives a primer on interoperability, as well as why it matters. He also discusses ideas such as the responsibility of the receiver and the dangers of Conceptual Echo.

Transcript

Welcome to episode two of the Informonster Podcast. I’m Charlie Harp, and today we’re going to be talking about healthcare interoperability. Let’s start with: what is interoperability? Well, according to the Healthcare Information and Management Systems Society, or HIMSS, interoperability is the ability of different information technology systems and software applications to communicate, exchange data, and use the information that has been exchanged.

In other words, I should be able to take information, some unit of work or data, from system A, deliver it to system B, and system B should be able to continue the work, quote unquote, on that data or make use of that data in a meaningful way. Simply put, that’s what interoperability is. Now, interoperability is not new. It’s something that we’ve been doing with technology as long as I’ve been around. We’ve been doing it in the financial industry. When you put your debit card in an ATM or use it at a register at the store, you’re encountering multiple levels of interoperability as that information moves from place to place, from system to system.

The same is true in retail, and the same is true in manufacturing. In fact, in healthcare we’ve been doing interoperability for a long time. Back in the 80s, when I started in healthcare and we were building interfaces going from a lab system into an EMR, that was interoperability. The word interface kind of implies interoperability: I’m taking information from one place and delivering it to another. We’ve been doing interoperability when it comes to paying insurance claims for a long time. So interoperability is not new. Interoperability is not cutting-edge science, but lately we’ve been talking about it a lot. Why is that? Why have we been talking about interoperability in healthcare like it’s something new? Well, the reason is that we’re doing something new with interoperability in healthcare, and a lot of that has to do with our desire to have software systems provide help, guidance, information, and best practices to the people providing care to patients.

So really, what we’re talking about when we talk about interoperability now, the buzzword interoperability, I usually categorize as clinical interoperability. It’s the ability to share meaningful clinical information about the patient that allows the software the provider uses to help them better. It does that by improving the data quality. Because if I have more information, if I have good information, if I have information I didn’t have yesterday, that information creates a broader picture of what’s going on with my patient, which allows me to make a better decision. Or it allows the software, the logic, the reasoning that I built into my systems to do a better job of helping me, because they know more about the patient than they did before. So interoperability in healthcare, and that’s what we’re going to focus on today, is moving information about a patient from another system into my system.

And that’s another point I want to make about interoperability. We talk about these things in this grandiose, Spiritus Mundi, you know, Kumbaya way, because we talk about interoperability as an abstract. But clinical interoperability is not an abstract. It’s not a thing. It’s not an “Oh, we must have interoperability.” Interoperability should be looked at from a selfish perspective. It should always be looked at from the perspective of my system, whether my system is a government registry or a clinical data warehouse or an EHR. If you really want to be successful with interoperability, it always has to be seen from my system. There’s data, it’s elsewhere, and I need to get it into my system so that I can make meaningful use of that data, take advantage of it, and improve the quality of my picture of the patient. So with that in mind, let’s talk about what makes up interoperability.

So it all starts with the data, and that data lives somewhere in another system. To interoperate with me, that system has to take the data out of the data structures that live in its system, its canonical structure, how it models its universe. It has to take the data out using the terminologies, the code systems that it understands, and it has to package them up into some kind of a message. Then it needs to take that message and move it to me. I have to receive that message, I have to unpackage the message format that it is built into, I have to take the code systems and terminologies that are being used in the message and change them into something that I understand, and then I have to take that information and put it into my canonical model, put it into the data structures that my system understands.

So let’s kind of go through that and talk about each layer, because a lot of people talk about interoperability and don’t realize there are multiple steps: there’s the packaging-up-and-delivery side of interoperability and there’s the receiving-and-unpackaging side of interoperability. And really, you’re doing the same steps, you’re just doing them in reverse when you receive the package. Now, once again, for the purpose of this discussion, we’re looking at it from the selfish perspective: my system, my data. So when I talk about interoperability, I always kind of think of it as the receiver. Other people are packaging up information; I’m trying to get it and put it into my system. And part of the reason for that is that when you are consuming information, it’s more important to you in many ways than when you’re putting information out there.

Because you already have the information you’re putting out there. The information you’re receiving is what you want. You should still do a quality job. You should still try your very best to put it out there in a meaningful way. But when it really hits home is when you’re taking information from elsewhere and you’re bringing it into, quote unquote, my system. So the first step when you’re doing interoperability as the sender is canonical interoperability. You have to deliver information to somebody else. The information lives in your system, in your schema and your data structures. You might have a patient table, a problems table, an allergies table, et cetera. And so the first thing you need to do is write a query or software that takes the information and pulls it out into something that you can work with. And you have to do it in a particular way, because you’re usually targeting some message format, whether it’s FHIR or HL7 or CCDA.

So you’ve got a target that you’re trying to fill. One of the potential challenges is that your data schema, your internal structures, may not match up exactly with the way those formats contemplate the data. I’m not going to get into that too deep now, but that level, where a programmer sits down and reads through the data structures and figures out how to make it fit, is canonical interoperability. And we never talk about that, because it’s something that a programmer does, and it’s usually some function in the system that allows you to extract information and put it into a reasonable structure, depending upon how challenging your native structures are for dealing with the data in the first place. So I write some code, it takes the data out of my tables, and now I’m going to take the data that I’ve pulled out of my tables and stick it into a message format.
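Before moving on to the message format itself, here is a minimal sketch of that canonical extraction step, assuming a hypothetical local schema with patient, problem, and allergy tables. The table names, column names, and function are illustrative, not taken from any particular system.

    # Canonical interoperability (sender side): pull rows out of the native
    # schema into a neutral in-memory structure a message builder can use.
    # The schema below is hypothetical and exists only for illustration.
    import sqlite3

    def extract_patient_record(conn: sqlite3.Connection, patient_id: str) -> dict:
        """Flatten one patient's data out of the local tables."""
        cur = conn.cursor()
        cur.execute("SELECT id, family_name, given_name, birth_date "
                    "FROM patient WHERE id = ?", (patient_id,))
        pid, family, given, dob = cur.fetchone()  # assumes the patient exists

        cur.execute("SELECT local_code, description, onset_date "
                    "FROM problem WHERE patient_id = ?", (patient_id,))
        problems = [{"local_code": c, "description": d, "onset": o}
                    for c, d, o in cur.fetchall()]

        cur.execute("SELECT local_code, substance, reaction "
                    "FROM allergy WHERE patient_id = ?", (patient_id,))
        allergies = [{"local_code": c, "substance": s, "reaction": r}
                     for c, s, r in cur.fetchall()]

        # A neutral structure, shaped with the target message format in mind.
        return {
            "patient": {"id": pid, "family": family, "given": given, "birth_date": dob},
            "problems": problems,
            "allergies": allergies,
        }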

Taking my information and formatting it into a standard message is typically referred to as syntactic interoperability, because I’m taking my data and putting it into a syntax, a shared syntax, so that theoretically other people can unpackage it. The challenge is that the standards we have in healthcare, HL7, CCDA, FHIR, have some level of ambiguity, and so the running joke is that if you’ve seen one HL7 file, you’ve seen one HL7 file. They all follow a standard set of rules or guidelines, and the job of syntactic interoperability is to take the information I’ve extracted from my system and package it up into a message format that represents some agreement between me and the other people who have agreed to use that same format to receive and send data. That’s usually pretty straightforward if you understand the rules, and the tighter the rules, the more successful you’ll be. But the bottom line is you take the data you’ve pulled out of your system and put it into the message format, whether it’s XML or pipe-delimited or JSON.
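Here is a minimal sketch of that packaging step, continuing the hypothetical structure from the extraction sketch above. It builds a simplified, FHIR-flavored JSON bundle; a conformant FHIR Bundle has more required structure than this.

    # Syntactic interoperability (sender side): place the extracted data into a
    # shared message syntax. This is a simplified, FHIR-flavored JSON bundle,
    # not a fully conformant FHIR resource.
    import json

    def build_message(record: dict) -> str:
        patient = record["patient"]
        entries = [{
            "resourceType": "Patient",
            "id": patient["id"],
            "name": [{"family": patient["family"], "given": [patient["given"]]}],
            "birthDate": patient["birth_date"],
        }]
        for prob in record["problems"]:
            entries.append({
                "resourceType": "Condition",
                "subject": {"reference": f"Patient/{patient['id']}"},
                "code": {"coding": [{
                    "system": "urn:example:local-codes",   # the sender's local code system
                    "code": prob["local_code"],
                    "display": prob["description"],
                }]},
                "onsetDateTime": prob["onset"],
            })
        return json.dumps({"resourceType": "Bundle", "type": "collection",
                           "entry": [{"resource": e} for e in entries]}, indent=2)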

Those are the syntactic rules you’re following as you fill in the message. Once you have your package built, your message is all packaged up, the next thing is physical interoperability. That’s when you take the data and you move it from the sending system to the receiving system. Now, that’s what I would consider to be the easy, low-hanging fruit of interoperability, because whether I push data across the internet, stick it on an FTP site, put it on a memory stick, a CD, or a nine-track tape is irrelevant. The bottom line is that getting the information, the digital data, the package, moved from point A to point B is trivial. Assuming whatever levels of encryption or security are necessary, I should be able to get the data from point A to point B. That’s the easy part. And by the way, if there are requirements in the message format that require that I use certain code systems or terminologies, I should point out that when I’m doing the packaging, I’m also going through the last layer of interoperability, which is semantic interoperability.
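As a sketch of the physical layer just described, here is one way the packaged message might be moved over a secure channel, using the third-party requests library. The endpoint URL and bearer token are placeholders; any secure transport (HTTPS, SFTP, a queue) plays the same role.

    # Physical interoperability: move the packaged message from point A to point B.
    # The endpoint and token are placeholders for whatever the receiver exposes.
    import requests

    def send_message(message_json: str, endpoint: str, token: str) -> int:
        response = requests.post(
            endpoint,                                  # e.g. the receiver's intake API
            data=message_json.encode("utf-8"),
            headers={"Content-Type": "application/fhir+json",
                     "Authorization": f"Bearer {token}"},
            timeout=30,
            verify=True,                               # keep TLS certificate checking on
        )
        response.raise_for_status()                    # fail loudly on a transport error
        return response.status_code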

Semantic essentially means the meaning of something. And what I need to do is, if my messaging format requires that I use RxNorm for drugs and SNOMED for problems, then I have to go through another layer where I take the terms that came from my system and transcode them, or translate them, whatever you want to call it, from the code systems that I use to the code systems that are required, whether it’s FHIR or HL7 or CCDA. Now, in a lot of cases that isn’t done by the sender; the sender just says, “This is what I’ve got.” And in some cases that actually might be good, and I’ll explain that in a minute. So I’ve got my package, I’ve physically moved it to where I can receive it, and so now I, as the receiver of the message in my system, am going to take that package and go through my steps.
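Before moving on to the receiving side, here is a minimal sketch of that transcoding step, assuming a hypothetical local-to-SNOMED CT map. The local codes and the map contents are illustrative; real maps are curated and far larger.

    # Semantic interoperability: transcode local problem codes into the code
    # system the message format requires (SNOMED CT in this example).
    from typing import Optional

    LOCAL_TO_SNOMED = {
        "LOC-DM2": {"system": "http://snomed.info/sct",
                    "code": "44054006", "display": "Diabetes mellitus type 2"},
        "LOC-MG":  {"system": "http://snomed.info/sct",
                    "code": "91637004", "display": "Myasthenia gravis"},
    }

    def transcode(local_code: str) -> Optional[dict]:
        """Return the standard coding, or None so unmapped codes can be flagged."""
        return LOCAL_TO_SNOMED.get(local_code)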

The first thing I’m going to do is unpackage it. I’m going to do my syntactic interoperability by reading the message, finding the things and where they are, and pulling them out. The next thing I’m going to do is semantic interoperability, where I take the codes that are embedded in the message and make sure they’re in coding systems that my software understands, because if I don’t do that, the data I’m getting is going to be gibberish. So I’m going to go to a map that maps the code value from the sending system to the code value of my system so that I can take advantage of the data. The next thing I’m going to do is canonical interoperability: I’m going to take the things that are in the message and figure out how to put them into my data schema so that my system can take advantage of them.
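Here is a minimal sketch of that receiving pipeline, continuing the hypothetical FHIR-flavored bundle from earlier: parse the message (syntactic), translate the incoming codes to the local code system (semantic), and reshape the result into the local schema (canonical). The inbound map and field names are illustrative.

    # Receiver side: the same layers, in reverse order from the sender.
    import json

    INBOUND_CODE_MAP = {  # (system, code) as sent -> my system's local code
        ("http://snomed.info/sct", "44054006"): "MY-DM2",
    }

    def receive(message_json: str) -> list:
        bundle = json.loads(message_json)                    # syntactic: unpackage
        local_rows = []
        for entry in bundle.get("entry", []):
            resource = entry["resource"]
            if resource.get("resourceType") != "Condition":
                continue
            coding = resource["code"]["coding"][0]
            key = (coding.get("system"), coding.get("code"))
            local_code = INBOUND_CODE_MAP.get(key)           # semantic: transcode
            local_rows.append({                              # canonical: my schema
                "patient_id": resource["subject"]["reference"].split("/")[-1],
                "problem_code": local_code,                  # None means "still needs mapping"
                "source_system": coding.get("system"),       # always retain the source coding
                "source_code": coding.get("code"),
                "source_display": coding.get("display"),
            })
        return local_rows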

So I go through these layers of interoperability. I’ve moved the information from point A to point B. I’ve transformed the information not only into the code system I understand, but into the structure that my system uses to represent that data. So that is the trip of interoperability: getting the information from elsewhere into my system. Now, what are the benefits of doing this? If I do it well, it’s that I have more information. If I’d only captured four conditions and two allergies, and I get additional information that tells me the patient has three other conditions, is on four different medications that I didn’t know about, and has additional allergies, well, that makes my data quality better, and that translates to better reporting and better decision support. The more data I have, the more complete the picture of my patients, and the better job I’m going to be able to do, not only acting upon it, but representing it to people that care in a value-based care world.

So what are some of the things that you need to know if you want to be successful with interoperability? If you look at the different layers of interoperability, hopefully you have a solid engineering team; they’re able to get the data out and do whatever canonical shifts are necessary to meet the syntactic requirements. The syntactic interoperability can be challenging because of the variability in the way that messaging standards are handled, but it’s also pretty straightforward. Physical interoperability, assuming that you can get a secure channel from point A to point B, is also pretty ubiquitous. The semantic interoperability is a challenge, primarily because of the word “semantic.” The word “semantic” refers to meaning. So every time we interoperate semantically, we run the risk of altering the meaning of the information. It’s one of the reasons why I’m not a huge fan of normalizing data or semantically transforming data that I send out of my system, because every time I do some kind of semantic translation, I introduce kind of that “whisper-down-the-lane” effect.

So I might change it just a little bit, but that means what I’m sending you is a little bit different than what I actually captured in my system. And then you get that, and you probably have to semantically translate or transform that to take it into your system, and so you’ve changed it again. So there are at least two levels of semantic translation between you and what was actually sent, at least two, and every time you do that you run the risk of changing the meaning of the data. If it were up to me, I would say you don’t translate the data coming out of your system. You just provide it: “Here’s what I have. You want what I have? Here’s what I have.” Whether it’s a local code or a standard code, anything I do to introduce additional translation is potentially creating uncertainty, and one of the biggest challenges with interoperability in healthcare is that we create a significant amount of uncalibrated uncertainty.

Now, when you receive that data, you go through semantic translation. Typically in healthcare we have a history of doing, like, the top 10%. The problem with that approach is that the whole point of interoperability is to complete the picture. Just because I capture the high-volume labs or the high-volume allergies or problems, it doesn’t mean I’m completing the picture. It doesn’t mean that the low-volume term that came through isn’t really important to my understanding of what’s going on with the patient. In fact, I would argue that the low-volume things are the things that I might not already know, because they’re not common. The common stuff is common for a reason. The fact that the patient’s diabetic is probably in both systems, but the fact that they have myasthenia gravis might not be. Because it’s a low-volume term, if I don’t address it, I’m still missing that information, and not having that information could impact my ability to care for the patient or make a decision.

So one of the challenges is this mentality that we’ve had for a long time, which is, “Well, we’ll just do the high-volume things.” I find that unacceptable. I mean, just from my perspective, when you’re talking about data quality, you either have data quality or you don’t. You don’t have 10% high quality; 10% high quality by definition is not high quality. You really have to strive to map everything that’s coming in. If you want to use it in a meaningful way, you have to map everything. Now, when you’re looking at a particular domain in a given channel, there are always going to be things that are not appropriate. There are going to be things in your drug feed that are adult diapers. There are going to be things that are billing artifacts. You don’t have to map those. You identify the things that really aren’t appropriate and you carve them out, but if it belongs in the domain, if it’s appropriate clinical data, you should map it no matter what and find a way to do that.
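As a rough sketch of that triage, here is one way an inbound feed might be split into mapped items, carved-out items, and a remainder queued for review rather than ignored. The keyword rules and field names are illustrative.

    # Triage an inbound feed: carve out items that don't belong in the clinical
    # domain, map what can be mapped, and queue the rest for review instead of
    # dropping low-volume terms.
    NON_CLINICAL_MARKERS = ("adult diaper", "billing", "admin fee")

    def triage(feed: list, code_map: dict) -> dict:
        carved_out, mapped, needs_review = [], [], []
        for item in feed:
            text = item["description"].lower()
            if any(marker in text for marker in NON_CLINICAL_MARKERS):
                carved_out.append(item)          # documented as out of domain
            elif item["local_code"] in code_map:
                mapped.append({**item, "mapped_code": code_map[item["local_code"]]})
            else:
                needs_review.append(item)        # human or automated mapping follows
        return {"mapped": mapped, "carved_out": carved_out, "needs_review": needs_review}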

Whether it’s human-powered or whether it’s automation, find a way to do it. The next thing is, you have to remember that the onus is on the receiver. You are receiving data to make your world better in your system. You have to assume that you’re going to have to do the work. We tend to get wrapped up in the Spiritus Mundi, “everybody join hands and interoperate,” but in the real world, in the trenches, you have data coming from a bunch of other places. It is your job to wrangle that data to benefit your system. It’s nobody else’s job but yours. The next thing is: always retain the source data. Even if somebody translates data going out, I would always look at the source data. Always have the source data, always be able to go back to the source data. Having the source data is the only way to combat whisper-down-the-lane.
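One simple way to honor that, sketched below, is to store the source coding alongside the translated form, so any later question can go back to exactly what was received. The field names are illustrative.

    # Keep the received coding next to the translated one; never overwrite the source.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class ReceivedProblem:
        patient_id: str
        local_code: Optional[str]   # my system's code after semantic translation
        source_system: str          # code system exactly as it arrived
        source_code: str            # code exactly as it arrived
        source_display: str         # display text exactly as it arrived
        raw_message_id: str         # pointer back to the stored original message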

Another challenge is that codes can’t always be trusted. When you get data from other systems, you might be getting data that’s not right, and so the next question is how you decide whether or not to add that data to your patient’s record. This is a data quality thing, and there’s no real easy answer for it. Let’s say, for example, you get data on a patient and the data comes back and says the patient’s diabetic. Now, there are a couple of reasons why that information could be wrong. It could be the wrong patient; there’s some kind of a patient-matching failure. It’s not John Doe, it’s John Smith. The other possibility is that somebody put the data in in error. The bottom line is that you now have a record that indicates the patient is diabetic. How do you know whether or not that’s true? Well, one way to do that is to put it in front of the provider and say, hey, is your patient diabetic?

Because this information came from another system. I would imagine that providers hate that, but that’s one way to do it. You can have a provider look at everything that comes in from another system and adjudicate it and say, “No, the patient’s not diabetic. I know they’re not diabetic.” You could also theoretically put it in front of the patient: “You know, we got information from another system saying you’re diabetic. Are you diabetic?” “No, I’m not diabetic.” You could also put things like inferences in place, so that when data comes in from somewhere else, it looks for things that indicate whether or not it can be true. Did I already think the patient was diabetic, yes or no? Is the patient taking insulin? Does the patient have an elevated hemoglobin A1C or a history of an elevated hemoglobin A1C? I know most systems quarantine incoming information until some event takes place that allows them to include it. They don’t blindly include information from other places, and that’s probably wise, but it also acts as a gate or a bottleneck to getting that information to someplace where we can use it. So it’s kind of a blessing and a curse if you’re gating that information, and it’s a double curse if you’re gating it based upon a provider reviewing it, just because it’s one more thing that the provider doesn’t have time to do. So look for strategies, and I’m a big fan of automation strategies if you can pull them off, that look at data and say, “Looks like it’s right.” And even better, if you have a system that can indicate information that’s questionable and have a notion of uncertainty that says, “Well, I got this from somewhere else, so I don’t know that I can treat it as verified, but it’s there in case we need it.”
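Here is a rough sketch of that kind of inference, along the lines described above: look for corroborating evidence already in my system and attach an uncertainty label instead of blindly accepting or blocking the assertion. The thresholds, field names, and labels are illustrative.

    # Plausibility check for an inbound "patient is diabetic" assertion.
    def assess_diabetes_assertion(chart: dict) -> str:
        evidence = 0
        if chart.get("has_diabetes_on_problem_list"):
            evidence += 1
        if any("insulin" in med.lower() for med in chart.get("medications", [])):
            evidence += 1
        if chart.get("latest_hba1c", 0.0) >= 6.5:     # A1C (%) in the diagnostic range
            evidence += 1
        if evidence >= 2:
            return "accept"             # corroborated: add it to the record
        if evidence == 1:
            return "accept-unverified"  # keep it, but flag it as externally sourced
        return "quarantine"             # hold it for provider or patient confirmation

    # Example: corroborated by an insulin order and an elevated A1C.
    print(assess_diabetes_assertion({
        "has_diabetes_on_problem_list": False,
        "medications": ["insulin glargine 100 unit/mL"],
        "latest_hba1c": 8.1,
    }))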

The other thing to think about is that more is not always better. You get information from a bunch of different places, and one of the things you need to be successful with interoperability is knowing how to consolidate and summarize data. When we started accumulating data in healthcare, when we went from an episodic model to more of a continuum or longitudinal model, we just started to accumulate. And the problem is that the terminologies we have in healthcare, being pre-coordinated, automatically create a lot of variability: a lot of diabetes with this, with that, with the other thing. And so you end up having, you know, 40 things to really say three things if you’re not careful. I don’t have an easy solution for this. I think this is something we need to innovate through as an industry, but it’s “How do we take all this information and consolidate it, and deduplicate it in a meaningful way, so that the relevant bits make it through, but I’m not blasted with a junk drawer of a million things when I’m just trying to figure out what’s going on with the patient?”

The other thing that is a challenge in our industry is something that I call conceptual echo. Conceptual echo is where you send me something and I semantically translate it into what I understand, and then at some point I produce an output and send it to you, and because it’s different than what you got before, because I translated it, you translate it into something different than what you originally sent. Now you have two bits of information, and then you package that up and send it back to me. You can see where I’m going with this. It’s like that old commercial back in the 70s, where they told two friends, and so on, and so on, and so on.
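Here is a rough sketch of the kind of consolidation and deduplication just described, which is also what damps conceptual echo: decompose pre-coordinated terms into their component concepts and collapse duplicates. The decomposition table is illustrative; real terminology tooling would drive it.

    # Consolidate pre-coordinated problem terms so the same concept arriving in
    # several combined forms shows up once.
    DECOMPOSITION = {
        "chf with end stage renal disease and malignant hypertension":
            {"congestive heart failure", "end stage renal disease", "malignant hypertension"},
        "diabetes mellitus type 2 with neuropathy":
            {"diabetes mellitus type 2", "diabetic neuropathy"},
    }

    def consolidate(problem_terms: list) -> set:
        concepts = set()
        for term in problem_terms:
            key = term.lower()
            concepts |= DECOMPOSITION.get(key, {key})   # unmapped terms pass through as-is
        return concepts

    # Three incoming strings collapse to five distinct concepts; the standalone
    # "Congestive heart failure" entry folds into the decomposed combination term.
    print(consolidate([
        "CHF with end stage renal disease and malignant hypertension",
        "Congestive heart failure",
        "Diabetes mellitus type 2 with neuropathy",
    ]))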

Because we’re doing semantic translation, semantic interoperability, at some point you always run the risk of creating informational echoes of data that you’ve sent out previously. And that information can compound if you don’t have an intelligent mechanism to do what we talked about a minute ago, that generalization and deduplication that says these things are really the same thing. This is also just a side effect of pre-coordinated terminologies, where we take these codes and compound them internally, where we say things like “congestive heart failure with end stage renal disease and malignant hypertension.” We start putting these terms together, and if I were to send you those things separately, now you have the combined term and the individual terms, and then it can just get worse from there. So the trick is not to let your interoperability mechanisms create a snowball of information that ultimately buries the provider, so that instead of creating data quality, you just create unbridled data quantity, and that’s not necessarily a good thing.

So anyways, this was our primer on interoperability. I hope that you found it useful. I think there are huge benefits to doing healthcare interoperability well. There are some challenges, and there are ways to get around those challenges with a little elbow grease and a little intelligent design. Just remember that the goal is data quality. Always focus on the goal of data quality, and when you’re looking at these situations where data is coming in, remember that the onus is on you as the receiver to manage the incoming information in a way that benefits the quality of the data you provide to your systems and users. If that’s your North Star, it’ll help you make the right decisions. Well, I want to thank you again for listening in. This is Charlie Harp signing off on the Informonster Podcast. Until next time, wishing you the best of luck in taming your Informonster.