Learning healthcare systems, part 2:

How some of the pieces fit together

September 10, 2018

One of the things I have thought long and hard about is how the different disciplines that focus on improving outcomes, processes and services work together (and how they don't).  At least three fields have this focus (quality/process improvement, program evaluation and clinical/translational research), but they do not always talk to each other well, and sometimes they step on each other's toes.  For example, quality improvement projects are sometimes delayed because the leads want to do a more thorough literature review, or want to explore baseline data more thoroughly, as is typical in the research world.  And if multiple changes lead to an improvement and are easy to implement, QI does not focus on which change was most impactful, but researchers would want to.

This would not be a big deal, except that in many hospitals all three are essential to the learning health system.  We need to interweave them; we need to make sure they can talk to each other.  And since analysts support all three, we have to be able to make heads or tails of requests related to each of them.  But analysts sometimes find themselves in the middle, meeting a wide range of needs from a single project team, a project team that is not always aware of the nature of these overlapping objectives.

This appears to be a problem everywhere in healthcare. I recently attended the annual research meeting for Academy Health (the premier health services research professional organization in the U.S.).  There was a lot of consternation about 'why hospitals don't listen to researchers!'  And when I attend Children's Hospital Association quality improvement meetings, I wonder, 'Why don't any of these people pay attention to program evaluators (of which I am one)? It would really help them.' – and my colleagues who are quality improvement specialists most likely wondered why I even asked that question, and probably wondered why I did not listen to them.

Recently, I ran across a book by a nurse researcher, Karen Monsen (Intervention Effectiveness Research: Quality Improvement and Program Evaluation), that deepened my thinking.  She walks through the difficult process of finding core teachings from each of the disciplines and shows how we can have a single science that integrates them, which she calls 'intervention effectiveness research.'  Monsen only tries to find common ground for program evaluation and quality improvement, but I think we can stretch that to include clinical/translational research in many instances.

So I would like to make two claims to clarify how these sciences work together and where they differ.  I think these claims can help us figure out how to build a successful learning health system.  Perhaps you can see if the two claims pass the smell test.

Claim 1: Karen Monsen is right; the three disciplines (quality/process improvement, program evaluation and clinical/translational research) share a core methodology.  And this methodology is based on the 150-year development of scientific methods that researchers rely on.

The truth is that all of our approaches to statistical and quantitative methods spring from the fountain of research methodology.  Research methods teach us how to make precise comparisons, understand trends over time, and sort the signal from the noise.  These methods did not exist 200 years ago.  We have developed them over the last 150-200 years, and they have helped us advance scientific knowledge and societal improvement significantly.

Table 1 is my attempt at summarizing how these three fields pull their approaches from 'traditional' research.  By 'traditional' research, I mean research as it is traditionally conceived.  In healthcare, this is often what we think of as 'basic' research.  But other fields have their own versions of traditional research.

Table 1: Comparison of translational research, program evaluation and quality improvement

To be sure, the table shows some key breaks from tradition.  For instance, while research is traditionally understood as seeking knowledge for the sake of knowledge, all three of these disciplines seek knowledge for the here and now, knowledge that is unique to a particular setting.  They each rely on different levels of control over the environment to get that knowledge.  And while QI is not usually considered very 'researchy,' QI experts try to speed up the adoption of best practices that evolve from research.

Quality improvement is the most context-dependent.  A quality improvement specialist is rarely interested in making comparisons between an intervention group and a control group.  QI looks to learn from current patients and practice in order to improve the system.  QI may look at different conditions in the same setting (what impact will this intervention have when the system is under stress?), but rarely compares two settings (for example, two clinics or two hospitals).  Yet the specialist still needs to make rigorous comparisons to know whether the team is making progress, so they trend data over time to do this (that is, they use statistical process control charts).
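To make that idea of trending data over time a little more concrete, here is a minimal sketch of one common statistical process control tool, an individuals (XmR) chart, written in Python.  The monthly numbers and the "wait time" scenario are invented purely for illustration; this is not from Monsen's book or any particular hospital's tooling, just one common way an analyst might compute a center line and control limits.

```python
# Minimal sketch of an individuals (XmR) control chart, a common SPC tool.
# The monthly values below are invented for illustration only.

monthly_wait_times = [42, 45, 40, 47, 44, 38, 41, 36, 35, 33, 34, 31]  # avg minutes, one value per month

# Center line: the mean of the individual values
center = sum(monthly_wait_times) / len(monthly_wait_times)

# Moving ranges: absolute differences between consecutive points
moving_ranges = [abs(b - a) for a, b in zip(monthly_wait_times, monthly_wait_times[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard XmR control limits apply the 2.66 constant to the average moving range
upper_limit = center + 2.66 * mr_bar
lower_limit = center - 2.66 * mr_bar

print(f"center line    = {center:.1f}")
print(f"control limits = ({lower_limit:.1f}, {upper_limit:.1f})")

# Points outside the limits (or long runs on one side of the center line)
# signal "special cause" variation that the improvement team should investigate.
signals = [x for x in monthly_wait_times if x > upper_limit or x < lower_limit]
print("points signaling special-cause variation:", signals)
```

The point is not the arithmetic but the habit it reflects: rather than comparing an intervention group to a control group, the QI analyst compares the system to its own recent history and watches for signals that something has genuinely changed.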

Program evaluators are really in-between traditional researchers and quality improvement specialists.  They can either choose a design that is completely context-dependent, or they can choose one that compares contexts.  It depends on the objectives of the project they have been asked to evaluate.  So if an evaluator is asked to determine whether clinical care guidelines reduce costs generally (across all of health care), they will want data from many different institutions and settings.  If they are asked if clinical care guidelines reduce costs in a single hospital, they just need data from one hospital.

But both disciplines rely heavily on the decades of learning how to compare well and how to set up tests and measurements with rigor.

Claim 2: The three disciplines are engaged in the same project – improving services, processes and outcomes, and all three are participating in developing new knowledge.

The first part of this claim is not controversial, but the second part is.  For complicated reasons related to the history of philosophy (yes, sorry, but it is true), the term 'knowledge' has generally been held to mean something like what researchers term 'generalizable' knowledge – knowledge that holds true in almost all contexts.  An example of generalizable knowledge is math – the rules of mathematics hold no matter what society you belong to, no matter what planet you live on.  Two plus two always equals four.

But social scientists started messing with this idea in the early 20th century when anthropologists linked knowledge to culture.  Math is still math, but beyond things with physical traits, knowledge has come to be understood as much more context-dependent.  A great example of this has to do with how to interact with strangers.  Typical visitors to Chicago from New York City are very confused by how friendly and outgoing strangers are, and the typical Chicagoan visiting New York City is famously confused about how 'brusque' everyone is.  Each place teaches its own rules (knowledge) about 'appropriate' interactions with strangers.  A term that is frequently used for this is 'local' knowledge, which differentiates context-dependent knowledge from generalizable knowledge.

Table 2: Contribution to knowledge development

I tend to think that there is a link between local and generalizable knowledge, but that is not always an easy case to make (and it is not always true, for sure).  What is important here is that even disciplines like process improvement and quality improvement, which have been told over the years that they 'do not produce knowledge,' do indeed produce knowledge (I contend).  The knowledge they produce may not be generalizable knowledge in the strict sense, but it is knowledge that facilitates positive change.  And as a quality improvement specialist works in more and more settings, he or she begins to develop quicker and deeper insight about how to tap into that local knowledge so a solution can be identified more promptly and implemented more effectively.

It is in these patterns of learning from local setting to local setting that many new interventions are born, and some of these go on to be developed into interventions that affect many different kinds of settings.  That is what Table 2 is about.  Table 2 attempts to lay out how each of the different disciplines (quality/process improvement, program evaluation, and translational research) has a role to play in developing knowledge.

Sometimes it works the other way – an idea from a research study will be implemented in a new setting, and that can either confirm what the study found or 'infirm' the research hypothesis.  When a hypothesis does not hold in a new setting, all sorts of new questions surface.

I hope that Table 2 starts to get at how each of the three disciplines contributes to a learning health system, and to the interplay of these disciplines in practice.  Each has its own proper field of operation, but there is a lot of overlap.  The overlap helps explain why analysts are often confused when they are sent on tasks to discover something in data that may seem far afield from the current project, and it may explain confusion in project teams for the same reason.

Does this seem right?  Does this pass the smell test? Does this help us get closer to building our learning health system?  Let me know what you think.

Thanks to Lee Budin, MD, and Susanna McColley, MD, for providing feedback on an earlier draft of this.
