Moving the needle on outcomes, part 1:
Logic models help you focus on outcomes
Coming from the public health and program evaluation world, I take 'logic models' for granted and am always confused when people don't use them or don't know what they are. And whenever I walk a group through building a logic model — whether they are working on an improvement project, a research project, or an operational reporting project — the project leaders always say something like, 'Wow! What we are doing really makes sense!'
So what is a logic model? It is a graphical representation of the resources, activities, processes and outcomes for improvement efforts or other organizational changes. Its job is to tie cause and effect together in a way that puts illogical assumptions right in front of everyone's noses so they can be fixed.
It also ties the metrics that measure success right to the activities. This helps reduce the scope of the data effort to metrics that directly reflect the value of the intervention. And it provides clear guidance to the analyst supporting the project.
Finally, logic models are great for groups to build together, and it is good to have a devil's advocate in the room to question everyone's thinking. It is not at all uncommon for members of project teams to have different assumptions about what the intervention is and how it can improve outcomes. Completing a logic model together surfaces these different assumptions and gets everyone on the same page.
There are many different versions of logic models. The one I am going to go through now is a good one for the project team to complete together. Once this one is completed, you can have lots of fun simplifying it and making it more graphically pleasing. A simplified logic model can be a great way to introduce your intervention to a broader audience, like staff members you need to train to carry out an intervention or leaders whose buy-in is important.
The logic model has seven components (see Figure 2).
- A statement of the problem you need to solve (the gap/need)
- A statement of how you are going to solve it (the intervention, tactic or activity)
- A description of the target audience (whose behavior do you need to change?)
- A description of outputs (a measure that the intervention is launched and in the field; usually something the project team puts into place – like a training, a new policy, or equipment purchased)
- Process metrics (these show that the target audience is changing their behavior, the first indication that the intervention is taking hold)
- The outcome metric (this is usually a single metric and should 'match' the gap/need statement)
- The balancing metric (this is a metric that identifies unintended consequences that you want to prevent)
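The seven components above can be thought of as a simple, ordered structure. Here is a minimal sketch in Python; the class, field names, and the readmissions example values are hypothetical illustrations, not a standard template:

```python
from dataclasses import dataclass

@dataclass
class LogicModel:
    """The seven components of a logic model, in order."""
    gap_need: str                 # the problem you need to solve
    intervention: str             # how you are going to solve it
    target_audience: str          # whose behavior needs to change
    outputs: list                 # evidence the intervention is launched and in the field
    process_metrics: list         # signs the target audience is changing behavior
    outcome_metric: str           # a single metric that should 'match' the gap/need
    balancing_metrics: list       # unintended consequences you want to prevent

# A hypothetical readmissions example, echoing the checks later in the post:
model = LogicModel(
    gap_need="30-day readmission rate is too high",
    intervention="Standardized discharge teaching for heart-failure patients",
    target_audience="Discharge nurses and attending physicians",
    outputs=["Nurse training completed", "New discharge checklist in place"],
    process_metrics=["% of discharges using the new checklist"],
    outcome_metric="xx% reduction in 30-day readmissions",
    balancing_metrics=["Average length of stay"],
)
```

Note that the outcome metric is a single string by design: holding the team to one outcome metric keeps the project focused.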
The first step is to write the problem statement (the gap/need). The second step is to agree on the outcome metric. It may not seem 'logical' to skip to the end of the logic model, but the problem statement and the outcome metric should 'match,' and together they serve as the brackets for the rest of the work. They hold the group to 'true north.'
The third step is deciding on the balancing metric. It is not always easy to anticipate unintended consequences, but thinking about them ahead of time will help you shape your intervention to avoid them.
Once you have those three pieces in place, go back to the Intervention column and start working your way across.
Check your work
The benefit of the logic model exercise is that it forces everyone to see any illogical connections in the project team's approach to solving the problem and to fix them upfront. Here are some things to focus on (see Figure 3):
(1) The outcome should 'match' the gap/need. For example, if the need is to reduce readmissions, the outcome should be 'xx% reduction in readmissions.' No fancy stuff here.
(2) The process metric should logically produce the outcome. If your process metric is '% of discharged patients who are given purple balloons,' you would not be able to presume the process would reduce readmissions. It is not logical. This is where a devil's advocate can be useful.
(3) The expected outputs should logically result in the process changes. If the readmissions reduction effort launches in May but you are not training the nurses until July, the readmissions targets cannot possibly be met.
(4) The target audience needs to have the training, time, tools, incentives, and authority that could result in a change to the process metric. For example, if nurses decided to try to discharge patients earlier, but the target audience did not include attendings, discharges would not change because only attendings can write discharge orders. Nurses are part of the team that can make a discharge optimization project work, but they cannot do it on their own.
(5) The balancing measures should be quantifiable. These can be hard to pin down because they are often a surprise. But it is helpful to think through one or two things that your intervention might inadvertently trigger and that you want to avoid.
You may need more than one of each of the metrics. But one is often enough and I think having a single outcome metric is almost required. Otherwise, the project team can easily lose focus.
Once you have a logic model, it is fairly straightforward to build a data plan for the project (defining each metric listed in the logic model). And once you have that, the analyst will have a good handle on what you need.
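As a rough sketch of that hand-off (the function and field names here are hypothetical, not a prescribed format), a data plan can be as simple as one definition stub per metric in the logic model, which the analyst then fills in:

```python
def build_data_plan(metrics):
    """Turn the logic model's metric names into definition stubs for the analyst."""
    return [
        {
            "metric": m,
            "numerator": "TBD",           # what counts as an event
            "denominator": "TBD",         # the eligible population
            "data_source": "TBD",         # where the analyst pulls it from
            "reporting_frequency": "TBD", # how often it is refreshed
        }
        for m in metrics
    ]

# One stub each for the process, outcome, and balancing metrics:
plan = build_data_plan([
    "% of discharges using the new checklist",  # process metric
    "xx% reduction in 30-day readmissions",     # outcome metric
    "Average length of stay",                   # balancing metric
])
```

Because every row traces back to a metric in the logic model, the data effort stays scoped to measures that directly reflect the value of the intervention.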