My dad was a super-estimator. After a 50-year career as an engineer and property developer, he had an astonishing ability to estimate the size, shape, or volume of almost anything. Here is an example –
He and my stepmother visited Grand Teton National Park. My stepmother asked my dad, ‘From Jenny Lake to the top of the mountains, how far is that?’ My dad responded, ‘From sea level or the lake? From the surface of the lake, or its floor?’ Then, ‘Hmmm, probably 1500 meters to the nearest mountain peak, no wait, 1800? Let's say about 1700 meters, maybe a little more. I’d say 1755 or 1760, from the surface of the lake to the top of the nearest mountain peak.’ Mt. Moran, which sits roughly between Jenny Lake and Jackson Lake, is 1746 meters above the lakes. Yes, my dad was really good at this.
There are a lot of times we wish we could use our analytic skills to ‘foretell’ the future, not just estimate the size of something. We are asked to anticipate movements in the health care market, make educated guesses about how likely a particular patient is to respond to therapy, and hypothesize about how the outcomes of the care we provide will change if we make an improvement. How do we do that? How do we do it better?
It is worth noting that some of our colleagues seem to have an innate capacity to answer these kinds of questions on the back of an envelope. They ‘guess’ correctly an awful lot. This post is about these super-forecasters, how to pick one out from a crowd, and how to become one.
Philip Tetlock, a well-regarded political psychologist, conducted a large study of this skill set. Tetlock and his colleagues developed a website called Good Judgment on which his team posted dozens of forecasting challenges. Then they invited people to sign up and start forecasting (you can still join – give it a try!). Over three years, about 10,000 people signed up and became dedicated forecasters. At the end of the three years, Tetlock’s crew graded the forecasters and identified the 2% who were consistently more accurate than everyone else – these are the ‘super-forecasters.’ Tetlock’s team studied those 2% and boiled the difference between super-forecasters and the rest of us down to five things.
1. Super-forecasters focus on the right kind of questions – time-limited, specific, falsifiable.
A less forecastable question is ‘Will this patient be readmitted in the next year, for any reason?’
A good forecasting question is ‘Will this patient be readmitted within 7 days after discharge for a related problem?’
Be concrete as often as possible and be as specific as possible in how you define your problem. Make sure each part of your question has an empirically-based ‘yes’ or ‘no’ (that is, it is ‘falsifiable’), and that each part of your question is measurable. In this question, ‘readmitted within 7 days’ is specific and concrete. ‘A related problem’ is much less so. Perhaps it can be replaced with ‘known sequelae’ or ‘exacerbation of the discharge diagnosis.’
As an analyst, you probably have this part of being a super-forecaster down pat. We cannot face a dataset without a pretty specific set of questions to ask it.
2. Super-forecasters acknowledge that ‘yes’ and ‘no’ are almost always the wrong answer, and that ‘maybe’ is not sufficient.
The question above can be improved by shifting it from yes/no to a probability, ‘What is the probability this patient will be readmitted within 7 days after discharge for a related problem?’
Not everyone thinks in probabilities easily, but super-forecasters do. And when it comes to making a decision for an individual patient, you don’t make a partial decision (a patient is not 25% readmitted – they are either readmitted or not). But when you are anticipating future events, you are comparing that patient to the universe of patients and wondering whether he or she fits into the yes group or the no group. Using information from other patients puts you in the realm of probabilities.
3. Super-forecasters break the question into smaller pieces.
Then they ask what information they need to have to answer each question, and then pursue the answers.
Here is a list of questions about the readmission problem that, if you had the answers, you could come up with a better forecast for the specific patient.
4. Super-forecasters follow a specific pattern when breaking questions into smaller ones.
They first do some research and begin with a question about ‘outside’ probabilities – this helps them set a base probability rate to begin with. For example, the base 7-day readmission rate for our hospital is about 4% (answer to question (a), above). That probability is not related to the current patient at all, but it is a good place to begin because it tells you a lot about how our patients generally do after discharge.
Then they make ‘inside’ refinements – that is, they research and find probability rates for things particular to this situation (age, condition, subtype of condition, family stress). If 40% of children with the patient’s condition have an exacerbation after discharge, and 10% of those children require hospitalization, then the likelihood of readmission for this child is about 4% – the hospital average. If half of the patients with exacerbations require hospitalization, then the probability of 7-day readmission would be 20%, quite a bit higher than typical.
Or, let’s say you know nothing about the current child except that he or she is being discharged from the PICU. In that case, you would begin with the hospital’s 4% and then estimate what portion of that 4% are PICU readmits. If you don’t know, but the research literature says PICU discharges are three times more likely to require readmission than other discharges, you would estimate a 12% chance of readmission. You could go on refining your estimate as you gather more and more information.
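The ‘outside view first, inside refinements second’ arithmetic above can be sketched in a few lines. This is just the worked numbers from the text, not real data:

```python
# Outside view: hospital-wide 7-day readmission rate (illustrative figure from the text).
base_rate = 0.04

# Inside refinement: a condition-specific chain of probabilities.
p_exacerbation = 0.40        # children with this condition who have an exacerbation
p_admit_if_exac = 0.10       # of those, the share requiring hospitalization
estimate = p_exacerbation * p_admit_if_exac
print(f"Condition-based estimate: {estimate:.0%}")      # matches the 4% hospital average

# If instead half of the exacerbations require hospitalization:
estimate_severe = p_exacerbation * 0.50
print(f"Higher-severity estimate: {estimate_severe:.0%}")  # 20%

# Inside refinement via relative risk: PICU discharges readmitted 3x as often.
picu_relative_risk = 3.0
estimate_picu = base_rate * picu_relative_risk
print(f"PICU-adjusted estimate: {estimate_picu:.0%}")      # 12%
```

Each refinement replaces a generic rate with one conditioned on what you actually know about the patient; the base rate is the anchor you adjust from, not the answer.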
5. Super-forecasters seek to falsify their assumptions, they add new ideas, adjust, and update their forecasts constantly.
For example, not all stressed families are stressed in the same way or have the same resources to deal with stress. So knowing a patient’s parents are stressed might increase the probability of readmission, but a super-forecaster who later learns that the family has reliable transportation, or has relatives living nearby, would decrease the probability of readmission. In other words, super-forecasters are skeptical about the accuracy of their estimate and keep working at testing and verifying it.
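One hedged way to make this constant updating concrete is the odds-and-likelihood-ratio trick from Bayesian reasoning: convert your probability to odds, multiply by a ratio greater than 1 for evidence that makes readmission more likely (or less than 1 for evidence that makes it less likely), and convert back. The likelihood ratios below are made-up illustrations, not clinical values:

```python
def update(prob: float, likelihood_ratio: float) -> float:
    """Update a probability: convert to odds, apply the likelihood ratio,
    convert back to a probability."""
    odds = prob / (1.0 - prob)
    odds *= likelihood_ratio
    return odds / (1.0 + odds)

p = 0.04                 # start from the hospital base rate
p = update(p, 2.0)       # new evidence: stressed family (assumed LR = 2.0)
p = update(p, 0.5)       # new evidence: reliable transportation (assumed LR = 0.5)
p = update(p, 0.6)       # new evidence: relatives nearby (assumed LR = 0.6)
print(f"Updated 7-day readmission estimate: {p:.1%}")
```

Notice that the transportation evidence exactly cancels the stress evidence here (2.0 × 0.5 = 1), and the nearby relatives pull the estimate below the base rate – which is the point: each new piece of information nudges the forecast rather than flipping it to ‘yes’ or ‘no.’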
Over time, you can get better at this. Three things help:
1. Practice – try, assess, adjust
Becoming a super-forecaster takes time and effort. And you need to track yourself and notice where you make errors. Then you can use that information to make better forecasts in the future.
2. Cultivate good mental habits
Get used to breaking down problems and questions into small pieces, and make it a habit to be willing to learn that you are wrong.
3. Teams, and the right kind of team
When you work in a team, take advantage of the diversity of knowledge on the team. Seek it out. Add people to a team just to broaden the knowledge base. Always test your assumptions and those of others. This constant testing will help you come up with more solid probabilities.
Finally, when you are new to a team of people with good mental habits, one of the things you will notice right away is that there is a sense of ‘psychological safety’ on the team. There might be wrong points of view, but not wrong people. The best forecasts get made when everyone is presumed equally valuable to the team and when information is vetted publicly and rigorously regardless of who puts it out. You learn very quickly that it is not what you know that matters but how you actively think and learn.
So start sharpening your skills, or sign up at Good Judgment to see how you do.