As I mentioned in my previous post, all of statistics revolves around two key concepts – describing what is 'typical' and describing 'how typical the typical thing is'. If we describe what is typical without showing how typical it is, we risk over-simplifying some very complicated things. And if we do the reverse, we risk making them impossible to understand.
In statistics, we are always trying to balance simplicity and complexity so that we can say a lot in just a few numbers (something we devotedly call 'parsimony'). Finding that balance is an art and it is why people who have a deep understanding of statistics are so valued. They can find the essence of truth in a very complex array of data and can express it with brevity and clarity.
In this post, we are going to focus on central tendency, the 'typical' side of the equation. It is really where statistics begins. But I don't want to lose sight of variability too much because the two ideas really cannot function without each other. Indeed, you will see we cannot even talk about central tendency without variability butting in fairly frequently.
Let's just be honest. The average is everyone's favorite statistical procedure. It just is. It is everyone's go-to way to summarize data.
And there are reasons for this. (1) The average (or the mean) is probably the single most well-known statistical concept. (2) Almost every complex statistical procedure revolves around it. (3) And almost every third grader can tell you how to compute it. So, yes, it is a little spoiled by the statistical gods.
But we don't have to be enablers. The mean is not the only measure of central tendency, and not always the right one to use. In fact, in some of the data we use around the hospital, it can be pretty misleading. So let's take a look.
There are three ways to measure central tendency (where the center tends to be):
The mean: This is the average. We learned in grade school that you just add up all the values in a group and divide by the number of things in the group, and – voila! – you get the average number.
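That grade-school recipe translates directly into code. Here is a minimal sketch in Python, using a short list of made-up ages for illustration:

```python
# Grade-school mean: add up all the values, divide by how many there are.
ages = [3, 5, 8, 8, 11]  # hypothetical ages, for illustration only

mean_age = sum(ages) / len(ages)
print(mean_age)  # 7.0
```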
Figure A shows the distribution of the ages of the children from the previous post. You can see that while the mean is 8, there is a lot of difference among the ages. So maybe we would feel that saying 'the mean is 8' really does not tell us much. Maybe we would be better off (that is, more accurate) just to say, 'there were 16 kids in this group, ranging from age 1 to age 14' – because there is not much happening in the center. There is a great deal of variability around the center, so much so that the center does not really feel like a true center.
The mode: This is the easiest of all. It is just the number that occurs most often in a group. In our age example (Figure A), it is 8, because 8 is the age with the greatest number of kids. In Figure B, below, the distribution actually has two modes – 1 and 2 years of age. A distribution can have more than one mode. If there are two modes, it is called 'bimodal' and if there are three or more, it is called 'multimodal'. The mode tells you the age that occurs most often, but not really anything else. It comes in handy when the mode dominates a distribution. (We will talk about that in a later post.)
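Python's standard library handles the multiple-mode case directly. A small sketch with made-up values:

```python
from statistics import multimode

# One value occurs most often -> a single mode.
single = multimode([1, 2, 2, 3])     # [2]

# Two values tied for most frequent -> a bimodal distribution.
double = multimode([1, 1, 2, 2, 3])  # [1, 2]

print(single, double)
```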
The median: The median stumps people. But it is really pretty straightforward. It is the number that 50% of the cases fall at or below. In Figure B, the median is two – 50% (or 8) of the children are at age 2 or younger.
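In code, the median is just the middle of the sorted data. A minimal sketch with made-up values, showing the even-count and odd-count cases:

```python
from statistics import median

# Even number of values: average the two middle values.
even = median([1, 2, 2, 4])      # 2.0

# Odd number of values: take the middle value itself.
odd = median([1, 2, 2, 4, 9])    # 2

print(even, odd)
```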
Note that in Figure A, all of the measures of central tendency are pretty close together (7-8-ish). But in Figure B, they are not. The median and mode are a lot lower than the mean. This is typically the case in distributions that statisticians call 'skewed,' 'having a long tail' or 'having outliers'.
Having said that, I want to pause for a second. Variability has interrupted our discussion twice so far. I am not sure you noticed. First, we noted that when there is 'a lot' of variability, the mean is not really very informative. Maybe we should instead use the concept of range since there is no strong center. Second, we learned that when the distribution is skewed to one side, the mean tends to separate from the other measures of central tendency and in a way makes it 'feel less central,' pushing it to one side. Variability is always pestering the mean.
We now stand squarely in front of the number one problem with using the mean: It tends to get pulled all over the place by outliers and skewed distributions. Take Figure B, above. Eyeballing the graph, most people would say the 'typical' case is somewhere under age 3 or maybe 4. Indeed, the median is age 2 (half of the children are age 1 or 2). But the outliers (the 11-, 13-, 14- and 15-year-olds) pull the mean all the way up to age 5, which suggests that 5 is the typical age. But it isn't, is it? There is only one five-year-old, and only 4 of the 16 children are older than 5. In this case, common sense really does not agree with the mean. So we need another tool.
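You can watch the outliers do their pulling with a hypothetical set of 16 ages in the spirit of Figure B (these are assumed values for illustration, not the actual figure data):

```python
from statistics import mean, median, multimode

# Mostly very young children, plus four much older outliers.
ages = [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 4, 11, 13, 14, 15]

print(mean(ages))       # 4.6875 -- the four oldest children drag it up
print(median(ages))     # 2.0    -- half the children are age 2 or younger
print(multimode(ages))  # [1, 2] -- bimodal
```

Drop the four oldest children from the list and the mean falls back near the median; the median itself barely moves.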
The median offers a very strong correction to the mean's instability. This is important in health care, where we have many, many distributions with outliers. For example, take length of stay for children with asthma in our hospital (the data presented here are hypothetical) –
The mean is almost twice as high as the median. This is important. If we decided we wanted to decrease the average length of stay from 3.4 to 3, we would be guided to focus on the outliers (patients who had unusually long lengths of stay). In this hospital, that would lead us to focus on patients who have complex conditions and for whom an asthma exacerbation is just one piece of a more complicated clinical picture.
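A small sketch shows how a handful of long stays can open that gap between mean and median. The lengths of stay here are made up to match the pattern described, not the hospital's actual data:

```python
from statistics import mean, median, mode

# Hypothetical lengths of stay, in days (illustrative, not real data).
stays = [1, 1, 2, 2, 2, 2, 3, 3, 8, 10]

print(mean(stays))    # 3.4 -- pulled up by the two long stays
print(median(stays))  # 2.0
print(mode(stays))    # 2
```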
But maybe there is more to be gained by focusing on the median and mode. If we focus on the mode, we might ask why so many kids need one extra day in the hospital. Could we look at the patients who need three days and ask if perhaps some of them should have been here only two days? In that case, we might find that discharge planning could help.
The point is that each statistic might lead you to think about the problem differently and then might suggest different solutions. So it matters which one you choose.
The average is the favorite child (it is incredibly useful), but it is not always the right tool to use in interpreting data. Its particular weakness is that variation affects it a great deal, so much so that it might be a long way away from what one might think of as the center of a distribution.
A final note: Even with the mean's flaws, financial folks stick to it pretty tightly. That is because it has the magical quality of 'containing all the variability in a distribution.' Here is what I am getting at. If you have the mean and the number of cases (for example, the average length of stay is 7 and the number of discharges is 23), you can compute the total amount of whatever you are measuring (in this case, total patient days; just multiply 7 times 23, or 161).
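The arithmetic is one multiplication, using the numbers from the example above:

```python
# Total = mean x count: the mean 'contains' the whole distribution's sum.
average_length_of_stay = 7   # days per discharge
discharges = 23

total_patient_days = average_length_of_stay * discharges
print(total_patient_days)    # 161
```

Note that the median offers no such shortcut: knowing the median and the count tells you nothing about the total.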
In finance, totals are really, really important. Think of the number one thing you want to know about your bank account – you want the total amount of money, you don't really care about too much else. Experts in finance and economics like the flexibility that the mean has for this reason. They will always want the mean displayed, even if they know the median or mode is a better indicator of central tendency (and they will compute the total in their heads if it is not displayed on the table).