Patient reported outcomes and assessments
Surveys are essential tools that allow providers to get structured, consistent feedback from patients and families. They have long been used in health care, but they have recently attracted renewed attention because they offer a mechanism for a more patient-centered experience of care.
Survey development methods have evolved over the last 70 years in the social and behavioral sciences, which have refined a precise approach to survey design and administration, transforming informal questionnaires into scientifically rigorous data collection and measurement instruments.
Survey methodology is a giant field. It is helpful to begin with some basic vocabulary. I will try to note where certain disciplines do not agree on the meaning of a term. Future posts will address how surveys are designed and fielded.
Survey vocabulary I
Survey item – A single question on a survey.
Open-ended questions – Items that allow respondents to choose their own words to answer a question. Open-ended items are often used in pilots and other situations where it is difficult to anticipate how respondents will answer questions. They are also used when the choice of words a respondent makes is important.
Closed-ended questions – These items, sometimes called 'forced choice' items, provide respondents with a set of responses to choose among (such as True/False, Agree/Disagree).
Response categories – The options respondents are given to answer a closed-ended item (such as True/False, Agree/Disagree).
Scales – Groups of items that work together to provide information that a single item cannot. Scales are used to measure 'latent traits': traits, attitudes, and perceptions that are complex, hard to define, or that the subject may lack the understanding or self-understanding to speak about directly. Examples are friendship, anxiety, and political attitudes. A score on a scale is formally called a 'manifest indicator'; it makes manifest (observable) something that is not observable by ordinary means. Once a scale is validated, all of its items must be included in a survey to measure the trait it was validated for. Items cannot be changed, added, or dropped without revalidating the scale.
Another post walks through how to develop a scale, which requires specialized statistical and analytical approaches. Scale development is complex and developers face a number of pitfalls, so it is always worth asking for the details of how a scale was created before using it.
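For the quantitatively inclined, scoring a validated scale is usually simple arithmetic over its full item set. The sketch below uses a hypothetical 4-item anxiety scale (the item names and 1–5 Likert scoring are illustrative, not from any validated instrument) and refuses to produce a score when items are missing, reflecting the rule that a scale cannot be scored with items dropped or changed.

```python
# Hypothetical 4-item anxiety scale. Item names and the 1-5 Likert
# scoring below are illustrative only, not a validated instrument.
ANXIETY_ITEMS = ["worry", "restless", "irritable", "sleep"]

def score_scale(responses: dict) -> int:
    """Sum Likert responses (1-5) into a single scale score.

    All validated items must be present: a scale cannot be scored
    with items dropped or changed without revalidation.
    """
    missing = [item for item in ANXIETY_ITEMS if item not in responses]
    if missing:
        raise ValueError(f"Cannot score scale; missing items: {missing}")
    return sum(responses[item] for item in ANXIETY_ITEMS)

print(score_scale({"worry": 4, "restless": 3, "irritable": 2, "sleep": 5}))  # 14
```

The score (14 here) is the manifest indicator; it stands in for the latent trait that no single item could capture on its own.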
Composite metrics – These metrics use multiple survey items to provide an overall metric of some experience, perception or capacity. For example, a measure of school readiness might include items that assess reading ability, math skills and social functioning. A measure of access to care may include insurance coverage, health literacy, and geographic access. Composite metrics are often made up of items that measure different latent or non-latent traits, but taken together provide an assessment for a larger, overarching attribute, attitude or capacity.
Unlike scales, the key in developing a composite metric is to ensure that its component items (a) are not strongly correlated with one another (otherwise the metric just measures the same thing over and over) and (b) each have a strong link to the attribute the composite metric is measuring.
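To make the contrast with scales concrete, here is a sketch of the access-to-care composite mentioned above. The components, ranges, and equal weighting are hypothetical assumptions for illustration; a real composite would need its own validation.

```python
# Illustrative "access to care" composite. The components, scoring
# ranges, and equal weights are hypothetical, not a validated measure.
def access_to_care(insured: bool, literacy_score: float, miles_to_clinic: float) -> float:
    """Combine three unrelated items into one 0-1 composite.

    Each component measures a different (latent or non-latent) trait;
    only together do they describe the overarching attribute.
    """
    insurance = 1.0 if insured else 0.0
    literacy = min(max(literacy_score / 100.0, 0.0), 1.0)  # assumed 0-100 scale
    proximity = max(0.0, 1.0 - miles_to_clinic / 50.0)     # 0 beyond 50 miles
    return round((insurance + literacy + proximity) / 3, 2)

print(access_to_care(insured=True, literacy_score=80, miles_to_clinic=10))  # 0.87
```

Note that the three inputs need not move together at all; that independence is what makes this a composite rather than a scale.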
Domains – Domains are sets of survey items that represent experiences, perceptions, attributes or functions that are linked together in some way. How this works is somewhat discipline-dependent. But in health services research, domains can simply focus on broad sets of patient experience. For example, a measure asking about the quality of nursing might ask about the respondents' experiences with nurses in the ED, in the OR, in the OP clinic, etc. The domain is the quality of nursing, though each item in the domain asks about specific nursing encounters.
Surveys do not always ask ALL of the items in a domain; indeed, doing so is typically impossible. Most surveys therefore 'sample' a domain by asking a subset of questions validated to represent all of the items in the domain. Returning to the nursing domain, a domain sample could be used if a few of the nursing items were shown to 'represent' all of the nursing items. In this case, if there is a strong correlation between how patients rate ED and OP nursing and how they rate OR and ICU nursing, a survey designer could use the ED and OP survey items to represent 'all of nursing' and drop the other domain items from the survey. The Consumer Assessment of Healthcare Providers and Systems (CAHPS) uses this methodology. Domain samples may not be changed without being revalidated.
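The correlation evidence behind a domain sample can be checked with ordinary statistics. The ratings below are hypothetical, and a real validation would involve far more than one correlation coefficient, but the sketch shows the basic idea: if ratings in one care setting track ratings in another closely, the first can plausibly stand in for the second.

```python
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no dependencies."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical 1-5 patient ratings of nursing in two settings.
ed_ratings = [5, 4, 3, 5, 2, 4]
or_ratings = [5, 4, 3, 4, 2, 5]

r = pearson(ed_ratings, or_ratings)
print(f"Pearson r = {r:.2f}")  # Pearson r = 0.85
```

A strong r like this is the kind of evidence (part of it, anyway) that lets a designer keep the ED items and drop the OR items from the fielded survey.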
Proxy items – Proxy items provide information on a broad set of attributes without detailed questioning. For example, education is frequently used as a proxy for socioeconomic status; grade point average is used as a proxy for aptitude or success in school. Proxy items require their own validation process and are used when you want to simplify a survey. They may not be changed without being revalidated.
Proxy items are based on solid population data or previously-collected data that validate the connection between the proxy item and the attributes, perceptions or experiences it is intended to represent. Social scientists often use the US Census and other national databases for this. For us, the National Survey of Children's Health is a very helpful anchor for proxy items.
Proxy respondent – A survey respondent who is taking a survey on behalf of another is a proxy respondent (for example, a parent completing a survey for their child who cannot yet read).
Parents will often be surveyed about their child's development, health or other matters. The parent is only a proxy if they are answering in the place of the child, not if they are giving their own opinions about their child's development, health, etc.
Sometimes the term 'proxy respondent' is used loosely for respondents who are not truly proxies. For example, parents have been shown to be poor proxies when reporting on behalf of their children: the information from the parent is helpful to have, but it is not truly proxy information. Indeed, a true proxy respondent is quite rare. Even so, it is very important to know whether a survey respondent is supposed to represent the view of the patient or their own view. And in children's hospitals, using parents as proxies (with all the attendant problems) cannot be avoided.
Survey vocabulary II
A single questionnaire may comprise one component or many. The following table defines survey components that are often used in health care.
In clinical settings, where the provider already has a lot of information about the patient, components tend to be used independently. For instance, it is quite common to field a brief screening questionnaire with no items beyond the screener itself. In research and social settings (particularly with anonymous surveys, where we have no additional information about the respondent), a single questionnaire could include all of these components. The patient satisfaction surveys hospitals mail to patients' homes include demographic, history, and processes-of-care components, and sometimes an outcome component or two.
Later posts will examine survey reliability and validity, and how surveys, scales and composite metrics are developed. It is a big world out there in survey-land!