– [Stephanie] Good afternoon. Good morning to those
of you on the West Coast and welcome to Insights
into CAHPS Survey Modes and Response Rates, a webcast presented by the Agency for Healthcare Research and Quality’s CAHPS User Network. We have a fantastic lineup of speakers for you today, including Caren Ginsberg, who directs the CAHPS program at AHRQ; Marc Elliott, Senior Principal Researcher at the RAND Corporation; Layla Parast, Statistician, also at the RAND Corporation; and Paul Cleary,
Professor of Public Health in the Department of Health
Policy and Management at the Yale School of Public Health. Our focus for the next 90 minutes is to provide an overview of
recent research on the impact of various survey administration
modes and strategies. Specifically, we would like
to share lessons learned, including how these strategies affect response rates across different population groups and how we can improve the representativeness of survey responses. Some of the key research questions that we will address in
this webcast include: Can high response rates be achieved with the hard-to-reach
young adult population? How do response rates for
electronic modes alone and in combination with
other survey modes compare to more traditional modes? What electronic modes achieve
the highest response rates? What methods of web survey invitation are the most effective? And how do characteristics of respondents to electronic and
traditional modes differ? Before we begin our content I wanted to take just a moment
to go through a couple of housekeeping details. If you’re having difficulty
hearing the audio from your computer speakers you can change the audio selection so
that WebEx calls you back to connect you through
your telephone instead. In the event that your computer freezes during the presentations
you can try logging out and logging back into the webcast to refresh the page. Also remember however,
that you may just be experiencing a lag in the advancement of slides due to your
Internet connection speed. If you need help at
anytime during this webcast please use the Q and A icon. At any point through today’s presentation if you have either technical difficulties or have questions for our speakers, you may ask a question
through the Q and A feature. Depending on the browser
that you’re using, your WebEx screen may
look slightly different from what you see on this slide here. Look for the Q and A
icon which may be blue or gray and be sure that
the drop down option displays all panelists for you to ask a question so our team can see it. Feel free to share your
name, organization, or role, when you type your question. Today’s session is being recorded. A replay of today’s webcast, as well as the slides,
will be made available on the AHRQ website. So with that, I will turn it over to Caren Ginsberg to get us started. Caren over to you. – [Caren] Welcome everyone to today’s talk on survey modes and response rates. I’m, as Stephanie said, Caren Ginsberg and I direct the Agency
for Healthcare Research and Quality’s CAHPS Program. So if you’re familiar
with the CAHPS program and our webcast, you’ll notice we’ve recently started discussing our research on survey methods and data collection. A couple of months ago
we hosted a webcast on survey invitation wording
to increase response rates, and today’s presentation
is on understanding survey administration modes as drivers of response rates. So I’m excited to
present this to you today but before I do, I’d like
to take a few minutes to give you some context for why this topic is important to us, and especially to welcome those of you who might be new to the CAHPS world and I’ll go through
some background for you so you have the context
for why we’re doing this work and presenting it to you. So the Agency for Healthcare Research and Quality has a mission
to improve the lives of patients by helping healthcare systems and professionals deliver care that’s of high quality, of high value, and safe. So AHRQ’s a science-based agency, and as such what we do
is invest in research and evidence to make healthcare safer and improve quality. We create tools for
healthcare professionals to improve care for their patients. And we generate measures and data that are used by providers,
and policy makers, and researchers, to
improve the performance of healthcare systems and evaluate their progress. And as part of this research we feel it’s important to push it out to you, push the science to implementation, and get our tools and products to you, our users. So today’s program, as part of this data and analytics competency, is to help you understand the best ways to collect data and the effect of the data collection mode that you choose on your survey findings. So CAHPS stands for Consumer Assessment of Healthcare Providers and Systems. And the CAHPS program is
a comprehensive program to advance the understanding,
the measurement, and improvement of patients’ experiences with their healthcare. And I’ll speak more
about that in a minute. We have been funded by AHRQ since 1995, so it’s been around a long time, it’s a mature program,
we have a very large and extensive website with a lot of information on it, including recordings of all of our webcasts. So if you’re interested in the webcast that I just mentioned, on invitation wording, it’s on our website. So the CAHPS program has
an active research agenda that focuses on understanding
patients’ experiences with healthcare, how to measure it, and on the best methods
to implement surveys. So we also conduct research on how to report patient experience data, and also on quality improvement efforts involving CAHPS surveys. So we’re most known, I
think, for the surveys, the CAHPS surveys that we developed, and the related materials, all designed to help assess patients’ experiences in healthcare settings and with health plans and providers. And these surveys are recognized as the gold standard for patient experience measurement. We design these surveys by capturing the patients’ voice, doing the foundational work to understand what’s important to patients before we even draft a survey, and then we test it extensively with patients to make sure that it’s understandable to patients and relevant to them. We use a standardized methodology for the development of all of our surveys and other tools. So for those of you who might be new to the CAHPS world, here
are some of the surveys that we offer that are all
recognized CAHPS surveys covering healthcare providers, some condition-specific care, inpatient and outpatient facilities, health plans, and even a program delivering care as part of the Medicaid program, the Home and Community Based Services CAHPS Survey. Some of our surveys include a pediatric version in addition to an adult version; all are available in Spanish and some have additional
translations as well. So let me just say ultimately the focus of a CAHPS survey administration effort is to have response rates that are sufficient to allow us to understand
patient experience and also to ensure that
we have a representative sample of patients who are
responding to the surveys, what we’re calling responsiveness
and representativeness. And so there are several factors that can influence responsiveness
and representativeness and so today we’re talking about survey administration modes as determinants of survey responsiveness
and representativeness. So I’m excited about this program as I said you’re gonna hear today from survey methods
researchers who have worked on CAHPS efforts for many, many years and who are leaders in this field. So with that I’m gonna turn this over to Marc Elliott. – [Marc] And I’d like to start by talking about some work that was done with colleagues at a
variety of institutions. And the motivation for
this work, as Caren said, is that it’s an increasing problem to try to combat low response rates for hard-to-reach
populations in particular. And the standard survey approaches using mail and phone usually receive low response rates for younger adults, adults under age 35, often less than a 30% response rate. And we looked at it here in the context of the Child HCAHPS survey because this population has a number of younger adults; in this case the respondents are parents of pediatric patients. The questions that we asked in this work were: is it possible to achieve higher response rates in this population? In other words, is this a population that’s fundamentally unwilling to complete surveys in large numbers, or are there methods out there that might allow us to do that? And in particular we had interest in email since we were looking
at a series of hospitals that had unusually high proportions of email addresses collected, and we wanted to know how response rates for email alone, and email in combination with other survey modes, compared to the response rates that were obtained by standard mixed modes. So for background, the Child HCAHPS survey, which is the Child Hospital Consumer Assessment of Healthcare Providers and Systems Survey, was used for an experiment in which we sampled almost four thousand parents of pediatric inpatients at six large children’s hospitals, and we randomized them
equally to six arms, and I’ll describe that design. So we either had an incentive, or not, those are the columns,
and the incentive was $20, and we used one of three survey modes and those are the rows. So the first row is what CAHPS calls standard mixed mode,
so regular U.S. Postal Service mail survey followed by a telephone follow-up
of mail non-respondents. The other rows involve things that were less traditional. So in the middle row we looked at a commercial overnight delivery service which has an envelope which draws the potential respondent’s
attention to the survey and indicates, perhaps, that
it’s particularly important, followed by telephone followup of those non-respondents. So that’s essentially substituting this overnight delivery service
for regular USPS mail. And then the third approach
was a three stage approach so that people were
first contacted by email, if they didn’t respond then they were sent overnight delivery service, and then people who didn’t respond to that were followed up by telephone. And so we looked at all six combinations of these three sequential
delivery processes and whether we had an incentive or not. And so the next slide
summarizes what we found, and what we found was that each of row two and row three helped, and also the incentive helped; let me say a little bit more about that. So across the whole design, on average the incentive increased the response rate by about 15 percentage points versus not having an incentive. Secondly, if you compare
the overnight delivery service to the U.S. Postal Service with telephone follow up with each, you get about another 12 percentage points for the overnight delivery service. And then when you add the email on top of that for the three stage procedure, you get about a 14 percentage point jump over the standard mixed mode approach. We didn’t find an interaction. So in other words, the incentive effects just added to the effect
of these multi-stage approaches in each case. And here are the actual response rates. And so if you look in the upper left corner, the response rate below 30% that was achieved with the standard mixed mode approach is typical of what we often see with that approach in a difficult, hard-to-reach, low response rate population such as adults under 35. And then you can see that even without an incentive you’re looking at response rates that are closer to 40% by either adding the overnight component or having the three stage component. And then in some cases when you added the incentive as well, you’re looking at response rates higher than 50% and almost approaching 60%, with populations where the response rates are often half of that.
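As a rough way to see how those effects combine, here is a minimal sketch in Python, not the study’s analysis code: it adds the approximate effects just described to an assumed, purely illustrative baseline of 30% for standard mixed mode with no incentive, since the exact cell values live on the slide rather than in the narration.

```python
# A rough illustration (not the study's code) of the additive effects described:
# about +15 percentage points for the $20 incentive, about +12 for substituting
# overnight delivery for USPS mail, and about +14 for the three-stage
# email-first protocol, each relative to standard mixed mode (mail + phone).
# The 30% baseline is an assumed, illustrative value, not a reported figure.

BASELINE = 30.0                  # assumed rate for mail + phone, no incentive (%)
INCENTIVE_EFFECT = 15.0          # approximate percentage-point gain for the $20 incentive

MODE_EFFECT = {                  # approximate gain over standard mixed mode (points)
    "mail + phone": 0.0,
    "overnight + phone": 12.0,
    "email + overnight + phone": 14.0,
}

for mode, gain in MODE_EFFECT.items():
    for incentive in (False, True):
        # No interaction was found, so the incentive effect simply adds on.
        rate = BASELINE + gain + (INCENTIVE_EFFECT if incentive else 0.0)
        label = "$20 incentive" if incentive else "no incentive"
        print(f"{mode:27s} | {label:13s} | ~{rate:.0f}%")
```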
So this next slide provides some additional information and thoughts about these results. So the headline message here, and I’ll describe the basis for this, is that here email worked, but it only works when there is traditional followup. So when you just use no incentive and you did the U.S. Postal Service and phone, then you got about half of your responses by phone. When you go down to the row where you use the overnight delivery service, this is an expensive approach, and it changes the mix of mail and telephone dramatically: rather than getting about half or even more than half of your responses by phone, now the vast majority of responses come in response to that overnight mail. Now what happens if we throw in the three stage approach? If we use a three stage approach, then about half of the responses that you get are by email, and then only a quarter to a third of them are by overnight mail, leaving again about a quarter of them or less by telephone. So what you see in terms of the nature of the responses is that stacking email, which is intended to be the least expensive mode, up front, even if it doesn’t really change your overall response rate much, may shift the modes that are used and it may have an effect on cost. The other thing that’s notable is that email by itself,
depending upon whether there was an incentive involved, produced the lowest response rate of all. It produced a response rate on the order of 15 to 25%, and so if we had just stopped at email, we’d have gotten a much worse response rate than the traditional U.S. Postal Service followed by the telephone followup. In the case of no incentive we’d have gotten about half the response rate that we would’ve had through that traditional method. So to summarize, it’s not the case that low-response-rate, hard-to-reach populations can’t achieve high response rates. If you use techniques such as an overnight delivery service or incentives, then you can take the response rates as much as 25 percentage points higher, from less than 30% to the 50 to 60% range. Now some of these
techniques are expensive, they may not be practical
for some implementations. But at least we learned from this that it’s not that these respondents are unwilling to respond
under any circumstances. In terms of the multi-stage approach that begins with email,
when you add an email to an approach that already had these other two stages, in this case the overnight delivery
service and the telephone, you didn’t really change the
response rates that you got. But as I alluded to earlier, it’s possible that you achieve the same response rate in this multi-stage approach but possibly with less cost, substituting the email responses for things like overnight delivery and telephone followup. And then just to emphasize,
we also learned that, and you’ll see this theme in some of the other studies that follow is that email by itself produced a really poor response rate, worse
than standard methods, but it did seem to have a potential role as part of a multi-stage approach where it was linked to other methods. So a few implications. So for young adults again
high response rates are possible, and email
added to a mixed mode procedure can preserve response rates, possibly at a lower cost. And again, email by itself resulted in very poor response rates, and although this was just
done in a particular setting, so the parents of pediatric inpatients, we think that the patterns seen here may generalize to other groups and we’ll see in the following talks some evidence of that. I’d like at this point to pass the presentation to Layla. – [Layla] Thank you Marc. I’ll be speaking today about testing the feasibility of the Emergency Department Patient Experience of
Care, the EDPEC Survey, and I’ll specifically be talking about our experience testing a web survey. So I’d first like to
note that this work was funded by CMS but I
take full responsibility for what I say here today. And I’d also like to acknowledge our large study team at RAND and CMS and Health Services Advisory Group. So just a little bit
of background about the Emergency Department setting. Nationwide there are over 130 million emergency department visits annually. Most emergency department
patients are discharged to the community, which just means they’re discharged home as opposed to for example admitted to the hospital. The development of the
Emergency Department Patient Experience of
Care Survey began in 2012 and it was designed to
measure the experiences of patients who are discharged home from the emergency department. Our development began
with a call for topics, we’ve had multiple literature reviews, multiple technical expert panels. And while this is not a CAHPS survey, it was developed with
CAHPS principles in mind and we’ve had ongoing meetings with the CAHPS Consortium. We’ve had multiple rounds
of cognitive testing of potential survey items both in English and in Spanish. And we’ve had multiple field tests which I’ll talk about today. So our first field test of the survey was conducted in 2014 with 12 hospitals. And then in 2016 we
conducted a mode experiment with 50 hospitals. Both of these were experiments in that patients were
randomized within hospitals to different mode protocols. For both experiments
the three mode protocols were mail only, telephone only, and standard mixed mode which was mail with telephone followup. For both our overall
response rate was quite low at about 20% and particularly
low in the mail only mode. So for example, in the
mode experiment we had a 13.7% response rate by mail only. We also learned from
both of these experiments that the contact information for emergency department
patients was less accurate and less complete as compared
to admitted patients. So for both of these at
the same time we were doing some experimentation
with admitted patients so patients who had an
inpatient stay in the hospital and so we were able to
compare the accuracy of the contact information between those two populations. So motivated by these
results in our field test and the mode experiment we wanted to conduct some additional experiments to answer these research questions. Can the use of a web
survey increase response rates in this hard-to-reach population? And what methods of web survey invitation are most effective? And by web survey I mean
an electronic version of the survey that’s online, so the respondent either clicks a link or types it in and is taken to a web browser that contains the survey questions, and they answer the questions within that web browser. It could be completed on any device with internet, and if they leave the web survey and come back we save their responses. So we began with a feasibility test, what we called Feasibility Test I. This was conducted in
2016 with eight hospitals and the goal was to explore
novel administration modes. We had five different mode protocols, and one of them was actually in-ED distribution: we’d heard a lot from people that we should try handing out the survey in the emergency room right when patients are discharged. So we did try that and it was problematic; our response rate was 9.3% and we saw possible bias in distribution. So for example, in our debriefings with hospital staff they told us that they were much less likely to hand out the survey to someone who was unhappy on discharge. Two of the five protocols that were tested were web-only protocols. One was an emailed link to the web survey; the other was a paper invitation that was mailed to the patient but had a URL and a PIN so that they could log in and complete the survey online. For both of those the response rates were very low, at less than 5%. So we really learned here that a web-only approach was not gonna work for this population. So next we moved on to Feasibility Test II, which is what I’ll talk more about, where our goals were to continue to test novel approaches to try to improve response rates to our surveys, and specifically here to examine different push-to-web strategies. So we learned from Feasibility Test I that we can’t do web-only, but we wanted to focus on a web-first approach where we tried to push as many people as possible to access and complete the web survey and then follow up with
a non-web component. So by push-to-web strategies I mean, for example, email, text, and then a paper invitation with the URL, and also the use of QR codes, which are those black and white squares you can take a picture of with your phone and it takes you to a website. And lastly, to explore
challenges associated with collecting the contact information we would need for a web-first approach, so collecting email addresses, and information we would need to be able to text patients. So Feasibility Test II
was conducted in 2018, it involved 16 hospitals. We sampled almost 27,000
emergency department patients and the majority of adult emergency department patients were eligible, so this is not restricted
to Medicare patients. Patients were randomized within hospitals to one of nine survey arms. And our reference arm was standard mixed mode, so mail with telephone follow-up. The other eight arms involved some form of an invitation to the web survey: one or a combination of email invitations, text message invitations, or mailed survey invitations with a URL and a PIN code, and/or a scannable QR code. And importantly, by text message invitation I don’t mean that we’re texting the individual questions to patients; I mean that we are texting a link to the survey, you click on the link within the text, and it takes you to a web browser with the survey. All eight arms involved three to four web survey invitations or reminders. All arms involved sequential mixed modes. And all eight arms had mail and/or telephone followup after the three to four web survey invitations.
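To picture how one of these sequential arms works in practice, here is a minimal sketch in Python. The arm definition, the number of invitations, and the function names are illustrative assumptions for exposition, not the study’s actual fielding system.

```python
# A simplified, hypothetical representation of a sequential mixed-mode arm:
# several web invitations first, then non-web followup for non-respondents.
# The schedule below is illustrative, not the exact study protocol.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Arm:
    name: str
    web_invites: List[str]   # e.g., email or text invitations to the web survey
    followup: List[str]      # non-web modes used after the web invitations

email_mail_phone = Arm(
    name="email + mail + phone",
    web_invites=["email invite", "email reminder", "email reminder"],
    followup=["mail survey", "phone attempt"],
)

def next_attempt(arm: Arm, attempts_made: int) -> Optional[str]:
    """Return the next contact mode for a patient who has not yet responded."""
    schedule = arm.web_invites + arm.followup
    return schedule[attempts_made] if attempts_made < len(schedule) else None

print(next_attempt(email_mail_phone, 0))   # 'email invite'
print(next_attempt(email_mail_phone, 3))   # 'mail survey'
```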
So the overall response rate across all nine arms was 18.6%, so still lower than we’d hoped. The highest overall response rate was in the email plus mail plus phone arm, where we saw a 27.3% response rate. But this was not significantly higher than our reference arm, the mail plus phone, which had a 25.5% response rate. All of the other arms had a response rate less than that 25.5%. And the only arms that got our response rate over 20% were those with a telephone component: email plus mail plus phone, which was 27.3%; mail plus phone, 25.5%; and then we also had an email plus phone arm, which was about 22%. In terms of responses by completion mode, we had 4.8 to 7.5% of sampled patients completing by web. The arms with text
invitation had the highest percentage completing by web, so we had two of the nine arms that involved a text invitation and both of those had
7.5% completing by web which is a sizeable percentage when you consider the overall
response rate was only 18.6%. Our analysis found that the use of a paper invitation and a QR code were not useful in terms of
improving response rates. And not surprisingly
in arms with telephone, the majority of responses
were by telephone. And then in the email plus mail plus telephone arm, which is the arm where we saw that highest response rate of 27%, we saw significantly fewer responses by mail and phone compared to the standard mixed mode, kind of telling us that the use of an initial web mode has the potential to perhaps reduce costs associated with mail and telephone contact. But again, a web-only approach will not work; the web mode seems to be kind of skimming people off the top in a web-first approach. So our analysis of
respondent characteristics found that the inclusion
of a phone component increased representation of respondents who were less likely to
respond by other modes. So those who were younger,
minority, less healthy, frequent emergency department visitors, and those without a usual source of care. With respect to contact method for a web invitation, for
web survey invitations, like I said email was one
of the contact methods and your invitation to the web survey is only gonna be as successful as you know if you don’t have someone’s email address it’s gonna be very hard to invite them to the web survey by email. So we saw that in this setting our email coverage rates across hospitals varied dramatically, so by that I mean that the percentage of patients who had an email address in the hospital contact information. The overall
rate was about 30%. And text coverage rates also varied. And we considered somebody textable if they had a mobile
number in the hospital contact information and if they provided consent to text. So all of our texting
was done in accordance with Telephone Consumer
Protection Act regulations, so we required documentation of a patient consent to text. So a patient had to have both a mobile phone number and consent to text for us to be able to text them. Overall we found that only 11% of our patients had only an email address. 19% had both email and text, meaning we were able to both
email them and text them. 40% had only text, and 30% had neither email nor text. So at least in our population, texting dramatically increased the reach of the web survey.
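The reach arithmetic behind that statement can be written out in a few lines; this is just a back-of-the-envelope sketch using the percentages quoted above, where "reach" simply means the share of sampled patients who could be invited by that route.

```python
# Back-of-the-envelope reach calculation from the quoted coverage figures.
email_only, both, text_only, neither = 11, 19, 40, 30   # percent of patients

email_reach = email_only + both        # patients who could be invited by email
text_reach = text_only + both          # patients who could be invited by text
either_reach = email_only + both + text_only

print(f"Reachable by email:         {email_reach}%")    # ~30%
print(f"Reachable by text:          {text_reach}%")     # ~59%
print(f"Reachable by email or text: {either_reach}%")   # ~70%
print(f"Neither:                    {neither}%")
```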
So our lessons from Feasibility Test II: overall response rates in this setting are still low regardless of administration protocol. The highest we saw was that 27%, and even with that email plus mail plus phone arm, no arm performed significantly better than standard mixed mode. And although it’s the most expensive mode, phone surveys do capture a segment of the population that
may not respond otherwise. And especially for this
emergency department setting we found that a phone component is necessary and leads to
increased response rates and increased representativeness. Specific to lessons
learned about a web survey, like I said, you know coverage
rates vary dramatically and that was really important. One of our hospitals only had 0.4% of their patients that
had an email address even though they said they
collected email addresses. And while text messaging
did increase the reach of the web survey and we were very excited about our results, it is important to make sure that texting
is done in accordance with TCPA regulations, and think about the administrative procedures that need to be in place to ensure that you have that documentation of consent to text. And lastly, with respect
to completion by web, again we saw that 4.8 to
7.5% completing by web which for us was a meaningful percentage given our overall response rate and really tells us that for our emergency department
setting we do consider the web survey at least
a web-first approach to be promising as long as there is some non-web followup
by mail and/or phone. And if you have any
questions, please feel free to email any of us and
I will now pass it on to Paul Cleary. – [Paul] Thank you very much. I’m gonna continue the discussion with studies of two different types of survey protocols. One is gonna be a fairly classic experiment comparing electronic and mail surveys. The second one has to do with texting. It’s not completed yet, but given the interest by our constituents in texting, I thought it would be useful to present some preliminary results. The research questions are similar to what you’ve seen so far, and that is: how do the response rates of web and mail surveys compare? And how are the
characteristics of respondents to web and mail surveys
similar or different? The study was done in an organization with three practice sites in Greater Boston. The reason we conducted this study is that at that site a majority of the patients have signed up for a patient portal. As you’ve seen in the two presentations today and other studies in the literature, responses to web surveys tend to be relatively low, and we were very interested in whether, in a group of patients who had already signed up and were using electronic means of communication, we could do better for people who had that portal and for whom there are email addresses. So as a comparison group we also sampled patients who had not signed up for the portal and had no email addresses. This is a study using the CAHPS Clinician and Group Survey, and there were four survey protocols, which I’ll now describe to you. Patients were randomized into these four protocols. The first protocol was a standard mail survey: we sent a mail questionnaire, reminders, and a second questionnaire. The second condition was a mixed mode where we sent a postal advance letter, an email letter with a URL link, an email reminder, and then
a postal mail survey. The third was a web survey where we emailed a letter with a URL link to the survey and then two email reminders
to non-respondents. And the fourth was web through portal, in other words the patient got an email notification to look for
messages in the portal, a letter and email with link to the survey, and everyone was sent an email reminder. So for patients without email addresses, we just conducted a standard mail protocol so we could compare how those patients were similar or different in terms of response rates and characteristics. Generally we were interested
in response rates, who responded as related to the representativeness question
that you’ve heard before, and then whether there were differences in patients’ experience
reports or CAHPS scores. So a simple version of the response rates comparing the web and the web-and-mail protocols shows that the main difference was for web-only, which, as you’ve seen in other studies, was substantially below the mail protocol and the mail and web protocol. As you’ve seen in the other studies, if you combine web and mail you can get close to, and in some other studies greater than, the response rate for mail, so it’s essentially comparable when you use mail and web, but web alone is substantially lower than either of the conditions using a mail survey. One thing that surprised us is there were no differences in the age, education, or race and ethnicity of those responding to the three protocols, and I’ll get back to why we think this may be the case later. Females were slightly more likely to respond to the mixed protocol; that may just be a random effect. We looked at several different measures. We had four CAHPS composites,
the overall rating, three supplemental items for the Patient-Centered Medical Home addendum, a two-item composite measure, and an additional four items that we compared. Basically there were no significant differences in any of the four composite measures, the provider rating, or the Patient-Centered Medical Home measures. Of the five other comparisons, there was one statistically significant difference: those in the mixed-mode protocol were more likely to say that they were asked about depression. When we compared the portal and direct email link the
response rates were similar; if you may remember, I presented that the response rate to the web survey was about 20%, and the portal got about 17%. Those over 65 were more likely to respond if they did
not go through the portal. There were no differences in the composite measures, the provider rating, or the PCMH measures; that’s between people approached through the portal and the direct email link. And of the additional comparisons, only the Shared Decision Making composite was significantly different. When we compared the group of patients without email addresses to other patients, response rates were higher for those with email. Those with email were more likely to be under 65, more likely
to be college graduates, and more likely to be female, consistent with a lot of the research on respondents to email surveys. Of the nine key measures there was only one statistically significant difference. And on the five supplemental measures, there were three significant differences. These results are quite different than many other studies because the respondents
to the electronic modes were very similar in terms
of their characteristics and their responses were very similar. But I should caution everyone that this is a very ungeneralizable study. In this particular practice, over 70% were college grads, over 90%
were non-Hispanic whites, and over 80% had enrolled
in the portal program, which means they used the Internet. So although we think it indicates there are possibilities using these methods, I wanna caution everyone that this is a very atypical group, and so the findings probably don’t generalize to other situations. If you look at the results as they are, you could ask what would have happened if the survey had been done entirely on the Internet, in other words if we hadn’t expended any effort or expense on mail surveys and we offered no mail alternative. The response rates would’ve been quite different, about half: 20% by the web and even lower by portal. But the characteristics of the respondents would have been comparable
and the substantive results for all the measures
would have been comparable. Again, keeping in mind the
caveats that I mentioned earlier. To summarize for the patients without emails: they were slightly less likely to respond, and their characteristics were different. Responses on the four core composite measures, the provider rating, and the PCMH measures were very similar to those
with email addresses. And differences on the
additional items suggest that they may have some
different experiences. So the conclusions, the response rates to web and mail were very different. In this particular study,
which again is atypical, the survey results were very very similar. Responses from those without known email addresses were also similar, but with differences in reports and some experiences. So if one wanted to use the Internet to collect CAHPS data to address concerns about low response rates and possible different perceptions and experiences of those without email, one conclusion is that
a web survey should be combined with an alternative mode. Mail seems to be the best at this time, from what we’re seeing in various studies, to improve response rates and to include those who do not use email. Let me now just present some brief results from a survey using texting. These are preliminary but, again, I thought I would share some of them because of interest in this issue. So the first question was: does using SMS for survey invitations
affect response rates? Does using SMS for survey
administration affect results? A study was done with a convenience sample and one of the participants asked about what some of the HIPAA concerns are, we may come back to that. But this is a convenience sample of people who are in a panel who had had a recent physician visits. There were basically three conditions: email invitation to a web survey, SMS invitation to a web survey, and SMS invitation to an SMS survey. In other words, they were
sent an SMS invitation; if they responded, the survey was administered by text, one question at a time. And I should emphasize, when you see some of the results, that we used what is called a modular approach: when we texted the survey to people, we used just part of the survey, for example the communication composites plus some core items, so they were much shorter than the full CAHPS survey. This is a question we get all the time: what if we did a shorter survey and we did it electronically? These kinds of studies start to address that. There were differences between individuals who responded to a web survey and an SMS survey, similar to the kinds of differences that Layla described. Response rates were highest for email to web versus SMS to web and the SMS survey. So the lowest was when SMS was used for both facilitation and surveys; the response rates and completion rates were actually lower. And I’m not gonna present
the detailed results, but I’ll say the maximum here was 14%, so the idea that we can cheaply contact a large population electronically and get a high response rate is more difficult than many of us think it might be. The SMS survey arm, in addition to having a lower response rate, had completion rates 10% lower than the web. And the SMS survey respondents tended to provide more positive responses. So this is a quick and preliminary summary, but the users of SMS are often less representative of a
whole population compared to those who use Internet
or certainly mail. SMS may complement other methods for eliciting surveys, but there are still very important limitations, and it’s very difficult to conduct full surveys using SMS. So let me take a couple minutes to say what I think are some of the overarching messages from these three presentations as well as other work that people on the call and people on the Consortium have done. First is that response rates to all types of surveys have been
declining for many years. It’s not just CAHPS surveys,
all types of surveys by all modes have been decreasing for a variety of reasons. Response rates are very important but representativeness is also important and often is not assessed. In other words, you could
double your response rate and that may be good because
you get more patients and more power and you get
more surveys per dollar but it might not improve
the representativeness of the sample and in
some instances it may be less representative of the population you’re trying to make inferences to. It’s true, increasing numbers of people use electronic methods such as email, patient portals,
and SMS to communicate, but low response rates and
poor representativeness remain serious limitations for these types of surveys in spite of this increase that we see all around us. Aside from this focus
on electronic methods to increase response rates, the previous two presentations emphasize that high response rates
by traditional modes are possible even for very
hard-to-reach populations. It’s true that some effective strategies, like overnight delivery service and incentives, may not be
feasible or cost-effective but, as Marc pointed out, you
can increase response rates. And this and other
research we’ve done show that mail surveys can
yield high response rates but many survey protocols are not optimal. And studies not presented
here have examined variations in a variety
of aspects of the survey and shown that there’s very very large differences in response rates due to things that could be easily improved and adjusted in traditional surveys. Another message that comes through each of these presentations is that different populations respond to different contact and survey modes, so mixed protocols often yield the best response rates and representativeness. It’s not just that you get more people by doing both phone and mail, but often because complementary people respond to those modes you get a more representative sample. As you see, using email or portals to contact patients typically
leads to low response rates. Respondents to electronic contacts often differ from other respondents,
so caution is required when we’re using these modes. Using email, web, or SMS in combination with other strategies can achieve the response rates of
traditional mixed methods and may reduce overall costs, but anticipated savings
aren’t always realized. So, for example, there may
be more follow-up required, there are costs to getting emails, and to tracking those
kinds of surveys and so on, so we should be cautious about assuming that using electronic
methods will reduce costs. Factors in mail surveys to consider and evaluate are things like sponsorship, contact and survey material design. For example in older adults,
in a study I mentioned, a more attractive layout compared to the least attractive layout increased mail response
rates by 15 to 20%. Other factors include protocol timing and intensity, and using different strategies, for example a well-known delivery service that conveys urgency. There’s a large literature on things we can do to improve mail and telephone surveys, and often we’re not taking advantage of those. It’s pretty clear the best
approach often differs by population, but several different basic approaches would improve response rates in many applications much more than shortening surveys. The differences we’ve talked about today are much bigger than any differences we see by cutting surveys
dramatically in length and so even though there’s a perception that shortening surveys will increase response rate, we think people would be better advised to focus
on really maximizing the protocols for distributing
and collecting surveys. Cost is important, obviously, but representativeness of this data is “sine qua non” of survey approaches. If we don’t have a representative sample and it doesn’t really matter
how efficiently we did it. I think a basic conclusion
is the electronic methods used alone are
not ready for prime time and the Consortium and many people throughout the country are continuing to do research on diverse contact and survey methods, including
different permutations and combinations of these methods. I think with that I will turn it over to Stephanie. – [Stephanie] Thank you. With that we will move
into the questions portion. And so just to remind you about how to ask a question, you can type into the Q and A box and you may need to select the button with the three dots at the bottom of your screen to open the Q and A section so that it appears on your screen. And again, please be sure to send your questions to all panelists. Again, depending on the browser that you’re using, your Webex screen may look slightly different from what you see here. We have a few questions
that have come in already, so we will start working
our way down the list and get to as many as we can here. So Paul and Layla, I will ask you both to respond to this based
on your experiences. We’ve had some questions and wanted to ask you to describe your experience using text messaging protocols for administering CAHPS surveys. – [Paul] Layla, do you wanna start? – [Layla] Sure. So happy to answer that. So I would say that the most challenging thing we found about using text messaging for the web survey invitation was making sure we were in accordance
with TCPA regulations, like I mentioned. You know during a lot of our early work we kept hearing that
you have to try texting, you have to try texting, that’s how you’re gonna get
this population to respond. So we tried it and it took a lot of effort to get hospitals recruited who were willing to give us documentation of that patient consent to text. We did leave it up to the hospitals and their legal departments to determine what consent to text meant, but they had to literally give us a field for every patient that said whether they consented to text or not. And you know for one hospital that consent rate was 1.3%, and for another it was 85%. So it varied a lot and their methods for getting consent varied a lot and I think that that will you know make a big difference in terms of trying to implement
a survey more broadly that uses texting. That being said, we did find that it did reach a lot more people, just like ED physicians and administrators told us it would, we were able to invite a lot more people to the web survey. We saw people completing the web survey the day we sent it, both by text and by email, but that was certainly something we don’t see in a mailed survey to get that kind of turn around. And it also highlighted the importance of a mobile optimized survey, I mean even if you’re just using email of course it’s important to make sure your survey is mobile optimized because the majority of people will likely be completing it on their phone. But we spent a lot of effort making sure that our survey was mobile optimized and looked attractive. So we did everything we could to avoid any wrapping of text, any required horizontal scrolling, we did a lot of testing with colors, and layout, and design, and we continued to explore that. So yeah, I think it’s promising but there are certainly
still a lot of challenges. – [Paul] Yeah I would echo
everything Layla said. What I presented were data
from a convenience sample which was a web panel. There were two other
experiments we planned that we had to abandon because of practical constraints. We’re very excited about working with PBGH on a texting survey. And ’cause of concerns they hired counsel in California and basically the bottom line was they said the patients had to have given explicit consent before they were contacted by texting. And ’cause we would be dealing with practices throughout the state we explored it and questioned and the bottom line was
it was just infeasible, we would never get a large enough sample. Subsequently we were very excited that the Yale New Haven Healthcare System was excited about doing
a texting experiment and they told us they had permission but it turned out they only had the adequate permission that would enable contact by texting for a
relatively small proportion of their patients. They had generic permissions
and some subsamples had permissions, but it just, again, was not feasible to do it. That may change over time if healthcare systems start to get permission to text individuals, but people should be aware it could be a major barrier to using those techniques. And I agree with everything else Layla said about optimizing the survey for different platforms and so on. And we’ve pretty much come to the conclusion that it’s not feasible to do a complete survey, and that one would probably revert to a modular approach where you get a subset of the survey from subsets of patients and then combine the data to form an integrated score. – [Stephanie] Thanks Paul and Layla. Moving through to some other questions. Marc, we have a couple
of specific questions about the work that you did and a request for some information about how soon after discharge
did you mail the survey and how long was the survey questionnaire? – [Marc] So I will have to defer answering those questions, but I can provide that by email shortly. I will say that the approach followed the Child HCAHPS protocol, but I want to be sure that I answer that accurately and so I’ll follow on that by email. – [Stephanie] Absolutely, thank you Marc. And we have a couple of questions that I think many of you may be able to weigh in on. So one of them is what would you recommend as best administration methods for Medicaid enrollees? – [Marc] This is Marc. I’ll just make a couple of comments related to that and then I’m sure that others will have comments as well. So some of the things that we noticed with Medicaid in all these is that at least comparing the standard modes of mail and telephone we tend to get a higher proportion of responses by telephone relative to mail and the mixed mode protocol with Medicaid enrollees possibly due to lower literacy so that having the telephone phase or some phase that some component that doesn’t rely on higher literacy can be helpful. Also depending on the particular Medicaid population having instruments available in a variety of languages
can be important as well. On to others. – [Stephanie] So I’ll sweeten
the question a little bit. We also had another question about specific populations and this one about any recommendations that the panelists may have about reaching
populations over age 65? – [Marc] This is Marc. I’m gonna again make I
think an initial comment. So first in general response rates that we’ve seen across a variety of CAHPS surveys tend to rise with respondent age until they level off often around 80 to 84 and then tail off a
little bit at that point. So in some ways respondents who are, say, 65 to 79 are often
the easiest respondents to approach and to get
high response rates from. That said, a few other observations, one is that this is a population where you tend to get more responses by mail than by telephone. And secondly, in one of the studies that Paul alluded to earlier, we found that if you
do have a mail survey (and we think this finding might generalize to other visual presentations, for example some web or other electronic-based approaches), while it’s a good idea for any population to have a clear, visually appealing, uncrowded layout, there is some evidence that it makes a much bigger difference with older respondents than with younger respondents. So I think one thing I would emphasize is the importance of visually clear and appealing layouts. – [Paul] This is Paul. I agree with Marc’s
comments to both questions. I was on mute before. The only thing I was
gonna add on Medicaid is that when we’ve done experiments trying to optimize Medicaid
responses, the accuracy of contact information is often a huge proportion of non-response.
In Medicaid populations one of the things you can do is try and ensure accuracy of contact information and that’s one speculation
why mail response rates are so low. But everything else I agree with. – [Layla] And this is Layla. I didn’t comment on the Medicaid question because for the emergency department experiments, we unfortunately didn’t have any information about insurance, so I can’t comment on the
Medicaid population there. And like what Marc was saying, we did see that older patients were much more likely, you know, we were able to capture them with a mail component, but it was really the younger, minority, less healthy patients where we needed, we really needed a phone component. So I guess at least for our population I would strongly emphasize that phone is absolutely necessary for us as a component in whatever kind of sequential mixed mode we’re gonna do. We do need phone. And you know we heard repeatedly that a web survey would get the younger population, and it did get a younger population, but not as much as, it didn’t do as well as phone. – [Stephanie] Thanks Layla. And Layla I’m gonna
keep you on the hot seat for a moment. There are some questions about abandonment rates for web surveys. What can you say about abandonment rates and to the extent you know it, how often do people return
to complete surveys? In what way are you
prompting them to do that and do you think that survey length has an impact on abandonment rates? – [Layla] Sure. So our survey was 38 items and I can say that of the, so of the patients who accessed the web survey, so by that I mean they clicked the link and at least got to the introduction page, 89% of them completed the survey by web. So that was higher than we expected and those that didn’t
complete the web survey, they tended to just,
they saw the introduction screen and then just never came back. It was rare for someone to, you know, start answering questions and then not complete. Of course that did
happen, but it was mostly that people just saw
the introduction screen that 11% and then didn’t
complete the survey. In terms of starting and
coming back to the survey, so if someone started the web survey and then didn’t complete
it, they did continue to get reminder emails, and we saved their spot, like I mentioned. So if they answered
the first 10 questions, and then quit, they could, you know, they would get a reminder email and they could access it again, on a completely different device even, and we did see some device switching, and complete the survey there. The device switching was, I think, 1 to 2% so not very many. But we tried to make
it as easy as possible to leave and come back. When people did, we did a lot of analysis of the paradata collected from the web surveys so device type, and how people actually
access the web survey, how long they spent on each question, and when people tended to pause a while on a question it tended
to be at the beginning of a new section. So nothing very surprising there but we did, you know, we were hoping we were looking to see if there was a particular question where people tended to quit the survey right there, obviously that would be an indication that there might be a
problem with that question but we really didn’t see that. I think I answered all those questions. – [Stephanie] Thank you, Layla. We also have some questions
that I think a couple of you can respond to
about your respective work, about how many email attempts do you think is ideal in terms of that portion of the data collection segment? – [Layla] So this is Layla. I can comment on that at
least for the ED populations. So like I said we tried
three to four reminders and we had two different
Technical Expert Panels that focused on protocol refinement and got a lot of advice about whether we should test different numbers of reminders and we settled on three to four. You know I believe the research shows that you know the more reminders you have, you are gonna get a bump in response rate but you don’t wanna
completely annoy everyone that you’re trying to contact. So we felt like four based on the research we saw and the, the panel members, we felt like four was our max that we were willing to go. And we did find that you know when people were gonna, of the people who completed the web survey, the biggest chunk of them completed after
the first invitation. Like if they were gonna complete it, they were going to
complete it that first time otherwise they’re kind
of just gonna ignore all of the reminders. But we did see a bump at the second and the third invitation. The fourth we saw a much
lower bump in response so our recommendation for this population is actually to keep it at
three web survey invitations because we really didn’t find that fourth was that useful and better to move on than to the non-web mode like mail or phone. – [Paul] This is Paul. In our study we didn’t do the experiment but on the email contacts,
we used two contacts really for the reasons Layla mentioned. You get the most of the
contacts in the first one and the site was actually reluctant to have too many email contacts and a lot of the net responses you get are moving to mail or phone. And so we just sort of felt it was we didn’t do the experiment but we felt it was best to move on to the other modes to try
and maximize response. Also a timing issue: the more you do, the more it drags out the survey. – [Stephanie] Absolutely understood. Paul, we have a follow-up question that you may be able to respond to, asking about the impact
of HIPAA on the verbiage that you use in the SMS invitation. Can you say a little bit
about how HIPAA has impacted how some of your approaches have evolved? – [Paul] Well as I said the study we did was a web survey where
people had signed up and agreed to be contacted by SMS and they were non-patients
so it was very atypical. In the sites where we’ve
tried to do studies, in consulting with counsel, their advice was that we had to have explicit consent: you can’t just contact someone and ask is it okay to contact you by SMS; they have to have given
explicit permission to the provider to be contacted by SMS for them to release
their telephone number. At least that was the
advice we got at PBGH, and New Haven I don’t know if they got explicit legal counsel on that but that was their position that they only would text to people who had given explicit permission to
be contacted by text. – [Stephanie] Thank you Paul. – [Paul] It’s not
whether we have the right language; it’s that, in mail, we often do passive consent. We say, this is voluntary,
you don’t have to do it. SMS is a different situation, the very act of contacting the person is felt to be intrusive enough or people are defensive enough about it that the feeling is you’re not allowed to do that without prior explicit consent. I’m sure that varies,
and people on the call may have different experiences, but those are our experience
in a couple of settings. – [Stephanie] Thank you. Marc, there are a couple
of specific questions about the work that you have done. In your use of overnight mailings how did you ask people
to return the surveys? And also can you say a
little bit more about the incentive that you
used in your experiment? – [Marc] So the people
were given a postage-paid envelope to respond by either U.S.P.S. or the overnight delivery. And the $20 incentive was provided in a way where it wasn’t contingent upon completing the survey, as is often the case, because there’s evidence that an incentive is
effective when it’s offered even if it’s not contingent upon response. – [Stephanie] Thanks for that, Marc. And Paul a follow-up for you. You mentioned attractive
design for surveys and there’s a request
for a little bit more information about what you
mean by attractive design. – [Paul] Well it’s really what Marc was referring to and I worked with Marc on a project and a number of colleagues where we used subjective criteria and objective criteria. And it had to do with things like clarity and layout, and how
cluttered the designs were. And a lot of it follows pretty
basic design principles, but we went through and looked at how different vendors prepared their contact materials and surveys, and if you sit down with a group of people and look at ’em the net
effect is quite dramatic in terms of what one considers attractive and easy to understand, and that showed up in the response rates. And as Marc mentioned,
one of interesting results which we think is very, very plausible is that those effects were more pronounced in older respondents who might be more sensitive to confusing, or crowded, or cluttered, layouts. I don’t know if Marc you wanna elaborate? – [Marc] Sure. So I agree with everything that Paul said and to say a little bit more, as you might imagine there are some trade-offs sometimes, though not always, between a layout that’s clear and one that’s longer. But even when we examine that trade-off, we found something that occupied a little bit more space but was easier to read and more visually appealing compensated almost always for extra length. And so one of the things that seemed to be the case is that cramming things into a smaller amount of space to save pages really
causes more harm than good in terms of people’s decision to participate in the survey. I should also say that
the work describing this is in press and we can once that appears we can make the journal article describing these findings available. – [Paul] One other, this is Paul again. One other thing we didn’t present here ’cause some of the work is not finished, but the Consortium’s also doing research and experiments on the
elicitation language in letters. The Consortium had a research conference last year about response
rates and representativeness and one of the issues
that came up was that the type of messages one sends to potential respondents can be very important. And we’re finding fairly big differences when you randomize different messages, and we’re trying to get more systematic information on that as we go forward so that we can optimize those messages. The idea is that sending very wordy, repetitive messages may work against response rates, whereas if we customize messages and modify them for different contacts, that may have a different effect. – [Marc] Along the lines of
what Paul was describing, this is Marc again, there’s
another ongoing effort which found that a simplified cover letter increased response rates for a survey with a particularly
hard-to-reach population by four percentage points
at absolutely no cost. So I totally agree with
Paul that there’s a lot of potential, with no cost trade-offs, in just improving key aspects of layout and invitation, a lot of these aspects of surveys that often get overlooked. – [Stephanie] Thank you, that’s great, and you’ve just preempted the question about what you would recommend given cost trade-offs. So for that question asker, there you go. I also have a question
here about text messages: did you limit the number of characters in the messages that you were sending? – [Paul] I’m forgetting the exact details, but the answer is yes. My recollection is that we had to modify some of the response tasks and/or survey questions. The details are escaping me right now; Layla may remember more of those details. – [Layla] Mmhmm, so I can
say for the ED experiment, we did of course limit the characters. I can tell you exactly what the texts were because I have them open in front of me. So we sent two texts. The first one said, “Please take a short survey about your recent ER visit at”, and then it was the brief hospital name, and then a short link to the survey. So I’ll have to count the characters there, but it’s not very long. And then the second text says, “Message and data rates may apply, text stop to stop survey texts.” – [Paul] My recollection is it’s some of the response tasks, not all of them, but some of them had to be reworded; Layla may remember the details on their experiment. – [Layla] Right. Well we didn’t do any texting of the actual survey questions, so we didn’t have to worry about that. It was just the invitation, but yeah, certainly it would have been a problem. – [Paul] I’ll have to look at it, but yeah, they do have to be modified and we’re trying to get a feel for what difference that would make.
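For readers who want to sanity-check invitation length against the common 160-character single-segment SMS limit, here is a minimal sketch. The invitation wording is the text Layla quoted; the hospital name and shortened link are placeholders, not the actual values used in the ED experiment.

```python
# Minimal sketch (not from the webcast): check that an SMS survey invitation
# fits within one 160-character GSM-7 segment. The hospital name and short
# link below are placeholders, not the values used in the ED experiment.
SMS_SEGMENT_LIMIT = 160

def build_invitation(hospital_name: str, short_link: str) -> str:
    """Assemble the invitation wording quoted above around a placeholder name and link."""
    return (f"Please take a short survey about your recent ER visit at "
            f"{hospital_name}: {short_link}")

invite = build_invitation("General Hospital", "https://svy.link/abc123")
followup = "Message and data rates may apply, text STOP to stop survey texts."

for label, text in [("invitation", invite), ("follow-up", followup)]:
    print(f"{label}: {len(text)} chars, fits one segment: {len(text) <= SMS_SEGMENT_LIMIT}")
```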
– [Stephanie] And there was a follow-up question to ask each of you about when
your surveys were fielded. I think people are just trying to see what point in time
your research represents. So Layla do you wanna kick this one off? – [Layla] Sure. So the feasibility test
I that I mentioned, which had the within-ED distribution and the web-only arm, was in 2016, so it was January to March 2016 discharges, and the fielding actually continued into May and June. And then feasibility test II, which is what I talked most about, with the texting and the email and the sequential mixed mode, was discharges that occurred January through March of 2018, with administration continuing through May and June. The results for both of those are published in Survey Practice and publicly available. – [Marc] And this is Marc. The study that I described took place much earlier; the actual discharges were in April through July 2013. I’ll comment that one of the interesting and frustrating things that we’ve seen is that we’ve been pursuing email-only or electronic-only approaches for a number of years now, and we keep thinking that if we use a younger
set of respondents or if we wait a few more years they’ll start surpassing
the response rates that we get from things like
mail with telephone followup. But a comparison of some
of what we’re seeing in 2013 and say five years
later shows unfortunately not nearly as much progress
in those response rates as we might have hoped. – [Stephanie] Thanks Marc and Layla. Paul, I have a follow-up question for you. You had referenced the feasibility of doing a modular survey administration using text messaging, where subsets of questions would be asked of different populations and then combined. Can you say a little bit more about how that would work and how feasible
you think that would be? – [Paul] Yeah, there are actually two approaches we’ve used. One is on the texting, and I may have misspoken; some of the surveys were short in that experiment that I mentioned, but some of them were single questions. In other words, you administer one question at a time. We’re also about to do something where we distribute, using traditional mail and phone methods, a modular approach. In other words, you take a subset of the questions, administer them to a subset of the patients, and then you can combine them and actually do imputation if you have some common items across patients. We’re pretty sure it’s
not gonna be efficient or a good thing to do. The reason we’re doing
it is ’cause so many people ask about using a shorter survey. If you think about it, you have to get an increase in response rate to the shorter survey that would compensate for the loss of information from including only one composite, for example. So you’d have to at least double or quadruple the response rate, and we’ve seen absolutely no evidence that decreasing survey length, even quite dramatically,
increases response rates. We have seen, on the upper end, that when people add too many supplemental items, for example to HCAHPS, there may be a fall-off in response rates, but we don’t think there’s gonna be much advantage to just using modulars. And PBGH actually did a survey like that and got almost the same response rates, if I recall correctly, that they got with the full survey. So it’s very feasible, and we know how to do it statistically and how to create the scores. My speculation is that it’s not gonna prove to be worthwhile to do, in that there’ll be a net loss of information, because the small, if any, increase in response rate to the shorter surveys will not even come close to compensating for the loss of information. – [Marc] This is Marc. I completely agree with what Paul said. A few more comments. Just to quantify some
of what Paul’s describing: in several of the studies of which I’m aware, you’re talking about changes in response rates of maybe two percentage points for every dozen items or so, so really quite small. And so I completely agree that putting core items on and off in a modular way results in a net loss of data compared to keeping them on. There have been some studies where, when people have a large set of supplemental items, rather than putting them all on at once they sometimes put the non-core items on and off in a modular fashion, but I completely agree that it’s a losing trade-off to put essential items only on a subset of the surveys.
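To make the trade-off Paul and Marc describe concrete, here is a minimal sketch with hypothetical numbers; the sample size, response rates, and number of modules below are illustrative assumptions, not figures from the studies discussed. It compares the per-item data yield of a full-length survey with a modular design in which each respondent receives only one of several question modules.

```python
# Minimal sketch with hypothetical numbers (not figures from the studies
# discussed): compare per-item completed responses for a full-length survey
# vs. a modular design in which each respondent answers only one module.

def responses_per_item(invitations: int, response_rate: float, n_modules: int = 1) -> float:
    """Completed responses available for any single item or composite."""
    return invitations * response_rate / n_modules

invitations = 1000
full_rate = 0.25       # assumed response rate for the full-length survey
modular_rate = 0.27    # assumed (slightly higher) rate for the shorter modular survey
n_modules = 3          # each respondent receives 1 of 3 modules

full = responses_per_item(invitations, full_rate)
modular = responses_per_item(invitations, modular_rate, n_modules)
break_even_rate = full_rate * n_modules  # modular rate needed to match the full survey

print(f"Full survey: {full:.0f} responses per item")
print(f"Modular survey: {modular:.0f} responses per item")
print(f"Modular response rate needed to break even: {break_even_rate:.0%}")
```

Under these assumptions, a three-module design would need roughly triple the response rate just to match the full survey’s per-item yield, which is the gap Paul and Marc argue the modest observed gains (roughly two percentage points per dozen items) cannot close.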
– [Stephanie] Thank you both for that. I think we have time for maybe just one question. So let me go ahead and ask one more question about the presentation of surveys, with regard to collecting data and preparing surveys that have two languages on them that
are placed side by side so the survey itself would be bilingual. Any thoughts about the
potential effectiveness of that sort of approach? – [Paul] One strategy we’ve used that has been quite effective is what we call the Canadian Model. I happen to be Canadian, and in Canada people recognize that almost everything you get has a French version and an English version, so if you get a survey in Canada, or any kind of document, usually it’s in English and French. And we’ve tried that, and it has been relatively successful. It’s more expensive and it makes the document longer and a little unwieldy, but it certainly can be done. – [Marc] This is Marc. I agree, and similar to Paul’s example, although in this case
not French and English, we’ve conducted and published a study about a randomized experiment where anyone with a high predicted probability of speaking Spanish was given both an English language survey and a Spanish language
survey in the same envelope. And you know, as Paul says, it increases mailing costs; on the other hand, it caused dramatic increases in response rates for, in this case, lower-SES, lower-response-rate, Spanish-preferring plan members. So it may be a trade-off worth making in terms of hard-to-reach groups.
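As an illustration of the kind of targeting Marc describes, here is a minimal sketch assuming a hypothetical threshold and pre-computed probabilities; the field names and cutoff are assumptions, not details of the published experiment. Members whose predicted probability of preferring Spanish exceeds the threshold receive both language versions in one envelope.

```python
# Minimal sketch (assumed threshold, field names, and data; not the published
# experiment's actual rule): members with a high predicted probability of
# preferring Spanish receive both the English and Spanish surveys in one envelope.
from dataclasses import dataclass

@dataclass
class Member:
    member_id: str
    p_spanish: float  # predicted probability of preferring Spanish, from any upstream model

SPANISH_PROB_THRESHOLD = 0.5  # hypothetical cutoff for "high predicted probability"

def mailing_contents(member: Member) -> list:
    """Return which survey language versions to include in this member's envelope."""
    if member.p_spanish >= SPANISH_PROB_THRESHOLD:
        return ["English survey", "Spanish survey"]  # bilingual packet
    return ["English survey"]

for m in [Member("A001", 0.82), Member("A002", 0.10)]:
    print(m.member_id, mailing_contents(m))
```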
– [Stephanie] Thank you very much, and thank you to all of you for your presentations today. If you’re interested in staying up to date with all things CAHPS, we encourage you to subscribe to receive email updates and you’ll see there is a web link here. You would get updates on things like these webcasts, including an upcoming webcast on CAHPS 101 in January 2020. To subscribe please go to https://subscriptions.ahrq.gov/accounts/USAHRQ/subscriber/new. If you have questions or comments, for example if you asked
a question here today and we didn’t have an opportunity to respond specifically to the question that you’ve asked, please go ahead and follow up with us by email or by phone; you can always reach us, and we’re happy to get back to you with any further information that we can to help support your efforts. Thank you so much for your time today. As you exit today’s webcast you will see an evaluation pop up in a separate screen; please take a moment to provide us with your feedback, as it helps us to improve our offerings and plan for future events that meet your needs. We invite you to visit the AHRQ website and contact us at any
time by email or phone. Thank you for attending, and enjoy the rest of your day.
