How to access the International Journal of Market Research (IJMR)
The journal is published by SAGE; MRS Certified Members can access it on the SAGE website via this link.
A warning from cohort studies?
I recently read a review of a new book by Helen Pearson, ‘The Life Project: The Extraordinary Story of Our Ordinary Lives’ (Allen Lane). This tells the fascinating story of the cohort studies conducted in the UK since 1946, and the major impact their findings have had on our understanding of our society, on the development of health policies, and on identifying the continued importance of class (wealth or poverty) for health, education, social mobility and so on. Despite the vital evidence these studies provide, and the immense value they have contributed, they live hand to mouth in terms of funding.
However, what really caught my eye, and set in train the topic of this blog, was the changing attitude of the public to participating in these important studies. There was apparently no problem in recruiting the initial sample, collecting data on every baby born in a single week in 1946, and few participants have since dropped out (other than when nature takes its course). Further national cohort studies were successfully launched in 1958, 1970 and 1991, and in Bristol in 1982.
However, for the latest study the researchers hoped to recruit 16,000 mothers over 18 months, but so far only 249 have agreed to participate. The reviewer, Jenni Russell, concludes with: ‘I hope for all our sakes that they succeed’. It is a view I fully support, but if this is happening to studies of such importance to society, what hope do we have as market researchers?
It’s tough out there
Martin Boon stated at the initial open meeting of the BPC/MRS inquiry into the pollsters’ performance during the 2015 UK general election campaign that it was taking 30,000 ‘phone calls to produce a sample of 2,000 – a fact he re-stated in the BBC Radio 4 programmes on polling broadcast in January. That’s a very worrying, and costly, statistic. We also know that response rates for major random probability based public sector surveys continue to fall. For example, the British Election Study conducted after the 2015 election achieved a response rate of 55.9%, and the British Social Attitudes survey continues to experience falling response rates, down from 70% in the mid-1990s to under 55% in 2014.
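To put that statistic in perspective, here is a minimal back-of-the-envelope calculation. It takes the 30,000 calls and 2,000 completed interviews at face value; the arithmetic is mine, not Boon’s:

```python
# Back-of-the-envelope arithmetic on the call-to-complete ratio quoted above.
# The two input figures are taken at face value from Martin Boon's statement.
calls_made = 30_000            # telephone calls attempted
completed_interviews = 2_000   # interviews actually achieved

calls_per_interview = calls_made / completed_interviews
implied_completion_rate = completed_interviews / calls_made

print(f"Calls per completed interview: {calls_per_interview:.0f}")   # 15
print(f"Implied completion rate: {implied_completion_rate:.1%}")     # 6.7%
```

In other words, roughly 15 calls for every achieved interview, or a completion rate of under 7% – which is why the statistic is both worrying and costly.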
Are we trusted?
The ‘Wisdom of the Crowd’ project on ethics and social media research demonstrates again that people have concerns about their data, in this case from social media platforms, being used for research – 60% of participants said this should not happen, only five percentage points below the figure for marketing purposes. The GRBN study showed that whilst, globally, 10% have a high level of trust in market research, 59% either have only some trust or don’t know, and 31% have a low level of trust in marketing research.
What we don’t know from these studies is what participants have in mind when asked the question. What does ‘research’, market(ing) or otherwise, mean to them? What is it they fear will happen if they participate? Why don’t people trust us? Surely we need to have some answers to this fundamental question. Within the new EU regulation on data privacy, ‘research’ continues to enjoy benefits that don’t apply to others collecting and processing personal data, but if ‘researcher’ or ‘research’ raises warning bells in the minds of the public, we need to tread carefully if this special treatment is to continue.
Is there too much research?
Taking just one example, from the polls inquiry: apparently UK citizens were exposed to 1,942 voting intention polls in the period May 2010 to May 2015 – 55% of all such surveys conducted since 1945. No fewer than 91 were conducted during the 2015 campaign. I don’t think there is any longer a realistic estimate of how many interviews are conducted per year in the UK, or in any other country.
We’ve been here before…
I have reports investigating response issues from the late 1970s and the 1980s, and I’ve been involved in two major studies investigating participation in research. Firstly, I led a big Research Development Fund (RDF) study in b2c (‘Public Co-operation in research’) and b2b (‘Business Co-operation in Market Research’) back in the late 1990s, which indicated that trust was not then such a big issue. Yes, there were some concerns and confusion about ‘sugging’, but in the case of b2c the bigger issue was simply whether people were available at that moment in time.
For b2b we learnt that some key roles were being grossly over-researched, often on identical topics. This was followed by the MRS-sponsored Respondent Interviewer Interface Task Force, where I led one of the two threads of research. The recommended metrics to track respondent satisfaction were developed and piloted, but unfortunately never adopted.
However, these three reports describe a world that seems a lifetime ago, as they predate the online research revolution and the social media explosion. The emphasis was on personal interview methods, primarily f2f with interviewers, but also by fixed-line ‘phone. Apart from mail methods, research was still primarily a person-to-person activity. Many of those experiencing market research today do so because they have decided to join an online access panel – a self-selected group of consumers who have no personal contact with the people in the research process. And the trade-off between giving opinions and receiving an incentive is clearer.
Does high response equal better quality?
Do response rates really matter anyway? Jon Krosnick, in his IJMR Evening Meeting lecture (IJMR 56/6, Editorial), contended that response rates in f2f surveys had fallen because the budget per interview had remained constant, so the resources are no longer available to do the job properly. However, Krosnick thought it was an illusion to rely on response rates as a key measure of quality, as responders and non-responders both comprise random samples (assuming a true random sample in the first place, and that techniques known to boost response rates had been used).
So the recent NCRM meeting, ‘Maintaining high response rates – is it worth the effort?’, held at the RSS on 3rd March, provided a very interesting contribution to the debate. I’ll focus here on the opening presentation, by Professor Patrick Sturgis (University of Southampton), describing research undertaken by the NCRM which demonstrated that making more than five attempts to contact a person/household in a random probability f2f sample usually adds an insignificant amount to the quality of the data.
As Joel Williams (TNS-BMRB) described in his presentation (based on the Crime Survey), 20% of fieldwork costs and 45% of visits are needed to achieve 10% of the total response. Whilst it takes between one and three attempts to achieve the first interview, moving from a 66% to a 74% response rate requires 18.6 additional visits per person. Sturgis estimated that each interview in a PAF-based sample costs £250, owing to the cost of repeat calls. The research was based on six major UK surveys, splitting the data from each survey by the number of calls (up to twenty) needed to obtain an interview. This produced 1,250 effect sizes from 559 questions.
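As a rough illustration of the kind of analysis this involves – splitting respondents by the number of calls needed to secure their interview, then comparing the estimates you would have obtained had fieldwork stopped early with those from the full sample – here is a minimal sketch using entirely hypothetical data. The sample size, variable names and 40% ‘yes’ rate are my own assumptions for illustration, not the NCRM figures:

```python
import numpy as np
import pandas as pd

# Hypothetical respondent-level data: the call attempt on which the interview
# was achieved, and a single binary survey variable (e.g. agreement with a statement).
rng = np.random.default_rng(0)
n = 5_000
calls_to_interview = rng.integers(1, 21, size=n)   # interview achieved on call 1-20
answered_yes = rng.random(n) < 0.40                # illustrative 40% 'yes' rate

df = pd.DataFrame({"calls": calls_to_interview, "yes": answered_yes})

# Compare the full-sample estimate with the estimate we would have obtained
# had fieldwork stopped after k call attempts.
full_estimate = df["yes"].mean()
for k in (1, 2, 5):
    truncated_estimate = df.loc[df["calls"] <= k, "yes"].mean()
    diff_points = abs(truncated_estimate - full_estimate) * 100
    print(f"Stop after {k:>2} calls: estimate {truncated_estimate:.1%}, "
          f"difference from full sample {diff_points:.1f} points")
```

With real survey data the interesting question is whether, and for which variables, those differences grow once the harder-to-reach respondents secured on later calls are excluded – which is exactly what the NCRM results below address.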
For 300 variables there was no real difference between first and all calls; after two calls the difference was a maximum of 2% for 600 variables; and after five calls the difference across variables was a mere 0.3%. So, on the face of it, why bother undertaking more than five call-backs? The research, however, showed differences between the surveys – and, as one delegate reminded us, the polls inquiry team (chaired by Sturgis) had demonstrated in their interim findings that splitting the data in the BES and BSA samples between one call and more than one provided a more accurate measure of voting intentions when the harder-to-reach groups were included (for more on this see the Editorial in IJMR 58/2, March 2016).
That greater accuracy is one reason for persisting with call-backs. Secondly, without the extra interviews, samples would be too small. Thirdly, reducing call-backs may not lead to significant reductions in fieldwork costs, as economies of scale would be reduced. Fourthly, easing targets may lead to interviewers reducing their effort to conduct interviews, and could affect the tendering process, where methods for achieving a high response rate can provide a competitive advantage. Finally, as Joel Williams discussed, there would be unease about quality if response rates in major public sector surveys were shown to be low, especially for key sub-samples such as younger people.
However, if low response rates could be shown still to deliver fit-for-purpose quality, this could be a plus factor in retaining random probability methods – or even a case for them replacing access panel samples as a more cost-effective method for topics such as measuring voting intentions...
Participation depends on engagement
But let’s not kid ourselves that we should feel comforted by the findings described in the NCRM seminar. The real issue is that response, or participation, rates are still falling. In my view we should be talking about participation or engagement, not simply response rates. We seem decreasingly able to persuade people to engage with us. And most commercial research is not based on random probability methods, or even quota samples, but on convenience or self-selected samples.
Overall, it appears that people are less comfortable sharing their attitudes, opinions and behaviours with us, whatever the purpose of the research. Is it because of a lack of trust in us, or are there other reasons? We don’t really know. For example, the rise of social media provides new ways for people to express their views. Maybe people feel these have a more direct impact than participating in a survey. They don’t have to hope a researcher will call when they have a bad customer experience – and they know that companies are listening to all that noise on social media.
As you’ll see from the GRBN website mentioned earlier, GRBN has an initiative to build trust in research (the Building Public Trust Programme), built around building awareness, addressing concerns and earning trust – but I don’t think we have a sufficient understanding of the issues to do this job properly. There may be other factors at play, as identified in the RDF project. We need to know what will win over hearts and minds.
Meanwhile, I hope that the cohort team are successful in developing the latest study. But I do wonder if we are part of the problem. Could we do more to find some solutions?