One major accusation levelled at the pollsters, following their failure to correctly predict the outcome of the UK general election back in May, was that their incorrect forecasts influenced both the intentions of voters and the strategies of the political parties.
Recently, I’ve seen two further examples of how we have to think carefully about the consequences, intended or unintended, when conducting research.
The first echoes the issues surrounding political opinion polls. As you probably know, the heat is already being turned up under the 2016 presidential election in the USA, especially with the first round of TV debates among prospective Republican contenders launching their primary campaigns and producing plenty of contentious statements and argument. The example itself is a blog post by John Dick (CEO, CivicScience, Pittsburgh) warning pollsters of the dangers posed by the ‘audience reach’ of findings based on inadequate samples, a reach that media coverage can make disproportionately high.
In support of his case, Dick cites two recent US national surveys of Republicans’ political attitudes, based on samples of 252 (Wall Street Journal) and 423 (Monmouth University). As Dick points out, the margin of error (back to that as the key measure of quality, but let’s stick with it in this case) in the Wall Street Journal poll means that ‘every candidate outside of the top three could be ranked anywhere from 4th to 14th’.
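To see why such small samples are so limiting, here is a rough sketch (mine, not Dick’s) using the conventional 95% margin-of-error formula for a simple random sample, ±1.96√(p(1−p)/n), taken at the worst case of p = 0.5 and ignoring design effects and weighting:

import math

def margin_of_error(n, p=0.5, z=1.96):
    # Approximate 95% margin of error for a simple random sample.
    return z * math.sqrt(p * (1 - p) / n)

for n in (252, 423):
    print(f"n = {n}: +/- {margin_of_error(n) * 100:.1f} percentage points")

That works out at roughly ±6.2 percentage points for the sample of 252 and ±4.8 points for the sample of 423, before any allowance for non-random sampling, which is more than enough to scramble the ordering of candidates polling within a few points of each other.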
In contrast, Dick reports that one leading pollster, Marist Institute for Public Opinion, and the McClatchy Company (‘one of the remaining bastions of professional journalism’) have both suspended their polling of the primary as the popularity ratings for the candidates are too close to call in their surveys, with both organisations expressing unease about the polls ‘affecting the process they are supposed to be measuring’.
What I find particularly interesting, both here and in a recent GreenBook post commenting on poor-quality sample designs (and the supportive responses it attracted), especially online samples (although Dick also highlights the problems with phone-based sampling in his blog), is that there is no mention of the recent UK experience or the current BPC/MRS enquiry. I hope that more notice will be taken internationally when the final report eventually surfaces next spring.
The second example comes from a recent Marketing Science Institute (MSI) video in which Vicki Morwitz (New York University) describes findings from experiments testing the impact of undertaking research on the subsequent behaviour of participants. Morwitz’s examples were from the retail and banking sectors, in both cases demonstrating that customers who had participated in customer research were more likely to buy, less likely to defect, and more profitable than customers who had not been surveyed. She also shows how ‘framing’ in conjoint studies can influence attitudes towards service attributes.
Whilst, Morwitz contends, much of this influence is unintended at the time, there is the possibility of using research to deliberately influence future behaviour. This could be for straightforward commercial gain or to sway opinion. Alternatively, the goal could be societal: nudging consumers to behave in ways more beneficial to themselves as part of a behavioural economics strategy.
Morwitz concludes her presentation by advocating caution, as it can be difficult to anticipate how people might react to being informed through surveys; the consequences might not be as expected. But the key message, as with the Dick blog, is that research can affect subsequent attitudes and behaviour, either by informing participants at the time they take part in a survey or through media coverage of the findings. Some of the consequences might be unintended, others intended, but researchers need to act responsibly when designing a study and pay more attention to the likely impact of their actions.
It’s a thin line between informing (the ratings of political parties) and influencing (subsequent voting behaviour). In customer satisfaction research, for example, there might be a fine dividing line between simply measuring the attitudes of customers and promoting the interests of the commissioning organisation; the latter is not a legitimate research purpose (see MRS Code Rules 13 & 15) if it was an aim when the survey was commissioned (see also ‘MRS Regulations for Using Research Techniques for Non-Research Purposes’).
Historically, samples in research were small relative to the overall target population (but, hopefully, fit for purpose!), so their impact was minimal, but the world has changed. Online methods mean large sample sizes can be cost-effective, and findings can exploit the vast potential of social media platforms to spread a message virally. Survey design needs to be fit for purpose, not just in meeting the client’s objectives, but in meeting the requirements of the MRS (or ESOMAR) Code of Conduct and relevant statistical principles.
1 comment
Annie Pettit, Chief Research Officer, Peanut Labs (14 Sep 2015)
We're fooling ourselves if we choose to believe that the Hawthorne effect does not apply to political polling and marketing research. Humans can be easily influenced even when they desperately try to avoid it. I'm pondering whether an alternative to posting exact polling findings would be to post groupings: these three candidates are in the lead, these four candidates are next, the remaining candidates trail. It's not as satisfying as seeing three decimal places but it could be far more accurate.