For the second year running, the annual one-day MRS conference on analytics and insight proved extremely popular, with over 140 companies represented among the delegates and participants.
Delegates experienced eleven presentations and two panel sessions in the packed programme, covering a wide range of case studies and methodological issues.
The challenges of data aggregation
Whilst data aggregation underpinned most of the case studies, and was advocated in the panel sessions, the challenges of how to do this remained largely unaddressed. Yes, there is no doubt that this is the way forward, but let’s not pretend it is easy, especially when matching, for research purposes, data that was collected for a radically different purpose, such as transactions or customer service contacts. The IJMR Lecture from last July, delivered by Bonamy Finch and their client TUI, demonstrated the difficulties encountered, and how they were overcome, in using data collected for other purposes to create a customer segmentation, with survey data helping to fill gaps and provide the necessary depth of context to profile customers. This case study had also addressed the further challenge of ensuring the outputs were truly actionable (see Editorial in IJMR 60/6, ‘How to create a hybrid segmentation’).
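To make the matching challenge concrete, here is a minimal, purely illustrative sketch (not the Bonamy Finch/TUI method) of the typical situation: survey responses are joined onto customer records derived from transactional data, coverage only partly overlaps, and the resulting gaps are exactly what then has to be filled.

```python
# Illustrative sketch only: joining survey data onto transactional customer
# records, where the two sources cover different (overlapping) sets of people.
# All names and values are invented for the example.
import pandas as pd

transactions = pd.DataFrame({
    "customer_id": [101, 102, 103, 104],
    "bookings_last_year": [1, 4, 2, 0],
})
survey = pd.DataFrame({
    "customer_id": [102, 104, 105],          # only some customers were surveyed
    "holiday_attitude": ["adventurous", "family-first", "budget-conscious"],
})

# Left join keeps every customer; survey data adds attitudinal depth where it exists.
combined = transactions.merge(survey, on="customer_id", how="left")
print(combined)
# Customers 101 and 103 have behavioural data but no attitudinal depth -
# the gap that modelling or imputation then has to address.
```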
The ADDResponse project, which I described in my Editorial in IJMR 58/4 (‘A big data challenge: can auxiliary data help counter survey non-response bias?’), also identified the big challenges faced by the joint research team attempting to use data from a range of sources to help address issues of non-response in sample surveys (see www.addresponse.org). It was also interesting to hear in the morning panel session that data silos still create barriers to data integration – knowledge is power – with internal politics described by one of the morning panellists as the biggest challenge faced in developing a holistic insight capability. Panellists described ‘data studios’ and trend prediction value measures as ways of determining the value of data.
Alberto Ferreira (Travel Republic) outlined how this challenge had been addressed by creating a corporate research team and integrating internal data in a relational database at a granular level, facilitating search and interrogation through a common insight structure built around description, category, study title, nugget type and source platform/channel, accessible through a common set of tools. Workshops are used to brainstorm hypotheses, the objectives being to optimise networks for sharing knowledge and to identify whole-company benefits – ‘power is gained by sharing knowledge, not hoarding it’.
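Purely as an illustration of what such a common insight structure might look like, the following sketch assumes a simple SQLite store with the fields mentioned above (description, category, study title, nugget type, source platform/channel); the schema and example data are invented, not Travel Republic’s actual implementation.

```python
# Illustrative sketch only: a minimal "insight repository" with a common
# structure that makes cross-study search straightforward. Field names follow
# those mentioned in the text; everything else is assumed for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE insight_nuggets (
        id          INTEGER PRIMARY KEY,
        description TEXT NOT NULL,   -- the insight itself, in plain language
        category    TEXT,            -- e.g. 'pricing', 'customer service'
        study_title TEXT,            -- the study or project it came from
        nugget_type TEXT,            -- e.g. 'survey finding', 'transaction pattern'
        source      TEXT             -- platform/channel the data came from
    )
""")

conn.execute(
    "INSERT INTO insight_nuggets (description, category, study_title, nugget_type, source) "
    "VALUES (?, ?, ?, ?, ?)",
    ("Late bookers are more price-sensitive on ancillaries",
     "pricing", "Summer booking study", "survey finding", "web survey"),
)

# A shared structure lets anyone in the organisation interrogate findings by category.
for row in conn.execute(
    "SELECT study_title, description FROM insight_nuggets WHERE category = ?",
    ("pricing",),
):
    print(row)
```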
Statistics versus machine learning versus research
This was the topic for the afternoon panel session. Aji Ghose (Sky) stated that around 80% of their work was still based on statistical principles, with machine learning applied to text mining of unstructured data and image data, building predictive models and linking text to metrics. Beth Blakeley (Honeycomb) reminded delegates of the bias often inherent in ML, caused by the prejudices and unconscious assumptions built into the training data and models, a point she had covered in detail at the ASC conference last November (‘Alexa, what is the future of market research?’ – see my Editor’s blog). This underlines the need to ensure data used to train models is as objective as possible, probably by adopting a team approach to reduce bias.
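By way of illustration only (this is an assumed check, not one described in the presentation), one simple way a team might audit for this kind of bias is to compare model performance across groups in the training data; large gaps are an early warning that the data or the model is treating one group systematically worse than another.

```python
# Minimal illustrative bias check: per-group accuracy on labelled data.
# Group names, labels and predictions are invented for the sketch.
import pandas as pd

df = pd.DataFrame({
    "group":     ["A", "A", "B", "B", "B", "A"],
    "actual":    [1, 0, 1, 0, 1, 1],
    "predicted": [1, 0, 0, 0, 0, 1],
})

# Accuracy by group: a large gap between groups is a warning sign.
accuracy_by_group = (
    df.assign(correct=df["actual"] == df["predicted"])
      .groupby("group")["correct"]
      .mean()
)
print(accuracy_by_group)
```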
The topic was also covered at the ASC conference held last November, in a presentation by Jarlath Quinn (SV Europe), who described statistics as the ugly sister, with demand for AI/ML skills ‘exploding’ and leading to higher remuneration than statisticians (and researchers?) enjoy! Quinn questioned whether it is wise to assume that ML requires no statistical expertise, citing the evolution of statistical theory back to the 17th century compared with machine learning’s post-WW2 origins, data science and ML having evolved out of data mining.
Addressing the skills gap
Right across the data management spectrum, skills are in short supply, but it is probably unrealistic to add a further range of new, complex skills to an individual’s existing set of competencies. The preferred and realistic solution discussed by the panel is to ensure that researchers and analysts/data scientists have sufficient knowledge to hold meaningful conversations with each other. Different cultures have emerged between specialisms, leading to three culturally diverse tribes developing insights – statisticians, researchers and data scientists – presenting a potential Tower of Babel for naïve clients!
All three tribes might also have differing views on data quality and ethical considerations (the concern that GDPR may lead to access to data becoming more restricted in the wake of the Cambridge Analytica revelations surfaced more than once during the day). So, as discussed by the panellists, the key challenge for management is to nurture a culture where differences are understood and respected, and sufficient mutual understanding is developed to create a meaningful team approach. The key is to identify when these different disciplines are appropriate in solving a particular problem, and how the outputs can then be stitched together to create added-value insight.
Understanding data sources is vitally important, part of what I call data literacy, as is ensuring that the flaws in data sources are flagged and their impact identified. The point was also made by the panel that academics and practitioners remain worlds apart, echoing my Editorial in IJMR 61/1 (‘Addressing a key challenge facing marketing research: re-aligning the academic and practitioner communities’), and that data science courses in higher education cover a very wide spectrum (see Nunan and Di Domenico, ‘Rethinking the market research curriculum’ in IJMR 61/1), so recruitment requires a caveat emptor approach.
Also see ‘Data science and the art of persuasion’, Scott Berinato, Harvard Business Review, January/February 2019, for a detailed perspective on building a successful data science function.
‘Why’, and ‘why not’?
The unique selling proposition for research in the ‘big data’ era has often been claimed to be addressing the ‘why?’ of human behaviour, but as Ray Poynter (The Future Place) argued, this is simply not true. Sometimes the ‘why?’ can be sufficiently deduced from other data, or is not necessary in creating fit-for-purpose insight.
For example, in selecting locations for new stores, a retailer like Tesco has a wealth of available data to make such decisions, whilst a new entrant, such as the US company Whole Foods, needs to build up a knowledge base around its offering, including an in-depth picture of consumers and their behaviour, to develop a successful launch strategy. And, as Preriit Souda (PSA Consultants) described, visual data can be used to develop a very detailed picture of human behaviour by linking it to other data sources to create an in-depth interpretation of an image – an approach he called the Brocade Methodology.
So, for an image of a person in a restaurant carrying grid, date and time tags, it is possible to build a detailed profile of the occasion from contextual data sources. But, again, the techniques for how this might be done remained rather obscure.
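Purely as a sketch of what such linkage might involve (the Brocade Methodology itself was not documented in the session), the following joins an image’s grid reference, date and time tags to invented contextual lookups to build a simple profile of the occasion.

```python
# Illustrative sketch only: enriching image metadata with contextual sources.
# The lookup tables stand in for external data that would normally be queried;
# all values and categories are invented for the example.
from datetime import datetime

image_tags = {"grid_ref": "TQ3080", "timestamp": "2019-02-15T20:45:00"}

# Stand-ins for contextual data sources keyed by location and date.
venues_by_grid = {"TQ3080": {"venue_type": "restaurant", "area": "central London"}}
weather_by_date = {"2019-02-15": "cold, dry evening"}

when = datetime.fromisoformat(image_tags["timestamp"])
context = {
    **venues_by_grid.get(image_tags["grid_ref"], {}),
    "weather": weather_by_date.get(when.date().isoformat(), "unknown"),
    "day_part": "evening" if when.hour >= 18 else "daytime",
    "weekday": when.strftime("%A"),
}

# The enriched record: a profile of the occasion built from contextual data.
print(context)
```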
Blurred boundaries emerging?
Finally, the term ‘big data’ was, thankfully, practically absent from the presentations – probably because this is now becoming the norm in the world of analytics and data science. However, during the day I detected something of a blurring of boundaries where research data was concerned.
Traditionally, we think of research data as falling into two typologies, quantitative or qualitative. What I believe I detected was a possible new typological split between quantitative (‘big data’) and qualitative (research data). Whilst this may help illustrate the gulf in data volumes between generally small-scale, sample-based surveys and sources such as social media (sometimes described as qualitative data on a quantitative scale), internal data from transactions and customer service exchanges, and external third-party sources, I suggest that such a blurring of the boundaries within research data is unhelpful and confusing.
Perhaps this represents an interpretation of ‘qualitative’ as providing descriptive context to the ‘quantitative’ (‘big’) data, where only a few relevant variables may be present, research adding deeper, richer data than is often available from other, high-volume sources.
Obviously I’ve picked themes from a very diverse programme that delivered an extremely interesting day, illustrating the increasing complexity and sophistication necessary in generating insight where traditional research meets analytics and ML/AI. We hope to publish a Viewpoint from Ray Poynter based on his presentation later this year, and a Conference Note based on Richard Maryniak’s (Black Swan) presentation describing how social data monitoring can be used in trend-spotting to help pinpoint the timing for new product launches.
Finally, the conference was also used as a platform to help launch the new MRS integrated package of data analytics support services, ADA.