Editor’s note: Finn Raben is the founder of Amplifi Consulting. The following article was first published in Dutch by the Data & Insight Network in the Netherlands and translated into English exclusively for Quirk’s Media.
Full disclosure: I am a fan of AI, but I am not a fan of how slowly we are addressing the multiple issues associated with it!
Members of the market research/insights/business intelligence profession have always been early adopters of new technologies and developments, demonstrating a keen desire to keep up with commercial evolution and new practices. In adopting them, however, our sector (with a few exceptions) has also been quite slow, and sometimes too late, to defend some of the founding principles of our profession.
An example of this is representative sampling. As an industry we shifted very quickly from probability sampling to quota sampling to online sampling as the internet was, in the early 2000s, deemed an essential offering for all businesses. Now we have, with distressing regularity, debates about the quality, representation, churn and recruitment issues associated with online samples. ESOMAR did eventually publish the (well-received) 28 Questions, but the horse had already left the stable and, as clearly demonstrated at the 2023 ASC conference held on May 25, the issue of quality has become so widespread that a global initiative has been established to try to minimize the effects … 20 years on!
The successful adoption and integration of AI in marketing research depends on two crucial elements: curation of training data and the productivity paradox.
The need to better curate training data – and to provide greater transparency of algorithms – has been previously underlined by Michael Campbell in an article in Research World, while the imperative for our sector to be on top of all the implications has also been widely commented upon (includi...