
6 Killer Insights From Digital and Social Data

While a paid impression tells us that the MARKETER thinks the user is interesting to the brand…an owned or earned impression tells us that the CONSUMER thinks the brand is interesting to THEM.

 

By Joel Rubinson

Marketers need to understand this fundamental truth…in a digital, social, mobile age, consumers can choose brand messages as much as the brand messages choose them.

Consumers are always on and always connected, and this has fundamentally changed how we shop…we browse incessantly, on whatever screen we have handy. Progressive marketers know their brands need to be ‘always-on’ as well to generate earned (social media) or owned (e.g. website visits) media impressions that will benefit the brand.

And these owned and earned impressions can be quite significant. About two years ago, the head of the CPG practice for Google told me that Google tracks 8 BILLION food-related searches in a typical month. Fiona Blades, President of MESH, The Experience Agency, which tracks all experiences a consumer has with a brand via mobile, offers, “In some categories we monitor, Digital Owned and Earned experiences can account for as much as 60% of total reported experiences.”

But perhaps the most important insight about an owned/earned impression is this:

While a paid impression tells us that the MARKETER thinks the user is interesting to the brand…an owned or earned impression tells us that the CONSUMER thinks the brand is interesting to THEM.

This simple ‘aha’ is the reason that digital and social data offer unique insights in 6 ways:

  1. BRAND HEALTH MONITORING: In the aggregate, they give a naturally occurring barometer of how relevant the brand is to consumers, and in the case of social media, they tell us why the brand is relevant
  2. ALWAYS ON MARKETING REPORT CARD: They reflect how good the marketer is at always on marketing
  3. CAUSAL SALES MODELING: They suggest that an owned or earned impression has more impact on sales than a paid impression, which is consistent with recent evidence from a WOMMA industry study. But let’s include these data in our models and find out; a minimal sketch follows this list
  4. REAL TIME CAMPAIGN ASSESSMENT: They reflect the degree to which a marketing campaign is having impact on consumer behaviors in time to adjust spending plans
  5. CREATIVE IDEA AND CONTENT TESTING: Every Facebook post, Tweet, content article, pageview, click, etc. is a naturally occurring experiment about which ideas are most interesting to consumers, because the most interesting ones are viewed more and shared more. Use this to optimize your content, creative strategies, and website/app experience.
  6. DATA DRIVEN MARKETING PREDICTIVE ANALYSIS: They produce a cookie or other marker that tells us a particular user is interested in our brand for a certain reason, which has programmatic advertising and personalization value
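
To make the causal sales modeling idea (item 3 above) concrete, here is a minimal sketch of how owned and earned impressions could sit alongside paid impressions in a simple sales model. Everything in it is hypothetical: the weekly figures are invented and the variable names are placeholders; a real model would add controls such as price, distribution and seasonality, and far more data.

```python
import numpy as np

# Hypothetical weekly data for one brand (all figures invented, in thousands).
paid   = np.array([120, 130, 125, 140, 150, 145, 160, 155], dtype=float)  # paid impressions
owned  = np.array([ 30,  32,  35,  40,  42,  45,  50,  52], dtype=float)  # website visits
earned = np.array([ 10,  12,  11,  15,  18,  17,  22,  21], dtype=float)  # social mentions
sales  = np.array([500, 510, 512, 535, 548, 545, 570, 566], dtype=float)  # unit sales

# Ordinary least squares: sales ~ intercept + paid + owned + earned.
X = np.column_stack([np.ones(len(sales)), paid, owned, earned])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
_, b_paid, b_owned, b_earned = coef

# Comparing the per-impression coefficients is one rough way to ask whether an
# owned or earned impression moves sales more than a paid one does.
print(f"paid: {b_paid:.2f}, owned: {b_owned:.2f}, earned: {b_earned:.2f}")
```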

If we think of these as research programs, “always on marketing report card”, “real time campaign assessment” and “data driven marketing action” are “new/new” programs for most. The others offer improvements to current approaches by integrating digital and social data with survey results.

Jeff Reynolds, President of Lieberman Research Worldwide, which offers a next-generation tracking system called BX (disclosure: I consult with them), gave killer advice to researchers on their path forward when speaking at the IIeX conference in Amsterdam: “Think in data systems, not research studies”.

As an example, imagine how a data system strategy, melding together digital and survey data, might work for brand health tracking. Brand health is not equivalent to brand equity; it’s about CHANGES in the baseline. There are baselines in your survey tracking, but there are also baselines for how much people search for your brand, visit its website, talk about it in social media, etc. So we monitor all the health signs, just like a doctor might look at 20 measures in your blood test. Imagine an integrated data system for a retailer revealing that the lines are crossing for website visits vs. a competitor. Further, we notice the same thing for social media conversation. We notice opposite trends for the two retailers on the survey attribute “offers fair prices”. We notice that one retailer’s positive sentiment on price-related comments spiked up one week while the other retailer’s took a nosedive.

Which retailer would you rather be?

If you are not indifferent (and you shouldn’t be), you now see how digital and social data, acting as parts of a data system, can offer so much more than surveys alone. By the way, I chose a retailer for this hypothetical example because it is obvious how these digital metrics are directly related to sales over the subsequent weeks. Also, because you are sourcing insights from digital and social data, prescriptive action plans are more obvious: when you drive up your digital and social metrics of brand health, you are directly driving sales at the same time.
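
Returning to the “many health signs” idea, here is a minimal sketch of the kind of rule an integrated data system could run every week, assuming the signals have already been collected. Every series, name and threshold below is hypothetical; in practice each signal would get proper modelling for trend, seasonality and survey sampling error.

```python
import statistics

# Hypothetical weekly brand-health signals for one retailer.
signals = {
    "branded_search_volume":     [100, 102, 99, 101, 104, 118],
    "weekly_site_visits":        [50, 51, 49, 52, 50, 44],
    "positive_price_sentiment":  [0.62, 0.63, 0.61, 0.60, 0.64, 0.71],
    "survey_fair_prices_rating": [3.4, 3.5, 3.4, 3.3, 3.4, 3.1],
}

def health_alerts(series_by_name, z_threshold=2.0):
    """Return (signal, z-score) for every signal whose latest reading sits
    more than z_threshold standard deviations away from its own baseline."""
    alerts = []
    for name, series in series_by_name.items():
        baseline, latest = series[:-1], series[-1]
        mean = statistics.mean(baseline)
        spread = statistics.stdev(baseline) or 1e-9  # avoid division by zero
        z = (latest - mean) / spread
        if abs(z) >= z_threshold:
            alerts.append((name, round(z, 1)))
    return alerts

for signal, z in health_alerts(signals):
    print(f"{signal}: moved {z} standard deviations from its baseline")
```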

I have spoken at a number of conferences (IIeX, Australia AMSRS, Toronto MRIA, the CASRO digital conference in Nashville) delivering the message that the number one priority for marketing research is to integrate survey and digital data into your information strategy. However, I never thought to present the argument this way: that an owned or earned impression is by its nature a reflection of the relevance of a brand and therefore inherently has important information value.

So hopefully, I’ve won you over and if you are ready to start designing your data system solution for one or more of these programs, please share; I’d love to hear about it.


IIeX EU 2015: Research Is Dead, Long Live Research

IIeX is something special. Next to most other industry conferences, IIeX meetups are a breath of fresh air: invigorating, provocative, and unique. Thus it was in Amsterdam about two weeks ago. There are three big things I took away from an intense 48 hours of presentations and conversations.
Tshirt worn by Chaordix CEO at IIeX in Philadelphia, 2013. Photo Courtesy of Nelson Davis.


 

By JD Deitch, PhD

Regular readers of Greenbook need no introduction to the companies or new techniques that are regularly on display at the Insight Innovation eXchange (IIeX) conferences. Their stories are told on this very blog week in and week out. Nevertheless, as with any good sporting or cultural event, there is something unmistakably better about being there, live.

IIeX is something special. Next to most other industry conferences, IIeX meetups are a breath of fresh air: invigorating, provocative, and unique. Thus it was in Amsterdam about two weeks ago.

Three takeaways

There are three big things I took away from an intense 48 hours of presentations and conversations.

First: research as it is done today—limited in approach, cumbersome in execution, and detached from how companies actually operate—is taking its last breaths.

The impassioned rhetoric of disruption, a fire whose flames I myself have fanned, is now totally mainstream. IIeX was already my third conference this year, and at each I’ve heard choruses of voices singing the same tune.

Amsterdam took that one step further. I thoroughly enjoyed watching Jeff Reynolds from LRW bash a giant nail in the coffin of how we approach brand tracking, and in doing so offer a new vision of doing it better, complete with a P&L that was more attractive for both the agency and the client. (See my own similar piece in the March edition of Marketing News.)

Likewise when Eric Salama spoke about driving cultural change in the vast Kantar organization, there was unmistakable gravity in his frankness. My own employer, Ipsos, is actively engaged in the same process of renewal across the breadth of its business.

Second, the companies and techniques that are the vanguard of the “New MR” are starting to prove their worth.

2015 will be the year that measuring the impact of social media and constant connectedness comes good.

The week before Amsterdam, at CASRO Digital in Nashville, I led a discussion during which my expert panelists agreed that research needed to become source agnostic. Joel Rubinson, one of the discussants, talked about his work showing the impact of social influence on purchasing. He argued forcefully for a better connection between research and clients’ execution, particularly with programmatic advertising. (Joel will be at IIeX Atlanta in June.)

Then, in Amsterdam, I chaired a track of presentations, two of which provided proof points about social media measurement. Fran Cassidy shared results of the IPASocialWorks/MRS project, through which they collected case studies to create guidelines and practices around measuring the impact and ROI of social media. Preriit Souda, ESOMAR’s 2011 Young Researcher of the Year and clearly one of the industry’s bright young stars, shared a multi-source study that blended survey and social media data to evaluate Scotland’s independence referendum.

Another area which, to my mind at least, is on the cusp of great discovery and impact is nonconscious measurement. I was impressed by Jeremy Sack’s (also from LRW) work on Identity Overlap driving brand affinity. I was blown away by Cristina Balanzó’s (Walnut Unlimited) presentation on subconscious human insights, in particular the stunning visuals from her company’s neuroscientific approaches to ad testing. Both speakers articulated cogent frameworks reflecting an undeniable maturity of thought.

And I’d be remiss if I didn’t speak to the rapidly evolving tools that make it easier to do good research. Whether it’s IIeX alum Zappistore blazing a trail for what I’ve come to call “ready to wear research”, or this year’s Insight Innovation Competition EU winner and mobile sample provider Dalia Research, or any of the myriad qualitative and text analytics solutions that just continue to get better, it’s a sure thing that we will see companies like these start to break through to bigger success in 2015.

Third, there is no better place to see the range of techniques and new ideas than IIeX. 

Our industry associations are visibly evolving to stay current amidst the sea of change, but they haven’t a patch on IIeX when it comes to providing a forum for the new. The presentation track I chaired was a microcosm of the range of techniques that, but for neuroscience, have captured the industry’s interest. In addition to Fran and Preriit, Mary Meehan (Panoramix) gave a very structured approach to studying broad cultural trends that reflected the best of what the industry aspires to when it talks about storytelling. Frank Kelly (Lightspeed GMI) spoke about the research they’ve done on using voice—both text-to-speech and speech recognition—in survey research which, one must believe (I do), will replace the keyboard in the future.

The conference has the right combination of tempo, curation, and people to create a very high signal to noise ratio in the sessions, on the exhibition floor, and especially in casual conversation.

Looking forward

IIeX is one of the only conferences that stirs my optimism in the industry. What makes it provocative and unique is that it gets beyond rhetoric to the real world. The techniques on exhibit are available now and are maturing rapidly. The demos are worth the time spent.

The rest of this year’s conferences will no doubt highlight the change underway in our industry. Talk of disruption will become the obligatory first slide of presentations in which companies who may or may not have new arrows in their quiver aim to convince listeners that they’re blazing new trails. IIeX is the real deal though. It is part-showcase, part-crucible for people trying new things explicitly designed to produce better research in front of an audience that is appreciative and imaginative.

Research is dead. Long live research.


Measuring Cognitive Stress & Usability of Surveys

Data quality is directly proportional to comprehension and usability.


 

By Vivek Bhaskaran

Respondent quality is often talked about by social and survey researchers. Representative sampling and random probability sampling theory are the basis of almost all survey and attitudinal research. However, most researchers overlook something even simpler: the usability of surveys and the cognitive stress they place on respondents.

Cognitive stress is the technical term for the mental load a system places on its users: it captures their level of comprehension and understanding of the system and, more importantly, the internal apprehension and anxiety people feel when they are confused or unsure about a task. The confusion may arise from various factors: not knowing the colloquial language, complicated statements that respondents are asked to rate, or simply ambiguous statements that may not apply to individual survey respondents.

Consider this example:

On a scale where “5” means “to a great degree” and “1” means “none at all”, how would you rate the following questions about your company:

1)      To what degree do you believe that your top management regards every employee in your company as an innovator, with the potential to produce or contribute to critical business opportunities?

2)      To what degree are you encouraged to come up with innovative ideas?

The first question here is loaded. It places a substantial burden on the respondent, and it is very likely that different respondents (employees, in this case) will interpret it differently. Is the question about management valuing employees, or about management valuing innovators? It seems that innovators are defined as employees with the potential to produce or contribute to critical business opportunities, so is it about innovators as defined there?

Now compare this to the second question, which is much simpler and easier to understand, at least relatively.

The two questions, as similar as they may seem, are measuring two different artifacts: one measures management’s ethos and the other measures management’s practical implementation. The researcher needs to identify these as two separate items, so that when the results of the survey are in, the researcher can make an accurate recommendation to the CEO.

Survey researchers and social scientists have long struggled with measuring cognitive stress. Many times, the survey questions themselves are not validated or tested for efficacy. This is primarily due to cost considerations and to the assumption that researchers who create surveys are experts at this. They are indeed experts in crafting questions that are not biased and do not lead the witness, but they are also human!

Now consider this other example:

Where do you live?

[                                          ]

The obvious issue here is: does the researcher mean country, city, or zip code? This is a much easier issue to identify and solve. It’s an ambiguous question and it increases the respondent’s frustration; the respondent at this point is trying to guess which of the three the researcher really wants! These kinds of issues can be identified easily by having someone else “QA” the survey. But often the real world kicks in: researchers can send survey links to colleagues to validate and QA the survey, but that’s usually done as an afterthought. Moreover, colleagues and friends are NOT a reliable way to ensure that the survey instrument has no glaring ambiguity issues.

We’ve tried and tested a new model for identifying and measuring cognitive stress in surveys – crowdsourcing usability testers. User Experience and Interaction Designers have used this process very successfully in the web/app design space. Almost all digital design agencies perform some sort of usability testing before presenting concepts and ideas to their clients. In the last few years, as crowdsourcing has gone mainstream, remote usability testing has become increasingly popular.

We can take a page out of that model and apply it to surveys. We can have users record their screens and talk about their experience while taking a survey. Strides in technology have made remote usability testing, where the testing subjects use their own devices and tools and verbally walk you through their experience, cost effective and easy to use.

We here at QuestionPro have partnered with TryMyUI to provide such an integrated solution to our clients. TryMyUI recently released their Partner API, which enabled us to integrate survey usability testing directly into QuestionPro.
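
For readers curious what such an integration looks like from the researcher’s side, here is a rough sketch of ordering remote usability tests for a live survey link over HTTP. The endpoint, field names and response shape are invented placeholders for illustration only; they are not the actual TryMyUI or QuestionPro API.

```python
import requests

API_KEY = "YOUR_PARTNER_API_KEY"  # placeholder credential

# Hypothetical order: ask five remote testers to take the survey at the given
# URL on their own devices, recording their screens and thinking aloud.
order = {
    "target_url": "https://example.com/my-survey",
    "num_testers": 5,
    "demographics": {"country": "US", "age_range": "25-54"},
    "task": "Complete the survey and say aloud anything that confuses you.",
}

response = requests.post(
    "https://usability-vendor.example.com/v1/tests",  # hypothetical endpoint
    json=order,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()

# A real response would typically include an order id and, once the sessions
# are complete, links to the recorded videos for review.
print(response.json())
```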

The screenshots below show how to order Usability Tests from the TryMyUI tester panel.

Cognitive Stress

 

 

Usability Test

Conclusion: This qualitative and subjective model for measuring and identifying usability and cognitive stress, using remote testers and recording their video sessions, represents a step in the right direction for increasing the reliability of survey data. Data collected via surveys are fundamentally inputs into a larger decision-making process, and as researchers we need to be cognizant of the quality of the data we collect. This process makes that data that much more accurate.

 

References & Further Reading:

QuestionPro FAQ on Cognitive Stress & Usability Testing

http://www.questionpro.com/help/601.html

Bureau of Labor Statistics & TryMyUI – Case Study

http://trymyui.com/whitepapers/BLSCaseStudy_TryMYUI.pdf


Macromill Gets Set To Disrupt MR By Embracing “APPification” Of Research

Macromill has just emerged as the first large MR firm to embrace the app model of research as an embedded strategic initiative. What does this mean for the industry as a whole?


 

In a post a few weeks ago I outlined the major disruptive trends in market research this year. One of those trends was automation, and indeed that is shaping up to be one of the dominant forces in the industry, so much so that several organizations have asked me to do a deeper dive analysis of what is happening in that sector. I can’t share the entire report (yet), but I am going to use a few slides in this post to help support my point that the game has been changed in a very fundamental way by this rapidly growing new model.

First is my view of the evolution of this trend, with a basic segmentation of the types of models being deployed and some of the companies that exemplify each. Here is my take (click on it to make the image bigger):

[Image: app market segments]

This isn’t an exhaustive review to be sure, and other players are emerging or pivoting in this direction almost daily. In general, if a company is a player in the insights technology sector (sampling, data collection, analysis and reporting) then they are most likely looking at this idea, and the more of that process value chain they own, the more likely they are to embrace automation as a driver of the research process. The companies that are already dominating in this arena are Zappistore, Instant.ly (formerly uSamp), Google Consumer Surveys, Toluna, Research Now and Qualtrics, but others are quickly developing their own offerings. Of those companies, Zappistore currently has the most traction and is the market leader today.

In my analysis I call out that no major Full Service player had fully embraced the “APPification”  of Research (a term I just stole from Peter Orban based on his post here) as a wholly owned internal initiative. Millward Brown and TNS loading “apps” in Zappistore are important signals by those firms, but Zappistore is not owned by either of those firms or by Kantar as a whole, so those moves are inherently partnerships. However, yesterday that changed when Macromill announced their 2014 results.

When the MetrixLab/Macromill merger was announced last year I predicted that they would quickly begin to position themselves as a game changing force in the industry, and indeed, Han de Groot is wasting no time in announcing exactly where their strategy lies, and the App Market model is the center point of it:

“Our focus has been to innovate the back end of the research production process. Today the market is ready for automation of the front end. Speed to insights is a top priority, whether we engage with insights managers or marketers. Our clients want fast results through highly intuitive interfaces for data and insights reporting. We believe we’re well positioned to compete in this space because we own the entire value chain of market research: from consumer audiences, to survey technologies, from reporting tools through professional analytics. Most new competitors in the market research automation segment lack research professionals who provide additional on demand analysis services. Instead of offering generic survey tools, we focus on added value solutions such as digital and mobile ad pretesting and advertising campaign tracking. We will transform the results delivery experience by combining highly intuitive reports and dashboards with on demand consulting services. By leveraging our research expertise and staying true to our technology DNA, we will take the “MR App” market to the next level,” adds Global CEO Han de Groot.

And I believe he is right. A huge amount of the research process can be (and is being) automated, which lends itself to business-issue-specific templating of solutions. The hallmarks of what is easily (and not so easily) “appified” are detailed in the graphic below.

[Image: automation use cases]

This is a significant piece of the market. If we look at data from the most recent ESOMAR industry study, up to 57% of all traditional research spend is primed for a shift to the “App Market” model:

Using the Expanded Definition by ESOMAR as the basis for calculation, online only approaches account for 55% of global spend at $33.5B.

Online Approaches        % of Spend   Online-Only Revenue
Online MR                18%          $11,284,000,000
Social Media Research    16%          $10,000,000,000
Online Analytics         12%          $7,200,000,000
Communities               4%          $2,200,000,000
Web Traffic Analytics     2%          $990,000,000
Sample & Panel            1%          $920,000,000
Media Monitoring          1%          $540,000,000
Survey Software           1%          $440,000,000
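
As a quick arithmetic check, the categories in the table sum to 55% of spend and about $33.6B, in line with the roughly $33.5B quoted above (the small gap presumably comes from rounding in the reported figures):

```python
# Figures copied from the table above: (share of spend, online-only revenue).
segments = {
    "Online MR": (0.18, 11_284_000_000),
    "Social Media Research": (0.16, 10_000_000_000),
    "Online Analytics": (0.12, 7_200_000_000),
    "Communities": (0.04, 2_200_000_000),
    "Web Traffic Analytics": (0.02, 990_000_000),
    "Sample & Panel": (0.01, 920_000_000),
    "Media Monitoring": (0.01, 540_000_000),
    "Survey Software": (0.01, 440_000_000),
}

total_share = sum(share for share, _ in segments.values())
total_revenue = sum(revenue for _, revenue in segments.values())
print(f"{total_share:.0%} of spend, ${total_revenue / 1e9:.1f}B")  # 55% of spend, $33.6B
```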

 

If it is being done online, it is most easily automated, and if it can be automated, it can be templated. Those factors, combined with a large-scale user base, are triggers for “APPification”.

For any company, that is a tempting market to go after, to be sure; but for a company like Macromill it is more than a target: it virtually defines their business and is the next logical step in their development. Remember, this is a company that is guided by a highly successful and visionary CEO, has the backing of Bain Capital, has a strong technology infrastructure and development capability, and has a world-class full-service organization as well.

It’s that last part that is the real kicker here: although more and more of the research process will be “Appified”, including much “check box” research like testing and tracking, there will always be a need for a strong service component by some clients and for many other types of research. Most of the current players in this space are tech only offerings, and although that isn’t slowing their growth noticeably yet, at some point they will have to develop partnerships or other resources to expand their appeal to clients who need more front-end or back-end support.

Macromill on the other hand, already has those resources in place. They effectively control the entire value chain of research for their clients, and will likely continue to expand that via an extension of their current model of adding in new data channels, technologies, and expanded market penetration.

Don’t take my word for it though. When I received the press release on their earnings, I reached out to Han to get his take on where they were going next. He was kind enough to answer my questions and agree to let me post them here. Read this carefully folks, because Han is laying out a roadmap that I believe strongly is indicative of the path for our industry over the next 5 years.

LFM: Congrats on a phenomenal year, Han! When so many of the other global leading firms seem to be struggling, what do you attribute your success to?

HDG: I take a portfolio approach to the growth of our business. We have built a portfolio of countries, solutions and new ventures – like any other business, sometimes some are hot and some are not – all happening in cycles. In 2014 our majority owned ventures: survey audience provider PrecisionSample, our joint venture with Dentsu, called “DENTSU MACROMILL INSIGHT” and Netherlands based social insights specialist Oxyme have all performed very well against our 2014 plan.

LFM: You’ve been in your new role as global CEO for a few months now; what has been the biggest surprise for you so far?   

HDG: No surprises. MACROMILL and MetrixLab joined forces in 2014 after 8 months of very detailed business planning. What did surprise me, though, while evaluating the Japan MR market in early 2014, was that, unlike in any other large market, Nielsen, KANTAR, GfK and IPSOS have a very small market share in Japan. MACROMILL and Intage have been significantly outperforming the global big 4 in Japan during the last decade.

LFM: Macromill is the first major full service company to bet big on the “appification” of the research process. That seems counter-intuitive for most traditional research suppliers that rely on large margins for the “sample/field/analysis” phase of market research to drive revenue. What is driving that strategy?

HDG: Over the last 10 to 15 years the Internet has squeezed inefficiencies out of the back end ‘production process’ of market research; over the next 5 years the internet will do the same to the ‘front end’ of market research; it will take inefficiencies out of the research selling/buying process as well as the  research delivery process. Look at what a typical order process looks like today between a client and a vendor  – count how many hours are lost in communication back and forth; it is an old fashioned, inefficient process today. Speed to insights has become essential in today’s world: Netflix has won from Blockbuster – how we order and deliver market research will radically change in the same way. “Appification” of market research will make market research (finally) more accessible to marketers, who are not willing to wait weeks for results; they want results within hours and days! Managing the automation behind the app won’t be the real challenge; the true challenge will lie in providing an excellent delivery experience. At MACROMILL we will combine “appification” with great results delivery experiences, that will be our USP. We can learn from Disney and translate the Disney experience to a results delivery processes – we need to immerse clients in the world of their customers. This might sound conflicting with “appification” at first, but I assure you that “appification” will advance our market not only in terms of the order process and the delivery process, but will most importantly advance our deliverables, leading to new results delivery experiences!  

LFM: Are there key capabilities or approaches that you’re excited about potentially adding to your marketplace?

HDG: We’re still considering our strategy towards “market places”  – I expect that many vendor owned MR market places will be introduced. For us, today, it is all about content, more than the market place; we focus on developing apps with a specific topical focus, solving a specific client problem, increasing speed to insights. Whether our apps will be sold on and downloaded from our own App Store or from Salesforce’ AppExchange or Oracle’s App platform, that is to be determined. Today’s apps are ‘order spec’ applications with standardized reporting functionality, the future lies in great results delivery experiences – take a look at the Bloomberg app, simple but highly effective for making better and faster investment decisions.

LFM: In looking ahead at 2015, what do you consider to be your primary strategic imperative?

HDG: The most important one will be attracting top talent. We have just hired a great new global CFO from GE and CTO from IBM, to be announced soon. And we have to make ourselves known – together with Comscore we have been the fastest growing company in the MR industry for the last 15 years since our inception, but nobody knows us; we have some work to do!

Business growth will come from adding new markets and distributing our innovations to the markets where we operate; “appifying” our proprietary MR solutions; integrating our social brand insights solutions (from our Oxyme venture) with our survey based brand and campaign tracking solutions. We have just launched a beta version of a WhatsApp type of interview tool in Japan, a beta version of a ‘Big Data Dashboard’ in the Netherlands, and an iphone/android based product scanning app in Japan that enables us to collect both survey and purchase data from our audiences. We’re also rapidly expanding our consumer audiences in LATAM, Africa and SE Asia – we want to maintain full control over the value chain of market research.

LFM: Thinking of 2020, how will Macromill look then?

HDG: Great question, but after sharing all of the above, I would like to keep this one secret! 

So there you go folks. The early success of the tech companies pioneering this model has now given birth to a large, global player that is going to push it to the next level. This shift will be as fundamentally game-changing as the advent of the internet was 15 years ago, and our industry will change dramatically on all levels as a result.


How to Separate Neuroscience from NeuroHype

How do we, with respect to neuroscience, separate the wheat from the baloney? Here are several tips and resources.

Courtesy of www.angleorange.com

 

 

By Kevin Gray

As everyone knows, we only use ten percent of our brains, right-brained people are more creative and pregnant women lose control of their minds.

Except that what everyone knows is probably wrong according to Christian Jarrett.  Jarrett is a contributor to Wired, but not merely a correspondent who writes well about technical matters.  And, make no mistake about it, he writes very well and has recently published a very enlightening and entertaining book entitled Great Myths of the Brain.  Importantly, Jarrett also writes as an expert since he holds a PhD in Cognitive Neuroscience and is editor of the British Psychological Society’s Research Digest.

Being a neuroscientist himself, Jarrett does not think that neuroscience is bunk.  What he does believe is that much of what has been written or said about it in the popular media is bunk.  There is a difference:

We’ve made great strides in our understanding of the brain, yet huge mysteries remain.  They say a little knowledge can be a dangerous thing and it is in the context of this excitement and ignorance that brain myths have thrived…. Salesmen are capitalizing on the fashion for brain science by placing the neuro prefix in front of any activity you can think of…

Not surprisingly, Jarrett is concerned about backlash:

With all the hype and mythology that swirls around the brain, the risk is that people will become disillusioned with neuroscience for failing to provide the revolution in human understanding that many have heralded…There seems to be a rising mood of skepticism, weariness with the clichéd media coverage of new results, and a growing recognition that neuroscience complements psychology; it can’t possibly replace it. But let’s remember too that neuroscience is in its infancy. We are unearthing new findings at an astonishing rate, many of which are already helping people with devastating brain disorders.

A few of the numerous other popular misconceptions and overstatements he covers in the book include:

  • The brain is a computer
  • Adults can’t grow new brain cells
  • The female brain is more balanced
  • Neuroscience is transforming human self-understanding
  • Brain training will make you smart
  • Brain food will make you even smarter

Jarrett briefly touches on neuromarketing and thinks it holds promise, though he feels there has been a lot of nonsense written about it too.  (In the interest of disclosure, this is also my opinion.)  He points out that many neuromarketing claims appear in newspaper articles or magazines, rather than in peer-reviewed scientific journals and calls for more rigor and balance:

Although it’s early days, and there’s been an inordinate amount of hype, there are ways in which brain scanning techniques could complement the traditional marketing armamentarium.

As an illustration, he describes a hypothetical new food product that is a hit with consumers taking part in a conventional taste test: “Brain scanning and other physiological measures could potentially identify what makes this product distinct…”

Scientific controversies rarely seem to boil down to the simple dichotomy that a theory has been either conclusively proven or conclusively disproven, and even in the hardest of hard sciences there are large grey regions which force us to examine the balance of the evidence in order to come to a sensible judgment.  A sound conclusion, in fact, may be to suspend judgment, and Jarrett is very good at giving balanced appraisals of the evidence regarding the various issues he examines and makes his opinion clear that there often is at least a grain of truth in many myths.

Exaggerations and basic misunderstandings, such as confusing statistically significant with consequential, can spread like wildfire throughout the blogosphere and even respected news media, unfortunately, and what is merely appealing conjecture can be quickly “established” as fact through sheer repetition.  However, claims regarding potential are not proven facts, and I’d urge us all to watch the pea under the thimble when hearing or reading about any breathtaking new claim that purports to be grounded in science. Marketing researchers should in theory be better than most at spotting these stampedes of misinformation but we too can easily fall victim to a herd mentality; just because we’re marketing researchers, doesn’t mean we’ve been immunized against embellishments and outright deceptions.

So how do we, as laypersons with respect to neuroscience, separate the wheat from the baloney?  Jarrett lists six guidelines, which he returns to frequently in the book:

  1. Look out for gratuitous neuro references: “Just because someone mentions the brain it doesn’t necessarily make their argument more valid.”

  2. Look for conflicts of interest: “Look for independent opinion from experts who don’t have a vested interest. And check whether brain claims are backed by quality peer-reviewed evidence.”

  3. Watch out for grandiose claims: “Sound too good to be true? If it does, it probably is.”

  4. Beware of seductive metaphors: “We’d all like to have balance and calm in our lives…”

  5. Learn to recognize quality research: “Ignore spin and take first-hand testimonials with a pinch of salt. When it comes to testing the efficacy of brain-based interventions, the gold standard is the randomized, double-blind, placebo-controlled trial…The most robust evidence to look for in relation to brain claims is the meta analysis…”

  6. Recognize the difference between causation and correlation: “The causal direction could run the other way (people with a larger Y like to do activity X), or some other factor might influence both X and Y.”

Jarrett’s tips are sound advice for evaluating science in general and I believe most also apply to new marketing research techniques.

To elaborate a bit further, if someone claims that something has been “proven” on the basis of a single unreplicated study, in my opinion, they have only proven themselves suspect.  Even a large-scale meta analysis that has been well-conducted and taken study heterogeneity into account probably will not clinch it; that may require multiple, independent meta-analyses.1  Publication bias is another concern.  This is a lengthy matter but essentially refers to the fact that many studies go unpublished, not because of poor quality, but because no statistically significant effects were detected.  A negative finding, however, is just as important as one achieving statistical significance.

Many science writers appear to me to have had little formal education in research methods and statistics, and this is noticeable when an article headline does not match the body of the article or when the writer interprets the original paper or papers being cited inaccurately or selectively.  Many news accounts on any subject are oversimplified, in my opinion, but neuroscience may fall victim to this more often than most because it is inherently so interesting and mystifying and, at the same time, so scary and complex.  “The brain is always busy, whether it’s engaged in an experimenter task or not, so there are endless fluctuations throughout the entire organ.  Increased activity is also ambiguous – it can be a sign of increased inhibition, not just excitation.  And people’s brains differ in their behavior from one day to the next, from one minute to the next.  A cognition-brain correlation in one situation doesn’t guarantee it will exist in another.  Additionally, each brain is unique – my brain doesn’t behave in exactly the same way as yours.”  One wouldn’t have guessed any of this from a typical mass media article!

It’s also wise to be wary of “new evidence” when we hear “rebuttals” such as “Well, that may be true, but new evidence shows…”, which would seem to suggest that the old evidence cited until now wasn’t actually credible. So, therefore, we should be swayed by the new evidence? Cherry-picked results are a pet peeve of statisticians and a favorite tool of charlatans of all sorts, as are fancy visualizations, which can be cunningly used to mask thin substance. To be clear, I am not pointing the finger at neuroscience as a special case, since these sorts of tactics can be employed whenever science is invoked as a reason to trust a claim. (I offer a few more thoughts on what we should be on guard against in http://www.greenbookblog.org/2014/04/14/innovation-or-sales-pitch/)

I am not a neuroscientist myself, of course, and I think the first step for anyone who wants to learn more about any scientific or technical matter outside one’s own areas of expertise is to find out who the real authorities are and what they are really saying or writing about the subject.  Judging from assorted reading over several years, Jarrett’s views seem to me to be quite representative of mainstream neuroscience (which for the most part has nothing at all to do with marketing).

For readers wishing to dig more deeply into neuroscience, there are many sources.  Bob Garrett, a Visiting Scholar at California Polytechnic State University, has co-authored Brain & Behavior: An Introduction to Biological Psychology, a popular textbook now in its fourth edition.  Many marketing scientists will be familiar with the Journal of the American Statistical Association (JASA), a quarterly publication that has featured many technical papers on neuroscience subjects over the years.  In particular, JASA is an excellent source on measurement challenges facing the field that more often than not go unmentioned in popular media accounts.  The British Psychological Society’s Research Digest can be found here http://digest.bps.org.uk/ and Jarrett’s Wired blog here http://www.wired.com/category/science-blogs/brainwatch/.  In addition, there are several skeptical blogs Jarrett cites that you may wish to have a look at.2

______________________________________________________________________

Notes

1 “Meta-analysis” does not simply mean that a reviewer has examined more than one study.  It is a set of procedures for statistically synthesizing the results of several primary studies.  Like any methodology, meta-analysis can be misused or abused.  There are many online sources about it and I can also recommend Methods of Meta-Analysis (Schmidt and Hunter) and Introduction to Meta-Analysis (Borenstein et al.), two frequently-cited textbooks on this topic.

2 These five blogs are mentioned specifically: http://mindhacks.com/; http://blogs.discovermagazine.com/neuroskeptic/; http://neurocritic.blogspot.co.uk/; http://neurobollocks.wordpress.com/; and http://neurobonkers.com/


12 Reasons Why You Should Build An Online Community In 2015

In addition to being a source of innovation and inspiration for marketing, online communities can benefit your business in many other ways.


 

By Adriana Rocha 

A few years ago, when we started building private online communities for our clients, many of them used to ask, “Why should I build a private online community when I have my own Facebook fan page?” Well, it was not that difficult to explain the reasons why, but their priority was still to focus on building a “strong” presence on Facebook, getting thousands or even millions of “fans”, I mean “likes”. Now that companies realize it is hard to build a true brand community on Facebook, engage their customers, or even ensure that their message will be delivered to their followers and fans without having to pay for it, private online communities are returning to the agenda of CMOs around the world. According to IDC, online communities will continue growing to support business innovation in all areas of the company, representing a 30% increase in investment in 2015 compared to 2014. Forrester also predicts that, as public social networks keep growing, private online communities will gain strength in 2015 and beyond.

Essentially, building an online community means that your company will work with an extended marketing arm, sometimes reaching hundreds or even thousands of customers (or other stakeholders) acting as motivated contributors. So, in addition to being a source of innovation and inspiration for marketing, online communities can benefit your business in many other ways:

  • Increase Customer Loyalty

A recent Forrester survey shows that online adult audiences who want to stay in touch with your brand are three times more likely to visit your website than to follow it on Facebook. Both B2B and B2C companies have found that branded communities offer a greater opportunity to create loyalty and lifetime customer value than social networks like Facebook or Twitter. Community adds a sense of belonging to a website. This can encourage repeat visits and more interest in the site as a whole. Often there is richer and more varied content, as you don’t have just one moderator creating content for the site, but rather many people joining in with their opinions and feelings.

  • New source of Brand Advocates

An online community can be used to create a program of brand ambassadors and to systematically add customers who are brand advocates through word-of-mouth. Co-creators of content, as well as other users and influencers in their own product category, can then share content in other social networks. A positive comment about a brand is more credible coming from a “consumer like me” than from a TV ad. Developing customers as brand ambassadors and defenders is a new medium of mass communication.

  • Control of your Data and Customer Experience

Building private online communities means that you will be in control of the communication and relationship with your customers.  Besides that, the data generated by your community will be just yours, and not available to your competitors.

  • An Extended Reach of your Message

Many brands try to get word of mouth by publishing content in public social networks, hoping it becomes viral, but this “send and pray” method often does not work. What is the best way to create a viral effect? How about inviting a few hundred of your most satisfied customers into a single space, classifying and recognizing them for their cooperation, and watching the word-of-mouth multiply?

  • Increasing Sales

When people are searching and considering their options for products and services, they usually end up visiting the brand’s website. The inclusion of ratings and reviews on your website can generate more sales. By adding a Q&A section or a customer community to your website, you can generate more sales opportunities with new customers and more satisfaction among existing customers.

  • Feedback from Stakeholders

It is very powerful to have a dedicated online community capable of opening a window into your customers’ lives and inviting them to join your internal marketing team. This idea is not limited to customers; it can also be used to include employees, shareholders, opinion leaders, and other audiences.

  • Reducing Market Research costs

Especially when the target audience is difficult to reach because of low incidence, owning an online community and being able to invite customers to participate in research projects provides significant cost savings for both quantitative and qualitative studies.

  • New possibilities of Consumer Insights

An online community with video and photo tools allows consumers to easily upload video clips or photos from their smartphone, tablet or computer. The benefit of online ethnography (or ‘netnography’) is not just financial; the quality and quantity of information gathered improves dramatically, especially considering that before, we could only send a team of researchers to consumers’ homes on a previously scheduled day and time. In addition, the gathered data may be more realistic, since nobody is watching in the home while the video is recorded. It also creates many opportunities for insights, as consumers can share their experiences with the brand or product from anywhere, at any time, including at the point of sale.

  • Contribution to Brand Equity

Agile, Modern, Dynamic. Are these image attributes you would like associated with your brand? A vibrant online community can help. See Starbucks Ideas, for example.

  • Keep up with the Competition

Brands building their own online communities are no longer just pioneers adopting new technologies, whether for co-creation, social marketing, market research or loyalty programs. If you have not started building your online community yet, you are now part of a small group that risks being left behind. Consider this: in the area of market research in the US, according to the latest GRIT (GreenBook Research Industry Trends) report, for autumn 2014, 56% of respondents said they are already using online communities for market research purposes (also known as insight communities). Another 26% said they are considering building a community, 14% did not yet have interest or were unsure, and only 4% said they would never use an online community for market research.

  • Agile and Better Decision Making

Having a few hundred clients in private online communities eliminates the need to search for them each time you want to hear from them, as usually happens in traditional research projects. You can reduce by days, or even weeks, the time spent on recruiting and re-contacting. In co-creation communities, for example, customers can contribute on numerous occasions, not only when they are there to listen to an interviewer; they can actually become an integral and continuous part of the marketing team and the business innovation process.

  • Engage your Board of Directors

Directors and general managers love to watch and listen to customers talking about their needs and experiences with their products and brands. However, they are usually far from consumers on a day-to-day basis. So bringing a group of clients or potential customers into the meeting room increases understanding of the final consumers and helps the board make better-informed decisions.

Finally, it is time for companies to focus their efforts and social marketing strategies on what really works to attract and engage customers. Do you have experience building your own online community? What are the strengths, challenges and actions that work and don’t work? I would love to know your opinions and experiences, so we can use this space to share knowledge and best practices for those yet to venture into the world of online communities.

 

References:

IDC: http://www.idc.com/getdoc.jsp?containerId=prUS25297714

Forrester: http://blogs.forrester.com/nate_elliott/14-11-10-as_social_media_matures_branded_communities_will_make_a_comeback_in_2015, http://blogs.forrester.com/nate_elliott/14-11-17-facebook_has_finally_killed_organic_reach_what_should_marketers_do_next

Greenbook: http://www.greenbook.org/grit

Digital MR: http://www.digital-mr.com/blog/view/5-Reasons-why-every-organisation-should-build-a-community-online

 


Rubber … Meet Road. Time to Decide What to Do!

Day 2 of the Insight Innovation eXchange in Amsterdam provided even more content to chew on, making me feel like Dionysus at a banquet being stuffed continually by chefs from all sides.

Photo Credit: Javier Minguez

 

By Richard Evensen

Day 2 of the Insight Innovation eXchange in Amsterdam provided even more content to chew on, making me feel like Dionysus at a banquet being stuffed continually by chefs from all sides.

One presentation aptly summed up my feelings of content-overload: “Stop Eating the Menu!”

So, now, satiated on content and industry connections, we head back to the “real world” (wherever and whatever that may be) and …

Well, that’s the question. What do we do? More importantly, what do we do differently than we did before?

Post-conference, there are soooo many options but only so much time (and, for market researchers, definitely only so much money). So, how do we take all of these content-rich presentations and – as we advise our clients – ACT on the insights?!

At the risk of being shot for providing yet more content to digest, I offer this simple construct for moving from thinking to acting:

  1. Take a piece of paper (or open an Excel sheet, if you prefer) and label 4 columns with Interesting, Innovative, Important and Urgent. These are defined as follows:
    • Interesting – The tidbits you picked up which you can remember but wouldn’t necessarily implement. “Good to know” but definitely not game-changers.
    • Innovative – The new approaches, techniques, products, processes, etc. which are making you wonder “what if?”. These might be game-changers, or they may be vaporware.
    • Important – The stuff you know you “need to do” and, in many cases, probably keep putting off. Yup, that stuff!
    • Urgent – The thing(s) you saw which you feel/know could impact you in a major way. Think in terms of both opportunity and threat.
  2. Using the definitions above as guidelines (but not limits), relax now and allow yourself to do some free-thinking about everything you saw, did, heard, felt and experienced at IIeX.
  3. Simply write down whatever comes into your head in the column where it feels like it fits. Don’t over-think it. Just do it.
  4. Using a bit more analytical rigor, go through what you wrote and cross out or move anything which doesn’t fit. It’s OK to cull/change until you have things where you want them. Optimally, you should only have one (max two) Urgent items.
  5. Now, the Action Plan. For the insights you have in each column, do the following:
    • Interesting – Keep on the radar … but do nothing. Yes, sometimes no action is best.
    • Innovative – Request more info … then do nothing unless it turns out to be Important.
    • Important – Make this/these your 12 month goal. Define the following:
      • Barriers to each
      • Solutions to the barriers
      • Time in which you will take action on the solutions
    • Urgent – Take a similar approach as with Important item(s) … and promise everyone in your support network that you will make this change within the next 6 weeks!

In reality, you may not do anything about the Interesting, Innovative and even Important items. That’s fine. If you make just ONE valuable change, you’ll have a great ROI from the event.

And, if you follow the approach outlined above, you WILL do that one Urgent item. Why? I present to you the most effective technique for optimizing change: Peer Pressure!

And with that, I bid adieu to all the great presenters and people I met and look forward to seeing you at the next IIeX.

 



The New Math of Market Research: Growth = I x I x I

Day 1 of the Insight Innovation eXchange Conference set out to show us a new reality and define a new research paradigm. It did … and then some.


 

By Richard Evensen

It’s always difficult to summarize a conference day, let alone one with 40+ presentations across wide-reaching topics and attendees across the full market research ecosystem!

Day 1 of the Insight Innovation eXchange Conference set out to show us a new reality and define a new research paradigm. It did … and then some. Key must-haves for market researchers who want to grow and thrive in this new reality include:

  • Innovation – Yes, that’s the name of the conference but what you realize here is that there are lots of ways to innovate, including:
    • Business model – Growth hacking, partnering with start-ups and/or uncommon hook-ups, owning a niche or becoming a services Swiss army knife. The research business of today has some different options and, equally, more challenges.
    • Technology – Capture intrinsic responses, integrate with big data, embrace mobile and social. The market research toolbox used to be pretty much standard. Now, there are lots of choices … and they are increasing quickly.
    • People – A new breed of (big) data analysts, data visualization experts and storytellers. The industry is realizing that we should no longer hire like but, instead, hire the skills which are needed to leverage available technology and optimize our business model.
  • Integration – The conference is very much about integration and exchange of ideas, and some key must-changes for market researchers include investing in much greater integration with:
    • End Customers – Getting close to your customers, engaging them across multiple devices and platforms, understanding Gen Z. If you’re not embracing your end customers, “you’re not working for a good company”!
    • Ecosystem – “Insights is a team sport”. This quote says it all. Forget static trends and realize that we live in a world which is dynamic and connected and you need to tap many sources to understand context and values and guide great business decisions.
  • Insights – This one is mentioned a lot but, like ‘intelligence’, doesn’t seem to have a commonly-held definition. Some key views which bubbled to the top are that insights should be:
    • Holistic – Trends need to include context, survey responses need to identify emotional engagement, quant needs qual, PC/CATI needs mobile and traditional data needs new sources. Take as many sources as you can, while ensuring timeliness and actionability.
    • Visual – Static PowerPoint, outdated data charts, no videos. We need to evolve and embrace dynamic presentations which pull our clients in. Tech here is still lacking though in my opinion. New business for anyone?

The ‘math of success’ is not just one of these areas … but all of them, at the same time. And it means making change while already overloaded.

Echoing two of the presenters, we need to “get brave” since “we’ve got five years” (max, in my opinion) to get it right.

Hopefully Day 2 sheds some light on the ‘how’!


Jeffrey Henning’s #MRX Top 10: How Research Firms Can Grow

Of the 4,881 unique links shared on #MRX last week, here are 10 of the most retweeted.


By Jeffrey Henning

Of the 4,881 unique links shared on #MRX last week, here are 10 of the most retweeted:

  1. 10 Predictions about the Future of the Market Research Industry in the Digital Age – Michalis Michael shares his thoughts on what is to come down the road for the research industry. His first four predictions: traditional agencies that refuse to change will go out of business; DIY will democratize the industry; social-listening analytics will become essential; and agile research will become mainstream.
  2. Workshop by Kristin Luck: Growth Hacking: Tips and Tricks to Grow Your Business – Join Kristin Luck for her IIeX Europe workshop as she shares her approach on how to use “growth hacking” to your advantage, whether you are launching a product, a brand, or a company.
  3. 7 Consumer Types for Successful Targeted Marketing – A Euromonitor white paper describing a consumer segmentation based on a survey of 16,000 global consumers. The segments include Undaunted Striver, Impulsive Spender, Balanced Optimist, Aspiring Struggler, Conservative Homebody, Independent Skeptic and Secure Traditionalist.
  4. Beyond the Big Reveal – Writing for Research, Brian Kushnir of Added Value details what agencies have to do to stay current: changing closely held data to more shareable forms, transitioning from scheduled outputs to agile delivery, and moving from long-form reports to short deliverables.
  5. The Internet of Things and the Coming Data Deluge – This recent ORC webinar looks at how the many “smart” Internet connected devices in our phones, our TVs, even our refrigerators will impact the future of research.
  6. Gallup Migrating Away from Phones in Favor of Online Polling – Gallup is reducing call-center staff as it continues to transition to more online research.
  7. Stop Asking for Margin of Error in Polling Research – Annie Pettit discusses the thorn in the side of statisticians, margin of error, why it was first used and why it has to go.
  8. Super Bowl XLIX: From Sadvertising to Dadvertising? – BrainJuicer looks at emotional advertising, with several examples of successful ads shown during the Super Bowl and why they worked.
  9. Young Researcher of the Year Award – ESOMAR has put out a call to all young researchers who are passionate about their work and wish to be recognized for it.
  10. New Joint Industry Guide Gives Alternative Approach to Measuring Social Media – The MRS, IPA, Marketing Society, Facebook and Twitter have joined forces to publish a guide for evaluating social media for marketing comms.

 

Note: This list is ordered by the relative measure of each link’s influence in the first week it debuted in the weekly Top 5. A link’s influence is a tally of the influence of each Twitter user who shared the link and tagged it #MRX, ignoring retweets from closely related accounts. Only links with a research angle are considered.


Two Key Challenges To Measuring The ROI Of Social

The measurement of a social campaign is about much more than likes, shares, downloads, and plays. It needs to be in the context of the objectives, and those objectives need to link to things like sales.


 

By Ray Poynter

Two Guides

Want to know how you should be evaluating social media campaigns? Do you want to know how to balance short-term activation events with long-term effects? The answers are in the recently launched #IPASOCIALWORKS Guide to Measuring Not Counting.

As one of the authors of the Guide I have been involved in several events, including the launch at the IPA, workshops, and conference sessions. Whilst these events have been generally positive, two major challenges have been exposed by our interactions with attendees and people working inside advertisers and agencies.

These two challenges do not include the complexity of econometric modelling and experimental design. Although those topics are complex, there are people who can help. No, the two problems are:

  1. Being asked to measure social too late
  2. Not having access to sales data

Measurement needs ‘baking in’ to social

All too often the agency or social team are asked to evaluate campaigns that are about to start, or perhaps underway, and even sometimes that have finished. Occasionally this is possible, but usually it is just folly.

As the Guide points out, the measurement of a campaign is about much more than likes, shares, downloads, and plays. The measurement needs to be in the context of the objectives, and those objectives need to link to things like sales. To measure the complex interactions in social, and between social and other channels, it is necessary to create a plan for what will happen, when it will happen, and how it will be measured. For example, if a campaign is going to be assessed it is best if it does not all start at the same time, and it is best if different regions or groups are exposed to different elements of the campaign.

Asking for the evaluation too late is likely to lead to a report that talks about likes and shares, rather than sales and shifts in likelihood to recommend.

The Guide sets out a five-step plan to ‘bake’ measurement into a social campaign; a sketch of what such a plan might look like, written down as a simple data structure, follows the list.

  1. What is the campaign/activity designed to do? Defining macro objectives, like sales or likelihood to recommend, and micro objectives, such as downloads, plays, and engagement.
  2. Why social? What is the role of social? Including issues like whether social is being used on its own or in conjunction with other channels.
  3. What decisions will be made on the strengths of the evaluation? For example, are the measurements going to be used to manage the campaign in real-time?
  4. What are the appropriate datasets and metrics? With the objectives and uses defined we can select the right things to measure, and make sure that the campaign is deployed in ways that facilitate measurement.
  5. How should the evaluation be designed? The gold standard is market mix modelling, but that may not be affordable in terms of time or cost. The need for accuracy and the need to be pragmatic have to be traded off to select an appropriate design.
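
Written down before launch, the five steps amount to a measurement plan the whole team can see. Here is a minimal sketch of such a plan as a plain data structure; every value is a hypothetical example, not a recommendation.

```python
# Hypothetical measurement plan, "baked in" before the campaign goes live.
measurement_plan = {
    # 1. What is the activity designed to do?
    "macro_objectives": ["incremental sales", "likelihood to recommend"],
    "micro_objectives": ["video plays", "downloads", "engagement rate"],
    # 2. Why social, and what is its role?
    "role_of_social": "amplify the TV launch among 18-34s",
    # 3. What decisions will the evaluation inform?
    "decisions": ["reallocate spend mid-flight", "set next quarter's social budget"],
    # 4. Which datasets and metrics, and how must the campaign be deployed?
    "datasets": ["weekly sales by region", "impressions by region", "brand tracker"],
    "deployment": "staggered start; two matched regions held back as controls",
    # 5. How will the evaluation be designed?
    "evaluation_design": "test-vs-control uplift now, market mix model at year end",
}

for step, value in measurement_plan.items():
    print(f"{step}: {value}")
```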

Many of the people we have spoken to say they only get approached at stage 4. In these cases it is essential that somebody takes ownership of steps 1, 2 and 3: for example, that somebody lists the objectives, both macro and micro, defines why social was chosen, and highlights the decisions that will be made using the measurements.

Measuring ROI and impact requires sales data

Brands guard their sales data very jealously, for fairly obvious reasons. However, without sales data it is not possible to effectively measure the ROI of most social campaigns. If sales data are not available the teams tend to fall back on measuring the available metrics, such as likes and shares, which in turn devalues the measurement of social.

To measure the impact of social typically requires granular data (e.g. weekly data), data on what happened (e.g. impressions), data on who was exposed to what, and sales data.
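
As a minimal illustration of why the sales data matter (every figure below is invented), even the pragmatic end of the design spectrum, a simple test-versus-control comparison of regions, cannot be run without weekly sales broken out by region:

```python
# Hypothetical weekly unit sales for two comparable regions; only the "test"
# region saw the social campaign, which ran in weeks 4-6.
test_region    = [300, 305, 298, 330, 342, 335, 310, 306]
control_region = [295, 300, 297, 299, 302, 298, 301, 296]

pre, during = slice(0, 3), slice(3, 6)

def avg(values):
    return sum(values) / len(values)

# Difference-in-differences: how much more did the test region grow during the
# campaign than the control region did over the same weeks?
test_change = avg(test_region[during]) - avg(test_region[pre])
control_change = avg(control_region[during]) - avg(control_region[pre])
uplift_per_week = test_change - control_change

# Valuing that uplift against the campaign's cost gives the ROI.
print(f"~{uplift_per_week:.0f} extra units per campaign week attributable to the campaign")
```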

If brands want to accurately evaluate social, to properly decide how much they should be spending on it, they need to make sales data available to the teams measuring the ROI – which can either be internal teams or specialist agencies.

Have you had problems accessing sales data when modelling? Do you have any tips to share on how to make the case for more data being made available?
