
Participate In The Q3-Q4 2016 GreenBook Research Industry Trends (GRIT) Survey

We need your help uncovering the future of Market Research. Participate in the Q3-Q4 2016 GreenBook Research Industry Trends (GRIT) Survey today!

We’d like to invite you to participate in the Q3-Q4 2016 GreenBook Research Industry Trends (GRIT) Survey which helps us write the GRIT Report.

The market research industry is changing rapidly, and it is more important than ever that we truly understand what the implications are for our business and profession. This edition of the GRIT survey has been updated to include some of the most hot-button topics in our industry today, including:

  • Adoption of emerging methods/tech
  • Usage of traditional Qual
  • Usage of traditional Quant
  • Sample Quality and Respondent Engagement
  • “Buzz Topics”: Automation, AI, Marketplaces, Big Data, Storytelling, Nonconscious Measurement, Attribution Analytics
  • Projected Research Spending/Budgets
  • Use of non-traditional data sources for insight generation
  • Trends Impacting Corporate Researchers: Organizational fit, impact, insourcing vs. outsourcing, internal innovation
  • Supplier Satisfaction levels and drivers of satisfaction
  • Hiring trends
  • Training resources
  • Information sources used by the industry

With your help, we can all better understand and adapt to our evolving research landscape. Would you give us 15 minutes of your time to take the survey and help make the GRIT report possible?

Take the Survey Now

Prefer to take the survey in Chinese, German, Japanese, or Spanish? You’ll be able to select one of those languages at the start of the questionnaire.

About the GRIT Report: Twice every year, we work with our partners to create the most comprehensive survey of the market research industry, and report back our findings to you. If you haven’t seen a GRIT Report before, take a look at the Q1-Q2 2016 edition to get a feel for it. Our goal is to help all insights professionals better understand where the industry is heading, so you can make the right decisions for your organization.

In return for your participation, as soon as it hits the (virtual) shelves, we’ll send you a PDF copy of the full report, so you’ll be among the first to see it.

Thanks in advance for giving back in support of our profession.

Thanks to Our GRIT Partners

Research Partners

AMAI, ARIA, Ascribe, AYTM, Gen2 Advisors, Happy Thinking People, Lightspeed GMI, MROC Japan Inc., mTAB, NewMR, OfficeReports, PROVOKERS, Remesh, Research Now, Researchscape, Stakeholder Advisory Services

Sample Partners

ACEI, AIM, AIP, Asia Pacific Research Committee (APRC), Australian Market & Social Research Society (AMSRS), AVAI, BAQMAR, BVA, CASRO, DatosClaros, ESTIME, feedBACK, GIM Gesellschaft für Innovative Marktforschung, LYNX Research, MRIA, MRS, MSU MMR, NGMR, NYAMA, OdinText, Qualitative Research Consultants Association (QRCA), SAIMO, Sands Research, The Research Club, Toluna, University of Georgia / MRII, UTA, Vision Critical, Wisconsin School of Business, Women in Research


Market Research Careers and Education

In an ever-changing workforce environment, how have associations changed their offerings for a younger generation of researchers?


By Monica Zinchiak

Recently, the 2016 MRA Insights and Strategies Conference hosted a panel discussion with industry leaders from CASRO, MRA, PMRG and QRCA that addressed training and education for the newest talent in the market research industry. The question posed to those of us on the panel: in an ever-changing workforce environment, how have associations changed their offerings for a younger generation of researchers? Have associations amped up educational resources and professional development?

Young researchers getting their start in the market research industry have more exciting employment opportunities than ever before. These future leaders have more creative freedom, more academic achievements, sexier tech to work with, and more introductions into the discipline of market research. Big data and the “internet of things” now have firm roots in the quantitative market research landscape. On the qual side, we are seeing the widespread adoption of user experience research outside of the tech world. Hello user labs, customer personas, consumer immersion, corporate ethnography, journey mapping, and human-centered product design.

A great career map can be found here; it shows both qualitative and quantitative career paths in good detail.

Back to the question though… Have associations amped up education resources and professional development [for a younger generation of researchers]? Absolutely. Professional associations are always creating programs for every level of their membership. One of my favorites is the Qualitative Research Consultants Association (QRCA). The QRCA is committed to investing in our future leaders through its Young Professionals Grant (YPG). The grant includes full registration fees to attend QRCA’s annual conference, the world’s leading qualitative research event, in Los Angeles, CA, January 18-20, 2017. That means FREE! Ten promising young quallies, age 35 or younger, will be given this opportunity. Applications are open until November 1, 2016 at www.QRCA.org/YPG.

“The learning and networking opportunities as well as inspiration that came from the conference was unparalleled to other experiences I have had in my career thus far.  For some, this conference could be a launching pad for their careers, for others, this could be a catalyst in their already-developed career.”  — 2015 Young Professional Grant Winner

In case you don’t know us yet, the QRCA is a not-for-profit association of more than 800 of the top qualitative researchers in the world. We are moderators, ethnographers, UXers, social media experts, insight whiz-kids, branding authorities and more.


Hard Hat Stats: Some Common and Uncommon Sense (Part 3)

Kevin Gray shares some tips he's picked up over more than 30 years' experience as a marketing researcher and statistician.


By Kevin Gray

I’m not a scholar – just a lunch pail guy – but I do have more than 30 years’ experience as a marketing researcher and statistician. I’d like to share some tips I’ve learned on my journey.

Embrace ambiguity, or at least learn to be tolerant of it!  Marketing research draws heavily from the social and behavioral sciences, which lack quantifiable natural laws. It’s not Engineering. That said, I should note that Statistics is nowhere near as cut and dried as it might appear from an introductory course.  At the more advanced levels there is often little consensus among statisticians as to what works best in similar situations.

Question legacy practice.  Example: A still-popular way of conducting consumer segmentation is to perform a principal component factor analysis on a large number of values, attitudes and lifestyle ratings (“psychographics”) and then cluster the factor scores with K-means or hierarchical cluster analysis. The clusters are then cross tabbed against demographics, purchase behavior and other key marketing variables. This practice, which I call the “Cluster & Pray” method, was already under criticism when I began my MR career in the 1980s and I have seldom found it useful.  The odds of obtaining a useful segmentation are much better if we use attitudinal statements known to be related to consumer behavior (though there are now more sophisticated approaches to segmentation we can use, too).
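For readers unfamiliar with the mechanics, the pipeline the author criticizes can be sketched in a few lines. This is purely an illustration, not a recommendation, and not the author's code: the ratings here are random, and the counts (30 items, 5 components, 4 clusters) are arbitrary assumptions.

```python
# Illustrative sketch of the "Cluster & Pray" pipeline: factor-analyze a
# psychographic battery, then cluster the scores. All data and counts are
# made up for the demo.
import numpy as np

rng = np.random.default_rng(0)
# 500 respondents rating 30 attitude/lifestyle items on a 1-5 scale
ratings = rng.integers(1, 6, size=(500, 30)).astype(float)

# Step 1: principal-component "factor" scores via SVD of the centered battery
X = ratings - ratings.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[:5].T  # scores on the first 5 components

# Step 2: a bare-bones K-means on the component scores
k = 4
centers = scores[rng.choice(len(scores), size=k, replace=False)]
for _ in range(50):
    # assign each respondent to the nearest center, then recompute centers
    labels = ((scores[:, None, :] - centers) ** 2).sum(axis=2).argmin(axis=1)
    centers = np.array([scores[labels == j].mean(axis=0) if (labels == j).any()
                        else centers[j] for j in range(k)])
```

Step 3, cross-tabbing `labels` against demographics and purchase behavior, is exactly where such segments often fail to differentiate, which is the author's point.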

Watch out for the Devil in the details.  Example: Though it isn’t the end of the world if you use statistical methods intended for cross-sectional data on time-series data, it isn’t good practice and can lead us astray. If you’re new to time-series analysis, here’s a primer.

Don’t focus on a small slice of consumers based on preconceptions – not evidence – that they are “the target.”  This runs up costs and can lead to very bad decisions.

Remember that all variables in a principal components (“factor”) analysis will have at least some weight in the computation of factor scores. As an illustration, say only two variables out of 30 load heavily on a factor and we name that factor based on these two variables. In reality, they may only account for a small percentage of the variance of that factor’s score, thus the label we gave the factor is misleading and the conclusions we draw from the analysis may be incorrect. This is a very common mistake in MR.

Remember that brand mapping is not just correspondence analysis with our software’s default settings – essentially a clerical task. Using different options may have a substantial impact on the map and there are many good ways to do mapping besides correspondence analysis.

Don’t forget that adding a predictor to a regression will nearly always increase the model R square. The adjusted R square, which compensates for model complexity, is usually more meaningful. (There are many other indices as well.) Secondly, R square is a proportion, so if someone reports that the new model has an R square “25% higher”, does this mean an increase of .25 to .50, for example, or a less spectacular improvement of .25 to .31?
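Both points are easy to verify numerically. The sketch below uses simulated data (my assumption, not the author's), so only the direction of the comparisons matters: R-square never falls when a predictor is added, even a pure-noise one, while adjusted R-square claws back that automatic gain.

```python
# R^2 is non-decreasing in the number of predictors; adjusted R^2 penalizes
# model complexity. Simulated data -- only the direction of the effect matters.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
noise = rng.normal(size=n)              # an irrelevant "predictor"
y = 2.0 * x1 + rng.normal(size=n)

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])   # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - ((y - X @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()

def adjusted(r2, n, p):                 # p = predictors, excluding intercept
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

r2_1 = r_squared(x1[:, None], y)                     # real predictor only
r2_2 = r_squared(np.column_stack([x1, noise]), y)    # plus the noise column
# r2_2 >= r2_1 always holds; compare adjusted(r2_1, n, 1) vs adjusted(r2_2, n, 2)
# to see the complexity penalty at work.

# And the "25% higher" ambiguity: .25 -> .50 doubles R^2, while .25 -> .3125
# is a 25% relative improvement. Say which one you mean.
```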

Ignore standardized regression coefficients for dummy variables.  It’s hard to think of a case where interpreting a proportion in terms of standardized units is meaningful.  More to the point, we aren’t truly standardizing since the variance of a proportion decreases as you move away from .5 towards 1 or 0.
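The variance point can be checked directly: a 0/1 dummy with share p has variance p(1 − p), which peaks at an even split and shrinks toward the extremes, so "standard deviation units" for a dummy depend on its own split rather than on any meaningful scale. A quick numerical check (illustrative values only):

```python
# Variance of a 0/1 dummy is p*(1-p): maximal at p = .5, shrinking toward 0 or 1.
import numpy as np

for p in (0.5, 0.7, 0.9):
    d = (np.random.default_rng(2).random(100_000) < p).astype(float)
    print(f"p={p}: theoretical var={p * (1 - p):.3f}, sample var={d.var():.3f}")
```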

Don’t mix apples and oranges, e.g. “Age group has more impact on purchase frequency than gender.” Does this mean that including age group in the model improved model fit more than adding gender did?  If so, then we should say so.  Also, categorical variables such as age group usually have more than two categories and require more than one column in the data file, so adding or deleting these variables will tend to have more impact on a model than adding or deleting variables occupying just one column.

Realize that “importance” is in the eye of the decision-maker. Unfortunately, decision-makers are often not clear what they mean by “important” and it can connote several things to statisticians.  To decision-makers it often implies impact on the bottom line but, when asked about importance, statisticians should pin down specifically what the questioner has in mind.  This can be a very hard question and is not something a computer can answer for us.  For example, a profitable consumer segment may be distinguished from other consumers by just a few variables.  This means, however, that the overall discriminatory power of these variables will be small because they don’t vary systematically among other consumers – in that sense, they are not important.  Statisticians need to tread carefully when they use the word important themselves.

Remember that most “organic” data are not really organic. Finding the trees and plucking the fruit costs time and money. Moreover, the nutritional value of this fruit to the business should not be taken for granted…

Learn how to communicate with statisticians! Statisticians can come across as geeky purists preoccupied with mathematical minutia. For some this is a fair assessment. However, what might seem like technical trivia to non-statisticians may actually have a substantial impact on the bottom line. If you are ever in doubt, ask your statistician specifically how the details they’re fretting about might affect the decisions at hand. Be patient – these things are often hard to explain in words. You may be very happy you asked!

Learn how to communicate with decision-makers. This will be obvious to most marketing researchers, but the best research in the world will be the worst research in the world if its results and implications are badly communicated. Don’t try to dazzle clients with tech talk. Also, let’s not forget that we’re not professional entertainers. Don’t put your audience to sleep, but tell them what they need to know simply and clearly.

Avoid thinking of new methods and new data sources as complete replacements for “traditional” marketing research. They are supplements and complements to it and should be welcomed rather than feared or seen as panaceas. Progress is rarely either/or.

Don’t over-react to hype.  Believing everything we hear is risky but so is rejecting everything we hear. True, a lot of claims about disruptive innovation are downright silly, but that doesn’t mean it’s OK to just to stick to what we know. I’m no soothsayer but my bet is that the world and MR will be very different in 15 years, and I want to be ready for it.

Hope you find this helpful!



With three weeks to go, BrainJuicer is calling the presidential election. They don’t necessarily think she’s heading for the kind of landslide the polls indicate – but they do think Hillary Clinton has it in the bag.



Editor’s Note: GreenBook has no position on the US Presidential race. We have been showcasing the experiment being conducted by BrainJuicer under their System1 Politics brand because we think it’s innovative, interesting, and because I am a bit of a political junkie (much to the chagrin of many of my Facebook friends).  You can read more about the experiment here and here.

Now, despite there being over two weeks to go before the election, and with traditional polls running to extremes on who is ahead, in true BrainJuicer fashion they are calling the election for Hillary Clinton based on the results of their 3Fs tracking study.

Here is their analysis of why they feel confident, from a behavioral science perspective, that the outcome is already decided.


By Tom Ewing

Our System1 Politics election experiment is almost over. We have been tracking the US election without asking a single voting intention question, instead measuring the deep “System 1” heuristics that humans use to make decisions:

Fame (does something come easily to mind?)

Feeling (do I feel good about something?)

Fluency (is it distinctive – i.e. can I recognise and process it quickly?)

We’ve shown that the 3Fs drive brand share and predict brand growth. We felt they would also do a good job of predicting political outcomes – earlier and more accurately than the pundits and polls. We don’t really believe the polls can be “rigged” – but we know human psychology can’t.

So for the last nine months we’ve been doing regular waves looking at political candidates on the 3Fs. First, we surveyed the host of candidates that filled the primaries back in January, when we called Donald Trump and Hillary Clinton as the eventual nominees.

As the race thinned out, we focused in on those two politicians, ultimately running week-by-week dips during the closing three months of the campaign.

With three weeks to go, we’re calling the outcome. We don’t necessarily think she’s heading for the kind of landslide the polls indicate – but we do think Hillary Clinton has it in the bag.

In a previous post we explored her strengths and weaknesses as a candidate, as revealed in our data. But as these two charts show, that isn’t the real story of this election.

Here are the 3Fs in January:


And here are the same measures today.


Both politicians have risen in Feeling – though Hillary Clinton has maintained a steady lead. But Donald Trump’s once huge advantage in Fluency has collapsed. People no longer see The Donald as distinctive. Clinton’s Fluency has managed to sustain itself far better – to the point that she’s actually overtaken him.

So what happened?

To understand that, you need to understand what Fluency is.


At its core, Fluency measures distinctiveness. But “Distinctive” doesn’t mean “different”. There’s no denying that Donald Trump is a very different politician. His policies and selling points are unique. But in politics as in branding, distinctive isn’t about unique selling points. It’s about being easy to recognise and process.

What boosts Fluency? Distinctive assets – simple, highly recognisable things that stick in the mind and reinforce memory structures. Trump has had plenty of these – from his hair, to his “Make America Great Again” slogan, to his signature “build a wall” policy. Clinton also had some strong distinctive assets – her experience, her associations with Bill Clinton’s presidency, and the fact that she’s the first major-party woman candidate. But the fact that she never hit the heights of Fluency that Trump did suggests these were comparatively weaker.

What damages Fluency? Having no distinctive assets kills it dead. But the other great enemy of processing ease is a lack of congruence. You see this again and again in innovation – new ideas failing to succeed because they’re not fluent enough. And what lets them down is incongruence: parts of the concept clashing with other parts. Think bacon vodka. Or vegetable flavour jell-o. They’re different alright. But they’re not Fluent.

With those things in mind, it’s easy to explain the Trump phenomenon.


As the campaign went on, Trump’s Fluency fell steadily. The percentiles on this chart show where in our branding norms database his Fluency score at each stage would put him. Even in June, when he was at his strongest in our overall model, his Fluency score was slipping from its early heights. By September, it had collapsed.


The first and most fundamental point is that, quite simply, the novelty of Trump wore off. Against a Republican establishment, he was an excitingly distinctive proposition, using a simple set of distinctive assets – winning, the Wall, making America great again – to knock out opponents. It’s also obvious with hindsight how congruent the Primaries format was for Trump. A competition with him at the centre and the others dropping out one by one? Just like The Apprentice.

In the later part of the campaign, Trump stopped being a novelty. He lost focus on his distinctive assets – not mentioning the Wall at either Presidential debate, for instance – and faced an opponent with higher Feeling who he couldn’t simply fire. All bad for his Fluency.

He also had another problem: incongruence. His biggest drop came between becoming the nominee and the first debate. This was the period where he was having to act like a Presidential candidate in a normal election, reading off teleprompters at the convention and endorsing GOP candidates – going completely against the distinctiveness he’d built up. Was he a maverick outsider or a GOP hero?  Trump was becoming the bacon vodka of politics.

The first debate was an opportunity to boost Fluency in a big way by claiming and owning a vital asset: appearing presidential. One candidate took that opportunity, and it wasn’t Donald Trump. Clinton’s Fluency was also very low before the debate, but jumped back up afterwards, putting her in the lead.

Trump’s Fluency, meanwhile, has dipped still lower, as he dissolves into a mess of contradictions: is he a Republican or does he hate them? Is he presidential or a pervert? As one respondent in our verbatims put it: “Still trying to figure him out.” In other words, no Fluency.


This has been an unusual election with two candidates who feel like the precise opposite of each other. But at the fundamental level of the 3Fs, the fascinating thing is that both candidates have moved in similar ways since January. They’ve both gained Feeling. They’ve both lost Fluency.

With two such different candidates acting in the same way, we can hypothesise that these shifts are created by the campaign process itself, not by the candidates.

So we’d suggest two rules for election campaigns, that candidates need to be aware of.

  1. Familiarity Breeds Contentment. Even with the most hostile campaigns in presidential history, positive Feeling for both Trump and Clinton is higher than it was in January. As people get used to the candidates, and make their decision as to who to vote for, positive emotions rise and negative emotions fall.
  2. Distinctiveness Declines. As familiarity rises, the novelty factor of the candidates wanes, and their Fluency falls. Trump’s Fluency – based on being a wild card – was far more vulnerable to this than Clinton’s. Back in January we noticed she had definite leads on the distinctive assets around the presidency itself – and (thanks to her debate performances) this ‘presidential’ Fluency has worn out much less.

Donald Trump was a classic challenger brand. We love a challenger brand story. But because we love that story, we usually remember the challenger brands that make it. We don’t remember the fads and flash-in-the-pans, the ones who didn’t “cross the chasm” to mainstream acceptance. And right now, with 3 weeks to go and his Fluency in the basement, that’s where Donald Trump is.

(Whoever you support, though, we at BrainJuicer and System1 Politics urge you to vote. None of this analysis will matter if people don’t get out and use their right to vote on November 8. Here’s hoping for a peaceful Election Day, and whoever you vote for, please vote!)


Finally, A Fund For MR Service Businesses! Growth Calculus Launches Unique Investment Fund for Marketing Services Companies

Many small and medium sized research companies are doing excellent work but struggling to find the knowledge and capital it takes to grow. The Growth Calculus fund is filling a void in the market by supporting these suppliers.


By Gregg Archibald 

Growth takes investment.  For some companies in our industry, that investment is much too hard to come by – especially for those companies that aren’t either very large or tech driven.  Growth Calculus is looking at the market a bit differently, with a focus on more traditional measures of opportunity – growth, service, product, profitability and potential.

In my role with Gen2 Advisors, I see many small and medium sized companies that are doing excellent work but struggling to find the knowledge and capital it takes to grow.  We are pleased to see this fund filling a void in the market by supporting these suppliers.

We think the press release below from Growth Calculus explains their focus and underlines why this effort is important better than we can, so we are posting it in its entirety. We hope the entire research supplier community will find this as exciting as we do!

Please note that Gen2 Advisors is supporting Growth Calculus in their effort to identify companies that are appropriate for this fund.


Growth Calculus, the growth capital investment and advisory firm, today announced the launch of its inaugural investment fund — the Service Evolution Fund — a sector-specific fund targeted for investment in specialized marketing services companies in the US and Canada.

The Fund seeks to invest in marketing services firms with a track record of revenue growth and profit earned by providing highly specialized expertise to Global 2000 companies across all industry sectors. It is currently evaluating investment opportunities in categories including:

  • Customer experience consulting and analytics
  • Vertical & application-specific research and data/reporting
  • Marketing optimization consulting and analytics
  • Forecasting and behavioral economics
  • Social media data collection and analysis
  • Marketing operations outsource management
  • Mobile marketing data/analytics and app development
  • Advertising production and talent cost management
  • Agency compensation and media auditing
  • Lead generation and sales effectiveness
  • CRM and customer loyalty consulting and analysis

“These are the types of highly specialized services companies that have terrific, untapped knowledge assets,” said Patrick LaPointe, managing partner of Growth Calculus. “But they need both capital and an infusion of very senior expertise in key areas to find their breakout growth paths.”

“We believe that marketing services companies are under-served by traditional venture and private equity investors,” said Kathy Bachmann, managing partner of Growth Calculus. “By uniquely combining growth capital of $1 million to $3 million along with intensive expert advisory support in areas like product development, sales operations, marketing demand generation and backroom process efficiency, we believe we can help create higher levels of performance and open doors to new solutions and new markets.”

“This new fund targets a segment of the market that is underserved with a combination of capital and counsel that can accelerate emerging marketing services companies’ growth,” said company advisor Tony Pace, former CMO of Subway restaurants and chairman of the Association of National Advertisers. “Changes in consumer behavior and technological advancement have given rise to many new enterprises in this space. Having worked with many good ones, the desire among them for expert counsel and investment capital to spur growth is nearly universal.”

“Having built, grown, and ultimately sold several companies that all started out in services and transformed into data- and tech-leveraged offerings, I’ve seen how great service firms can create substantial value for their owners, teams, and investors alike,” said LaPointe. “The key lies in learning to ‘productize’ to create more value for clients and greater operating efficiencies. When that happens, revenue and profit growth rates can easily match those of the best software companies and company valuations improve dramatically.”

“Studies in the angel capital universe show conclusively that investors who earn the best returns are those who are highly focused in their selection process, exercise discipline in valuing and vetting each opportunity, and then roll up their sleeves and work with the CEOs of their portfolio companies to help make growth happen,” said Bill Payne, Angel Capital Association’s 2009 angel investor of the year (Hans Severiens Award) and Growth Calculus advisor.

“We also see great opportunities to invest in women-owned businesses,” Bachmann said. “These are substantially under-represented in the venture and angel investment communities today, yet they tend to deliver above-average growth, profitability, and efficiency. That’s why we’ve committed to invest at least 50% of our capital in women-owned marketing services businesses.”

For more information, visit www.growthcalculus.com.

This press release is not a solicitation to invest in the Fund. Investment is only open to Accredited Investors and only after review of the Fund’s Private Placement Memorandum, Limited Partnership Agreement, and Subscription Agreement.


Google Surveys 360: The Digital Marketer’s New Best Friend

Google Surveys is now Google Surveys 360 and is part of the wider Google Analytics 360 suite. Here's why that matters to researchers and marketing scientists.



Since the launch in 2012 of Google Consumer Surveys, the offering has continued to mature in both scope and focus. The core Google Consumer Surveys product provides users a cheaper, faster way to survey a representative sample of their target online population across the web and on mobile devices. In 2016 alone, thousands of businesses have conducted hundreds of thousands of interviews using GCS, informing large-scale research projects and impacting business decisions around the world. GCS has access to 10M+ available respondents via partner publisher sites and 5M+ downloads of its mobile app, Google Opinion Rewards, creating a unique sample source of real, everyday people.

By all measures, GCS has been a successful product and it has accelerated the DIY and automation trends in market research.

This being Google, one of the main areas of focus for the company has of course been use cases related to advertising research and integration with the many other data analytics tools Google offers. I for one have been waiting for quite some time to see how (or if!) the integration into the wider Google Analytics platform would happen, but the wait is over: Google Surveys is now Google Surveys 360 and is part of the wider Google Analytics 360 suite.

The Google Analytics 360 Suite offers a powerful and integrated analytics solution for businesses of all sizes. The tools allow marketers to measure and improve the impact of their marketing across every screen, channel and moment in today’s customer journey. It’s easy to use, makes data accessible for everyone, and helps users discover and activate the “ahas” they need to win. Here is a bit more from their website:

With its cutting-edge technology, the Analytics 360 Suite processes enormous amounts of complex data―then simplifies it all―so your enterprise can easily spot insights and put them immediately to use. It provides one user experience with a single login and it’s fully loaded with cross-product data integrations.

Loaded with six other products, four of which are brand-new and in beta, the Google Analytics 360 Suite makes it easy to share data and insights throughout your organization.




The full suite includes:

  • Audience Center 360, a new data management platform that gives you a panoramic view of the audiences that matter most to your brand. Natively integrated with DoubleClick, it automatically offers access to Google proprietary data and third party data.
  • Optimize 360, a new website testing and personalization tool. Show your customers different variations of your site, then use and refine the best-performing options to increase customer engagement with your brand.
  • Data Studio 360, our new data visualization tool, crunches the numbers and turns them into beautifully informative reports: easy to read, easy to share, and fully customizable so your teams can get exactly what they need.
  • Tag Manager 360 is new and designed just for enterprise with Google’s industry leading tag management technology. It offers simplified data collection and powerful APIs for better data accuracy and streamlined workflows.
  • Analytics 360, formerly known as GA Premium, consolidates data about customer behavior into a single product and makes it easy to perform robust analysis. Get actionable customer insights and then use those insights to earn more from your marketing.
  • Attribution 360, formerly known as Adometry, helps advertisers see the value of their media investments and allocate budgets with confidence. Now rebuilt from the ground up, it helps you analyze performance across all channels and devices to achieve the most powerful marketing mix.
  • Surveys 360, formerly Google Consumer Surveys, lets marketers conduct powerful, inexpensive and efficient surveys of a global sample, or survey website visitors via re-targeting or the development of customized panels and deep behavioral targeting. Enhanced analytical tools like cross tabulations allow for exploration of the data or integration with data from other tools in the 360 suite.

The Google Analytics 360 Suite integrates with other Google solutions like AdWords, the Google Display Network and Google BigQuery. Pair it with DoubleClick Bid Manager to reach your best customers at the right moments with auto-optimized bidding. Use your website data to segment high-value customers and remarket to them automatically. This is truly personal and real-time advertising — at scale.


As part of the new integration the Google Surveys 360 user interface was rebuilt from the ground up to be simple and easy to use, with new interactive crosstabs.




The new Google Surveys 360 is positioned as an enterprise platform and has a new pricing model connected to it based on an annual subscription plus per use fees, but for those who have been using Google Consumer Surveys no worries:  the pay-as-you-go solution will still be available for those who don’t need advanced features and support.

The big news here, besides the incorporation of the dynamic surveys tool into the 360 suite, is what that means for marketers, especially for targeting. Using the unrivaled data Google has about who is visiting almost any site in the world, you can now retarget people who have visited your site, viewed your videos, clicked on your links, and so on.  Think about the possibilities for agile and iterative testing of ads, concepts, and content with people you know are your target audience!  Rather than just having descriptive statistics from the traditional Google Analytics tools, digital marketers and researchers can now get to the “why” via survey functions, adding immense depth to web analytics, and using web analytics to add immense depth to surveys.




Plus, with their open API, marketers and researchers can develop a host of new solutions “powered by” Google Surveys 360 to extend the functionality and use cases of the core platform in new directions.

Based on my conversations with marketers and digital data scientists and the struggles they face daily, I suspect that with this new offering Google Surveys 360 may become the new BFF of Digital Marketers.  Here is an example of how they integrated multiple tools in the 360 suite:


Using Google Surveys 360 and Google Attribution 360 together, we were able to quickly answer questions around TV ad performance during the Rio 2016 Olympics. We then displayed this data using Google Data Studio (beta).




The walls between research and marketing have been coming down for a while, but with Google Surveys 360 we are finally looking at an integrated solution that combines some of the best of digital analytics with market research capabilities in an easy-to-use but powerful platform. The opportunities for marketers to make better decisions from better data are immense. I said in 2012 that Google was “in it to win it”, and they have continued to prove that to be true with the launch of Google Surveys 360.


The Myth of 1:1 Marketing

How do marketers become of value to individual members of their audience? We talk about one to one marketing, and the benefit of going beyond mass messages to build meaningful engagement with customers.

Editor’s Note: Fresh Squeezed Ideas created a video series, The Future of Marketing and How to Win, to not only share ideas on where the future of marketing is headed, but to also provoke some new ways of thinking about brand strategy and marketing.

By Fresh Squeezed Ideas


There’s an old saying: “Half the ad budget is wasted, but we don’t know which half.” Of course, that’s not as true anymore as it once was. Now, marketing has technology that’s very powerful at understanding what is resonating with customers, and being able to calculate an ROI.

Take the stance of Lee Clow, who coined the term “marketing arts” as an alternative to the word “advertising”. What’s the difference between advertising and marketing arts? Well, advertising really speaks to sticking a branded message in the faces of passive viewers over and over and over again, until they can’t help but recall the brand and the message, the icons, the jingle, and so on.

The modern equivalent of this is “preroll” and display ads that show up in our social newsfeeds and before videos we want to watch. That raises the question of what marketing arts offers instead. Marketing arts is really about something much more. It’s about engagement. It’s about experience. It’s about creating something of value to the viewer. So, if this digital era really allows marketers to break free of the 30-second spot, and create an endless array of possible creative engagements, why not take advantage of it? Why are we still bombarded with display ads for stuff we probably don’t care about?

The problem is that the scale of executing on the one to one marketing proposition is actually beyond imagination—and scatter-shot preroll is not the solution. So, how do marketers become of value to individual members of the audience? This is a critical question, and you need to start by understanding two things. First, products and services play a very important role in consumers’ lives. And second, marketers therefore have an important role to play in consumers’ lives. You see, if everything that we purchase creates meaning in our lives, then marketers have a very serious responsibility. They’re involved in our lives. So, it’s important for marketers to truly understand us as consumers, and to set themselves against the task of enriching our lives with greater meaning.

This leads to the question: how do you build meaningful engagement with customers through personalized marketing? The key idea is that narratives attract customers, who then choose to engage on a one-to-one basis. And so, B2C brands should take a page out of the B2B marketing handbook. Content is king. Crafting a narrative that attracts your target customer, rather than trying to throw yourself in front of them online, is really the key to success.

The best example of a narrative that attracts customers, rather than trying to intercept them, is from KLM, the Dutch airline. Now, if you leave something on an airplane, you’ve probably had the experience that airline crew, or the airport staff can be indifferent to your specific problem. They’ve got a lot going on. KLM has introduced a sniffer dog so that, when someone leaves something light on the airplane, the flight attendant will let the dog smell the item, slip it into its backpack, and it will run off through the airport all the way through to baggage, until it finds the person who owns the item. What a wonderful way to have your lost items returned to you, by this cute little puppy.

It’s attractive because it is an entertaining story and creates value for the consumer. For an industry that doesn’t have the best reputation for caring for customers, that’s a pretty awesome way of changing a narrative, and of course it’s very sharable content.

So, that leads to, of course, the question of how do you build such a great narrative? Well, it’s very simple. It comes down to deeply understanding both your customers and how they use or struggle with your category, regardless of the product or service. This can be very complex, but with the right guidance, anyone can navigate it. The real skill is identifying the new narratives that resolve tensions or align with customers’ values, such that customers feel connected. They feel joy. It invokes pride, or it inspires them to explore or have social impact. Once you do all that, you will have come a long way toward defining a purpose for your business that will attract individuals to your brand message, rather than jamming yet another barely relevant message in front of them.


Hard Hat Stats: Some Common and Uncommon Sense (Part 2)

Kevin Gray shares some tips he's picked up over more than 30 years' experience as a marketing researcher and statistician.

By Kevin Gray

I’m not a scholar – just a lunch pail guy – but I do have more than 30 years’ experience as a marketing researcher and statistician.  I’d like to share some tips I’ve learned on my journey.

Don’t confuse the possible with the plausible, and the plausible with fact.

Learn how to integrate and use many kinds of data, not just what we collect ourselves. This can go a long way toward helping us help our clients. These data may include government and industry statistics as well as clients’ own internal data.

Try to prove yourself wrong!  Don’t just go with your gut when making decisions, and check your thinking to see if it is internally consistent and supported by empirical evidence (not cherry-picked data). Longitudinal and time-series data are better suited than cross-sectional data for making causal inferences (e.g., what has worked and what hasn’t).  Use experimentation when you can. Causal analysis is a hot topic among researchers and academics in many fields these days and I’ve listed a number of references I’ve found helpful here.

Create standard questions and questionnaire templates for studies that will be repeated frequently in the future.  Constantly re-inventing the wheel is inefficient and leads to inconsistent quality.

Don’t ask consumers questions regarding their purchase behavior that are so detailed that no human could be expected to answer them accurately.  Don’t ask consumers to rate long lists of values, attitudes and lifestyle statements (“psychographics”) that are unrelated to past, current or future consumer behavior.  Check the literature for attitudinal scales that have been shown to work.

Be aware that response patterns in survey research differ by national culture.  A 50% top 2 box score might be pretty good in some countries but pretty lousy in others.  Employee and customer satisfaction research and NPS can easily fall victim to these cultural differences.

Don’t confuse statistical significance with importance.  Different beasts. On the other hand, p-values, etc., should not simply be dismissed as meaningless…be mindful of the human tendency to think dichotomously!

Understand that a large number of measurements made on a small sample is not the same as having a large sample. While many measurements on a respondent may (or may not) improve measurement precision for that respondent, it does not increase the number of respondents in our sample.
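The variance-components formula makes this point concrete. In the hypothetical sketch below (the variance figures are invented purely for illustration), the standard error of the overall mean with n respondents and k measurements each is sqrt(var_between/n + var_within/(n·k)): piling on measurements per respondent (k) shrinks only the second term, while the between-respondent term is divided by n alone.

```python
import math

def se_of_mean(n_respondents, k_measures, var_between=1.0, var_within=1.0):
    """Standard error of the grand mean with n respondents and k repeated
    measurements each. Between-respondent variance is divided only by n,
    so extra measurements per person cannot substitute for more people."""
    return math.sqrt(var_between / n_respondents
                     + var_within / (n_respondents * k_measures))

# 10 respondents measured 1,000 times each...
small_sample = se_of_mean(10, 1000)
# ...versus 1,000 respondents measured once each.
large_sample = se_of_mean(1000, 1)

# small_sample floors near sqrt(var_between / 10), no matter how large k gets,
# while large_sample is many times smaller.
print(small_sample, large_sample)
```

With these illustrative variances, the 10-respondent design has a standard error several times larger than the 1,000-respondent design, despite collecting ten times as many data points overall.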

Don’t confuse the sampling methodology with the sample. For instance, a polling company may select a sample via a probability sampling method but unless non-response is trivial, the respondents will not be a true probability sample.

Appreciate how important chance events are in our work (and daily lives). I can wholeheartedly recommend David Hand’s book The Improbability Principle to anyone, marketing researcher or not.

Don’t be overawed by the opinions of “thought leaders” or other self-proclaimed authority figures in the business world. In other contexts, Albert Einstein counseled against this. And he was a real Einstein.

Don’t become overly specialized.  Any method works well for some kinds of projects but not for others. A true marketing researcher doesn’t just sell canned methodologies and knows how to tailor research to fit their client’s real needs…not just their own infrastructure and sales targets. This requires a broad skill set, though, and we need to identify our weak points and work on them. “Be a jack of all trades and a master of at least two,” to quote one of my mentors.

Look for new ideas outside of MR, not just within. The methods we use today nearly all originated in other disciplines and have diffused into our industry, sometimes very slowly. Outside reading never hurts and there is now a lot online and freely available. There are also many professional associations you might consider joining.

Remember that there are better and worse ways to do the same thing. MR is not consistently best-in-class (let’s be honest!) and this is another reason to look to other disciplines for ideas and guidance and not just rely on our own gurus.

Don’t confuse potential with performance. A new methodology may show great promise but we shouldn’t spend our precious budgets on promise alone.  A lot of claims are made these days that turn out to be complete nonsense.  Here are a few tips on how to ferret them out.

Hope you find this helpful!


Market Researchers Need to be More Commitment Phobic

Technology is already outpacing us and our industry, so we need to be smarter in how we collect, interpret and communicate data and insights.


By Heather Williams

Market researchers need to be more commitment phobic.

Yeah, I said it. Now I bet you can’t wait to find out what on earth I could possibly mean by such a statement.

As an industry of very bright and inquisitive people, researchers are often too committed to doing what they know. There only seems to be a handful of people in the industry who are truly shaking things up, apart from some super-dynamic tech start-ups who are banging their heads against walls to get researchers to work with them to create a more harmonious balance between tech and research method.

I don’t need to waste your time with another blog post about how the world is changing all the time, that change is inevitable, that smartphones are our best friends now, etc. You know all that. You LIVE it.

However, I don’t have a problem reiterating the fact that technology is so incredibly central to our lives and it’s already outpacing us and our industry. We need to re-focus. This does not mean we are going to lose our jobs to robots, it means we need to be smarter in how we collect, interpret and communicate data and insights.

How can we expect to accomplish intuitive research and technological harmony if we aren’t happy and willing to adapt to new ways of working, new technologies and methods?

There is a famous saying, often attributed to Albert Einstein: “The definition of insanity is doing the same thing over and over again, but expecting different results”.

Ultimately, research is about the exploration of people and culture so as to produce a new way of thinking, a new perspective or angle that can help brands be more relevant to their customers. Clients depend on us for exactly this and, as our industry matures and many of our techniques stay the same, I am sceptical of what we are bringing to the table that is genuinely ‘new’. I personally believe we can do a lot more.

With new technologies comes unfamiliar data sets. This can be intimidating and enough of a reason for many people not to engage with a new method or technology. However, I can’t think of an instance where we’ve done something experimental at Firefish where we didn’t uncover something new. In our experience, that gem of an insight you couldn’t have found in a traditional way is always worth the venture into the unknown.

Why do you think Tinder, the dating app, became so popular among those looking for love? Vast amounts of choice could lead to meeting ‘the one’, but you would have to put in the work into going through the ‘data’ (i.e. possible partner profiles) in order for that to become a reality.

The GRIT Report from June of this year states that one of the four most sought-after training topics among researchers is ‘Introduction to emerging technologies & methods’ (32%1). It’s wonderful that the appetite is there, but I can’t help but notice the ‘Introduction’ part. A third of our industry has barely scratched the surface and lacks a solid foundation in how to deal with emerging technologies and, dare I say, possibly an even bigger proportion aren’t actively building their skills & knowledge in this area.

I can empathise, to an extent, as I know it’s not as easy as it looks. I understand that we need to be confident in the work we deliver to clients and this can hold us back when we have 24 hours to deliver a dazzling proposal for a 5-market global study. I also understand that, when you’re an agency who doesn’t have your own platform, you can end up with a patchwork of costly technologies that creates a lot of data streams. This gets expensive and complicated, but this is not a good enough reason to give up. This is the time for us to be brave and to experiment, to break free of our commitment to doing only what we know.

However, what if we weren’t so committed to doing only what we know, but worked harder to push technology along so that it did everything we needed? The brains behind our tech counterparts in the research business crave to be the best but they are also challenged by roadmaps and priorities…but are their priorities the right priorities for us? This asynchronous approach isn’t working. We need to work together.

This is a hot topic of conversation that I frequently have with many of the leading tech companies in our industry. Robin Hilton, Co-Founder and Director of UK-based ResearchBods, has this to say on the matter:

“The market research industry has been one of the latter to embrace digital and technology – and this has seen MR lose its relevance in businesses as marketing, digital and new tech sectors have developed their own techniques and methods to obtain consumer insights.  Many researchers are now starting to use and look at how tech can improve their offering and provide solutions, but agencies are in a difficult place when they receive briefs with very short timelines to respond, so understanding how technology can best help to provide the best solution in a short time and with limited budgets can be difficult. 

The ideal solution is for tech providers and researchers to work together much more closely, using the knowledge of each discipline in a genuine partnership, rather than a client/supplier relationship.  This approach needs genuine trust between agencies and must be worked at – but it also provides researchers with the knowledge and understanding of how tech can help their clients get closer to consumers, and in doing so, provides genuine added value for working with that agency.” 

Ask yourself: why are you so committed to something? Just because it’s easy? I can’t imagine anyone in an analysis session giving up on finding out that ‘ray of insight’ to settle for the easiest and most convenient answer, so why aren’t we challenging the commitment quo to achieve a higher level of technology and research harmony?

To close, I’d like to share 5 tips on how we approach working with technology at Firefish:

  1. Invest time to build knowledge and confidence: We regularly invest time in learning about different technologies available. We know tech isn’t going anywhere and we also know how effective and inventive it can be when part of our projects.
  2. Ask the people what they want: We conduct ‘research on research’ at the end of every digital qualitative project, to learn about the end-user experience, so that we know how to be effective in our research design. For example, we have learned that nearly every study should be mobile-first with the option to seamlessly switch over to desktop/laptop, simply by asking people what worked/didn’t work for them at the end of their project.
  3. Have a clear point of view: We continue to develop our best practice which gives us a clear point of view when speaking to clients and colleagues. This is also a great way to socialise thinking and consistency within your teams.
  4. Be part of the journey: We don’t accept everything at face value because, just like people, technology changes and evolves. We try to be part of this process as much as possible. This includes providing feedback to suppliers and learning about what is on their roadmap so that we can have confidence in the technology we employ for each project.
  5. Be bold: We know that we can be creative with technology. A tool sold for quantitative data collection & analysis might also be effective for qualitative research when executed in a smart and clever way. For example, we used online eye-tracking with a robust, quantitative sample size, but our analysis and the way we pushed our supplier to use the tech was underpinned by a strong qualitative approach and analysis.

Commitment is for marriage, children and filing your taxes on time. If the objective is to make data collection, interpretation and communication easier, simpler and faster, then let’s break free and explore, together, how we can become a little more flexible in how we work as a wider industry. Share your experiences and your commitment to this cause in the comments below.

See you at TMRE.

1GRIT report: https://www.greenbook.org/grit


Can Political Polls Really Be Trusted?

When political polls fail to predict the exact outcome of an election, maybe they’re not wrong…maybe we are.



By Ron Sellers, Grey Matter Research

Over the past few election cycles, we’ve witnessed great wailing and gnashing of teeth regarding the supposed inability of the polls to predict the winner and winning margin correctly.  There have been hypotheses that phone research is no longer valid because of low response rates, claims that everyone simply lies in surveys (and therefore that surveys have never been valid), and a variety of other statements to the effect that if political polling is no longer reliable, then maybe no polling or research is reliable.

Perhaps these concerns say more about a failure to understand how research works and how it can be applied appropriately than they do about the actual ability of research to continue to play a critical role in today’s world.

Certainly there are some political polls that are flat-out conducted poorly, just as there is some business research that is misleading garbage.  But many of the supposed “problems” with political polls are that pollsters, pundits, and/or media are trying to make research do something it simply cannot do.  Let’s take a quick look at some of the issues, and consider how they also apply to research your own organization may conduct.

Failing to Research the Right People

Recently I saw a front page newspaper article about a national poll that put Hillary Clinton two points ahead of Donald Trump.  Wait – did we start having a nationwide popular election and no one told me?  If that were the case, my daughter would be learning about Presidents Al Gore and Samuel Tilden in school.  Four separate times, a candidate has won the popular nationwide vote but lost in the Electoral College, which means these national polls are simply measuring voters in the wrong way.

In business, the same issue applies.  A customer satisfaction study is great, but not if it only includes long-term customers.  What about all those former customers who no longer do business with you, or those occasional customers who haven’t been with you that long?  Incorporating all of them is a much truer measure of your customer service.  Careful thought about who you’re researching is just as important as the techniques you’ll use or the questions you’ll ask in the research.

Ignoring Basic Statistics

How many headlines have you read about one candidate “leading” another by two points?  Buried in the article is the fact that the survey’s margin of error is ±3.8 points.  Well, guess what – that “lead” isn’t a lead at all, no matter what the headlines proclaim.  And then we’re surprised when the “trailing” candidate ends up winning by two points?

This problem arises all the time in business:  trying to position one product name as the “clear winner” because it outpolled the alternative 38% to 35%, or building complex statistical models in which a new product launch will succeed at 18% consumer acceptance (the number from the research), but not at 16% acceptance (well within the study’s margin of error).  There are plenty of reports that provide extensive data on subsets of 30 people, or that show all findings with decimal points (as if somehow 53.6% brand awareness is more accurate or relevant than 54%).

All research is subject to some margin of potential sampling error.  In addition, there’s a difference between statistical significance and practical significance.  If two potential logo designs are at 62% and 59% favorability, that difference may be statistically significant – but is it significant enough on a practical level that the ratings alone dictate which one to choose?
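The arithmetic behind a reported margin of error is easy to sanity-check. A minimal Python sketch follows; the 95% z-value of 1.96 and the conservative p = 0.5 worst case are standard textbook assumptions, not figures from any specific poll discussed here, and the difference-of-shares calculation is a rough independent-shares approximation (accounting for the negative correlation between two candidates’ shares in the same poll would make it slightly larger still).

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated
    from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of about 665 respondents has roughly the +/-3.8-point margin
# cited in the example above (worst case, p = 0.5).
moe = margin_of_error(0.5, 665)
print(f"MOE on a single share: +/-{moe * 100:.1f} points")

# A 45% vs. 43% "lead" from the same-sized poll: the margin on the
# DIFFERENCE between two shares is wider than the margin on either alone,
# so a 2-point lead is statistical noise.
diff_moe = 1.96 * math.sqrt((0.45 * 0.55 + 0.43 * 0.57) / 665)
print(f"Lead of 2.0 points, MOE on the difference: +/-{diff_moe * 100:.1f} points")
```

The same check applies to the business examples: a 38% vs. 35% product-name “winner” or an 18% vs. 16% acceptance threshold should be compared against the margin on the difference, not just the margin on a single share.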

Believing People Can Forecast Their Future Behavior

Consumers are pretty good at telling us what they think and believe.  They’re not so good at predicting their future behavior.  They may fully intend to buy Crest next time at the grocery store – then they get a great coupon for Colgate, or they see an intriguing package or promotion for a new brand, or they find the store just raised the price on Crest.  Suddenly, their behavior no longer matches their prediction.

Similarly, some voters fully intend to vote but don’t get around to it, forget to mail the absentee ballot on time, or have a sick child on election day.  Even if they do vote, they may be leaning towards one candidate for months and then decide to switch to the other based on the last attack ad they saw, the latest candidate mis-step, or the most recent leak of damaging information.

Pre-election polls are consumer predictions of future behavior, which must be viewed with an understanding of the limitations of this type of measurement.  Particularly when there are so many independents and swing voters (and in the 2016 presidential election, particularly when so many voters have a negative view of both major candidates), people can change their minds multiple times between the last poll they answered and pulling that lever on the voting machine.

In business, the same understanding of limitations is critical.  When you test advertising, asking people “Will this ad make you more likely to buy the product?” is not a valid measurement; people just cannot answer this question accurately.  Designing questions that try to box people into a yes/no “Are you going to take this future action?” often results in unreliable data.  That’s one reason Grey Matter usually measures interest or willingness to consider rather than likelihood to buy; respondents can more accurately answer the former than predict the latter.

Confusing Correlation and Causality

Political polls often make distinctions and predictions by various voting blocs:  women, Latinos, Millennials, evangelicals, etc.  The problem comes when it is assumed that being a member of one of these blocs is actually the factor that determines a person’s voting decisions.

There may be a correlation in the data regarding which groups are supporting which candidates, but can it be determined that being part of a specific group is actually influencing who those people support?  And what happens when different blocs overlap (which they typically do)?  Let’s say women and Millennials are voting more liberal, while evangelicals and Caucasians are voting more conservative.  What happens with the predictions about the vote of a White, evangelical, 26-year-old woman?

Correlation and causality are confused in business research all the time.  Research is wonderful at discovering correlations; for example, we can clearly see a connection between lower incomes and lower levels of education.  It’s the causality that’s a challenge.  Sociologists have been arguing for decades over whether lower-income people lack the resources to achieve higher levels of education or whether less-educated people lack the resources to earn higher incomes.

If your data shows that people who pay more for your product are also more loyal to your brand, is it because some people are willing to pay more because they are more loyal, or is it that people who paid more feel stronger loyalty because they want to justify to themselves how much they paid?  Or is the driving factor something else entirely?

With this data, it could be easy to recommend raising prices to drive stronger brand loyalty, but that could also be incredibly wrong, because you would have inferred causality where it’s entirely possible there is none.

Oversimplifying Complex Issues

Polls often show one candidate “leading” another by something like 45% to 39%.  What happened to the other 16%?  Are they firmly in the camp of a third-party option, or still undecided?

For that matter, how solid are the supposedly decided voters?  There’s a big difference between “definitely voting for Carl Wilson” and “probably voting for Carl Wilson.”  The former is likely pretty well locked in; the latter may well change.

Oversimplifying also applies to defining subgroups.  Political pollsters often prefer quick questions, such as “Are you Catholic?” or “What is your religious preference?”  Problem is, there’s a huge difference between a committed Catholic who attends Mass regularly and someone who was baptized Catholic but hasn’t been to Mass in twenty years.

Gathering and analyzing data is not simple, and treating it as simple leads to misleading data.  Unfortunately, too often qualitative research is considered “art” while quantitative research is considered “science.”  There is plenty of science to good qualitative research, and plenty of art to good quantitative.  Failing to account for the undecided in a political poll is a good example of oversimplifying research, but there are also many ways to oversimplify while exploring critical business decisions.

Dealing with Reporting Spin

Let’s say Bill Ward led Teresa Davis 48% to 35% last month; this month Ward’s lead is 45% to 40%.  Consider four possible headlines about the polls:

  • Ward Continues to Hold Big Lead over Davis
  • Ward’s Support Declines
  • Davis Makes Big Gains, Closes Gap
  • With 15% Undecided, Race Is Too Close to Call

Each of these headlines would be technically correct, yet each puts an entirely different spin on the findings.  And given the fact that what’s happening in the polls can influence voters, the interpretation of the data can actually impact the election.

Business research is no different.  If 60% are highly satisfied and 40% aren’t, what’s the story – the six out of ten people who are highly satisfied with your product, or the four out of ten who are not?  If your brand awareness has gone from 3% to 6%, is the story that brand awareness has only increased three points, or that it has doubled?  How data is interpreted and reported makes a big difference in how it is ultimately used by your organization.

Trying to Make Research Do What It Can’t

Research is a crucially important tool, but as with any tool, it is only valuable if utilized skillfully.  The best electric drill in the world won’t be much help if you’re trying to use it to paint a wall.

People naturally crave certainty, but research is not a discipline that provides absolute certainty.  Instead, it provides guidance.  That’s not to downplay the value of research in any way, given how vital the guidance it provides can be.  It’s just that too many people look to research to make decisions.  Research should never make decisions – it should inform and guide them.  It should make you better at making business decisions.

The same is true of political polls.  They can inform us how people are thinking right now, and why they’re thinking that way.  They can show us trends over time for each candidate’s support.  They can give us guidance for what may happen in the actual election.  But they are not iron-clad predictions of which candidate will win and by exactly how much.  Viewing them as such and then criticizing them when they’re not unerringly right is more an indictment of our desire for a tool that will forecast the future with full certainty than it is a statement on the validity or value of research.