
Social Media Analytics: Promises, Challenges and the Future

Is social media an asteroid streaking towards traditional marketing research or is it a valuable complement rather than a complete substitute?



By Kevin Gray and Koen Pauwels

Can Social Media Analytics replace traditional MR?

Compared to traditional marketing research methods, social media analytics is fast and inexpensive.  It is also superior to survey research because it’s less prone to social desirability bias and mistaken recall – consumers are speaking in their own voices, frankly and spontaneously, about brands and the things that matter most to them.

Some Questions

These, at least, are some of the hopes many have had for social media for quite a few years now.  Promise and potential are not reality, however.  What are the realities?1  Below are some important questions and concerns many marketing researchers have regarding social media.

  • Many surveys are now turned around very inexpensively in days, or even hours, so the claim that survey research is slow and expensive seems outdated. Also, in the case of qualitative, hasn’t the popularity of online qual eroded much of the advantage social media once held in terms of cost and speed?
  • How much social media demographic data are actually imputed or fictitious, i.e., made up by users to protect their personal privacy? How many posters are really fakes we are unable to detect?
  • Are social media users really more representative of general consumers or of particular groups, such as young urban males, than online panelists are?
  • How many people only listen to social media instead of posting their opinions? Why do people post on social media? How do posters and lurkers differ?
  • Do we only observe the extremes – people who are highly satisfied and those who are highly dissatisfied? Aren’t extreme opinions over-represented in social media conversations?
  • Do posters’ comments reflect their true viewpoints? How many are just amusing themselves or jousting with other posters?
  • Related to this, to what extent and in what ways are posters influenced by comments of other posters?
  • How much do social media data reflect real experience and how much is just opinion? Though this may vary by product category and social media outlet, to what degree do posters’ opinions actually affect buying behavior of those reading their posts?
  • Is posters’ factual recall really more accurate than that of survey respondents? Why?
  • How much influence do “influencers” actually have on the buying process?
  • How much does the abovementioned vary by social media outlet, topic and country?
  • Most importantly, how well do social media metrics match market reality? How stable are these metrics over time?  Do they correlate well with other data, such as sales/share figures, cross-sectionally and over time?  In our experience, survey data usually match “hard” statistics reasonably closely, as shown in publications ranging from the Journal of Advertising Research2 and the Marketing Science Institute3 to Marketing Science4 and the Journal of Marketing Research5.


These questions pertain only to data.  What about analysis?  Social media analytics is a form of content analysis, and content analysis is not easy.6  How about computers – can’t Artificial Intelligence replace human analysts? Both computational linguistics and natural language processing promise to automate classifying the content and sentiment of human communication, but they need to be trained by humans. Superficial training leads to Garbage In – Garbage Out (GIGO), while thorough training can be at least as time-consuming and expensive as traditional marketing research.7  Context and disambiguation remain significant challenges, for example.  Indeed, there will always be gaps because computers cannot be programmed to feel emotions. They are not us. They do not laugh at our jokes.  They have never scored the game-winning goal nor had their hearts broken.  They have never been consumers.
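As a toy illustration of the GIGO point, consider a deliberately naive lexicon-based sentiment scorer. This is a hypothetical sketch, not any vendor's method; the word lists and example posts are invented. It classifies straightforward statements correctly but is defeated by exactly the context and disambiguation problems described above:

```python
import re

# A deliberately naive lexicon-based sentiment scorer. The word lists and
# the example posts below are hypothetical, for illustration only.
POSITIVE = {"love", "great", "good", "amazing"}
NEGATIVE = {"hate", "terrible", "bad", "awful"}

def naive_sentiment(text: str) -> str:
    # Tokenize on letters/apostrophes and count lexicon hits.
    tokens = re.findall(r"[a-z']+", text.lower())
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(naive_sentiment("I love this brand"))         # positive -- correct
print(naive_sentiment("not good at all"))           # positive -- wrong: negation ignored
print(naive_sentiment("oh great, it broke again"))  # positive -- wrong: sarcasm invisible
```

Handling negation, sarcasm and context is precisely where extensive human-supervised training, and ultimately human judgment, come in.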

The role of the human analyst has not vanished.


Let’s be clear that we do not deny that social media has given us a wealth of new data or that it has had a significant impact on marketing and marketing research.  Though skeptical of many claims, we nonetheless count ourselves among the faithful.8  The marketing world has changed dramatically in the past decade and there is no turning back the clock even if one wished to.  Online reviews, for instance, are shaking up the way marketing is done in many product and service categories.9   Though progress has perhaps not been as rapid as some may have predicted, marketing researchers are becoming increasingly adept at mining social media for useful insights.  To some degree it is replacing traditional research.

A Few More Questions

As researchers and consultants, however, we have an obligation to our clients and to our profession to be realistic about what social media analytics can actually deliver.  Here are a few more questions we feel remain largely unanswered.

  • Do social media analytics success stories point to the rule or are they mostly exceptions?
  • Is listening alone really sufficient? Are discussion moderators therefore unnecessary, and have they, by implication, been unnecessary all along?
  • Can brand and ad awareness, both spontaneous and prompted, be supplanted by social media metrics?
  • How stable over time are brand image metrics derived from social media? Are variations over time mostly a signal of meaningful changes in brand fortunes, or are they mostly noise?
  • Can we obtain the accurate and fine-grained breakdowns on individual posters needed for detailed cross tabulations and multivariate analysis? Clients frequently require very specific information and opinions from consumers, and quantitative analysts must be able to tie all these variables together for each person.  Can social media give us the detail for individual posters required to develop the rich profiles of consumers and in-depth analysis many clients have come to expect?  The most actionable research is usually research designed to address particular marketing issues and the essential data are rarely just “out there” waiting to be collected.
  • What kind of analytics can it replace? Consumer segmentation?  Key driver analysis with multivariate analysis or machine learners?  Can it replace choice modeling, data mining and predictive analytics?  Marketing mix modeling?  If so, where are the examples?
  • Aren’t many of the shortcomings of traditional marketing research actually reflections of poor skills and inexperience? How does the best of traditional marketing research compare to the best of social media analytics?

The Future

Social media is still quite new, and the media themselves and the analytic tools for exploiting them are still evolving.  Let’s be honest with ourselves – how many true social media experts can there really be?  What would be the risks of suddenly discarding methods that have served us well for so long in favor of an alternative that has not yet stood the test of time?  Why not concentrate instead on using social media qualitatively to assist in questionnaire development, or as one component in marketing mix modeling, or to put a human face on data mining and predictive analytics?  Why not focus on utilizing it in tandem with other qualitative methods?  Social media analytics has already proven itself in these roles.

Is social media an asteroid streaking towards traditional marketing research or is it a valuable complement rather than a complete substitute?  We lean towards the latter and feel social media, in the main, adds to but will never fully replace traditional marketing research.  We see it as an important new and increasingly indispensable source of insights, but not the catastrophe some have feared nor the research nirvana others have sought.




1 Social Media Intelligence (Moe and Schweidel) provides a good overview of social media that addresses some of the concerns we raise in this article.  Social media is fairly new and still evolving, however, and much research remains to be done. Furthermore, what might apply in one country may not apply in another due to cultural differences and because social media is not uniform across the globe.

2 Lautman, M. & K. Pauwels. “What is important? Identifying metrics that matter.” Journal of Advertising Research 49.3 (2009): 339-359.

3 Pauwels, K. and B. van Ewijk. “Do Online Behavior Tracking or Attitude Survey Metrics Drive Brand Sales? An Integrative Model of Attitudes and Actions on the Consumer Boulevard.” Marketing Science Institute (2014): 13-118.

4 Hanssens, D. et al. “Consumer attitude metrics for guiding marketing mix decisions.” Marketing Science 33.4 (2014): 534-550.

5 Srinivasan, S. et al. “Mind-set metrics in market response models: An integrative approach.” Journal of Marketing Research 47.4 (2010): 672-684.

6 Content Analysis: An Introduction to Its Methodology (Krippendorff) is a comprehensive (and dense) textbook on this subject.

7 See, for example, Artificial Intelligence (Russell and Norvig), Foundations of Computational Linguistics (Hausser), The Handbook of Computational Linguistics (Clark et al.) and Introduction to Information Retrieval (Manning et al.).

8 Vocal advocates of social media (and some other new technologies) typically react to questions such as ours by ignoring them, or by intimating that those posing them are behind the times and set in their ways, or by acknowledging that they are legitimate questions but that, because of recent advances, they are no longer pertinent.  The last response is most convincing when supported by studies that have been replicated by independent researchers with no commercial stakes in the methodology.

9 Absolute Value (Simonson and Rosen) gives many examples of how online ratings are disrupting marketing.


Kevin Gray is a marketing scientist who has been in marketing research for more than 25 years.  His background covers dozens of product and service categories and over 50 countries.  Kevin began his marketing research career on the client side in New York, and he has broad experience with the A-Z of marketing research.  This includes advanced analytics and new product development for Nielsen Customized (CR) and Research International.  He founded his consultancy, Cannon Gray, in 2008 and works with clients, marketing research agencies, consultants and ad agencies located in many regions of the world.  His chief focus is on providing marketing science and analytic support to enhance decision making.  He’s a strong believer in taking advantage of new research tools and data to their fullest…but without letting the tools and data become the ends rather than the means.

Koen Pauwels is Professor of Marketing at Ozyegin University, Istanbul and Honorary Professor at the University of Groningen.  He received his Ph.D. from UCLA, where he was chosen a “Top 100 Inspirational Alumnus” out of 37,000 UCLA graduates. He then joined the Tuck School of Business at Dartmouth, where he became Associate Professor in four years and received tenure in six. Prof. Pauwels is Associate Editor at the International Journal of Research in Marketing and has received the most prestigious awards for more than 30 top publications. He has consulted for large and small companies across three continents, including Amazon, Credit Europe, Inofec, Heinz, Kayak, Knewton, Kraft, Marks & Spencer, Nissan, Sony, Tetrapak and Unilever.


The authors would like to thank Raoul Kübler, Professor of Business Administration and Marketing at Ozyegin University for his helpful comments.


Envisioning 2025: Five Technology Changes That Will Reshape Customer Behavior And What They Mean For Customer Intelligence

The technology-driven changes over the next ten years will be immense. Here are five changes that will shape customer behavior and what they mean for companies.
  1. Completing the smartphone revolution

In the next decade, more than 90 percent of the population in developed countries will own a smartphone, and over 80 percent of the world’s economically active adults will have a smartphone connected to the Internet with affordable data charges. An even more connected world has far-reaching implications in commerce, law enforcement, politics, social behavior and much more. The smartphone is going to be the default device for social networks, online commerce, email, messaging and for most of the non-work uses of computers.

For companies, the pervasiveness of smartphones provides an opportunity to get a more accurate understanding of customer behavior. Smartphones allow companies to engage more frequently, and to do so while customers are still in the middle of the behavior in question.

  2. The end of local storage, the ubiquity of the cloud

Large parts of our data are already in the cloud: our social media, our music collections, and bank records are just a few examples. For businesses this extends to software as a service options such as Salesforce and market research platforms.

The ubiquity of cloud technologies means the devices people carry will not need much local storage. Phones, tablets, laptops, video players and wearables will function by connecting to the cloud.

Also, the companies controlling the centralized data will increasingly be able to track and correlate what people do, including where they are, what they view, who they speak to, what they buy, and how they spend their time. The scale of data will make today’s big data look very small.

Cloud technologies already make it easier for companies to understand customer behavior within a short amount of time by making data more easily accessible. As more things connect to the cloud, the time it takes to do customer research will continue to shrink, allowing companies to more quickly act on insight.

  3. Better wearables

Google Glass, Apple Watch, devices such as Fitbit, and apps like MyFitnessPal are just a pale reflection of what’s coming in the wearable marketplace. Companies will push the boundaries of what wearables can do. With greater functionality, wearables will have a wide range of purposes: paying for things by just using them or by just taking them out of the store, tracking employees, and monitoring food and exercise (which will be used by insurance companies to vary premiums and assess claims).

More wearables means more data for companies to sift through. But as customers share their data with companies, privacy and security will become top of mind for more people.

  4. Internet of Things

The Internet of Things is currently one of the most over-hyped technologies, though its long-term promise shouldn’t be underestimated. The IoT refers to most consumer devices being connected to the Internet, including our cars, fridges, heating, electricity, etc. The rise of the smart home, smart office, and smart car will optimize the way customers consume utilities, change the way they purchase things, and massively extend the amount of customer data created.

  5. Personalized predictive analytics

The future of marketing and service provision is going to be mass customization, delivering the right message, package, and service to each person. Utilizing all the data sources mentioned above, services will be able to better target advertising and supply services. Companies will be able to predict customer needs and determine what is suitable for each individual.

Ad re-targeting is already at the point where ads follow you around the Internet. For example, a customer might visit a sports website, browse some products, and then move on to other sites. Ad re-targeting allows companies to serve ads to the customer from the sports site. In the future re-targeting will be expanded to include shops you browse in, billboards you stop and read, and even products you look at.

Personalized predictive analytics is the natural extension of the sort of big data analytics that seek to work out if you are pregnant or in the market for a wedding photographer, but at a much more comprehensive level. With all the data we will be supplying to the cloud, including biometrics, the options for estimating what we will be doing in the future, what we need in the future and what we can be persuaded to buy, will increase massively.


These changes mean more automation of simple tasks in customer research and a massive increase in the amount of customer data available. So where does that leave professionals working in customer intelligence?

The secret to being valued is to contribute something that others can’t. Competing with algorithms and automated processes is not going to generate value or a satisfactory experience. Finding patterns, optimizing products, and allocating advertising dollars will all become tasks for software, platforms and systems.

The four key areas where insight professionals can and should deliver value are:

  1. Focusing on the issues that big data and automation can’t deliver. Questions that require interpretation will require the expertise of customer intelligence professionals. Some of these questions include: Why? What if? How should we change it? How does it make you feel? What do you want instead? What do you want next? These question-answering roles will be combined with customer engagement to help co-create the future.
  2. Creating a customer intelligence approach that blends big data and automation with human-to-human connections to provide complete answers. Big data and automation tend to provide more of the same, not the big innovations and not the human angle.
  3. Becoming advocates for the customer. Ensuring that the views, wishes, thoughts, preferences and aspirations of customers are embedded in every decision. This role focuses on customers as people, as opposed to the data points that big data and automation assume them to be.
  4. Linking business needs to research questions, looking for causality rather than correlation, and identifying which patterns are insight and which are simply spurious or bogus. Insight professionals understand people in a way that data scientists don’t. They understand research processes in the ways that most marketers don’t, and they bring the objectivity that senior management require.


As technology continues to shape new customer behavior, the role of customer intelligence will evolve. While machines and software will take over manual and tactical tasks, the customer research professionals of tomorrow will elevate their role by providing strategic insight to companies. Insight professionals won’t be obsolete in the next decade; indeed, the value they provide will be more critical than ever as more companies recognize the need to forge stronger relationships with customers.


Previously posted on the Vision Critical blog


How To Get A Graduate Job In Market Research

The good news for recent graduates is a career as a market researcher means using many of the skills already learned at university.
Photograph: cryptic crossword on a newspaper. Darren Marshall/Alamy



By Samantha Bond

If you thought a job in market research meant spending all day in a call center working through a list of numbers in the phone directory, you might be surprised to learn that there is a lot more to it than cold calling.

So what do market researchers actually do? They collect data about specific markets for clients; data about what people like, dislike, want and don’t want – even how people behave. They often have an area of specialism, so they might work in fashion or banking, advertising or public policy, and they work in project teams, liaising with suppliers and clients.

If you are a recent graduate, the good news is the role uses skills already learned during your course: analytical thinking, persuasive writing and an ability to distill information.

So, how can graduates go about starting a career in market research?

Choose a research path

First, it is important to pick a research path. Identify whether your skills lie in numeracy (quantitative research) or whether talking to people face-to-face and learning about cultures (qualitative research) is more appealing to you.

Quantitative research is suited to those with skills in statistical analysis. BSc graduates with degrees in mathematics, psychology and economics are well matched to this path, having gained experience of survey design, data processing and analytics software.

BA graduates whose degrees focus on human studies, such as anthropology, geography and sociology, are likely to have the skills needed for qualitative research. This path involves talking to people, observing behavior and understanding cultural context.

If you’re interested in both, a number of agencies offer graduate schemes where you can move between departments.

You should also consider which research sectors are of interest to you. From automotive to food and drink to retail, there are a variety of industries on the lookout for capable graduates.

Build an online profile

Before looking for a job in market research, it’s important to create a professional online profile. Build a LinkedIn profile and hide any potentially controversial Facebook photos. Educate yourself and keep up to date with the latest industry news by following blogs such as Research Live, Greenbook, ESOMAR and Marketing Week.

You can go a step further by blogging and engaging in conversation on social media about relevant market research news. This type of activity acts as a point of differentiation when applying for jobs.

Don’t stick to traditional job application routes

A great place to start looking for roles is a job board. But there are other routes worth trying. Find out which research agencies are recruiting and make speculative inquiries – engaging with these companies via Twitter or emails will show willingness, genuine interest and ambition, all attributes agencies want from their staff.

Consider initially inquiring about work experience – placements are the perfect way to get your foot in the door and gain further insight into the industry.

Highlight research-led degree work in your CV

Writing ability is a core skill for all researchers and a poorly written CV will cast doubt on your suitability. When writing your CV, be sure to highlight relevant research experience, listing projects conducted during your degree course and relevant work placements. Under each heading, bullet point responsibilities and research skills acquired, plus key research insights and recommendations. Draw attention to your industry and sector knowledge by providing links to your social media profiles and any blogposts you have published.

Ultimately, employers want to see evidence that you understand what the industry is about (market research rather than pure marketing) and are specifically aiming for a role in research.

Finally, be bold and creative, experimenting with different CV designs. If submitting a traditional CV format, be visual, use iconography and settle on a clear, well-designed layout. Visual design and innovative thinking are increasingly an important part of research, so consider adding something extra, for example by creating an “about me” website, video or infographic, visually communicating your key strengths and industry experience.

Be enthusiastic at interview

Once at an interview, remember researchers will hire the person, not the CV. This is your chance to bring your personality to life.

You need to echo the company ethos, conveying your passion for both the industry and the company you’re applying to. So, as well as arming yourself with industry and sector knowledge, carry out thorough research on the agency. Many agencies will have a blog, as well as articles on industry websites. Aim to understand the company’s perspective on topical subject matters and go to the interview prepared to discuss your own evidence-based perspective. This type of approach demonstrates initiative, passion and proactivity in a way that speaks far louder than words.


Getting Real About Online Data Quality Best Practices

When done right, data quality is an end-to-end monitoring and vigilance process, with initiatives and metrics all along the 5 Rs of sample: Recruitment, Registration, Respondent Management, Research, and Rewards.




Melanie Courtright

Over the last few months, I’ve been reading some of the online data quality posts that have appeared here.  At times I’ve been genuinely shocked by the naïveté; at other times I’ve found myself screaming at my computer, railing against the rhetoric and misinformation. At the end of the day, every conversation about data quality is a good conversation.  Too often, though, rather than addressing a noble concern, these inadequate conversations creep in when people are trying to deflect attention from a product deficiency or to compete with each other.  They use fear-based language to create pain points (real or phantom) in an effort to scare people into reactive choices.

The truth is, solving quality issues is, and has always been, a major part of our industry.  All modes of data collection have their own biases. In-person surveys have interviewer bias, mail surveys have non-response bias, phone surveys are biased due to issues surrounding interviewer quality, online surveys are biased due to internet penetration, and mobile surveys are biased due to smartphone usage rates. And you can mix and match their respective biases in a multitude of ways.

Sustainable data quality isn’t about cool new techniques (although some of them are increasingly useful) or the latest technology (although that too can help). When done right, data quality is an end-to-end monitoring and vigilance process, with initiatives and metrics all along the 5 Rs of sample: Recruitment, Registration, Respondent Management, Research, and Rewards. It requires checks and measurements at every point in the research lifecycle. If anyone is talking about quality, and they’re only talking about one area of quality, they are doing you (and the industry) a real disservice.  Great companies understand that quality is a huge investment. You need to partner with companies that aren’t trying to sell you the “flavor of the day” quality story.


Recruitment

  1. Recruitment Source Long-Range Planning: A formal recruiting strategy that ensures consistency over time. Look for companies with the structure and expertise to ensure reliable, scalable recruitment that results in sustainable feasibility.
  2. Traffic metrics: Volume of traffic coming from every source, by demos, ensuring predictable volumes. Watch for shifts in data quality, minority representation, or technology ownership, to name a few.
  3. Partner Comparisons: Brands, web sites and memberships should and will have unique characteristics. Look at the unique attributes of each traffic source, and figure out what that means to your sample frame and the resulting data.
  4. Blending Strategy: Combining all the data sources into a single panel. Using all of the information above, decide: what is your strategy for blending data from these multiple sources, and how do you ensure consistency over time?
  5. Diversity & Breadth: Offset bias and increase representativeness. It’s crucial to have a broad set of sources that draws people from all walks of life, so you must think beyond demographics to psychographics. You won’t find everyone you need on a single site or a few sites. They’re in remote corners of the web, and you have to reach them where they prefer to live.


Registration

  1. Fraud Prevention: Tools that require human eyes and fingers to answer questions, along with interpretation and logic, to participate in research communities.
  2. Digital Fingerprinting /Geo IP/Proxy detection: Tools that look at computer identities, and the network path they came from, that reach beyond deletable cookies and survey tags.
  3. Email/Username/Password Scans: Accounts with the same or similar email addresses or passwords are a red flag for fraudulent accounts.
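As a rough sketch of the email-similarity scan just described, Python's standard difflib can surface near-duplicate registrations for manual review. The 0.85 threshold and the sample addresses are hypothetical, not a recommended standard:

```python
from difflib import SequenceMatcher

# Flag pairs of registration emails that are suspiciously similar.
# The 0.85 threshold and the sample addresses below are hypothetical.
def similar(a: str, b: str, threshold: float = 0.85) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def flag_similar_accounts(emails):
    """Return address pairs similar enough to warrant manual review."""
    return [(a, b)
            for i, a in enumerate(emails)
            for b in emails[i + 1:]
            if similar(a, b)]

accounts = ["panelist01@mail.example",
            "panelist02@mail.example",  # one character away from the first
            "jane.doe@mail.example"]
print(flag_similar_accounts(accounts))  # flags only the first two
```

In practice a scan like this would run across the full panel database and feed a review queue rather than auto-reject accounts.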

Respondent Management

  1. Profile Traps and Consistency Checks: Do people overstate illnesses or list too many ethnicities in an attempt to qualify, or provide data that are inconsistent with previous questions or visits? Are they paying attention and being truthful?
  2. Length of Interview Scans: Watch the speed in your own surveys, and have clients send speed information back to you so that repeat offenders can be flagged.
  3. Client Survey Invalid Rules & Scans: For all clients who use data cleansing and traps, request as much information back as possible, and in real time where feasible. Any time you see a daily increase in invalids, investigate immediately.
  4. Automated Data Quality Practices: By now, every sample company should have trap questions built into the system using randomization and intelligence. It shouldn’t be manual and it shouldn’t be predictable, or it won’t work.
  5. Sampling Protocols/Rules: One of the most important — and overlooked — steps in the process is rules and standards related to the actual sampling. How are the invitations selected? Is there consistency between Project Managers? Between waves of surveys?
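A length-of-interview scan of the kind described in point 2 can be as simple as comparing each completion time against the median for the survey. This is a minimal sketch; the one-third-of-median cutoff is a hypothetical starting point, not an industry standard:

```python
from statistics import median

def flag_speeders(durations_sec, cutoff_ratio=1/3):
    """Return indices of completions fast enough to warrant review.

    The cutoff (a fraction of the median completion time) is a
    hypothetical starting point, not an industry standard.
    """
    typical = median(durations_sec)
    return [i for i, d in enumerate(durations_sec) if d < typical * cutoff_ratio]

times = [610, 580, 95, 640, 605, 120, 590]  # seconds per complete
print(flag_speeders(times))  # [2, 5] -- the two suspiciously fast completes
```

Flagged respondents would then be checked against other signals (trap questions, straight-lining) before any action is taken against the account.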

Research Management

  1. Design Partnership with the Client: We are in this together so let’s work as a team to reduce survey lengths, increase engagement, and make the process work better. An important part of data quality is keeping the members who provide meaningful data.
  2. Member Services Approach to Problems and Complaints: Track every interaction with members, and handle their complaints quickly. Look for problem themes and use them to improve the systems and the surveys. Watch for frequent complainers, and use that as a red flag.
  3. Replicable Survey Assignment Process: When using a router, be sure that the routing system doesn’t introduce bias. Routing should have a strong element of randomization which leads to higher replicability.
  4. Device Compatibility: Understand consumers so well that you can anticipate their device practices and design surveys that don’t create instrument bias.


Rewards

  1. Reward Relevance: Use a variety of rewards that require strong identity validation and that motivate people for all of the reasons that they would participate in surveys: to give back, to get back, and to get a pat on the back.
  2. Community Aspect and Sharing Survey Results: When possible, share survey results with members as part of their reward. Reinforce the importance of their participation and their response quality. Help them become passionate about their involvement, and make them feel part of a community of valued people.
  3. Reporting of redemption anomalies: Watch reward redemptions for unexpected changes. Shifts in incentive choices or a sudden increase in redemption can be important indicators of a potential threat.
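The redemption-anomaly reporting in point 3 can be sketched as a trailing-baseline check: compare each day's redemption count to the mean of the preceding days. The seven-day window and 2x multiplier here are hypothetical parameters:

```python
from statistics import mean

def redemption_anomalies(daily_counts, window=7, multiplier=2.0):
    """Return indices of days whose redemptions exceed multiplier x the trailing mean.

    The window length and multiplier are hypothetical tuning parameters.
    """
    flagged = []
    for i in range(window, len(daily_counts)):
        baseline = mean(daily_counts[i - window:i])
        if daily_counts[i] > multiplier * baseline:
            flagged.append(i)
    return flagged

counts = [40, 42, 38, 41, 39, 44, 40, 43, 120, 41]  # redemptions per day
print(redemption_anomalies(counts))  # [8] -- the sudden spike on day 8
```

A spike flagged this way is an indicator to investigate, not proof of fraud; shifts in incentive choices deserve the same scrutiny.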

Regardless of the recruitment or sampling method, whether it’s river, router, or programmatic, you have to do it all and watch it all.  When you do, you will notice shifts as they happen, rather than after they’ve impacted the data. You will be able to intervene and make changes. That’s where technology steps in — to implement gates and solutions. Watch and react every day. Stay watchful at every point in the survey and respondent process. You can’t rest. You can’t get comfortable. What worked yesterday won’t work tomorrow.

So the next time someone discusses their data quality initiative and elaborates on a couple of the supposedly innovative things they have in place, invite them to have a real conversation about research quality.  I’d be happy to have that discussion with anyone who asks.


How Procurement Views Market Research: Valuable Insight Or Wasted Opportunity?

In the latest Procurement Leaders’ research, 22% of MR buyers named performance as the primary stakeholder preference at their company.



By Aleksei Gontsarov

Getting value for money is, or should be, the raison d’être for those in the procurement function, and buyers of market research services can attest to that as much as anyone. Yet a poorly executed piece of research may see the cost completely eclipse the value of the findings.

Efficiency here depends in part on the questions buyers are willing to ask; poorly planned research projects and unclear initial aims can lead to substantial spending that yields little insight, if any at all. Well-prepared and clearly communicated research, however, helps the client cut unnecessary costs and supports both short- and long-term company decisions.

Many companies have misunderstood the true purpose of market research: they try to learn not about customers and their wishes, but about the company’s own strategy. Too many buyers want to know which of their products and ideas can be marketed best. The question should be which product the customer wants most.

By exploring such differences the buyer can choose the right MR agency and add value to the brand. The spread of agencies on the market provides plenty of choice of suppliers; the expertise and experience of the candidates should be the driving force behind the final decision.

The saying ‘buy cheap, buy twice’ truly applies to market research, as choosing the right research option is the main challenge for procurement. The price of a project should therefore be seen only in the context of the client’s eventual returns. The abundance of research techniques is, then, a test of a buyer’s acumen and ability to work with stakeholders to understand goals. Price is not the most important factor: researchers can conduct a million cheap and quick surveys without ever reaching the goal, while one expensive method might be the solution everyone was looking for.

The progress of online techniques and the traditional bias towards quantitative methods explain why online quant methods (mostly online surveys) account for 24% of global demand. Online qualitative methods are less common, with a share of 2%, but they are no less effective, nor are they outdated. A figure can often give quick information about the state of your industry or company, but the client should always look at qualitative data before putting the results into practice.

In the latest Procurement Leaders’ research, 22% of MR buyers named performance as the primary stakeholder preference at their company. Experience and reputation come second, at 18%, just ahead of price (15%). It is encouraging that for the majority performance and experience mean more than price, since a lack of value will push the real cost well above the price paid.

There are plenty of affordable, modern agencies that will gladly provide an approval rating for your brand and products, but real insights can come only from an experienced analyst, and that is what most MR agencies lack.

To find out more about how market research companies operate and the procurement of their services, please click here to download a snapshot of the latest Procurement Leaders Market Research report.


This article is a piece of independent writing by a member of Procurement Leaders’ content team and was originally posted on Procurement Leaders


Shocking Incompetence by a Major Brand

How can we have confidence in the future of our industry when a major research vendor has so little basic research competence?




Ron Sellers

This post will be short and incredulous.

There’s a lot of discussion about the lack of respondent engagement, the lack of respect for respondents’ time, the lack of survey relevance, and other issues which many people feel are harming the research industry.  Experts and wanna-be experts debate, discuss, blog, tweet, and give speeches about how to improve the industry.

Yet how much hope is there, really, when one of the major brands in the industry acts like a first-year research intern – and not a particularly bright one at that?

I just took a call from a research vendor to which I have subcontracted quantitative fieldwork in the past.  They wanted to talk to me about my satisfaction with our most recent project.  I recognized the brand immediately – you would too – but I couldn’t for the life of me remember what project I had done with them recently.

Then they told me the survey was about my satisfaction with my most recent project with them:  in January 2013.  The first question was how satisfied I was with the overall experience on a five-point scale.

I assume the survey would have gone on from there to other standard customer satisfaction questions, but I stopped the interviewer and told her there is no possible way I could remember the details of a project nearly three years old.  I had no idea who my client was, what the sample frame was, whether it was B2B or B2C – nothing.  She didn’t have the information in front of her to tell me any of the details to jog my memory.  As I’ve created and managed dozens of projects in the 31 months since the project ran, I was dumbfounded that they would expect me to remember a single thing about that particular project.

I won’t divulge the brand, although I am sorely tempted to just for the purposes of shaming them.  It is inexcusable that a major research firm – really, that a research firm of any size – would actually expect me to remember details from a project nearly three years old.  It’s bad enough when I see research companies fielding bad surveys from clients that don’t have a clue, but with the research company itself creating this project, the lack of basic research competence is beyond mind-boggling.

Companies need to understand how mistakes like this impact their brand.  This particular company (like many others) has spent a lot of money trying to position their brand as one with industry-leading expertise, through white papers, webinars, advertising, conference appearances, research-on-research, etc.  In my mind, all of that has been undone by one personal demonstration of a shocking lack of basic competence and common sense.

Do they truly know so little about research that they expect me to remember details this far back, or do they just not really care about response quality?  Is my business so unimportant to them that they can wait 31 months to find out how satisfied I was?  Is executive management so incompetent as to design and approve a project like this, or so out of touch that they don’t even know about a survey effort being run by someone lower down the food chain in their own company?

Forget industry expertise – they just flunked Research 101, Business 101, and Brand Development 101.  At least they got the trifecta.

This also represents opportunity lost.  Had they contacted me right after the project, I would have been happy to spend time telling them the good and the bad, which they could use to a) be more likely to satisfy me as a client on the next project, and b) help evaluate their overall process so as to improve client satisfaction in general.

Had they called about my perceptions of their brand, their competence, their business model, or anything else in general about the company, I and others could have given them feedback that would have helped them understand how they are perceived by current, former, and potential clients.

Had they called with a survey about why I haven’t given them any business in a long time, I would have been happy to have that conversation with them.  Depending on the direction of the conversation and on the aftermath, they might have even re-engaged me as a client.

Instead, they did this.

I weep for our industry.


Editor’s Note: After Ron sent this, he emailed me to say the company in question contacted him AGAIN with the same survey for the same project, adding further proof to the incompetence claim here. Pretty sad.


Proving the ROI of Research for Professional Service Firms

The net return on invested research dollar is very sizable. If your firm realizes even a small fraction of the documented benefits of market research, you are well advised to make the investment.



By Lee Frederiksen

What are the benefits of market research for professional services firms? Is it worth the cost? And does it really matter?

These are some of the questions we get from our clients when we recommend researching their current clients and target markets. While on some level we all know that market research is a good thing, these are legitimate questions. What exactly are the costs and benefits of market research?

Why Market Research?

Let’s start with why your professional services firm should consider using market research in the first place. There are a number of occasions where market research is appropriate. Here are some of the most typical situations where you’d use it:

  • When your firm is launching a new service
  • When you’re looking to select verticals to concentrate on and specialize in
  • When you’re developing your organizational strategy
  • When your firm is seeing a diminishing market share
  • When your industry environment is changing
  • When your firm needs to accelerate growth

The Benefits of Market Research

While there are many intangible benefits of market research on your current and prospective clients, such as better targeting and a more accurate understanding of how your firm is viewed, there is a much more tangible and direct measure of benefits. In a study of high growth professional services firms, we found that professional services firms that do systematic, structured research on their target client groups are more profitable and grow faster.

This relationship is documented in the figure below.

Chart - Relation of Research to Growth & Profitability

This figure shows the growth rate and profitability of professional services firms that do no research, occasional research or frequent research (at least quarterly) on their target client group.

Because of the specificity of these findings we are in a position to project the true economic benefits of market research for services firms. But first let’s consider the cost side of the equation.

The Cost of Market Research

The cost of a specific market research program is determined by the research method used (face-to-face interviews are more expensive than phone interviews, for example) and the sample size needed. Larger firms will typically require a larger sample. Taking these variations into account, we have estimated the typical market research costs for both occasional research (once per year) and a program of frequent research (quarterly).

Below are estimated costs for a single round of research for three sizes of firms:

  • Small ($5M revenue): $10,000
  • Medium ($20M revenue): $20,000
  • Large ($200M revenue): $40,000

For a program of frequent research (quarterly) you can simply quadruple these estimates. And keep in mind that the average duration of a market research project ranges from about 2 to 8 weeks.

The Return on Investment of Market Research

With estimates for the costs and economic benefits, we can now calculate the return on an investment in a program of market research for a professional services firm. For each of the three firm sizes, we’ll subtract the total research cost from the economic gain. This shows the net increase to top line revenue and bottom line profitability over a one year period.
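As a minimal sketch of that calculation, the per-round costs below come from the estimates above, while the annual gain figures are purely hypothetical placeholders (the article’s actual gains appear in its charts):

```python
# Illustrative ROI sketch. Cost-per-round figures are from the article;
# the annual gain figures are hypothetical placeholders, NOT the
# article's documented results.

def research_roi(annual_gain, cost_per_round, rounds_per_year=4):
    """Net return and ROI multiple for a yearly (quarterly) research program."""
    total_cost = cost_per_round * rounds_per_year
    net_return = annual_gain - total_cost
    return net_return, net_return / total_cost

for label, cost, gain in [("Small ($5M)", 10_000, 250_000),
                          ("Medium ($20M)", 20_000, 1_000_000),
                          ("Large ($200M)", 40_000, 10_000_000)]:
    net, multiple = research_roi(gain, cost)
    print(f"{label}: net ${net:,}, {multiple:.1f}x return")
```

Plugging in your own firm’s gain estimate shows the same pattern the charts do: because research cost does not scale with firm size, the ROI multiple grows as the firm does.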


Chart - ROI for Small Firm


Chart - ROI for Medium Firm


Chart - ROI for Large Firm

Two things are immediately apparent from this analysis. First, the net return on invested research dollar is very sizable. If your firm realizes even a small fraction of the documented benefits of market research, you are well advised to make the investment.

The second observation is that the larger the firm, the greater the return on invested dollars. This is a very straightforward relationship arising from the observation that the expense of market research does not rise in lockstep with firm size. Put another way, market research is a relative bargain for larger firms.

Putting Research to Work in Your Firm

Sadly, market research is not magic. To enjoy the generous benefits it can provide, you need to take it seriously and systematically use it to adjust strategy and shape staff behavior. As it turns out, well-conducted and well-presented market research is often a powerful catalyst for change.

Historically, professional services marketing has been driven by hunches and habits. But most professionals are pretty logical and fact based in their work. Consequently, they are often refreshingly open to new data on their clients’ behavior and perceptions. Perhaps that is why it works so well.


Previously posted on the Hinge blog, reposted with permission. 


Keeping up with the Millennials

By examining some of the ways millennials have affected the workforce with their unique generational desires, one can see clues into how this group is also impacting the marketing research industry.


By Kea Wheeler

It seems that many companies are trying to figure out what to do about millennials.  Employers are at a loss of how to incorporate them into the workforce, how to market to them, and how to recruit and engage them in the marketing research industry.

But being in this generation myself, I don’t necessarily see an issue with the group at large. According to some hiring managers’ opinions, that is my narcissistic side talking, a trait heavily associated with millennials (Elance-oDesk & Millennial Branding, 2014). What I do see as an issue is the paradigm and infrastructure that was, and in some ways still is, not capable of adapting fast enough to keep up with this generation.

The same lack of quick adaption in the workplace also goes for the overall market research industry. By examining some of the ways millennials have affected the workforce with their unique generational desires, one can see clues into how this group is also impacting the qualitative marketing research industry specifically.

Millennials want experiences

Millennials are looking for experiences that will enrich their lives, not just their wallets.  In an article entitled How Millennials Could Upend Wall Street and Corporate America, an Intelligence Group study found that “almost two-thirds (64%) of Millennials say they would rather make $40,000 a year at a job they love than $100,000 a year at a job they think is boring.”

The underlying good news is that perhaps lower incentives can be paid to millennials. The bad news is that qualitative or clinic-setting research has to be an event that piques millennials’ interest and offers the chance for personal enrichment. Coming to sit around a conference table to talk for two hours may not be their idea of an ‘enriching’ experience.  One way to combat this is to strive for more interesting, unique locations and to make seating arrangements less “bored”-room style by providing an in-home or outdoor living-space atmosphere.

The notion of this generation wanting more experiences may also affect how follow-up communications are sent to millennials. Putting more thought into the design of your email blasts and invite letters can make the research appear more like an event than a Q&A session – and therefore more appealing to millennials.

Millennials are looking for flexibility

Millennials want the flexibility to be and do what they please and to shift their schedules to accommodate the many facets of their life. Patrick Thean, CEO & Co-founder of Rhythm Systems, expounds on this notion of flexibility in his article, Millennials in the workforce – engaging them, retaining them. Thean states that “flexibility to get their work done any time, from anywhere, is something essentially appealing to this generation” (Thean, 2015).

In the past, flexibility has not been the market research industry’s strong suit. The industry was built on a “come to this location, on this date, at this time, and talk for this amount of time” paradigm. However, new methodologies are bringing an “anytime, anywhere” aspect to research with the rise of online surveys and asynchronous platforms – like MROCs and smaller qualitative diaries and boards. While the advent of webcam focus groups solves the “anywhere” component, we still must have a dedicated time and date to participate in a discussion.

Trying to pigeonhole a millennial into a designated research time frame may be limiting the amount of this cohort’s participation.

Millennials communicate differently and that’s ok

There have been myriad articles about millennials’ inability to communicate through the written word and, particularly, verbally. I would not say that millennials’ verbal communication is lost, but it has shifted to quick bursts with the help of technology. Terri Klass and Judy Lindenberger write in their article Characteristics of Millennials in the Workplace that “quick and efficient communication is the way Millennials choose to interact” (Klass & Lindenberger, 2015).

How does this shift to quick and efficient communication impact qualitative research? Currently, focus groups run two or more hours. Two or more hours of a millennial’s day may not be viewed as efficient when they believe the discussion can be streamlined down to 90 minutes. As communication has shifted for this generation, perhaps the set-up of research discussions also requires a revamp. Shortened sessions, with fewer participants, may be best when interviewing millennials.

Yes, millennials, including myself, have different views on many things, which have shaped their ideals about the work environment and potentially the process of market research. But that does not mean we are uninterested in participating and engaging in either.  It only underscores that what has worked for past generations does not work for millennials.

Jaleh Bisharat, SVP of Marketing at Elance-oDesk explains that millennials must be unique as “they are inventing what it means to be successful in a technology-driven world…where needs change on a dime and independence and flexibility are at a premium.”

I’ll be narcissistic enough to say that millennials have adapted to this new world and it looks like the ball is in everyone else’s court. How will you keep up?


CASRO and MRA Join Suit Against the FCC’s New TCPA Rules

CASRO and MRA have filed a “motion to intervene” in a court case against new telephone rules from the Federal Communications Commission (FCC).



Editor’s Note: Like anyone who has been in the research industry for more than ten years, I cut my teeth doing CATI based telephone research; in fact my first job in research was running a call center, and my second was as COO for an IVR provider who mostly did outbound political polling work. Back then online research was just beginning to be adopted, and like many I saw the writing on the wall and increasingly began adopting online methods as a “go to” methodology, but even until 2010 when I stopped conducting primary research I would still occasionally use CATI as the best methodology to accomplish the goals of the study, and I would do so again today if needed. Telephone based research is still an important and widely used tool in the researcher’s toolbox, a major revenue driver for many suppliers, and an important employment option for thousands of people.

That is why it’s important that we as an industry continue to have strong advocacy organizations to help government entities understand that tarring researchers with the same brush as marketers is a bad call. Whether we’re talking about TCPA or Data Privacy, the Law of Unintended Consequences (or even fully Intended) can impact the research industry in a profoundly adverse way. Thanks to the efforts of many trade associations, and especially CASRO and the MRA here in the United States, these adverse actions can be mitigated or even stopped entirely. I might not agree with everything they do or the positions they take, but for this reason alone if no other, they deserve our support and thanks.


Joint Press Release:

America’s two national research associations representing the profession and industry of survey, opinion and marketing research have filed a “motion to intervene” in a court case against new telephone rules from the Federal Communications Commission (FCC).

In their motion, the Council of American Survey Research Organizations (CASRO) and the Marketing Research Association (MRA) contend that “the definition of an autodialer” in the FCC’s new Telephone Consumer Protection Act (TCPA) rules that restrict the use of autodialers to call cell phones “must be clarified to focus on the current capacity to generate and dial random or sequential numbers, and/or clarified to exclude calls that involve human intervention in the dialing.”

“If the court rules in our favor, we could walk away with a more constrained autodialer definition and an applicable human intervention test — both of which could be major points of relief for the research industry,” said Diane Bowers, CASRO president.

“We also seek relief from class action litigation over reassigned cell phone numbers. The FCC’s new rules create an unnecessary level of risk for researchers,” added David W. Almy, MRA CEO.

Because “cell phone numbers change subscribers frequently, and without notice,” the associations argued that callers who have obtained prior express consent to call a cell phone number should not be held liable if that number has been reassigned to a new subscriber, unless the caller gains actual knowledge of the reassignment.

Three organizations — ACA International, Sirius XM Radio, and the Professional Association of Customer Engagement — petitioned the court in opposition to the new TCPA rules. Those petitions were consolidated into a single case before the U.S. Court of Appeals for the DC Circuit. CASRO and MRA filed a motion to intervene in that case to ensure the interests of researchers and research organizations are represented and addressed in the proceeding.

The 1991 TCPA requires express prior consent to call a cell phone using an autodialer. According to the most recent CDC data, more than 60 percent of American households are mostly or only reachable via cell phone, the calling of which is restricted for any purpose, including research, by the FCC’s implementation of the law.

The FCC’s new Declaratory Ruling and Order, issued on July 10, included a definition of an autodialer broadened to include most dialing equipment, and a one-call-before-liability standard for autodialer calls to cell phone numbers that have been reassigned to new users, whether or not the caller learns of that reassignment.

#  #  #  #  #

CASRO and MRA are the national associations for survey, opinion and marketing research — CASRO represents U.S. research businesses and MRA represents U.S. research professionals.

Reposted with permission from CASRO. 


Deconstructing Twitter

Twitter is massive - there are about 350,000 tweets sent per minute, from tens of thousands of people around the world. Using a corpus of tweets from the NewMR social media study, Andrew Jeavons analyzes the underlying characteristics of Twitter.
Image Credit: Stockmonkeys.com


By Andrew Jeavons 

Twitter is massive. There are about 350,000 tweets sent per minute, from tens of thousands of people around the world. I became interested in the underlying characteristics of Twitter after the NewMR social media study. As a consequence of taking part in that study, I had a corpus of nearly 400,000 tweets available for analysis. The tweets were collected over a 24-hour period from 1,394 unique users. Using this corpus, I did some analysis of what underlies the structure of tweets.

To use any data source effectively, knowing the structure of that data is critical to avoiding bias and inaccuracy. No data source is pure; they all come with their biases and hidden structures.

Tweets, it seems, are mostly about URLs. In my sample, 89% of all tweets had embedded URLs. Given this percentage, Twitter seems to be a broadcast medium for sharing URLs.

Retweeting is not that common. Low-frequency tweeters, those who posted 10 or fewer times in the 24-hour period, had a slightly higher rate of retweeting (16%) than high-frequency tweeters, who retweeted 12% of the time. The idea of sharing tweets with your followers seems to be a canard.

Roughly 10% of all tweets had links to images. I’m not sure at the moment how many of these images were of cats, I’m working on this one.

73% of tweets didn’t have a hashtag. Around 10% had one hashtag, 8% had two hashtags and just under 5% had three. The most hashtags in a single tweet was 17; this is what it looked like:


Hashtags, at least in this corpus, are not providing much information as to the content of a tweet.

These users tweeted anywhere from once to 5,768 times during the 24-hour period. The number of tweets from users who tweeted 10 times or fewer was 1,381, or 0.35% of the corpus. In terms of accounts, 27% of accounts tweeted 10 times or fewer during the 24 hours. There was some indication that low-frequency posters included fewer URLs in their tweets.
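The per-tweet counts reported above can be reproduced with a short script. This is an illustrative sketch only; the author’s actual tooling is not described, so the regexes and the `(user, text)` input shape are assumptions:

```python
# Illustrative sketch of simple corpus statistics for a set of tweets.
# Input format (list of (user, text) pairs) and regexes are assumptions,
# not the author's actual analysis pipeline.
import re
from collections import Counter

URL_RE = re.compile(r"https?://\S+")
HASHTAG_RE = re.compile(r"#\w+")

def corpus_stats(tweets):
    """tweets: list of (user, text) pairs. Returns basic incidence stats."""
    with_url = sum(1 for _, text in tweets if URL_RE.search(text))
    # Distribution of hashtags per tweet, e.g. {0: ..., 1: ..., 2: ...}
    hashtag_dist = Counter(len(HASHTAG_RE.findall(text)) for _, text in tweets)
    per_user = Counter(user for user, _ in tweets)
    low_freq_accounts = sum(1 for n in per_user.values() if n <= 10)
    return {
        "pct_with_url": 100 * with_url / len(tweets),
        "hashtag_counts": dict(hashtag_dist),
        "pct_low_freq_accounts": 100 * low_freq_accounts / len(per_user),
    }

sample = [("a", "check this https://example.com #mrx"),
          ("a", "no links here"),
          ("b", "two tags #one #two https://example.org")]
print(corpus_stats(sample))
```

Run over the full 388,127-tweet corpus, counts like these are what yield the 89% URL incidence and the hashtag distribution discussed above.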

Listen Carefully

This corpus was derived from users who sent tweets containing the words “market research” in a previous study phase. In this respect it is probably biased toward financial news Twitter accounts that are constantly sending out reports on businesses and market sectors. It shows that you need to be careful about whom you are listening to. Hyperactive Twitter accounts may be giving different information than quieter accounts. How active an account is, then, is an important metric to monitor.

Hashtags and Retweets: Not so much

Hashtags can’t be relied upon to measure themes on Twitter; there are simply not enough of them produced. Even if the figures in this corpus are skewed low (which I have no reason to believe), hashtags occur in only a small share of tweets. Retweeting doesn’t seem to be a huge activity either; Twitter seems to be more broadcast than sharing.

Rise of the Robots

It’s clear that in this corpus there are a lot of robots (bots) – automated tweeting systems. This is the only way users are able to post thousands of tweets per day.  I suspect that because I picked up business-reporting bots in this corpus, they are over-represented. However, they are a part of the Twitter landscape that has to be considered. Hyperactivity is not necessarily a bad thing, but it does mean that those tweets are not from an individual. All users are not equal.

URLs are the content?

The high incidence of URLs in this corpus points to the content of a tweet being the web page or image included in it. URLs can’t be ignored, and the content they hold has to be captured.

The Signal and the Noise

So what is the meaning in a tweet? Hashtags are too sparse to be reliable markers of content. The text is obviously important, but in this corpus the incidence of a URL in a tweet is so high that the URL has to be the primary message. The next stage is to analyse the URLs, which leads to the use of automated processing. This corpus has 388,127 tweets in it. Assuming it takes 15 seconds to manually process and digest each tweet, it would take a human 67 days, working 24 hours a day, to read all the tweets – and that is without looking at the URLs. We have to sample tweets or automatically process them to extract meaning. What criteria we should use for sampling tweets is unclear at the moment. Even if we obtain a sample that is capable of being processed by humans, it’s pretty clear that it will always be a relatively tiny amount and hence prone to error, as are all samples. I don’t think humans or computers are the complete answer to processing large amounts of text, but at least computers can process the whole rather than part.
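The back-of-the-envelope figure above is easy to check:

```python
# Sanity check of the manual-processing estimate: 388,127 tweets at
# 15 seconds each, read around the clock with no breaks.
tweets = 388_127
seconds_per_tweet = 15
days = tweets * seconds_per_tweet / (24 * 60 * 60)
print(round(days, 1))  # ~67.4, matching the "67 days" figure in the text
```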

We may be starting a battle of the robots, automated tweeting versus automated analysis of tweets.

O brave new world.

Note: this post was originally posted on Mass Cognition.