
Going Big with Qualitative Market Research

The ability to ask and analyze at scale has given rise to a different breed of qualitative


By Adam Rossow

The qualitative research renaissance is in full swing. Brands have figured out that context is the key to understanding what’s relevant. And we have seen through countless examples that, left to its own devices, data of any size can lead to missteps based on assumptions.

Despite qualitative’s growing value, drawbacks, both perceived and actual, are still a drag on the more touchy-feely research discipline. A major one is its lack of scale. Many people need the security blanket of large base sizes: the ability to say “3,000 people said this,” even if that mass response is too vague or cloudy to prompt action.

It’s understandable. There is always safety in numbers.

But qualitative is branching out. While it’s still the home of the intimate focus group and in-depth individual conversations, it can also be the vehicle that empowers 1,500 consumers to paint a picture of the type of person that stays at an Airbnb, or the platform for 1,000 digital natives to provide insight into technology’s role in the shopping experience.

Thanks to advances in sampling, outreach platforms and text analytics, quickly launching a series of open-ended questions to the masses is possible. And extracting insights from a mountain of responses isn’t nearly as tedious or susceptible to missteps and vagaries as it used to be. That ability to ask and analyze at scale has given rise to a different breed of qualitative. And while a few questions don’t provide the same depth as hour-long IDIs, they do provide a valuable utility for brands – that of quick contextual insight from a large number of individuals.

With just a few open ends, clients can get an up-to-the-minute view of their customers, understand whether what they are communicating to consumers resonates, uncover which product features are most important, see how their competition is perceived, and more.

Despite needing only a few key ingredients to achieve qualitative at scale, the recipe for success is not so simple. Anyone who has experience working with open ends and text analytics has seen their fair share of one-word answers, word clouds, and uninformative findings. Effectiveness lies in meticulous question formulation and testing, as well as analysis that provides a story, not just word counts and nebulous themes. As with any good qualitative initiative, preparedness and skilled people are vital.
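The hygiene step described above can be sketched in a few lines of Python: screen out the one-word answers before counting anything, then surface candidate themes for a human analyst to build the story around. The stopword list, threshold, and sample responses below are illustrative assumptions, not a production text-analytics pipeline.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "it", "and", "to", "of", "i", "was", "very"}

def informative(response: str, min_words: int = 3) -> bool:
    """Screen out one-word and near-empty answers before analysis."""
    return len(response.split()) >= min_words

def top_themes(responses: list[str], n: int = 3) -> list[tuple[str, int]]:
    """Count content words across usable responses as a crude theme signal."""
    counts = Counter()
    for r in filter(informative, responses):
        counts.update(w for w in re.findall(r"[a-z']+", r.lower())
                      if w not in STOPWORDS)
    return counts.most_common(n)

responses = [
    "Good.",
    "The checkout process was slow and confusing",
    "Slow shipping ruined an otherwise good experience",
    "Checkout kept timing out, very slow",
]
print(top_themes(responses))
```

The point of the sketch is the order of operations: filtering uninformative answers first is what keeps the counts from degenerating into the word clouds the article warns about.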

So next time you need answers from the masses, don’t immediately default to quantitative research that’s void of context. You may be missing out on the opportunity to have your scale and your story too.



Personalizing Respondent Rewards for the Global Marketplace

Today’s global marketplace necessitates a thoughtful and nimble approach to doing business. For market research, this means building that connection with respondents to gather actionable data that brands can use on a broad level.


By Jonathan Price, CEO, Virtual Incentives

Globalization has had a significant impact on nearly every facet of our daily lives – scholars continue to explore and study its effect on diversity, culture, politics and more. From a business standpoint, the economic implications of globalization are immense, ranging from increased competition and more efficient markets all the way, some would argue, to wealth equity. One thing is for sure: globalization is changing the way we do business on a fundamental level.

A far cry from the idyllic “It’s a Small World” concept of each nation and culture working together in harmony under “just one moon and one golden sun”, today’s global marketplace necessitates a thoughtful and nimble approach to doing business. The number of consumers in developing countries continues to rise rapidly, and the global flow of products, services, money and even information is rising in tandem. (Throw government, labor and risk management into the mix, and you have a complex web of opportunities and challenges that we won’t get into here.) Reaching these audiences means ably wielding technology (which is driving globalization in the first place) to engage them in a space that makes sense, both to the target audience and from a business standpoint.

To narrow this giant concept down to its implications for the market research industry, we need to look at ways to effectively and efficiently embrace the global nature of today’s marketplace. Firms that fail to do so will quickly be left behind. The essence of doing this lies beyond simply making products and services available in multiple languages and countries. It means building that connection with respondents to gather actionable data that brands can use on a broad level.

For research projects that have a global scope, or are being conducted beyond the “likely suspects” (e.g. the United States), a respondent reward offering can be important in boosting response rates. Virtual Incentives has always had a global offering, with technology like our real-time API and easy-to-use management portal. But to us, “going global” means more than just technology that works across country lines. One good example is the Global eGiftcard, which goes beyond just taking an existing product and sending it overseas. This solution offers advanced personalization, customization and ordering options with more than 600 brands to choose from in various denominations – all with seamless delivery to more than 43 countries in 16 currencies.

Perhaps the most important point that this solution drives home is its focus on the survey respondent. The instant, flexible delivery of culturally relevant, top in-country brands for more than 40 countries goes beyond simply changing language or currency to make a U.S. product fit across borders. In order to effectively conduct research, partners in the process need to deliver true international solutions.

So if “going global” is part of the plan – and it probably should be – there are several things to keep in mind:

  • One size doesn’t fit all when you are crossing geographic and cultural boundaries. Customization that goes beyond the “superficial” is key to success.
  • Think about the respondent audience and their needs first. Technology can make the world a whole lot smaller, so it needs to resonate with respondents no matter the country where they reside.
  • Cover the basics – make sure you are delivering your solution in the language, currency and method (e.g. mobile friendly) that fits the audience. A no-brainer, right?
  • Evaluate continuously by reviewing the effectiveness of any global solution – gather data and feedback and implement it in future products for constant improvement.

With something as vast as a global presence, it’s important to make sure all the i’s are dotted and t’s are crossed. The world may be getting smaller, but for most businesses this means a larger audience and a new approach.


5 Guardrails to Guide Qualitative Learning

It is the perfect time for customer information to meet customer understanding. Acquiring valuable qualitative insights before applying assumptions to data just might provide that powerful point of difference opportunity no one else has discovered.


Earlier this year, Schlesinger Associates installed THE WALL at select facilities.  This dynamic, multi-window interactive wall serves as both a free form and structured display of qualitative stimuli. 

By Mark Murray, Managing Director, MarketResponse International

It’s not in my nature to be the adult in the room, but it’s time to clear the air on using “qualitative” to breathe one’s own exhaust.

Each morning research facilities the world over are replenishing M&M dispensers, filling minibars and testing sound levels for the next episode of “how much do they like us.” And while it would be easy to launch into a cynical diatribe, the productive course of action is to implore the researcher to cling to their objectivity, hone interpretive skills and apply new qualitative methods able to reveal the next product or service consumers couldn’t imagine life without.

Here are some guidelines.

1. Bring “Context” to the project.
There’s value in any opportunity to hear customers’ and prospects’ reactions to concepts, products, and offers. Be sure to clarify what to expect, what the realistic takeaway is, and, perhaps most importantly, which comments must be ignored.

Sure, it’s invigorating to be the purist and say “exploratory research” is not the proper forum for a Facebook thumbs up or down reaction. Instead, be the realist. Rally around creating a productive conversation behind the glass. Just keep the words “Research” off the report. “Customer Audit” has a nice ring to it.

Accept that in some situations your talents are being used to “facilitate” instead of “moderate”. Recognize the difference and get the work done.

2. Objectivity
It must be unwavering. Some marketers design, field and report qualitative implications internally. Many have productive, long-term relationships with go-to moderators. Both can be valuable resources steeped with an understanding of customer and category.

That said, if there’s a fork in the road between self-preservation and autonomy you’re on the wrong path.
In your first visit with a client’s research director, you’ll know the level of objectivity, senior management sponsorship and respect they’ve achieved through their candor. Cherish those who have it. Propose quant for those who don’t.

3. Annihilate Narcissism
It’s fine to applaud passion for the business, but keep it out of the moderator’s guide.

Esprit de corps is healthy. It fuels incredible accomplishments. Celebrate opportunities to live vicariously through your client’s success. Just remember, a productive devil’s advocate can be the saving grace in getting a strategy right and keeping expectations in line with results. You weren’t invited to validate. Qualitative’s role is to investigate.

4. Research Designed to Stimulate
Perhaps there was a day when research came in chocolate and vanilla. And while the first question is often quant or qual, the first response should be, “what will you do with the findings?”

The menu of exploratory methods is ever growing. Researching research has become a more important part of being a resilient practice. The tools and forums to deliver stimulus are far-reaching, and the morsels of characteristics available for recruiting are virtually limitless. Without letting the process become cumbersome, embrace and use all means available.

We’ve organized methods across Brand, Engagement, Product, and Communications. Each of these Practice Areas houses methods designed to deliver the learning needed to answer and measure specific client requests. For instance, Brand understanding asks us to isolate the core motivations that form strong connections. Communications checks focus on an interpretation of message, capacity to understand and overall appeal. Each request calls for a specific approach and offers the challenge to add new techniques and exercises over time.

Recognize that today life is woven with threads of e-mails. And our conversations have been replaced by a series of texts. The challenge of hosting a focused dialog today makes research design paramount. Commit to learning and incorporating new platforms while distinguishing the “tool” from the job of getting constructive observations and valuable insights through the consumer narrative.
With all that said, the simple handwritten notes of participants’ “ideal moments” remain some of the most insightful treasures of our studies; there’s nothing wrong with the tried and true.

5. The Art of the Question and the Empathetic Ear
The proliferation of bias in virtually all content consumed these days is overwhelming. Developing an objective question as a means of getting an unadulterated response has become more difficult. More than ever, we need to guard against discussion guide rewrites that unconsciously entice respondents to draw a target around your dart. Continually ask yourself if a question is designed to prompt an answer or launch a narrative that reveals their story. You need to understand the world in which your client hopes to play a role and not the other way around.

It’s time to face the facts with respect to Qualitative. “We” have reached a point where behavioral tracking, algorithms, and rigid experience designs are asking consumers to do business on the marketers’ terms. All of these factors make it the perfect time for customer information to meet customer understanding.

Acquiring valuable qualitative insights before applying assumptions to data just might provide that powerful point of difference opportunity no one else has discovered.


Do We Have a Place in the Lives of Respondents?

For online data collection companies, respondents are every bit as important as clients. When we don’t value our respondents’ experience, our data becomes compromised.


By Sima Vasa


Among all the data-collection techniques used, the internet offers the least direct human contact. This can leave respondents feeling alienated, as though they are a mere statistic. As leaders in online data collection, we must place greater emphasis on our respondents’ experience. Without respondents, we have no customers — and without customers, we have no business! The technology driving our industry is incredible, and it’s worthy of attention. But we often forget that the feedback we collect from people is what’s driving this technology.

In this article, I’d like to take some time to learn more about our respondents. After all, they’re the ones who help us make a living! They want to share their opinions and earn rewards, true, but we also have a place in their daily lives.

Whether we acknowledge it or not, the experience we deliver to our respondents has an impact. We need to make sure that our respondents are as satisfied as our customers. After all, they are the heart and soul of what we deliver.

Adopting this perspective, I chose to look more deeply into how the experience we offer shapes respondents’ opinions of us. We developed and fielded a survey of 2,500 of our most active panelists. Here is what we found:

  1. Respondents count on us and plan to take our surveys. 80% of people indicated that they like taking surveys on weekdays, but only when they have a free moment. We learned from this that we need to provide more opportunities for those who can’t make time to respond during the week. One respondent said:

The only thing I have trouble with is doing these surveys on weekdays […] I am doing this one on a Wednesday.  I am a teacher, and weekdays are the most difficult for me to get them done and they sometimes expire before I get a chance to do them.

  2. The older demographic has value and wants to contribute in a more meaningful way. The younger demographic, out of all age groups, rated their experience with us the highest. As I dug deeper, I learned that the older demographic wished we offered more surveys geared toward them. Here’s some of what they said:

[I’d like] more surveys geared to older adults. Just because we are over 65 doesn’t mean we don’t contribute.

I may be older but I’m quite aware of what’s going on in the world.  I’m an avid reader, [and] love music, including what my grandchildren listen to.  I’m interested in tech, though [I’m] not a geek.

Reading these comments, and many others like them, we saw the need to give our older demographic more opportunities to share their opinion, and to target more surveys to their age group. They are loyal and want to contribute.

  3. Integration of mobile technology enhances respondents’ experience and increases participation. Respondents frequently requested that surveys be optimized for mobile devices. We received many comments like these:

“Ensure surveys can be completed on a mobile device…it’s no longer new technology.”

“Many surveys are not able to be taken on a mobile device. This needs to be changed as more and more people use their mobile devices for things other than talking/texting.”

  4. Respondents want us to value their time. Many respondents expressed frustration at being disqualified after answering 15 questions. 87% of our respondents indicated that they would rather answer 2-5 preliminary questions (even if these questions are repeated in the main survey) than answer 15-20 questions before being disqualified. Most would prefer to know upfront, at the cost of a redundant question or two, whether they qualify. They understand the trade-off and prefer being pre-screened.
  5. While the primary reason for membership in our panel is to earn rewards, this varies by age group. Earning rewards was, unsurprisingly, cited as the primary reason for membership in our panel. While this response was the most common, it varied by age group. For the 18-34 segment (69%), fun rewards were the largest driver. In contrast, 25% of the 35-54 age segment cited “having my voice heard” as the primary driver for membership, while only 15% of the 18-34 segment listed this as their motivation.

It’s natural for us to be concerned with the service we offer our customer base. This article is designed not as an opposition, but as a supplement, to that demand for excellence. We offer our customers data from respondents — if we don’t consider their concerns, the data we offer is compromised.

Our respondents are as essential as our customers. Their concerns must be taken into account if we seek to maximize our product’s value. When we consider our respondents, we ensure higher-quality data for those with whom we do business.


Why You Should Never Sample On Auto-Pilot

How do you decide the right sample variables to control on? The sample supplier needs to understand the objectives of the research as well as the analytic plan in order to make solid recommendations.


By Susan Frede, VP of Research Methods and Best Practices, Lightspeed GMI

Sampling often seems to be an afterthought with clients, as many simply state they want a ‘nationally representative sample.’ The question is: what does the client mean by a nationally representative sample? One client might think it means representation on age and gender only, while another might expect it to include controls on additional variables like region, income, education, etc.

How do you decide the right variables to control on? The sample supplier needs to understand the objectives of the research as well as the analytic plan in order to make solid recommendations. Without this understanding it is difficult to build an appropriate sample. This understanding should include a discussion of the category and how different groups react to the category. Clients may not always know every group that is important, but most will have a general understanding of how various groups might respond.

Research-Live (May 2016) recently reported an excellent example of the importance of understanding the objectives and the category. Voters in the UK will soon be voting in a referendum on whether or not to remain in the European Union. Results of polls have varied greatly, and originally people thought the difference was driven by online versus phone methodology. However, with further digging it was discovered that the decision to remain or not is highly correlated with education. Many of the polls are not controlling on education, which can lead to skews in the results. Those online are also more likely to have higher education levels, which exacerbates the difference between online and phone.

Sampling differences may also be accounting for some of the large differences in political polling in the U.S. for the next presidential race. It is important to look at the types of people who support each candidate and ensure the groups are appropriately represented in the sample. In some cases it may go beyond demographic variables. Certainly in U.S. politics, political party is key as many people vote along party lines.

Some might be saying ‘but you have just given us two political examples and this doesn’t apply in the marketing research world’. But it does! Say a client is testing a new idea for a high-end product with an expensive price tag. Logic suggests that those with higher income will be more likely to afford the product and purchase it. If the income of your sample skews low, then it may appear the product is not viable. Income might become even more important if you are comparing several product ideas and trying to pick a winner. If one of the samples skews high on income and the other low, it could look as if the idea tested on the higher-income sample is the winner when in fact it is the sample that is driving the difference.

Generally age and gender are the most common quota variables, but below are a number of examples of what might be important to control on depending on the category. For any category, the key is to think about what demographics might impact respondents’ behaviors and answers.

  • Banking and finance – Income impacts the types of financial products people may own and use.
  • Product consumption – Household size is key because larger households have higher consumption levels.
  • Shopper study – Stores can vary by region.
  • Entertainment/music – Tastes may vary by race/ethnic group.
  • Insurance – Insurance needs change as life stage changes so controlling on things like marital status or presence of children is important.
  • Toys – Age and gender of children can drive toy preference.
  • Hispanics/Canadians – Language is important because it can drive product choice.

Even when sampling is carefully done there can still be unexpected results. This is why the first thing to check when receiving a data file should be the demographics. Do the demographics look like what is expected of the target group? Next, brand usage and category habits should be examined. Balancing on demographics reduces the chance that there will be brand usage and habit skews, but differences can still occur. For example, having significantly more users of the brand can greatly impact key measures. When differences in demographics, brand usage, and category habits are discovered, data can be weighted to bring the differences in line with expectations.
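That last step, weighting a skewed sample back to known population targets, can be sketched as simple cell weighting on a single demographic. This is a hedged toy example (real projects weight on several variables at once, e.g. with raking, and the income split below is invented):

```python
def poststratify(sample: list[dict], targets: dict[str, float], var: str) -> list[float]:
    """Weight each respondent so the sample's mix on `var` matches
    population targets (cell weighting on one variable)."""
    n = len(sample)
    observed: dict[str, int] = {}
    for r in sample:
        observed[r[var]] = observed.get(r[var], 0) + 1
    # weight = target share / observed share for the respondent's cell
    return [targets[r[var]] / (observed[r[var]] / n) for r in sample]

# Toy sample: 3 high-income and 1 low-income respondent, but the population
# is 50/50, so the low-income answer is up-weighted (0.5/0.25 = 2.0).
sample = [{"income": "high"}] * 3 + [{"income": "low"}]
weights = poststratify(sample, {"high": 0.5, "low": 0.5}, "income")
print(weights)
```

The weights sum back to the sample size, so weighted percentages stay on the same base while the income mix now matches the target.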

Bottom line, sampling needs the same consideration as the rest of the research design and should never be done on auto-pilot.


Bainbridge, J. (May 2016). Education not taken into account sufficiently in polls. Retrieved from https://www.research-live.com/article/news/education-not-taken-into-account-sufficiently-by-polls/id/5007442


The Analytics of Language, Behavior, and Personality

Computational linguists and computer scientists, among them University of Texas professor Jason Baldridge, have been working for over fifty years toward algorithmic understanding of human language. They’re not there yet. They are, however, doing a pretty good job with important tasks such as entity recognition, relation extraction, topic modeling, and summarization.

By Seth Grimes

Computational linguists and computer scientists, among them University of Texas professor Jason Baldridge, have been working for over fifty years toward algorithmic understanding of human language. They’re not there yet. They are, however, doing a pretty good job with important tasks such as entity recognition, relation extraction, topic modeling, and summarization. These tasks are accomplished via natural language processing (NLP) technologies, implementing linguistic, statistical, and machine learning methods.

Computational linguist Jason Baldridge, co-founder and chief scientist of start-up People Pattern


NLP touches our daily lives, in many ways. Voice response and personal assistants — Siri, Google Now, Microsoft Cortana, Amazon Alexa — rely on NLP to interpret requests and formulate appropriate responses. Search and recommendation engines apply NLP, as do applications ranging from pharmaceutical drug discovery to national security counter-terrorism systems.

NLP, part of text and speech analytics solutions, is widely applied for market research, consumer insights, and customer experience management. The more consumer-facing systems know about people — individuals and groups — their profiles, preferences, habits, and needs — the more accurate, personalized, and timely their responses. That form of understanding — pulling clues from social postings, behaviors, and connections — is the business Jason’s company, People Pattern, is in.

I think all this is cool stuff, so I asked two favors of Jason. #1 was to speak at a conference I organize, the upcoming Sentiment Analysis Symposium. He agreed. #2 was to respond to a series of questions — responses relayed in this article — exploring approaches to —

The Analytics of Language, Behavior, and Personality

Seth Grimes> People Pattern seeks to infer human characteristics via language and behavioral analyses, generating profiles that can be used to predict consumer responses. What are the most telling, the most revealing sorts of thing people say or do that, for business purposes, tells you who they are?

Jason Baldridge> People explicitly declare a portion of their interests in topics like sports, music, and politics in their bios and posts. This is part of their outward presentation of their selves: how they wish to be perceived by others and which content they believe will be of greatest interest to their audience. Other aspects are less immediately obvious, such as interests revealed through the social graph. This includes not just which accounts they follow, but the interests of the people they are most highly connected to (which may have been expressed in their posts and their own graph connections).

A person’s social activity can also reveal many other aspects, including demographics (e.g. gender, age, racial identity, location, and income) and psychographics (e.g. personality and status). Demographics are a core set of attributes used by most marketers. The ability to predict these (rather than using explicit declarations or surveys) enables many standard market research questions to be answered quickly and at a scale previously unattainable.

Seth> And what can one learn from these analyses?

People Pattern Portrait Search


Personas and associated language use.

As a whole, this kind of analysis allows us to standardize large populations (e.g. millions of people) on a common set of demographic variables and interests (possibly derived from people speaking different languages), and then support exploratory data analysis via unsupervised learning algorithms. For example, we use sparse factor analysis to find the correlated interests in an audience and furthermore group the individuals who are best fits for those factors. We call these discovered personas because they reveal clusters of individuals with related interests that distinguish them from other groups in the audience, and they have associated aggregate demographics—the usual things that go into building a persona segment by hand.
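Baldridge describes sparse factor analysis for persona discovery; as a rough stand-in for the same idea, clustering profiles on interest vectors can be sketched with a tiny k-means. The interest columns, scores, and deterministic seeding below are invented for illustration and are not People Pattern's actual algorithm:

```python
def kmeans(points, k, iters=10):
    """Tiny k-means with deterministic seeding (first k points as centroids)."""
    centroids = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        for i, p in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centroids[c])))
        # update step: move each centroid to the mean of its members
        for c in range(k):
            members = [p for i, p in enumerate(points) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

# Rows = profiles, columns = interest scores: [sports, gaming, cooking, gardening]
profiles = [(1, 1, 0, 0), (0, 0, 1, 1), (1, 1, 1, 0), (0, 1, 1, 1)]
print(kmeans(profiles, 2))
```

Each resulting cluster is a "discovered persona": a group of individuals whose interest patterns distinguish them from the rest of the audience, to which aggregate demographics can then be attached.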

We can then show the words, phrases, entities, and accounts that the individuals in each persona discuss with respect to each of the interests. For example, one segment might discuss Christian themes with respect to religion, while others might discuss Muslim or New Age ones. Marketers can then use these to create tailored content for ads that are delivered directly to the individuals in a given persona, using our audience dashboard. There are of course other uses, such as social science questions. I’ve personally used it to look into audiences related to Black Lives Matter and understand how different groups of people talk about politics.

Our audience dashboard is backed by Elastic Search, so you can also use search terms to find segments via self-declared allegiances for such polarizing topics.

A shout-out —

Personality and status are generally revealed through subtle linguistic indicators that my University of Texas at Austin colleague James Pennebaker has studied for the past three decades and is now commercializing with his start-up company Receptiviti. These include detecting and counting different types of words, such as function words (e.g. determiners and prepositions) or cognitive terms (such as “because” and “therefore”), and seeing how a given individual’s rates of use of those word classes compare to known profiles of the different personality types.
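The word-category counting Pennebaker describes can be sketched as per-token rates for each word class. The two word lists below are tiny illustrative stand-ins, nothing like the real research dictionaries:

```python
FUNCTION_WORDS = {"the", "a", "an", "of", "in", "to", "for", "on", "with"}
COGNITIVE_WORDS = {"because", "therefore", "thus", "hence"}

def word_class_rates(text: str) -> dict:
    """Rate of each word class per token: the raw signal that would be
    compared against known personality profiles."""
    words = text.lower().split()
    return {
        "function": sum(w in FUNCTION_WORDS for w in words) / len(words),
        "cognitive": sum(w in COGNITIVE_WORDS for w in words) / len(words),
    }

print(word_class_rates("i left because the meeting ran over"))
```

The rates themselves carry no meaning in isolation; the method depends on comparing an individual's rates to population baselines for each personality type.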

So personas, language use, topics. How do behavioral analyses contribute to overall understanding?

Many behaviors reveal important aspects about an account that a human would struggle to infer. For example, the times at which an account regularly posts is a strong indicator of whether they are a person, organization or spam account. Organization accounts often automate their sharing, and they tend to post at regular intervals or common times, usually on the hour or half hour. Spam accounts often post at a regular frequency — perhaps every 8 minutes, plus or minus one minute. An actual person posts in accordance with sleep, work, and play activities, with greater variance — including sporadic bursts of activity and long periods of inactivity.
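The posting-rhythm signal described above can be summarized as the spread of gaps between consecutive posts: near-zero spread suggests automation, large spread suggests a human rhythm. The timestamps below are invented, and a real classifier would combine many such features rather than thresholding one:

```python
from statistics import pstdev

def interval_spread(post_times_min: list[float]) -> float:
    """Std. dev. of gaps between consecutive posts (times in minutes)."""
    gaps = [b - a for a, b in zip(post_times_min, post_times_min[1:])]
    return pstdev(gaps)

spam = [0, 8, 16, 24, 32, 40]        # posts every 8 minutes, like clockwork
human = [0, 3, 45, 50, 470, 500]     # bursts of activity, then long silences
print(interval_spread(spam), interval_spread(human))
```

The spam account's perfectly regular cadence yields zero spread, while the human pattern of sporadic bursts and inactivity produces a large one.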

Any other elements?

Graph connections are especially useful for bespoke, super-specific interests and questions. For example, we used graph connections to build a pro-life/pro-choice classifier for one client to rank over 200,000 individuals in greater Texas on a scale from most likely to be pro-life to most-likely to be pro-choice. By using known pro-life and pro-choice accounts, it was straightforward to gather examples of individuals with a strong affiliation to one side or the other and learn a classifier based on their graph connections that was then applied to the graph connections of individuals who follow none of those accounts.

Could you say a bit about how People Pattern identifies salient data and makes sense of it, the algorithms?

The starting point is to identify an audience. Often this is simply the people who follow a brand and/or its competitors, or who comment on their products or use certain hashtags. We can also connect the individuals in a CRM to their corresponding social accounts. This process, which we refer to as stitching, uses identity resolution algorithms that make predictions based on names, locations, email addresses and how well they match corresponding fields in the social profiles. After identifying high confidence matches, we can then append their profile analysis to their CRM data. This can inform an email campaign, or be the start for lead generation, and more.

Making sense of data — let’s look at three aspects — demographics, interests, and location —

Our demographics classifiers are based on supervised training from millions of annotated examples. We use logistic regression for attributes like gender, race, and account type. For age, we use linear regression techniques that allow us to characterize the model’s confidence in its predictions — this lets us provide more accurate aggregate estimates for arbitrary sets of social profiles. This is especially important for alcohol brands that need to ensure they are engaging with age-appropriate audiences. All of these classifiers are backed by rules that detect self-declared information when it is available (e.g. many people state their age in their bio).
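Baldridge mentions logistic regression for attributes like account type. A self-contained gradient-descent version shows the shape of such a classifier; the two features (posts per day, follower ratio) and the training labels are invented for illustration, not People Pattern's actual feature set:

```python
import math

def train_logreg(X, y, lr=0.5, epochs=500):
    """Plain SGD logistic regression for a binary attribute."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            g = p - yi  # gradient of log-loss w.r.t. the logit
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """True if the predicted probability of class 1 is at least 0.5."""
    return 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, x)) + b))) >= 0.5

# Hypothetical features per profile: [posts_per_day, follower_ratio]
X = [[0.2, 0.1], [0.3, 0.2], [5.0, 3.0], [6.0, 2.5]]
y = [0, 0, 1, 1]  # 0 = personal account, 1 = organization account
w, b = train_logreg(X, y)
print(predict(w, b, [0.25, 0.15]), predict(w, b, [5.5, 2.8]))
```

In production one would reach for a library implementation with regularization and calibrated probabilities; the sketch only illustrates why a probabilistic classifier supports the aggregate audience estimates described in the interview.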

We capture explicit interests with text classifiers. We use a proprietary semi-supervised algorithm for building classifiers from small amounts of human supervision and large amounts of unlabeled texts. Importantly, this allows us to support new languages quickly and at lower cost, compared to fully supervised models. We can also use classifiers built this way to generate features for other tasks. For example, we are able to learn classifiers that identify language associated with people of different age groups, and this produces an array of features used by our age classifiers. They are also great inputs for deep learning for NLP and they are different from the usual unsupervised word vectors people commonly use.

For location, we use our internally developed adaptation of spatial label propagation. With this technique, you start with a set of accounts that have explicitly declared their location (in their bio or through geo tags), and then these locations are spread through graph connections to infer locations for accounts that have not stated their location explicitly. This method can resolve over half of individuals to within 10 kilometers of their true location. Determining this information is important for many marketing questions (e.g. how does my audience in Dallas differ from my audience in Seattle?), but it obviously also brings up privacy concerns. We use these determinations for aggregate analyses but don’t show them at the individual profile level. However, people should be aware that variations of these algorithms have been published and open source implementations exist, so leaving the location field blank is by no means sufficient to ensure your home location isn’t discoverable by others.
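At its core the technique looks like this minimal sketch (toy graph and coordinates; published versions operate on millions of nodes and use more robust geometric medians): seed users with self-declared coordinates, then repeatedly assign each unlabeled user the median coordinate of their located friends.

```python
# Minimal spatial label propagation sketch with invented toy data.
import statistics

graph = {  # symmetric friend graph: user -> set of friends
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"},
}
located = {  # (lat, lon) seeds from bios / geotags
    "a": (30.27, -97.74), "b": (30.25, -97.75),
}

changed = True
while changed:
    changed = False
    updates = {}
    for user, friends in graph.items():
        if user in located:
            continue
        coords = [located[f] for f in friends if f in located]
        if coords:
            # per-dimension median of the located friends' coordinates
            updates[user] = (statistics.median(c[0] for c in coords),
                             statistics.median(c[1] for c in coords))
    if updates:
        located.update(updates)
        changed = True

print(located["d"])  # inherits the location inferred for c
```

Note that "d" declares nothing and has no located friends at the start, yet still ends up with coordinates after two sweeps — which is exactly why leaving the location field blank offers little protection.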

My impression is that People Pattern, with an interplay of multiple algorithms and data types and multi-stage analysis processes, is a level more complex than most new-to-the-market systems. How do you excel while avoiding over-engineering that leads to a brittle solution?

It’s an ongoing process, with plenty of bumps and bruises along the way. I’m very fortunate that my co-founder, Ken Cho, has deep experience in enterprise social media applications. Ken co-founded Spredfast [an enterprise social media marketing platform]. He has strong intuitions about what kind of data will be useful to marketers, and we work together to figure out whether it is possible to extract and/or predict that data.

We’ve struck on a number of things that work really well, such as predicting core demographics and interests and doing clustering based on those. Other things worked, but didn’t provide enough value or were too confusing to users. For example, we used to support both interest-level keyword analysis (which words does this audience use with respect to “music”?) and topic modeling, which produces clusters of semantically related words given all the posts by people in the audience, in (almost) real time. The topics were interesting because they showed groupings of interests that weren’t captured by our interest hierarchy (such as music events), but it was expensive to support topic model analysis given our RESTful architecture, and we chose to deprecate that capability. We have since reworked our infrastructure so that we can support some of those analyses in batch (rather than streaming) mode for deeper audience analyses. This is also important for supporting multiple influence scores computed with respect to a fixed audience rather than generic overall influence scores.

Ultimately, I’ve learned to approach a new kind of analysis not just with respect to the modeling, but to consider just as carefully whether we can get the data needed at the time the user wants the analysis, how costly the supporting infrastructure will be, and how valuable the analysis is likely to be. We’ve done some post-hoc reconsiderations along these lines, which have led us to streamline capabilities.

Other factors?

Another key part of this is having the right engineering team to plan and implement the necessary infrastructure. Steve Blackmon joined us a year ago, and his deep experience in big data and machine learning problems has allowed us to build our people database in a scalable, repeatable manner. This means we now have 200+ million profiles that have demographics, interests and more already pre-computed. More importantly, we now have recipes and infrastructure for developing further classifiers and analyses. This allows us to get them into our product more quickly. Another important recent hire was our product manager Omid Sedaghatian. Omid is doing a fantastic job of figuring out what aspects of our application are excelling, which aren’t delivering expected value, and how we can streamline and simplify everything we do.

Excuse the flattery, but it’s clear your enthusiasm and your willingness to share your knowledge are huge assets for People Pattern. Not coincidentally, your other job is teaching. Regarding teaching, to conclude this interview: you’ll be at the Sentiment Analysis Symposium in New York, where pre-conference you’ll present a tutorial, Computing Sentiment, Emotion, and Personality. [Use the registration code GREENBOOK for a 10% discount.] Could you give us the gist of the material you’ll be covering?

Actually, I just did. Well, almost.

I’ll start the tutorial with a natural language processing overview and then cover sentiment analysis basics — rules, annotation, machine learning, and evaluation. Then I’ll get into author modeling, which seeks to understand demographic and psychographic attributes based on what someone says and how they say it. This is in the tutorial description: We’ll look at additional information that might be determined from non-explicit components of linguistic expression, as well as non-textual aspects of the input, such as geography, social networks, and images, things I’ve described in this interview. But with an extended, live session you get depth and interaction, and an opportunity to explore.

Thanks Jason. I’m looking forward to your session.


GRIT Says Panel Woes Are Jeopardizing MR’s Future. There’s an Answer.

State-of-the-art mobile research is the innovation our industry needs to embrace. But before that can happen, we have to overcome a common misconception about what mobile research really is, and what it can accomplish.


By Michael Smith

A running theme through the 82 pages of the most recent Greenbook Research Industry Trends Report (GRIT) is that the quality of survey sample has eroded to the point of crisis for market research.

GRIT sounds a repeated alarm over what its authors call “a known problem…with no solution in sight.” But there is a solution — all-mobile panels — which I’ll explore in a bit.

First, some facts and quotes from the report that lay out the dimensions of the panel crisis:

  • 38% of GRIT’s more than 2,000 industry respondents expect sample quality to get worse over the coming three years; fewer than 28% believe it will improve – and among clients of market research providers, optimism sank to 23%.
  • “Clients and suppliers agree that sample quality is getting worse, and there is little alignment on what to do about it. This is a perennial topic; when will the industry do something about it?”
  • “The smartphone revolution and declining participation are indeed problems that need to be addressed. Few disagree with this belief, but there is far less consensus around the extent of the problem, its implications and the range of solutions.”
  • “The difficulty of accessing truly representative sample sources… could be viewed as the single largest area of concern for the industry… We are running out of online panelists…”
  • “There are few legitimate excuses one can muster for not confronting the sample problems that plague the industry. There’s no doubt that the solutions are hard, but…far too many people…are dragging their feet.”
  • “The real existential threat to our industry is…the future of research participation. The real question therefore is when will people catch on? When will responses to these questions drive change?”
  • “We believe that the death spiral is accelerating for those researchers who fail to act. The poor experiences they create are starting to contrast markedly against the unique and engaging experiences by new entrants as well as the small number of innovators who’ve been unafraid to embrace change.”

The last sentence points the only way forward. Innovate. Embrace change.

A Formula for Successful Mobile Research

My argument is that state-of-the-art mobile research is the innovation our industry needs to embrace. But before that can happen, we have to overcome a common misconception about what mobile research really is, and what it can accomplish.

The misconception is borne out by one of GRIT’s most telling findings: 74% of respondents think they’re already doing mobile research, more than any other “emerging method.” An additional 17% are considering trying mobile for the first time.

MFour has long struggled to make the industry realize that not all mobile research is created equal. There’s good mobile and bad mobile, mobile that’s artless and mobile that’s state-of-the-art. There’s pure mobile that’s solely geared to smartphones, and diluted mobile that ties smartphone respondents to fading online survey technology. There’s mobile that fails and mobile that works.

MFour Mobile Research, Inc.’s aim since 2011 has been to define what mobile research can and should be, then create the new software and new approaches to panel-building that alone can make mobile work. Success means solving both ends of the equation: developing the right technology and recruiting and cultivating the right panel.

Developing The Right Mobile Technology

We’ve broken with all trappings of online research. Instead, we deploy technology that’s new to market research, the native app. Our proprietary app, Surveys on the Go®, instantly loads an entire survey into the respondent’s phone – including any pictures or multimedia content needed to enhance questions and answers. Embedding the survey into the phone is what makes it “native.”

Why does it matter? Because it frees respondents to complete surveys at their convenience. They don’t have to interrupt what they’re doing. They don’t need to be connected to the internet. Consequently, there’s no risk that the survey will become intolerably slow because of poor connections that lead to snail’s-pace downloads and data transfers. The survey can’t be dropped because of a lost signal.

At the opposite end of the spectrum are hybrid approaches that tether mobile devices to online survey software. A separate, back-and-forth exchange must take place for each and every question and answer. It’s a method that puts the respondent’s experience and the survey’s success at the mercy of internet connections that, as we all know, can bog down or disappear.

Crucially, mobile surveys embedded through a native app don’t have to be short and simplistic. Immune to smartphone signal issues, they can be long and sophisticated, and can exploit special smartphone capabilities such as multimedia and geolocation, which makes it possible to invite panelists to surveys while they are still shopping or have just left a store. In our experience, app-based interviews run smoothly, regardless of location and even with interview lengths exceeding 20 minutes. Even at that LOI, we’ve experienced just a 6% drop-off rate. So much for the five-minute survey limit that’s commonly but wrongly posited for mobile research.

As for building a reliable, representative sample, good technology that begets a good respondent experience goes a long way toward drastically improving participation.

Curating A Winning Mobile Panel

With the right mobile technology, it’s possible to recruit the right kind of mobile panel. Ours numbers more than a million active respondents who take surveys solely on their smartphones and other mobile devices. They seem to like it, as reflected in strong ratings and comments at the App Store and Google Play. The mere fact that respondents can give us direct, unsolicited and very public feedback on their survey experiences makes app-based mobile a superior tool for becoming aware of panel problems as they arise – and taking quick action to solve them. It makes us accountable – as any firm that’s serious about its responsibilities and confident in its capabilities ought to be.

There’s much more to tell about the all-mobile approach – not least its ability to reach Millennials, Hispanics and African-Americans who, as GRIT notes, are vital to research but increasingly inaccessible to online surveys.

The Solution to Successful Mobile Research

But my main point is that the industry needs to understand that the available mobile technologies differ drastically. Then firms can make the natural comparisons, try different mobile providers, and see which can deliver a good panel and fast, reliable, representative data.

I think the most important sentence in the new edition of GRIT comes near the end, in a section called “Opportunities for the Market Research Industry” that examines ways forward from the current dead end.

“Mobile research has been seen as an opportunity for many years, but there is a sense that now we are at the stage where we can really start to exploit mobile data gathering techniques.”

Before you can exploit mobile techniques you must learn to tell one technique from the next. You have to stop stereotyping all iterations of mobile research as prone to the same limitations and drawbacks.

GRIT has done our field a great service by refusing to sugarcoat the sample problem and by sounding a clear alarm that something has to be done about it. There’s just one point I dispute.

I wouldn’t say that market research’s pervasive sample woes are “a known problem…with no solution in sight.” There is a solution, but until now it has been overlooked.

That appears to be changing. MFour’s year-by-year growth since we debuted our native app in 2011 suggests that an increasing number of researchers are starting to make the kinds of distinctions about mobile research that need to be made.

Market researchers need to do some research in their own backyards, to gain insights into their own most crucial interests – especially, as GRIT makes clear, when the industry’s health depends on it. Getting a more sophisticated understanding of mobile, market research’s most widely-adopted but least understood “emerging method,” would be a good start.


The Top 40 Most In Demand Research Suppliers at IIeX North America

An analysis of the 221 private meetings between research buyers and suppliers at IIeX North America.



Whew! It’s been a breakneck past month as we raced towards the biggest and best IIeX event yet: IIeX North America 2016. Last week all the hard work of many folks paid off and we had a fantastic event here in Atlanta. With over 800 attendees from all around the world representing 453 unique organizations (including 115 different client-side companies) and almost 200 speakers over two and a half days, it was jam-packed with the extraordinary at every level.

If you missed it, or want to be reminded of what a great time you had, check out this sizzle reel our video team at Smilesstyles Media put together:



Of course, the lifeblood of IIeX is connecting buyers with suppliers and supporting business, so I thought it would be interesting to take a look at our Corporate Partner program meetings and see what we can glean about what clients were looking for at the conference. If you’re not familiar with the CP program, it’s pretty simple: we work with buyers of research and investors, giving them the opportunity to meet with any potential suppliers/partners they pick during the conference. They can choose from any company attending, and there is no charge for anyone to participate. It’s one of the most popular aspects of IIeX events for all involved, and we’re thrilled to be able to help the industry grow in this way.

In Atlanta, 15 partners met with 112 different supplier companies for a total of 221 private meetings! That’s an awful lot of business value generated! In many cases the Corporate Partners sent teams of people to cover both the meetings and to absorb the content from the agenda or take advantage of the informal networking and exhibitor discussions, ensuring that no stone was left unturned in getting everything they could from the conference.

The participating partners in Atlanta were:

General Mills
Harley Davidson
Nestle Purina

Now here is the interesting part: what were they looking for?

We categorized every supplier invited to the meetings and added up the number of meetings for each group and here is what we found:


IIeX CP Meetings


A few notes:

  • Before anyone asks in the comments, no, we will not divulge supplier names here. You are welcome to make guesses based on public information on attending companies and exhibitors, but we will neither confirm nor deny.
  • Neuromarketing includes any method that is based on neuroscience or cognitive science, not just EEG based research. It does not include Facial Scanning or the application of Behavioral Economics models; they are counted separately.  
  • Online Qual includes both traditional approaches and emerging “agile” or automation solutions, as well as hybrids that combine qual and quant in an online setting. It does not include Ethnography or Communities; the companies in this cluster use either a group or IDI model in an online environment.
  • Innovation Consultancy covers firms that either focus on NPD or innovation as their primary offering, or offer tools focused on those fronts. Many here are also full-service firms, but I’m using a bit of insider knowledge from working with the Corporate Partners to give context to why they were selected, and in all cases it was due to a need to find new product/service innovation models. Other companies in other categories, such as Prediction Markets, could also fit here, but since they offer a very specific approach I kept them separate.

So what’s the big picture here? Here’s my take on the highlights.

Since the inception of the CP Program, Neuromarketing has been at the top of the list. Despite only showing tepid growth in GRIT adoption rankings, interest in the value that nonconscious techniques can deliver for insights remains very high and the client-side continues to explore what suppliers can offer here.

Online Qual has been around for many years, but, as with early online surveys, not much speed or cost efficiency was gained by what amounted to a simple form-factor shift. A few years ago that began to change in quant with the advent of DIY platforms, micro-surveys and more recently automation, and the same thing is now happening with qual. Advances in recruiting tech (sample APIs, for instance) as well as the integration of new tools such as video analysis, text analytics, and AI-driven probing are making qual much more efficient, cheaper, and closer to real-time. That is driving interest in the next generation of online qualitative tools.

We are also witnessing the emergence of the next generation of social media analytics offerings, with the new players entering the market more focused on using social data to drive segmentation, nonconscious measurement, data synthesis or advanced analytics. Clients have gone through the hype cycle as well and understand the value and use cases for social data (or text in general), and are now anxious to explore how the new class of tools can deliver more value than their predecessors.

Everything else on the list shows continued interest in both established and emerging approaches on a more focused level. Obviously Analytics, Gamification, Research Automation, Sample, Image-based Data Collection, Shopper Insights, Video-based Research, Behavioral Economics Research, Facial Scanning, and Mobile Research continue to be of high interest across the board.

Just as the GRIT Emerging Technology adoption rankings can be used to help gauge where investment dollars should be spent, I think this analysis of where client interest lies at IIeX is another vital data point to consider during the strategic planning process. Of course clients come to IIeX looking for “new stuff”, and we also tend to attract suppliers that fit that mold, so there is a certain amount of confirmation bias here, and we should not assume that traditional suppliers or modes are not in demand; of course they are. However, there is no denying that, at the very least, this list points to where the industry is going, and it’s well worth exploring what this means for your business.


What Automation Means for Healthcare Market Researchers

What exactly does automation mean for healthcare market researchers and what kinds of automation are already here or on the horizon that you should be watching?


By Tom Lancaster, Chief Technology Officer, InCrowd

The latest GRIT report on 2016 market research industry trends is out, and I’m really excited to see a special new section on “Adoption of Automation.” I’ve been thinking a lot lately about automation and how it can and should be applied to healthcare-related market research. (See my guest post on marketresearch.com.)

Automation and the prospect of machines doing what humans do is of course nothing new. For businesses, analytics and reporting have long been automated across sectors and industries (think of the advent and promise of big data!). For busy professionals, word processors instead of typewriters and meeting-scheduling applications have been a boon for productivity and efficiency.

For market researchers in particular, the new GRIT report notes that charts, infographics, analysis of text, survey, and social media data are already popular automated features.

The report goes on to say, “what can be automated will ultimately be automated as well.” I couldn’t agree more. But what exactly does this mean for healthcare market researchers and what kinds of automation are already here or on the horizon that you should be watching?

Here are four important market research features to think about when considering automated solutions:

  1. Sampling—How do you fill a survey with the right respondents and how do you do it quickly? And, with physicians busier than ever, getting them to provide high-quality answers while on the go is also part of the puzzle. The GRIT report notes that “while a third of respondents already use automated sampling, many [are] concerned about the impact of this automation on data quality.” However, sophisticated sampling algorithms now let you optimize for speed and response quality. The better ones do this through a “trickle” sample methodology that goes out to smaller subsets of respondents at a time in order to reduce the number of respondents who click through only to discover that the survey has closed. Such sampling automation means a better survey experience for physicians, which in turn leads to faster responses and increased rates of participation, all in the service of higher data quality.
  2. Survey Data—One reason traditional surveys take weeks to compile and report out on is the enormous amount of time and human resources spent on filtering and cleaning data sets. New survey technology applications deal with this problem head-on by validating responses as they come in. This is also known as real-time data quality assurance, and it uses software to clean data sets during fielding or as soon as surveys close in order to provide high-quality, real-time survey data. Survey technology providers are mastering the art and science of survey data analysis, and market researchers are adopting this approach in large numbers, with nearly 42 percent using some form of survey data automation, according to the GRIT report.
  3. Tracking Studies—The social media timeline—with its algorithms to automate shares, tags, and tweets—is a great analogy for the tracking study. These are your micro-moments captured over time so you can focus on finding joy and meaning in those streams of thoughts and pictures. Market research applications are applying similar timeline algorithms to the tracking study. Automation is able to take the onerous work out of repetitively fielding tracking surveys, aggregating responses, and providing visual comparisons of multiple waves. In other words, machines do the large quantity of tedious work so market research teams can spend their time defining and analyzing KPIs—and making smart decisions with that data.
  4. Translation & Transcription—In the last 10 years, there has been an explosion of “application program interfaces” or APIs that allow different software programs to connect with each other. Think of looking up that new restaurant in Yelp using Google Maps—this is all done through automated APIs for a seamless user experience that allows you to stay in Yelp the entire time. The same is true for quantitative surveys and qualitative interviews that require translation and/or transcription. The world of APIs allows us to create a unique “network of services” that collects survey feedback, uploads the data files to a third-party transcription or translation service provider, and receives the translated materials—all through a single user interface.
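The “trickle” idea in item 1 can be sketched as a loop that invites small batches until the quota of completes is met (the function, batch size, and response model below are invented for illustration, not any vendor's actual algorithm):

```python
# Illustrative "trickle" fielding: invite small batches until the quota is
# met, so few panelists click into an already-closed survey.
import random

def trickle_field(panel, quota, batch_size, will_respond):
    """Return the completes and the number of invitations actually sent."""
    completes, invited = [], set()
    while len(completes) < quota and len(invited) < len(panel):
        remaining = [p for p in panel if p not in invited]
        batch = random.sample(remaining, min(batch_size, len(remaining)))
        invited.update(batch)
        completes.extend(p for p in batch if will_respond(p))
    return completes[:quota], len(invited)

panel = list(range(1000))
# toy response model: half the panel responds
completes, sent = trickle_field(panel, quota=50, batch_size=25,
                                will_respond=lambda p: p % 2 == 0)
print(len(completes), sent)  # quota filled with far fewer than 1000 invites
```

Compare this with blasting the full panel at once: the quota fills either way, but the trickle approach wastes far fewer invitations on a survey that is about to close.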

Such automation is really about bringing the human factor into your work. It’s how innovation and technology are in fact stimulating a demand for skills only humans have: creative thinking, critical decision-making, complex human-centered analysis.

An example from the past tells us this can happen: when automatic teller machines (ATMs) came out, bank employees were freed from conducting basic transactions behind a counter to take on higher-value responsibilities like sales and financial advising—activities that helped build customer loyalty and the company’s brand.

Imagine what automation in market research would allow you to do.



It’s all in the Process: 6 Steps for Successful Market Research

It is vital to have a unique, collaborative, client-centric approach to market research which will enable better, actionable results with higher ROI for clients.


By Tim Glowa, Bug Insights

What’s wrong with what we do now?

Typical market research projects follow a predictable process – the client tells the consultant what they want to learn (most often via a written brief), and the consultant designs a survey to share with the client for review. Through a series of meetings and survey iterations with the research client, the consultant develops a final survey, then fields it, collects the data, analyses it and presents the results (or, more often, emails the results) in a report. Job done.

But wait a minute. We think this process is flawed; indeed, we think it often results in poor research. In our view, it is vital to have a unique, collaborative, client-centric approach to market research, one which will enable better, actionable results with higher ROI for clients.

If you want to get the most out of the market research consultants you hire, follow the process we share in this article: six steps that market research clients should take in order to get the best results.

Step One

When a client hires us, the first thing we do is have a kick off call to make sure that we really understand the client’s problems, what they have tried in the past, what hasn’t worked, and what has. We want to learn as much as possible about the company, so we can address their greatest concerns. We call it “being smart” about a company, their customers and competitors. In this kick off call, it is important that we also learn hard metrics like: defection levels, market share, penetration, average tenure rates, etc. After we gather all of this information, we ask what actions they plan to take based on the data we provide for them. This is important, so the client has a clear understanding of their ideal outcome, and so do we.

Step Two

After the kick off call, our team constructs a straw man study based on the metrics the client shared with us. The straw man study is a great way to establish a starting point with the client. Typically, the straw man is a conjoint study that encompasses levels, attributes, demographics, and attitudinal programming. We have found that creating a foundational study is significantly more useful than entering the next meeting with a blank sheet of paper. The straw man gives the client something to react to. During step three, we want them to scrutinize the proposed study, keeping what they like, getting rid of what they don’t, and adding what they need.

Step Three

Our third step in the process is a survey design workshop with the client. We encourage them to bring anyone into the meeting that may be administering the survey, or people who will be implementing the results, often called the “end users”. Typically, a person from finance will join as well. This portion of the process is unique, but critical for positive market research outcomes. Since the client knows their issues from the inside, we need their input into what will work, and what will not. Collaboration between the client and market researchers is imperative during this step for a successful and fulfilling study.

The main goal of this workshop is to have 95% of the survey finished. We want to test something that is as broad as possible, but still attainable.

Step Four

Now that the survey is finalized, we work with the client, or somebody in finance, to make gross-profit estimates. During this step, we put all the features we are testing into the conjoint study. This leads into optimization, which allows us to estimate gross revenue and gross profit for the changes we may suggest to the client.
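As a toy illustration of how conjoint output feeds those estimates (the utilities, margins, and market size below are invented), a logit share-of-preference simulation converts part-worth utilities into expected gross profit per concept:

```python
# Toy share-of-preference simulation: multinomial-logit choice shares from
# conjoint part-worth utilities, multiplied by per-unit gross profit.
import math

def choice_shares(utilities):
    """Multinomial-logit shares from total product utilities."""
    expu = [math.exp(u) for u in utilities]
    total = sum(expu)
    return [e / total for e in expu]

# total utility per concept = sum of its attribute part-worths (invented)
products = {"basic": 0.2, "plus": 0.9, "premium": 0.5}
margins = {"basic": 3.0, "plus": 5.0, "premium": 9.0}  # gross profit / unit

shares = dict(zip(products, choice_shares(products.values())))
market_size = 10_000
profit = {p: shares[p] * market_size * margins[p] for p in products}
best = max(profit, key=profit.get)
print(best)  # highest expected gross profit, not highest share
```

The point of the simulation is visible in the output: the concept with the highest preference share is not necessarily the one that maximizes gross profit.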

Step Five

During this step, the survey is programmed; we gather and analyze the data, and build the choice models. Ironically, with most research programs this is where most of the time and effort is spent, with many of the previous steps skipped. But we cannot create the choice modeling our clients need without having gone through the process to this point.

Step Six

At the end of the project, we share the process, and findings with the whole group that participated in the survey design workshop. At this point, the client has been part of the whole process, so we are not presenting new research to a new audience. They are already invested in the process, and are excited to see the results. Here, we share findings, make suggestions and the client builds plans.

Satisfaction through a better process

Each one of these steps contributes to overall market research satisfaction and success. Too often, clients are left with stacks of data without any actionable steps to take. Those piles of data eventually collect dust, and little (if any) organizational change is implemented. This is a core issue for the market research industry, and for clients too. If you don’t want to see your research dollars wasted, you need a better process.