Note: This post was originally published as “An Evolution Technology Prediction Markets Case Study”
Like most of you, I read a great deal. That’s where much of my learning and understanding comes from. Recently, for example, I was reading about web, behavioral and similar group analytics and discovered that not all analytics methods and packages work equally well for all companies. The tools which are best for company A might be the worst for company B because the two companies have different business models, have different KPIs, or define “standard” metrics differently while using the same terms in their definitions. Companies need to determine what they really want to measure before they can decide the best way to measure it.
More salt was tossed on the wound when I discovered that many analytics packages aren’t even true to themselves; results for a single metric on a single site can vary wildly, data points might be added or dropped due to collection methods, and sometimes what is being reported might not even be what is or should be measured.
Later that same day I was reading a history of the use and abuse of the press during times of war. Specifically, I was reading about how the US Pentagon manipulated stories and analysis until it found a metric which military advisers thought both made the military look good and gave the public the sense the military was doing its job. The metric they came up with was the body count.
Reading how analytics companies are searching for common definitions and KPIs which make them and their methods look good, and reading only a few hours later about the evolution of “the body count” and how it was and is used, I couldn’t help but notice that the language in both readings was so similar, so directed at arguing the same points for the same reasons, that the two authors could have swapped manuscripts and it wouldn’t have mattered much. Traditional analytics is using body counts; it’s just calling them something different.
I don’t fault any company for doing its job, part of which is to convince consumers that the company’s methods, products and services are the standards by which all others should be compared. But any individual or group proclaiming they have a lock on how something should be done always makes me nervous. History demonstrates that the only constant is change, and today’s lock is tomorrow’s open door.
The most basic metrics, the unique visitor and the conversion, are essentially body counts. The former counts all troops on the field and the latter counts survivors. We count the one and debrief the other. Metrics beyond those basics are rarely considered: the wounded, the MIAs, the counts of others who can no longer tell us their story, who can’t share what really happened or what really got them where they are.
No one seems to want to count the liberated, the survivors, the wounded and evacuees, the escapees. No one wants to know their names or hear from them directly; they can’t provide actionable economic impact, so they’re ignored.
Yet research shows that these people are the ones with the most interesting stories to tell and that they want to tell us their stories. That’s what people do, they tell stories to each other, some true, some not. It’s how we create communities; we share experience, we seek to touch each other with words if not our hands, and all people do it. Even people who push others away need something to push against, to touch, until the distance is a comfortable one.
These people — those who are liberated, who survive, who escape, who are wounded and rise again to tell of their experience with our websites, our marketing material, our leave-behinds and downloads — want to tell us their stories.
The current method of analysis, body counts and debriefing, is good at telling us what happened. It’s not good at telling us what could have happened and why it didn’t. In a field where the difference between 1% and 1.1% is the difference between closing the doors and being profitable, knowing the “could have”s and “why”s is at least as important as knowing what happened.
Making these types of prediction markets work requires sophisticated software, time and the willingness of lots of people to participate. The market needs to be defined, set up and advertised, a reasonable reward needs to be established, and participants need to be solicited and selected. Once you have the participants you need to ask them questions, then do some clever mathematics to normalize the results. Think of a focus group, albeit a very large one (prediction markets using this method vary in size from 1,500 to 31,000 active participants), and you begin to get the idea. Also, make the focus group a broad demographic: everyone in your group should have some familiarity with the subject matter, but beyond that anybody’s opinion is fair game. Prediction markets differ from the traditional focus group concept because people taking part in traditional focus groups know that they are being evaluated and that they’ll be rewarded for their time regardless of outcome, according to Rivier Business Professor Eric Drouart, former VP of International Operations for Bristol-Myers Squibb. This prediction market method is more like the real world than a focus group in the sense that participants are rewarded when the markets become profitable.
Another prediction market methodology completely bypasses the problems inherent in focus groups: time, setup and development costs, active participation and rewards. This methodology also makes use of some clever mathematics, but not to normalize the polling process and results. Instead, it uses mathematical tools called concept manifolds and solid probabilities to create virtual (or “synthetic”) cultures. A simple way to think of how synthetic cultures work is this: if you count up the little opinions of everybody in a group, you start to see a group opinion emerging on the big things. Synthetic cultures are like personae on steroids. Traditional personae are useful but limiting; you can create a target profile but, unless your entire market matches that one target profile, you have to create different personae for everyone in your market segment.
Synthetic cultures allow you to create a group persona or cultural identity that matches entire demographics. Instead of a single persona, Pat (a mid-30s accountant transplanted from the Midwest to Boston, interested in good wines and personal fitness, no kids but in a good relationship), you get Pat, all of Pat’s friends, co-workers, people who shop at the same stores Pat shops at, people with the same upbringing who went into different professions, and so on. Research In Motion has used synthetic cultures to establish itself in new markets, and Forrester Research’s Shar VanBoskirk used synthetic culture concepts in her 8 Nov 05 NEDMA presentation, Integrated Marketing Grows Up.
Prediction markets using synthetic cultures generate their predictions via a sophisticated knowledge of an audience’s beliefs and culture (it’s socio-anthropology). An added advantage of synthetic cultures is that they don’t require the markets to exist, virtually or otherwise. Synthetic cultures predict not only the outcomes of synthetic markets (“Will there be more police dramas on this Fall’s TV schedule?”); they also predict a target audience’s responses to changes in a market (“Will people be willing to watch more police dramas this Fall than are willing to watch them now?”). Synthetic culture prediction markets answer more than whether something will or won’t happen. They venture into the realm of whether or not what happens will make a difference.
The power of either prediction market method comes from the diversity of its participants, and they’ve accurately predicted election outcomes (as documented in Reading Virtual Minds Volume 1: Science and History and in “Predicting Election Outcomes via NextStage’s TargetTrack” and “Why Dean Led, Kerry was Droll and Lieberman Foundered in 2004”) and top economic performers, among other things.
Learning if Yellow Cars Will Sell
Let’s do a little exercise to give you an idea of how the BMech and PA aspects of prediction markets gain their predictive power, and how knowing the opinions on little things determines the opinions on big things. Let’s start by asking ten people the question, “How would you rate the color yellow; good, bad or indifferent?” We find out that five people like yellow, three don’t like it and two people have no opinion. That equates to 50% good, 30% bad and 20% indifferent.
Now ask a second question, “What do you think of this car; good, bad, indifferent?” The results with the same ten people are 20% good, 40% bad and 40% indifferent. Ask these two questions of a sufficiently large group of people and you never have to ask them “What would you think of this car if it was yellow?” because their likely answer will be the average of their previous two answers: 35% will like the car if it’s yellow, 35% won’t like it and 30% won’t care. Now share this result with the car manufacturer who commissioned this prediction market and their decision is
- not to produce that car in yellow for the mass market because 65% of the market either won’t like it or won’t be interested,
- but to market it aggressively to the 35% demographic which will respond favorably.
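The averaging step in the exercise above can be sketched in a few lines of code. This is a minimal illustration of the blend described in the text, not NextStage’s actual mathematics; the function name and data structure are my own.

```python
def blend_opinions(poll_a, poll_b):
    """Average two opinion distributions, category by category.

    A hypothetical stand-in for the 'average of their previous two
    answers' step in the yellow-car exercise.
    """
    return {k: round((poll_a[k] + poll_b[k]) / 2, 4) for k in poll_a}

# The two polls from the exercise: opinions on yellow, opinions on the car.
yellow = {"good": 0.50, "bad": 0.30, "indifferent": 0.20}
car = {"good": 0.20, "bad": 0.40, "indifferent": 0.40}

print(blend_opinions(yellow, car))
# {'good': 0.35, 'bad': 0.35, 'indifferent': 0.3}
```

Run against the numbers in the exercise, it reproduces the 35% / 35% / 30% split without ever asking the third question directly.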
Wherein Lies the Power
The key to prediction markets’ power is nothing new. Whether you’re working with synthetic markets or synthetic cultures, you have to know
- how to ask questions,
- how to codify the answers and
- how to find the best people to ask.
The caveat to all three of these is that you have to remove all bias from the questions, codification of the answers and from choosing people to answer the questions. That’s not easy to do.
One of the methods NextStage uses to remove bias is to go where people go and simply ask questions. See if you can match the following products to the venues where different synthetic cultures got their start:
2) MPEG/WAV Player
4) Notebook Computer
5) Branded Website
6) Undergraduate College
7) Airline Frequent Flyer Program
8) Brick&Mortar Bookseller
9) Theatrical Movie
b) Fastfood Restaurant
c) Grocery Store
h) Upscale Restaurant
i) Walking the Dog
If you looked at the correct answers and were surprised at how many different venues are used for traditional market testing, think back to removing biases. A cross section is best when it crosses several sections, not just one or two.
One of the tricks to making synthetic cultures work is to ask people to convince you, not ask them to let you convince them. For example, a test to determine if a particular PDA is going to be successful might start by walking up to people with a PDA and politely saying, “Excuse me, I notice you have a such-and-such and I’m thinking about getting one. Would you recommend yours?” Two or three innocuous, curious and well-structured questions later you have two books’ worth of data. Do that ten times, match the results to the socio-anthropologic norms of your target demographic, and you have all the data you need to determine the entire demographic’s response to a given campaign, product or service.
Removing bias in questions means crafting two sets of questions. The first set of questions can be answered with “Yes”, “No” or “Maybe”. The second set of questions is numerical and grows out of the first. For example, the first question is “Do you like the color yellow?” The person answers, “Yes”. Now ask a second-set question, “If you had to put a number, 1-100, on that, do you like the color yellow at 100? At 75? At 12?” Together these two questions capture a person’s soft and hard experience, or what psycholinguists and semioticians call qualia. Essentially, the two questions combine to ask “How much of a ‘yes’ is that ‘yes’?” Someone’s 60% of “Yes” doesn’t mean there’s 40% of “No”; it means there’s 40% of “not quite ‘yes’ enough” or “not ‘yes’ enough to be 100%”. A slight variation is to replace the 1-100 scale with a time-based scale. The first question might be “Do you like this book?” and the second question might be “Would you read something else from the same author in a week? In a month? In a …?” Synthetic culture prediction markets used this way separate consumers’ intent from their wishful thinking.
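The two-question scheme above can be sketched as a simple scoring function: a categorical answer sets the direction and the 1-100 intensity sets how much of that answer it really is. This is a hedged illustration under my own assumptions about the encoding; the function name and the sign convention are mine, not NextStage’s.

```python
def qualia_score(answer, intensity):
    """Combine a yes/no/maybe answer with its 1-100 intensity.

    A 'yes' at 60 is 60% of a full 'yes'; the remaining 40% is
    'not quite yes enough', not a 'no'. 'Maybe' carries no direction.
    """
    signs = {"yes": 1.0, "maybe": 0.0, "no": -1.0}
    return signs[answer] * intensity / 100.0

print(qualia_score("yes", 60))    # 0.6
print(qualia_score("no", 75))     # -0.75
print(qualia_score("maybe", 50))  # 0.0
```

A time-based second question (a week, a month, a year) would swap the 1-100 scale for a duration, but the pairing of direction and degree works the same way.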
Getting in the Game
NextStage’s prediction markets tools take two forms. The first form is NextStage’s award-winning TargetTrack™ product. Unlike other prediction markets products, NextStage’s TargetTrack™ utilizes the combined experiences of over 25,000 individuals to determine how well material, products, candidates and businesses will fare in current and future situations, and in many cases results are available in less than 30 seconds. Marketers, advertisers, economists and politicians can determine how slight changes in product placement, design, statements or agendas will affect a very large population or a very small one — say all Americans versus all Hispanic-Americans or Asian-Americans, or all men versus all women — in a matter of moments, and refine their messages to optimize the outcome of any or all markets they’re interested in.
The second form of NextStage’s prediction markets tools is embedded in its Intelligent Analytics™ products. Unlike traditional web analytics, which provide body counts, NextStage’s Intelligent Analytics™ determines both the “common thread” or consensus and the “usual story” or average response. The average response is the end product of traditional survey and focus group studies; the consensus response is the end product of prediction markets. Companies using NextStage’s Intelligent Analytics™ to monitor their website activity learn more than just how many people were on what page. They also learn what visitors really think about the site’s navigation, layout, content, and more.