How We Do It
Every week, we pick two ads that are running or have recently run in the presidential campaign. Sometimes the ads are run by candidates and sometimes by other groups interested in the election.
We send these ads to our partners, G2 Analytics and SageEngage, and they create an ad rating event that lets people rate the ads in real time as they watch them. People rate the ads by telling us the moments in the ad they like and the moments they dislike.
We are interested in understanding which ads are the most effective and why. To do this research, we turn to some well-known social scientific methods that help us highlight how well the ads are working on viewers.
First, in partnership with the survey research firm YouGov, we invite 1,000 people each week to rate that week’s ads. Different people rate the ads each week so that raters don’t grow tired of the task or become sensitized to the rating process. We don’t want expert raters — we want regular people!
YouGov ensures that the sample of 1,000 people represents the general adult population in the United States on things that are relevant to political decision-making. To read more about the YouGov method for doing this, check out this paper.
As the people are being invited to rate the ads, we randomly assign them to one of four groups. The first group sees only the first selected ad for that week. The second group sees only the second selected ad. The third group sees both ads, and the fourth group sees an ad with something entirely unrelated to the presidential campaign — Peyton Manning singing the Nationwide Insurance jingle. It’s important that even people in the no-politics ad group watch an ad to ensure we are comparing people who are able and willing to watch an ad to one another.
It’s also important that people are randomly assigned to the groups so that at the end of the ratings for that week we can be relatively confident that all the groups look roughly the same. Because of the randomization we should have the same number of men, women, Republicans, Democrats, and even cat-owners in each group. This means that when we compare things like candidate favorability ratings across the groups, we can be assured we are comparing the same types of people.
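For readers curious about the mechanics, here is a minimal sketch of what random assignment to the four groups looks like in code. The group labels and the use of Python’s random module are purely illustrative; this is not YouGov’s actual system.

```python
import random
from collections import Counter

# The four conditions described above (labels are illustrative).
GROUPS = ["ad_1_only", "ad_2_only", "both_ads", "non_political_control"]

def assign_group() -> str:
    """Assign one respondent to a condition completely at random."""
    return random.choice(GROUPS)

# Assign a hypothetical weekly sample of 1,000 invited respondents.
assignments = [assign_group() for _ in range(1000)]

# Each group should end up with roughly 250 people, and characteristics like
# party, gender, or cat ownership should balance out across groups on average.
print(Counter(assignments))
```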
We invite people on Tuesdays and finish the ratings on Fridays. Then we analyze the data and put it on the website for you to investigate.
Our overall ranking of the ads — the SpotCheck Score — is a function of the ad’s engagement and effectiveness. We measure the effectiveness of an ad by comparing the work it does for the sponsor to the way the sponsor is evaluated by people in the control group, who didn’t see the sponsor’s ad. Specifically, we measure the boost the ad gives the sponsoring candidate’s net favorability rating relative to the control group and the drain the ad produces in the opponent’s net favorability rating relative to the control group. If candidates raise their favorables and lower their unfavorables, and do the opposite to their opponents (relative to the ratings of people in the control condition), they will have a high effectiveness score.
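To make the effectiveness calculation concrete, here is a rough sketch. It assumes, as our own simplification, that net favorability is the percent rating a candidate favorably minus the percent rating them unfavorably; the function names and numbers are hypothetical.

```python
def net_favorability(pct_favorable: float, pct_unfavorable: float) -> float:
    """Net favorability: percent favorable minus percent unfavorable (illustrative definition)."""
    return pct_favorable - pct_unfavorable

def effectiveness(sponsor_treated, sponsor_control, opponent_treated, opponent_control) -> float:
    """Boost for the sponsor plus drain on the opponent, both measured against the control group.

    Each argument is a (pct_favorable, pct_unfavorable) pair for that candidate,
    among people who saw the ad (treated) or did not (control).
    """
    boost = net_favorability(*sponsor_treated) - net_favorability(*sponsor_control)
    drain = net_favorability(*opponent_control) - net_favorability(*opponent_treated)
    return boost + drain

# Hypothetical numbers: the ad lifts the sponsor and drags down the opponent.
print(effectiveness(sponsor_treated=(52, 40), sponsor_control=(48, 44),
                    opponent_treated=(44, 48), opponent_control=(47, 45)))  # 8 + 6 = 14
```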
We then calculate the average percent of the sample that is emotionally engaged with the ad per second, using the data from G2 Analytics and SageEngage, and we multiply that percent by the effectiveness score to give us a sense of the overall impact of the ad. An ad that is good at persuading people will be high on both effectiveness and emotional engagement. In fact, using data from the first six weeks of SpotCheck, we find that our SpotCheck scores correlate with people’s ratings of the ads’ truthfulness and quality at very high levels (.5 and .6, respectively). We also find in early testing that ads with high SpotCheck scores see more social media sharing on sites like Facebook and Twitter.
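Putting the pieces together, the overall score can be sketched as a simple multiplication of per-second engagement and effectiveness. This reflects our reading of the description above, not the project’s published code, and the numbers are hypothetical.

```python
def spotcheck_score(avg_share_engaged_per_second: float, effectiveness_score: float) -> float:
    """Overall impact: the average share of the sample emotionally engaged per second,
    multiplied by the ad's effectiveness score."""
    return avg_share_engaged_per_second * effectiveness_score

# Hypothetical example: 35% of the sample engaged per second, effectiveness of 14.
print(spotcheck_score(0.35, 14))  # 4.9
```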
This project is funded by UCLA and Vanderbilt University with additional support from the Carnegie Corporation’s Andrew F. Carnegie Fellows program. G2 Analytics, SageEngage, and YouGov have also contributed products and services.
We understand the value of these data and experiments and plan to archive the data, questionnaires, and links to the ads for other researchers to explore and use in their work within one year of the newly elected president’s inauguration.