Information about our study

Hello and welcome to Inoculation Science. If you are seeing this page, you were redirected here after clicking on a link that was shown to you after watching an ad on YouTube. On this page, you can learn more about the purpose of the ad you watched and about the scientific research behind it.

Inoculation Science is a project by the University of Cambridge, the University of Bristol, and Google Jigsaw in which we’re trying to find out whether it’s possible to improve people’s ability to spot manipulation and persuasion techniques commonly used by online content creators. This idea is grounded in inoculation theory, a framework from social psychology that posits that it’s possible to build psychological resistance against future unwanted persuasion attempts through “prebunking” (or pre-emptive debunking). To learn more about how this works, please click this link.

As part of this project, we’ve created a series of inoculation videos, each of which explains a specific manipulation technique or logical fallacy commonly encountered online, such as emotionally manipulative language, false dichotomies, or incoherence. Please click this link to watch the videos.
To test whether these videos actually improve people’s ability to spot potentially manipulative content, we ran a series of randomized controlled studies in a laboratory setting, with very positive results: compared to a control group, participants who watched an inoculation video became significantly better at discerning manipulative from non-manipulative social media content, were more confident in their ability to assess the quality of news on social media, improved in their ability to identify trustworthy and untrustworthy content, and indicated being less willing to share manipulative content with others. The results of these studies are currently under review at a peer-reviewed scientific journal and will be linked here once they are published.
For this specific study, we were interested in finding out whether the videos (about emotionally manipulative language, which you can watch here, or false dichotomies, which you can watch here) are effective at improving people’s ability to spot manipulative social media content when they are run as advertisements on YouTube. To do so, we ran a YouTube ad campaign among a random sample of YouTube users who met the following criteria: 1) 18 years or older; 2) a US resident; 3) English-speaking; and 4) having recently watched at least one political news video on YouTube. Within this sample, people were randomly shown either the emotional language video or the false dichotomies video as a YouTube ad. A random 30% of the people who were shown either video as an ad were also shown a single survey question, which asked them to identify which manipulation technique was being used in a fictional social media post. For each video, we created 20 such social media posts, which we stripped of all source and other identifying information. Half of these posts (10) were phrased to be manipulative (using the technique explained in the corresponding video), while the other half were each manipulative post’s neutral (non-manipulative) counterpart. You can see an example of a manipulative post and its neutral pair below.
As you can see, both posts are about baby formula posing a potential risk. However, the manipulative post (on the left) contains words such as “horrific”, “terrifying”, “helpless”, and “despair”: moral-emotional words that research shows can increase the viral potential of online content. The post on the right is worded much more neutrally and does not contain moral-emotionally evocative phrasing. In total, we created 10 such manipulative-neutral pairs for each video.
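To make the assignment logic described above more concrete, here is a minimal illustrative sketch in Python. It is not the code used to run the campaign (ad delivery and the survey prompt are handled by YouTube’s advertising platform), and the names and numbers in it, such as `SURVEY_PROBABILITY` and the simulated sample size, are assumptions chosen purely for the example.

```python
import random

# Illustrative simulation of the assignment design described above.
# This is NOT the actual campaign code: ad delivery and the survey prompt
# are handled by YouTube's ad platform. Names and numbers are assumptions.

VIDEOS = ["emotional_language", "false_dichotomies"]
SURVEY_PROBABILITY = 0.30  # the random 30% who also see the survey question

def assign(user_id: int) -> dict:
    """Assign one eligible viewer to a video ad and, with 30% probability, the survey."""
    return {
        "user": user_id,
        "video": random.choice(VIDEOS),               # one of the two inoculation videos
        "survey": random.random() < SURVEY_PROBABILITY,
    }

random.seed(0)  # reproducible example
assignments = [assign(uid) for uid in range(10_000)]  # hypothetical sample size
surveyed = sum(a["survey"] for a in assignments)
print(f"{surveyed} of {len(assignments)} simulated viewers also saw the survey question")
```

Running the script shows roughly 30% of the simulated viewers receiving the survey question, mirroring the survey rate described above.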
In this study, we are testing the hypothesis that participants who watched the “emotional language” or “false dichotomies” video as a YouTube ad are significantly better than a control group (a random group of YouTube users who met the recruitment criteria and were shown only a survey question, not an inoculation video) at correctly identifying the use of a particular manipulation technique in social media content. We will update this page once we know whether this hypothesis was confirmed.
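As a simplified illustration of how such a between-group comparison can be evaluated, the sketch below runs a standard two-proportion z-test in Python. The counts are hypothetical, and the study’s actual statistical analysis may differ; the example only shows the shape of the comparison between the inoculation and control groups.

```python
import math

# Simplified illustration of comparing correct-identification rates between
# the inoculation group and the control group with a two-proportion z-test.
# The counts are hypothetical; the study's actual analysis may differ.

def two_proportion_z(correct_a: int, n_a: int, correct_b: int, n_b: int) -> float:
    """z-statistic for H0: both groups identify the technique at the same rate."""
    p_a, p_b = correct_a / n_a, correct_b / n_b
    pooled = (correct_a + correct_b) / (n_a + n_b)            # pooled rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical data: 640/1000 correct answers in the inoculation group
# versus 550/1000 in the control group.
z = two_proportion_z(640, 1000, 550, 1000)
print(f"z = {z:.2f}")  # a large positive z favours the inoculation group
```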
If you answered a survey question after watching one of the above videos, your response will be recorded and used in our study. To protect your privacy, we do not record any YouTube usernames or other personally identifying information. We are interested only in participants’ responses to the survey questions as part of our scientific research, and therefore do not collect any demographic or other data. Because we do not record any identifying information (and therefore cannot match responses to individuals), we are unfortunately unable to remove responses from our dataset upon request once they have been recorded. This study was reviewed by the Cambridge University Psychology Research Ethics Committee. Should you have any questions or concerns, please contact the study coordinator, Dr. Jon Roozenbeek ([email protected]).