Thanks for visiting! I’m a quantitative social scientist. My research aims to advance understanding of public attitude and behaviour change and the role of public communication therein, and to apply this understanding to help solve societal problems such as vaccine hesitancy, pandemic preparedness, and the negative effects of political propaganda. I received my Ph.D. in experimental psychology from Royal Holloway, University of London, and currently hold an early career research fellowship from the Leverhulme Trust, under which I’m studying the impact of personalised/targeted communication on voters’ attitudes and behaviour in democracies. I’m based at the Centre for the Politics of Feelings in London and am also a research affiliate of the Human Cooperation Lab at the Massachusetts Institute of Technology. Beyond my substantive research areas, I’m interested in metascience, the R programming language, and giving what we can. If you’d like to know more, please get in touch!
Political campaigns increasingly conduct experiments to learn how to persuade voters. Little research has considered the implications of this trend for elections or democracy. To probe these implications, we analyze a unique archive of 146 advertising experiments conducted by US campaigns in 2018 and 2020 using the platform Swayable. This archive includes 617 advertisements produced by 51 campaigns and tested with over 500,000 respondents. Importantly, we analyze the complete archive, avoiding publication bias. We find small but meaningful variation in the persuasive effects of advertisements. In addition, we find that common theories about what makes advertising persuasive have limited and context-dependent power to predict persuasiveness. These findings indicate that experiments can compound money’s influence in elections: it is difficult to predict ex ante which ads will persuade; experiments help campaigns do so, but the gains from these findings accrue principally to campaigns well-financed enough to deploy these ads at scale.
Many expert assessments suggest the world is likely to witness another pandemic on the scale of COVID-19 in the future. How can the social and behavioral sciences contribute to a successful response? Here we conduct a cost-effectiveness analysis of an under-evaluated yet promising tool from modern social and behavioral science: the randomized controlled trial conducted in an online survey environment (“in-survey RCT”). Specifically, we analyze whether, in a pandemic context, a public health campaign that uses an in-survey RCT to pre-test two or more different message interventions — and then selects the top-performing one for its public outreach — has greater impact in expectation than a campaign that does not use this strategy. Our results are threefold. First, in-survey RCT pre-testing is plausibly cost-effective for public health campaigns with typical resources. Second, in-survey RCT pre-testing has potentially powerful returns to scale: for well-resourced campaigns, it looks highly cost-effective. Third, additional evidence for several key parameters could both confirm these patterns and further increase the cost-effectiveness of in-survey RCT pre-testing for public health campaigns. Together our results suggest in-survey RCT pre-testing can plausibly increase the impact of public health campaigns in a pandemic context, and identify a research agenda to inform pandemic preparedness.
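The expected-value logic behind pre-testing can be illustrated with a toy Monte Carlo simulation. All numbers below are hypothetical placeholders, not parameters from the paper: two candidate messages have unknown true effects, a noisy pretest estimates each, and the campaign deploys the apparent winner. The comparison is against a benchmark campaign that effectively picks a message at random.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup (not the paper's parameters): two candidate messages
# with true persuasive effects on some outcome scale, and a pretest that
# estimates each effect with independent sampling noise.
true_effects = np.array([0.02, 0.05])
pretest_se = 0.01          # assumed standard error of each pretest estimate
n_sims = 10_000

# Simulate many pretests; in each, deploy the message that looks best.
estimates = true_effects + rng.normal(0.0, pretest_se, size=(n_sims, 2))
chosen = estimates.argmax(axis=1)

ev_with_pretest = true_effects[chosen].mean()
ev_without_pretest = true_effects.mean()   # benchmark: random choice

print(ev_with_pretest > ev_without_pretest)  # prints True
```

Because the pretest noise is small relative to the gap between the messages, the simulated campaign almost always deploys the genuinely better message, so its expected effect approaches the best message’s true effect rather than the average. The paper’s cost-effectiveness question is then whether that gain justifies the pretest’s cost.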
Much concern has been raised about the power of political microtargeting to sway voters’ opinions, influence elections, and undermine democracy. Yet little research has directly estimated the persuasive advantage of microtargeting over alternative campaign strategies. Here, we do so using two studies focused on U.S. policy issue advertising. To implement a microtargeting strategy, we combined machine learning with message pretesting to determine which advertisements to show to which individuals to maximize persuasive impact. Using survey experiments, we then compared the performance of this microtargeting strategy against two other messaging strategies. Overall, we estimate that our microtargeting strategy outperformed these strategies by an average of 70% or more in a context where all of the messages aimed to influence the same policy attitude (Study 1). Notably, however, we found no evidence that targeting messages by more than one covariate yielded additional persuasive gains, and the performance advantage of microtargeting was primarily visible for one of the two policy issues under study. Moreover, when microtargeting was used instead to identify which policy attitudes to target with messaging (Study 2), its advantage was more limited. Taken together, these results suggest that the use of microtargeting—combining message pretesting with machine learning—can potentially increase campaigns’ persuasive influence and may not require the collection of vast amounts of personal data to uncover complex interactions between audience characteristics and political messaging. However, the extent to which this approach confers a persuasive advantage over alternative strategies likely depends heavily on context.
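The core targeting rule described above can be sketched with simulated numbers. This is a simplified stand-in, not the studies’ method: the studies combined message pretesting with machine learning, whereas here a simple per-group argmax over pretest cells plays that role, and the ad counts, group counts, and effect estimates are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated pretest results (hypothetical values): estimated persuasive
# effect of each of 3 candidate ads within each of 4 audience groups
# defined by a single covariate (e.g. an age bracket).
n_ads, n_groups = 3, 4
pretest_effects = rng.normal(loc=0.1, scale=0.05, size=(n_ads, n_groups))

# Microtargeting rule: show each group the ad with the largest
# estimated effect for that group.
best_ad_per_group = pretest_effects.argmax(axis=0)
targeted = pretest_effects[best_ad_per_group, np.arange(n_groups)].mean()

# Untargeted benchmark: show every group the single ad that is
# best on average across all groups.
best_overall = pretest_effects.mean(axis=1).argmax()
untargeted = pretest_effects[best_overall].mean()

# In-sample, per-group selection can never do worse than one ad for all.
print(targeted >= untargeted)  # prints True
```

The in-sample guarantee at the end is exactly why out-of-sample evaluation matters: as the abstract notes, the realized advantage of targeting depends on context, and apparent gains from finer-grained targeting (more covariates) may not materialize.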
It is widely assumed that party identification and loyalty can distort partisans’ information processing, diminishing their receptivity to counter-partisan arguments and evidence. Here we empirically evaluate this assumption. We test whether American partisans’ receptivity to arguments and evidence is diminished by countervailing cues from in-party leaders (Donald Trump or Joe Biden), using a survey experiment with 24 contemporary policy issues and 48 persuasive messages containing arguments and evidence (N = 4,531; 22,499 observations). We find that, while in-party leader cues influenced partisans’ attitudes, often more strongly than the persuasive messages, there was no evidence that the cues meaningfully diminished partisans’ receptivity to the messages, even though the cues directly contradicted the messages. Rather, persuasive messages and countervailing leader cues were integrated as independent pieces of information. These results generalized across policy issues, demographic subgroups, and cue environments, and challenge existing assumptions about the extent to which party identification and loyalty distort partisans’ information processing.
Party elite cues are among the best-established influences on citizens’ political opinions. Yet there is substantial variation in effect sizes across studies, constraining the generalizability and theoretical development of party elite cues research. Understanding the causes of variation in party elite cue effects is thus a priority for advancing the field. In this paper, I estimate the variation in party elite cue effects that is caused simply by heterogeneity in the policy issues being examined, through a reanalysis of data from existing research combined with an original survey experiment comprising 34 contemporary American policy issues. My estimate of the between-issue variation in effects is substantively large, plausibly equal to somewhere between one-third and two-thirds the size of the between-study variation observed in the existing literature. This result has important implications for our understanding of party elite influence on public opinion and for the methodological practices of party elite cues research.