Hello! I’m a quantitative social scientist interested in attitude and behaviour change and computational methods. I received my Ph.D. in experimental psychology and am currently on a Leverhulme Trust research fellowship in which I’m studying the potential impact of personalised/targeted communication on public opinion and behaviour. I’m based at the Centre for the Politics of Feelings and RHUL in London, and am also a research affiliate of the Human Cooperation Lab at the Massachusetts Institute of Technology. In addition to my substantive research areas, I’m interested in metascience, research methods, the R programming language, and effective altruism. If you’d like to know more about me or my research, please get in touch!
Political campaigns increasingly conduct experiments to learn how to persuade voters. Little research has considered the implications of this trend for elections or democracy. To probe these implications, we analyze a unique archive of 146 advertising experiments conducted by US campaigns in 2018 and 2020 using the platform Swayable. This archive includes 617 advertisements produced by 51 campaigns and tested with over 500,000 respondents. Importantly, we analyze the complete archive, avoiding publication bias. We find small but meaningful variation in the persuasive effects of advertisements. In addition, we find that common theories about what makes advertising persuasive have limited and context-dependent power to predict persuasiveness. These findings indicate that experiments can compound money’s influence in elections: it is difficult to predict ex ante which ads will persuade; experiments help campaigns do so; but the resulting gains accrue principally to campaigns well-financed enough to deploy the winning ads at scale.
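To make the key quantity concrete: with one effect estimate and one sampling variance per tested advertisement, a random-effects model separates true between-ad variation in persuasiveness from sampling noise. The sketch below is illustrative only, using simulated placeholder numbers rather than the archive's data, and assumes the metafor package; `yi` and `vi` are hypothetical ad-level effect estimates and variances.

```r
# Sketch: estimating between-ad variation in persuasive effects with a
# random-effects model. `ad_effects` is a hypothetical data frame with one
# row per tested advertisement: an estimated treatment effect (`yi`) and
# its sampling variance (`vi`).
library(metafor)
set.seed(1)

ad_effects <- data.frame(
  yi = rnorm(146, mean = 0.02, sd = 0.03),   # placeholder effect estimates
  vi = runif(146, 0.0001, 0.0004)            # placeholder sampling variances
)

# Random-effects model: tau^2 is the variance of the true ad effects,
# i.e. how much persuasiveness genuinely differs across ads beyond noise.
fit <- rma(yi = yi, vi = vi, data = ad_effects, method = "REML")
summary(fit)
sqrt(fit$tau2)  # SD of true persuasive effects across advertisements
```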
Much concern has been raised about the power of political microtargeting to sway voters’ opinions, influence elections, and undermine democracy. Yet little research has directly estimated the persuasive advantage of microtargeting over alternative campaign strategies. Here, we do so using two studies focused on U.S. policy issue advertising. To implement a microtargeting strategy, we combined machine learning with message pretesting to determine which advertisements to show to which individuals to maximize persuasive impact. Using survey experiments, we then compared the performance of this microtargeting strategy against two other messaging strategies. Overall, we estimate that our microtargeting strategy outperformed these strategies by an average of 70% or more in a context where all of the messages aimed to influence the same policy attitude (Study 1). Notably, however, we found no evidence that targeting messages by more than one covariate yielded additional persuasive gains, and the performance advantage of microtargeting was primarily visible for one of the two policy issues under study. Moreover, when microtargeting was used instead to identify which policy attitudes to target with messaging (Study 2), its advantage was more limited. Taken together, these results suggest that the use of microtargeting—combining message pretesting with machine learning—can potentially increase campaigns’ persuasive influence and may not require the collection of vast amounts of personal data to uncover complex interactions between audience characteristics and political messaging. However, the extent to which this approach confers a persuasive advantage over alternative strategies likely depends heavily on context.
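As a rough illustration of the pretest-then-target logic described here (not the studies' actual code), one can fit an outcome model per message arm on pretest data and then show each new respondent the message with the largest predicted effect for someone with their covariates. Everything below is simulated and hypothetical: the `pretest` data frame, the covariates, and the message labels are placeholders, and the linear models stand in for whatever machine-learning method a campaign might actually use.

```r
# Sketch of a pretest-then-target pipeline (illustrative only). `pretest` is a
# hypothetical data frame with a randomly assigned message (including a
# "control" condition), respondent covariates, and a post-treatment attitude.
set.seed(1)
pretest <- data.frame(
  message  = sample(c("control", "msg_a", "msg_b", "msg_c"), 5000, replace = TRUE),
  age      = sample(18:80, 5000, replace = TRUE),
  partisan = sample(c("dem", "rep", "ind"), 5000, replace = TRUE),
  attitude = rnorm(5000)
)

messages <- setdiff(unique(pretest$message), "control")

# Fit one outcome model per condition, then score new respondents on the
# predicted attitude under each message versus control ("uplift").
fit_arm <- function(arm) lm(attitude ~ age + partisan, data = subset(pretest, message == arm))
models  <- c(lapply(messages, fit_arm), list(control = fit_arm("control")))
names(models)[seq_along(messages)] <- messages

assign_message <- function(newdata) {
  uplift <- sapply(messages, function(m)
    predict(models[[m]], newdata) - predict(models$control, newdata))
  messages[max.col(as.matrix(uplift))]   # show each person their highest-uplift message
}

# Example: target two new respondents with different covariate profiles.
assign_message(data.frame(age = c(25, 70), partisan = c("dem", "rep")))
```

The point of the sketch is the assignment step, not the particular model: swapping the linear models for a more flexible learner changes the predictions, but the "pick the message with the largest predicted effect per person" logic is the same.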
It is widely assumed that party identification and loyalty can distort partisans’ information processing, diminishing their receptivity to counter-partisan arguments and evidence. Here we empirically evaluate this assumption. We test whether American partisans’ receptivity to arguments and evidence is diminished by countervailing cues from in-party leaders (Donald Trump or Joe Biden), using a survey experiment with 24 contemporary policy issues and 48 persuasive messages containing arguments and evidence (N = 4,531; 22,499 observations). We find that, while in-party leader cues influenced partisans’ attitudes, often more strongly than the persuasive messages, there was no evidence that the cues meaningfully diminished partisans’ receptivity to the messages, even though the cues directly contradicted them. Rather, persuasive messages and countervailing leader cues were integrated as independent pieces of information. These results generalized across policy issues, demographic subgroups and cue environments, and challenge existing assumptions about the extent to which party identification and loyalty distort partisans’ information processing.
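The "independent pieces of information" claim corresponds to an additive model: the message effect should be roughly the same whether or not a countervailing cue is present. A minimal sketch of that test on simulated, hypothetical data (not the experiment's actual analysis):

```r
# Sketch: if messages and countervailing leader cues are integrated as
# independent pieces of information, the message-by-cue interaction should
# be approximately zero. All data below are simulated placeholders.
set.seed(1)
dat <- data.frame(
  message = rbinom(10000, 1, 0.5),  # 1 = saw a persuasive message
  cue     = rbinom(10000, 1, 0.5)   # 1 = saw a countervailing in-party leader cue
)
dat$attitude <- 0.30 * dat$message - 0.40 * dat$cue + rnorm(10000)  # additive by construction

fit <- lm(attitude ~ message * cue, data = dat)
summary(fit)  # interest centres on the message:cue interaction term
```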
Party elite cues are among the most well-established influences on citizens’ political opinions. Yet, there is substantial variation in effect sizes across studies, constraining the generalizability and theoretical development of party elite cues research. Understanding the causes of variation in party elite cue effects is thus a priority for advancing the field. In this paper, I estimate the variation in party elite cue effects that is caused simply by heterogeneity in the policy issues being examined, through a reanalysis of data from existing research combined with an original survey experiment comprising 34 contemporary American policy issues. My estimate of the between-issue variation in effects is substantively large, plausibly equal to somewhere between one-third and two-thirds the size of the between-study variation observed in the existing literature. This result has important implications for our understanding of party elite influence on public opinion and for the methodological practices of party elite cues research.
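One common way to estimate between-issue variation of this kind is a multilevel model in which the cue treatment effect gets a random slope across issues; the standard deviation of that slope is the quantity of interest. The sketch below is illustrative only, using simulated data and the lme4 package rather than the paper's actual reanalysis or experiment.

```r
# Sketch (simulated data): how much do party elite cue effects vary across
# policy issues? Fit a multilevel model with a by-issue random slope for the
# cue treatment and read off the slope's standard deviation.
library(lme4)
set.seed(1)

n_issues <- 34
dat <- expand.grid(issue = factor(seq_len(n_issues)), respondent = seq_len(200))
dat$cue <- rbinom(nrow(dat), 1, 0.5)                 # 1 = in-party cue shown
issue_fx <- rnorm(n_issues, mean = 0.25, sd = 0.15)  # true issue-specific cue effects
dat$support <- issue_fx[dat$issue] * dat$cue + rnorm(nrow(dat))

fit <- lmer(support ~ cue + (cue | issue), data = dat)
VarCorr(fit)  # the SD of the `cue` random slope is the between-issue variation of interest
```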