Thanks for visiting! I’m an assistant professor at the London School of Economics, in the Department of Psychological and Behavioural Science, where I study the nexus between technology, persuasive communication, and attitude and behaviour change. I’m also a research affiliate of the Human Cooperation Lab at the Massachusetts Institute of Technology, and I currently hold an early career research fellowship from the Leverhulme Trust, under which I’m studying the impact of “microtargeted” persuasive communication on voters’ attitudes and behaviour in democracies. If you’d like to know more, please get in touch.
Large language models can now generate political messages as persuasive as those written by humans, raising concerns about how far this persuasiveness may continue to increase with model size. Here, we generate 720 persuasive messages on 10 U.S. political issues from 24 language models spanning several orders of magnitude in size. We then deploy these messages in a large-scale randomized survey experiment (N = 25,982) to estimate the persuasive capability of each model. Our findings are twofold. First, we find evidence of a log scaling law: model persuasiveness is characterized by sharply diminishing returns, such that current frontier models are barely more persuasive than models smaller in size by an order of magnitude or more. Second, mere task completion (coherence, staying on topic) appears to account for larger models’ persuasive advantage. These findings suggest that further scaling model size will do little to further increase the persuasiveness of static LLM-generated messages.
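To give a sense of what a log scaling law implies in practice, here’s a minimal sketch (Python with NumPy; the parameter counts and effect sizes are invented for illustration, not our estimates) of fitting a logarithmic curve relating model size to persuasiveness:

```python
# Minimal sketch: fit a logarithmic scaling curve to hypothetical
# model-size vs. persuasiveness data (illustrative numbers only).
import numpy as np

# Hypothetical data: parameter counts (billions) and estimated
# persuasive effects (percentage-point attitude shifts).
params = np.array([0.1, 0.5, 1.0, 7.0, 13.0, 70.0, 175.0])
effect = np.array([2.1, 3.4, 3.9, 4.8, 5.0, 5.3, 5.4])

# Fit effect ~ a + b * log10(params). Under a log law, every further 10x
# increase in size buys the same absolute gain, b, so returns diminish sharply.
b, a = np.polyfit(np.log10(params), effect, deg=1)
print(f"effect ~ {a:.2f} + {b:.2f} * log10(parameters in billions)")
print(f"predicted gain per further 10x scale-up: {b:.2f} pp")
```

The point of the fitted form is simply that the predicted gain from each additional tenfold increase in scale is a constant number of percentage points, which is why frontier models end up barely ahead of much smaller ones.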
Political campaigns increasingly conduct experiments to learn how to persuade voters. Little research has considered the implications of this trend for elections or democracy. To probe these implications, we analyze a unique archive of 146 advertising experiments conducted by US campaigns in 2018 and 2020 using the platform Swayable. This archive includes 617 advertisements produced by 51 campaigns and tested with over 500,000 respondents. Importantly, we analyze the complete archive, avoiding publication bias. We find small but meaningful variation in the persuasive effects of advertisements. In addition, we find that common theories about what makes advertising persuasive have limited and context-dependent power to predict persuasiveness. These findings indicate that experiments can compound money’s influence in elections: it is difficult to predict ex ante which ads will persuade; experiments help campaigns do so; and the gains principally accrue to campaigns well-financed enough to deploy the winning ads at scale.
The world is widely expected to experience another pandemic on the scale of COVID-19. How can the social and behavioral sciences contribute to a successful response? Here we conduct a cost-effectiveness analysis of an under-evaluated yet promising tool from modern social and behavioral science: the randomized controlled trial conducted in an online survey environment (“in-survey RCT”). Specifically, we analyze whether, in a pandemic context, a public health campaign that uses an in-survey RCT to pre-test two or more different message interventions — and then selects the top-performing one for its public outreach — has greater impact in expectation than a campaign that does not use this strategy. Our results are threefold. First, in-survey RCT pre-testing is plausibly cost-effective for public health campaigns with typical resources. Second, in-survey RCT pre-testing has potentially powerful returns to scale: for well-resourced campaigns, it looks highly cost-effective. Third, additional evidence on several key parameters could both confirm these patterns and further increase the cost-effectiveness of in-survey RCT pre-testing for public health campaigns. Together, our results suggest that in-survey RCT pre-testing can plausibly increase the impact of public health campaigns in a pandemic context and identify a research agenda to inform pandemic preparedness.
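For intuition about the expected-value logic behind pre-testing, here’s a minimal simulation sketch (Python with NumPy; every parameter, cost, and dollar value below is made up for illustration and is not taken from our analysis): a campaign pre-tests several candidate messages with noisy in-survey estimates and deploys the apparent winner, versus deploying a single untested message.

```python
# Minimal sketch of the expected-value logic of in-survey RCT pre-testing,
# using invented parameters (not the paper's inputs or results).
import numpy as np

rng = np.random.default_rng(0)
n_sims, k = 10_000, 3              # Monte Carlo draws; candidate messages pre-tested
true_sd, noise_sd = 1.0, 0.5       # spread of true effects; pre-test estimation noise
reach = 1_000_000                  # people reached by the campaign
value_per_pp = 0.05                # hypothetical $ value of a 1 pp shift, per person
test_cost = 20_000                 # hypothetical cost of running the in-survey RCT

true_effects = rng.normal(2.0, true_sd, size=(n_sims, k))              # true effects (pp)
estimates = true_effects + rng.normal(0, noise_sd, size=(n_sims, k))   # noisy pre-test

picked = true_effects[np.arange(n_sims), estimates.argmax(axis=1)]     # deploy apparent winner
baseline = true_effects[:, 0]                                          # deploy one untested message

gain_pp = picked.mean() - baseline.mean()
net_value = gain_pp * reach * value_per_pp - test_cost
print(f"expected gain from pre-testing: {gain_pp:.2f} pp; net value ~ ${net_value:,.0f}")
```

Because the testing cost is roughly fixed while the gain scales with reach, the same arithmetic also illustrates the returns-to-scale point: the larger the campaign, the more the selection gain dominates the cost of the pre-test.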
Much concern has been raised about the power of political microtargeting to sway voters’ opinions, influence elections, and undermine democracy. Yet little research has directly estimated the persuasive advantage of microtargeting over alternative campaign strategies. Here, we do so using two studies focused on U.S. policy issue advertising. To implement a microtargeting strategy, we combined machine learning with message pretesting to determine which advertisements to show to which individuals to maximize persuasive impact. Using survey experiments, we then compared the performance of this microtargeting strategy against two other messaging strategies. Overall, we estimate that our microtargeting strategy outperformed these alternatives by an average of 70% or more in a context where all of the messages aimed to influence the same policy attitude (Study 1). Notably, however, we found no evidence that targeting messages by more than one covariate yielded additional persuasive gains, and the performance advantage of microtargeting was primarily visible for one of the two policy issues under study. Moreover, when microtargeting was used instead to identify which policy attitudes to target with messaging (Study 2), its advantage was more limited. Taken together, these results suggest that the use of microtargeting—combining message pretesting with machine learning—can potentially increase campaigns’ persuasive influence and may not require the collection of vast amounts of personal data to uncover complex interactions between audience characteristics and political messaging. However, the extent to which this approach confers a persuasive advantage over alternative strategies likely depends heavily on context.
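For readers curious about the mechanics, here’s a minimal sketch of the targeting logic (Python with NumPy and scikit-learn; the data are invented, and a simple linear model stands in for whatever learner a campaign might actually use — this is illustrative, not our implementation): fit one outcome model per message on pre-test data, then assign each new individual the message with the highest predicted effect.

```python
# Minimal sketch of pretesting-plus-ML microtargeting: learn which message
# works best for whom, then assign each new person the predicted-best message.
# All data and features below are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n, n_messages = 2_000, 4

# Hypothetical pre-test: two covariates (e.g., age, partisanship), random
# message assignment, and a post-treatment attitude outcome.
X = rng.normal(size=(n, 2))
msg = rng.integers(0, n_messages, size=n)
y = 0.5 * X[:, 0] * (msg == 1) - 0.3 * X[:, 1] * (msg == 2) + rng.normal(0, 1, n)

# One outcome model per message; the targeting rule is the argmax across models.
models = [LinearRegression().fit(X[msg == m], y[msg == m]) for m in range(n_messages)]

def assign_message(x_new):
    """Return the index of the message with the highest predicted outcome."""
    preds = [m.predict(x_new.reshape(1, -1))[0] for m in models]
    return int(np.argmax(preds))

print(assign_message(np.array([1.2, -0.4])))
```

The design choice worth noting is that the “targeting” is simply an argmax over per-message predictions from pre-test data, which is why the approach need not involve collecting vast amounts of personal data about each individual.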
It is widely assumed that party identification and loyalty can distort partisans’ information processing, diminishing their receptivity to counter-partisan arguments and evidence. Here we empirically evaluate this assumption. We test whether American partisans’ receptivity to arguments and evidence is diminished by countervailing cues from in-party leaders (Donald Trump or Joe Biden), using a survey experiment with 24 contemporary policy issues and 48 persuasive messages containing arguments and evidence (N = 4,531; 22,499 observations). We find that, while in-party leader cues influenced partisans’ attitudes, often more strongly than the persuasive messages did, there was no evidence that the cues meaningfully diminished partisans’ receptivity to the messages—despite the cues directly contradicting the messages. Rather, persuasive messages and countervailing leader cues were integrated as independent pieces of information. These results generalized across policy issues, demographic subgroups, and cue environments, and they challenge existing assumptions about the extent to which party identification and loyalty distort partisans’ information processing.