About Me
Thanks for visiting! I’m an assistant professor in the Department of Psychological and Behavioural Science at the London School of Economics and Political Science, where I study the nexus of technology, persuasive communication, and attitude and behaviour change. I also teach undergraduate and postgraduate courses on research methods and statistics, and I am affiliated with the Data Science Institute (LSE), the Human Cooperation Lab (MIT/Cornell University), and the Public Opinion Analytics Lab, a research group spanning the LSE and the Universities of Reading and Southampton. Get in touch if you’d like to know more.
Party cues can influence public opinion, but the extent to which they do so varies dramatically from context to context. Why? The long-standing theory that party cues function as “heuristics” provides an answer, predicting that variation in exposure to policy information, in the propensity for effortful thinking, or in both causally affects the influence of party cues. However, this prediction has escaped decisive empirical testing to date, leaving in its wake a string of mixed results. Here we characterize the challenges that limit previous tests and report on two large-scale experiments designed to overcome them. We find that exposure to policy information causally attenuates the influence of party cues, but engagement in effortful thinking per se does not. Our results advance understanding of the “when” and “why” of party cue influence; clarify a number of previously ambiguous findings; and have broad theoretical, methodological, and normative implications for understanding the influence of party cues.
There are widespread fears that conversational artificial intelligence (AI) could soon exert unprecedented influence over human beliefs. In this work, across three large-scale experiments (N = 76,977 participants), we deployed 19 large language models (LLMs), including some post-trained explicitly for persuasion, to evaluate their persuasiveness on 707 political issues. We then checked the factual accuracy of 466,769 resulting LLM claims. We show that the persuasive power of current and near-future AI is likely to stem more from post-training and prompting methods, which boosted persuasiveness by as much as 51% and 27%, respectively, than from personalization or increasing model scale, which had smaller effects. We further show that these methods increased persuasion by exploiting LLMs’ ability to rapidly access and strategically deploy information, and that, notably, where they increased AI persuasiveness, they also systematically decreased factual accuracy.
Large language models can now generate political messages as persuasive as those written by humans, raising concerns about how far this persuasiveness may continue to increase with model size. Here, we generate 720 persuasive messages on 10 US political issues from 24 language models spanning several orders of magnitude in size. We then deploy these messages in a large-scale randomized survey experiment (N = 25,982) to estimate the persuasive capability of each model. Our findings are twofold. First, we find evidence that model persuasiveness is characterized by sharply diminishing returns, such that current frontier models are only slightly more persuasive than models an order of magnitude or more smaller. Second, we find that the association between language model size and persuasiveness shrinks toward zero and is no longer statistically significant once we adjust for mere task completion (coherence, staying on topic), a pattern that highlights task completion as a potential mediator of larger models’ persuasive advantage. Given that current frontier models are already at ceiling on this task completion metric in our setting, our results together suggest that further scaling model size may do little to increase the persuasiveness of static LLM-generated political messages.
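The adjustment described in this abstract can be illustrated with a short sketch. The code below is purely illustrative: the data are simulated, and all variable names and parameter values are hypothetical placeholders rather than the paper's actual data or analysis code. It shows the general pattern in which an apparent effect of log model size on persuasiveness shrinks once a task-completion covariate is held fixed.

```python
# Illustrative sketch of adjusting a size-persuasiveness regression for
# task completion. All data and parameter values below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_models = 24

# Simulated model-level data: persuasiveness rises with log size mainly
# because task completion (coherence, staying on topic) rises with log size.
log_size = rng.uniform(7, 12, n_models)  # hypothetical log10 parameter counts
task_completion = np.clip(0.2 * (log_size - 7) + rng.normal(0, 0.1, n_models), 0, 1)
persuasion = 2.0 * task_completion + rng.normal(0, 0.3, n_models)

df = pd.DataFrame({
    "log_size": log_size,
    "task_completion": task_completion,
    "persuasion": persuasion,
})

# Unadjusted: log size appears predictive of persuasiveness.
print(smf.ols("persuasion ~ log_size", df).fit().params)

# Adjusted: holding task completion fixed, the size coefficient shrinks
# toward zero, mirroring the mediation pattern described in the abstract.
print(smf.ols("persuasion ~ log_size + task_completion", df).fit().params)
```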
The world could witness another pandemic on the scale of COVID-19 in the future, prompting calls for research into how social and behavioral science can better contribute to pandemic response, especially regarding public engagement and communication. Here, we conduct a cost-effectiveness analysis of a familiar tool from social and behavioral science that could potentially increase the impact of public communication: survey experiments. Specifically, we analyze whether a public health campaign that pays for a survey experiment to pretest and choose between different messages for its public outreach has greater impact in expectation than an otherwise-identical campaign that does not. The main results of our analysis are threefold. First, we show that the benefit of such pretesting depends heavily on the values of several key parameters. Second, via simulations and an evidence review, we find that a campaign that allocates some of its budget to pretesting could plausibly increase its expected impact; that is, we estimate that pretesting is cost-effective. Third, we find that pretesting has potentially powerful returns to scale: for well-resourced campaigns, we estimate pretesting is robustly cost-effective, a finding that emphasizes the benefit of public health campaigns sharing resources and findings. Our results suggest survey experiment pretesting could cost-effectively increase the impact of public health campaigns in a pandemic, have implications for practice, and establish a research agenda to advance knowledge in this space.
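The expected-value comparison at the heart of this analysis can be illustrated with a short Monte Carlo sketch. Every number below (effect sizes, noise, costs, audience size) is a made-up placeholder, not a value or result from the paper: a campaign either broadcasts a randomly chosen message with its full budget, or spends part of the budget pretesting several candidate messages on a small sample and broadcasts the apparent winner.

```python
# Illustrative Monte Carlo comparison of a campaign with and without
# message pretesting. All effect sizes, costs, and sample sizes are
# hypothetical placeholders, not parameter values from the paper.
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_sims=20_000, k_messages=5, pretest_n=1_000,
             audience=1_000_000, cost_per_pretest_response=1.0, budget=50_000):
    # True per-person persuasive effects of the candidate messages,
    # drawn from a distribution of message quality (hypothetical).
    true_effects = rng.normal(0.02, 0.01, size=(n_sims, k_messages))

    # No pretest: pick a message at random, spend the whole budget on reach.
    random_pick = true_effects[np.arange(n_sims), rng.integers(0, k_messages, n_sims)]
    impact_no_pretest = random_pick * audience

    # Pretest: noisy estimate of each message's effect from a small sample,
    # then broadcast the apparent winner with the remaining budget.
    se = 0.5 / np.sqrt(pretest_n)  # rough standard error for a binary outcome
    estimates = true_effects + rng.normal(0, se, true_effects.shape)
    best_pick = true_effects[np.arange(n_sims), estimates.argmax(axis=1)]
    pretest_cost = k_messages * pretest_n * cost_per_pretest_response
    reach_scale = max(budget - pretest_cost, 0) / budget  # fewer ads shown
    impact_pretest = best_pick * audience * reach_scale

    return impact_no_pretest.mean(), impact_pretest.mean()

no_pretest, with_pretest = simulate()
print(f"Expected people persuaded, no pretest:   {no_pretest:,.0f}")
print(f"Expected people persuaded, with pretest: {with_pretest:,.0f}")
```

Whether pretesting pays off in this toy setup turns on the same kinds of parameters the abstract highlights: the spread in message quality, the pretest sample size and cost, and the scale of the campaign's audience.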
Political campaigns increasingly conduct experiments to learn how to persuade voters. Little research has considered the implications of this trend for elections or democracy. To probe these implications, we analyze a unique archive of 146 advertising experiments conducted by US campaigns in 2018 and 2020 using the platform Swayable. This archive includes 617 advertisements produced by 51 campaigns and tested with over 500,000 respondents. Importantly, we analyze the complete archive, avoiding publication bias. We find small but meaningful variation in the persuasive effects of advertisements. In addition, we find that common theories about what makes advertising persuasive have limited and context-dependent power to predict persuasiveness. These findings indicate that experiments can compound money’s influence in elections: because it is difficult to predict ex ante which ads will persuade, experiments help campaigns do so, but the gains from these findings principally accrue to campaigns well-financed enough to deploy the winning ads at scale.