Research
Combining computational and experimental methods, I investigate how state strategies and generative AI jointly shape citizens' perceptions of political information, constrain political expression, and influence policy preferences.
My research program comprises eight papers, five as first author and six as corresponding author. These include a publication in Humanities and Social Sciences Communications; submissions under review at PNAS Nexus, Regulation & Governance, The International Journal of Press/Politics, and Policy & Internet; and projects in preparation for PNAS and Political Analysis. My working papers have been supported by competitive grants from LSE, OpenAI, and Google.
Job Market Paper
[1] AI Labeling Reduces the Perceived Accuracy of Online Content but Has Limited Broader Effects
First and corresponding author; under review at PNAS Nexus (the sibling journal of PNAS); IF = 3.8. Full text at arXiv
Abstract
Explicit labeling of online content produced by AI is a widely discussed policy for ensuring transparency and promoting public confidence. Yet little is known about the scope of AI labeling effects on public assessments of labeled content. We contribute new evidence on this question from a survey experiment using a high-quality nationally representative probability sample (sample size = 3,861). First, we demonstrate that explicit AI labeling of a news article about a proposed public policy reduces its perceived accuracy. Second, we test whether there are spillover effects in terms of policy interest, policy support, and general concerns about online misinformation. We find that AI labeling reduces interest in the policy, but neither influences support for the policy nor triggers general concerns about online misinformation. We further find that increasing the salience of AI use reduces the negative impact of AI labeling on perceived accuracy, while one-sided versus two-sided framing of the policy has no moderating effect. Overall, our findings suggest that the effects of algorithm aversion induced by AI labeling of online content are limited in scope, and that transparency policies may benefit from contextualizing AI use to mitigate unintended public skepticism.
Publication
[2] Panacea or Pandora’s Box: Diverse Governance Strategies to Conspiracy Theories and Their Consequences in China
Co-first and corresponding author, Humanities and Social Sciences Communications, 2025; IF = 3.6, SSCI Q1
Published on nature.com (Nature Portfolio); highlighted by LSE DSI on LinkedIn, X, and official newsletter
- Analyzed authoritarian strategies for governing conspiracy theories (CTs), including propagation, tolerance, and rebuttal
- Combined qualitative case analysis, social network analysis, and topic modeling of 46,387 Weibo posts
- Found that authoritarian strategies for managing conspiracy theories risk losing control and provoking backlash
Under Review
[3] Bureaucrat-Expert Collaboration in Large Language Model Adoption: An Institutional Logic Perspective on China’s Public Sector
Second author; under review at Regulation & Governance; IF = 3.8, SSCI Q1
- Revealed conflicts between political risk control and expert innovation in LLM adoption
- Showed that bureaucrats conceded on technical decisions while enforcing censorship red lines, enabling bounded expert agency
[4] Framing Trump in Chinese and US News: A Computational Visual Analysis
Sole author; under review at The International Journal of Press/Politics; IF = 4.3, SSCI Q1
Abstract at SSRN
- Applied computational emotional profiling of 257,056 images to reveal political bias in policy framing
- Found that Chinese media portray Trump more negatively and as less trustworthy than US media outlets do
[5] Governing Online Political Discourse: A Social Simulation of Self-Censorship using Large Language Models
Second and corresponding author; under review at Policy & Internet; IF = 3.6, SSCI Q1
- Applied LLM-based annotation on 343,764 tweets with counterfactual social simulation
- Demonstrated how state regulation triggers conditional self-censorship via collective adaptation
Working Papers
[6] How LLM Sycophancy Shapes Individuals’ Perceived Social Norms and Sways Policy Attitude
First author; in preparation for Proceedings of the National Academy of Sciences (PNAS); IF = 9.1, SCI Q1
Supported by research grants totaling US$12,300 from OpenAI, Google, and LSE
- Articulated the concept of LLM sycophancy (adaptation to user preferences) and its policy relevance
- Investigated how LLMs’ sycophancy produces biased information and distorts user policy attitudes
- Designed a personalized, human-LLM conversational survey experiment on a probability sample of 3,100
[7] How AI Labeling Affects Policy Support in Social Networks: LLM-powered Simulation and Survey Experiment
Second and corresponding author; in preparation for Political Analysis; IF = 5.4, SSCI Q1
Extended abstract at SSRN
[8] Advancing Heterogeneous Treatment Effect Analysis
Sole author; in preparation for Political Science Research and Methods; IF = 2.6, SSCI Q1
Summer Institutes and Conference Acceptances
Summer Institutes
Oxford Internet Institute Summer Doctoral Program (SDP 2025)
Oxford Large Language Models Workshop for Social Science (Oxford LLMs 2024)
Summer Institute in Computational Social Science (SICSS 2021)
Political Science
American Political Science Association Annual Meeting (APSA 2025)
American Political Science Association Annual Meeting (APSA 2024)
Annual Conference of the European Political Science Association (EPSA 2025)
Annual Meeting of the Society for Political Methodology (PolMeth 2025)
Annual Meeting of the Society for Political Methodology, Europe (PolMeth Europe 2025)
Communication
Annual Conference of the International Communication Association (ICA 2025)
National Communication Association Annual Convention (NCA 2024)
Chinese Internet Research Conference (CIRC 2022)
Sociology
Annual Meeting of the American Sociological Association (ASA 2025)
Computational Social Science
International Conference on Computational Social Science (IC2S2 2024)
Association for Computing Machinery Web Science Conference (ACM WebSci 2024)