Research
I investigate how state strategies and generative AI jointly affect citizens’ perceptions of political information, constrain political expression, and influence policy preferences by combining computational and experimental methods.
My research program comprises 8 papers, 5 as first author and 6 as corresponding author. These include a publication in Humanities and Social Sciences Communications; submissions under review at Government Information Quarterly, PNAS Nexus, Regulation & Governance, and The International Journal of Press/Politics; and projects targeting PNAS. My research has been supported by competitive grants from LSE, OpenAI, Google, and the Social Science Research Council (USA).
Job Market Paper
[1] Scope of Public Aversion to AI-labeled Policy Information: A Survey Experiment
First and corresponding author; under review at Government Information Quarterly; IF = 10.0, SSCI Q1
Full text online
Summary
Explicit labeling of online content produced by AI is a widely discussed policy for ensuring transparency and promoting public confidence. Yet little is known about the scope of AI labeling effects on public perceptions of policy communication. To examine the potential transparency–trust trade-off, I present evidence from a preregistered, nationally representative survey experiment (n = 3,861). I demonstrate that AI labeling of a news article about a proposed public policy reduces perceived accuracy and policy interest. However, its effects do not spill over to policy support or general misinformation concerns. Counterintuitively, increasing the salience of AI use reduces the negative impact of AI labeling on perceived accuracy, while one-sided versus two-sided framing has no moderating effect. Overall, my findings indicate that the adverse effect of AI labeling is limited in scope, lending empirical support to its careful implementation.
Publication
[2] Panacea or Pandora’s Box: Diverse Governance Strategies to Conspiracy Theories and Their Consequences in China
Co-first and corresponding author, Humanities and Social Sciences Communications, 2025; IF = 3.6, SSCI Q1
Published on nature.com (Nature Portfolio); highlighted by LSE DSI on LinkedIn, X, and official newsletter
- Analyzed authoritarian governance strategies for conspiracy theories (CTs), including propagation, tolerance, and rebuttal
- Combined qualitative case analysis, social network analysis, and topic modeling of 46,387 Weibo posts
- Found that authoritarian strategies for managing CTs risk losing control and provoking backlash
Under Review
[3] Governing Online Political Discourse: AI-Based Computational Analysis and Social Simulation
Second and corresponding author; under review at PNAS Nexus (the sibling journal of PNAS); IF = 3.8
- Applied LLM-based annotation on 343,764 tweets with counterfactual social simulation
- Demonstrated how state regulation triggers conditional self-censorship via collective adaptation
[4] Framing Trump in Chinese and US News: A Computational Visual Analysis
Sole author; under review at International Journal of Press/Politics; IF = 4.3, SSCI Q1
Abstract at SSRN
- Applied computational emotional profiling of 257,056 images to reveal political bias in policy framing
- Found Chinese media portray Trump as more negative and less trustworthy than US media outlets
[5] Bureaucrat-Expert Collaboration in Large Language Model Adoption: Institutional Logics in China
Second author; under review at Regulation & Governance; IF = 3.8, SSCI Q1
- Revealed conflicts between political risk control and expert innovation in LLM adoption
- Showed that bureaucrats conceded on technical decisions while enforcing censorship red lines, enabling bounded expert agency
Working Papers
[6] How AI Sycophancy Distorts Social Perceptions and Polarizes Policy Discussion: Evidence from Human-AI Conversational Experiments
First and corresponding author; in preparation for Proceedings of the National Academy of Sciences (PNAS); IF = 9.1, SCI Q1
Supported by research grants totaling US$12,300 from OpenAI, Google, and LSE
- Articulated the structural bias from AI sycophantic adaptation and its sociopolitical consequences
- Investigated how LLMs’ sycophancy distorts user perceptions and polarizes policy discussions
- Designed a personalized, human-LLM conversational survey experiment with a probability sample of 3,100 respondents
[7] How AI Labeling Affects Policy Support in Social Networks: Social Simulation and Survey Experiment
Second and corresponding author
Extended abstract at SSRN
[8] Advancing Heterogeneous Treatment Effect Analysis with Machine Learning and Causal Inference
Sole and corresponding author
Summer Institute and Conference Acceptances
Summer Institutes
Oxford Internet Institute Summer Doctoral Program (SDP 2025)
Oxford Large Language Models Workshop for Social Science (Oxford LLMs 2024)
Summer Institute in Computational Social Science (SICSS 2021)
Political Science
American Political Science Association Annual Meeting (APSA 2025)
American Political Science Association Annual Meeting (APSA 2024)
Annual Conference of the European Political Science Association (EPSA 2025)
Annual Meeting of the Society for Political Methodology (PolMeth 2025)
Annual Meeting of the Society for Political Methodology, Europe (PolMeth Europe 2025)
Communication
Annual Conference of the International Communication Association (ICA 2025)
National Communication Association Annual Convention (NCA 2024)
Chinese Internet Research Conference (CIRC 2022)
Sociology
Annual Conference of the American Sociological Association (ASA 2025)
Computational Social Science
International Conference on Computational Social Science (IC2S2 2024)
Association for Computing Machinery Web Science Conference (ACM WebSci 2024)