Peiyang Song

I am a fourth-year undergraduate majoring in Computer Science at the California Institute of Technology (Caltech), advised by Prof. Steven Low, with a minor in Robotics advised by Prof. Günter Niemeyer. I am a researcher at the Berkeley AI Research (BAIR) Lab, working with Prof. Dawn Song and Dr. Jingxuan He, and I am also part of the Stanford AI Lab (SAIL), advised by Prof. Noah Goodman and Dr. Gabriel Poesia. I have been fortunate to work with Prof. Anima Anandkumar (Caltech), Dr. Kaiyu Yang (Meta), Prof. Tim Sherwood (UC Santa Barbara), and Dr. Jeremy Lau (Google) during my undergraduate studies.

I am currently applying for a PhD position starting Fall 2026!

宋沛洋  /  Email  /  CV  /  Bio  /  Google Scholar  /  GitHub  /  LinkedIn  /  Twitter

Recent News

[Dec 2025] We released our paper on Adaptation of Agentic AI, with a public repository available here. Hope you enjoy reading it!
[Dec 2025] Honored to receive the Best Paper Honorable Mention Award @ NeurIPS LAW Workshop, for our Personality Illusion paper.
[Dec 2025] I am attending NeurIPS 2025 in San Diego, CA, from Dec 2 to Dec 7. Excited to catch up with old and new friends!
[Nov 2025] Our paper LeanProgress, on guiding proof search with proof progress prediction, has been accepted to TMLR.
[Nov 2025] Our paper AI Impact on Human Proof Formalization Workflows has been accepted to the NeurIPS MATH-AI Workshop.

Research

My research focuses on LLM reasoning, agentic AI, and neuro-symbolic AI. I aim to build intelligent agents capable of rigorous, reliable, and creative reasoning by combining the strengths of neural and symbolic paradigms. My work is organized around one central theme and two closely related directions:

  • Central theme: neuro-symbolic LLM agents for formal reasoning. I integrate neural models (LLMs) with symbolic systems (e.g., Lean) to develop LLM-based agents that can reason formally in mathematics and code, with correctness guarantees and interpretable reasoning traces [LeanDojo] [Lean Copilot] [LeanProgress] [LeanAgent] [Human-AI Formalization].
  • From formal to natural-language reasoning. Building on formal reasoning foundations, I study how LLM-based agents reason in informal, natural-language settings, drawing on cognitive science and analyses of human-like reasoning to diagnose and mitigate systematic reasoning failures [Reasoning Failures] [Personality Illusion] [A-Not-B] [Adaptation].
  • Neuro-symbolic foundations of efficient reasoning systems. Beyond reasoning algorithms and behaviors, I study how neuro-symbolic structure can be embedded into model representations and system-level design, enabling interpretable, energy-efficient inference as well as the handling of linguistically dense, culturally grounded language phenomena [Delay Space] [DelayNet] [Idioms].

My long-term research goal is to build AI systems whose reasoning is as creative as human intuition and as dependable as formal logic.

Large Language Model Reasoning Failures
Peiyang Song*, Pengrui Han*, and Noah Goodman (* Equal Contribution)
ICML AI for Math Workshop, 2025
preprint / full release coming soon

We present the first comprehensive survey dedicated to reasoning failures in LLMs. By unifying fragmented research efforts, our survey provides a structured perspective on systemic weaknesses in LLM reasoning, offering valuable insights and guiding future research toward stronger, more reliable, and more robust reasoning capabilities.

Adaptation of Agentic AI
Pengcheng Jiang*, Jiacheng Lin*, Zhiyi Shi*, Zifeng Wang, Luxi He, Yichen Wu, Ming Zhong, Peiyang Song, Qizheng Zhang, Heng Wang, Xueqiang Xu, Hanwen Xu, Pengrui Han, Dylan Zhang, Jiashuo Sun, Chaoqi Yang, Kun Qian, Tian Wang, Changran Hu, Manling Li, Quanzheng Li, Hao Peng, Sheng Wang, Jingbo Shang, Chao Zhang, Jiaxuan You, Liyuan Liu, Pan Lu, Yu Zhang, Heng Ji, Yejin Choi, Dawn Song, Jimeng Sun, Jiawei Han (* Equal Contribution)
Preprint, 2025
arXiv

Cutting-edge agentic AI systems are built on foundation models that can be adapted to plan, reason, and interact with external tools to perform increasingly complex and specialized tasks. As these systems grow in capability and scope, adaptation becomes a central mechanism for improving performance, reliability, and generalization. In this paper, we unify the rapidly expanding research landscape into a systematic framework that spans both agent adaptations and tool adaptations.

AI Impact on Human Proof Formalization Workflows
Katherine M. Collins*, Simon Frieder*, Jonas Bayer, Jacob Loader, Jeck Lim, Peiyang Song, Fabian Zasier, Lexin Zhou, Shanda Li, Shi-Zhuo Looi, Jose Hernandez-Orallo, Joshua B. Tenenbaum, Cameron Freer, Umang Bhatt, Adrian Weller, Valerie Chen†, Ilia Sucholutsky† (* Equal Contribution, † Equal Advising)
NeurIPS Workshop on Mathematical Reasoning and AI (MATH-AI), 2025
preprint / full release coming soon

We conduct an initial exploration of how people formalize proofs with and without AI assistance. We collect more than 80 hours of video from seven participants formalizing informal proofs across mathematical problems spanning a range of difficulty levels and domains. We offer a first characterization of the formalization process, noting where AI assistance helps and a few instances where it may hurt.

The Personality Illusion: Revealing Dissociation Between Self-Reports & Behavior in LLMs
Pengrui Han*, Rafal D. Kocielnik*, Peiyang Song, Ramit Debnath, Dean Mobbs, Anima Anandkumar, and R. Michael Alvarez (* Equal Contribution)
NeurIPS LAW Workshop: Bridging Language, Agent, and World Models, 2025, Oral Presentation + Best Paper Honorable Mention Award; NeurIPS Workshop on LLM Persona Modeling (PersonaNLP), 2025, Oral Presentation
arXiv / project / code / media

LLMs say they have personalities, but they don’t act like it. Alignment today shapes language, not behavior. This linguistic–behavioral dissociation cautions against equating coherent self-reports with cognitive depth.

Lean Copilot: Large Language Models as Copilots for Theorem Proving in Lean
Peiyang Song, Kaiyu Yang, and Anima Anandkumar
International Conference on Neuro-Symbolic Systems (NeuS), 2025
1.2k+ stars on GitHub, ranking 2nd (behind only Mathlib4) among all Lean projects
arXiv / project / code / proceeding / poster / demo / slides / tutorial / media

We introduce Lean Copilot, a framework for running neural network inference directly in Lean. It enables various LLM-based proof automation tools that integrate seamlessly into the workflow of Lean users, including tools for suggesting proof steps (tactics), selecting premises, and searching for complete proofs using LLMs.
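As a rough illustration of the workflow (the tactic names below follow my reading of the Lean Copilot repository and may differ in the current release), a user can request next-step suggestions or a complete proof search from inside an ordinary Lean proof:

    import LeanCopilot

    -- Ask for suggested next tactic steps for the current goal;
    -- suggestions appear in the Lean infoview.
    example (a b c : Nat) : a + b + c = a + c + b := by
      suggest_tactics

    -- Or ask for a complete proof of the goal via LLM-guided search.
    example (a b c : Nat) : a + b + c = a + c + b := by
      search_proof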

Delay Space Arithmetic and Architecture
Rhys Gretsch, Peiyang Song, Advait Madhavan, Jeremy Lau, and Tim Sherwood
IEEE Micro, 2025, Top Pick Award
proceeding

What operations can you perform efficiently when the “time of arrival” of a signal’s edge is used to represent a number? We present negative-logarithmic delay space arithmetic as a completely new approach to temporal coding. Under this approach, general-purpose arithmetic is transformed into a “soft” version of the standard temporal operations in a way that preserves all of the algebraic identities.
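As a hedged sketch of the core transformation (my paraphrase, not the paper's exact formulation): if a positive value x is encoded as an edge arriving at time t_x = -log x, then adding delays multiplies values, and a log-sum-exp "soft minimum" of arrival times adds them:

    t_x = -\log x, \qquad
    t_x + t_y = -\log(xy), \qquad
    -\log\!\left(e^{-t_x} + e^{-t_y}\right) = -\log(x + y).

Multiplication thus becomes a fixed delay, and addition becomes a softened version of the usual temporal MIN operation, which is one way the familiar algebraic identities can carry over to the delay domain.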

LeanProgress: Guiding Search for Neural Theorem Proving via Proof Progress Prediction
Suozhi Huang, Peiyang Song, Robert Joseph George, and Anima Anandkumar
Transactions on Machine Learning Research (TMLR), 2025
arXiv / project

LLMs struggling with long proofs? We present LeanProgress, which uses a novel critic model where the predicted “distance” to the goal state serves as a key signal for guiding search, boosting performance on neural theorem proving in Lean. Our method achieves 75.1% prediction accuracy and yields a 3.8% gain in proof search when augmented with step prediction.
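One generic way such a progress signal can steer search (a simplified sketch, not the paper's exact algorithm; the `expand` function and the proof-state object are hypothetical placeholders) is best-first search ordered by the critic's predicted number of remaining steps:

    import heapq

    # Simplified sketch: best-first proof search where a critic's predicted
    # number of remaining proof steps scores each proof state (lower = better).
    def best_first_search(initial_state, expand, predict_remaining_steps, budget=1000):
        counter = 0  # tie-breaker so the heap never compares states directly
        frontier = [(predict_remaining_steps(initial_state), counter, initial_state)]
        for _ in range(budget):
            if not frontier:
                return None
            _, _, state = heapq.heappop(frontier)
            if state.is_proved():
                return state
            for next_state in expand(state):  # candidate tactics from a prover model
                counter += 1
                heapq.heappush(frontier, (predict_remaining_steps(next_state), counter, next_state))
        return None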

LeanAgent: Lifelong Learning for Formal Theorem Proving
Adarsh Kumarappan, Mo Tiwari, Peiyang Song, Robert Joseph George, Chaowei Xiao, and Anima Anandkumar
International Conference on Learning Representations (ICLR), 2025
arXiv / project / code / proceeding / poster / media

LeanAgent continuously learns from and improves on ever-expanding mathematical knowledge without forgetting what it learned before. It combines a curriculum learning strategy that optimizes the learning trajectory by mathematical difficulty, a dynamic database for efficiently managing evolving mathematical knowledge, and progressive training that balances stability and plasticity.

Creative and Context-Aware Translation of East Asian Idioms with GPT-4
Kenan Tang*, Peiyang Song*, Yao Qin, and Xifeng Yan (* Equal Contribution)
Findings of Empirical Methods in Natural Language Processing (EMNLP), 2024
arXiv / code / proceeding / demo

Compiling a dictionary of East Asian idiom translations demands considerable time and creativity, even for expert translators. To alleviate this burden, we automate high-quality data generation with GPT-4 and discover prompting strategies that are Pareto-optimal in both faithfulness and creativity, outperforming existing translation engines and a human baseline.

In-Context Learning May Not Elicit Trustworthy Reasoning: A-Not-B Errors in Pretrained Language Models
Pengrui Han*, Peiyang Song*, Haofei Yu, and Jiaxuan You (* Equal Contribution)
Findings of Empirical Methods in Natural Language Processing (EMNLP), 2024
arXiv / code / proceeding

Motivated by the crucial cognitive phenomenon of A-not-B errors, we present the first systematic evaluation of the surprisingly vulnerable inhibitory control abilities of LLMs. We reveal that this weakness undermines LLMs' trustworthy reasoning across diverse domains, and we introduce several mitigations.

Energy Efficient Convolutions with Temporal Arithmetic
Rhys Gretsch, Peiyang Song, Advait Madhavan, Jeremy Lau, and Tim Sherwood
ACM Int'l Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2024
proceeding

By developing a new temporal arithmetic based on a negative-log transformation, we introduce an energy-efficient convolution that improves the energy per pixel of each convolution frame by more than 2× compared to the state of the art, while improving the energy-delay product by four orders of magnitude.
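To make the negative-log idea concrete, here is a small numerical sketch in Python (an illustration of the arithmetic only, not the hardware design in the paper): the multiply-accumulate at the heart of convolution becomes "add delays, then take a soft minimum".

    import numpy as np

    # Illustration only: a dot product (the core of convolution) computed in the
    # negative-log "delay" domain, where a positive value x becomes a time t = -log(x).
    def to_delay(x):
        return -np.log(x)

    def from_delay(t):
        return np.exp(-t)

    def delay_dot(weights, pixels):
        products = to_delay(weights) + to_delay(pixels)   # multiply = add delays
        return -np.logaddexp.reduce(-products)            # accumulate = soft min of delays

    w = np.array([0.5, 1.5, 2.0])
    x = np.array([3.0, 0.25, 1.0])
    assert np.isclose(from_delay(delay_dot(w, x)), np.dot(w, x))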

LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Kaiyu Yang, Aidan Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, and Anima Anandkumar
Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track, 2023, Oral Presentation
arXiv / project / code / dataset / model / poster / proceeding / media

Can LLMs generate mathematical proofs that can be rigorously checked? We release LeanDojo: an open-source playground consisting of toolkits, benchmarks, and models for LLMs to prove formal theorems in the Lean proof assistant.
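A minimal sketch of the kind of programmatic interaction LeanDojo supports (class and method names follow my recollection of the project README and should be checked against the current release; the repository URL, commit, file, and theorem below are placeholders):

    from lean_dojo import Dojo, LeanGitRepo, Theorem, ProofFinished

    # Placeholder repo, commit, and theorem -- substitute a real Lean project.
    repo = LeanGitRepo("https://github.com/example/lean4-project", "<commit-hash>")
    theorem = Theorem(repo, "Example.lean", "my_theorem")

    # Open a gym-like environment for the theorem and try a tactic on its initial state.
    with Dojo(theorem) as (dojo, initial_state):
        result = dojo.run_tac(initial_state, "simp")
        print(isinstance(result, ProofFinished))  # True if "simp" closes the goal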

Selected Media

My research has been covered by a range of media outlets. Some representative examples include:

Selected Awards
  • NeurIPS LAW Workshop Best Paper Honorable Mention Award (2025)
  • Caltech FCC Appreciation Award (2025)
  • ICLR Notable Reviewer Award (2025)
  • George W. Housner Student Discovery Fund (2025)
  • IEEE Micro Top Pick Award (2025)
  • Early Research Scholarship (2023)
  • Caltech SURF Award (2023)
Teaching
Academic Services
  • Conference Reviewer @ NeurIPS, ICLR, ARR, ACL, EMNLP, IJCNLP, AACL, etc.
  • Workshop Reviewer @ MATH-AI, AI4MATH, DL4C, VerifAI, Re-Align, LLM-Cognition, BehavioralML, WorldModels, MoFA, etc.
  • Admissions Ambassador @ Undergraduate Admissions Office, Caltech.
  • First-Year Caltech Connector (FCC) @ Student & Family Engagement Office, Caltech.
  • Organizing Staff @ Agentic AI Summit 2025, UC Berkeley.
Miscellaneous

Outside of research, I enjoy long walks, running, cycling, and badminton. I also love reading and writing, and sharing good meals with friends. When I’m on vacation, I like traveling and doing nature photography, and I finally have time to dive into longer books.

Before research became my main joy, I spent my high school years exploring many fun activities, from math and algorithm contests to debate tournaments and much more.


Last updated: Dec. 2025. Website template credit: Jon Barron.