Peiyang Song

I am an Honors Computer Science undergraduate in the College of Creative Studies (CCS) at UC Santa Barbara, co-advised by Professor Richert Wang and Professor Phill Conrad. I am a SURF fellow in the Computing + Mathematical Sciences (CMS) Department at Caltech, advised by Professor Anima Anandkumar and co-advised by Dr. Kaiyu Yang. I am also a researcher at the UCSB ArchLab, working with Professor Tim Sherwood and Dr. Jeremy Lau (Google).

宋沛洋  /  Email  /  CV  /  Google Scholar  /  GitHub  /  LinkedIn  /  Twitter

[Profile photo]
News

[December 2023] Attending NeurIPS in New Orleans, LA, presenting LeanDojo and Lean Copilot.
[November 2023] Our paper Lean Copilot is accepted to the NeurIPS 2023 MATH-AI Workshop.
[September 2023] Our paper LeanDojo is accepted to the NeurIPS 2023 Datasets and Benchmarks Track as an oral presentation.
[September 2023] We released Lean Copilot, which enables LLMs to act as copilots for theorem proving in Lean: video demo.
[June 2023] We released LeanDojo, an open-source playground for LLMs to prove formal theorems in Lean.

Research

My research interests are in Machine Learning (ML), from its applications in Natural Language Processing (NLP) and Computer Vision (CV) to its foundations in Systems and Programming Languages (PL). My recent research follows two main directions: 1) neuro-symbolic reasoning and AI4Math, combining Large Language Models (LLMs) with Interactive Theorem Provers (ITPs); 2) energy-efficient machine learning with temporal logic.

Towards Large Language Models as Copilots for Theorem Proving in Lean
Peiyang Song, Kaiyu Yang, and Anima Anandkumar
Under review, 2024
NeurIPS Mathematical Reasoning and AI (MATH-AI) Workshop, 2023
arXiv / code / demo

We introduce a framework for running neural network inference directly in Lean. It enables programmers to build LLM-based proof automation tools that integrate seamlessly into the workflow of Lean users, such as tools that suggest proof steps or complete intermediate proof goals.
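
To give a concrete flavor of the workflow, here is a minimal sketch only, assuming a Lean project already configured with Lean Copilot and its suggest_tactics tactic as described in the project documentation; the theorem is a toy example invented for illustration:

    import LeanCopilot

    -- Toy theorem for illustration; not taken from the paper.
    theorem demo (a b : Nat) : a + b = b + a := by
      -- Assumed usage: `suggest_tactics` queries the underlying model and
      -- displays candidate next tactic steps in the Lean infoview, which the
      -- user can click to insert into the proof script.
      suggest_tactics

The point of the design is that suggestions appear inside the editor as ordinary tactic output, so users keep their normal Lean workflow instead of switching to an external tool.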

LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Kaiyu Yang, Aidan Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, and Anima Anandkumar
Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track, 2023, Oral presentation
ICML Knowledge and Logical Reasoning in the Era of Data-driven Learning (KLR) Workshop, 2023
arXiv / project / code / media

Can LLMs generate mathematical proofs that can be rigorously checked? We release LeanDojo: an open-source playground consisting of toolkits, benchmarks, and models for LLMs to prove formal theorems in the Lean proof assistant.
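
As a rough sketch of what interacting with the playground looks like, based on LeanDojo's documented Python interface (the repository URL, file path, theorem name, and tactic below are placeholders for illustration, and exact signatures may vary by version):

    from lean_dojo import LeanGitRepo, Theorem, Dojo, ProofFinished

    # Placeholder repo/theorem chosen purely for illustration.
    repo = LeanGitRepo("https://github.com/yangky11/lean4-example", "main")
    thm = Theorem(repo, "Lean4Example.lean", "hello_world")

    # Open an interactive environment for the theorem, run a tactic against
    # the initial proof state, and check whether the proof is finished.
    with Dojo(thm) as (dojo, init_state):
        result = dojo.run_tac(init_state, "rw [Nat.add_comm]")
        print(isinstance(result, ProofFinished))

This kind of programmatic interaction is what lets an LLM propose tactics, observe the resulting proof states, and search for complete proofs that Lean then checks rigorously.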

Awards
  • Early Research Scholarship (2023)
  • Caltech SURF Award (2023)
  • UCSB Creative Studies Honors (2022)

Site source