Peiyang Song

I am an undergraduate student studying Computer Science at the California Institute of Technology (Caltech). I am a researcher in the Computation & Cognition Lab (CoCoLab) at Stanford University, advised by Prof. Noah Goodman. I have been fortunate to work with Prof. Anima Anandkumar (Caltech), Dr. Kaiyu Yang (Meta), Prof. Tim Sherwood (UC Santa Barbara), and Dr. Jeremy Lau (Google) during my undergraduate studies.

宋沛洋  /  Email  /  CV  /  Google Scholar  /  GitHub  /  LinkedIn  /  Twitter

News

[June 2024] I am joining Stanford CoCoLab, working on mathematical reasoning with LLMs.
[Feb. 2024] Our paper Energy Efficient Convolutions with Temporal Arithmetic is accepted to ASPLOS 2024.
[Dec. 2023] Attending NeurIPS in New Orleans, LA, presenting LeanDojo and Lean Copilot.
[Nov. 2023] Our paper Lean Copilot is accepted to NeurIPS 2023 MATH-AI Workshop.
[Sep. 2023] Our paper LeanDojo is accepted to NeurIPS 2023 Datasets and Benchmarks Track as an Oral Presentation.

Research

My current research interest is in machine reasoning, especially AI for mathematics. In the past, I have also worked on energy-efficient machine learning systems.

Towards Large Language Models as Copilots for Theorem Proving in Lean
Peiyang Song, Kaiyu Yang, and Anima Anandkumar
NeurIPS Mathematical Reasoning and AI (MATH-AI) Workshop, 2023
arXiv / code / demo

We introduce a framework for running neural network inference directly in Lean. It enables programmers to build various LLM-based proof automation tools that integrate seamlessly into the workflow of Lean users, including tools for suggesting proof steps and completing intermediate proof goals using LLMs.
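As a flavor of what such in-editor proof automation looks like (a minimal sketch; the exact tactic names and their behavior are as documented in the Lean Copilot repository, not reproduced from this page):

```lean
import LeanCopilot

-- Invoke the LLM-backed tactic suggester mid-proof; it prints
-- candidate next steps (e.g. `simp`, `omega`) directly in the editor.
example (a b : Nat) : a + b = b + a := by
  suggest_tactics
```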

Energy Efficient Convolutions with Temporal Arithmetic
Rhys Gretsch, Peiyang Song, Advait Madhavan, Jeremy Lau, and Tim Sherwood
ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2024
paper

We introduce an energy-efficient convolution that improves the energy per pixel of each convolution frame by more than 2× compared to the state of the art, while improving the energy-delay product by four orders of magnitude, by developing a new temporal arithmetic based on a negative log transformation.
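The core idea can be sketched in standard log-domain notation (an illustration of the general principle, not the paper's exact formulation): under a negative log transform, the multiplications inside a convolution become additions, and additions of delays are exactly what temporal (race-logic) arithmetic computes natively:

$$\tilde{x} = -\log x, \qquad \widetilde{x \cdot w} = -\log(x \cdot w) = \tilde{x} + \tilde{w}.$$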

LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
Kaiyu Yang, Aidan Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, and Anima Anandkumar
Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track, 2023, Oral presentation
arXiv / project / code / media

Can LLMs generate mathematical proofs that can be rigorously checked? We release LeanDojo: an open-source playground consisting of toolkits, benchmarks, and models for LLMs to prove formal theorems in the Lean proof assistant.
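Concretely, the kind of formal statement such provers must close looks like the following (a minimal illustrative Lean theorem, not an example drawn from the LeanDojo benchmark):

```lean
-- A machine-checkable proof: each rewrite step is verified by Lean's kernel,
-- so a generated proof either type-checks or is rejected outright.
theorem add_swap (a b c : Nat) : a + b + c = a + c + b := by
  rw [Nat.add_assoc, Nat.add_comm b c, ← Nat.add_assoc]
```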

Awards
  • Early Research Scholarship (2023)
  • Caltech SURF Award (2023)
  • UCSB Creative Studies Honors (2022)

Site source