Chris Donahue

Dannenberg Assistant Professor

Computer Science Department, Carnegie Mellon University

About

I am an Assistant Professor in the Computer Science Department at Carnegie Mellon University and a part-time Research Scientist at Google DeepMind on the Magenta team.

My research goal is to develop and responsibly deploy generative AI for music and creativity, thereby unlocking and augmenting human creative potential. To this end, my work involves (1) improving machine learning methods for controllable generative modeling for music, audio, and other sequential data, and (2) deploying real-world interactive systems that allow a broader audience, inclusive of non-musicians, to harness generative music AI through intuitive controls.

I am particularly drawn to research ideas with direct real-world applications, and my work often involves building systems for real users and evaluating them in the wild. For example, my work on Piano Genie was used in a live performance by The Flaming Lips, and my work on Dance Dance Convolution powers Beat Sage, a live service used by thousands of people a day to create multimodal music game content.

Previously, I was a postdoc at Stanford CS advised by Percy Liang. Before that, I completed a PhD at UCSD co-advised by Miller Puckette and Julian McAuley.

News

  • 📜 (Jan 2026) Two papers accepted at ICASSP 2026, including FoleyBench.
  • 📜 (Jan 2026) One paper accepted at CHI 2026, preprint forthcoming.
  • 📜 (Dec 2025) Paper on multi-modal translation for music AI published in TASLP.
  • 🎶 (Dec 2025) Our project on multimodal music AI, co-led by Dr. Annie Hsieh, was awarded a grant through the Schmidt HAVI program.
  • 🎤 (Dec 2025) Two invited talks at NeurIPS 2025 workshops: GenProCC (recording) and AI4Music (recording).
  • 🧑‍🏫 (Nov 2025) Reappointed as Assistant Professor at CMU.
  • 🎤 (Oct 2025) Invited talk "What music can teach language models" at the CMU LTI Colloquium (recording).
  • 📜 (Sep 2025) Two papers (Music Arena, Live Music Models) accepted to the NeurIPS 2025 Creative AI Track, to be presented at the main conference.
  • 📜 (Sep 2025) Paper accepted to the LLM4Music workshop @ ISMIR 2025.
  • 🌐 (Aug 2025) My Google research project SingSong featured in the Pixel Recorder app.
  • 🌐 (Jul 2025) Music Arena released (paper)!
  • 📜 (Jul 2025) Paper on sound morphing accepted at WASPAA 2025.
  • 📜 (Jul 2025) Two papers accepted to ISMIR 2025 on music evaluation and real-time adaptation (preprints forthcoming).
  • 📜 (Jun 2025) Two papers accepted at ICML 2025 workshops (R2-FM, DataWorld; preprints forthcoming).
  • 🌐 (Jun 2025) Led the release of Magenta RealTime, a new open-weights real-time music generation model, with my team at Google DeepMind.
  • 📜 (May 2025) Our paper on Copilot Arena accepted to ICML 2025.
  • 🗞️ (May 2025) My PhD student Wayne Chi (co-advised w/ Ameet Talwalkar) quoted in a WSJ article.
  • 🗞️ (Apr 2025) CMU SCS news article featuring Copilot Arena.
  • 🏅 (Apr 2025) Our paper on co-design for audio codec LMs received the Best Paper Award (top paper) at the NAACL Student Research Workshop 2025.
  • 🎓 (Apr 2025) My PhD student Wayne Chi (co-advised w/ Ameet Talwalkar) received the NDSEG Fellowship.
  • 📜 (Mar 2025) Paper on Copilot Arena accepted to the HEAL@CHI workshop.
  • 🎶 (Mar 2025) Project proposal w/ Annie Hsieh (CMU CFA) funded by the AIxArts incubator fund at CMU.
  • 🛠️ (Mar 2025) Workshop proposal on ML for Audio accepted at ICML 2025.
  • 🏅 (Mar 2025) Our paper on AMUSE recognized with a Best Paper Award (top 1% of submissions) at CHI 2025.
  • 📜 (Mar 2025) Paper accepted at the NAACL Student Research Workshop 2025.
  • 🎶 (Mar 2025) Shoutout from Darkside for helping them train RAVE for their album Nothing.
  • 📜 (Feb 2025) Our work on VERSA accepted to the NAACL Demo Track 2025.
  • 📜 (Feb 2025) New preprint on Copilot Arena.
  • 📜 (Jan 2025) Our work on AMUSE accepted to CHI 2025.
  • 🗞️ (Nov 2024) Blog post on Copilot Arena.
  • 📜 (Nov 2024) One paper accepted at the NeurIPS 2024 Open World Agents Workshop.
  • 🎤 (Oct 2024) Invited talk at SANE 2024 (video, slides).
  • 🌐 (Oct 2024) Launch of Copilot Arena, a VS Code extension for evaluating LLMs for coding assistance.
  • 📜 (Oct 2024) Three extended abstracts to appear at ISMIR Late Breaking Demos.
  • 📜 (Oct 2024) One paper accepted at the NeurIPS 2024 Audio Imagination Workshop.
  • 🌐 (Aug 2024) Official launch of Hookpad Aria, a Copilot for songwriters.
  • 📜 (Jun 2024) Our work on Music-aware Virtual Assistants accepted at UIST 2024.
  • 📜 (Jun 2024) Two papers accepted to ISMIR 2024.
  • 🪑 (Apr 2024) Named the Dannenberg Assistant Professor of Computer Science.
  • 🛎️ (Apr 2024) Serving as Senior Program Committee Co-chair for ISMIR 2024, which received a record number of submissions (415).
  • 🌐 (Mar 2024) A Copilot-like tool for musicians featuring the Anticipatory Music Transformer was launched in beta.
  • 📜 (Mar 2024) Music ControlNet to appear in TASLP (IEEE/ACM Transactions on Audio, Speech, and Language Processing).
  • 📜 (Mar 2024) Anticipatory Music Transformer to appear in TMLR (Transactions on Machine Learning Research).
  • 🌐 (Mar 2024) Launch of MusicFX DJ Mode, a real-time music audio generation tool developed by my team at Google.
  • 🗞️ (Nov 2023) SingSong incorporated into Google DeepMind's Music AI Tools.
  • 📜 (Nov 2023) Work presented at the HCMIR Workshop by Michael Feffer.
  • 📜 (Nov 2023) New preprint on controllable music generation led by Shih-Lun Wu (applying to PhD programs!).
  • 🗞️ (Oct 2023) Interviewed for a Pitchfork article about MusicLM.
  • 🎤 (Oct 2023) Invited talk at the Stanford HAI Conference (recording, slides).
  • 🧑‍🏫 (Oct 2023) Guest lecture for the CMU LLM Course (slides).
  • 👋 (Oct 2023) New PhD students: Irmak Bukey and Wayne Chi.
  • 🧑‍🏫 (Sep 2023) Started as an Assistant Professor at CMU.

G-CLef

Our group's logo, a mashup of a treble clef (G-Clef) and CMU's mascot Scotty, created with DALL-E 2.

I lead the Generative Creativity Lab (G-CLef) at CMU. Our mission is to empower and enrich human creativity and productivity with generative AI. We focus primarily on the intersection of music and AI, though we also work on other applications such as programming, gaming, and natural language. Please visit this page to learn more about our research interests and to apply.

Mentees

Irmak Bukey, CSD PhD student

Wayne Chi, CSD PhD student (co-advised w/ Ameet Talwalkar)

Yewon Kim, CSD PhD student

Nathan Pruyne, CSD PhD student

Alumni

Alexander Wang, Music Tech MS student; now PhD student @ CMU HCII (incoming)

Yichen (William) Huang, visiting researcher

Xun (Rick) Zhou, CS MS student; now Quant @ Minhong

Shih-Lun Wu, LTI MS student; now PhD student @ MIT EECS

Recent Papers

(2024). Vision Language Models Are Few-Shot Audio Spectrogram Classifiers. In NeurIPS Audio Imagination Workshop 2024.

arXiv PDF BibTeX

(2024). Local Deployment of Large-Scale Music AI Models on Commodity Hardware. In ISMIR LBD 2024.

arXiv PDF BibTeX πŸ•ΉοΈ Demo

(2024). Just Label the Repeats for In-The-Wild Audio-to-Score Alignment. In ISMIR 2024.

arXiv PDF BibTeX Code Video Examples

(2024). Hookpad Aria: A Copilot for Songwriters. In ISMIR LBD 2024.

arXiv PDF BibTeX Project Page