DiffMuse

Introduction

Among many other types of media, machine learning models are capable of generating and completing symbolic music. In symbolic music generation, a model learns to produce creative continuations of a given musical piece while preserving its important musical elements, such as key, tempo, rhythm, style, and harmony. We anticipate that combining a diffusion model (an increasingly popular class of deep generative model) with the recent sequence model S4 will allow us to perform this task extremely well. We hope to make our model available as a web application for musicians and composers who have hit a creative roadblock, or who just want to have fun. If our results are sufficiently strong, we also hope to publish an academic paper, on which project developers can be coauthors.
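To give a flavor of the diffusion side of the project, here is a minimal NumPy sketch of the forward (noising) process that a diffusion model is trained to invert. It is illustrative only, not our architecture: the toy "piano roll" array, the linear beta schedule, and all function names are assumptions for the example, and the denoiser (where S4 would appear) is omitted entirely.

```python
import numpy as np

def linear_beta_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Per-step noise variances, a common default schedule."""
    return np.linspace(beta_start, beta_end, T)

def alpha_bars(betas):
    """Cumulative signal-retention factors: alpha_bar_t = prod_{s<=t} (1 - beta_s)."""
    return np.cumprod(1.0 - betas)

def q_sample(x0, t, alpha_bar, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(ab_t) * x0, (1 - ab_t) * I).

    Returns the noised sample and the noise, which a denoising
    network would be trained to predict.
    """
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

# Toy "piano roll": 128 pitches x 64 time steps, sparse binary note activations.
rng = np.random.default_rng(0)
x0 = (rng.random((128, 64)) < 0.05).astype(np.float64)

betas = linear_beta_schedule()
ab = alpha_bars(betas)

# Midway through the schedule, the roll is a mix of signal and noise;
# by the final step, alpha_bar is near zero and x_t is almost pure noise.
xt, eps = q_sample(x0, t=500, alpha_bar=ab, rng=rng)
```

Training then amounts to regressing `eps` from `xt` and `t` with a neural denoiser; generation runs the learned reverse process from pure noise, optionally conditioned on a musical prompt.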

If you are interested in applying some of the latest advancements in machine learning to a cool task, then this is a great project for you to work on! We are looking for two developers with experience in deep generative modeling to join our project, with an expected time commitment of 3-4 hours per week. Prospective developers should also have experience with basic ML (CSC311-level), deep learning (CSC413-level), and the necessary mathematical background to understand the papers referenced in the proposal. If you have any questions about the project, please send an email to elliot.schrider@mail.utoronto.ca or walter.merjo@mail.utoronto.ca.

Proposal

Thumbnail generated using Midjourney: https://www.midjourney.com/

The Team

Alston Lo
Director
Walter Merjo
Director
Elliot Schrider
Director
Anatoly Zavyalov
Developer