Music Generation Using Supervised Learning and LSTM

Abstract

In this paper, we propose a few methodologies for composing music using deep learning algorithms, specifically a long short-term memory (LSTM) neural network. The LSTM model is created by training on a set of input files from a music library. The trained model then synthesizes music when an arbitrary note is provided as a seed. The quality of the music is evaluated by comparing the harmony and a few other parameters of the synthesized music with those of the training files. The music library consists of a set of MIDI files, and a unique model is created for each chosen library. For model creation, the library files are converted into a suitable format and encoded to make them compatible with the LSTM network. Although the outcome of this experiment is continuous music, the harmony and note transitions can still be improved to resolve the discontinuity problem.
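
For readers unfamiliar with this kind of pipeline (MIDI library, note encoding, supervised LSTM training, generation from a seed note), the following is a minimal sketch only, assuming a Python stack with music21 and TensorFlow/Keras. The paper does not disclose its implementation details, so the directory name "midi_library", the sequence length, the layer sizes, and the training settings below are illustrative assumptions, not values taken from the authors' work.

# Minimal sketch (not the authors' code): encode MIDI notes as integers,
# train a next-note LSTM in a supervised fashion, and generate from a seed.
# Assumes music21 and TensorFlow/Keras are installed; all paths and
# hyperparameters are placeholder assumptions.
import glob
import numpy as np
from music21 import converter, note, chord
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

SEQ_LEN = 50  # assumed context window of previous notes

def extract_notes(midi_dir):
    """Flatten each MIDI file in the library into a list of note/chord tokens."""
    tokens = []
    for path in glob.glob(f"{midi_dir}/*.mid"):
        score = converter.parse(path)
        for el in score.flatten().notes:
            if isinstance(el, note.Note):
                tokens.append(str(el.pitch))
            elif isinstance(el, chord.Chord):
                tokens.append(".".join(str(p) for p in el.normalOrder))
    return tokens

def build_dataset(tokens):
    """Encode tokens as integers and build (input sequence, next token) pairs."""
    vocab = sorted(set(tokens))
    to_int = {t: i for i, t in enumerate(vocab)}
    X, y = [], []
    for i in range(len(tokens) - SEQ_LEN):
        X.append([to_int[t] for t in tokens[i:i + SEQ_LEN]])
        y.append(to_int[tokens[i + SEQ_LEN]])
    X = np.reshape(X, (len(X), SEQ_LEN, 1)) / float(len(vocab))  # normalize inputs
    y = np.eye(len(vocab))[y]                                    # one-hot targets
    return X, y, vocab

def build_model(n_vocab):
    """Stacked LSTM that predicts the next note token."""
    model = Sequential([
        LSTM(256, input_shape=(SEQ_LEN, 1), return_sequences=True),
        Dropout(0.3),
        LSTM(256),
        Dense(n_vocab, activation="softmax"),
    ])
    model.compile(loss="categorical_crossentropy", optimizer="adam")
    return model

if __name__ == "__main__":
    tokens = extract_notes("midi_library")   # assumed folder of training MIDI files
    X, y, vocab = build_dataset(tokens)
    model = build_model(len(vocab))
    model.fit(X, y, epochs=50, batch_size=64)

    # Generation: start from an arbitrary seed sequence and sample iteratively.
    seed = X[np.random.randint(len(X))]
    generated = []
    for _ in range(100):
        pred = model.predict(seed[np.newaxis, ...], verbose=0)
        idx = int(np.argmax(pred))
        generated.append(vocab[idx])
        seed = np.vstack([seed[1:], [[idx / float(len(vocab))]]])
    print(generated)

The generated token list would still need to be converted back into MIDI (for example with music21 Stream objects) to be played; that step, and the harmony-based quality comparison mentioned above, are outside the scope of this sketch.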

Citation (APA)

Tony, S. M., & Sasikumar, S. (2022). Music Generation Using Supervised Learning and LSTM. In Lecture Notes in Electrical Engineering (Vol. 766, pp. 477–485). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-16-1476-7_43
