How can I make an AI that creates tabs by listening to a song? I'm thinking you give it a song, it separates the tracks (I heard Peter Jackson has a machine that can do this using machine learning/AI), and then it makes tabs for them. Because of the nature of the guitar, there are many different ways to play the same music in different positions/shapes, so I'd like it to show me a few of the most reasonable ways to play the isolated melody/chords.

I realize how involved this will be. I took a machine learning course, but all I really learned was how to load and manipulate data in Jupyter and use Keras or TensorFlow to run it through the layers of a neural network, which comes down to a few lines of code specifying the layer types and the shape of the data, followed by some testing, refining, and exporting of the best iteration of the model. That's the gist of what I got from the course and the project we did.

So… does this sound at all applicable to what this project would require of me? I can't seem to figure out where to really start, other than deciding what parameters to track and building a dataset around them, and even that is a challenge.
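To make the pipeline concrete, here's a minimal sketch of the first two stages as I'm imagining them: splitting the mix into stems, then tracking the pitch of the isolated part. Spleeter and librosa are just tools I'm assuming for the sketch, the file names and output paths are placeholders, and pyin only handles one note at a time, so this covers melodies rather than chords.

```python
import numpy as np
import librosa
from spleeter.separator import Separator

# 1. Split the mix into stems (vocals / drums / bass / other).
#    With the 4-stem model, guitar usually ends up in the "other" stem.
separator = Separator('spleeter:4stems')
separator.separate_to_file('song.mp3', 'stems/')   # placeholder paths

# 2. Track the fundamental frequency of the isolated part.
#    pyin is monophonic, so this covers single-note melodies/solos;
#    polyphonic (chord) transcription is the hard research part.
y, sr = librosa.load('stems/song/other.wav', sr=22050)
f0, voiced, _ = librosa.pyin(y,
                             fmin=librosa.note_to_hz('E2'),   # low E string
                             fmax=librosa.note_to_hz('E6'),
                             sr=sr)

# 3. Turn the frame-wise pitch contour into MIDI note numbers,
#    dropping unvoiced frames.
midi = librosa.hz_to_midi(f0[~np.isnan(f0)])
notes = np.round(midi).astype(int)
print(notes[:20])
```

From there the remaining problem is mapping each note to a string/fret position, which is where the different playable positions come in.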

  • PresentDelivery4277B
    1 year ago

    Welcome to MIR (music information retrieval). If this were easy, I would have been done with my master's two years ago. What you are describing, multiple-instrument transcription, is still an open research question, and if you can solve it satisfactorily you will make a select group of researchers very happy. If you want to limit this to just guitar, you could probably start with an existing transcription algorithm and teach something to derive fingering on top of it. Something like an HMM (hidden Markov model) would probably work fine; see the sketch below.
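
    For the fingering part, here is a rough sketch of the kind of thing I mean, written as a Viterbi-style dynamic program rather than a full HMM with learned probabilities: candidate (string, fret) positions play the role of hidden states, and a hand-tuned transition cost stands in for transition probabilities. The tuning, fret limit, and cost weights below are all assumptions you would want to refine, and it assumes every note is playable in standard tuning.

```python
# Hidden states: candidate (string, fret) positions for each note.
# Transition cost: hand-tuned stand-in for learned HMM probabilities.
STANDARD_TUNING = [40, 45, 50, 55, 59, 64]   # MIDI numbers of open E A D G B e
MAX_FRET = 15

def candidate_positions(midi_note):
    """All (string, fret) pairs on the neck that produce this MIDI note."""
    return [(s, midi_note - open_note)
            for s, open_note in enumerate(STANDARD_TUNING)
            if 0 <= midi_note - open_note <= MAX_FRET]

def transition_cost(prev, cur):
    """Penalize large fret jumps and string skips between consecutive notes."""
    (ps, pf), (cs, cf) = prev, cur
    return abs(pf - cf) + 0.5 * abs(ps - cs)

def best_fingering(midi_notes):
    """Cheapest sequence of (string, fret) positions via dynamic programming."""
    layers = [candidate_positions(n) for n in midi_notes]
    # cost[i][j]: best total cost of a path ending at position j of note i.
    cost = [[0.1 * fret for (_, fret) in layers[0]]]   # mild bias toward low frets
    back = [[None] * len(layers[0])]
    for i in range(1, len(layers)):
        cost.append([])
        back.append([])
        for cur in layers[i]:
            best, k_best = min(
                (cost[i - 1][k] + transition_cost(prev, cur), k)
                for k, prev in enumerate(layers[i - 1]))
            cost[i].append(best)
            back[i].append(k_best)
    # Trace the cheapest path backwards.
    j = min(range(len(cost[-1])), key=lambda k: cost[-1][k])
    path = []
    for i in range(len(layers) - 1, 0, -1):
        path.append(layers[i][j])
        j = back[i][j]
    path.append(layers[0][j])
    return list(reversed(path))

# Example: a short run of MIDI notes (A3 C4 D4 E4 G4 A4).
print(best_fingering([57, 60, 62, 64, 67, 69]))
```

    To show a few reasonable fingerings instead of a single answer, you could keep the k best paths at each step rather than just the minimum.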