
A piece of music-playing software that can react to the lead of an instrument. I'm picturing software able to play a concerto (or simply a duet) with a real instrumentalist, reacting to dynamic and tempo changes in real time, like an orchestra under a conductor.
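To make that concrete, here's a rough Python sketch of the tempo-following part (mido for MIDI input; the score data and the smoothing factor are just placeholders):

    import time
    import mido  # pip install mido python-rtmidi

    # Placeholder score: (MIDI pitch, beat position) pairs we expect to hear.
    SCORE = [(60, 0.0), (62, 1.0), (64, 2.0), (65, 3.0), (67, 4.0)]

    def follow_tempo(port_name, base_bpm=120.0):
        """Estimate the player's tempo from the spacing of matched onsets."""
        bpm = base_bpm
        last_time = last_beat = None
        idx = 0
        with mido.open_input(port_name) as port:
            for msg in port:
                if msg.type != 'note_on' or msg.velocity == 0:
                    continue
                if idx < len(SCORE) and msg.note == SCORE[idx][0]:
                    now, beat = time.monotonic(), SCORE[idx][1]
                    if last_time is not None and beat > last_beat:
                        # beats elapsed / seconds elapsed -> instantaneous BPM
                        inst = 60.0 * (beat - last_beat) / (now - last_time)
                        bpm = 0.7 * bpm + 0.3 * inst  # smooth out jitter
                        print(f"following at {bpm:.1f} BPM")
                    last_time, last_beat, idx = now, beat, idx + 1

The accompaniment engine would then reschedule its own upcoming notes against the smoothed BPM.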

The same principle could also be used to build a real-time software harmonizer [1] for live performances, though that problem already has a reliable hardware solution.

[1] - https://www.youtube.com/watch?v=DnpVAyPjxDA
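The note logic of a harmonizer is simple at the MIDI level (the linked video and the hardware boxes work on audio, which is the hard part); a toy sketch with mido, assuming C major and an interval table I made up for illustration:

    import mido  # pip install mido python-rtmidi

    # Diatonic third above each C-major scale degree (pitch class -> semitones).
    # Non-scale notes fall back to a minor third; this mapping is illustrative only.
    THIRD_ABOVE = {0: 4, 2: 3, 4: 3, 5: 4, 7: 4, 9: 3, 11: 3}

    def harmonize(in_name, out_name):
        """Echo every incoming note plus a diatonic third above it."""
        with mido.open_input(in_name) as inp, mido.open_output(out_name) as out:
            for msg in inp:
                if msg.type in ('note_on', 'note_off'):
                    out.send(msg)
                    offset = THIRD_ABOVE.get(msg.note % 12, 3)
                    out.send(msg.copy(note=msg.note + offset))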



Are you familiar with Magenta? [0] It's not real-time, but definitely an important area of research.

Also check out Dan Tepfer [1], who is doing amazing work with an algorithmic approach to reactive live performance, including great call-and-response tactics.

I myself am slowly prototyping a fully artificial AI band that can be orchestrated with very high-level musical ideas, a big helping of intelligent randomization, and algorithms grounded in music theory.

I've been prototyping in Andrew Sorensen's Extempore [2] and have laid much of the groundwork: melody/harmony/rhythm generation, plus basic modulation of those elements to build longer musical structures that use motifs in multiple ways.
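A rough sketch of what I mean by theory-guided randomization (in Python rather than Extempore, and with weights I made up): a melody as a random walk over a scale that favors small intervals.

    import random

    MAJOR = [0, 2, 4, 5, 7, 9, 11]  # major-scale degrees in semitones

    def generate_motif(root=60, length=8, seed=None):
        """Random-walk the scale; small steps are weighted over leaps."""
        rng = random.Random(seed)
        degree, motif = 0, []
        for _ in range(length):
            step = rng.choices([-2, -1, 0, 1, 2, 4],
                               weights=[2, 4, 1, 4, 2, 1])[0]
            degree = max(0, min(13, degree + step))  # clamp to two octaves
            octave, idx = divmod(degree, 7)
            motif.append(root + 12 * octave + MAJOR[idx])
        return motif

    print(generate_motif(seed=42))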

Currently it's a matter of shedding the pre-computed tables of "nice"-sounding progressions (or purely random ones) and building a more fundamental approach that can derive appropriate progressions from the given user parameters. I'm also expanding the program's ability to generate aesthetically pleasing, distinctive motifs to drive these algorithmic compositions.
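One plausible shape for that more fundamental approach (the transition table below is a hand-written stand-in, not something I've actually derived): walk a functional-harmony graph instead of picking rows from a fixed table.

    import random

    # Illustrative first-order transitions over Roman-numeral functions.
    TRANSITIONS = {
        'I':    ['IV', 'V', 'vi', 'ii'],
        'ii':   ['V', 'vii°'],
        'iii':  ['vi', 'IV'],
        'IV':   ['V', 'I', 'ii'],
        'V':    ['I', 'vi'],
        'vi':   ['ii', 'IV'],
        'vii°': ['I'],
    }

    def derive_progression(length=8, cadence=True, rng=None):
        rng = rng or random.Random()
        chords = ['I']
        while len(chords) < length:
            chords.append(rng.choice(TRANSITIONS[chords[-1]]))
        if cadence:
            chords[-2:] = ['V', 'I']  # force an authentic cadence
        return chords

    print(derive_progression())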

I have a band leader / conductor module that provides cues and other synchronized data. Even without advanced motif generation and modulation, this already allows things not currently possible in any non-code music production software: global dynamics, timing changes, progression sharing, and (eventually) directing impromptu solos.
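The conductor boils down to broadcasting timestamped cues to every player; a stripped-down sketch of that module's interface (the field names are illustrative):

    import queue
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Cue:
        beat: float                              # when the cue takes effect
        bpm: Optional[float] = None              # timing change
        dynamics: Optional[str] = None           # e.g. 'pp', 'mf', 'ff'
        progression: Optional[List[str]] = None  # shared chords

    class Conductor:
        def __init__(self):
            self.players = []  # one inbox per band member

        def register(self):
            q = queue.Queue()
            self.players.append(q)
            return q

        def cue(self, cue):
            for q in self.players:  # every player sees the same directive
                q.put(cue)

    band = Conductor()
    drums, bass = band.register(), band.register()
    band.cue(Cue(beat=32.0, bpm=140, dynamics='ff'))
    print(drums.get(), bass.get(), sep='\n')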

Reach out via email if you'd like to discuss more!

[0] https://magenta.tensorflow.org/

[1] https://www.youtube.com/watch?v=SaadsrHBygc

[2] https://extemporelang.github.io/


Thanks for the resources! I'd never heard of Dan Tepfer or Extempore - such a great way of imagining music!

What I was planning is something simpler - much like generating sound from a written score, except that, as in a live classical performance, the generated sound reacts to the player's cues.

I'm not exactly familiar with Magenta, but what I'm currently trying to implement (at a very early stage) is Wave2Midi2Wave, the pipeline introduced alongside Magenta's MAESTRO dataset [1]. I'm not sure whether they've released any code as well.

[1] - https://magenta.tensorflow.org/maestro-wave2midi2wave
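As I understand the paper, the factorization is three stages, stubbed out below just to show the shape of the pipeline (these are placeholders, not Magenta API calls):

    def wave_to_midi(audio):
        """Transcribe piano audio to MIDI (Onsets and Frames in the paper)."""
        raise NotImplementedError

    def midi_language_model(midi):
        """Model/generate note sequences in MIDI (Music Transformer)."""
        raise NotImplementedError

    def midi_to_wave(midi):
        """Synthesize audio conditioned on MIDI (a WaveNet in the paper)."""
        raise NotImplementedError

    def wave2midi2wave(audio):
        return midi_to_wave(midi_language_model(wave_to_midi(audio)))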



