Hacker News

One of my "back of the head" ideas is to build a neural net that could translate SID commands into the sound they produce. Does anybody know which NN architecture would be best suited to this? (I've looked around a bit, but didn't find anything "ready-to-use".)
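One plausible fit (an assumption on my part, not something the thread settles on) is a WaveNet-style stack of causal dilated convolutions, conditioned on the register-write stream and predicting audio samples. A minimal sketch of the core building block, in pure Python with illustrative weights:

```python
# Sketch: a causal, dilated 1-D convolution -- the WaveNet building block
# that's one candidate for mapping a SID command stream to audio.
# Weights and signal here are toy values, not a trained model.

def causal_dilated_conv(x, weights, dilation):
    """output[t] depends only on x[t], x[t-d], x[t-2d], ... (never the future)."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(weights):
            j = t - i * dilation          # look back i*dilation steps
            if j >= 0:
                acc += w * x[j]
        out.append(acc)
    return out

def receptive_field(kernel_size, n_layers):
    """Doubling the dilation each layer grows the receptive field exponentially."""
    return sum((kernel_size - 1) * 2**l for l in range(n_layers)) + 1

signal = [1.0, 0.0, 0.0, 1.0, 0.0, 0.0]
layer1 = causal_dilated_conv(signal, [0.5, 0.5], dilation=1)
layer2 = causal_dilated_conv(layer1, [0.5, 0.5], dilation=2)
print(receptive_field(kernel_size=2, n_layers=8))  # 256 samples
```

The exponentially growing receptive field matters here because the SID's envelope generator has time constants far longer than a few samples.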


That sounds challenging - I'd love to see how you get on. I had a somewhat related idea: train a model on a particular synth, then, when given a sample containing a synth sound, have it suggest the patch settings that would most closely emulate that sound.
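The simplest baseline for that patch-matching idea is analysis by synthesis: render candidate patches and keep the one closest to the target. A toy sketch with a hypothetical two-parameter additive synth (a real system would use a learned model and a perceptual distance, but the search structure is the same):

```python
import math

SR = 8000  # assumed sample rate for this toy

def render(freq, n_harmonics, n=512):
    """Hypothetical additive-synth voice: harmonics at 1/k amplitude."""
    return [sum(math.sin(2 * math.pi * freq * k * t / SR) / k
                for k in range(1, n_harmonics + 1))
            for t in range(n)]

def distance(a, b):
    """Plain squared error between waveforms (a perceptual metric would be better)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def match_patch(target, freqs, harmonic_counts):
    """Grid search: render every candidate patch, return the closest one."""
    return min(((f, h) for f in freqs for h in harmonic_counts),
               key=lambda p: distance(render(*p), target))

target = render(220.0, 3)   # pretend this is the sampled synth sound
print(match_patch(target, [110.0, 220.0, 440.0], [1, 2, 3, 4]))  # (220.0, 3)
```

A trained model would replace the grid search with a direct sample-to-parameters prediction, but this makes the objective concrete.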


These players work in a perhaps surprising way: they emulate the complete machine to run the part of the original code that programs an emulated SID chip. So it's not like MIDI, where what's stored is a sequential stream of commands.
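In other words, what the emulated SID ultimately sees is a sequence of cycle-stamped register writes produced by the running 6502 code. A sketch of that interface (the write log and ToySID class are illustrative, not a real emulator; the register numbers are the real chip's):

```python
# What a SID emulator consumes: not note events, but raw register writes
# produced by emulating the original player code on an emulated CPU.

register_writes = [
    # (cpu_cycle, register, value) -- register layout per the real 6581:
    (0,  0x00, 0x25),  # voice 1 frequency, low byte
    (4,  0x01, 0x11),  # voice 1 frequency, high byte
    (8,  0x05, 0x09),  # voice 1 attack/decay
    (12, 0x04, 0x11),  # voice 1 control: gate on + triangle waveform
]

class ToySID:
    """Stand-in for a SID emulator: just latches register state."""
    def __init__(self):
        self.regs = [0] * 29   # the chip exposes 29 registers

    def write(self, reg, value):
        self.regs[reg] = value

sid = ToySID()
for cycle, reg, value in register_writes:
    sid.write(reg, value)      # a real player applies these at the right cycle

print(hex(sid.regs[0x04]))     # control register now holds gate + triangle
```

A MIDI file, by contrast, stores the events themselves; here the events only exist once the original code runs.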


You could finally create a SID spare part that sounds just like the original. (In fact, one that sounds exactly like an individual chip!)


If you train it on the entire library and use a real SID, maybe? There are plenty of quirks in the chip that not all songs use.


You'd definitely have to train on an entire library, and on top of that you could feed the chip random (but recorded) input.
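The "random but recorded" idea might look like this: generate random register-write sequences, play them into a real chip, record the audio, and keep the (writes, audio) pair as a training example. A sketch, where `capture_audio` is a hypothetical recording step:

```python
import random

# Generate random SID register-write sequences to exercise quirks that the
# existing tune library never touches. The pairing with recorded audio is
# only indicated, since it needs real hardware (or an emulator).

WRITE_REGS = list(range(25))   # $D400-$D418 are the writable SID registers

def random_write_sequence(n_writes, seed=None):
    rng = random.Random(seed)
    t = 0
    seq = []
    for _ in range(n_writes):
        t += rng.randint(1, 200)                    # random gap in CPU cycles
        seq.append((t, rng.choice(WRITE_REGS), rng.randrange(256)))
    return seq

def make_example(n_writes, seed):
    writes = random_write_sequence(n_writes, seed)
    # audio = capture_audio(writes)  # hypothetical: play into the chip, record
    # return writes, audio
    return writes

writes = make_example(100, seed=1)
print(len(writes))  # 100
```

Keeping the write log cycle-stamped (rather than frame-based) matters, since some chip quirks are sensitive to exact write timing.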




