One of my "back of the head" ideas is to build a neural net that could translate SID commands into the sound they produce. Does anybody know which NN architecture would be best suited for that? (I've looked around a bit, but didn't find anything "ready-to-use".)
That sounds challenging - I'd love to see how you get on. I had a somewhat related idea: train a model on a particular synth, then, when passed a sample containing a synth sound, it would try to suggest the patch settings that most closely emulate that sound.
These players work in a perhaps surprising way: they emulate the complete machine and run the part of the original code that programs an emulated SID chip. So it's not like MIDI, where the storage is a sequential stream of commands.
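For concreteness, here's a rough sketch of that player structure. The `Cpu6502` and `Sid` classes below are hypothetical stand-ins, not a real emulator - the point is just the shape: the tune's own code runs on an emulated CPU once per video frame and pokes the SID's registers, and audio is rendered from that register state, with no command stream stored anywhere.

```python
class Sid:
    """Emulated SID chip: 25 write-only registers that the play routine pokes."""
    def __init__(self):
        self.registers = [0] * 25
        self.writes = []          # log of (register, value) pokes

    def write(self, reg, value):
        self.registers[reg] = value
        self.writes.append((reg, value))


class Cpu6502:
    """Stand-in for a 6502 emulator running the tune's original play routine."""
    def __init__(self, sid):
        self.sid = sid

    def call_play_routine(self, frame):
        # A real emulator would execute the tune's original machine code here;
        # this placeholder just pokes one register so the loop does something.
        self.sid.write(0, frame & 0xFF)   # voice 1 frequency, low byte


def run_player(frames):
    # The player calls the tune's play routine once per video frame
    # (50 Hz on PAL); the SID renders audio from the resulting register state.
    sid = Sid()
    cpu = Cpu6502(sid)
    for frame in range(frames):
        cpu.call_play_routine(frame)
    return sid


sid = run_player(3)
print(sid.writes)   # every register poke the "tune" made over 3 frames
```

So the "commands" only exist as a side effect of executing the original program, which is why extracting a MIDI-like event stream from a SID file isn't trivial.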