
In an ideal system, there is just one nice stable clock as you describe.

In real systems, for example a video call, there is one clock for the microphone ADC, one clock for the video frame rate, one clock for the sender's computer, one clock for the receiver's computer, one clock for the receiver's screen refresh rate, one clock in the receiver's DAC, etc.

Whenever these clocks drift slightly (or a lot), most software will try to compensate by stretching or compressing the audio waveform (check logs for 'audio 67 us ahead, adjusting').
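A minimal sketch of that compensation, assuming the receiver has measured how far ahead its playback clock is for a given frame and stretches the waveform by the matching ratio via linear interpolation (all names and figures here are illustrative, not any particular player's implementation):

```python
def stretch(samples, ratio):
    """Resample `samples` by `ratio` (>1 stretches, <1 compresses)
    using linear interpolation."""
    n_out = max(1, round(len(samples) * ratio))
    out = []
    for i in range(n_out):
        # Map output index i back to a fractional position in the input.
        pos = i * (len(samples) - 1) / (n_out - 1) if n_out > 1 else 0.0
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

frame = [float(i) for i in range(960)]   # one 20 ms frame at 48 kHz
drift_us = 67.0                          # measured offset for this frame (assumed)
frame_us = 20_000.0
stretched = stretch(frame, 1 + drift_us / frame_us)
print(len(frame), len(stretched))        # prints "960 963"
```

Real implementations use far better resamplers (windowed sinc, WSOLA), but the principle is the same: spread the correction over the frame so nothing audibly clicks.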

This effect is very noticeable in some cases, the main one being a voice/audio call with someone in the next room, where you can hear them both directly and via the call. If they sing, you can hear the two signals drift in and out of phase seemingly at random.

You get the same if you hit play on the same MP3 on a Mac and a Windows PC at the exact same time. While the on-screen timers never visibly drift, there are enough millisecond-level drifts to really notice. Doesn't happen with two Macs.
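A back-of-the-envelope sketch of why millisecond-level drift shows up so quickly: consumer crystal oscillators are typically specified to within tens of ppm, so even two "identical" sample clocks disagree slightly (the 50 ppm mismatch below is an assumption, not a measured figure):

```python
ppm_mismatch = 50    # assumed relative frequency error between the two clocks
song_seconds = 180   # a 3-minute MP3

# Accumulated playback offset by the end of the song.
drift_ms = song_seconds * ppm_mismatch / 1e6 * 1000
print(f"{drift_ms:.0f} ms")   # prints "9 ms"
```

A few milliseconds of offset between two copies of the same audio is well into audible comb-filtering/phasing territory, even though neither on-screen timer would ever show it.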


