
I'm pretty sure that #2 isn't true; signal processing folks will be able to phrase this better than I can, but I think that if you have enough information to capture the waveform at a given frequency, you also have enough information to precisely place it in time - phasing errors are more likely due to quantization error, which is about bit depth, not sample rate. No?


[edited: I was wrong]


This is completely incorrect, per Shannon (http://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_samplin...). The sampling frequency determines the maximum frequency that can be captured, not the temporal resolution. That said, a transient containing higher frequencies will be sharper than one that doesn't, but its onset time resolution will not be determined at all by the sample rate.

Said another way, two band limited pulse signals with different onset times, no matter how arbitrarily close, will result in different sampled signals.
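A toy sketch of that claim in pure Python, using a steady tone as the band-limited signal for simplicity (the tone frequency, delay, and sample count are made up for illustration): even an onset delay far smaller than one sample period changes the sampled values from the very first sample.

```python
import math

FS = 44_100.0   # CD sample rate
F = 1_000.0     # a 1 kHz tone, well below Nyquist
DELAY = 1e-6    # 1 microsecond: roughly 1/23 of the ~22.7 us sample period

def sample(onset_delay, n_samples=8):
    """Sample a band-limited tone whose onset is shifted by onset_delay seconds."""
    return [math.sin(2 * math.pi * F * (n / FS - onset_delay)) for n in range(n_samples)]

a = sample(0.0)
b = sample(DELAY)
# Nonzero: the two sampled sequences already differ, despite the
# sub-sample delay, so the sampled signals are distinguishable.
print(max(abs(p - q) for p, q in zip(a, b)))
```

The maximum per-sample difference here is on the order of 2*pi*F*DELAY, i.e. the delay shows up directly in the sample values even though it is much shorter than the sampling interval.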


> two band limited pulse signals with different onset times, no matter how arbitrarily close, will result in different sampled signals.

This is true, but different from what I am arguing. You're saying that a listener will, over time, be able to tell that the two signals differ. I am saying that a listener will be able to determine this within a fraction of a wavelength.

It's similar to dithering a high dynamic range signal onto a lower bit depth: more than two samples are required for "evidence" of two different signals, while sampling at a high enough rate will tell you this almost instantly.

Again, I don't know if human ears are able to detect this, just that I haven't seen it addressed in these discussions.


I'm not sure what you're getting at.

As a thought experiment, let's consider a pulse that has been band-limited to 20kHz. Are you arguing that the analog output of a (filtered, idealized) DAC would look different depending on whether the DAC was running at 44.1kHz vs 192kHz? If so, I don't think many people would agree with you.

Any difference in the "timing" of the output wave would have to come from energy that falls above Nyquist of the slower sample rate. So, while I agree with you that the timing would be sharper, this is exactly caused by "higher frequencies", not by some other sort of timing improvement.


> Are you arguing that the analog output of a (filtered, idealized) DAC would look different depending on whether the dac was running at 44.1kHz vs 192kHz?

No. I'm arguing this: take a 44.1kHz signal and upsample it to 192kHz. It's the same signal, same bandwidth and everything. Duplicate the stream and add a one-sample delay to one of the channels. When you hit play, that delay would be there. If you downsampled back to 44.1kHz after applying the delay to one of the channels, you would hear almost the same thing. The difference is that you could not detect the difference between the signals until after a few samples. With the 192kHz stream it would be unambiguous after two samples.

Remember, Nyquist-Shannon holds if you have an infinite number of samples. If your ears could look into the future then what you say is perfectly correct, but they need time to collect enough samples to identify any timing discrepancies.


You are right.


I think what jaylevitt is referring to is that there is interpolation going on in the DAC. That could mean (I'm no DAC expert, so not sure) that the DAC could place the start points (of a transient, e.g.) with finer granularity than the sample rate would seem to allow.

But the question for me is how exact that guessing is. Correct me if I'm wrong, but that interpolation happens twice: once during recording by the ADC and once on playback by the DAC.

So a lot of this whole discussion (yeah, finally something about acoustics :) depends on how accurately interpolation works in ADCs and DACs.


This is the core secret of the sampling theorem. It says that if you have signals of a particular type (bandlimited) you can do a certain kind of interpolation and recover the original exactly. This is no more surprising than the fact that you can recover the coefficients of a degree-N polynomial from any N+1 points on it, though the computation is easier.
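That "certain kind of interpolation" is the Whittaker-Shannon formula, x(t) = sum over n of x[n] * sinc(FS*t - n). A toy pure-Python sketch (the frequencies are made up, and the infinite sum is truncated at N terms, so the result is only approximate) recovering the signal's value at an instant that falls between sample points:

```python
import math

FS = 8.0   # sample rate, toy value
F = 1.0    # tone well below the Nyquist frequency of FS/2 = 4

def sinc(x):
    """Normalized sinc, the interpolation kernel of the sampling theorem."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

N = 400    # truncation: the real sum runs over all integers, so this is approximate
samples = [math.sin(2 * math.pi * F * n / FS) for n in range((-N), N + 1)]

def reconstruct(t):
    """Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc(FS*t - n)."""
    return sum(s * sinc(FS * t - n) for n, s in zip(range((-N), N + 1), samples))

t = 0.137  # an instant that falls between sample points
# Small: truncation error only; the off-grid value is recovered.
print(abs(reconstruct(t) - math.sin(2 * math.pi * F * t)))
```

On the grid points the sinc kernel is exactly 1 at its center and 0 at every other integer, so the samples themselves are reproduced; between them, the sum fills in the unique bandlimited curve through those samples.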

It turns out that if you reproduce a digital signal using stair steps you get an infinite number of harmonics, but _all_ of them are above the Nyquist frequency. The frequencies below Nyquist are undisturbed. When you then apply a lowpass filter to remove these harmonics (after all, we said at the start that the signal was bandlimited), you get the original back unmolested.
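A rough sketch of the stair-step claim in pure Python (toy sample rates and a brute-force DFT, chosen only for illustration): holding each sample for L steps creates image tones above the original Nyquist frequency, while the bins below it stay clean.

```python
import cmath
import math

FS = 8    # original sample rate, toy value
L = 4     # each sample is held for L steps ("stair step" reconstruction)
N = 64    # number of original samples; F0 lands on an exact DFT bin (no leakage)
F0 = 1    # tone frequency, well below the original Nyquist of FS/2 = 4

x = [math.sin(2 * math.pi * F0 * n / FS) for n in range(N)]
held = [s for s in x for _ in range(L)]   # stair-step signal at rate L*FS

M = len(held)
def dft_mag(k):
    """Magnitude of DFT bin k of the held signal (bin spacing is FS/N Hz)."""
    return abs(sum(held[m] * cmath.exp(-2j * math.pi * k * m / M) for m in range(M)))

tone = dft_mag(F0 * N // FS)        # the original tone at F0 = 1 Hz
image = dft_mag(N - F0 * N // FS)   # first image at FS - F0 = 7 Hz, above Nyquist
clean = dft_mag(2 * N // FS)        # an unrelated bin below Nyquist (2 Hz): ~zero
print(tone > 0, image > 0, clean < 1e-6)
```

The hold operation attenuates the images (its frequency response is a Dirichlet kernel) but does not remove them, which is exactly the job the reconstruction lowpass filter does.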

Because analog filters are kinda sucky (and because converters with high bit depth aren't very linear), modern ADCs and DACs are oversampling designs: they internally resample the signal to a few MHz and apply those reconstruction filters digitally with stupidly high precision. Then they only need a very simple analog filter to cope with their much higher sampling frequency.


But at a given sample rate, if I'm sampling at bit depth 2, doesn't that quantization error end up temporally shifting the sine wave I'm reconstructing?



