
Humans will hear the impact a >20 kHz frequency has on the lower frequencies, not the 30 kHz frequency itself. That's been proven a million times.


If that is true, surely in your upthread example of recording a triangle, the "impact on lower-than-20 kHz frequencies" would already have happened during the recording process, between the triangle and the microphone, and would have been captured perfectly by recording equipment that's proven capable of capturing everything below 20 kHz? So we'd "hear" the effect as part of the recording instead of requiring it to happen in our listening room…


>That's been proven a million times.

Then you should be able to provide at least one citation.


If you're not going to hear the frequency, then there's no reason to record it, so I don't see what your objection is.


Yes, but if you sample the signal and then reconstruct it as a staircase wave, neglecting to filter the result, you will end up reproducing tons of high frequencies. That is why we need to filter the output to remove signals >20 kHz: to remove the harmonics that result from reproducing the staircase wave.

Of course, filters aren't perfect; they introduce phase shift and roll-off. So we oversample the signal to push those reconstruction artifacts to frequencies much higher than 20 kHz, so that the filtering occurs well outside the audible band, allowing us to remove all of these harmonics without affecting the desired signal.

Basically, the end result is that by sampling the signal, you introduce high-frequency content that must be removed prior to playback. This high-frequency content is one of the reasons old CD players from the '80s and '90s cause "listener fatigue", although I have no sources to back up that last statement.
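To make that concrete, here's a toy Python sketch (pure stdlib; frequencies are expressed in DFT bins rather than Hz, and all the names and numbers are mine, purely illustrative). It samples a tone, rebuilds it as a staircase (zero-order hold), and shows an "image" tone appearing at fs − f0 — exactly the kind of artifact the output filter has to remove:

```python
import math
import cmath

N = 512          # "analog" grid points per window
FS = 64          # sample rate, in DFT bins per window
F0 = 10          # tone frequency in bins (well below Nyquist, FS/2 = 32)
HOLD = N // FS   # analog points covered by each held sample

# Sample the tone, then reconstruct it as a staircase (zero-order hold)
samples = [math.sin(2 * math.pi * F0 * n / FS) for n in range(FS)]
staircase = [samples[i // HOLD] for i in range(N)]

def dft_mag(x, k):
    """Magnitude of bin k via a direct DFT sum (slow but dependency-free)."""
    return abs(sum(v * cmath.exp(-2j * math.pi * k * i / len(x))
                   for i, v in enumerate(x)))

tone = dft_mag(staircase, F0)        # the wanted tone at bin 10
image = dft_mag(staircase, FS - F0)  # spurious image at fs - f0 = bin 54
```

The staircase carries substantial energy at bin 54 (and at higher images around multiples of fs), even though the original tone had none — that's the content the reconstruction filter exists to strip out.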


Yup... people need to get very clear in their heads the difference between the recording/sampling/mixing/mastering stages, where high sample rates, bit depths, good gear, and knowledge are helpful, and playback, which is a completely different thing.

(Not for eatmyshorts - you get this, I gather.) Everyone gets that "upsampling" can't add detail to a recording, right? You can't get more than you've got, no matter what you do. There is no magic. You upsample so that the harmonics generated in the digital-to-analog process during playback are driven further up in the spectrum, so when you get to the analog stage you can use a nice gentle analog filter to remove them. Without the upsampling, you need a nasty steep analog filter, and that can have audible (or at least measurable) side-effects in the audible spectrum. eatmyshorts - correct me if I misstated any of that, please...


You got it 100% correct. You upsample simply to move the frequency of the analog filter higher, with a gentle rolloff (ideally a first-order filter, so you introduce minimal phase effects) to get your final signal.
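A toy continuation of the staircase idea (pure stdlib Python; bins instead of Hz, numbers illustrative): for a pure tone, an ideal 4x upsampler is just the same tone evaluated at the higher rate, and holding *that* as a staircase moves the first image from fs − f0 out to 4fs − f0 — which is why a gentle analog filter then suffices:

```python
import math
import cmath

N = 512          # "analog" grid points per window
FS = 64          # original sample rate, in DFT bins
F0 = 10          # tone frequency in bins
UP = 4           # oversampling factor

def zoh(samples, n):
    """Zero-order hold: repeat each sample to fill n analog points."""
    hold = n // len(samples)
    return [samples[i // hold] for i in range(n)]

def dft_mag(x, k):
    return abs(sum(v * cmath.exp(-2j * math.pi * k * i / len(x))
                   for i, v in enumerate(x)))

# Ideal upsampling of a pure tone: evaluate it directly at the higher rate
lo = zoh([math.sin(2 * math.pi * F0 * n / FS) for n in range(FS)], N)
hi = zoh([math.sin(2 * math.pi * F0 * n / (UP * FS))
          for n in range(UP * FS)], N)

img_lo = dft_mag(lo, FS - F0)           # image at bin 54, near the audio band
img_hi = dft_mag(hi, FS - F0)           # same bin after 4x oversampling: gone
img_new = dft_mag(hi, UP * FS - F0)     # image pushed out to bin 246
```

The nearby image vanishes; the remaining one sits four times further out, where a gentle first-order analog filter can kill it without touching the passband.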


In other words, your theory is that the superposition principle doesn't hold for sound waves.


Well, the superposition principle only holds in linear media. Sound waves can propagate in linear media, but they can also propagate in nonlinear media, and any medium that can carry sound will go nonlinear at sufficiently high amplitudes.
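A toy numerical illustration of that point (stdlib Python; the quadratic coefficient is made up): push two tones through a mildly nonlinear medium, modeled here as x + 0.1x², and a difference tone appears at f2 − f1 that the linear superposition doesn't contain:

```python
import math
import cmath

N = 256
F1, F2 = 25, 30  # two tones, in DFT bins (stand-ins for ultrasonic content)

clean = [math.sin(2 * math.pi * F1 * n / N) + math.sin(2 * math.pi * F2 * n / N)
         for n in range(N)]
bent = [x + 0.1 * x * x for x in clean]  # mild quadratic nonlinearity

def dft_mag(x, k):
    return abs(sum(v * cmath.exp(-2j * math.pi * k * i / len(x))
                   for i, v in enumerate(x)))

diff_clean = dft_mag(clean, F2 - F1)  # bin 5 in the linear sum: absent
diff_bent = dft_mag(bent, F2 - F1)    # bin 5 after the nonlinearity: present
```

This is the intermodulation-distortion mechanism: in a nonlinear medium (or speaker), two inaudible tones can produce a product that lands in the audible band.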


Note the lack of citation


http://en.wikipedia.org/wiki/Overtones

I don't know about the physics of the speaker itself generating the overtone (in cabinet), but it could certainly resonate a wine glass in the room, for example.


Yes, overtones exist, and yes, overtones affect the sound, and yes, if you filtered the sound to remove overtones in the audible range then it would sound different. However, if you remove overtones outside the audible range then it will not make an audible difference (this is what xiphmont was saying in TFA).

So no, your Wikipedia link is not a citation for the claim that cmer made.


"A/B or GTFO," I believe is the parlance of our times.



