Same goes for 16/24 bit; however, the difference between 16 and 24 bit is actually audible.
No, the difference is not audible at all. At 16 bits of depth on a normal low-level audio signal (~0.3 volts), we're talking about less than 0.000005 volts per amplitude step. That difference is already lost in the THD of the DAC in your audio output stage. Then it gets lost again in the amplifier, again in the cable to your speakers or headphones, and again in the speaker elements. What survives of a normal low-level audio signal is about 14 bits of resolution.
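For anyone who wants to check the arithmetic, here is a minimal sketch in Python, assuming the ~0.3 V figure above and the textbook 6.02·N + 1.76 dB dynamic-range formula for an N-bit full-scale sine:

```python
# Sanity check of the step-size claim: a ~0.3 V signal range
# quantized to N bits, plus the textbook dynamic-range estimate
# (6.02*N + 1.76 dB for a full-scale sine wave).

def step_volts(signal_range_v: float, bits: int) -> float:
    """Voltage per quantization step across the given signal range."""
    return signal_range_v / (2 ** bits)

for bits in (14, 16, 24):
    print(f"{bits} bit: {step_volts(0.3, bits):.2e} V per step, "
          f"~{6.02 * bits + 1.76:.0f} dB dynamic range")
```

At 16 bits this prints roughly 4.6e-06 V per step, which is where the "less than 0.000005 volts" figure comes from.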
44100 is not a bad sampling rate, but it necessitates very sharp anti-aliasing filters, which are audibly bad. A bit more headroom there is genuinely needed.
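To put a rough number on "very sharp": at 44.1 kHz the filter has to fall from flat at 20 kHz to full attenuation by the 22.05 kHz Nyquist limit. A sketch using SciPy's Kaiser-window length estimate; the 96 dB stopband spec and the 20 kHz passband are illustrative assumptions, not figures from the thread:

```python
# Estimate the FIR length needed to pass 20 kHz flat and reach full
# attenuation by Nyquist, at two sample rates. The narrower the
# transition band, the longer (sharper) the filter must be.
from scipy.signal import kaiserord

def taps_needed(fs_hz: float, passband_hz: float = 20_000.0,
                stopband_db: float = 96.0) -> int:
    nyquist = fs_hz / 2
    width = (nyquist - passband_hz) / nyquist  # normalized transition width
    numtaps, _beta = kaiserord(stopband_db, width)
    return numtaps

for fs in (44_100, 96_000):
    print(f"{fs} Hz: ~{taps_needed(fs)} taps")
```

Under these assumptions the 44.1 kHz filter needs on the order of six times as many taps as the 96 kHz one, which is the headroom argument in a nutshell.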
44.1kHz IS a bad sampling rate for accurately reproducing anything resembling a triangle wave or square wave above 5kHz; their upper harmonics fall beyond the 22.05kHz Nyquist limit.
Why do you think "this difference gets lost in the THD already at the DAC"? Do you have numbers to back that up? What's the noise floor of the DAC? What's the noise floor of the output stage? Do you have those numbers?
High dynamic range is not about the lowest volume you can hear; it's about the voltage resolution between one sample point and the next. Based on your assumption, we can all see black whether we use 16-bit RGB color or 24-bit RGB color, so what's the point of using 24-bit RGB?
Many years of building audio equipment (in particular analog synthesizers), and equally many years of being meticulously anal about sourcing the best components for my circuits and reading the specifications of every single op-amp I've ever employed, are why I think so.
I am not saying that no DAC on the planet can handle five millionths of a volt, but I am saying that five millionths of a volt isn't surviving through the particular DACs and the rest of the electronics used in your PC or living-room hi-fi equipment.
Heh, it's funny to see this late-nineties debate get re-hashed here. Also kind of fun.
If it were true that there's no audible difference between 16 and 24 bit, companies like Alesis, Otari, ProTools, etc. wouldn't have spent the last 15 years ditching 16 bit like an old pair of smelly sneakers. (better metaphors welcome).
Seriously, anyone who has sat down in a real listening environment for 5 minutes A/Bing 16 vs 20 bit, 16 vs 24, etc. hears the difference immediately. There's no question. This is why you can buy ADAT 16 bit 'blackfaces' for $100, down from their original $4,000.
Sure, moving up from 16bit recording was an improvement, but having done engineering for a company listed above for over a decade, I can tell you that we went 24bit/192kHz because of market demand, not for any real technical reasons. We thought it was fairly unnecessary ourselves. It was also kind of an arms race with other companies, much like the megapixel arms race for digital cameras.
Yes, and anyone who has ever sat down in front of an LCD flatscreen watching their favorite movie on DVD/BD using a gold-plated $200 HDMI cable instead of a $4.99 Walmart HDMI cable sees the extra sharpness immediately. This is why non-gold-plated non-OFC HDMI cables are down to $4.99 a piece from their original $49.99 at introduction.
The difficulty I had is that the same person claimed they could hear the difference between 44 kHz and 96 kHz, when the article (and every other comment that cited outside sources) says that is well outside of human capability.
That's cute. Obviously you've never recorded a rock band while riding the pre to compensate for 16bit's terrible noise floor and horribly limited headroom. You've never had the joy of ruining a perfectly good take because of that wonderful sound it makes when the volume spikes into digital distortion despite compressing the wazoo out of the input source. Glorious sound, digital distortion. Run a dentist drill through an old Speak & Spell and you'd just about have it.
You've never rented an expensive tube EQ during a mix to cover up 16bit's grating harshness from 10k to 15k. Or tried like mad to make the bass drum sound like a freaking bass drum and not a pie pan slamming against the back of a plastic trash can. And yes, we had good mics and pres, all standard studio stuff. Decent, not brilliant, converters, but it was the 16bit that was the problem. Getting those 20bit XTs for the first time was like walking into the Promised Land.
Sure, there's lots of marketing ploys out there, lots of snake oil. Moving up from 16 bit was not one of them.
It looks like you are jumping in without actually having read the article in question. That's OK, but you are wasting space building a straw man and then vigorously demolishing it.
The original article explicitly mentions how 24bit is useful for recording.
Speaking of jumping in without reading... I wasn't responding to the article. I was responding to the commenter who said you couldn't tell the difference between 16 and 24 bit.
And you cannot tell the difference. The reason to record at 24 bits is so you don't have to be as precise in setting the recording level. If the level is set well, 16 bits captures the signal just fine (that, by the way, is also explained in the article).
> Professionals use 24 bit samples in recording and production [11] for headroom, noise floor, and convenience reasons.
...snip...
> Modern work flows may involve literally thousands of effects and operations. The quantization noise and noise floor of a 16 bit sample may be undetectable during playback, but multiplying that noise by a few thousand times eventually becomes noticeable. 24 bits keeps the accumulated noise at a very low level. Once the music is ready to distribute, there's no reason to keep more than 16 bits.
The original article does say that yes, during recording and production, 24-bit audio gives you a lot more room to play with. That doesn't mean you can hear the difference between 16 and 24 bits in the final recording; just that 24 bits gives you more room to stay out of trouble during production.
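That accumulation effect is easy to demonstrate with a toy simulation: re-quantize to 16 or 24 bits after each of a long chain of gain changes and compare against an unquantized reference. The 2,000-step chain and the gain range below are made-up illustrative values, not anything from the article:

```python
# Toy demo of quantization noise accumulating over many processing
# steps: apply a chain of small gain changes, re-quantizing to N bits
# after each one, and measure the error against a float reference.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(44_100) / 44_100
signal = 0.5 * np.sin(2 * np.pi * 440 * t)    # 1 s of a 440 Hz sine
gains = rng.uniform(0.95, 1.05, size=2_000)   # arbitrary edit chain

def quantize(x: np.ndarray, bits: int) -> np.ndarray:
    scale = 2 ** (bits - 1)
    return np.round(x * scale) / scale

for bits in (16, 24):
    x = quantize(signal, bits)
    ref = signal.copy()
    for g in gains:
        x = quantize(x * g, bits)   # noise added at every step
        ref = ref * g               # noiseless reference path
    err = x - ref
    snr_db = 10 * np.log10(np.mean(ref**2) / np.mean(err**2))
    print(f"{bits}-bit intermediates: ~{snr_db:.0f} dB SNR "
          f"after {len(gains)} operations")
```

On this toy chain, 16-bit intermediates end up with an SNR tens of dB worse than 24-bit intermediates, even though a single 16-bit quantization pass is inaudible; that is exactly the article's argument for 24-bit production and 16-bit distribution.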