"24/192 Music downloads, and why they make no sense"

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
:sing:
Datz 'cause your amps are too big! :D

Put me in the "192 kHz is a waste of bandwidth" camp. 24/96 seems nice. I was happy with 48 kHz DAT when that was alive.

:D


What I can tell you guys is that there is a huge difference between 16/44.1 and 24/192. There is too much "hash" with 16-bit. Remember that each extra bit doubles the number of quantization levels, so going from 16 to 24 bits you get 2^8 = 256 times as many levels. Admittedly, 192 kHz does sound softer, but at the same time it's more natural sounding; the downside is that it eats up more resources.
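For reference, the arithmetic behind the bit-depth claim: each extra bit doubles the number of quantization levels, and the usual rule of thumb for an undithered full-scale sine is roughly 6 dB of dynamic range per bit. A quick check in Python:

```python
# Quantization levels and theoretical dynamic range for common bit depths.
# SNR = 6.02*N + 1.76 dB is the standard figure for a full-scale sine.
for bits in (16, 24):
    levels = 2 ** bits
    dr_db = 6.02 * bits + 1.76
    print(f"{bits}-bit: {levels:,} levels, ~{dr_db:.1f} dB dynamic range")

# Going from 16 to 24 bits multiplies the level count by 2**8:
print(2 ** 24 // 2 ** 16)  # 256
```

So the step from 16 to 24 bits buys 256 times the levels, or about 48 dB more theoretical dynamic range, not 16 times.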

24/96 may be the best trade-off... :sing: Upsampling DACs sound horrible to me, don't likey.
 
I have fooled many with extremely well recorded 16/44.1

You cannot easily tell the difference. I still believe that 24-bit has a huge advantage when you have the right equipment and the recording is well done. Other than that, many 24-bit recordings are very badly recorded and are only hype for selling... I bought many HD recordings and they don't always live up to their format...

Many are just resampled from the 44.1 format, and that does not always do them justice...

Reel-to-reel tapes had amazing sound (when well recorded) but are really hard to get these days.

Do
 
With software, it's 100%.

The difference can reliably be detected by ear, but only if you really boost the volume control during the softest passages (or during the silent parts). Otherwise, it's, errrr, lost in the noise.

One of the authors (Meyer or Moran) scored 15/15 detecting hi-res using a short section of very quiet "room tone" which was amplified about 20 dB above their usual reference level.
 
To me and probably many others, listening to music is a pleasure and entertainment. I don't really spend time listening to a 4-second passage to compare A and B in a repetitive loop just to know if there is more distortion, etc...

If it sounds good to my ears and is pleasurable, then it is a win; otherwise I just put it aside and don't waste time on it... Maybe I'll only listen to it in the car, where it matters less... ;)

I do, however, have high respect for those who take the time to make it more of a science.

Ciao!
Do
 
I'm not sure whether you don't understand what you are saying, or I don't understand what you are saying. Below half the sample rate you can reproduce the input whatever its frequency. You either have enough samples (below Nyquist rate) or not enough (above Nyquist rate).

If I have a 48k sample rate and am just reproducing a 100 Hz tone, I have way more samples than I need. So with smart software, IF the encoding is done to, say, 24-bit accuracy, you can bounce the output +/- the LSB to get more resolution. This also requires that your 16-bit DAC is 24-bit linear.

So in theory it is possible to get better resolution at frequencies below the Nyquist limit. Of course, there are no real 24-bit converters, nor 16-bit converters that are more than 16-bit accurate!
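A minimal numpy sketch of this sub-LSB trick (a synthetic 1-LSB quantizer, nothing vendor-specific): a tone only 0.4 LSB in amplitude disappears under plain rounding, but survives when the signal is dithered before quantization:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
t = np.arange(n)
# A tone at 0.4 LSB amplitude, entirely below one quantization step.
tone = 0.4 * np.sin(2 * np.pi * 0.01 * t)

plain = np.round(tone)  # undithered: every sample rounds to zero
tpdf = rng.uniform(-.5, .5, n) + rng.uniform(-.5, .5, n)  # TPDF dither, 2 LSB wide
dithered = np.round(tone + tpdf)

# Correlate each quantized signal against the known tone shape.
ref = np.sin(2 * np.pi * 0.01 * t)
print(np.dot(plain, ref) / n)     # ~0: the tone was erased
print(np.dot(dithered, ref) / n)  # ~0.2: the tone is still there, under the noise
```

The dithered output is noisier sample by sample, but the tone's correlation (0.4 amplitude times mean sin^2 = 0.2) is preserved, which is exactly the "resolution below the LSB" being described.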
 
Exactly. People who dispute this just don't understand sampling. It is very instructive to play with DSP and input frequencies near the Nyquist limit without an anti-aliasing filter.

I have crunched 24/96 down to 16/44.1 and heard no difference at all. Does that mean I am deaf? No, because Meyer and Moran did the same thing in 2007 and published a JAES paper about it: under double-blind conditions, nobody could tell the difference with music signals.
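For anyone wanting to try the crunch-down themselves, here is a rough sketch of the two steps involved (numpy/scipy, with a synthetic tone standing in for a real 24/96 file):

```python
import numpy as np
from scipy.signal import resample_poly

rng = np.random.default_rng(1)
fs_in, fs_out = 96_000, 44_100
t = np.arange(fs_in) / fs_in
x = 0.5 * np.sin(2 * np.pi * 1000 * t)  # 1 s of a 1 kHz tone at 96 kHz

# Step 1: rate conversion. 96000 -> 44100 is the rational ratio 147/320,
# and resample_poly applies the required anti-aliasing filter internally.
y = resample_poly(x, 147, 320)

# Step 2: requantize to 16 bits with TPDF dither.
lsb = 1 / 2 ** 15
d = (rng.uniform(-.5, .5, y.size) + rng.uniform(-.5, .5, y.size)) * lsb
y16 = np.round((y + d) / lsb) * lsb
print(y16.size)  # 44100 samples
```

Feeding real audio through the same two steps (and comparing against the original by ear) is essentially the experiment being described.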

Ron, you have to be very, very careful about the point of reference used in any testing. For example, a certain Arny K used to send around a CD that was supposed to demonstrate the lack of audible difference between two or more "conditions". Sure enough, no difference was audible. The problem is/was that with the particular source material chosen, nobody could possibly hear any differences unless they were GROSS differences.

The selection of source material is key and critical.

Then the specific signal chain used is also key and critical.

What this means to me is that in most situations, for most recordings there is little or no audible difference to be heard between things like higher and lower bit rates and bit depths. It really does not matter for most people on most systems. But that does not exclude the situations where it can be audible and makes a difference.

These papers and tests really should limit their conclusions to the specific conditions under which the test was made, despite their interest or desire to state that they generalize to all situations. Maybe if there were enough independent corroboration via similar tests using equal or better test conditions, it might be reasonable to widen the scope. Imo, of course.

Also, it is important to leave room for undocumented or documented confounding factors, and the effects of masking (if any)...

Needless to say I am skeptical of most of these "tests" since I find the lack of controls and full disclosure of the test conditions to be troubling.

Ymmv.

_-_-bear
 
So are we saying that the difference can be heard, but only on the quiet passages (or 'silence') when it is boosted by 20dB above normal listening level? For normal music listening that is the same as saying that the difference cannot be heard? This is what is meant by "huge difference"?

I'm not following you. Are you saying there is no difference between 16-bit and 24-bit...?
 
simon7000 said:
you can bounce the output +/- the LSB to get more resolution
Isn't this what dither does?

a.wayne said:
are you saying there is no difference between 16 bit vs 24 bit
You tell me there is a huge difference. Others tell me there is a small difference which can only be heard on quiet passages when boosted by 20 dB, which seems to me to be no difference for normal listening. Tests appear to show no difference. Who do I believe?
 
A useful definition of "accuracy" for audio is entirely in the dynamic specs: spurious-free dynamic range, distortion, and noise numbers are better than 16 bits for many ADC and DAC chips today.

Nearly 20 years ago I used a 100 kHz 16-bit ADC with internal auto-calibration promising 18-bit linearity, and I was able to resolve and verify the "below-LSB" linearity with averaging.
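A toy model of that averaging technique, assuming a hypothetical DC input between two 16-bit codes and about 1 LSB of rms noise in the signal chain (the noise is what makes sub-LSB averaging work at all):

```python
import numpy as np

rng = np.random.default_rng(2)
true_value = 1000.37  # hypothetical DC input, in LSB units, between codes

def adc_reads(x, n):
    """Simulate n conversions of the same input with ~1 LSB rms noise."""
    return np.round(x + rng.normal(0.0, 1.0, n))

single = adc_reads(true_value, 1)[0]          # one reading: an integer code
averaged = adc_reads(true_value, 100_000).mean()
print(single, averaged)  # the average recovers the fractional part
```

A single conversion can only answer with a whole code; the mean of many noisy conversions converges on the true sub-LSB value, which is how "below-LSB" linearity can be measured with a 16-bit converter.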


I would rather see higher sample rates at 16 bit than more of the totally foolish 24/44 Beatles releases

A higher sample rate would address any lingering debate over the highest human audio frequency perception and allow totally inaudible dither + noise shaping by pushing it to ultrasonic frequencies beyond the analog reconstruction filter's corner frequency.
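A sketch of that idea using a first-order error-feedback shaper at 96 kHz (a simplification; real noise shapers are higher order and psychoacoustically weighted):

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 96_000
n = 1 << 16
x = 0.25 * np.sin(2 * np.pi * np.arange(n) * 997 / fs)  # 997 Hz test tone

lsb = 1 / 2 ** 15  # 16-bit step for a +/-1 full-scale signal
d = (rng.uniform(-.5, .5, n) + rng.uniform(-.5, .5, n)) * lsb  # TPDF dither

# First-order error-feedback noise shaper: subtracting the previous
# quantization error highpass-shapes the error spectrum.
y = np.empty(n)
err = 0.0
for i in range(n):
    v = x[i] - err
    y[i] = np.round((v + d[i]) / lsb) * lsb
    err = y[i] - v

spec = np.abs(np.fft.rfft(y - x)) ** 2
f = np.fft.rfftfreq(n, 1 / fs)
low = spec[f < 20_000].mean()   # audio band
high = spec[f > 30_000].mean()  # ultrasonic band
print(high / low)  # shaped noise sits mostly out of band
```

The error spectrum follows |1 - z^-1|, so at a 96 kHz rate most of the dither + quantization noise lands above 20 kHz, where the reconstruction filter can remove it.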
 
These papers and tests really should limit their conclusions to the specific conditions under which the test was made, despite their interest or desire to state that they generalize to all situations. ...

Needless to say I am skeptical of most of these "tests" since I find the lack of controls and full disclosure of the test conditions to be troubling.

So since Meyer and Moran were careful not to draw a "generalized" conclusion, tested on a variety of systems, and disclosed their controls and test conditions, you presumably have no problem with their work. I quite liked their conclusion, which basically said: we can't find a situation where the differences between 16/44 and hi-res can be heard; if you disagree, let's see your data.
 
You tell me there is a huge difference. Others tell me there is a small difference which can only be heard on quiet passages when boosted by 20 dB, which seems to me to be no difference for normal listening. Tests appear to show no difference. Who do I believe?

So your issue is not with there being a difference; your issue is with the word "huge"...?

I guess first we have to define huge; it might mean something else to you. The difference is huge to me. Do you believe all speakers or all audio systems have the same resolve? I'm sure there are systems/setups where there would be no difference...

A useful definition of "accuracy" for audio is entirely in the dynamic specs: spurious-free dynamic range, distortion, and noise numbers are better than 16 bits for many ADC and DAC chips today.

Nearly 20 years ago I used a 100 kHz 16-bit ADC with internal auto-calibration promising 18-bit linearity, and I was able to resolve and verify the "below-LSB" linearity with averaging. I would rather see higher sample rates at 16 bit than more of the totally foolish 24/44 Beatles releases.

A higher sample rate would address any lingering debate over the highest human audio frequency perception and allow totally inaudible dither + noise shaping by pushing it to ultrasonic frequencies beyond the analog reconstruction filter's corner frequency.

Try 24/96; 24/44.1 is an improvement, by the way. I would prefer all to go 24-bit, with sampling frequency anywhere from 44.1 to 192 kHz...

So why does upsampling sound better to me?

Not sure. With high-resolution systems or headphones you can hear the noise (grain); maybe poor system resolution.

So since Meyer and Moran were careful not to draw a "generalized" conclusion, tested on a variety of systems, and disclosed their controls and test conditions, you presumably have no problem with their work. I quite liked their conclusion, which basically said: we can't find a situation where the differences between 16/44 and hi-res can be heard; if you disagree, let's see your data.

What data would suffice? How would this test have to be conducted for it to be satisfactory to your taste? Ohhh, for the record, they are wrong; I cannot agree with their conclusion...

One may agree or not that hi-res sounds better, but there is no denying there is a sonic difference. Anyone can devise a liar, liar situation if that is their intent.

I'm sure you are aware of such .................
 