Blind DAC Public Listening Test Results

Status
Not open for further replies.
If you can't hear the quantization noise and distortion on CDs when the music reaches very low levels, where perhaps only 4 bits are being used, then you are severely hearing impaired.
With a 16-bit signal, for only 4 bits to be in use, the signal has to be about 72 dB below 0 dBFS. So, if you're into experiments, try this one:

Fire up Foobar2000, which has a volume control calibrated in dB. With Foobar's slider at maximum, crank the overall volume to a comfortably loud listening level. Then try to turn it down to -72 dB. At least on my system, Foobar only lets you turn it down to -55 dB and then goes to off. At -55 dB I can only ever so faintly hear anything, let alone tell if it's suffering "quantization noise and distortion" as you claim.

Or, if you want a true -72 dB instead of -55 dB, take a favorite track and use Audacity to apply -72 dB of gain reduction. Play the normal track as loud as you dare, then play the -72 dB track without touching the volume. Can't hear much, can you? I'll be happy to post examples of what a -72 dB track sounds like on my blog if there's enough interest.
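The arithmetic behind those figures is just the roughly 6 dB-per-bit rule, since each bit doubles the available amplitude. A quick sanity check in Python (my own illustrative sketch, not from any post here):

```python
import math

# Each bit of a linear PCM word doubles the amplitude range:
db_per_bit = 20 * math.log10(2)
print(round(db_per_bit, 2))        # 6.02

# Leaving 12 of a 16-bit signal's bits unused puts the remaining
# 4-bit signal this far below full scale:
print(round(12 * db_per_bit, 1))   # 72.2

# And applying -72 dB of gain means multiplying each sample by:
print(round(10 ** (-72 / 20), 6))  # 0.000251
```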

The above demonstrates the only way you can hear what you describe is to ADD A HUGE AMOUNT OF UNREALISTIC GAIN--i.e. "crank up the volume" as you describe to a completely unrealistic level. So your "test" is completely invalid for regular listening. I'm sorry if you don't accept that, but it's just a fact.
 
I didn't "outright reject" your test, I said it's not realistic to "crank up the volume" (your words) to where you can hear such differences and hold that up as realistic proof. I can get out a microscope and prove that the plate you claim is clean (because it just came out of the dishwasher) really isn't perfectly clean. But does that matter? No. It's clean enough. Just like 16 bits is enough for the way people really listen to music.

I wholeheartedly disagree.

Would you consider music that's 72dB below the peak level in a classical recording to be unrealistic to expect to hear clearly? I don't. I expect to hear it. -72dB is easy to hear with headphones at only moderate volume.

I'll give it one more try with a very similar but not identical situation (I know I said I gave up, but my plane is not here yet). There's another test that can prove my assertion that there is an audible difference between CD (44.1kHz/16bit) and 24bit/44.1kHz formats. I know we're talking about SACD here, but there is nobody I know who owns DSD recorders to make SACD recordings, let alone burners and blanks.

You need a very high quality digital audio recorder capable of 24bit/44.1k recording to perform this test and a sound source that is a pure tone (pure sine wave). A quality sinewave function generator is a good source (or a digital one even better).

Step 1) Create a recording of a pure tone at 24bit/44.1k. 500Hz works well since it's not irritating to listen to, but almost any frequency will work; just stay under 5kHz (for the sake of your ears and to make the distortion easier to hear). Make certain you use almost all of the dynamic range that 24 bits offer.

Step 2) Digitally reduce the signal level of the recording by about 72dB. This is equivalent to throwing away about 12 bits of information, leaving the equivalent of a 12-bit recording inside a 24-bit container, with the top 12 MSBs unused. This is a realistic simulation of low-level passages in classical music.

Step 3) Digitally convert the 24-bit file to a 16-bit file by truncating the lower 8 LSBs. Now you have reduced the signal's resolution to about 4 bits, with the top 12 MSBs unused. Remember, all we are doing is simulating a low-level passage that would be about 72dB down from the peak, still well above the noise floor of -90dB or better for CDs.

Step 4) Listen to the two files. If you can’t hear the difference, then you are hearing impaired. The 16 bit file will sound distorted, the 24bit file won’t.

Program material that’s at -72dB on a CD is represented by only 4 bits. On a 24bit recording it’s represented by 12 bits. Now, if you want to refuse to perform the test, or if you assert that 4 bit resolution sounds as clean as 12 bit resolution then feel free to pontificate.
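For anyone who wants to check the arithmetic above without a recorder, the whole pipeline can be simulated in a few lines of pure Python. This is my own sketch of the procedure (tone length and variable names are mine); it just confirms the bit counts being claimed:

```python
import math

FS = 44100            # sample rate (Hz)
FREQ = 500            # test-tone frequency suggested above
FULL_24 = 2**23 - 1   # peak value of a signed 24-bit sample

# Step 1: a near-full-scale 24-bit sine tone (one second)
tone = [round(FULL_24 * math.sin(2 * math.pi * FREQ * n / FS))
        for n in range(FS)]

# Step 2: digitally reduce the level by 72 dB; the peak now needs
# only about 12 bits, so the top 12 MSBs sit unused
gain = 10 ** (-72 / 20)
quiet_24 = [round(s * gain) for s in tone]
print(max(quiet_24).bit_length())   # 12

# Step 3: drop to 16 bits by truncating the 8 LSBs; the -72 dB signal
# is now squeezed into roughly 4 bits
quiet_16 = [s >> 8 for s in quiet_24]
print(max(quiet_16).bit_length())   # 4
```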

Thanks.
 
Looks like we cross posted.

OK, change the test to -50dB. The quantization distortion will still be audible, just not as much.

If you can still barely hear the signal at -72dB then it's part of the music, is it not? And if it's not, why not just make CDs 12 bits? That's all you need according to your logic. 72dB and you're happy. We're on two different planets!

BTW, I meant LSB, not MSB. Typo. Sorry.

I think I'm stuck in this airport forever. HELP!!!!!!!!!!!
 
I wholeheartedly disagree.

Would you consider music that's 72dB below the peak level in a classical recording to be unrealistic to expect to hear clearly? I don't. I expect to hear it. -72dB is easy to hear with headphones at only moderate volume.

I'll give it one more try with a very similar but not identical situation (I know I said I gave up, but my plane is not here yet). There's another test that can prove my assertion that there is an audible difference between CD (44.1kHz/16bit) and 24bit/44.1kHz formats. I know we're talking about SACD here, but there is nobody I know who owns DSD recorders to make SACD recordings, let alone burners and blanks.

You need a very high quality digital audio recorder capable of 24bit/44.1k recording to perform this test and a sound source that is a pure tone (pure sine wave). A quality sinewave function generator is a good source (or a digital one even better).
You're, again, missing the whole point. I have never disagreed you can prove the advantages of higher bit depth/sample rates with various theoretical methods (like sine waves, manipulating digital files, unrealistic volume settings, etc.). As I said, that's easy.

I can do the same thing with THD measurements. It's far easier to hear THD with a sine wave than with music for example. But when you try to use such an example to explain why someone's tube amp, single ended MOSFET amp, or zero NFB amp, might sound bad, they get all defensive and argue "nobody listens to sine waves". So that argument should apply both ways. Either sine waves matter across the board, or they don't. It's not fair audiophiles get to play that card only sometimes when it suits their particular argument.

But regardless, I'm only talking about listening to real music, over real systems, under real world conditions (including realistic volume settings). That's what the blind tests I've referenced attempt to do.

And, for anyone interested in what -72 dB sounds like, please try this:

1 - Play a regular file with peaks that get close to 0 dBFS (i.e. a properly recorded/ripped file) and set the volume as loud as you comfortably like to listen.

2 - Then, without changing any volume settings, play the downloaded FLAC file below and see how loud it is. That's what -72 dB sounds like.

Sara K Brick House -72dB Excerpt

Note: The above excerpt is copyrighted and provided for non-profit educational purposes only. It will also only be available for a limited time.

And you can pick any dB level/number of bits/etc. you like. If it's so easy to hear what you describe under real world conditions, please point me to the blind listening tests that credibly demonstrate this easily audible difference.

Again, at the outset of all this, I agreed there are clear theoretical advantages to higher bit/sample rate formats. I also agree they have audible advantages for recording and processing (i.e. mastering) digital audio. But I don't think it's been credibly demonstrated they have audible advantages in real world playback of music.
 
OK, change the test to -50dB. The quantization distortion will still be audible, just not as much.

Maybe I should just have the word "dither" tattooed on my hand.

No, you will not hear quantization distortion at -50dB. Or at -72 dB. You'll hear a noise floor, and one lower than nearly any analog tape unit ever made. And just as with analog, you'll hear signal below the noise floor.

Say it again. "Dither." All recordings use it and for a good reason.

Every once in a while, it's worth getting out of the armchair and doing an experiment.
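Here's the armchair-free version of the dither point: with TPDF dither, a tone well below one LSB, which plain rounding annihilates completely, survives quantization as signal under a noise floor. A pure-Python sketch (my own illustrative code; the amplitude and names are mine):

```python
import cmath
import math
import random

random.seed(1)
FS = 44100
F = 500                 # test tone (Hz)
N = FS                  # one second of samples
AMP = 0.4               # 0.4 LSB: below the quantizer's smallest step

sig = [AMP * math.sin(2 * math.pi * F * n / FS) for n in range(N)]

# Undithered quantization: every sample rounds to 0, the tone vanishes
plain = [round(s) for s in sig]

# TPDF dither (+/-1 LSB triangular noise) before rounding linearizes the
# quantizer, so the tone survives below the dither noise floor
dithered = [round(s + random.random() - random.random()) for s in sig]

def tone_level(x, f=F):
    """Magnitude of the DFT bin at frequency f, scaled so that a
    full A*sin() tone reads A/2."""
    return abs(sum(v * cmath.exp(-2j * math.pi * f * n / FS)
                   for n, v in enumerate(x))) / len(x)

print(tone_level(plain))     # 0.0: the undithered tone is simply gone
print(tone_level(dithered))  # about 0.2: the dithered tone is intact
```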
 
Jkeny, it's easy to prove the superiority of higher bitrates on paper. I don't think anybody is contesting that. Bringing jitter into that argument, as in the JosephK quotes you've posted, doesn't really change anything.

The AES listening tests I've referenced account for all sources of distortion--jitter included. You can try to make a case that with certain configurations of A/D's, D/A's, types of connections, etc. it might make a difference. But I come back to my earlier point that if it were just a matter of say using a certain S/PDIF interface that would make 16/44 sound worse, why hasn't anyone done that--especially when there's so much money to be made if they do?

There are also AES papers that discuss reducing THD from say 0.005% to 0.0005%. But nobody (that I know of) is claiming anyone can hear the difference as 0.005% is already considered below the threshold of audibility. They're simply academic/theoretical papers on distortion reduction.

Jitter is a bit controversial as it's not as easily quantified as say THD. So I'm not surprised some want to drag it into the bitrate argument as it's a more "squishy" topic. But, ultimately, it's accounted for in the listening tests I've presented. And if it didn't surface as an audible problem, it very likely wasn't.
First & foremost, there are many who would deny the superiority of higher bit rates in the audible frequency range because of the Nyquist theorem. JosephK shows how SPDIF is flawed, how 16/44.1KHz SPDIF is particularly flawed, & how 24/192KHz avoids this particular flaw.

Secondly, a logical approach to the issue would seem to make sense: measure & identify the particular characteristics of 16/44 vs higher sample rates AT THE SPDIF OUTPUT. This establishes that there is some possible effect in the audio band.

Now the job is tracing this through each step of the playback process to see its effects at each stage.

I'm not dragging jitter into the higher sample rate argument because it's "squishy" as you say - I raised it because it shows a distinct measured difference in the audio frequencies.

Now whether these are universally audible will depend on so many variables that I wish you luck in trying to control these variables. I would take it that controlling these variables is the first step in your "scientific" approach?

Absence of evidence is NOT evidence of absence, you know!
 
I'm not dragging jitter into the higher sample rate argument because it's "squishy" as you say - I raised it because it shows a distinct measured difference in the audio frequencies.
The key word above is "measured" versus my key word of "audible". I can easily measure all sorts of advantages to higher resolution formats. The problem is finding anyone who's reasonably demonstrated (i.e. via a well run blind test) those measurable benefits are audible during playback under real world conditions. I keep looking, and I keep asking, but the only credible evidence I've seen points more to the opposite conclusion--that high resolution formats sound the same as 16/44 under real world conditions.
 
Again, let me repeat, the first step in any derivation of cause & effect is to show that the cause has an effect in the area under investigation - in this case the audio frequency range. This has been measured. Are there other papers/research that show the measured differences in the audio band between 16/44 & 24/192?

Having agreed on this, the question of audibility is whether these effects are below the audible threshold. So again I'll state: absence of evidence is not evidence of absence!
 
Step 3) Digitally convert the 24-bit file to a 16-bit file by truncating the lower 8 LSBs. Now you have reduced the signal's resolution to about 4 bits, with the top 12 MSBs unused. Remember, all we are doing is simulating a low-level passage that would be about 72dB down from the peak, still well above the noise floor of -90dB or better for CDs.

Congratulations, you just performed a non-linear operation on your data (truncation). No wonder you get to hear distortion - you put it there yourself.😉
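And the truncation is fixable: add TPDF dither before dropping the 8 LSBs and the requantization error turns into benign, unbiased noise instead of distortion. A quick pure-Python sketch (my own illustration; the random test signal and names are assumptions, chosen to give a clean statistic):

```python
import math
import random

random.seed(0)
SCALE = 256   # dropping 8 LSBs = dividing by 2**8

# A zero-mean "24-bit-style" test signal
samples = [random.uniform(-(2**23), 2**23) for _ in range(20000)]

# Plain truncation: floor division biases every sample downward by about half
# an LSB of the 16-bit result (and with music the error tracks the signal)
trunc = [math.floor(s / SCALE) for s in samples]

# TPDF dither (+/-1 LSB triangular noise) before rounding: the requantization
# error becomes unbiased noise, uncorrelated with the signal
dith = [math.floor(s / SCALE + random.random() - random.random() + 0.5)
        for s in samples]

bias_trunc = sum(t - s / SCALE for t, s in zip(trunc, samples)) / len(samples)
bias_dith = sum(d - s / SCALE for d, s in zip(dith, samples)) / len(samples)
print(round(bias_trunc, 2), round(bias_dith, 2))  # about -0.5 vs about 0.0
```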
 
I keep looking, and I keep asking, but the only credible evidence I've seen points more to the opposite conclusion--that high resolution formats sound the same as 16/44 under real world conditions.

Dude, did you EVER listen to a SACD disc? It usually has the same album in stereo CD and stereo SACD, so it is easy to switch between them.
Listen first and then argue. Don't come with unscientific and biased tests found on the net.
Yes, for a deaf person or from a defective player there is no difference. But that doesn't extrapolate to all cases...
 
Dude, did you EVER listen to a SACD disc? It usually has the same album in stereo CD and stereo SACD, so it is easy to switch between them.
Listen first and then argue. Don't come with unscientific and biased tests found on the net.
Yes, for a deaf person or from a defective player there is no difference. But that doesn't extrapolate to all cases...
Same album yes. Same mastering often not. And, for the record, I have a few dozen SACDs. Some of the multi-channel ones are great.

I think I've been more scientific and UNbiased than you have here.

To the best of my knowledge, none of the listening tests I've referenced used deaf people or defective players.
 
Come on, the remastering is just for the surround part. The stereo part is usually taken straight from the original magnetic tapes.
And even if it is remastered, the CD layer is derived straight from that new remastered stereo DSD.
That's not been my experience. But I don't have that many pure 2 channel dual SACDs. Anyone else know more about this one?

Abraxalito at least seems to agree with me earlier in this thread:

http://www.diyaudio.com/forums/digital-line-level/185281-blind-dac-public-listening-test-results-5.html#post2513644

In any event, I don't see how either of us can be sure if we can't ask the mastering engineer, unless as Abraxalito suggests in his post, analysis shows they're clearly not the same.
 
Some of the stereo layers of SACDs in surround editions are derived from the original CD stereo mix, at the same quality as the CD. I think that is the case with "The other Side of the Moon". But that proves nothing except that they cut costs by not remastering the stereo...
Only a native stereo SACD can be compared.
Alternatively, you can download samples of the same program and play them on your system. Foobar can decode DSD, but it does it through PCM conversion.
Design w Sound 2011 AJP3 – Free Hi-Res Samples
2L Test
 
Hi, NwAvGuy, I stumbled upon your DAC blind listening tests while doing a google search for "DAC blind listening." It's nice to see that there are people actually interested in controlled equipment comparisons rather than methods subject to bias (like so-called "reviews.")

I spent some time looking at your methodology. The idea to ADC the outputs of various DACs and headphone amps under load, then post the captured level-matched audio files for anyone to compare was brilliant. This allows trusted audio community members to participate, so that readers cannot argue that the listeners were unqualified. It also allowed you to recruit a larger group of listeners than would otherwise be practical.

However, I do have some concerns regarding your analysis and the conclusions drawn from the experiment. The primary issue is that no statistical analysis is presented. It is impossible for the reader to determine whether the "ranked" order was due to random chance or to differences among the tested devices that truly existed. I can tell you from experience that with 20 listeners, each selecting several "favorites" or providing a single ranked list, you will probably need a much larger sample size and series of repetitions to achieve results that meet statistical significance (i.e. "findings" that are likely to reflect actual differences rather than random chance). I know you stated that this was an "informal survey," but if it is, no meaningful conclusions can be drawn. That is, even if the Benchmark DAC1 had the most "votes," it doesn't necessarily mean, based on your study, that it was sonically superior to the uDac (I "expect" that it would be, but your study doesn't necessarily demonstrate this hypothesis.)

I'm very interested in how your ODAC work will turn out. It would be fabulous if a DAC that is sonically indistinguishable from the Benchmark DAC1 can be purchased for a fraction of the price. You mentioned that you would be conducting blind listening tests in a similar fashion, and I'm wondering if you have access to a statistician who can help you design your test methodology and the data analysis?

If I can make a friendly suggestion: one elegant way to demonstrate that two DACs are similar (which I think is what you'll want to show) is to test each audio clip and output pair individually.
i.e.
Test#1: ABX of ClipA played on lineout of DacA vs. DacB - The clips from each DAC are randomized, and the listener selects same/different. Test is repeated X number of times, and statistical analysis is done.
Test #2: ABX of ClipA played on headphone-out of DacA vs. DacB - again, same/different test done X number of times, analysis done on this test separately.
Test #3: ABX of ClipB played on lineout of DacA vs. DACB - and so on....

At the end of the day, you'll be able to obtain analysis results like: (examples)
- 5 out of 20 listeners were able to differentiate the lineout of DacA from DacB using clipA more reliably than would be expected by chance
- 8 out of 20 listeners were able to differentiate the lineout of DacA from DacB using clipB
- 18 out of 20 listeners were able to differentiate the headphone-out of DacA from DacB using clipA.
...and so on.
OR NONE of the 20 listeners was able to differentiate the lineout of DacA from DacB using ClipB more often than would be expected by chance.

Statistics would be very simple/straightforward doing it this way.
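To put that in concrete terms: under the null hypothesis each same/different trial is a coin flip, so a one-sided binomial test gives the p-value directly. A small Python sketch (the function name and example scores are mine, purely illustrative):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the probability of scoring at least
    `correct` out of `trials` by pure guessing (chance = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# A listener who gets 12 of 16 ABX trials right:
print(round(abx_p_value(12, 16), 3))   # 0.038 -> better than chance at p < 0.05

# 9 of 16 is nowhere near significant:
print(round(abx_p_value(9, 16), 3))    # 0.402
```

The same function works for aggregating across listeners if each listener's trial count is fixed; pooling results across listeners with different trial counts needs a slightly more careful test.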
But fair warning, if you have negative results, people are going to argue at least one of the following:
- the listener's equipment (listener's DAC, amp, and headphones) had insufficient resolution to distinguish actual differences between DacA and DacB. To borrow your example, if you are trying to see if 3 blue shirts displayed on a monitor are different shades of blue, and you're using an old-school monitor that only has 16 colors, all 3 blue shirts will look the same, even though they are actually different shades.
- your listeners, either due to poor hearing or inexperience or whatever, cannot actually hear small differences in sound quality even though they exist
- the added ADC+DAC process actually introduced artifacts or affected SQ in such a way that actual differences in DacA vs. DacB were "masked."

Best of luck.
 
I spent some time looking at your methodology. The idea to ADC the outputs of various DACs and headphone amps under load, then post the captured level-matched audio files for anyone to compare was brilliant.

Not so fast.

First we'd need to know that the ADC was transparent. Second and third, the ADC bandlimits what's sent to it and might generate some aliasing. Whilst I wouldn't argue the bandlimiting was an issue where the next stage is a transducer (as in headphones), it most certainly could be where the next stage is an amp or pre (as for a DAC). Not a few DACs put out excess HF noise beyond the audio band.

A fourth issue is that the ADC would capture only the differential mode signal and its sensitivity to common-mode noise is probably uncharacterized.

The onus would be on the person making the files available to ensure these four effects - two potential (lack of transparency and aliasing) and two certain (bandlimiting and CM noise) - could not be invalidating the results.
 
Not so fast.

First we'd need to know that the ADC was transparent. Second and third, the ADC bandlimits what's sent to it and might generate some aliasing. Whilst I wouldn't argue the bandlimiting was an issue where the next stage is a transducer (as in headphones), it most certainly could be where the next stage is an amp or pre (as for a DAC). Not a few DACs put out excess HF noise beyond the audio band.

A fourth issue is that the ADC would capture only the differential mode signal and its sensitivity to common-mode noise is probably uncharacterized.

The onus would be on the person making the files available to ensure these four effects - two potential (lack of transparency and aliasing) and two certain (bandlimiting and CM noise) - could not be invalidating the results.

Hi, abraxalito. I actually already mentioned the added ADC+DAC step as a potential problem if listeners are unable to identify a difference. It's the last bullet point of my post.
 
Ah so you did - my apologies for not reading your post carefully enough right to the end. Calling RS's idea a stroke of brilliance was what prompted me to respond 😛 I've used more technical terms than you and broken your outline down into individual issues so that RS can address them individually (if he so wishes) 😀

I thought about the DAC part of it; that will be different for each user, but the same DAC will be used in comparisons. If the DAC used sucks it might well mask differences, but the onus is on anyone listening to upgrade their DAC 🙂 No-one has a chance to change the ADC (except RS), so it's got to be blameless for these tests to carry any weight.
 