DAC blind test: NO audible difference whatsoever

High-end audio is not worth the money? That depends on how much money is worth to you. If you really want to make a lot of money, you mass-produce cheaply for the mass market, who buy the newest, trendiest thing they have been told to buy. If, for whatever reason, that is not for you, a viable option is small production runs with a high markup. The best way to achieve this is to cover the product in an expensive-looking veneer; it doesn't matter much what's on the inside. I can pretty much guarantee that most of the people who buy this kind of bling are never going to pick up a screwdriver and have a look.
 
Given that we already discussed a lot of these things in one of the OP's first threads (World's Best DACs in 2015), and in quite a few other threads as well - although, admittedly, it is sometimes tedious to follow through all the hassle - I guess it is safe to conclude that the OP simply isn't interested in any advice that would help him to do _good_ tests.

From my point of view it is interesting to see something like that, mainly because I can't imagine a good reason for it.

I mean, we have cited the relevant literature on psychophysics in general and on perceptual evaluation, on statistics and some ITU recommendations; calculated some probabilities; pointed out what any statistical hypothesis test can do and, more importantly, what it can't do (namely, prove something); mentioned the relevant theory behind this sort of statistical testing; and... and... but to no avail.
 
I find the cost reference a bit odd... surely USD 20 limits the design budget, but you can still screw up with USD 3,000 in your hand.

//

The big picture, in case you objectors have forgotten, is not only that we should have witnessed a positive identification between a $19.99 and a $3,000 device... but that this (yet to come) difference would still have to be proven WORTHY, if any difference ever emerged...

Identification is only the very first, basic step.
Then the appreciation testing would be on the to-do list: "Does the more expensive device sound better, now that you can differentiate them?" Then the evaluation of price vs. quality: "OK, now that you prefer the more expensive one, HOW MUCH extra would you pay for it?"...

See what I'm getting at? Is it clearer now?

We're LIGHTYEARS from that.

LIGHTYEARS.
 
I would like to stop following this thread, because we are going round in circles. Over the last ten years I have seen over a dozen DAC blind tests conducted by objectivists, and none by the subjectivists. All of them ended about the same:


Objectivists: no differences were found

Subjectivists: I will accept no less than one million attempts. But even then, I'll still just assume you had a lucky guessing streak or flawed methodology.

My conclusion, already voiced by Jon: if I can't detect any noteworthy differences between pieces of equipment with a 100-times price difference, in a typical listening room, using a typical audio system, then unless money is burning a hole in my pocket I stick with the cheaper one. For a 100-times price difference I expect the difference to be nothing less than striking, and it should be awfully hard to find conditions where it is not (Volkswagen Polo vs. Bugatti Veyron).

Since the objectors here are so proficient in blind tests, I urge them to conduct their own tests and share the results with us. That would be a far weightier argument in this discussion than anything you have come up with so far.
 
Since the objectors here are so proficient in blind tests, I urge them to conduct their own tests and share the results with us. That would be a far weightier argument in this discussion than anything you have come up with so far.
Excellent idea. I bet they won't, though; they'll produce statistical evidence that there is no way to prove the accuracy of the statistical evidence produced by such a test.
 
Since the objectors here are so proficient in blind tests, I urge them to conduct their own tests and share the results with us.

Okay, I have performed almost exactly the same tests as Jon, but in my tests about 90% of the participants could hear a difference between DACs about 90% of the time. Oh, and just like Jon, I don't keep any detailed records or do proper statistics; I just kind of do it however feels good to me. Does that satisfy what you need?
 
I bet they won't, though; they'll produce statistical evidence that there is no way to prove the accuracy of the statistical evidence produced by such a test.

Daniel Kahneman once said that if there is a jury trial and one side has to explain statistics to the jury, that side will lose. He meant that statistics is hard to understand and that juries like to hear coherent stories rather than mathematical proofs. Coherence of a story is the idea that all the factors in the story that one knows about fit together nicely, like pieces in a jigsaw puzzle. This coherence is what gives humans a feeling of confidence that a story is correct. The problem is that there are often far more unknown factors than known ones, and often the real cause of something lies in the unknown factors. So the preference for coherent stories over mathematical proofs amounts to a type of cognitive bias that worked better than nothing for our ancestors but doesn't always work best in today's world.
 
Okay, I have performed almost exactly the same tests as Jon, but in my tests about 90% of the participants could hear a difference between DACs about 90% of the time. Oh, and just like Jon, I don't keep any detailed records or do proper statistics; I just kind of do it however feels good to me. Does that satisfy what you need?

Please start your own thread, publish the test details in it along with the results and we will go from there
 
Okay, I have performed almost exactly the same tests as Jon, but in my tests about 90% of the participants could hear a difference between DACs about 90% of the time. Oh, and just like Jon, I don't keep any detailed records or do proper statistics; I just kind of do it however feels good to me. Does that satisfy what you need?

Please start your own thread, publish the test details in it along with the results and we will go from there

Fair comment. Jon has published quite a lot of information on his tests, including the room, the equipment used, the test process applied, the result data, etc. Without that information, most of the argy-bargy here couldn't happen...
So it's fair to request information at a similar level of detail.
 
Daniel Kahneman once said that if there is a jury trial and one side has to explain statistics to the jury, that side will lose. He meant that statistics is hard to understand and that juries like to hear coherent stories rather than mathematical proofs. Coherence of a story is the idea that all the factors in the story that one knows about fit together nicely, like pieces in a jigsaw puzzle. This coherence is what gives humans a feeling of confidence that a story is correct. The problem is that there are often far more unknown factors than known ones, and often the real cause of something lies in the unknown factors. So the preference for coherent stories over mathematical proofs amounts to a type of cognitive bias that worked better than nothing for our ancestors but doesn't always work best in today's world.

Certainly true - as has been discussed in this thread regarding ABX, a lot of probability theory and statistics is counterintuitive and can be hard to grasp until you do the maths.
It's routinely misused in courtrooms in the interpretation of key evidence like DNA tests and fingerprints...
The Monty Hall problem is always a good example.
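For anyone who finds the Monty Hall answer hard to accept, a short simulation settles it empirically. This is just an illustrative sketch in Python (the function name and trial count are my own choices):

```python
import random

def monty_hall(trials=100_000, switch=True):
    """Simulate the Monty Hall game; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first pick
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

# Switching wins about 2/3 of the time; staying wins about 1/3.
print(monty_hall(switch=True), monty_hall(switch=False))
```

Most people's intuition says both strategies should win half the time; running the simulation shows why the maths, not the coherent-feeling story, is right.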
 
I bet they won't, though; they'll produce statistical evidence that there is no way to prove the accuracy of the statistical evidence produced by such a test.

I guess this is the bitter end of the story.
Jon has done so much that very few would be willing to do.

I feel it is unfair to ask for so much scientific rigour in tests of this kind done by individuals.
On the other hand, I think Dave is insisting on statistical discipline because such tests shouldn't aim to prove anything.

I consider such tests very good for the participating individuals, as they provide them with adequate empirical evidence about their own discriminating capabilities.

It only follows as a consequence the question (and this is a question the participating individuals may ask themselves) of whether there is a reason to invest in equipment of the type they tested. That shouldn't be the object of the test itself.

As for me, I have benefited from Jon communicating his tests here.

Daniel Kahneman once

Mark thanks for this post.

George
 
I feel it is unfair to ask for so much scientific rigour in tests of this kind done by individuals.
Yes, this is DIYaudio after all. We are fortunate Jon had a $3,000 DAC knocking around to do this test with - I am, thanks Jon. Also, I believe the listeners were sincere; why wouldn't they be? And isn't this the kind of test we amateurs would do if we had the equipment?
 
I've done this with my Yamaha RXV659 amp with built-in Burr-Brown DACs vs. an RPi Piano 2.1 with TI DACs. I can switch input sources easily, but it's not blind. Same output amps, speakers and room. I have no motivation to pay more; they sound just as good to me.
 
Successes: 125
Errors: 138
Total: 263

I'm not sure I'll have the patience to reach 1,000... My mind is already on organising the 4th blind test and, frankly, I'm slowly but surely starting to lose interest in that DAC duel...

You have enough samples now to extract statistics.

An A/B test follows a binomial distribution, since there are only two outcomes, just like a coin toss. So the 95% confidence interval, assuming an unbiased random pick, is 0.5 ± 1.96·(0.5 · 0.5 / 263)^0.5, i.e. p = 0.5 ± 0.0604, or [0.4396, 0.5604] - which is where your result falls (actual: 0.475).

Now for some semantics. Taking the null hypothesis to be "the picks are random (no audible difference)", your result lies within its 95% confidence interval, so you cannot reject it; statistically, you cannot say "there is a difference". For most practical purposes that reads as "no difference was detected".
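For anyone who wants to check the arithmetic, the interval above can be reproduced in a few lines of Python. The counts are taken from the post; the normal approximation is the same one used there, and the variable names are my own:

```python
import math

successes, trials = 125, 263
p_hat = successes / trials  # observed proportion, about 0.475

# 95% confidence interval around p = 0.5 under the null
# hypothesis of random guessing (normal approximation).
z = 1.96
half_width = z * math.sqrt(0.5 * 0.5 / trials)
lo, hi = 0.5 - half_width, 0.5 + half_width

print(f"observed p = {p_hat:.4f}")
print(f"95% interval under random guessing: [{lo:.4f}, {hi:.4f}]")
print("reject random guessing" if not (lo <= p_hat <= hi)
      else "consistent with random guessing")
```

With these numbers the observed 0.4753 lands inside [0.4396, 0.5604], so the result is consistent with random guessing, exactly as stated above.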

Great experiment Jon, I can't believe how long this discussion has persisted. Witches, Spanish Inquisition, magic cables, golden ears, electron spin, untrained listeners, incorrect music, magic mushrooms, still no snacks, unquantifiable qualities ...

Start some more 🙂
 
Markw4 said:
Okay, I have performed almost exactly the same tests as Jon, but in my tests about 90% of the participants could hear a difference between DACs about 90% of the time. Oh, and just like Jon, I don't keep any detailed records or do proper statistics; I just kind of do it however feels good to me. Does that satisfy what you need?
We will now wait for the objectors to point out that your tests were worthless, since they have assured us that their objections to Jon's tests were quite unrelated to the outcome, so your 'amateur' tests with the opposite outcome must be equally invalid. I won't hold my breath.
 
We will now wait for the objectors to point out that your tests were worthless, since they have assured us that their objections to Jon's tests were quite unrelated to the outcome, so your 'amateur' tests with the opposite outcome must be equally invalid. I won't hold my breath.

Ah, but there is a difference. I didn't claim I had proved that almost everybody can hear the difference between DACs. If I had claimed something like that, then I would hope the objectors would point out my error. I would try it just to see what they would do, except that making such a foolish claim about what can be proved from such experiments would make me feel too stupid to keep living (only a slight exaggeration).
 
Were your tests 'calibrated'? Level-matched? Trained listeners? What length and type of music samples? I ask because others may, as they did for the OP's test.

I regard your tests as just another data point, to be placed alongside Jon's data point. I find it neither surprising nor unsurprising that you found the opposite of what he found. That is the nature of tests near a threshold.