DAC blind test: NO audible difference whatsoever

DF96 is absolutely on topic. He is simply pointing out how ridiculous and nebulous these posts are becoming.

Jon shared an experience; so far he has been crucified for this. 🙄

Jon made what appeared to be overreaching and exaggerated claims about audibility to humans in general, drawn from very limited and less-than-professionally-conducted listening experiences. The claims in question have been rebutted. Rebuttal and crucifixion are two very different things. Going forward, it's probably best not to stray too close to prohibited topics.
 
[...]Jon shared an experience; so far he has been crucified for this. 🙄
He got very knowledgeable and valuable comments and propositions from Jakob2 (as his posts about these topics always are, btw); it's just that some people's receiving antennas are very underdimensioned compared to their transmitting antennas.


[...] Jon shared an experience; so far he has been crucified for this. 🙄
If you can't stand the heat, get out of the kitchen....
 
Interesting that Jon's tests have been thoroughly 'gone over' by self-proclaimed 'experts' on the grounds that they were seriously flawed.

Markw4 did vaguely similar tests, yet for several days now there has been complete silence from the test 'experts' about his tests. No enquiries about details. No criticism of the statistics. It is as though he never mentioned his tests.

Could it be because they found opposite results? No, that can't be the case because we have been assured more than once that the basis of criticism of Jon has been purely his test procedure, not his results. I am glad I am not holding my breath.
 
Rebut: to contradict or oppose by formal legal argument, plea, or countervailing proof.

There has never been any countervailing proof by the golden ear community. Once the curtain comes down, they can't tell the difference.

I think Markw4's claim of testing was sarcasm lost on our international members.

I'm not surprised that the most vocal critics of Jon's observation have a commercial interest in selling DACs.

These discussions always end the same way, with no one changing their mind.
 
Interesting that Jon's tests have been thoroughly 'gone over' by self-proclaimed 'experts' on the grounds that they were seriously flawed.

Markw4 did vaguely similar tests, yet for several days now there has been complete silence from the test 'experts' about his tests. No enquiries about details. No criticism of the statistics. It is as though he never mentioned his tests.

Could it be because they found opposite results? No, that can't be the case because we have been assured more than once that the basis of criticism of Jon has been purely his test procedure, not his results. I am glad I am not holding my breath.

You must have noticed the argument that the level of scrutiny applied to experimental design and execution depends on the conclusions drawn from the test results:
The more categorical and far-reaching the claims, the better the experiment has to be.

Maybe I missed something, but IMO Markw4 didn't claim to have found the "ultimate truth" about the audibility of every difference in DACs; he just stated that he has observed that "in his tests"...

So, why do you insist that the reason must be something different (in all cases?)?
 
Maybe I missed something, but IMO Markw4 didn't claim to have found the "ultimate truth" about the audibility of every difference in DACs; he just stated that he has observed that "in his tests"...

It was not my intention to claim any sweeping proof of what all humans can or cannot hear. I did briefly describe a summary of some experiences I have had listening to different DACs with different people, and a few observations from trying to see what kinds of things seem to help people start learning how to listen for things they usually ignore.

It would be great to see some new publishable scientific research using state of the art test equipment, and informed by modern cognitive psychoacoustics and neuroscience, move forward in this area. Apparently, there is no funding for the needed cross-disciplinary teams of qualified scientists. Earl Geddes says no one cares what the answers are enough to pay what it would cost.
 
It would be great to see some new publishable scientific research using state of the art test equipment, and informed by modern cognitive psychoacoustics and neuroscience, move forward in this area. Apparently, there is no funding for the needed cross-disciplinary teams of qualified scientists. Earl Geddes says no one cares what the answers are enough to pay what it would cost.

I built a tool last year that allows a person to demonstrate skill in hearing different types of distortion. They can cryptographically 'sign' their result and have it verified by others. This is, of course, a test of the listener + equipment combination. But assuming most equipment is good enough, it would allow a repository of data on distortion types (amplitude, harmonic, crossover, etc.) to be built easily.
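
For illustration only (this is not necessarily how the actual tool works), one way to make a result independently verifiable is a commit-reveal scheme: publish a hash of the random seed before the test, reveal the seed afterwards, and let anyone re-derive the hidden A/B sequence. A minimal Python sketch, with all names hypothetical:

```python
import hashlib
import secrets

def derive_choice(seed, trial):
    """Deterministically map (seed, trial index) to 'A' or 'B'."""
    digest = hashlib.sha256(f"{seed}:{trial}".encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"

def commit_trial_sequence(n_trials):
    """Before the test: derive a hidden A/B sequence from a random seed
    and publish only the hash of the seed as a commitment."""
    seed = secrets.token_hex(16)
    sequence = [derive_choice(seed, i) for i in range(n_trials)]
    commitment = hashlib.sha256(seed.encode()).hexdigest()
    return seed, sequence, commitment

def verify_score(seed, commitment, answers):
    """After the test: check the revealed seed against the commitment,
    re-derive the sequence, and count the correct answers."""
    if hashlib.sha256(seed.encode()).hexdigest() != commitment:
        raise ValueError("revealed seed does not match the commitment")
    return sum(a == derive_choice(seed, i) for i, a in enumerate(answers))
```

The point is simply that the hidden sequence is fixed before any listening happens, so a score cannot be massaged after the fact.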

I'd be very surprised if someone could hear 0.8 dB of amplitude distortion in an ABX test on solo piano music (not tones); I can't. In fact, I'd PayPal $50 to the first person who can prove they hear this using their own equipment and the tool demonstrated below. Let me know if anyone is interested in trying, and I'll outline the terms in another thread. I really think the key to understanding is massive sample sizes, and the only way that will happen is if people can audition and "prove" their skill at home using preferred equipment. That is why the crypto piece is so important.

[YouTube video demo of the tool]
 
It was not my intention to claim any sweeping proof of what all humans can or cannot hear. I did briefly describe a summary of some experiences I have had listening to different DACs with different people, and a few observations from trying to see what kinds of things seem to help people start learning how to listen for things they usually ignore.

It would be great to see some new publishable scientific research using state of the art test equipment, and informed by modern cognitive psychoacoustics and neuroscience, move forward in this area. Apparently, there is no funding for the needed cross-disciplinary teams of qualified scientists. Earl Geddes says no one cares what the answers are enough to pay what it would cost.

Yes, so one has to read existing modern research into auditory perception/cognitive neuroscience & psychoacoustics & extrapolate as best one can, at the risk of being accused of over-reaching. Unfortunately, that is the position & will remain so. Those looking for specific research which answers their specific question will always be disappointed as a result.
 
We will have a 16-year-old female participant in the coming days; maybe it will give us different results?
Among the general population, people of that age (and sex) have the lowest hearing thresholds (dB HL), especially at 8 kHz and above. Not relevant to the typical audiophile, but interesting nonetheless...
 
I built a tool last year that allows a person to demonstrate skill in hearing different types of distortion.

Would you consider modifying the tool to possibly make it more usable for small difference discrimination verification?

Specifically, even Foobar ABX allows the user to select start and stop times for the file comparisons, so they don't have to start at the beginning of a file. It can be difficult to use in Foobar, however, so there is room for an improved tool offering a better implementation of that feature.

Also, once a region of a file can be selected by the user for comparison, there really ought to be a checkbox to loop, or repeat, the selection over and over without repeated button pushing. Looping, or auto-repeating, helps a lot in my experience using Reaper for listening. It makes it easier to learn what is to be discriminated, and easier to perform the discrimination during testing. The thing is, I only know it is helpful for me; nobody else has tried it, so far as I know.
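
For what it's worth, a bare-bones version of region looping is only a few lines in Python. A sketch assuming the soundfile and sounddevice libraries; the file name and times are placeholders:

```python
import soundfile as sf
import sounddevice as sd

def loop_region(path, start_s, stop_s):
    """Loop a selected region of a file until the user presses Enter."""
    data, fs = sf.read(path)
    region = data[int(start_s * fs):int(stop_s * fs)]
    sd.play(region, fs, loop=True)  # repeats the slice, no button pushing
    input("Looping selection... press Enter to stop.")
    sd.stop()

# Hypothetical example: loop the passage from 12.0 s to 17.5 s.
loop_region("comparison_a.wav", 12.0, 17.5)
```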
 
The problem as I see it is that nobody is willing (or everybody is afraid) to test the Foobar ABX tool itself with the setup & participants.

In other words, use it to ascertain if people can hear known-audible but small differences, but don't tell the participants what difference is being tested for. This is important, as it is the same condition being examined in this ABX test: spot the unknown difference!

I believe this would be the most eye-opening test that might still emanate from this thread.
 
Would you consider modifying the tool to possibly make it more usable for small difference discrimination verification?

Yes, that's a good idea; I'll add that to the next version. I think if you are looping a small section then it gets much easier to hear differences. But it still requires skill. For example, a group might only be able to hear a 1.5 dB difference in a solo piano performance on a whole-song basis, but perhaps that threshold drops to 0.7 dB if you are able to loop. Piano is tough because it's very hard to tell the difference between how hard the notes are struck and amplitude changes in playback. The threshold for chamber music would be greater because there are long regions of constant-amplitude tones.
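
To make the numbers concrete: a level offset of d dB corresponds to a linear gain of 10^(d/20), so a 0.7 dB cut is a factor of about 0.9226. A minimal sketch for preparing such a comparison file, assuming the Python soundfile library (file names are placeholders):

```python
import soundfile as sf

def make_level_variant(in_path, out_path, delta_db):
    """Write a copy of the file attenuated by delta_db decibels."""
    data, fs = sf.read(in_path)
    gain = 10 ** (-delta_db / 20)  # 0.7 dB -> gain of about 0.9226
    sf.write(out_path, data * gain, fs)

make_level_variant("piano.wav", "piano_minus_0p7dB.wav", 0.7)
```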
 
The problem as I see it is that nobody is willing (or everybody is afraid) to test the Foobar ABX tool itself with the setup & participants.

I think what makes foobar tough is the managing of the files. Someone has to prepare the files, host them, etc. What I aimed to do with the QA ABX tool is to start with a single wave file and have the tool inject a prescribed type of distortion on-the-fly. The user can change the type and intensity of the distortion easily without needing audio tools.

This also allows training, because you can acquaint yourself with what crossover distortion sounds like by applying excessive amounts, and then slowly reduce it until you can no longer reliably hear it. Ditto with quantization, amplitude, etc.
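
As a rough illustration of the idea (not the tool's actual code), crossover distortion can be modeled as a dead zone around zero and quantization as rounding to a coarser grid. A NumPy sketch:

```python
import numpy as np

def crossover_distortion(x, dead_zone):
    """Symmetric dead zone around zero, the signature of an under-biased
    class-B output stage. dead_zone is in full-scale units (0.0 to 1.0)."""
    return np.sign(x) * np.maximum(np.abs(x) - dead_zone, 0.0)

def quantization_distortion(x, bits):
    """Requantize the signal to the given bit depth, with no dither."""
    steps = 2 ** (bits - 1)
    return np.round(x * steps) / steps

# Training idea from above: start with obvious amounts, then back off
# until the distortion can no longer be reliably heard, e.g.:
# for dz in (0.05, 0.02, 0.01, 0.005):
#     audition(crossover_distortion(signal, dz))  # 'audition' is hypothetical
```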

What's really illustrative (to me) is how easy these distortions are to hear with tones. But once you move from tones to music, forget it. It's nearly impossible to hear differences (to me, anyway).
 
Yes, that's a good idea; I'll add that to the next version. I think if you are looping a small section then it gets much easier to hear differences. But it still requires skill. For example, a group might only be able to hear a 1.5 dB difference in a solo piano performance on a whole-song basis, but perhaps that threshold drops to 0.7 dB if you are able to loop. Piano is tough because it's very hard to tell the difference between how hard the notes are struck and amplitude changes in playback. The threshold for chamber music would be greater because there are long regions of constant-amplitude tones.
Yes, & this is why I wanted jonbocani to post the details on the 0.2 dB difference that he reported was being discriminated. I don't believe this is correct, & if he posted the details we would see that it is not.

On the other hand, if he could show that this level of amplitude difference was actually being differentiated in this test (without the participants knowing beforehand that this is what was being tested), it would prove the test's sensitivity.

His failure to post such details tells us something?
 
I think what makes foobar tough is the managing of the files. Someone has to prepare the files, host them, etc. What I aimed to do with the QA ABX tool is to start with a single wave file and have the tool inject a prescribed type of distortion on-the-fly. The user can change the type and intensity of the distortion easily without needing audio tools.

It would also be nice to have an option to compare files, instead of only adding distortion. Then you would have the most full-featured tool available. Since you already have some kind of cryptographic verification system, your tool might become the standard everybody uses.

The reason I think it would be good to allow file comparison in some cases would be for something like PMA's hi-res Toccata violin files. They are audibly different to me, and maybe to some people if they could loop and compare blind. But I sort of doubt that the way they sound different is exactly the same as adding calculated distortion, because they were made by a somewhat complex process and not all of the file differences are due to only one type of distortion.

Also, Pano and Mooly might have some other useful ideas for you. I think one of them mentioned wanting a better training mode with immediate feedback as to how one is doing. I use Reaper for that kind of training, but for people who don't have or use something like that, maybe the test program could be of more help.

Last thing I would mention is that Windows and OSX both resample audio sent to the normal sound system if the system sample rate and bit depth differ from what an application program uses or requests. For Windows at least, using ASIO drivers can overcome the problem. But for any testing application that uses the OS default sound drivers, users should be advised to make sure the OS sound system sample rate and bit depth match the test files; otherwise the sound the listener receives will be compromised, which could cause testing errors or an inability to discriminate in some cases.
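
A quick sanity check along those lines could warn the user before a test starts. A sketch assuming the Python soundfile and sounddevice libraries (the file name is a placeholder):

```python
import soundfile as sf
import sounddevice as sd

def check_rate_match(path):
    """Warn if the OS default output rate differs from the file's rate,
    since the OS mixer will then resample behind the listener's back."""
    file_fs = sf.info(path).samplerate
    device_fs = sd.query_devices(kind="output")["default_samplerate"]
    if int(device_fs) != file_fs:
        print(f"Warning: file is {file_fs} Hz but the output device default "
              f"is {device_fs:.0f} Hz; match them in the OS sound settings, "
              f"or use ASIO/exclusive mode, to avoid resampling.")

check_rate_match("test_track.wav")
```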
 