DAC blind test: NO audible difference whatsoever

It's noteworthy that this question has been fiercely contested online from the moment the first discussion forums opened. It continues unabated. The question is bedevilled by knotty issues:

1. Every self-identifying 'objectivist' lives with the contradiction of having a clear internal sense of the 'character' of each component in their system. Most spend time and money fettling their audio systems to achieve 'improvements' they can't measure, but enjoy hearing. After all, that's the point.

2. Every self-identifying 'subjectivist' knows that null tests and a long history of failed ABX trials support the conjecture that most things sound the same. Yet that doesn't match their experience – so, like the objectivists, they accept the contradiction and live with it.

3a. Measurement is difficult: you can only measure – within tolerances of mechanical accuracy and proper methodology – one thing, in one way, at one time.
3b. Listening is unlike measurement – it's both cruder and more sophisticated – depending on how you measure such things. Your body is different to a microphone; your brain is different to a digital processor; and your listening environment is unique.

4. 'Impressions' are distorted by bias (e.g., knowing the cost of a component in advance), but that's a symptom, not a cause: the problem is to perceive our perception. Blind testing for audio makes almost as little sense as blind testing for sight: it's a reporting issue. It takes time and subconscious processing to accurately model external auditory cues, and the process relies on pre-existing frameworks: visual information, experience-based expectation, etc. Blind testing audio is a parlour trick designed to expose the weakness of human perception. It says nothing about audio equipment: the subject of the test is the listener, not the equipment: usually it proves they are merely human.

5. When shade is thrown by objectivists about 'snake oil', it's the same as flat earthers calling out The Grand Conspiracy: it's not relevant. They're angry about something unrelated.

6. Every piece of audio equipment is fundamentally non-identical. That exact combination of parts – numbering in the hundreds – is unique. At some level, however detectable the difference may be, it is not the same as any other 'product'.

7. Consensus. Is widespread agreement around elements of a manufacturer's 'house sound' no more than customers being gulled by branding? Discuss.

8. Science moves on. Improved understanding of auditory perception, and more holistic measurement, will likely harmonise (or at least cast more light on) the present conflict.

Meantime – beyond these generally applicable truths – I vote for the primacy of human experience: if a piece of equipment 'sounds' like something in my system, what happens in my head when I listen to it is what matters. That's why I bought it. Far from being a dumb faculty, however, listening can be trained to a high level of musical discernment. AI – and machines in general – don't care about music: why would I let one dictate a definition of 'excellence' it can't experience? Questions around frequency response are germane: flat seems good to the rational, information-processing part of our brain striving for correctness and low distortion, but you may find it's not how you want to listen – your ear/brain is differently sensitive. Do you tune a system flat if it sounds wrong? Or do you try to adapt to – enjoy? – what looks correct on a graph? Is it a question of taste? Is there such a thing as 'good taste' or 'correct form'? Doubtless discussion will continue . . .

I've had long conversations with knowledgeable engineers and equipment designers, and many of them share a sense that they sometimes adopt an approach that works without knowing how it works. Always, too, there's a need to design equipment that measures well but is ultimately tuned. The idea that machines have an animus, or 'soul', derives from artistic decisions made by those designers. A Ferrari is a Ferrari because of a hundred little decisions that communicate something between the designer and the user. It's not about top speed or braking or G-forces generated, or any of the things that are easily measured. It's the driving experience. Having said that, when they blind-tested Ferrari and BMW drivers to see if they could tell the difference, it also ended badly. Apparently you need to see to drive, otherwise humans just crash – proving that cars are all the same.
 
Blind testing for audio makes almost as little sense as blind testing for sight: it's a reporting issue. It takes time and subconscious processing to accurately model external auditory cues, and the process relies on pre-existing frameworks: visual information, experience-based expectation, etc. Blind testing audio is a parlour trick designed to expose the weakness of human perception. It says nothing about audio equipment: the subject of the test is the listener, not the equipment: usually it proves they are merely human.
Seems you misunderstand what blind testing of audio refers to. The listener is not "blinded" or deprived of visual information, but only unaware of which device is being listened to. E.g., in a blind A/B test both devices can be shown, but the listener does not know which device is actually reproducing the audio. Blind testing audio is definitely not a parlour trick but a necessary means to mitigate unavoidable perception biases.
 
Seems you misunderstand what blind testing of audio refers to. The listener is not "blinded" or deprived of visual information, but only unaware of which device is being listened to. E.g., in a blind A/B test both devices can be shown, but the listener does not know which device is actually reproducing the audio. Blind testing audio is definitely not a parlour trick but a necessary means to mitigate unavoidable perception biases.
Seriously? Please re-read.

Blind trials, generally, eliminate expectation bias. Useful. Specifically, medical trials gauge somatic response to drugs by filtering out psychosomatic responses that might respond equally well to a placebo. Medical trials operate on the basis that psychological interference is statistical noise. They seek what is 'actually' happening, not what a patient's brain might perceive is going to happen. Essential, for a drug trial.

Results would be compromised if trials tested only children, or one ethnicity, or those with pre-existing conditions.

Similarly, perception tests – because blind ABX assessment of audio equipment tests the listener, not the product – require a baseline that doesn't alter the perceptive state of the subject. Our auditory system triggers early warning responses: unfamiliar sounds elevate adrenal levels. In the dark, all senses are heightened – on high alert – to compensate for the loss of sight, probing the darkness for remote hazards. Being metaphorically 'in the dark' about what you're listening to certainly eliminates expectation bias, but along with the bathwater goes the baby: the perceptive state changes – shifting to emergency mode, deprived of a framework to hang sense-data on. At the moment you need to patiently marshal rational analysis, the limbic system dominates: 'But is this noise an immediate threat?'

Are differences real and large? Real and small? Or all a figment of our branding-sozzled imaginations? Most likely the middle option – depending on your definition of large and small. But what is detectable? What if you can detect a difference, but find it hard to describe what you've detected? What if you detect a difference without being aware you've detected it at all? The reporting issue is a problem: words don't translate impulse responses and patterns of brain activity very well. You may as well dance about architecture. All that persists are vague impressions, but that's brains for you: remembering smells better than facts.

To circle back to the relationship between listening and seeing: I strongly suspect that a black speaker painted white would be interpreted (by listeners able to see it) as sounding brighter.
 
blind ABX assessment of audio equipment tests the listener, not the product
Wrong. You really don't know what the blind ABX test is.
If listener P consistently and accurately identifies products A and B in a blind ABX test, then it is an accurate test of the products, not a test of the listener. That listener transcends the "listener test" because of his own impeccable results.
If the same listener P, testing different products C and D in a blind CDX test, fails to identify C and D accurately ("flipping the coin"), then products C and D have the same sound quality – so again, it is an accurate test of the products, not of the listener, because listener P has already proved his skills.
If listener Q fails to identify the same A and B accurately in a blind ABX test ("flipping the coin"), then that listener has "cloth ears" and is "sow's ear material", so he should never again participate in a blind ABX test (although his skills may be improved with training). That listener fails both tests – for the listener and for the products.
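The intuition behind "flipping the coin" can be made concrete. Under the null hypothesis that a listener hears no difference, every ABX trial is a 50/50 guess, so the chance of scoring at least k correct out of n trials follows the binomial distribution. A minimal sketch (the trial counts below are illustrative assumptions, not from any published protocol):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` answers right out of
    `trials` ABX trials by pure guessing (one-sided binomial, p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A 'listener P' result: 14 of 16 correct is very unlikely by chance.
print(f"14/16 correct: p = {abx_p_value(14, 16):.4f}")  # ~0.0021

# A 'listener Q' result: 9 of 16 correct is indistinguishable from guessing.
print(f" 9/16 correct: p = {abx_p_value(9, 16):.4f}")   # ~0.4018
```

A low p-value supports genuine discrimination; a score hovering near half the trials is exactly what coin-flipping produces.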

In the dark, all senses are heightened – on high alert – to compensate for the loss of sight, probing the darkness for remote hazards. Being metaphorically 'in the dark' about what you're listening to certainly eliminates expectation bias, but along with the bathwater goes the baby: the perceptive state changes – shifting to emergency mode, deprived of a framework to hang sense-data on.
You really don't know what the blind ABX test is. Please read some basic, introductory texts about blind ABX tests. After that, you should read this: https://www.harman.com/documents/audioscience_0.pdf
Listeners participating in blind ABX tests are not in the dark – literally or metaphorically. The framework is the same as in any subjective "audiophile" sighted test: your audiophile friend invites you into his room to test two different DAC filter settings ("1" and "2") on his high-end DAC, switching between them while you listen. You can see everything in his room (including his high-end DAC) and you can see what he is switching to ("1" or "2"), so you are not deprived of visual cues. You just don't know which filter types "1" and "2" actually are (bias is eliminated); you have to determine whether they sound the same, or whether one of them is better than the other.
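To make the procedure explicit, here is a minimal sketch of that scenario (the filter names and trial count are illustrative assumptions): the "friend" secretly maps two filter types to the visible labels "1" and "2", switches between labels in full view, and reveals the mapping only after all trials.

```python
import random

# Hypothetical filter types, purely for illustration.
FILTERS = ["linear-phase", "minimum-phase"]

def run_blind_session(trials: int = 10):
    """Blind A/B session: the listener sees which label ('1' or '2') is
    playing on each trial, but not which filter type the label denotes."""
    mapping = dict(zip(["1", "2"], random.sample(FILTERS, 2)))  # kept secret
    answers = []
    for t in range(1, trials + 1):
        label = random.choice(["1", "2"])  # the friend switches settings
        print(f"Trial {t}: now playing setting {label} ...")
        verdict = input("Do '1' and '2' sound the same, or is one better? ")
        answers.append((label, verdict))
    print(f"\nReveal: {mapping}")  # only now is the hidden assignment shown
    return answers
```

Nothing about the room or the equipment is hidden – only the label-to-filter mapping, which is the single piece of information that feeds expectation bias.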

Are differences real and large? Real and small? Or all a figment of our branding-sozzled imaginations? Most likely the middle option – depending on your definition of large and small. But what is detectable? What if you can detect a difference, but find it hard to describe what you've detected? What if you detect a difference without being aware you've detected it at all? The reporting issue is a problem: words don't translate impulse responses and patterns of brain activity very well.
You really don't know what the blind ABX test is. Your questions above are answered in many scientific papers, including this short overview: https://www.harman.com/documents/audioscience_0.pdf
 