DAC blind test: NO audible difference whatsoever

Mainly OT

<snip>

I don't expect to save anyone from buying the wrong speaker. It's like wine, too subjective.

Of course subjectivity plays a role, but I meant it to help others get what they subjectively prefer, rather than fooling themselves through bias.

When I advise a friend, my position is: don't waste your money on cables (beyond what's needed for something solid and reliable), be aware of the steeply diminishing returns, if any, for amps and DACs once you're above $1000 (more for high-power amps), and you're on your own for the speakers. So listen, listen, listen (to the speakers, not the salesman). Once you've found something you like, we'll find the rest.

Nothing wrong with that. Imo listening to the salesman isn't wrong either, but you shouldn't believe him per se.

It is imo important to remember that intersubject variation (meaning between listeners) can be surprisingly large, that normal stereophonic reproduction is a very lossy version of reality (leaving binaural aside for the moment), and that it therefore depends strongly on learned abilities. Our brain tries to create a reasonable illusion (reasonable meaning compatible with our experience) from the cues it gets during reproduction.

And we should keep in mind that it is a system we are listening to, composed of the listener, the technical stuff and the room.
I often think it is amazing what carefully selected/combined equipment is able to do (it's the holistic approach).

To get back to our "test topic", I'd be interested to know what kind of experimental evidence would be sufficient to change your belief about questionable stuff/effects.
 
You misunderstand my point: within the amateur astronomy community all these trade-offs are discussed as technical issues; there are no magic tweaks that are not understood by conventional theory. There are very expensive eyepieces (for instance) that give you better eye relief, etc., but they are simply something that takes a lot more effort to produce, i.e. costs more money.

I understand what you are saying.

The problem is that you are assuming there is an objective (pardon the pun) and obvious demarcation between magic and science with which everyone agrees.

You have your view of what the line is, but others are likely to disagree.

With regard to telescopes, I recall a long debate regarding whether or not larger apertures can be at a resolution disadvantage to smaller telescopes under some conditions of poorer atmospheric "seeing".

The debate split into two camps: one calling the premise a "myth", the other believing that there were conditions under which a smaller telescope could outperform a larger one of otherwise equivalent quality. The latter camp had more theoretical support; the former was more empirically driven. Sound familiar?

If there is a subjective element to an endeavor, you will have debate, and people at all points along the continuum will be just as convinced that they are right and others are wrong, as matters of fact.

I understand that in the field of optics there is no obvious equivalent of "room dots" or whatever, but if you look at debates regarding the efficacy of apodizing masks to improve perceived resolution, for example, you will see that the sorts of arguments are not that different.
 
I understand what you are saying, but these issues are relatively few and far between, and hardly in the league of Tice Clocks and Shakti Stones.

My current understanding here is that somehow, when manufacturers hold demos and/or "blind" tests, they are presumed innocent and their "science" is to be held true until someone can prove it wrong. At least their agenda can't be hidden.
 
BTW it is not hard to find a highly trained MW (Master of Wine) who can blind-identify individual wines with great accuracy, no ABX needed. This is no parlor trick; where is the audio equivalent?

Where's the extensive training? Where's the incentive to train? Audio isn't taken seriously enough. Even the professionals in hi-fi are kind of just amateurs who are extra confident in their abilities and opinions (and some are full of self-convinced BS). In professional audio, I'm sure there are engineers who might be considered highly experienced at listening in order to fix problems, but the culture isn't there to be interested in audio differences the way it is in the wine world.

My point being, the lack of an equivalent to the highly trained MW can have many causes. No conclusion can be drawn from it that there are no differences identifiable with great accuracy.
 
Thank you for the most important post of this thread.

That is the point.

One doesn't just happen upon these skills. They must be learned.

As Pete Townshend said, "the simple things you see are all complicated". It seems simple to decide that one thing sounds better than another just by switching back and forth, but there is a lot more to it, and to think the "average audio guy" has developed his hearing acuity to a level that can discern minute differences without any effort is absurd.

If anything, A/B testing should be used as a tool for training, not for assessment.

As with the wine taster, they know the wine. They do not have to compare it to another to know what it IS.

Excellent point. The gist of the matter.
 

It's obvious what the difference is between a wine master & participants in a forum ABX test - so obvious, in fact, that it's strange anyone could conflate the two.

The other interesting question is: what training is needed? I know people train in recognizing & sensitizing themselves to various distortions, but what if the difference between two devices is not to be found in a specific distortion that is easily A/Bed?

It's often mentioned that blind testing using (trained) participants can "show fabulous sensitivity to tiny changes in level, frequency response, interchannel timing and localization".

These are the typical indicators used in tests where audio snippets serve to identify minute, subtle differences, yet some 'obvious' differences heard in sighted listening may not be of this type - they are more often holistic: differences in the 'realism' of the sound, connectedness to the performers, etc. - the sort of differences that result from the analysis function of auditory perception & the way this analysis produces an auditory scene with auditory objects & dynamic auditory streams within it.

If you want to see what's actually involved in a real blind test for such differences, then search for Ultmusicsnob's posts on Head-Fi & Gearslutz, where he walks through ABX testing 16/44 vs 24/192 (he also did blind tests for jitter differentiation - worth searching out his posts).

Many things to note in his posts:
- he had already identified his preference for 24/192 audio files from his sighted exposure to them as a sound engineer & is using Foobar ABX to verify whether this is based on sighted bias or not (see the sketch just after this list)
- he has to take a very different approach to ABXing RB vs high-res than to jitter differentiation. This requires a great deal of flexibility & commitment to finding the 'tell' or aspect that will allow differentiation
- in the case of 16/44 vs 24/192, there is no specific 'tell', i.e. distortion artifact or freq/amplitude difference, that can be isolated down to an audio snippet
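For anyone wanting to see the mechanics: below is a minimal sketch (my own, not Ultmusicsnob's actual procedure - the file names and tooling are assumptions) of how one might derive a matched 16/44.1 version of a 24/192 file for a Foobar ABX trial, in Python with scipy and soundfile.

Code:
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

# Hypothetical file name; any 24/192 source will do.
x, fs = sf.read("source_24_192.wav")
assert fs == 192000

# 192000 * 147 / 640 = 44100, so this is an exact rational resample.
y = resample_poly(x, 147, 640, axis=0)

# TPDF dither at one 16-bit LSB before shortening the word length, so the
# comparison is about sample rate/word length, not quantization artifacts.
lsb = 1.0 / 32768.0
tpdf = (np.random.uniform(-0.5, 0.5, y.shape)
        + np.random.uniform(-0.5, 0.5, y.shape)) * lsb
y16 = np.clip(y + tpdf, -1.0, 1.0)

sf.write("test_16_44.wav", y16, 44100, subtype="PCM_16")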


What is evident from his approach is that most people wouldn't have the motivation, or the ability, to go to the trouble this recording engineer went to!!

For instance - here's an example of a real-world ABX test which gave a non-null result.

Note in this excerpt of his post that he could only get positive ABX results by focusing on the differences in 3D soundstage depth - all his other attempts at differentiating between these audio files failed. Finding that this was the necessary 'tell' took him a lot of attempts with other 'tells' - a long & exhausting exercise.

Also note that he states these differences are small & difficult to detect in Foobar ABX, yet he had no difficulty & didn't need to strain in developing a preference for 24/192 files during his long-term sighted exposure to them.

Note also that he says the choice of source material is crucial in uncovering these differences.

And, finally, note that he says he found "training my ears to find a difference very difficult".

So here is a sound engineer getting positive ABX results & describing the difficulties in achieving this result - I would call them EXTREME difficulties. Now does anyone really believe that ABX testing is as simple as certain people try to make out?

"Keeping my attention focused for a proper aural listening posture is brutal. It is VERY easy to drift into listening for frequency domains–which is usually the most productive approach when recording and mixing. Instead I try to focus on depth of the soundstage, the sound picture I think I can hear. The more 3D it seems, the better."

Program material is crucial. Anything that did not pass through the air on the way to the recording material, like ITB synth tracks, I'm completely unable to detect; only live acoustic sources give me anything to work with. So for lots of published material, sample rates really don't matter–and they surely don't matter to me for that material. However, this result is also strong support for a claim that I'm detecting a phenomenon of pure sample rate/word length difference, and not just incidental coloration induced by processing. The latter should be detectable on all program material with sufficient freq content.

Also, these differences ARE small, and hard to detect. I did note that I was able to speed up my decision process as time went on, but only gradually. It's a difference that's analogous to the difference between a picture just barely out of focus, and one that's sharp focused throughout–a holistic impression. For casual purposes, a picture that focused “enough” will do–in Marketing, that's ‘satisficing’. But of course I always want more.
It took me a **lot** of training. I listened for a dozen wrong things before I settled on the aspects below.

The difference I hear is NOT tonal quality (I certainly don't claim to hear above 22 kHz). I would describe it as spatial depth, spatial precision, spatial detail. The higher resolution file seems to me to have a dimensional soundstage that is in *slightly* better focus. I have to actively concentrate on NOT looking for freq balance and tonal differences, as those will lead you astray every time. I actively try to visualize the entire soundstage and place every musical element in it. When I do that, I can get the difference. It's *very* easy to drift into mix engineer mode and start listening for timbres–this ruins the series every time. Half the battle is just concentrating on spatial perception ONLY

I initially found training my ears to find a difference very difficult. It's *very* easy to go toward listening for tonal changes, which does not help. I get reliable results only when trying to visualize spatial detail and soundstage size, and I tend to get results in streaks. I get distracted by imaginary tonal differences, and have to get back on track by concentrating only on the perceived space and accuracy of the soundstage image.
 
Are you grasping at straws? He never says that the differences in sighted listening were obvious. And his pre-established preference for 24/192 might be pure preconception bias.

The only thing he established is that he was able to train to distinguish his hardware working under very different conditions. Everything else is speculation and ad hoc rationalization.

What he demonstrates is complete ignorance of the hardware chain though. I must admit he writes well otoh.
 
Are you grasping at straws?
I'm grasping at straws?? Irony meter on red!!
He never says that the differences in sighted listening were obvious.
Who's grasping at straws, again?? So establishing a preference in sighted listening is now something you consider difficult?
And his pre-established preference for 24/192 might be pure preconception bias.
Please explain - or am I correct in thinking that you want to have your cake & eat it too? First you contend that sighted preference might be difficult; then you contend that sighted preference is due to bias. Don't you know that a positive ABX result 'proves' it is not bias?? I really can't fathom the pretzel-like logic that is continually displayed here.

The only thing he established is that he was able to train to distinguish his hardware working under very different conditions. Everything else is speculation and ad hoc rationalization.

What he demonstrates is complete ignorance of the hardware chain though. I must admit he writes well otoh.
What his posts show is the difficulty involved in doing some ABX tests, & he answers all questions & gives plenty of detail about how he performed the test & what he eventually settled on as the audio cue that allowed him to differentiate between the two audio files.

Practice improves performance. To reach 99.8% statistical reliability, and to do so more quickly (this new one was done in about 1/3 the time required for the trials listed above in the thread), I mainly have to train my concentration.

It is *very* easy to get off on a tangent, listening for a certain brightness or darkness, for the timbre balance in one part, several parts, or all--this immediately introduces errors, even though this type of listening is much more likely to be what I am and need to be doing when recording and mixing a new track.

Once I am able to repeatedly focus just on spatial focus/accuracy--4 times in a row, for X & Y, and A & B--then I can hit the target. Get lazy even one time, miss the target.

This is a REAL ABX test, showing what's involved - not opposing opinions of 'ABX is stressful' vs 'no it isn't', 'it's just removal of a bias' vs 'no it isn't', etc, etc.
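The "99.8% statistical reliability" he mentions is just binomial arithmetic. Here's a quick sketch (mine, standard library only) of the chance of scoring k-of-n correct in a forced-choice ABX purely by guessing:

Code:
from math import comb

def abx_p_value(correct, trials):
    # Probability of at least `correct` right out of `trials`
    # forced-choice trials by pure guessing (p = 0.5 per trial).
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 14 of 16 correct: p ~ 0.002, i.e. roughly his "99.8%" figure
print(abx_p_value(14, 16))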
 
Simple: I know I'm listening to 24/192. I know it must be superior. I'm feeling good about it. I'm more relaxed listening to it. I thus prefer it.

The ABX effort is nice and indeed proves he can differentiate between various operating points of his gear. It is, however, unproven that it has any correlation at all with his earlier preferences.

Edit: It also shows that it's really hard to differentiate small differences that lead to no obvious artefacts (surprised?), and that one can rationalise something as better when it might in fact be less accurate. The jury is still out on whether this is due to a more pleasing distortion pattern or the need to harmonise our knowledge.
 
Simple: I know I'm listening to 24/192. I know it must be superior. I'm feeling good about it. I'm more relaxed listening to it. I thus prefer it.

The ABX effort is nice and indeed proves he can differentiate between various operating points of his gear. It is, however, unproven that it has any correlation at all with his earlier preferences.

As I edited my previous post while you were writing this, I'll just repeat what I said in my edit:

am I correct in thinking that you want to have your cake & eat it too? First you contend that sighted preference might be difficult; then you contend that sighted preference is due to bias. Don't you know that a positive ABX result 'proves' it is not bias?? I really can't fathom the pretzel-like logic that is continually displayed here.
 
Not when multiple factors are entangled.
Please detail the "multiple factors"

Let me just reproduce a recent post of yours from another thread ("What kind of evidence do you consider as sufficient?") about the blind test evidence you consider sufficient:

First off, a claim must be made. It means the testing starts with someone who claims to be able to hear something in sighted listening with a particular system.

If we're speaking of "evidence", then the factor under test has to be isolated as much as possible. That means blind testing, keeping the system under test as similar as possible to the one that allowed the claim to be made.

I'm not very strict on how "blindness" is achieved. But if the test is to convince people, then it must involve a fair third party (with no particular interest in the outcome) to control the process.

And finally, the test has to be documented, so it can be reproduced.

It has been made clear that blind testing is stressful, but I don't see how to avoid its use. Restricting the test to a very particular claim (allowing for training if needed) and giving the person taking the test as much familiarity with the test setup as they want might help.
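To make that protocol concrete, here's a minimal sketch (my own illustration - the file name and trial count are arbitrary) of how the fair third party could generate and seal the randomized X-assignments before any listening starts, so the test is both blind and documented:

Code:
import json
import random

def make_abx_key(n_trials, seed=None):
    # One 'A' or 'B' per trial, decided by the third party's RNG.
    rng = random.Random(seed)
    return [rng.choice("AB") for _ in range(n_trials)]

key = make_abx_key(16)            # held by the third party, not the listener
with open("abx_key.json", "w") as f:
    json.dump(key, f)             # sealed until the answers are scored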
 
The main problem is that the formation of the preference is unclear (it probably wasn't made on upsampled files, though) and that the test system likely creates a difference (typically higher distortion) that is detected by the ABX but might not be the factor at the origin of the preference (be it bias or other more valid reasons).
 
The main problem is that the formation of the preference is unclear (it probably wasn't made on upsampled files, though)
Please quote the evidence you have for this?
and that the test system likely creates a difference (typically higher distortion) that is detected by the ABX
It's the same hardware used for both sighted listening & ABX testing - so your contention is that Foobar ABX is creating distortions that aren't there in sighted listening through the same equipment? Got some evidence?
but might not be the factor at the origin of the preference (be it bias or other more valid reasons).

So you've built a strawman out of "probably", "likely" & "might be", & yet you talk about "multiple factors being entangled".
 
I have followed this thread from the beginning, while progressing on a digital audio journey of my own.
I started with an 18-year-old Arcam CD player with extensive modifications.
I built a cheap Chinese eBay DAC kit with as many tweaks as diyAudio could provide, and liked it so much that it gave me the motivation to go further.
I now have a Cyrus CDT digital-only CD player, and a Chord 2Qute DAC.
The Cyrus CD source does not use the normal error-correction algorithms used with CD, and the Chord is based on a Field Programmable Gate Array, so it does not use a normal commercial DAC chip. The DAC also takes the digital output from the TV and a PC.
I'm very happy with the performance of the digital sources.
And the acid test? My family don't really give a damn about audio BS, but I know they are happier with the audio environment they now have - what else matters?
Cheap digital sources are a godsend - they have made a radical difference to audio. They are so good that far more consideration needs to be given to the amplification and speakers.
But I am convinced that differences do exist in digital, and they are audible!
 
......

Edit: It also shows that it's really hard to differentiate small differences that lead to no obvious artefacts (surprised?), and that one can rationalise something as better when it might in fact be less accurate. The jury is still out on whether this is due to a more pleasing distortion pattern or the need to harmonise our knowledge.

I see you edited while I was posting.
I posted this real-world ABX example to show what's involved in ABX testing - it's not the simplistic 'ears only' scenario that is usually portrayed. You argue everything else but this central point.

The usual defense given for ABX testing before preference testing is that it ascertains IF a difference can actually be detected - you now try to turn this on its head by attempting to assign to ABX testing something that was never claimed for it.

As I said, the pretzel-like convoluted logic on display is breathtaking.
 
....Cheap digital sources are a godsend - they have made a radical difference to audio. They are so good that far more consideration needs to be given to the amplification and speakers.
But I am convinced that differences do exist in digital, and they are audible!
Yes, it is quite amazing what digital SQ can be had for very little nowadays.
You bet there are differences in digital....oscillator stability/spectrum is the first thing to address.


Dan.
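For anyone wondering why oscillator stability/spectrum matters: here's a toy simulation (mine, with purely illustrative numbers, not modelled on any specific DAC) of how random sampling-clock jitter turns a pure tone into tone plus an error signal:

Code:
import numpy as np

fs = 48_000     # nominal sample rate (Hz)
f0 = 10_000     # test tone (Hz)
sigma = 1e-9    # 1 ns RMS clock jitter -- illustrative only
n = 1 << 16

t = np.arange(n) / fs
t_jit = t + np.random.normal(0.0, sigma, n)   # jittered sampling instants
err = np.sin(2 * np.pi * f0 * t_jit) - np.sin(2 * np.pi * f0 * t)

# For small jitter the RMS error is about 2*pi*f0*sigma/sqrt(2):
# higher tone frequency or worse jitter means a higher noise floor.
print(np.sqrt(np.mean(err ** 2)), 2 * np.pi * f0 * sigma / np.sqrt(2))

With these numbers the error lands around -84 dB relative to the tone - small, but the same tone with 10 ns of jitter would be 20 dB worse.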
 
I'm not reading all of a 190-page topic. The specifications of the test system are unknown to me.

But I can tell you that you need to have a good enough test system in order to get a valid test on the product under scrutiny.

To draw an analogy from the field of photography: you can't expect to appreciate a $5000 camera lens's better performance if you're attaching it to a $300 camera body. In such a scenario, the cheap kit lens that came with the $300 camera may give just as good an image.

It may simply be that a superior-performing DAC does exist, but the system it's being checked out on doesn't have the transparency required to hear it.

As someone who knows something of electronics and how to measure it, and also as an audiophile, I can tell you that when audio systems are extremely good, they become more transparent to changes made within them. I HAVE heard the difference between an expensive DAC ($20,000) and a STUPIDLY expensive DAC ($50,000) in a $200,000+ system, and no, I'm not buying it - not the DAC, nor the system. While I could hear differences when A/B switching between them, that difference isn't worth it to me, and it's totally out of my budget even if I wanted it. In truth I am happy with my existing system, which is far less expensive. The upgrade bug isn't even biting me.
 