DAC blind test: NO audible difference whatsoever

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
What I see is a market that is collapsing.


Audiophiles are getting older (and they eventually die)... Younger people are less interested than we were in the '70s, '80s, and '90s... Portable devices, headphones, and soundbars are the leading products now... HiFi boutiques are closing or struggling...
Some will become audiophiles, just as some former ghetto-blaster carriers did.
 
This. The burden of proof is on those who claim to hear a difference

The burden of proof is indeed on those who claim to hear a difference.

I will certainly make a third and final attempt, for the sake of curiosity, but if no positive results follow after that, I will close the book on this topic.

Unless proven otherwise, I have already demonstrated that no night-and-day kind of difference can be expected from digital-to-analog converters.

It's also very close to what the thread's title says: NO audible difference whatsoever.

Now, if you disagree, the burden of proof is waiting on your doorstep.
 
I think it's promising. Wouldn't bet on the outcome, but it's promising.

I still wonder about your amplifiers. They look pretty good on paper so long as speaker Z is low. But, specs and graphs don't always tell the whole story, that's been my experience anyway. Would be nice to have some way to double check any piece of equipment that is always in the signal path.
 
mmerrill99, I honestly take things here with a grain of salt and a smile. I'm dedicated and serious about the tests I'm conducting, but I will not bang my head against the wall over them.

I'm driven by a scientific curiosity that extends even outside the acoustic sphere, but I sure hope I didn't hurt anyone's feelings here in the process...

That being said, my opinion is that you didn't bring any evidence to the table. And the harder you try to convince, the less it seems to work. Call that human bias, if you wish, but that's how I feel right now.

The whole "training" thing: sorry, cannot compute.
 
A few notes about ABX testing:

It's not true that ABX tests only produce results showing no difference. As long as you are above the threshold where identification is possible, you'll get positive identifications.
Yes, you are testing at the threshold of perception (as far as this sort of test is concerned) & this needs particular consideration.

1 ml of vodka diluted in 2 liters of Coca-Cola might be difficult to catch, but pure vodka vs. pure Coca-Cola might get you 100% positive results from every participant (if not, call for help).
That example is way below threshold, so let's bring this back to some sort of reality & something you already stated you have an interest in: wine. What is the just noticeable difference (JND) between the alcohol percentages of wines in blind tasting? Is it a 1% difference, a 2% difference, i.e. is a 13.5% wine noticeably identified as higher in alcohol strength than a 12.5% wine in such blind tests? Would this JND become noticeable after longer-term consumption of the two wines? I would suggest one would feel the effects of the alcohol in one wine sooner than in the other & they would therefore be noticeably different.

I think the same applies here, in audio. The problem is that we have our heads stuck in device comparisons equivalent to spotting 0.01 ml diluted in 100 liters. That's just not fair to our human capacities.
Yes, auditory perception is not 100% reproducible or accurate, particularly at threshold; hence the need for a statistically significant number of trials in ABX testing & the subsequent statistical analysis to determine the likelihood of random guessing. Btw, even with signal differences which are not at threshold, blind testing with forced choice returns results which are NOT 100% in favor of one choice; such is the nature of auditory perception & of the particular test being used.
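To illustrate the statistical analysis mentioned above: a single ABX run is usually scored with a one-sided binomial test against random guessing. A minimal sketch in Python (the trial counts below are made-up examples, not data from this thread):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the probability of getting at least
    `correct` answers out of `trials` by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 correct: p ~ 0.038, below the conventional 0.05 cutoff
print(round(abx_p_value(12, 16), 3))
# 9 of 16 correct: p ~ 0.40, statistically indistinguishable from guessing
print(round(abx_p_value(9, 16), 3))
```

This is why "a few right answers" means nothing on its own: the score has to be compared with what coin-flipping would produce over the same number of trials.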

But you are using an example which is way below threshold ("0.01 ml in 100 liters") & equating this to the differences between DACs; this is a ridiculous claim.

If your statement were true then how would training reveal such a sub-threshold difference?

You really need to stay realistic if you are to give analogous examples - so tell us about JND of alcohol % in blind wine tasting, please.

We need to lower the bar. We need to accept that we do not have superpowers. Also, we need to understand that, in case of doubt, the brain is programmed to create differences. Perceived differences are real to us, but not sensorially real. That is why we all perceive differences but fail to prove them.
No, we perceive differences because we are physiologically affected by the devices, but this isn't so available to consciousness, which is what such forced-choice blind testing focuses on. Over longer-term listening we become more clued in to how a device is affecting us, just as we would with the higher-alcohol wine!

-----------------

One other thing: it's easy to make an error in ABX test methodology or set-up. Making two different things adjusted, calibrated, and equal is much harder than leaving two things different. Therefore, set-up errors favor positive identification, not the other way around.

That's why POSITIVE identification needs to be questioned much more than negative identification.

That's why I'm very focused on SPL matching when running such a test. A level difference of 0.3-0.5 dB can be identified, and that alone would spoil the test without anybody knowing.
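To show just how tight that SPL-matching tolerance is, here is a quick sketch of the standard amplitude-to-decibel conversion (the 3.5% figure is an illustrative number, not a measurement from this thread):

```python
from math import log10

def level_difference_db(v_ref: float, v_test: float) -> float:
    """Level difference in dB between two signal amplitudes (e.g. voltages)."""
    return 20 * log10(v_test / v_ref)

# A mere 3.5% amplitude mismatch already amounts to about 0.3 dB,
# enough (per the post above) to contaminate a blind test:
print(round(level_difference_db(1.000, 1.035), 2))
```

In other words, two sources that measure within a few percent of each other on a voltmeter can still differ by an audible fraction of a decibel.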
Also, the switching procedure is very important: NO noise. If noises are unavoidable, you must design the procedure so that the same noise occurs every time, even when no switch is being made. Participants, consciously or not, will try to find ANYTHING to grasp at, including noises, other people's comments, etc. That's not cheating per se, but great care must be taken.

Bottom line: it's much harder to prove a positive identification than the other way around.
 
I still wonder about your amplifiers. They look pretty good on paper so long as speaker Z is low. But, specs and graphs don't always tell the whole story, that's been my experience anyway. Would be nice to have some way to double check any piece of equipment that is always in the signal path.

Yes, I could change the amplifier as well.

I might give it a try if I have the time.

The main suspects so far were:

1. Tweeters
2. Source (iTunes/MacMini)
3. Music files
4. Listening distance from the speakers

All changed (for the better, I hope). I will even try to reduce the noise floor by holding the test sessions in the evening with the ventilation system temporarily stopped.
 
mmerrill99, I honestly take things here with a grain of salt and a smile. I'm dedicated and serious about the tests I'm conducting, but I will not bang my head against the wall over them.

I'm driven by a scientific curiosity that extends even outside the acoustic sphere, but I sure hope I didn't hurt anyone's feelings here in the process...

That being said, my opinion is that you didn't bring any evidence to the table. And the harder you try to convince, the less it seems to work. Call that human bias, if you wish, but that's how I feel right now.

The whole "training" thing: sorry, cannot compute.

Ok, your refusal is noted but please don't demean the term "scientific curiosity" - curiosity, yes; scientific, no

As you were!!
 
I'm sorry to hear about your lack of a sense of humor, mmerrill99, but in all the previous pages I see nothing that would make me think the ABX testing method is flawed.

What people refer to as "short-term ABX" may have limited sensitivity for detecting small differences. There are reasons to think this has less to do with ABX itself than with humans and their perceptual complexity.

Regarding training, one can do it or not. The purpose is just to learn what to listen for. JBL uses trained listeners to evaluate loudspeakers because they have found they more quickly get to reliable speaker evaluations that way.

A possible downside of not training is that one could buy or choose equipment based on untrained listening, then have a guest listener point out some defect that was previously unnoticed. Once known about, some defects can be hard not to hear, so a previously happy purchase might turn into an unhappy one.

On the other hand, trained listeners can get very picky. It can make it harder to enjoy music if attention keeps getting drawn to relatively minor defects.
 
My son can't taste the difference between a $10 Italian wine and a $200 Burgundy; to him both are just "bitter". Of course, I can.

A person who is satisfied with a free smartphone headset can hardly hear the difference among high-end systems, I believe.
That $200 Burgundy is just snake-oil - you only think you can tell the difference - you need to prove such extraordinary claims with extraordinary evidence.
 
I'd like to point out something important about the "only 4 participants" argument.

Yes, on a purely statistical basis, 4 participants is a weak sample; there is no doubt about it. And the third attempt will surely involve more participants.

But the thing to understand is: we were 4 seasoned audiophiles, healthy, with no history of auditory problems of any kind, aside from the usual high-frequency sensitivity degradation over time... We would've known if we had such problems. And, IF NOT, what are the odds that ALL FOUR were unaware of auditory problems affecting that very ABX test in the very same way?

Bottom line: I'm very confident that even if a 4-participant sample is statistically weak, chances are very high that we are good, accurate "healthy human samples" who reflect the average population... AND, because of the "seasoned audiophile" factor, with better chances of spotting differences than the average John/Jane would have.
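One way to put numbers on "statistically weak" is a power calculation: how often would a listener with a genuine but modest ability actually pass the test? A sketch, assuming a hypothetical 70% true hit rate (an illustrative figure, not a claim about these participants):

```python
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for a binomial(n, p) random variable."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def abx_power(trials: int, true_rate: float = 0.7, alpha: float = 0.05) -> float:
    """Chance that a listener with the given true hit rate reaches
    statistical significance against guessing (p = 0.5) in `trials` trials."""
    # Smallest score whose guessing p-value clears the alpha cutoff
    k_crit = next(k for k in range(trials + 1)
                  if p_at_least(k, trials, 0.5) <= alpha)
    return p_at_least(k_crit, trials, true_rate)

# A genuine 70% listener passes a 16-trial test less than half the time;
# pooling 40 trials (e.g. more participants or longer sessions) raises
# the detection odds to roughly 80%.
print(round(abx_power(16), 2), round(abx_power(40), 2))
```

This is why "only 4 participants" matters less than the total pooled trial count: a real but small audible difference can easily slip through a short test purely by chance.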
 
Ok, your refusal is noted but please don't demean the term "scientific curiosity" - curiosity, yes; scientific, no

As you were!!


I sure don't demean the term "scientific", any more than you have the exclusive right to decide what has scientific value.

I sense ego, mmerrill99. A lot of misplaced ego. Be careful if you want to continue this discussion with me in an intelligent, constructive fashion; my patience is not in unlimited supply.
 
Calling something snake oil can be used in a thinly veiled way to go after someone without technically violating the forum rules. The moderators might see through it, though.

Look, trying to persuade someone to change their beliefs can be frustrating. People often don't accept other people's proof. It can take time and patience, and maybe some luck, to influence other people's beliefs at all. Expect some failures, take it easy and relax, it's just an internet forum, nothing to get too worked up about.
 
.... funny about the $200 Burgundy... :)

Two weeks ago we did an ABC blind test with two $25 Canadian Pinot Noirs and a $180 1er cru Chambolle-Musigny Les Charmes 2010.

Again, 4 participants (two couples): the two Canadian wines were mixed up, but the Chambolle was spotted, without a doubt, by everyone.

I have done numerous wine blind tastings, much more difficult than a simple ABX, and I'm sure an ABX would be very easy with most wines, for most participants, in many different not-so-controlled contexts.

I cannot even imagine an oaky Californian zin being confused with a French gamay.

But, hey, again... Life is full of surprises, uh? :cool:
 
frugal-phile™
We would've known if we had such problems. And, IF NOT, what are the odds that ALL FOUR were unaware of auditory problems affecting that very ABX test in the very same way?

Perhaps someone can point to a reference for the very large European test on audio compression where, just before it was about to be approved, a new listener came in and pointed out a very audible whistle in the compressed files. Once pointed out, everyone could hear it.

dave
 