DAC blind test: NO audible difference whatsoever

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
I honestly don't know what you are talking about except that it seems rather to be exactly the logical fallacy of Bulverism.

Bulverism seems to be about debating to win rather than truth-seeking, and appears similar to ad hominem but with some circular factor thrown in.

In the interests of truth-seeking, I don't see anyone here trying to undermine arguments by attacking motives or by circular reasoning. If you really think someone is, maybe you could demonstrate it, rather than just throwing out a term which in effect attacks the motives of other people so as to undermine their arguments.

Look, maybe it would help if you asked whether anyone else sees a problem with Bulverism here. If not, then you might ask yourself if you are somehow misunderstanding what others are trying to say.
 
<snip>

Bulverism is a logical fallacy. The method of Bulverism is to "assume that your opponent is wrong, and explain his error."

How I see it being used here is in the example I gave. A 'competently designed' DAC is defined as one which sounds the same as all other 'competently designed' DACs; in other words, any DAC which doesn't sound the same is declared colored. So we get the circular conclusion that anything which doesn't match the definition is wrong - according to this logic, you can't have a DAC which is more accurate and therefore sounds different from other 'competently designed' DACs.

Here's the exchange that you reference:
Originally Posted by mmerrill99
Yeah, I don't know why this idea that preference = colored is being promulgated - it's the exception, I believe?
I don't know where the idea of designing an amplifier to do anything other than "output = X times the input no matter what you hang on the output" came from. This is verifiable in the lab.
My post above was made agreeing with you that Olive's research shows preference does NOT = colored - it actually = accuracy in terms of their tests.
I really can't understand the logic of Scott's reply to my post above and consider it Bulverism - maybe I'm wrong, but that's how it seems to me?
 
Transparency is often used as a generic "goodness" factor - for instance, someone here found the Bryston to be the only transparent amplifier for him. If the SET guy swapped in the Bryston, he might still find the SET more transparent; otherwise there would be a conundrum, or he simply likes the less transparent sound.

If transparency does not equate (at least largely) with preference, I fail to see the whole point of the exercise. The same goes for transparency and accuracy; I don't see how you can have one and not the other.

Of course, different definitions/usages of terms generate misunderstandings; I forgot to make clear that I referred to "transparent" as meaning indistinguishable when included in the chain.
By definition an audio device that is transparent (at least to a specific listener) is sufficiently accurate (at least to this specific listener).

We are usually struggling with the a priori definition of accuracy that would ensure every listener accepts the device as transparent (in the sense mentioned above).

Perceived differences might not lead to a preference (both equally liked/disliked), and there is no easy way to define which of two devices that differ perceptually should be considered the "gold standard", as long as neither is transparent.

Measurements are fine, but which set of measurements ensures transparency has been debated since the 1960s; of course measurement gear got better and better, and the audio gear subsequently did too.

But it is a fascinating fact that in each decade the "non-golden-ears" argue that the measured deviations are well below the thresholds of hearing and therefore no difference could be perceptible, while the next generation of "non-golden-ears" thinks that yesterday's devices weren't good enough, but today's devices are surely better, so that no differences could be perceptible.

Of course, if the equipment keeps getting better, the conclusion must become true at some point...
 
That's not the only difference. Bulverism is also used to attack the motivation of an opponent, like ad hominem.
Ad hominem - Wikipedia

I may be mistaken, but I think there is an important distinction between Bulverism and "ad hominem".

Ad hominem is a tool from eristics used to discredit the person because you can't disprove his arguments - used, so to speak, because you know that he has an argument.

Bulverism, on the other hand, is based on your strong _belief_ that your opponent is deliberately using wrong arguments/facts and therefore must have certain other hidden motives (although you might still be unable to disprove his argument/fact).
 
Yes, I am very much in agreement with you that DOE (design of experiments) is critical. However, it seems to me it is hard to say much about it at a general level.

When it comes down to designing a particular experiment someone with expertise in that area should be on the research team. I also think they need to have someone with expert listening skills, for want of a better term, to help validate the test setup.

I have always held that engineers undertaking biomedical research on their own are liable to make mistakes. Even people trained to work in that area make mistakes, it's a very complicated area in which to do research.

As far as publishing hearing research goes, I think enough information should be included for someone else to be able to replicate the experiment: the types of equipment used, measurements of the test system's performance, and how test subjects were recruited (randomly?). Probably a lot more than that as well. How about videotaping the testing of a few people (with their consent, of course)?

I totally agree, as it is obviously a complex task.

Bech/Zacharov wrote an accurate summary of the situation:
"Almost everyone listens to sound most of the time, so there is often an opinion that the evaluation of audio quality must be a trivial matter. This frequently leads to a serious underestimation of the magnitude of the task associated with formal evaluations of audio quality, which can lead to compromised evaluations and consequently the poor quality of results. Such a lack of good scientific practise is further emphasised when results are reported in journals or at international conferences and leads to a spread of scientific darkness instead of light."

Søren Bech, Nick Zacharov: Perceptual Audio Evaluation - Theory, Method and Application. John Wiley & Sons, Ltd., 2006

I assume that part of the problem is the current system of education, even at university level. As I know from first-hand experience, the study of scientific reasoning/thinking/working isn't/wasn't required in engineering, and the situation seems to be similar in the medical and biomedical fields.
 
<snip>

Yes, familiarity breeds contempt - in this case, our perceptions are such everyday functions that we seldom stop to think about their workings. This leads to situations where people declaring "night & day" differences in what they hear are often criticized and challenged to "prove" their claim in a blind ABX test, which invariably produces a null result.

There are many mistakes in this scenario, most of which arise from a lack of understanding of auditory perception. The first is that the reporter of "night & day" differences gets carried away somewhat by their newly discovered improvement in sound, not realizing that it is the playback illusion that has improved; if they try to isolate the improvement to a specific frequency or timing difference they can perceive, they will find it elusive. It happens in every hobby - people get very enthusiastic about a new discovery or improvement - but in this hobby it is even more the case, because what we are interacting with is the playback illusion. When this becomes more realistic, it can cross some internal threshold of realism, which results in a jump in how we relate to the illusion: it is no longer a series of notes in the right order, it becomes a much more interesting piece of art with some depth.

I know some people reading this will think it's poofery and arty-farty rubbish, and they want to measure the exact difference that causes this jump in the realism of the illusion - it needs to be a frequency, amplitude or timing difference. I agree there is some change within the soundfield causing this change in perception, but - and here's where I differ from some - we have to work out how auditory perception does its thing before we can measure what this change in the soundfield is. We are not at that point yet!

The same applies to blind testing - it is trying to determine the sensitivity of auditory perception, and this is not a trivial task. Sure, we all use A/B listening at times when we feel sound differences are close, but these are just quick tastings, not definitive tests. Doing blind tests which are more than quick tastings requires rigor, as defined by the ITU and other recommendations for blind testing. The reason these blind testing guidelines exist is a certain understanding of the sensitivity of auditory perception.

I would agree with the quote "spread of scientific darkness instead of light" - even more so as it applies to audio forums.
 
I assume that part of the problem is the current system of education, even at university level. As I know from first-hand experience, the study of scientific reasoning/thinking/working isn't/wasn't required in engineering, and the situation seems to be similar in the medical and biomedical fields.

Some researchers say universities don't teach thinking; they filter out people who can't think, and the better universities filter more aggressively.
It seems that no one really knows how to teach people how to think - it appears to be more associated with some property of being clever, much like IQ.

However, many other things can be learned, and experience can be a good teacher. Nothing like learning from mistakes.
 
<snip>

However, many other things can be learned, and experience can be a good teacher. Nothing like learning from mistakes.

Of course, but I was thinking more along the lines that (at least in my case) there was so much material marked as mandatory - laboratory practice, industry practice, a large amount of mathematics, physics, mechanics, electrical engineering, etc. - and occasionally our professors emphasized the fact that we were working/studying at a _university_ where scientific work is done, and that we were supposed to be able to do scientific work as well.

But no involvement in any lecture/course in the philosophy of science was required - no credit points needed, no exam to pass.

As if students would learn to do sound scientific work just by attending a university like that. IMO it is no wonder that students didn't attend those courses on their own (although nearly every university with a physics department offers them) and mostly never thought about epistemology.

As said before, the situation seems similar in medicine/biomedicine, and that is the reason a lot of bad/mediocre science is done and published (besides all the other factors that might influence the work).
 
No opposition here to making people take some classes. They can learn as they do in other classes, and absorb as much useful information as they can. My only point was that whatever benefit comes from it, it will not be that people have learned how to become better thinkers. They would, however, presumably become more informed thinkers, and do so in a way helpful for doing research should they ever decide to go in that direction.
 
Not having read all of this thread I have some opinions:
Differences between dacs are generally subtle. I you REALY want to find differences between them you must use amps with minimal distorsion, a speaker system that can reproduce sounds over a very large bandwith with very low distorsion on very high acoustic levels. I would never have chosen the open baffle speakers with ribbon tweeters that are used in the test described in this thread. On the other hand I would certainly like them for casual listening with som wine. You could also design testsignals to use in the test to magnify the differences between the test subjects. You can't kick back and listen to some cool jazz to find the differences.

And as Dave states, statistics can only take you so far. You can't say there is no difference between two devices, but you CAN say that at the time of the test you could not hear any difference. You can never prove the absence of something - science does not allow it. Not all music lovers are honed in this kind of thinking, but it's a fact anyway.
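The statistical point above can be made concrete with a small sketch (the trial counts are made-up numbers, not from any test in this thread): an exact binomial test tells you how likely a given ABX score is under pure guessing, and a non-significant result only means detection was not demonstrated - it says nothing about whether a difference exists.

```python
# Hypothetical ABX outcome (made-up numbers): 12 correct out of 20 trials.
# Null hypothesis: the listener is guessing, so each trial is correct with p = 0.5.
from math import comb

def binom_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of scoring k or better by luck."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_value = binom_tail(12, 20)
print(f"chance of >= 12/20 by guessing: {p_value:.3f}")  # about 0.252
# 0.252 > 0.05: the test failed to demonstrate detection. That is all it
# says -- it is not evidence that the two devices are indistinguishable.
```

Note the asymmetry: a score of 12/20 is entirely consistent with guessing, yet also consistent with a listener who hears a real but small difference; only more trials can separate the two.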
 
Indeed. One needs the right examples to be able to notice the differences later in "normal" recordings. Let me give one little example: the extraordinary LSO Live recording of Nielsen's 5th under Sir Colin Davis. It comes with a Blu-ray from which one can even just copy out the DSD master - no need to rip the SACD.

Now, listen to the three notes played by the oboe just before the beginning of the second part (adagio) of the first movement. The sound of the keys being pressed is quite loud in this passage - and this is in general a difficult sound to reproduce - but between a setup with an ordinary DAC and/or a not very resolving downstream chain, and a setup with a very resolving DAC coupled with a very transparent amp and speakers, one can clearly hear a noticeable difference in the rendering of the sound. No need for a blind test: just listen on different setups of different quality.

After that simple and quick test, the brain is already trained to perceive these differences everywhere. But the difference in rendering this type of sound is just one parameter (rendering fast transients with very irregular spectra - connected to speed), and there are many others. Some DACs may differ in rendering some sounds and not in others.

Roberto
 
Not having read all of this thread I have some opinions: <snip>

I would argue that most sources have high enough jitter that all DACs sound similar. This is why people using CD transports often say they sound the same. Transport jitter is just too high. See this study I did:

How much jitter from a typical Transport?

Even a poor S/PDIF cable can make all the difference. See this study I did:

Can S/PDIF cable jitter be measured?

With a really low jitter source, you will hear more differences.

Steve N.
Empirical Audio
 
That's what we tried to emphasize when describing the differences between statistical analysis and the analysis of experimental conditions.
To get real/correct positive test results one has to have
-) an audible difference between the DUTs
-) a reproduction system able to resolve the difference
-) a detector (aka listener) that is able to detect the differences under the specific conditions to a sufficient degree
-) a priori calculation of effect size and sample size needed to ensure that probability of errors of the second kind will be appropriately low

(leaving aside, for simplification, the difficulties in defining the "audibility of a difference") :)
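The last bullet can be sketched in a few lines (all numbers here are assumptions for illustration, not from the thread): for a forced-choice ABX test, the a priori calculation finds the smallest number of trials at which a listener with an assumed true hit rate would pass the significance criterion with acceptable power, i.e., with an appropriately low probability of an error of the second kind.

```python
# A minimal power/sample-size sketch using exact binomial probabilities.
# Assumed numbers: true hit rate 0.7, significance level 0.05, power 0.8.
from math import comb

def tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def trials_needed(p_true=0.7, alpha=0.05, power=0.8):
    """Smallest trial count n such that an exact binomial test at level
    `alpha` detects a listener with true hit rate `p_true` with at least
    the requested power (type II error rate at most 1 - power)."""
    for n in range(5, 500):
        # smallest score that pure guessing (p = 0.5) reaches with prob <= alpha
        k_crit = next(k for k in range(n + 1) if tail(k, n, 0.5) <= alpha)
        if tail(k_crit, n, p_true) >= power:  # chance a real detector passes
            return n, k_crit
    raise ValueError("no n below 500 trials reaches the requested power")

n, k = trials_needed()
print(f"plan for {n} trials; claim detection only at >= {k} correct")
```

The point of doing this before the test is exactly the one made above: with too few trials, even a genuinely audible difference will usually produce a null result, and the null then tells you nothing.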
 
I hoped for some disagreement, but everyone seems to have the same opinion as me, apart from some small details.

The reproduction system needs to be of another magnitude than the one used in the test. Forget about bookshelf speakers on stands and medium-sized bass-reflex speakers, not to mention dipoles like my Orions, my only speakers.
You can even advance the notion that bass-reflex speakers tuned higher than 20 Hz are not up to par for this kind of investigation, as they always get into trouble below the tuning frequency, and a bit above it, for various reasons.
You typically end up with a big studio-monitor-style speaker system with big amps that will not clip spiky transients (micro-clipping). Possibly headphones can be an alternative, as they solve many problems, but I have not participated in tests with those.
You may understand that these kinds of tests are boring and painful to perform and can seriously damage your hearing - at the very least you get very tired of it. It's hard work, but someone's gotta do it :)

As mocenigo says, it's also valuable to train yourself openly with familiar test material, so you can identify what to listen for before the real blind test.

In the spirit of mocenigo, I can tell a little about a test made by "Ljudtekniska Sällskapet", the Swedish society of music and audio engineering, of which I'm a member. We are not that interested in comparing two different pieces of equipment; we want to compare a DAC to "no DAC", really zooming in on and magnifying its characteristics.
To do this we hook up a test rig so that we can switch between 1) an analog audio signal and 2) the same signal played through a chain consisting of a good professional ADC followed by the DAC we want to test. Then it's up to the participants to work out which is which.

The latest test was the Chord Hugo 2. As only one listener could hear any difference, continued testing involved only him, in the hope of getting useful statistics. He then had to perform a much longer listening session to make up for the fact that the rest of the guys couldn't answer at all, or produced inconclusive answers that could never give statistically relevant proof of detection.
After a while he couldn't take it anymore, having pain in his ears, so the test had to be continued the day after. The only signal he could use to hear the differences was an artificially designed metronome sound used in studios for recording (Pro Tools). The testers managed to produce proof of detection (the DAC does something to the music).

Performing these tests and producing results is not easy, and sometimes we don't manage to get them. We did some hard work on the Benchmark DAC2, but on that occasion we had to bite the dust and accept the fact that we couldn't detect it.
 