World's Best DACs

See the following example of a response which still doesn't cite an actual example of an audio DBT.

It remains unclear why my statement would qualify as a strawman argument.
The deeper meaning of this was that it is totally irrelevant whether you know about a specific test (with undetected differences), or whether you believe in it; what I have stated follows simply from the framework of statistical hypothesis tests.
It doesn't matter if you like it or not.

<snip>

I wasn't talking about audio DBTs done improperly. Ones that are done properly have consistently shown that competently designed DACs are audibly indistinguishable, excluding the very small percentage of those designed to "color" the sound.

Please cite a few of these "properly done" tests (fully documented so that an analysis is possible); methodologically sound listening tests are always welcome.
 
There you go extending the results beyond their capability (i.e., improperly). You cannot prove a negative.

Any DBT that produces a null result (the DUTs sound the same) is only applicable to that test; any extension beyond that is erroneous. It might have some meaning to the person taking that test, but it is entirely possible that a difference will be heard the next time.

An ABX test, in particular, can only prove that the DUTs are different.
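That asymmetry follows directly from the one-sided binomial test used to score ABX runs (a minimal sketch of my own, assuming independent trials and a 0.5 guessing probability; the trial counts are illustrative):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the probability of scoring at least
    `correct` out of `trials` by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A high score rejects the guessing hypothesis: evidence of a difference.
print(round(abx_p_value(14, 16), 4))  # 0.0021

# A chance-level score does NOT show the DUTs sound the same;
# it merely fails to reject the guessing hypothesis.
print(round(abx_p_value(9, 16), 4))   # 0.4018
```

Note that the second result says nothing about whether a difference exists; it only says this particular run did not demonstrate one.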

dave
So I report the results and that automatically becomes "extending the results beyond their capability"?

I hope you can see that you are right back at your generalization of audio DBTs, properly or improperly done.
I'm not anti-DBT. I am anti bad DBTs, and anti people improperly extending the results of a DBT. DBTs are far too often badly executed.

dave

Question for you, Dave: what is their real capability (not the hyped-up marketing ploy created by Robert Harley and the shills like him)?
 
The example of missing a real difference due to misplaced focus is a fine point to be aware of and to design the experiment for,
and it should provide motivation for listening training with emphasis on multiple dimensions. Perhaps in the case cited, just looking at the difference file or its spectra would have helped.

But it's not "cover" for those claiming they can already hear a difference when the specific claimed difference is put to a DBT/ABX and they get null results.

I think we can make some inferences from accumulated null results from people who assert they "know" the "signature" of the difference and can hear it through other confounders like level or frequency-response variations.
 
It remains unclear why my statement would qualify as a strawman argument.
Two possible reasons:
1. You don't understand the term, in which case you can look it up.
2. You intentionally don't get it, also known as "playing dumb". Likely because you have an agenda to pursue, such as the business interest that Robert Harley and the shills like him are pursuing.

The deeper meaning of this was that it is totally irrelevant whether you know about a specific test (with undetected differences), or whether you believe in it; what I have stated follows simply from the framework of statistical hypothesis tests.
It doesn't matter if you like it or not.
What you've stated is speculation.
Please cite a few of these "properly done" tests (fully documented so that an analysis is possible); methodologically sound listening tests are always welcome.
So you do acknowledge that you can't cite a single example of a level-matched audio DBT that failed to detect an audible difference that was really there.

Since you don't want to search for it, here you go.
http://webpages.charter.net/fryguy/Amp_Sound.pdf
Blind Test: DAC1 vs Pioneer DAC

By the way, there have been level-matched audio DBTs that detected audible differences.
Untitled document
 
Focus has also been cited in a test where "golden ears" were getting null results and a "skeptic" discovered a "leak" in the blinding that let him ABX with no test signal present at all.

So it can go both ways: diverse ears, attitudes, preconceptions, and deliberately steered focus are needed to make listening tests more robust.
 
You haven't looked at the video I posted.
Any test an individual does, even a DBT, is still open to biases. Furthermore, the way someone listens to the sounds under testing versus when he is just listening to music has a huge effect.
You haven't looked at the video I posted.
Oh really?
http://www.diyaudio.com/forums/tubes-valves/240043-what-tube-sound-22.html#post3584496
http://www.diyaudio.com/forums/lounge/279659-best-set-amp-design-4-watts-12.html#post4452969
Any test an individual does, even a DBT, is still open to biases. Furthermore, the way someone listens to the sounds under testing versus when he is just listening to music has a huge effect.
And what does that have to do with the audio DBTs I've talked about?
 
It should be mentioned that Planet10 has some pretty major subjective biases and theories in loudspeakers (see DDR) that no one else seems to be able to validate, and that don't have much in the way of priors. So if no blinding protocol ever seems good enough for him, there is a definite risk (I won't say in the least that it's absolute) that this comes from motivated reasoning. Then again, there have been times when pseudo-studies on DIYAudio have been laughably called statistically significant, and we agree that it's rubbish to make those assertions.

Disclosure: I'm less likely to look carefully at the methodologies of studies that find null results than at ones that find non-null results.
 
It should be mentioned that Planet10 has some pretty major subjective biases and theories...

Disclosure: I'm less likely to look carefully at the methodologies of studies that find null results than at ones that find non-null results.

It is fair to say "Mr X has theories that are not widely shared..." but it is unacceptable to say "Mr. X has biases..." except when prefaced with "...in my subjective opinion...."

There are lots of ways to produce screwy results, both positive and negative. I think many posters are annoyed when their pet beliefs seem to be unsubstantiated even by a good test. Besides picking holes in the method, participants, etc., it is legitimate for them to ask if the test had what statisticians call "power": the ability to actually demonstrate a convincing (statistically solid) positive result if any were present.

Sadly, it is bad science to devise an experiment without power, end up with results that are a mess, and then declare that you can see no positive detection.
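That notion of power can be sketched numerically (my own illustration, assuming an ABX-style binomial model: a listener with a true per-trial hit rate `p_true`, scored with a one-sided test against guessing at alpha = 0.05; the numbers are illustrative):

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def abx_power(trials: int, p_true: float, alpha: float = 0.05) -> float:
    """Probability that a listener with hit rate p_true produces a
    statistically significant score, i.e. the test's power."""
    # Smallest score that would be significant under pure guessing.
    k_crit = next(k for k in range(trials + 1)
                  if binom_tail(trials, k, 0.5) <= alpha)
    return binom_tail(trials, k_crit, p_true)

# With only 10 trials, even a listener who genuinely hears the
# difference 70% of the time usually fails to reach significance.
print(round(abx_power(10, 0.7), 2))   # 0.15
# More trials give the same listener a much better chance.
print(round(abx_power(50, 0.7), 2))
```

A "null" result from the 10-trial test is therefore close to worthless as evidence of inaudibility.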

But when you do have "negative" results, some good scientists (like Toole) still dig deeper. For example, he'll describe his sample and method carefully. And he might point out, "the sub-sample of professional reviewers were as poor as....."

Ben
 
It should be mentioned that Planet10 has some pretty major subjective biases and theories in loudspeakers (at least in speakers, see DDR)

I do have strong opinions, informed by 45 years or so of experience (including 10k+ hours of listening training), many of those years in the industry, which has allowed me to sample a large breadth of stuff and talk to many informed (and uninformed) people. Call those biases if you like.

I do speak up when I see people make the same mistakes I have.

Many people here have strong opinions, whether those are purely subjective, or the subjective choice to believe that objective measurements which have not been correlated with what we hear carry scientific meaning with respect to sonics. Measurements are important, but like purely subjective tests they have to be tempered by their shortcomings. They are a useful tool.

The only real tool we have is hearing tests, and to remove bias they should be blind. Far too many are executed poorly.

DDR is a term from Allen Wright (RIP) that he used primarily for electronics, as a superset of many "fluffy" terms used by reviewers, to describe the ability of a system to transmit small pieces of information even in the presence of much larger information that might mask them. Having high DDR is the difference between good and great hi-fi.

It is particularly applicable to speakers because they are so bad, with many necessary compromises given today's technology.

More people should use it, and it would be a real advance if we could figure out how to measure it.

In the end, all that matters is whether your sound system connects you emotionally to the music. With even the best hi-fis still in the "dark ages" (i.e., hi-fi can and will get a lot better), and with people having their own sets of listening biases and experience, there are many different solutions for achieving that.

dave
 
There are lots of ways to produce screwy results, both positive and negative. I think many posters are annoyed when their pet beliefs seem to be unsubstantiated even by a good test. Besides picking holes in the method, participants, etc., it is legitimate for them to ask if the test had what statisticians call "power": the ability to actually demonstrate a convincing (statistically solid) positive result if any were present.

Sadly, it is bad science to devise an experiment without power, end up with results that are a mess, and then declare that you can see no positive detection.
KenTajalli, who contributed valuable comments earlier but has since withdrawn, has sent me a few comments. Here are my distillations of his efforts.

1. Sample size matters. [I agree only that representativeness of the sample matters. A clear result with a small sample is more convincing and more meaningful than the familiar tiny differences that become statistically significant with a big sample.]

2. It is particularly meaningful for an experimenter to show that their sample is not brain-dead and that their method has enough moxie to demonstrate differences when they are true. It is especially helpful to have that kind of cross-power with a meaningful test; for example, to show their sample CAN tell when 2% distortion is present... but not, say, 1%... but still can't tell one DAC from any other.

3. Ken also tells me that he looks forward to the day when engineers and techies do the testing, not fools at home. [I couldn't disagree more. Decades of stage magicians have revealed the tricks of hucksters by which scientists and engineers have been thoroughly duped. More exactly, I'll say something that will bring howls from all the wannabe engineers on this thread: leave human testing to the properly trained and experienced professionals... applied psychologists.]
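The bracketed caveat about sample size in item 1 can be illustrated numerically (a sketch of my own, assuming one-sided binomial tests against guessing; the scores are hypothetical):

```python
from math import comb

def p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value against pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A clear result from a small sample: 9/10 correct (a 90% hit rate).
print(p_value(9, 10))      # ~0.0107, significant
# A tiny effect made significant by sheer sample size: 540/1000 correct.
print(p_value(540, 1000))  # also < 0.01, yet only a 54% hit rate
```

Both results are "statistically significant", but only the first describes an effect size anyone would care about.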

Ben
 
FWIW, the Klippel test told me that I can just about hear 0.5% added THD through my speakers.
As I said before, the differences I think I hear between DACs are smaller than the differences I think I can hear between power amps, and if pushed I'd put them down to differences in the analogue circuitry surrounding the actual converter.
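For anyone wanting to reproduce that kind of threshold test: "0.5% added THD" just means harmonics mixed in at 0.5% of the fundamental's amplitude. A minimal sketch (my own illustration, not the Klippel tool itself, using a single added 2nd harmonic and a crude one-bin DFT; all parameters are illustrative):

```python
import math

RATE = 48000  # sample rate in Hz

def make_tone(freq, thd, n=4800, rate=RATE):
    """Sine at `freq` Hz with a 2nd harmonic mixed in at `thd`
    (as a fraction of the fundamental's amplitude)."""
    return [math.sin(2 * math.pi * freq * t / rate)
            + thd * math.sin(2 * math.pi * 2 * freq * t / rate)
            for t in range(n)]

def thd_of(samples, freq, rate=RATE, harmonics=5):
    """Estimate THD by correlating against each harmonic (one-bin DFT;
    assumes `samples` spans a whole number of cycles of `freq`)."""
    def amp(f):
        re = sum(s * math.cos(2 * math.pi * f * t / rate)
                 for t, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * f * t / rate)
                 for t, s in enumerate(samples))
        return 2 * math.hypot(re, im) / len(samples)
    fund = amp(freq)
    return math.sqrt(sum(amp(freq * k) ** 2
                         for k in range(2, harmonics + 1))) / fund

tone = make_tone(1000, 0.005)              # 1 kHz, 0.5% 2nd harmonic
print(round(100 * thd_of(tone, 1000), 2))  # 0.5 (percent)
```

In a real listening test you would of course play the clean and distorted versions back to back, level matched and blind.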
 
In the end, all that matters is whether your sound system connects you emotionally to the music. With even the best hi-fis still in the "dark ages" (i.e., hi-fi can and will get a lot better), and with people having their own sets of listening biases and experience, there are many different solutions for achieving that.
I'm not sure how one can use a phrase like "connects you emotionally to the music" and hi-fi in the same paragraph... :scratch: You do know the meaning of hi-fi, don't you?
 
You do know the meaning of hi-fi

A sound system that provides an illusion of a musical performance. A device that provides entertainment. A sonic information transmission system. The end product provides pleasure to the listener. "Hi-fi" is usually reserved for systems that lose as little information as possible. Given the current state of the art and the broad swath of target humans, there is a whole lot of wiggle room.

dave