I don't think DF96 is a bad guy. Also, I don't think he necessarily has an agenda to dismiss every claim of hearing something outside of what existing research has shown for about 95% of people.
He may have an agenda to get people to admit that there have to be some limits to human hearing, whatever they may be. It would seem odd for people never to admit to failing to hear a difference, when the likely reason for such a failure is simply that they have reached the limits of what they can detect.
If we want to be like scientists, and we are interested in what we can hear, then we should be interested in trying to accurately pin down our own limits too. To do that, at some point it becomes necessary to do some kind of blind testing with ourselves - though that's not to say it has to be ABX. It just needs to be honestly blind.
If we think we can do better non-blinded, fine. But that doesn't mean we shouldn't want to find out and know our own blind limits under some particular conditions. There's no reason for embarrassment whatever the outcome. It doesn't bother me if somebody else can hear things I can't - good for them.
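For anyone wondering what "honestly blind" can look like in practice without a second person, here is a minimal sketch of one way to do it: a script shuffles the two files into anonymously named trial copies and hides the answer key until scoring. The file names (a.wav, b.wav), the trial count, and the whole workflow are illustrative assumptions, not something from this thread.

```python
import json
import random
import shutil
from pathlib import Path

# Hypothetical inputs: two captures of the same music through the two
# devices under test. Nothing here comes from the thread; it's just one
# way to keep yourself honestly blind without a second person.
SOURCES = {"A": Path("a.wav"), "B": Path("b.wav")}
TRIALS = 16
OUT = Path("blind_trials")

def prepare_trials() -> None:
    """Copy a randomly chosen source to trial_01.wav..trial_16.wav and
    save the answer key to a file you don't open until after listening."""
    OUT.mkdir(exist_ok=True)
    key = {}
    for i in range(1, TRIALS + 1):
        label = random.choice(list(SOURCES))
        shutil.copyfile(SOURCES[label], OUT / f"trial_{i:02d}.wav")
        key[f"trial_{i:02d}"] = label
    # The key file is what keeps the test blind: don't open it until scoring.
    (OUT / "answer_key.json").write_text(json.dumps(key, indent=2))

def score(guesses: dict) -> None:
    """Compare your written-down guesses against the saved key."""
    key = json.loads((OUT / "answer_key.json").read_text())
    correct = sum(guesses.get(trial) == label for trial, label in key.items())
    print(f"{correct}/{len(key)} correct")

if __name__ == "__main__":
    prepare_trials()
```

One practical caveat: anything that differs between the two files other than the sound itself (length, file size, metadata) can unblind you, so the captures should be trimmed and level-matched first.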
Absolutely there are limits to what humans can hear, but they weren't established by such haphazard blind ABX tests as seen here.
Sure, personal blind tests are interesting to perform & probably everyone has done many along the way to try to differentiate a particular aspect of the sound that seems close enough to be iffy. And agreed, we are somewhat influenced by the results we obtain in this informal blind listening, but it's just another one of our listening experiences, & due to the unstructured nature of these types of tests, it comes nowhere near supporting the absolute claims being made for this test.
Also, what's the definition of long-term here? Do you have evidence of that specific scenario? I.e. (presumably blind) long-term listening tests with (presumably) repeatable preference expressed, but where there is no consciously detectable difference? It'd be a very interesting result.
That's a fair question, but for now let's say that whatever definition mmerrill99 was using for "longer term" is fine with me!
As I said, it's a fascinating result if listeners can consistently display the same preference (effectively, identifying the same device) via long-term listening, while being unable to consciously detect any difference in actual sound in any short snippet. I haven't read documentary evidence for this (which isn't to say it doesn't exist).
As I said, it's a fascinating result if listeners can consistently display the same preference (effectively, identifying the same device) via long-term listening
Comments heard all along the test so far showed that preferences were also different, in a totally random way.
And I sure don't think any "long-term" listening would change that, whatsoever.
Earlier in the thread you said this, which caught my eye:
Do you have evidence of that specific scenario? I.e. (presumably blind) long-term listening tests with (presumably) repeatable preference expressed, but where there is no consciously detectable difference? It'd be a very interesting result.
That's a fair question, but for now let's say that whatever definition mmerrill99 was using for "longer term" is fine with me!
As I said, it's a fascinating result if listeners can consistently display the same preference (effectively, identifying the same device) via long-term listening, while being unable to consciously detect any difference in actual sound in any short snippet. I haven't read documentary evidence for this (which isn't to say it doesn't exist).
So, first let me ask you this - if I show you blind ABX tests which have failed to identify unquestionably audible & measured differences between devices but these devices haven't undergone blind preference testing because it would be ridiculously obvious - is this going to satisfy your request?
Everything, and I mean everything, points so far to a totally random selection among all participants. And I'm starting to collect comments now.
Comments that will include "I don't know" and "I'm really not sure, but..."
One observation shows consistency on something: the level of confidence (for a positive identification) drops very quickly. Audiophile or non-audiophile.
The only attackable angle of this test, at the moment, is the small number of participants.
Which will be addressed.
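For what it's worth, the "small number of participants" objection has a concrete statistical face. A rough sketch of a power calculation, assuming a simple binomial model, an illustrative 70% true hit rate, and the usual 5% significance level (none of these figures come from the thread):

```python
from scipy.stats import binom

ALPHA = 0.05         # one-sided significance level vs. guessing (p = 0.5)
P_TRUE = 0.70        # assumed true hit rate of a listener who really hears it
TARGET_POWER = 0.80  # desired chance the test detects such a listener

def power_at(n: int) -> float:
    """Power of a one-sided exact binomial test with n trials."""
    # Smallest number of hits k that is significant: P(X >= k | p=0.5) <= ALPHA.
    k_crit = next((k for k in range(n + 1)
                   if binom.sf(k - 1, n, 0.5) <= ALPHA), None)
    if k_crit is None:  # n too small for any outcome to reach significance
        return 0.0
    # Probability a P_TRUE listener reaches that threshold.
    return binom.sf(k_crit - 1, n, P_TRUE)

n = 1
while power_at(n) < TARGET_POWER:
    n += 1
print(f"about {n} trials needed for {TARGET_POWER:.0%} power at alpha = {ALPHA}")
```

The point is only that with too few trials, even a listener who genuinely hears a difference will often fail to reach statistical significance.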
So, first let me ask you this - if I show you blind ABX tests which have failed to identify unquestionably audible & measured differences between devices but these devices haven't undergone blind preference testing because it would be ridiculously obvious - is this going to satisfy your request?

A question with three(?) negatives?
Comments heard all along the test so far showed that preferences were also different, in a totally random way.
And I sure don't think any "long-term" listening would change that, whatsoever.
You are mixing up comments made during ABX blind testing (this was allowed by the participant group, was it?) with a blind preference test, where there is no need to try to identify whether X is A or B - two very different tests, as you would know if you were interested in understanding what you are trying to do.
Identification prior to preference...
Everything points toward a totally random selection; it's at 51% at this moment.
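To put a figure like 51% in context: whether it is distinguishable from guessing depends entirely on how many trials sit behind it. A quick check with scipy, where the 100-trial total is purely illustrative since the thread doesn't state the actual count:

```python
from scipy.stats import binomtest

# Illustrative numbers only: suppose "51%" means 51 correct out of 100 trials.
result = binomtest(k=51, n=100, p=0.5, alternative="greater")
print(f"p-value = {result.pvalue:.3f}")
# A p-value far above 0.05 means a 51% hit rate at this sample size is
# statistically indistinguishable from coin-flipping.
```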
Comments heard all along the test so far showed that preferences were also different, in a totally random way.
Did you do something like record all the comments, assign numerical scores according to some predefined metric, then perform statistical analysis on the results? If so, we would love to see all the details.
Or, perhaps you mean something more like you were left with some impression from the comments you heard that there was no consistent pattern that occurred to you?
It's just that when somebody says that something was "shown" to be "random," it can mean different things to different people. To scientists it has a very particular meaning; to casual humans wanting to employ scientific-sounding language, it would be expected to have a very different meaning.
If a scientific claim, it should be defensible from notes, collected data, and other records.
If a casual claim, it would be helpful if a little qualifying language were used, maybe something such as "It seemed to me...", or something to that effect - whatever would be accurate, fair, and reasonably avoids being misleading (no need to go way overboard with disclaimers).
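As a toy sketch of the workflow described above (record comments, score them against a predefined metric, then analyze), here is one possibility; the scores and the +1/0/-1 rubric are invented for illustration and are not data from Jon's test:

```python
from scipy.stats import binomtest

# Invented transcript scores, not data from this test. Predefined metric:
# +1 = comment expresses preference for A, -1 = for B, 0 = no preference.
scores = [+1, 0, -1, +1, 0, 0, -1, -1, +1, 0]

prefers_a = sum(s > 0 for s in scores)
prefers_b = sum(s < 0 for s in scores)

# Sign test: among comments that express a preference at all, is the
# A/B split consistent with a 50/50 coin flip?
result = binomtest(k=prefers_a, n=prefers_a + prefers_b, p=0.5)
print(f"A: {prefers_a}, B: {prefers_b}, p-value = {result.pvalue:.3f}")
```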
EDIT: If going scientific on this, it would help to have some info on average participant demographics. If only a few people, group demographics might not say much about your particular cohort, so it might be extremely useful to ABX test them with a few of the test files available in the forum, or get some new ones. Right now they are totally uncalibrated to us, so having a better idea of what they *can* hear might provide some helpful context. Especially if using the same reproduction system that would be used for other tests.
EDIT 2: If you want to make your own test files, a freeware utility is here: Freeware
So, first let me ask you this - if I show you blind ABX tests which have failed to identify unquestionably audible & measured differences between devices but these devices haven't undergone blind preference testing because it would be ridiculously obvious - is this going to satisfy your request?

Insofar as I understand the question, "no". 🙂
You made an empirical statement about human audio perception / physiology - that over the long term we experience (presumably repeatable?) preferences where we are in the short term consciously unable to detect any difference.
When you also said that "all of what [you] say is verifiable in the perceptual testing research" I was hoping you could point to it. As I said, it'd be a fascinating result.
Markw4, of course I'm collecting the data.
Once the test concludes, I will draw conclusions and expand on the details. I will provide enough information so the whole test can be repeated by someone else. That will be shown in post #1.
Did you do something like record all the comments, assign numerical scores according to some predefined metric
No, I didn't assign any numerical scores related to the comments. The main goal of our test is plain A/B identification. That being said, it might be interesting for someone else to conduct such a test...
Jon, thanks for the reply. I did edit my post on that with some thoughts about calibrating your listening group. If you think it would be interesting to try, it would certainly be interesting to hear what you come up with.
kinsei said: when testing a dac, you simply want ALL other components to be as ideal as possible, otherwise you lose the ability to reveal the difference

If even cheap DACs are so near-perfect that only near-ideal (i.e. expensive) other components can show this, then they are sufficiently near-perfect that one might expect many reasonable tests to show indistinguishability.
Earl Grey has asked an interesting question. We await mmerrill99's reply with interest.
Earl Grey refuses to accept perfectly acceptable evidence which answers his question, so it's not worth my effort presenting such evidence.
It's out there if anybody wants to find the many examples where audibly obvious & measurable differences exist but blind ABX testing fails to identify the differences - is there really a need to state this?
I will accept it; I hereby ask the same question he did.
You will accept 'it' being what?