The double blind auditions thread

How is the switching usually done? Is it by means of a relay? If so, is it a special type of relay designed for audio, or does a regular relay suffice?

I believe there is no problem for an audio amp to suddenly have its output switched from 8 ohms (or so) to no load ("infinite" impedance).
Are there problems with that?

What other details are important for the power switcher?

Relays have been used. Manual cable swapping has been used. Heavy duty rotary switches have been used.
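The switching logic itself is easy to automate these days. Here is a minimal sketch, assuming a Raspberry Pi driving a DPDT relay through the gpiozero library; the GPIO pin number and the wiring are hypothetical, and a real comparator would also need careful level matching and click-free switching.

import random
from gpiozero import OutputDevice  # assumes a Raspberry Pi with gpiozero installed

relay = OutputDevice(17)  # hypothetical GPIO pin driving the relay coil

def present_x():
    """Randomly route X to source A (relay off) or source B (relay on)."""
    x_is_b = random.random() < 0.5
    relay.value = x_is_b  # energize the coil to select source B
    return "B" if x_is_b else "A"  # record this hidden assignment for later scoring

The relay (or rotary switch, or cable swap) only hides which source is playing; it is the randomization and the hidden log of assignments that make the trial blind.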
 
Hi,

So... with all the hubbub as to the invalidity of most (if not all) objective efforts to characterize reality from illusory perceptions, proffered by a vocal minority here, one wonders what we are to make of the subjective evidence presented by many non-believers as to the superiority of their claims? Seems all the objections to the ABX/DBT protocols and multivariate analysis schema are exponentially confounded in subjective claims of superiority, based on pronouncements made without evidence other than personal observational skills... how about putting some numbers and such up, eh?

John L.

You may wish to occasionally make use of the return key.

Bad science is bad science.

And in this debate one side makes claims that are not supported by credible evidence, backed only by idle and empty (and, may I add, unnecessary) assertions that are bad science, while the other counters those claims with equally bad science, carefully cultivating a false appearance of being scientific.

In fact one side is as bad as the other, and jointly they have been a major force against rational evaluation and research in audio.

My objection to ABX as promoted by the ABX Mafia is purely statistical. They use very poor statistics. Poor science. Repeatedly criticised, but never corrected. But that is to be expected from those who have little use for truth.

I referenced several JAES (hence peer-reviewed) articles dealing with this. In addition there are further major issues around experimental design for the specific tests promoted by the ABX Mafia.

Of course, that is just me, and my views do get in the way of a jolly good scrap.

Ciao T
 
First of all:

I'm sorry, I think this is profoundly wrong. Replicability is true for a limited number of things, but there are many things for which it's not.

This was your response after my post (excerpt) in the blowtorch thread:

"And normal scientific practice would require sucessful replication of the experiment by other groups, which is of course a bit more difficult compared to other fields as we need a human detector."

So, please answer my question: do you mean "replication" or "repetition"?


Citing a paper and implementing multiple-comparison corrections are two different things. They didn't do the latter. Perhaps HF content can work for the brain waves of dead fish?

Let me correct it a bit: the occurrence of multiple comparisons is the problem; the cure is to incorporate a _correction_ for the multiple comparison problem.
On page 3552 the authors mention the countermeasures they took against the multiple comparison problem:

"To account for multiple non-independent comparisons,
the significance of the activation in each brain region detected was
estimated by the use of distributional approximations from the theory
of Gaussian fields in terms of spatial extent and/or peak height
(Friston et al. 1994)."

For multiple independent comparisons they could simply have used the Bonferroni correction.
From where did you get the impression that they did not use a correction at all?
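For readers who have not met it: the Bonferroni correction simply tests each of the m comparisons at level alpha/m instead of alpha. A minimal sketch in Python, with made-up p-values purely for illustration:

def bonferroni(pvals, alpha=0.05):
    """Reject H0 for a comparison only if p <= alpha / m."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

# With 10 comparisons, a raw p of 0.03 no longer counts as significant:
print(bonferroni([0.001, 0.03, 0.2] + [0.5] * 7))  # [True, False, False, ...]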

The "sensory" comparisons were a model of bad statistics. Take a set of results that average out to 50%, separate them into two piles with one larger than 50%, one smaller, then claim significance.

Maybe you should read Scheffé's article and some other papers on this topic to avoid this sort of statement.
And the other cheap shots don't make it better.
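Whoever is right about what the authors actually did, the procedure described above is easy to demonstrate as a fallacy. A toy simulation: listeners who respond purely at chance, split after the fact into "above 50%" and "below 50%" piles, yield pile means that deviate from 50% by construction, with no real effect anywhere.

import random
random.seed(1)

N_LISTENERS, N_TRIALS = 40, 16
# every simulated listener guesses at chance (p = 0.5)
scores = [sum(random.random() < 0.5 for _ in range(N_TRIALS)) / N_TRIALS
          for _ in range(N_LISTENERS)]

above = [s for s in scores if s > 0.5]
below = [s for s in scores if s < 0.5]
print(round(sum(scores) / len(scores), 3))  # ~0.5: no effect overall
print(round(sum(above) / len(above), 3))    # above 0.5, by construction
print(round(sum(below) / len(below), 3))    # below 0.5, by construction

Any post-hoc split like this manufactures significant-looking subgroups out of pure noise, which is exactly why it would be bad statistics.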

The results of the replications at NHK and KEF were not "inconclusive," they were null.

You should know that there were several other papers dealing with this; the results of two papers from different authors were (for example):
-) HF should never be reproduced through an additional tweeter
-) HF should always be reproduced through an additional tweeter

That's why I wrote "inconclusive".

The guys at KEF (btw, we do not really know a lot about this one, do we?) and NHK performed really different experiments, and therefore I am not that surprised by diverging results.
Especially because, according to Oohashi, the sample length could have an important impact.

That's why I wrote "inconclusive".

If nobody tries to repeat Oohashi's experiment, this will remain in the dust.
Thorsten_L. is absolutely right; Oohashi presented extended follow-up research which confirmed the earlier results and gave some hints at possible explanations for them.
 
A couple of comments:
I saw this in one of this thread's references -
"Yet another interpretation of the first story is that the anxiety produced by listening to the unknown decreases the sensitivity of the listeners. That anxiety can raise sensory thresholds is well-proven."
I wouldn't dismiss the article outright, but I have to wonder why they wrote that. "The unknown"? We're not talking poltergeists bumping around in the attic here. It's some headphones. This seems like they've designed a milquetoast detector, not a DBT. I mean, come on.
So later on I spy this article on subconscious, involuntary thought. Something brought up in discussions here. Again, I wouldn't cast it all aside, but the article states this:
"People will respond to questions more conservatively if they're standing near a hand sanitizer, which signals a possible threat."
Maybe I'm missing something, and I'm not trying to troll. But this sentence is total nonsense to me. Is mysophobia that common? Is this how you gild a turd? Should I feel threatened by hand sanitizer? This unknown anxiety is killin' me...
 
All this talk about anxiety affecting results, training the listeners being required, etc would make more sense if the common response were "I can't hear any difference; they both sound the same."
However, isn't it usually the case in ABX tests that the listeners think they hear a difference, believe that they hear differences and can identify known components, BUT that the data shows they cannot tell the difference at all?
 
However, isn't it usually the case in ABX tests that the listeners think they hear a difference, believe that they hear differences and can identify known components, BUT that the data shows they cannot tell the difference at all?

Usually? No. Listeners are able to hear many differences in those tests- when the differences exist and are audible. When the differences do not exist or are not audible, they don't. See my earlier bibliography. What Stereophile (who has a strong commercial interest in avoiding controlled tests, ABX or no) never explains is why this "anxiety" doesn't seem to inhibit listeners from recognizing subtle changes like level, frequency response, phase, distortion, clipping, noise, polar pattern, dynamic range, data compression...
 
When the differences do not exist or are not audible, they don't.

That's the point I was trying to deal with. In an ABX test (with an ABX switch) is there an option for 'I don't know'? I thought that the subject had to state whether X was A or B.

When you say "they don't", do you mean 'the data analysis proves that their ability to identify the source cannot be distinguished from chance', or 'the subjects stated that the sources sounded the same'?
 
When you say "they don't", do you mean 'the data analysis proves that their ability to identify the source cannot be distinguished from chance', or 'the subjects stated that the sources sounded the same'?

Could be either. It's usually the former (I would reword that to say "demonstrates" rather than "proves") because if a person can't tell A from B sighted, he will not suddenly gain an ability to hear a difference when listening blind.:D
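For the record, "cannot be distinguished from chance" normally means a one-sided binomial test on the forced choices: the probability of getting at least k correct answers out of n trials under pure guessing (p = 0.5). A minimal sketch of the standard formula, not tied to any particular test's data:

from math import comb

def abx_p_value(k, n):
    """P(at least k correct in n forced-choice trials) under pure guessing."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

print(abx_p_value(12, 16))  # ~0.038: conventionally 'significant'
print(abx_p_value(9, 16))   # ~0.40: indistinguishable from guessing

Note that because each trial forces a choice of A or B (a classic ABX box has no "I don't know" button), guessing gives exactly p = 0.5 per trial.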
 
Listeners are able to hear many differences in those tests- when the differences exist and are audible. When the differences do not exist or are not audible, they don't.

To avoid reasoning in circles, the criteria for 'exist' and 'audible' must come from instrumented measurements?

i.e. 'We can see the differences in the measurements and the listeners can hear the differences'.


But, isn't the whole area of controversy where the hypothesis is: "the listeners can hear things which don't show in the measurements" ?
aka "My golden ears are better than your lab instruments".......
 
See Benjamini and Hochberg 1995, J. Roy. Stat. Soc. B 57, 289, to see how the analysis should have been handled.

I'm sorry, but your argument was that the authors didn't use any method to correct for the multiple comparison problem, which was obviously a false claim according to the quote from their paper.

Are you now arguing that they should have used another correction method?

I'm quite familiar with Scheffe. Their data handling is not correct.

That is your assertion, but it is not backed up with any argument.
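Since the Benjamini-Hochberg paper keeps coming up: their step-up procedure controls the false discovery rate rather than the family-wise error rate, and is less conservative than Bonferroni. A minimal sketch, illustrative only and not a re-analysis of anyone's data:

def benjamini_hochberg(pvals, q=0.05):
    """BH step-up: reject the k smallest p-values, where k is the largest
    rank with p_(rank) <= (rank / m) * q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        reject[i] = rank <= k
    return reject

Whether this, Bonferroni, or the Gaussian-field correction quoted earlier is the right tool for the Oohashi data is of course exactly what is being disputed here.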
 
But, isn't the whole area of controversy where the hypothesis is: "the listeners can hear things which don't show in the measurements" ?
aka "My golden ears are better than your lab instruments".......

No, the whole debate started roughly 40 years ago, because people claimed to hear differences although every measured number was below the known thresholds of audibility.

That is quite a big difference, mainly due to two facts: first, whenever a serious effort is made to research a certain hearing mechanism again, chances are high of getting new (and mostly lower) numbers for the threshold.
Second, it is widely accepted today that the usual measurements do not reflect human hearing abilities in a sufficient way.

Therefore we still rely on subjective evaluation of sound quality.

P.S. It is normally just a matter of effort to measure a difference between two DUTs.
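As an illustration of "a matter of effort": given two sample-aligned captures of the same program through two DUTs, the residual after optimal level matching quantifies everything that differs between them, linear or not. A toy numpy sketch, assuming mono float arrays that are already time-aligned (the alignment is the hard part in practice):

import numpy as np

def residual_db(ref, dut):
    """Residual energy after least-squares gain matching, in dB re the reference."""
    g = np.dot(ref, dut) / np.dot(dut, dut)  # optimal scalar gain
    resid = ref - g * dut
    return 10 * np.log10(np.sum(resid ** 2) / np.sum(ref ** 2))

Whether a residual of, say, -100 dB settles the audibility question is, of course, the whole argument of this thread.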
 
Second, it is widely accepted today that the usual measurements do not reflect human hearing abilities in a sufficient way.

Therefore we still rely on subjective evaluation of sound quality.

'widely accepted'..?
'we'...?

This just sounds like a re-statement of "Golden ears better than instruments".....

It is peculiar, if you look at the biology/neurology involved. Human vision uses a lot more brain than hearing, but you never see claims that we can see things with our eyes that cannot be detected with instruments...far from it.
But with hearing- where the biology predicts that our unaided senses would be even worse, the opposite argument is routinely made.
How many tests of hifi topics require an audiologist report on the subject's hearing before the tests begin?
How many silver-maned audio salesmen post their latest hearing test on the wall of the showroom?
 
All this talk about anxiety affecting results, training the listeners being required, etc would make more sense if the common response were "I can't hear any difference; they both sound the same."
However, isn't it usually the case in ABX tests that the listeners think they hear a difference, believe that they hear differences and can identify known components, BUT that the data shows they cannot tell the difference at all?

All this talk about test conditions etc. would indeed be completely unnecessary if we tested listeners without setting up any artificial conditions.

The people who deserve the credit for adopting the idea of controlled testing unfortunately forgot to get experts in experimental design, statistics, and psychology onto their team. That, I think, is the main reason audio tests so often have so many flaws, although the underlying mechanisms have been well known for a long time in other fields.

An additional factor might have been (as Thorsten_L. would most probably argue) that most of those people were already strongly biased, for various reasons, against audible differences.

See for example SY's answers to your question above and compare them to his arguing in the case of the Oohashi et al. paper.
Suddenly it was not so easy anymore, was it? ;)
 