I don't believe cables make a difference, any input?

Status
Not open for further replies.
Cancer, COPD, heart disease, epilepsy, almost all immune diseases, and antibiotics. There are more. Just think of any situation where harm could be done by giving patients a placebo instead of a known medicine.

I know what you mean, but that's not a positive control.

It's more like an escape path, involving either subjects exiting the study when negative results are obtained, or administering a "rescue medication" in parallel.
 
Nope, here's what I asked for:

And this is how I answered your question:

"Given the measured differences, I'd strongly recommend that the positive controls should be, for example, one of the cited points out of P. Frindle's catalog.
Of course it's not possible to include something like a positive control in the same run if you're using an ABX protocol.
That could raise the question of whether an ABX test is appropriate for the task, but you could do different runs.
One, for example, testing the positive control, and another one testing the real hypothesis.

Of course it complicates the routines, but the alternative (to do without a positive control) is not acceptable.

But it could be better to avoid a discrimination test like ABX and instead choose a preference test like MUSHRA or ABC/HR or another protocol."


Not that it makes more sense, but I can somehow persuade myself to agree with everything you said: yes, current protocols are crap, studies are flawed, no positive controls, Meyer was not thorough, yada... yada... yada...

If you want, I'll search for some of your earlier comments about the quality of the Meyer/Moran article. I got the impression back then that you weren't too happy with it?!

The comment that Oohashi et al. was "totally flawed" was posted by you, and I would renew my question: by which standards can that one be "totally flawed" if the Meyer/Moran article is acceptable?

The methods do exist, and a wide range of usable positive controls do exist (at least you'll find the results at the ABX website, where they detect a difference between a Philips and a Sony CD player; would you dare to place a bet on how many listeners would pass an ABX on that one if not trained?); they just have to be used by experimenters.

BTW, the requirements for a scientifically acceptable test can be found in nearly every textbook on testing; do you really recommend doing without these requirements just because you don't like the idea that audibility tests were quite often not up to par with these standards?


The only point you missed (or avoided) is about who's supposed to do the right (and quite complex) work, to your satisfaction. Why should anybody other than the manufacturers and the associated salesforce invest in such a massive study? And why is this not happening?

First, everybody who does a test has to meet the requirements, at least if he's trying to generalize the results.

Again, sitting on the fence and pointing fingers doesn't prove anything.

It might surprise you, but sitting on the fence and pointing fingers is a fundamental part of scientific methodology; normally, however, it is called (peer) review or analysis. 😀

And sometimes it proves that a test wasn't/isn't valid.

Wishes
 
And of course it takes just one person to establish audibility.

se

That depends on the factual number of participants in a listening test.

AFAIR, in the paper rdf was referring to, the number of participants was quite high (around 150), but they opted for a higher count of trials in the ABX (20 instead of the usual 16).
They found 4 listeners with 16 or more correct answers, and given the low probability of guessing such results, this was well above the expectancy value.
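As a sanity check on those numbers, here is a minimal sketch of the binomial arithmetic, assuming 20 ABX trials at chance p = 0.5 and roughly 150 listeners, as stated above:

```python
from math import comb

n_trials = 20        # ABX trials per listener
n_listeners = 150    # approximate panel size mentioned above

# Probability that a purely guessing listener scores 16 or more out of 20
p_16_plus = sum(comb(n_trials, k) for k in range(16, n_trials + 1)) / 2**n_trials
print(f"P(>=16/20 by guessing) = {p_16_plus:.4f}")                      # about 0.0059

# Expected number of such scores among 150 purely guessing listeners
print(f"expected among {n_listeners} = {n_listeners * p_16_plus:.2f}")  # about 0.89
```

So under pure guessing, fewer than one listener in 150 would be expected to clear the 16/20 bar, which is why four such listeners sits above the expectancy value.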

Wishes
 
It might surprise you, but sitting on the fence and pointing fingers is a fundamental part of scientific methodology; normally, however, it is called (peer) review or analysis. 😀

Yep, how convenient :rofl:

You would certainly know that one will never be appointed as a peer reviewer without a consistent and accepted (by peer review) contribution in the field.

OTOH, you are still avoiding questions and running into circular logic arguments.
 
I know what you mean, but that's not a positive control.

It's more like an escape path, involving either subjects exiting the study when negative results are obtained, or administering a "rescue medication" in parallel.

A known effective medicine is given to one group. This is called the control group because the effectiveness of that drug is known. Another group gets the study drug.
 
They found 4 listeners with 16 or more correct answers, and given the low probability of guessing such results, this was well above the expectancy value.

In a body of 150 subjects at least four outliers would be expected at random. Doesn't confirming they are random outliers require either targeted testing of those subjects or repeating the entire study multiple times with new subjects?
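One way to put a number on that question, under the figures assumed above (150 subjects, 20 trials, a 16-of-20 threshold): how often would at least four listeners clear the bar by pure guessing? A hedged sketch:

```python
from math import comb

def binom_pmf(n: int, k: int, p: float) -> float:
    """Probability of exactly k successes in n Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Chance that one guessing listener scores >= 16 of 20 ABX trials
p_hit = sum(binom_pmf(20, k, 0.5) for k in range(16, 21))      # ~0.0059

# Chance that 4 or more of 150 guessing listeners do so
p_four_plus = 1 - sum(binom_pmf(150, k, p_hit) for k in range(4))
print(f"P(>=4 of 150 by guessing) = {p_four_plus:.3f}")        # roughly 0.012
```

On this arithmetic, four high scorers is unusual but not impossible under guessing alone, so the follow-up suggested above (retesting those individuals, or repeating the study) is the natural way to settle it.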
 
Of course, with many tubed components cables may influence the sound, as much tube stuff has a rather high Zout, which is sensitive to capacitive and other loading effects. So if you want to demo cable differences, select your equipment with care.😉

It is not that simple. A hiZ amp will ignore RCL effects that a lowZ amp doesn't, and vice versa. The best treatise on the subject is at FirstWatt, and here.

In recent times, amplifiers have been lowZ, but neither is right or wrong (although you will find many closed-minded enough to think that hiZ is a flaw). An amplifier should always be considered as part of a system with the speaker & cable in between (and the room they are all in). In the right system a hiZ amp can have significant advantages.
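The interaction dave describes can be illustrated with a toy voltage-divider calculation. The impedance values below are hypothetical, chosen only to show the mechanism: the amplifier's output impedance and the speaker's frequency-dependent load form a divider, so a high Zout turns impedance swings into frequency-response swings.

```python
import math

def level_db(z_load: float, z_out: float) -> float:
    """Terminal voltage relative to an ideal zero-Zout source, in dB."""
    return 20 * math.log10(z_load / (z_load + z_out))

# Hypothetical speaker impedance swing: 4-ohm minimum, 30-ohm resonance peak
for z_out in (0.1, 2.0):  # ohms: low-Z solid state vs. a high-Z (e.g. tube) output
    swing = level_db(30, z_out) - level_db(4, z_out)
    print(f"Zout = {z_out} ohm -> response variation ~ {swing:.2f} dB")
```

With a 0.1-ohm source the variation stays under 0.2 dB; with 2 ohms it approaches 3 dB, which is why cable and speaker loading matter far more in the high-Zout case.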

dave
 
In a body of 150 subjects at least four outliers would be expected at random. Doesn't confirming they are random outliers require either targeted testing of those subjects or repeating the entire study multiple times with new subjects?

You are totally missing the semantics of "statistics".

In an isolated enclosure containing a gas, the gas molecules have an average speed Vo. This speed can easily be calculated knowing the temperature, etc., and using the laws of thermodynamics.

Is it relevant to know that exactly 236 molecules have a speed of 10*Vo while exactly 315 have a speed of Vo/10?
 
The 'semantics' of statistics? You have heard of the Bell Curve, right? If the horizontal axis is labeled 'Number of Correct Answers' and the measured parameter is truly random, you do understand the implications of the Bell Curve? Right?

Yes. The speeds of the gas molecules have a known distribution as well. So what?

BTW, you have to prove a Gaussian distribution; it's not obvious, and there are statistical tests to do that.
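For the specific axis proposed above ('Number of Correct Answers'), the null distribution is actually binomial, which the bell curve only approximates. A small sketch, assuming 20 trials at chance p = 0.5:

```python
from math import comb, exp, pi, sqrt

n, p = 20, 0.5                              # trials, guessing probability
mu, sigma = n * p, sqrt(n * p * (1 - p))    # mean 10, sd ~2.24

for k in (10, 13, 16):
    binom = comb(n, k) * p**k * (1 - p) ** (n - k)
    gauss = exp(-((k - mu) ** 2) / (2 * sigma**2)) / (sigma * sqrt(2 * pi))
    print(f"k={k:2d}  binomial={binom:.4f}  normal approx={gauss:.4f}")
```

The two agree near the mean but drift apart in the tail, which is exactly where ABX significance decisions are made; that is one reason a Gaussian assumption has to be checked rather than taken for granted.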
 
I'm not sure what point you're making, but it appears to hinge on the belief that polling/testing 150 samples provides a sufficient degree of certainty to describe an overall population approaching 7 billion.

You may be surprised how small a sample has to be to get statistically significant results, with a certain degree of probability. The key is "a representative sample", which pretty much means randomizing the poll/test subjects enough. Think election exit polls.

There's an entire industry around these concepts, Ipsos Reid and Nielsen are a couple of quick examples.

If you are expecting 100% certainty from such a statistical test, then that's not going to happen. Otherwise, you tell us, and prove, how much is enough to conclude that a hypothesis is statistically acceptable. Hint: there are tables giving error probabilities as a function of sample size and total population.
 
At the end of the day, those having a vested interest in promoting cable differences (and I mean cable manufacturers and the associated salesforce) should pay for such testing, as much as the drug manufacturers do for testing their products.

Indeed.

Three thoughts:

Results aren’t dramatic

It seems reasonable to me that the results of listening tests need to be examined in the context of the claims being made. What strikes me is that the claims being made by cable manufacturers and the retail chain are not claims such as “well there might be a little bit of a difference but sometimes it's hard to perceive it and this cable might make improvements to your system but it may not”. The claims being made are that there is an easily audible and dramatic improvement if you use cable A as opposed to cable B. The subjective reviewers also don't hedge, but write their reviews as if there is a clear audible difference between cables, that anyone should hear if they put their mind to it.

Therefore we should expect clear audible differences from listening tests, commensurate with the claims being made.

Even if there are some technical problems with how the double-blind tests are structured (and I am not sure that is true), the results from such tests should easily show clear differences between cables. For instance, almost all of the people on a listening panel should find themselves immediately and without effort able to distinguish cable A from cable B if the manufacturer's claims are correct. This never happens.

It seems almost inarguable that the differences, if they exist, must be so slight and so minor that any claims that cables can make a consistent and worthwhile difference to audio reproduction must be false.

Remember physics and engineering

I think there are two components to analysing the claims being made about cables. First, whether there is an audible difference between cables - that is something that we have been discussing concerning double-blind tests.

Second is the insight we can gain from physics and engineering. And it seems to me that if we draw on what we know about physics and engineering from the last 200+ years, there is nothing inherent in similarly constructed audio cables that would give rise to dramatic differences in their audio performance.

If you can't provide a rational, clear, soundly based explanation, from what we know about physics and engineering, of why cable A should sound clearly different from cable B, then you've got a problem at the intellectual and scientific level; you also have a good explanation of why you are not finding audible differences.

Why care?

There are those who look at the heat generated by the debate about cables and ask, perhaps with some justification, “Why is there so much rancour about something as trivial as bits of metal connecting two pieces of equipment?”

The answer is a combination of two things. The technology of cables is inherently simple. But the claims being made and the prices being charged are completely at odds with what we know about the underlying technology.

It seems clear that there is a consumer fraud going on - people are simply being ripped off. And some of us don't like the smell of snake oil. I suppose others would argue that in a marketplace it is "buyer beware". That seems to me a quite inappropriate response when you see people who don't have much experience in an area being ripped off and paying more money than they need to for decent-sounding audio.

And the rip-off is not just by the manufacturers but by the whole chain of players, from the manufacturers, through the subjective magazines and their reviewers, to the retail sector.

Why should people stand by and watch a fraud being perpetrated? Why should crooks prosper?
 
In recent times, amplifiers have been lowZ, but neither is right or wrong (although you will find many closed-minded enough to think that hiZ is a flaw). An amplifier should always be considered as part of a system with the speaker & cable in between (and the room they are all in). In the right system a hiZ amp can have significant advantages.


Those Kleinhorns are a good example of the workaround required to get an adequate full-range response from a driver that can mate with a high-Zout amplifier.

I guess one must be "closed minded" if they see flaws in that design approach. 🙄
 
If an individual scores above chance in one legitimate test, I would like to keep retesting that person until it is obvious that he/she is actually detecting a real difference that others can't. I would be very interested in what a really gifted listener can hear. To offset the hassle of testing, an entry in the Guinness Book of Records might help.
 