Do speaker cables make any difference?

Status
Not open for further replies.
serengetiplains said:
Dumbass, as to "audibility" I do mean conscious noticing by the person listening. I assume the range of what is noticed by different people varies according to their experience, much as in so many spheres of life (wine tasting is a good analogy).
It is a good analogy. California wines were widely considered inferior to French wines until they started winning blind tastings.
 
soongsc said:
Looking at this pic posted, it makes me wonder what changes would need to be made to ABX testing to detect the other aspects that make cables "audibly distinguishable"?
Interesting logic: ABX testing fails to find audible differences between speaker cables, therefore the testing regime must be faulty.
 
Spend an afternoon cleaning house and rewiring the workstation and miss a very active thread! Quick off topic:


SY said:

What should I listen for?

Ancient history now, but since frequency response and tracking ability have been removed from the equation, the last frontier is cartridge compliance / tonearm-plus-cartridge mass resonance. Deep organ pedals and high solo vocals present the best opportunity to generate modulation effects. If I recall correctly you use Acoustats; full-range drivers might not be the best tool because the drivers themselves potentially contribute to the effect. Your best shot is probably a 3-way with a wide-band mid. I would listen for the character of the voice altering during heavy low pedal stomps, and for differences between cartridges. Some cartridge/arm combos had wildly different resonance frequencies and Qs.

Regarding ABX, I'll toss out an objection I've raised over the years without ever getting a satisfactory reply. I honestly don't know if it's a valid objection; the reasoning behind the protocol could be buried in unfamiliar math. If anyone knows, an explanation would be greatly appreciated.
I understand the ABX procedure to be defined as per this excerpt from the 'Secrets of Home Theater and High Fidelity' website (http://www.hometheaterhifi.com/volume_11_4/feature-article-blind-test-power-cords-12-2004.html) discussing a cable ABX test performed at a meeting of the Bay Area Audiophile Society:


"You listen to product "A", then to product "B", then you are presented with "X", which is either "A" or "B" and you indicate which product you think it is."

The objection is simple. What is the object of the test? To determine if a device causes an audible difference. What is the ABX test asking participants to do? Compare two DUTs and identify one against a mystery 'X'. Why this structure? We're hunting for a delta here; identifying the DUT is irrelevant. What is added by this extra load over and above 'did you hear a change'? To me it makes far more sense, and is far less stressful, to randomly and blindly switch a DUT in and out and ask the participant if a difference is heard. This also has the benefit of not adding similar devices in line with the DUT and assuming they won't have an impact. Seems air-tight to me.
 
Tom, the point I've repeated endlessly (it seems to me in my grumpy mood) is that one cannot prove a negative. That is not peculiar to ABX or any other type of testing, it's a logical statement. So don't bother asking if ABX can prove a negative or that sighted testing can or testing via scientological psychic emanations or graviton detectors or any other test. No experiment can prove a negative.
 
But I can prove weight A is not equal to weight B by placing them on scales.

I can prove there is not a significant audible difference between cables by doing a double blind test and checking the statistical significance of the findings

This is not rocket science
 
...and for Popper & Kuhn's sake - sit down and enjoy the music.

This is straightforward hypothesis testing. If your hypothesis H1 is that Cable B sounds different from Cable A, we would write the null, H0: there is no audible difference between Cable A and Cable B, on average.

Science can never accept the null hypothesis. We would

'reject H0 in favor of H1' or 'fail to reject H0'.

We never conclude 'reject H1', or even 'accept H1' - because every test is subject to error - there is never absolute proof - only evidence.

If we 'fail to reject H0', this does not necessarily mean that the null hypothesis is true, it only suggests that there is not sufficient evidence in this test, under these conditions, with this sample to favor H1 over H0.

Rejecting the null hypothesis, then, means that scientifically collected and evaluated evidence suggests that H1 is likely true: that Cable B may sound different from Cable A. Whether this finding generalizes to conditions other than the experimental one is an entirely different question.
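To make the 'reject / fail to reject' machinery concrete: a forced-choice ABX run is usually scored against the null of pure guessing with a one-sided binomial test. A minimal sketch (the trial counts are hypothetical, not from any actual test):

```python
from math import comb

def abx_p_value(correct, trials, p_null=0.5):
    """One-sided binomial p-value: the probability of getting at least
    `correct` right answers in `trials` trials if the listener is
    purely guessing (p = 0.5 under H0)."""
    return sum(comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical run: 12 correct out of 16 ABX trials.
p = abx_p_value(12, 16)
print(f"p = {p:.4f}")  # below 0.05, so we would reject H0 in favor of H1
```

A score of 12/16 is just significant at the 5% level; 10/16, by contrast, is entirely consistent with guessing, and we would 'fail to reject H0'.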

And whether the difference is better?

Choose one you like and enjoy the music.
 
I can prove there is not a significant audible difference between cables by doing a double blind test and checking the statistical significance of the findings

No, all that does is prove that there was no significant difference in that particular test. You cannot prove the negative statement. Now, with enough tests showing no significant differences, you can provisionally accept the hypothesis, but it still cannot be asserted as a general statement- someone conceivably could run a test that gave a significant result.

Now at what point does one say, OK, enough different tests have been run with enough different setups and enough different experimenters and enough different test subjects to provisionally accept the null hypothesis? That's subjective (ouch!). In this particular case, even the credophiles cannot point to a single verified result of "different" that cannot be explained by L, C, and R. Some interesting and fascinating hypothesizing, but as yet, no experimental evidence.

That's why I'm using Home Depot orange extension cords to hook my electrostatic speakers to my tube amplifier- my irrationality has its limits.
 
rdf said:
The objection is simple. What is the object of the test? To determine if a device causes an audible difference. What is the ABX test asking participants to do? Compare two DUTs and identify one against a mystery 'X'. Why this structure? We're hunting for a delta here; identifying the DUT is irrelevant. What is added by this extra load over and above 'did you hear a change'? To me it makes far more sense, and is far less stressful, to randomly and blindly switch a DUT in and out and ask the participant if a difference is heard. This also has the benefit of not adding similar devices in line with the DUT and assuming they won't have an impact. Seems air-tight to me.
It is not, because there is no control.

It might be a valid test if, half the time at random, instead of switching units you "switch" between the same unit. Then you compare the frequency of a "difference" call between true switches and fake switches.

Incidentally, the ABX protocol allows the subject to "hunt for a delta" in exactly the manner you describe. The subject listens to the A unit, then the X, and if he hears a difference, he calls it B, otherwise A. So the experiment you describe is actually a subset of ABX.
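The fake-switch control described here can be sketched as a small Monte Carlo simulation. The detection and false-alarm probabilities below are made-up parameters for illustration, not measured data:

```python
import random

def simulate_switch_test(n_trials=200, p_true_detect=0.5,
                         p_false_alarm=0.2, seed=1):
    """Sketch of the fake-switch control: on each trial we either really
    swap the DUT or only pretend to, at random, then record whether the
    subject reports hearing a difference in each condition."""
    rng = random.Random(seed)
    calls = {"real": 0, "fake": 0}    # 'difference' reports per condition
    counts = {"real": 0, "fake": 0}   # trials per condition
    for _ in range(n_trials):
        kind = "real" if rng.random() < 0.5 else "fake"
        counts[kind] += 1
        p = p_true_detect if kind == "real" else p_false_alarm
        if rng.random() < p:
            calls[kind] += 1
    return {k: calls[k] / counts[k] for k in calls}

rates = simulate_switch_test()
print(rates)
```

A genuine audible effect shows up as a "real" report rate well above the "fake" (false-alarm) rate; if the two rates are statistically indistinguishable, the 'differences' reported are noise.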
 
SY said:
Now at what point does one say, OK, enough different tests have been run with enough different setups and enough different experimenters and enough different test subjects to provisionally accept the null hypothesis? That's subjective (ouch!).
Well, yes and no.

You can still do a power analysis (power being the probability of detecting a difference at a certain p-value in a given experiment).

Power is a function of the size of the difference. That is, it would obviously be much easier to detect a 70/30 difference than a 51/49 difference. It would take a very high sample size indeed to detect a 51/49 difference with high probability.
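As a rough illustration, the usual normal-approximation sample-size formula (one-sided test at alpha = 0.05 with 80% power; 1.645 and 0.842 are the standard z-values for those levels) shows just how many trials a 51/49 difference demands compared to a 70/30 one:

```python
from math import sqrt, ceil

def trials_needed(p_alt, p_null=0.5, z_alpha=1.645, z_beta=0.842):
    """Normal-approximation sample size for a one-sided binomial test:
    roughly how many trials are needed to detect a listener who is
    right with probability p_alt, at alpha = 0.05 with 80% power."""
    num = z_alpha * sqrt(p_null * (1 - p_null)) \
        + z_beta * sqrt(p_alt * (1 - p_alt))
    return ceil((num / (p_alt - p_null)) ** 2)

print(trials_needed(0.70))  # a few dozen trials suffice
print(trials_needed(0.51))  # tens of thousands of trials
```

The required sample size grows roughly as the inverse square of the effect size, which is why near-threshold differences are so stubbornly hard to confirm or rule out.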

The mitigating factor, of course, being that you don't really give a crap about a difference you can hear only 1 time out of 100, unless you just dropped hundreds (thousands?) of dollars on speaker cables. 😀
 
This is a case where a subjective element is all-powerful, so by its very nature we cannot prove this categorically one way or the other. The problem is reducing the question to something that can be answered 'beyond reasonable doubt'. This is what statistical analysis is for.

In the case of speaker cables, we need to justify that paying the extra money does make a difference. The question we ask cannot be anything simpler than 'does a double blind test reveal a difference?' (asking for absolute proof just muddies the issue), be it with one person or more. This should be enough to satisfy most reasonable people, giving a clear answer as to whether:

a. an individual can reproducibly hear a difference
b. a group of people can on average hear a difference

Maybe some people can hear such differences - but unless this test is done it is purely hearsay (pardon the expression).

Now, this is all very interesting, as you may ask 'if it is so simple, why have such tests not been done, or the results made available?' It is only a matter of swapping hidden cables and testing a group of people.

The answer, I suspect, is that it would cost the audio profession a lot of money by dispelling the myths and pretense that feed on the subjective nature of this matter.
 
pinkmouse said:
John, you assume that the ear can localise that accurately in the absence of external stimuli such as sight. That is the first hypothesis you should be testing. To my knowledge of the literature, that hasn't yet been proven in humans.
That is correct. It is within the test protocol. I also have not found this within the literature, nor from the researchers I asked.

I have designed (not assembled yet, but I have the materials) a setup which has two fixed speakers for the stereo source, and one moveable one. All three will be behind a visually opaque screen, so the subject is unable to determine the location of the moveable one. How the subject will indicate the perceived source location, I have not decided yet. If the signal is a cowbell, I was thinking of an amplitude-modulated laser pointer, to give the appearance of the light providing the audio signal...the subject moves the light until they believe the pulsing light is exactly where the sound is emanating from... btw, the head must be locked into position for the tests; it is not allowed to turn while the eyes are. And the rail for the mover must be curved, as the initial protocol is not designed for depth perception. Much care must be taken to NOT alter ITD or IID enough to alter the perceived image depth.

The system allows testing of:

1. Human capability to discern absolute location, using the moveable speaker only. This should provide a gaussian distribution for baselining of human localization capability. Single freq, across the band.. (this be where the martini comes in..I am curious as to the degradation of localization capability under the influence of alcohol, as compared to sober). Plus, ya gotta have fun sometime, eh?

2. Testing of the apparent two source image with respect to the moving, real source. Single freq stimulus across the audible band. The strength of this test is the fact that differential localization is far stronger than absolute localization.

Data from these will be used to determine the correct relationship between perceived angle, frequency, ITD, and IID. Perhaps also absolute SPL.

It will also fix the ITD and IID variance that provides an area of localization uncertainty...for example (not actual numbers): allow ITD to shift 3 µs and IID to shift 0.05 dB, and expect an uncertainty of 6 inches in image location for an image 15 degrees off center and ten feet away. This allows one to set the system parameters that must be controlled (ITD and IID) to fix the image uncertainty (1 or 2 sigma) to some arbitrary point in space.
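For a feel of the ITD magnitudes involved, here is a sketch using the classic Woodworth spherical-head approximation. The head radius is a textbook average and the angles are purely illustrative; the rig described above would of course measure rather than model these:

```python
from math import sin, radians

def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth spherical-head approximation of interaural time
    difference (in seconds) for a source at the given azimuth.
    head_radius is in metres, c is the speed of sound in m/s."""
    theta = radians(azimuth_deg)
    return (head_radius / c) * (theta + sin(theta))

# Illustrative: ITD and its sensitivity near 15 degrees off-center.
itd_15 = itd_woodworth(15.0)
itd_16 = itd_woodworth(16.0)
print(f"ITD at 15 deg: {itd_15 * 1e6:.1f} us")
print(f"ITD change per degree: {(itd_16 - itd_15) * 1e6:.2f} us")
```

In this simplified model a 3 µs ITD shift near 15 degrees corresponds to a fraction of a degree of azimuth, i.e. an inch-scale displacement at a ten-foot listening distance, which is at least the right order of magnitude for the illustrative numbers above.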

It is assumed that ITD and IID for a specific spatial location will be frequency dependent....that assumption significantly complicates the signal processing that would have to be applied to a spectrally rich source. If it is a heavy function, I will not have any clue as to how to accommodate it.

Cheers, John
 
keladrin said:
The answer, I suspect, is that it would cost the audio profession a lot of money by dispelling the myths and pretense that feed on the subjective nature of this matter.

If the myths are rigorously dispelled, high end vendors would suffer. If audibility is proven via scientific method, the mass market engineers will simply apply the new concepts to existing product...again, high end vendors lose.

And some wonder why the high end vendors do not go to great lengths to prove audibility?

Cheers, John
 
Dumbass said:
jneutron, I would be very interested in the results of your experiment.

When results are obtained, I will either post or publish.

Tis "unfortunate" that this is a hobby, as work and honeydo take away from the effort. Honeydo takes precedence, of course, beyond all. Building a bdrm at the moment, then a workshop, then a guest bdrm, then a master bath, then landscape. Life is never dull.

The motion control stuff will be fun, as I've some exp. in that both using a pc and metal/wood working. That progresses slowly, but at least in the positive direction. The dsp stuff will be rather difficult for me I think, and most likely will involve forum guys.

The martini aspect, of course, I have garnered a significant database of experience there...😀

My expectation is years. And, I have the patience for that, as my work enviro has goals a decade away.

Cheers, John
 
Dumbass said:
It is not, because there is no control.


The control is a short circuit. It's the equivalent of a placebo in a drug test. Drugs aren't tested by administering two samples and asking the subject to first learn to differentiate, and then guess what's in the mystery cup.


It might be a valid test if, half the time at random, instead of switching units you "switch" between the same unit. Then you compare the frequency of a "difference" call between true switches and fake switches......


Exactly what I propose, allow the subject to press a button that switches the DUT 'blindly and randomly' out of the circuit and tabulate when a difference is reported against the system's state.


So the experiment you describe is actually a subset of ABX.


Again, exactly what I'm saying but from the opposite viewpoint. ABX is an expansion of the test I propose which adds complication and difficulty (for subject and tester) yet yields no additional information of relevance.
 
rdf said:
The control is a short circuit. It's the equivalent of a placebo in a drug test. Drugs aren't tested by administering two samples and asking the subject to first learn to differentiate, and then guess what's in the mystery cup.
There are many different protocols for clinical trials of drugs, one of which is called a crossover test, where half of the sample is given the drug for a certain amount of time then crossed over to placebo for an amount of time, and the other half is given the opposite. So this is a protocol where a change between two consecutive treatments is tested. Subjects aren't asked to "guess" which is which; instead, the endpoint of relevance (mortality, reduction of acne, frequency of headaches, what have you) is measured.

In an audio test, however, the endpoint of interest is what the listener perceives.

I am still trying to grok what your exact criticism of the ABX protocol is. The protocol you proposed is a subset of ABX, but ABX gives the listener a greater opportunity to detect the difference, if one exists.

On the original ABX website, they even describe how one listener did the EXACT thing you propose, namely listening to only the A channel, then switching to X, and listening for a difference.
 
Dumbass said:
I am still trying to grok what your exact criticism of the ABX protocol is.

Due diligence. Eliminate every possible objection. Incidentally, unless I'm reading it wrong, in a crossover test the reference is still always a placebo. ABX as implemented lacks this 'null' reference.

I don't agree that ABX provides 'more opportunity'; instead it adds more variables. Instead of switching a single DUT in and out, we now have three: DUT 'A', DUT 'B', and the potential effect of the ABX switcher itself. No one has yet mounted, in my view, a coherent argument to justify adding (in some instances) dozens of solder joints, extra bits of wire, switches, and connectors in series with a test intended to reveal the effect of one piece of wire, two connectors, and two/four conductor joints. 'Oh come on' isn't good science. The added variables are also an added stress on the test subject, more things to mentally juggle which, once again, add nothing to the test. Your example demonstrates someone cleverly trying to work around the limitation, but I think the effort to simplify the protocol actually supports my contention. Most subjects wouldn't think of it.

Consider instead an integrated amp with an active buffered input and a tape loop, the latter controlled in a 'blind and random' manner. Let the test subject take it home and live with it in familiar, comfortable surroundings, for as long as they please. No group testing, no peer pressure to perform. Make the results anonymous. Let them try any combination of interconnects between devices, subject to practical engineering limitations regarding impedance and shielding which reduce differences to below any standard for audibility. The market's drowning in applicable samples. When the subject is convinced of an audible difference, the interconnect goes into the tape loop circuit and the test begins. Note that in this case the extra connectors and internal wiring of the tape loop, far from obscuring potential differences, add to them and should theoretically assist in hearing a difference. The tape loop control circuitry selects a state, either DUT in or out. The test subject is permitted to switch between the two states as long as they like and either make a determination or pass. Determinations are tabulated against the system's state and audibility assessed statistically.
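Assessing the tabulated determinations could be as simple as a two-proportion z-test on the rate of 'difference' calls with the DUT in versus out of the loop. A sketch with purely hypothetical counts:

```python
from math import sqrt, erfc

def two_proportion_z(hits_in, n_in, hits_out, n_out):
    """Two-proportion z-test: does the rate of 'I hear a difference'
    reports depend on whether the DUT is switched into the tape loop?
    Returns (z statistic, one-sided p-value)."""
    p1, p2 = hits_in / n_in, hits_out / n_out
    p_pool = (hits_in + hits_out) / (n_in + n_out)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_in + 1 / n_out))
    z = (p1 - p2) / se
    p_value = 0.5 * erfc(z / sqrt(2))  # one-sided, upper tail
    return z, p_value

# Hypothetical tabulation: 30 'difference' calls in 50 DUT-in trials
# versus 18 'difference' calls in 50 DUT-out trials.
z, p = two_proportion_z(30, 50, 18, 50)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If the 'difference' rate with the DUT in is significantly higher than the false-alarm rate with it out, the subject is hearing something; equal rates mean the reports track imagination rather than the cable.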

This topology emulates the effect of inserting an interconnect between a pre-amp and amp. It eliminates every objection I can think of regarding unfamiliar surroundings, peer pressure, time limitations, etc. Some very high end manufacturers make integrated amps suitable to this purpose. A null result is still open to the objection that the amp didn't have sufficient 'resolving power', but that's as much a valid test result as any. If a Krell, Musical Fidelity, etc. haven't the resolving power to differentiate between interconnects, well.....
 