Speaker cable myths and facts

You can minimise the 'error' in the test by making a low resistance low inductance cable, which will almost inevitably turn out to have high capacitance. Let's say you manage to get cable resistance down from 0.1R to 0.01R - then you might find the 'error' is reduced by 20dB. Another way of looking at it is that the attenuation caused by the cable has gone from -0.108dB to -0.0109dB. Does this mean it sounds better? Can we hear 0.1dB differences? I very much doubt it, as the real issue with most speakers is (electro)mechanical and 2-3dB differences are normal.
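The attenuation figures quoted above follow from treating the cable and speaker as a simple resistive divider; a quick sketch of the arithmetic, assuming an idealised purely resistive 8 ohm load (which a real speaker is not):

```python
import math

def cable_attenuation_db(r_cable, r_load=8.0):
    """Level drop (dB) when a resistive cable drives a resistive load."""
    return 20 * math.log10(r_load / (r_load + r_cable))

print(cable_attenuation_db(0.1))   # ~ -0.108 dB
print(cable_attenuation_db(0.01))  # ~ -0.0109 dB
```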

As well as optimising something which probably doesn't matter, the high capacitance introduced as a side-effect can upset the stability of some amplifiers and will demand higher current at high frequencies, which may increase distortion. A high-feedback amp may be upset by this. A low-feedback amp will have an output impedance much higher than 0.1R anyway, so the cable achieves nothing.

On balance it seems like a bad idea, even though minimising difference across the cable sounds like a laudable aim until you think about it.
 
Hi,

As well as optimising something which probably doesn't matter, the high capacitance introduced as a side-effect can upset the stability of some amplifiers and will demand higher current draw at higher frequencies which may increase distortion.

Well, normally amps should be tested into a simulated ESL load (8 Ohm // 2uF), so if a few nanofarads of cable capacitance upset such an amp, one may conclude that the amplifier is faulty by design. However, such amplifiers do indeed exist, so one needs to take some care. They usually carry appropriate warnings in the manual as well.

As for more distortion at HF, 5m of my cable has around 10nF capacitance. At 20kHz the impedance of this is around 800 Ohm; at 200kHz it is 80 Ohm. If an amplifier fears such loads, what would happen if we attached a loudspeaker to it?
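Those reactance figures can be sanity-checked with the standard formula X = 1/(2*pi*f*C); a minimal sketch:

```python
import math

def reactance_ohms(capacitance_f, freq_hz):
    """Magnitude of capacitive reactance: 1 / (2*pi*f*C)."""
    return 1.0 / (2 * math.pi * freq_hz * capacitance_f)

c = 10e-9  # ~10 nF, as for the 5 m cable described above
print(reactance_ohms(c, 20e3))   # ~796 Ohm at 20 kHz
print(reactance_ohms(c, 200e3))  # ~80 Ohm at 200 kHz
```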

I would suggest that the RLC parameters of speaker cables, as far as real-world behaviour goes, can vary over a wide range without appreciable effect on the speaker. Of course there are limits that it would be unwise to exceed, though even exceeding them does not necessarily stop a cable from working well in a system.

However, in this thread there was an insistence that the "cheaper" or "normal" cables were the better choice because they were objectively better, something that I did not particularly address at the time.

However, here we can see that cables with non-standard construction and features (a term I prefer to 'exotic') can perform better than those of standard construction and features in objective terms.

Does that mean they will reliably cause audible differences? No.

Does it mean the measurements matter? Maybe, maybe not.

At any rate, clearly there is more here than immediately meets the eye, though normally neither voodoo, the bogeyman, nor non-conventional physics needs to be roped in...

Ciao T
 
What if the test premise is fundamentally flawed?

I don't think the ESL test load of 2uF without any series impedance is realistic - any ESL step-up transformer is going to have substantial leakage L and series wire R.

Plots I've seen show resonant peaks below 40-50 kHz - above the peak they no longer look capacitive.

In fact it looks like some transformer step-up ESLs would benefit from several ohms of series R at the power amp output for 20 kHz flatness.
 
Thorsten L said:
As for more distortion at HF, 5m of my cable has around 10nF capacitance. At 20kHz the impedance of this is around 800 Ohm; at 200kHz it is 80 Ohm. If an amplifier fears such loads, what would happen if we attached a loudspeaker to it?
OK, maybe less of a problem than I suggested, but capacitance causes phase shift just where you don't need it. Also, output devices may in some cases suffer internal phase shifts. There won't be any input signals at 200kHz, but the amp/NFB doesn't know that, so the whole system still has to cope.

My main point, which I think you may agree with, is that optimising a particular 'error' might not do anything useful except enrich the cable vendor. In a few cases it might do harm too.
 
Hi,

OK, maybe less of a problem than I suggested, but capacitance causes phase shift just where you don't need it. Also, output devices may in some cases suffer internal phase shifts. There won't be any input signals at 200kHz, but the amp/NFB doesn't know that, so the whole system still has to cope.

You have heard about "build-out networks", right? If they are absent on purpose, then you may have a problem. It is usually mentioned in the manual, or if you built the amp yourself like that, you should know... :)

My main point, which I think you may agree with, is that optimising a particular 'error' might not do anything useful except enrich the cable vendor. In a few cases it might do harm too.

Well, I have no idea where the cable vendor comes into that. Optimising a cable for a specific set of RLC parameters, as well as characteristic impedance, is one way to attain certain performance parameters. It also has the effect (intended or not) of minimising the error observable in the test described.

As I happen to know the vendor and also his main designer, I know that the test was developed AFTER the cable, likely in an attempt to explain why, at least in the systems in use by the vendor, it sounded better. Given that the amplifiers are switch-mode, I may have my own and different ideas here, but that is neither here nor there.

I do, however, agree that I prefer to make such alterations to geometry and LCR parameters intentionally, because I want to attain certain results, not as a result of trial and error. That said, I have found that the results of enough trial and error can also throw up interesting stuff.

So perhaps you may agree with my main point, which is that we have more complexity than "all cables sound the same, unless different in DCR" allows for, and that we need to evaluate and understand these effects to draw conclusions and improve things, rather than trying to talk them away...

Ciao T
 
ThorstenL said:
So perhaps you may agree with my main point, which is that we have more complexity than "all cables sound the same, unless different in DCR" allows for, and that we need to evaluate and understand these effects to draw conclusions and improve things, rather than trying to talk them away...
There is more to a cable than DC resistance, but if it is a short cable then not much more. Loop area (for which inductance is a reasonable proxy) will affect interference pickup, and some amps don't like having junk delivered to their output terminals. In a very noisy environment interference could be delivered straight to the speaker - all the amp has to do is maintain a lowish output impedance. So, yes, there is more to it but I don't think we have to invent new/barmy physics (as some seem to, whether they realise it or not).
 
I'm reading this thread from the beginning, but so far I'm only on page 5.
Being curious, please pardon me for asking a question which may already have been answered.

Barring all the snake oil and focusing on the technical merits: is there any reason why 18AWG multistrand wire with a high strand count shouldn't be used as speaker wire?
I know solid-core 13AWG is a popular choice, but really, it's such a chore to bend this stuff that I'd rather go for soft, pliable cables if there's no particular reason why the solid core is better.

Oh yeah, real, actual, measurable and audible differences beat theoretical musings about why one should be better than the other in a controlled lab environment with $$$ test equipment and all personnel a minimum of 30 feet away so as not to interfere with the measurements.
 
Hi,

I am not sure I know what "build out networks" are. Would you mind briefly explaining them?

Real amplifiers with large amounts of negative feedback are inherently unstable once load capacitance exceeds a certain value (actually because it introduces additional phase shift and erodes the amplifier design's stability margin, but that may be too much information).

As a result it is common to introduce a resistor between the output of the feedback amplifier and the load. This resistor limits capacitive loading directly on the amplifier. In RF systems it often also helps match the cable's characteristic impedance.

In some low frequency (e.g. audio) applications a fairly large value resistor in series with the output may not be acceptable. In this case the resistor may be "bridged out" for low frequencies using an inductor. It may be also necessary to add additional RC circuits (Zobel) in part to avoid additional resonances.

It should also be noted that traditional build out networks rarely protect the amplifier against RFI pickup by unshielded cables etc.

A more complex build-out network may actually resemble a load-matched RF filter, which can then help with a number of issues, but this is rarely seen in consumer or high-end equipment.

Equally, a crafty designer may elect not to use a visible inductor at all, but a low-value wirewound resistor with known and predictable inductance (and if the precise resistor is not used, the amp becomes unstable - a great copy-protection device), which ensures that no microphonics or other effects can alter the inductance, which may or may not have any audible effects.
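For illustration, the "bridged-out" series element described above (a resistor in parallel with an inductor) can be sketched numerically; the R and L values below are made-up examples, not taken from any particular amplifier:

```python
import math

def buildout_impedance(freq_hz, r=10.0, l_h=2e-6):
    """Series build-out element: resistor R bridged by an inductor L (R || L).
    The inductor shorts out R at low (audio) frequencies; |Z| approaches R at RF.
    The default R and L are illustrative example values."""
    zl = 1j * 2 * math.pi * freq_hz * l_h  # inductor impedance jwL
    return (zl * r) / (zl + r)

for f in (1e3, 100e3, 10e6):
    print(f"{f:>10.0f} Hz  |Z| = {abs(buildout_impedance(f)):.4f} ohm")
```

So the network stays essentially invisible in the audio band while isolating the amplifier from capacitive loads at the frequencies where stability is at risk.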

Ciao T
 
Speaker cables are transmission lines, and we must be careful not to rely too much on the lumped LCR model.
..............
In my book "Designing Audio Power Amplifiers" I have in Chapter 18 results of impedance measurements of a typical 10-foot loudspeaker cable when the far end is unterminated, terminated in its characteristic impedance, and terminated in a short. The transmission line effects are strongly visible in the 1 - 100 MHz frequency range. Different loudspeakers often act as vastly different terminating impedances at these frequencies. Note that the characteristic impedance of a length of audio-grade cable of ZIP-like construction is typically in the neighborhood of 100-120 ohms.
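As a rough illustration of why these effects live at RF rather than at audio frequencies: with assumed per-metre L and C values (illustrative, not measured) chosen to give the ~100 ohm characteristic impedance mentioned above, a 10-foot cable only becomes a quarter-wavelength long in the tens of MHz:

```python
import math

# Assumed per-metre parameters for a zip-type cable (illustrative values only):
l_per_m = 0.5e-6   # series inductance, H/m
c_per_m = 50e-12   # shunt capacitance, F/m

z0 = math.sqrt(l_per_m / c_per_m)       # characteristic impedance, ohms
v = 1.0 / math.sqrt(l_per_m * c_per_m)  # propagation velocity, m/s

length_m = 3.05                 # 10 feet
f_quarter = v / (4 * length_m)  # cable is a quarter-wave at this frequency

print(z0)         # 100.0 ohms
print(f_quarter)  # ~16 MHz: line effects appear at RF, far above audio
```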
...............
Cheers,
Bob

At radio frequencies sure, but at audio frequencies both Jim Brown and Dynaudio might disagree.

http://www.audiosystemsgroup.com/TransLines-LowFreq.pdf

http://www.dynaudio.com/eng/pdf/DYN_...Grundl_INT.pdf
 
Hi,

Well, I will take 30 ft .........:)

Sorry, none left except what is in my personal system, and that is not for sale.

Some dealers do have cut lengths available; I have no specific examples.

Look for SCSI III cable, FEP/PTFE insulation, silver-plated 30awg solid-core copper conductors. There are at least two factories; one makes it with a red stripe, the other blue.

In a pinch, 3M #3749-80 will also do; this should be readily available.

This uses TPE insulation and tin-plated conductors. Some may feel these are inferior to the spec I use; in my own tests I quite liked the 3M cable, but I never compared them side by side. Certainly the grey TPE looks way less cool than the clear Teflon over silver...

Ciao T
 
Hi Thorsten,

I don't want to get into the pros and cons of speaker cable construction; that could be for another thread if you like. What I'm interested in is whether the test method is valid. This is to help answer the original question.

If it is indeed valid (I think it is), then I think it proves that there is a measurable reason why different cables could sound different. It's capable of further development; for instance, it's fascinating to hear the "error" when the test signal is actual music.

Do you think it's valid?
 
Hi,

If it is indeed valid

It is valid in so far, as it allows us to measure the actual cable's contribution, which we can also do in other ways.

for instance, it's fascinating to hear the "error" when the test signal is actual music.

Yes, indeed.

I seem to remember that Ben Duncan had an article in the old pre-Miller HiFi News that used a similar method with multitone tests, measuring the resultant distortion, which in some cases exceeded -60dB.

I found the reference I think, if someone still has these issues, a scan would be appreciated...

Duncan, Ben & Harrison, Andrew; "The Great Cable Test" (parts 1-3), Hi-Fi News and Record Review, July 1999, pp. 30-33; August 1999, pp. 32-41; September 1999, pp. 40-53.

Ciao T
 
Zeta4 said:
it's fascinating to hear the "error" when the test signal is actual music.
What you are hearing is not 'distortion' but the impedance curve of the loudspeaker translated into signal level. You will get a similar effect by feeding the speaker through a small resistor (with very short cable) and listening to the voltage across the resistor. If done with a dummy load instead of a speaker you are listening to the impedance curve of the cable. Not very interesting, unless you have a bad contact somewhere - then you may get genuine distortion.
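A minimal sketch of that voltage-divider view (the speaker impedance values below are invented for illustration): the drop across the cable resistance rises wherever the speaker impedance dips, so listening across the cable effectively "plays" the impedance curve.

```python
# The cable resistance and the speaker form a voltage divider, so the voltage
# dropped across the cable is largest where the speaker impedance dips.
# Speaker impedance magnitudes below are invented for illustration.
r_cable = 0.1  # ohms, assumed cable resistance

speaker_z = [(50, 30.0), (200, 6.0), (1000, 8.0), (10000, 15.0)]  # (Hz, ohms)

error = {f: r_cable / (r_cable + z) for f, z in speaker_z}
for f, frac in error.items():
    print(f"{f:>6} Hz: {100 * frac:.2f}% of the amp output appears across the cable")
```

The level variation simply mirrors the impedance curve; no distortion is generated unless something nonlinear (like a bad contact) is present.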
 