Speaker Cable Capacitance and Amplifier Stability

Hi All,

Sorry this isn't really a DIY question, but I think a good few people on here really know their stuff, so here goes.

I have some Kimber 8PR cables that, according to the Kimber website, have a parallel capacitance (Cp) of 742.0 pF @ 20 kHz for a 2.5 metre pair. I'm a bit worried about using them with my little Lavardin amplifier, as they recommend (like Naim etc.) avoiding high-capacitance cables. I know this isn't very high capacitance compared to a Goertz cable, for example, but just what is a 'high' capacitance cable? And does the speaker impedance (8 ohms in my case) play a part in this?

Thanks for any advice in advance.
 
I think that this is for speakers. Yes, there are (too) many amps which act oddly or even destructively when loaded with even moderate values of capacitance. I'm unfamiliar with yours - have you asked the manufacturer directly for advice? Does it use an output coil?

You can risk the amp but spare your speakers by hooking up an 8 ohm dummy load, then paralleling that with 1000 pF or so, and seeing if the amp screams and smokes. Or you could just chuck the high-capacitance cables and use something less glamorous but more sensible.
 
A rather late reply as I haven't visited threads widely lately.

742 pF (if that is correct) is an extremely high capacitance for a 2.5 meter loudspeaker cable - what misguided design route did these fellows take?

The lowly ripcord (1 mm) measures only 400 pF for 4 meters. The more professional types measure <200 pF for 3 meters. I am also amazed that amplifiers exist which cannot tolerate this sort of thing. (If Sy had not acknowledged that, I would be very hard-put to believe it.)

JCLNV, I was hoping that you might have a scope and square wave generator at hand or available. The best test is to hook the lot up and watch the scope for accentuated overshoot or oscillation (and shut down immediately in the case of the latter). Sy mentions an 8 ohm resistor and 1000 pF load; fair enough, but one must remember that at 20 kHz most loudspeakers are way above 8 ohms - more likely 20 - 30 ohms, inductive, and increasing. My best thought would be that those amplifiers that do object do so because of resonance between the loudspeaker impedance and the cable capacitance at some supersonic frequency, rather than just the cable C.
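To put a number on that resonance idea, here is a minimal sketch (Python). The 0.3 mH voice-coil inductance is purely an assumed, illustrative value; the 742 pF is the Kimber figure quoted above.

```python
import math

# Rough check of the resonance idea: an inductive speaker load can
# resonate with the cable's parallel capacitance well above the
# audio band.  L_speaker is an assumed, illustrative value only.
L_speaker = 0.3e-3    # H, assumed HF voice-coil inductance
C_cable   = 742e-12   # F, 2.5 m Kimber 8PR pair (website figure)

f_res = 1.0 / (2.0 * math.pi * math.sqrt(L_speaker * C_cable))
print(f"Resonance at about {f_res/1e3:.0f} kHz")   # ~337 kHz
```

Well above the audio band, but still in a region where many solid-state amplifiers have loop gain.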

But explanations aside, you have what you have and I would suggest the above square wave test if feasible, to put your mind completely at rest.
 
Johan Potgieter said:
742 pF (if that is correct) is an extremely high capacitance for a 2.5 meter loudspeaker cable - what misguided design route did these fellows take?


Why is that high?? 2.5 m × 39.4 in/m ≈ 98.4 inches, and 98.4 / 12 ≈ 8.2 feet.

742 / 8.2 = 90.4 pF per foot. Sheesh, that's not bad.

Why is it misguided? If it's fully constrained with a foam dielectric, the inductance will be 11.4 nH per foot, for a cable Z of 11.25 ohms. If it is a typical dielectric (2.7), it will be 31 nH per foot, and 18.5 ohms cable Z. If unconstrained, L and Z will be higher, but without L or the geometry, constrained is the lower limit.
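For anyone checking the arithmetic, these figures follow from Z = sqrt(L/C). A quick sketch (Python), using the 90.4 pF/ft worked out above:

```python
import math

# Verify the two constrained-geometry cases quoted above.
C = 90.4e-12                          # F per foot (742 pF / 8.2 ft)
for name, L in [("foam dielectric", 11.4e-9), ("er = 2.7", 31e-9)]:
    Z = math.sqrt(L / C)              # characteristic impedance
    print(f"{name}: Z = {Z:.1f} ohms")
# foam dielectric: Z = 11.2 ohms
# er = 2.7:        Z = 18.5 ohms
```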

It would appear that their design guidelines were to get the cable impedance down to within the speaker impedance realm, but not all the way down to 8 ohms, either due to capacitive concerns, or geometric constraints.

But choosing a design impedance that low is not misguided, it's an attempt to reduce the inductance as much as possible, to keep the cable energy storage minimal.

Cheers, John
 
Looked at the kimber site.

L of 56 nH per foot.

(L*C)/1034 = 4.87 - that is the geometry-defined effective dielectric constant. Not very well constrained...

Z = 25 ohms.
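Same arithmetic in a short sketch (Python); any small difference from the 4.87 above is just rounding of the per-foot capacitance:

```python
import math

# From Kimber's published per-foot figures.
L = 56e-9      # H per foot
C = 90.4e-12   # F per foot (742 pF / 8.2 ft)

eps_eff = (56 * 90.4) / 1034.0   # (L*C)/1034, L in nH/ft, C in pF/ft
Z = math.sqrt(L / C)
print(f"eps_eff = {eps_eff:.2f}, Z = {Z:.0f} ohms")   # ~4.9 and ~25 ohms
```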

Well, it's better than zip spec-wise, but I still don't know what their design criteria were.

Cheers, John
 
Hi John!

Glad to hear from you; lately I am not here that often.

By high I meant relatively. Not to mention brands: for a 2.5 m length, a quite expensive cable I measured showed 180 pF. RG58/U (a co-ax cable) had about 253 pF. The lowly rip-cord had 250 pF. Other moderately expensive types (not exotic) were 60 - 80 pF. That makes the Kimber capacitance 3x higher than the highest out of the dozen or so I checked.

In that sense they must have thought that whatever dielectric material/conductor diameter/inter-conductor spacing they used was justified, despite the popular notion that even loudspeaker cable must have low capacitance (not saying I share that). I simply expressed wonder at the rejection of such a "selling point" - in favour of what?

I am not checking or disputing your analysis (I know better!), but I think both you and I know better than to invoke "speaker impedance" (another "selling point" with some) for cable lengths of the order of <0.1% of a wavelength, or cable storage effects in the presence of loudspeaker impedances.... But yes, those are design goals with some; still, excuse my use of the term "misguided".

Not to re-re-re-open this can of worms here; just a reply to yours.

Regards
 
According to NP, impedance mismatch at shortwave frequencies can lead to amplifier instability - in his opinion more so than the influence of the cable capacitance on phase margin.

He recommends the use of a Zobel at the speaker end of the cable in order to achieve at least some impedance matching at high frequencies.
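For illustration, a minimal sketch (Python) of what such a speaker-end Zobel does. The 10 ohm / 0.1 uF values are common textbook choices, assumed here for the example - not necessarily NP's exact recommendation:

```python
import math

# Impedance magnitude of a series R-C Zobel across the speaker
# terminals.  R and C are assumed, typical values.
R = 10.0      # ohms
C = 0.1e-6    # F

for f in (20e3, 200e3, 2e6):
    Xc = 1.0 / (2.0 * math.pi * f * C)   # capacitive reactance
    Zmag = math.hypot(R, Xc)             # |R - jXc|
    print(f"{f/1e3:7.0f} kHz: |Z| = {Zmag:6.1f} ohms")
# ~80 ohms at 20 kHz (nearly invisible to the audio signal), but
# approaching 10 ohms by 2 MHz, terminating the cable at RF even
# when the driver itself has gone high-impedance and inductive.
```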

Regards

Charles
 
Johan Potgieter said:
Hi John!

Glad to hear from you; lately I am not here that often.

Good to hear from you also... I've also been other places. Just got back, and trying to find my pulse..

Johan Potgieter said:

By high I meant relatively.

Ah, ok. No prob.


Johan Potgieter said:
I am not checking or disputing your analysis (I know better!), but I think both you and I know better than to invoke "speaker impedance" (another "selling point" with some) for cable lengths of the order of <0.1% of a wavelength, or cable storage effects in the presence of loudspeaker impedances.... But yes, those are design goals with some; still, excuse my use of the term "misguided".

If one is concerned with SWR, then matching line to load does indeed appear to be of no consequence to audio. That's what they taught us all those many years ago.

When one analyzes the speaker run as a transmission line anyway, it is not as clear as one would like. Mismatching line and load does change the response in a fashion not alluded to by our schooling.
Johan Potgieter said:

Not to re-re-re-open this can of worms here; just a reply to yours. Regards

Again, not a problem..



phase_accurate said:
According to NP, impedance mismatch at shortwave frequencies can lead to amplifier instability - in his opinion more so than the influence of the cable capacitance on phase margin.

He recommends the use of a Zobel at the speaker end of the cable in order to achieve at least some impedance matching at high frequencies.

Regards

Charles

Who is NP?

Cheers, John
 
My guess would be Nelson Pass. Much nicer than our old pal JC

1.) Yes
2.) Maybe, but keep in mind that there are always people on this forum who are deliberately or carelessly p****ing the old experienced guys off. And JC might be more susceptible to this than NP. But that should not keep us from being grateful that there are experienced people sharing their views and thoughts with us.


Regards

Charles
 
AJinFLA said:
But is this audible with a music signal in a room with a typical multi (largely uncorrelated) driver loudspeaker? Enquiring minds want to know.


Unknown. What surprised me was the fact that the line-to-load ratio had the effect I calculated. I had not suspected the difference between T-line theory and lumped-element analysis. Of course, it's logical to me now since I figured it out.

While in theory, the two speakers have to be absolutely identical to guarantee soundstage, we do tend to adapt to the nonconformities as we listen.

AJinFLA said:
My guess would be Nelson Pass. Much nicer than our old pal JC ;)
AJ

Ah, makes sense. JC ain't that bad, he just needs better online people skills. He is a worthwhile information source for much that I do not understand.

Cheers, John
 
jneutron said:


Unknown. What surprised me was the fact that the line-to-load ratio had the effect I calculated. I had not suspected the difference between T-line theory and lumped-element analysis. Of course, it's logical to me now since I figured it out.


I was not going to expand on this, but perhaps one thought more. What you said above (also in previous remarks) is of course true - nobody ever said cables don't make a difference per se. But in audio, I can hardly entertain the logic that what one calculates for very short T-lines makes an iota of difference. What I found could perhaps amount to 2% of the possible LCR parameters of an average loudspeaker. Meaning: how many loudspeakers with substantially different equivalent diagrams all sound good? According to Dunlavy at least (also Danish tests), samples of even the same loudspeaker model can differ from one another by up to 15%, equivalent-diagram wise. I find it a little demanding to accept that cable parameters contributing much less than the normal loudspeaker spread can suddenly lead to cries of "see, cables make a difference!" (not John's utterances).
 
When I built a MOSFET amplifier (ETI5000) in my first year at tech back in 1985, I could not at first figure out why the overload protectors on those B&W speakers kept tripping, even at moderate levels. A few years later I discovered a 4 MHz oscillation present at the output of these amps. I resolved this by installing drain-gate capacitors (IIRC) on the output transistors.

Now, I'm thinking: how many other amps out there employ electronics that can interact with out-of-band signals or loads? We mostly limit amplifier bandwidth by installing an input filter, and neglect what happens at the output side. We convince ourselves by proving that significant cable effects do not exist within the audio band. But what if cables do indeed induce (certain) amplifiers to become unstable outside the audio band, or admit RF signals? And what if these effects manifest themselves as reduced fidelity of the audio signal?
 
Shaun said:
But what if cables do indeed induce (certain) amplifiers to become unstable outside the audio band, or admit RF signals? And what if these effects manifest themselves as reduced fidelity of the audio signal?

I remember about 25 yrs ago that certain Phase Linear power amplifiers would literally burst into flames when driving some speakers through something called "Cobra" cables. That certainly reduced their fidelity!

The Cobra cables had very high capacitance, trying to get to a low characteristic impedance I guess. The PL amps would go unstable from the lowered output pole frequency formed with the high capacitance. The HF oscillation would fry the amplifier because the bipolar output transistors in the PL couldn't turn off as fast as they could turn on, and so caused massive current to go through the transistors (same would happen if you drove the amps full scale at very high frequencies).
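A back-of-envelope sketch (Python) of that lowered-pole mechanism. Both numbers below are assumed purely for illustration - roughly 20 ohms of open-loop output resistance and 10 nF for a run of very high-capacitance cable:

```python
import math

# Extra output pole formed by open-loop output resistance and cable
# capacitance.  Both values below are assumed, illustrative figures.
R_out   = 20.0    # ohms, assumed open-loop output resistance
C_cable = 10e-9   # F, assumed total capacitance of the cable run

f_pole = 1.0 / (2.0 * math.pi * R_out * C_cable)
print(f"Added pole at about {f_pole/1e3:.0f} kHz")   # ~796 kHz
# If this lands below the amp's unity-loop-gain frequency it eats
# phase margin, and the amplifier can break into HF oscillation.
```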

The problem with trying to use cables matching the impedance of loudspeakers is that speakers are only *nominally* 8 ohms (or whatever). It's not unusual to have a 10:1 difference in speaker impedance over the audio range. At even higher frequencies, cone or dome speakers usually have a very high inductive impedance, so all the amplifier sees is the speaker capacitance that is in parallel with that high impedance.

The benefits of using cables matching the speaker impedance are doubtful (IMHO).
 
Again, I am surprised.

I was of the opinion that one of the basic requirements of an amplifier design was that it should be stable with normal operating output loads - i.e. including whatever cable capacitances and load variations would normally be encountered. In my own designs I was never quite comfortable unless they were stable even with open load (perhaps a little vanity ... who knows). I mostly seem to be able to achieve this.

Coming to the point: yes, obviously the sort of thing mentioned in the previous two posts (bwaslo and Shaun) would cause havoc, and should not occur. Nowadays the easy way out of that sort of problem is a small series inductor in the output circuit.* (One never had those problems with tube designs because there was always a large series inductance - the OPT leakage reactance.)

So, I would risk saying that if an amplifier is prone to that kind of instability - redesign! Perhaps bold words, but I am an amplifier designer and cannot agree that such a state of affairs should necessarily exist. If one is the owner of such an amplifier without the technical expertise to rectify the situation, hopefully a knowledgeable person could be found to assist without having to throw the amp out. (A signal generator and scope would be required for a proper analysis.)
_____________________________

*This was the subject of a drawn-out discussion elsewhere.
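Regarding that series inductor, a minimal sketch (Python) of what it buys; the 2 uH value is a typical assumption, not taken from any particular design:

```python
import math

# Reactance of a small series output inductor: invisible in the
# audio band, significant at the frequencies where cable C bites.
L = 2e-6   # H, assumed typical output coil

for f in (20e3, 500e3, 4e6):
    XL = 2.0 * math.pi * f * L
    print(f"{f/1e3:7.0f} kHz: XL = {XL:6.2f} ohms")
# 0.25 ohm at 20 kHz (negligible against an 8 ohm load), but
# ~50 ohms at 4 MHz, isolating the amp from the cable capacitance.
```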
 
Johan Potgieter said:
Again, I am surprised.

I was of the opinion that one of the basic requirements of an amplifier design was that it should be stable with normal operating output loads...

...So, I would risk saying that if an amplifier is prone to that kind of instability - redesign! Perhaps bold words, but I am an amplifier designer and cannot agree that such a state of affairs should necessarily exist...

Exactly my sentiments (though I would hesitate to call myself a designer).

What is worrying, though, is that these effects are mostly reported by users of high-end equipment(!)... of which the expectation is that it is amongst the best in the world (the equipment, that is ;) ). And I believe that this is what makes cable-tweaking of systems such a widely accepted practice.
 
Johan Potgieter said:
But in audio, I can hardly entertain the logic that what one calculates for very short T-lines makes an iota of difference.
I personally agree that the prop-velocity-based concerns for audio are misused. Saying that a prop velocity of 95% vs 85% reduces smearing or time delay is ridiculous... if one assumes the signals transit the "pipe" only once.

If one were to contrast two ten-foot cables, the first at 1 ns per foot and 4 ohm Z, versus another at 2 ns per foot and 100 ohm Z, both feeding a 4 ohm load, what would the step response be?

For cable 1, the load voltage and current are those of the intended signal, exactly 10 nanoseconds after the amp produces it. For cable 2, 20 ns out (remember, it's 50% c), it is 1/25th of the final signal. Reflections reach the load every subsequent 40 nanoseconds; each reflection that hits the load increases the voltage/current at the load by 1/25th of the difference between the intended signal and that currently at the load.

That is a discrete decaying exponential. And it is double the response time that a lumped-element analysis of the cable inductance would indicate.

And, the first cable has ZERO response time (other than the transit time).

So clearly there is a marked electrical difference between a zip cord, and a matched cable.

Audible? Don't know, suspect maybe, nobody's proven such.
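For anyone who wants to see that discrete exponential, a minimal bounce-diagram sketch (Python). Zero source impedance is assumed here, which makes the per-arrival fraction come out near 1/13 rather than 1/25 - the exact fraction depends on the source-impedance assumption, but the shape (a staircase exponential, versus one-shot delivery on the matched line) is the point:

```python
# Bounce diagram: 10 ft of 100 ohm line (2 ns/ft) into a 4 ohm load,
# driven by a 1 V step from an assumed zero-impedance source.
Z0, ZL, ZS = 100.0, 4.0, 0.0
gL = (ZL - Z0) / (ZL + Z0)          # load reflection coefficient
gS = (ZS - Z0) / (ZS + Z0)          # source reflection coefficient

v_inc  = 1.0 * Z0 / (Z0 + ZS)       # amplitude of the launched wave
v_load = 0.0
t = 20                              # ns, first arrival (10 ft @ 2 ns/ft)
for n in range(8):
    v_load += v_inc * (1.0 + gL)    # each arrival deposits its transmitted part
    print(f"t = {t:4d} ns  v_load = {v_load:.3f} V")
    v_inc *= gL * gS                # round trip: load -> source -> load
    t += 40                         # ns per round trip
# v_load creeps toward 1 V in a discrete decaying exponential; the
# matched 4 ohm line delivers the full step on the first arrival.
```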

Johan Potgieter said:
What I found could perhaps amount to 2% of the possible LCR parameters of an average loudspeaker. Meaning: how many loudspeakers with substantially different equivalent diagrams all sound good? According to Dunlavy at least (also Danish tests), samples of even the same loudspeaker model can differ from one another by up to 15%, equivalent-diagram wise. I find it a little demanding to accept that cable parameters contributing much less than the normal loudspeaker spread can suddenly lead to cries of "see, cables make a difference!" (not John's utterances).

Assume two full range drivers, 10% difference in sensitivity.

Plug em in, play music, sit in the sweetspot.

Would we know if the sweetspot shifted a really small amount due to the difference? I wouldn't, as I've no head vice. The point being, we adapt to this small shift by moving our head, even 100 mils...

Go three way now. Same deal?? Not exactly.

The sweetspot for the midrange may not be the same as that of the tweeters. Sibilance may be offset with respect to main vocals, especially if the drivers weren't matched.

Very good speaker vendors would be concerned with that problem... I know I would. I've made my own systems where the tweeter cap tolerance did shift sibilance in that way.

The equivalent amplitude or timing shift needed to make that shift in sibilance? The numbers are small enough to boggle the mind of any intelligent person. 2 to 5 µs interchannel? A tenth of a dB??? Ridiculous numbers... but the equations are robust, and I cannot rule out the possibility of some cable effect causing problems at that level. Heck, I can't even measure that low...
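A quick conversion of those numbers (Python). A standard ±30 degree stereo setup and 343 m/s sound speed are assumed, so this is illustrative only:

```python
import math

# Change in interaural path-length difference from a small lateral
# head shift, for distant speakers at +/- theta.
c     = 343.0                # m/s, speed of sound
d     = 0.100 * 0.0254       # 100 mils expressed in metres
theta = math.radians(30)     # assumed speaker half-angle

dpath = 2.0 * d * math.sin(theta)    # path-difference change
print(f"{dpath / c * 1e6:.1f} us")   # ~7.4 us
```

So a 100 mil head movement is the same order as the 2 to 5 µs interchannel figure above, which is why we can adapt to it just by shifting in the seat.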


bwaslo said:
The problem with trying to use cables matching the impedance of loudspeakers is that speakers are only *nominally* 8 ohms (or whatever). It's not unusual to have a 10:1 difference in speaker impedance over the audio range. At even higher frequencies, cone or dome speakers usually have a very high inductive impedance, so all the amplifier sees is the speaker capacitance that is in parallel with that high impedance.
Most try to get the match without the benefit of the equations. As a consequence, they usually end up with high capacitance, and equivalently high effective dielectric coefficients.

When cable z exceeds load z, the inductive energy storage dominates...when load z is higher, capacitive storage dominates.

While I do recommend matching load z, I understand the problem of speaker loads. That's why it's best to compromise: get cable z in the middle of the range, while keeping (L*C)/1034 as close to 1 as possible (L in nH per foot, C in pF per foot). Foam-based coax or foam-dielectric-spaced high-aspect ribbons would be my choice.
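The energy-storage point can be made concrete for a line driving a resistive load R: the ratio of stored magnetic to electric energy works out to (Z0/R)². A sketch (Python), taking the 8 ohm nominal load of this thread:

```python
# For a line of impedance Z0 carrying a signal into a resistive load
# R: E_L/E_C = (L*I^2)/(C*V^2) = (L/C)/R^2 = (Z0/R)^2.
def energy_ratio(Z0, R=8.0):
    return (Z0 / R) ** 2

for Z0 in (4.0, 25.0, 100.0):
    print(f"Z0 = {Z0:5.1f} ohms: E_L/E_C = {energy_ratio(Z0):6.2f}")
# Z0 below the load -> ratio < 1, capacitive storage dominates;
# Z0 above the load -> ratio > 1, inductive storage dominates.
```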


bwaslo said:
The benefits of using cables matching the speaker impedance are doubtful (IMHO).

For my applications, I certainly agree. Even for my home systems, where I just listen but do not sit, I agree.

But for those sitting in the sweetspot of a high end system, trying to discern soundstage imaging, I cannot share your blanket "enthusiasm".


Johan Potgieter said:
So, I would risk saying that if an amplifier is prone to that kind of instability - redesign! Perhaps bold words, but I am an amplifier designer and cannot agree that such a state of affairs should necessarily exist.

As a designer, you may choose to trade off stability for large-signal bandwidth. That's a design call.

Some designers believe bats should be happy as well, and are willing to give up some stability.
Shaun said:
What is worrying, though, is that these effects are mostly reported by users of high-end equipment(!)... of which the expectation is that it is amongst the best in the world (the equipment, that is ;) ). And I believe that this is what makes cable-tweaking of systems such a widely accepted practice.
I distinguish the high-end users as people who are more concerned with imaging... to wit, their end-use specifications may be somewhat more critical.

Not a bad thing, really. Just not my cup of tea.

Cheers, John
 