Balanced Cable Capacitance -- Conductor to Conductor or shield to conductor?

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
I have a question that I've been thinking about for a while and never actually found the answer to:

In a balanced cable, in regards to the cable's high frequency attenuation, which capacitance is relevant -- the capacitance between the two conductors, or the capacitance between each conductor and the shield? Or both?

My assumption has been that the circuit would best be modeled by two capacitors in parallel (one for the conductor to shield, and one for the conductors to each other) but I'm not at all sure that this is correct.

Thanks
Michael
 
What do you mean by attenuation? Do you mean cable losses, which are usually insignificant at audio frequencies and not due to capacitance? Or do you mean the effect of the low-pass filter formed by the cable capacitance and the output resistance of whatever is driving it? I assume you mean the latter, which is a filter, not an attenuator.

Second, what do you mean by balanced? Do you mean a type of cable, or a type of cable use? If you are driving the cable in a balanced way then it is the core-to-core capacitance which matters, although this will include a contribution from the core-to-shield/screen capacitance. If you are driving the cable in an unbalanced way then presumably one of the cores is grounded, so capacitance from it to the shield/screen is irrelevant. If you are driving the two cores in parallel (why?) then the capacitance between them disappears but you get two lots of core-to-shield capacitance.

So it all depends on what you mean.
 
Yes, I mean the RC low pass filter formed by the circuit.

And I'm referring to a balanced input and output stage, using a standard two-conductor plus shield cable, with the shield grounded at both ends.

Why does the capacitance between conductor and shield not matter? Wouldn't this also provide a pathway for high frequencies to short circuit, thus avoiding the resistive load? I know I must be going astray somewhere, but I'm not sure where it is.
 
OK. In a balanced cable driven in a balanced way there are two capacitances which matter:
1. core to core - Ccc
2. core to shield - Ccs
If you simply measure the core-to-core capacitance then you will include two lots of core-to-shield capacitance in series, in parallel with the direct core-to-core capacitance. So you need to know whether the capacitance figures you have are raw measurements, or values derived from measurements to separate out the various contributions.

So Ccc(measured) = Ccc + Ccs/2

This is the capacitance between the two signal wires. Each will be driven from a source resistance R. So you have a low pass filter consisting of 2R and Ccc(measured).
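To make the Ccc(measured) relationship concrete, here is a quick sketch using made-up but plausible per-metre figures for a two-core shielded cable (illustrative values only, not from any datasheet):

```python
# Assumed values for illustration only.
Ccc = 50e-12   # direct core-to-core capacitance, farads
Ccs = 100e-12  # each core-to-shield capacitance, farads

# A core-to-core measurement also sees the two core-to-shield
# capacitances in series (Ccs/2) in parallel with the direct Ccc:
Ccc_measured = Ccc + Ccs / 2

print(Ccc_measured)  # 100 pF: twice the direct core-to-core value here
```

So with these assumed numbers, half of what you would measure core to core is actually contributed by the shield path.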

Or you can look at it another way: treat the capacitance as two capacitances joined at ground. Each capacitance will be twice what you measured. So you have a low pass filter on each leg consisting of R and 2Ccc(measured). The result is the same.
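The equivalence of the two viewpoints is easy to check numerically. The sketch below uses hypothetical values (100 ohm source resistance per leg, 300 pF measured core-to-core for a few metres of cable) to show both give the same cutoff frequency:

```python
import math

# Hypothetical values for illustration.
R = 100.0            # source resistance per leg, ohms
Ccc_meas = 300e-12   # measured core-to-core capacitance, farads

# View 1: one capacitor Ccc_meas across the pair, driven through 2R total.
fc_differential = 1 / (2 * math.pi * (2 * R) * Ccc_meas)

# View 2: split into two capacitors of 2*Ccc_meas, each joined at a
# virtual ground, with each leg seeing R.
fc_per_leg = 1 / (2 * math.pi * R * (2 * Ccc_meas))

print(fc_differential, fc_per_leg)  # identical, roughly 2.65 MHz
```

Either way the corner frequency lands far above the audio band for these values, which is why cable capacitance is rarely a problem with low-impedance balanced drive.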

Why does the capacitance between conductor and shield not matter? Wouldn't this also provide a pathway for high frequencies to short circuit, thus avoiding the resistive load? I know I must be going astray somewhere, but I'm not sure where it is.
Capacitance from a core to the shield does not matter only if that core is grounded, i.e. connected to the shield. That is not the case when you are using balanced drive.
 