TPC vs TMC vs 'pure Cherry'

Post #93 was not meant as a joke. I'm really wondering whether slow output devices that essentially go out of the picture at the ULGF may in fact improve the robustness of the system.
Matze, you are right about this and I know you were serious.

The simplest Lin type amp, eg Fig 1.5 in Bob's book, has 4 devices in the active path so we expect that it won't be stable without compensation. But in da old days, there were real examples which were stable without obvious compensation cos devices were so slow.

Most examples of this simple topology need some compensation .. even if it is just LTP degeneration working with the evil Ccb of the VAS.

But the practical effect, in the case of post #4, is simply that you need different compensation with faster output devices. For MJL3281/1302, R22, the VAS emitter resistor, becomes 33R .. and C9 across it becomes 470p.
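(Quick sanity check, treating C9 as simply shunting R22: the corner sits at 1/(2·pi·R22·C9) = 1/(2·pi × 33R × 470pF) ≈ 10 MHz, i.e. right around the 10 MHz ULGF mentioned later in the thread.)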

That's why I'm finding excuses not to begin work on 'pure Cherry' for Michael's 'supa VAS + triples'. This has 5 devices so I'm not sure the compensation I'm prepared to use (just R22 & C9) is sufficiently powerful.

The reason I asked him if he had some performance target is that if 1ppm THD @ 20kHz is good enough, :eek: then the simpler circuit in #4 (only 4 devices in the inner loop forward path) will do it with known stability.

Thank you for your comments.
____________________

I repeated the sims in posts #84 & 85 with a large inductor breaking the main loop between J1 drain and the input to the VAS. This is in line with Waly's exhortations to check each inner loop without the influence of enclosing loops. David, I hope this new schematic meets with your approval :)

I'm pleased to find that the stability results, R22=0 Bad, R22=22 Good, are unchanged. So at least for simple power amps like these, unstable inner loops also result in unstable main loops. Waly's evil unstable inner loops hiding behind stable main loops are not here. :D

I'm beginning to like Nyquist plots even though they are more work .. certainly for tweaking stability .. eg the changes required with different output devices. You get a better feel for what each tweak is doing.
 

Attachments

  • Cherry.gif (44 KB)
The amplifier is also stable in practice, not although but precisely because the output transistors are slow.

At the ULGF of 10 MHz (http://www.diyaudio.com/forums/solid-state/235188-tpc-vs-tmc-vs-pure-cherry-9.html#post3480607), with a largest-device Ft of 6...8 MHz, the output transistors will not really amplify under any operating condition. At the ULGF they just provide a feed-through via their base-emitter junction resistances and capacitances.
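(For orientation, using the textbook single-pole beta roll-off rather than any datasheet: well above the beta corner |β(f)| ≈ Ft/f, so at 10 MHz a device with Ft of 6...8 MHz has |β| of only about 0.6...0.8, i.e. less than unity current gain; what is left is essentially the base-emitter impedance passing the signal on.)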

I asked in DIYAudio earlier and had no response to the question "How close is a transistor to minimum phase past Ft?"
Later I read Dennis Feucht's treatment of transistors past Ft and it is similar to your idea.
If it is possible to run the loop well past the Ft of the transistor then that increases the amount of distortion reduction possible.

...where Bode is sufficient ... and when the full Nyquist plot should be examined....

Bode and Nyquist show exactly the same information. Just a different perspective.
And thanks for the clearer pictures ;)


Nice link. Thank you.

Best wishes
David
 
Can you show a very simple example of this, say [2x2]? I still remember matrix eigenvalue theory but would like to see how it is applied.

Do I need to spell out again that I'm a lazy ****?

Here's a good example.

Essentially, you start with the set of equations (in complex s) that characterize the linear system (use Kirchhoff's laws and consider the Ui(s) and Ij(s) variables of the two-port, i,j = 1,2), determine the linear system matrix A, then calculate the eigenvalues by solving det(A - λI) = 0 (they are in general complex numbers), then plot the trajectory in the phase space. If it diverges, the system is unstable; if it converges, it's stable.

If you prefer, you can start from the set of differential equations and use the isomorphism between the s variable and the differential operator.
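A minimal numpy sketch of the [2x2] case asked about above (my own example with arbitrary series-RLC values, not Waly's): write the state equations, form the system matrix A, and check the sign of the real parts of the eigenvalues.

[code]
import numpy as np

# Series RLC with the source shorted: states x = [i_L, v_C]
#   di_L/dt = (-R*i_L - v_C) / L
#   dv_C/dt =   i_L / C
R, L, C = 10.0, 1e-3, 1e-6          # arbitrary example values

A = np.array([[-R / L, -1.0 / L],
              [ 1.0 / C,  0.0  ]])

lam = np.linalg.eigvals(A)          # roots of det(A - lambda*I) = 0
print(lam)                          # complex pair, real part -R/(2L)
print("stable" if np.all(lam.real < 0) else "unstable")
[/code]

Every eigenvalue in the left half-plane means every natural response decays; an eigenvalue with a positive real part means the trajectory diverges and the network is unstable.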

We are so far off topic it's not even funny.
 
Most examples of this simple topology need some compensation .. even if it is just LTP degeneration working with the evil Ccb of the VAS.

But the practical effect, in the case of post #4, is simply that you need different compensation with faster output devices. For MJL3281/1302, R22, the VAS emitter resistor, becomes 33R .. and C9 across it becomes 470p.
Kgrlee, probably my reasoning is too strange. So I will try to explain my point once more.

If we look, e.g., at the Ft versus Ic curve of RETs like the MJL3281, we will see large variations of the effective current gain at 10 MHz, depending on output current. This will inevitably influence the minor-loop behaviour, as it modulates the VAS load impedance. If, on the other hand, one has output devices that do not amplify at all at this frequency, then an important contribution to varying behaviour, and thus to reduced robustness, is simply excluded. All that now influences the minor loop at 10 MHz is the output network, driver and VAS, and their behaviour is much simpler to estimate, as they operate in class A. Additionally, their Ft is much higher than that of the MJLs, so the remaining current-dependent Ft changes will cause much smaller changes in the minor loop's stability margins.

Short meaning of long text:
[strange] If one manages somehow (e.g. with your C9) to achieve comparable minor-loop stability margins with slow output devices and with these fast ones, then the result will be more robust with the slow devices. (All this of course only applies to the case where the ULGF is *above* the slow device's Ft.)
[/strange]

Best regards,
Matthias
 
Do I need to spell out again that I'm a lazy ****?

Here's a good example.

Essentially, you start with the set of equations (in complex s) that characterize the linear system (use Kirchhoff's laws and consider the Ui(s) and Ij(s) variables of the two-port, i,j = 1,2), determine the linear system matrix A, then calculate the eigenvalues by solving det(A - λI) = 0 (they are in general complex numbers), then plot the trajectory in the phase space. If it diverges, the system is unstable; if it converges, it's stable.

If you prefer, you can start from the set of differential equations and use the isomorphism between the s variable and the differential operator.

We are so far off topic it's not even funny.
Hi Waly,

even to the eyes of someone not acquainted with control theory, it's not that much off-topic.
A practical question that has been raised recently in different threads here is this: the theory you just explained distinguishes between stable and unstable. In practice, we would like an estimate of *how* stable it is.
With Bode diagrams we have developed a feeling for how large the margins should be. Now, with multiple feedback paths, the innermost loop often shows stability margins in the Bode plot that would make one suspicious in a simpler system. On the other hand, in transient simulations all seems to be fine.

The question is: does the theory you mention give some help in evaluating the safety margins (knowing that in practice everything will be worse than in analysis and simulation)? Are we, e.g., to apply the same phase-margin standards in a system with nested loop(s) as in a simple one?

Best regards,
Matthias

PS. Thanks for the links in your post #94
 
If it is possible to run the loop well past the Ft of the transistor then that increases the amount of distortion reduction possible.
Hi David,

as I see it, kgrlee consistently insists on having done exactly that.
His 2SD1046/2SB816 output devices (post #101) have a typical Ft only a bit higher, but maybe with the VAS/OPS devices he used in practice the minor loop's ULGF was also higher?

Kind regards,
Matthias
 
Hi Waly,

even to the eyes of someone not acquainted with control theory, it's not that much off-topic.
A practical question that has been raised recently in different threads here is this: the theory you just explained distinguishes between stable and unstable. In practice, we would like an estimate of *how* stable it is.
With Bode diagrams we have developed a feeling for how large the margins should be. Now, with multiple feedback paths, the innermost loop often shows stability margins in the Bode plot that would make one suspicious in a simpler system. On the other hand, in transient simulations all seems to be fine.

The question is: does the theory you mention give some help in evaluating the margins? Are we, e.g., to apply the same phase-margin standards in a system with nested loop(s) as in a simple one?

Best regards,
Matthias

PS. Thanks for the links in your post #94

Unfortunately I am not aware of any general method of consistently determining stability margins in multi-loop systems. As I've mentioned elsewhere, something like the minimum phase margin is not correct, as this depends on the order in which the loops are individually analyzed (like 1-2-3 vs. 2-1-3).

In fact, stability margins are not directly a topic in control theory, but more of an engineering quest. There are several more restrictive "engineering" theorems; one of them says that if the loops have a common node at the output, then all loops can be analyzed in one step by breaking them at the output node, so the usual phase and gain margins can be determined. While quite intuitive, the proof of this theorem is not trivial. E.g. this theorem may help in splitting the TMC analysis into a Miller loop and 3 loops at the output node (global, R-C1, R-C2). Because the Miller ULGF is far away, it is possible to approximate the TMC stability margins by analyzing the 3 loops at the output node (and observe the TPC-like phase dip).
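For illustration, a small numpy sketch (the three-pole loop gain below is made up, it is not TMC or any amplifier from this thread) of how the usual margins drop out once you have the loop gain at a single break point:

[code]
import numpy as np

# Hypothetical loop gain L(s) = A0 / ((1 + s/w1)(1 + s/w2)(1 + s/w3))
A0 = 1e4
f1, f2, f3 = 1e3, 2e7, 1e8                     # pole frequencies in Hz (made up)

f = np.logspace(2, 8, 20000)                   # 100 Hz .. 100 MHz
s = 2j * np.pi * f
L = A0 / ((1 + s/(2*np.pi*f1)) * (1 + s/(2*np.pi*f2)) * (1 + s/(2*np.pi*f3)))

mag_db = 20 * np.log10(np.abs(L))
phase  = np.degrees(np.unwrap(np.angle(L)))

i_ugf = np.argmin(np.abs(mag_db))              # |L| = 0 dB crossing
pm    = 180 + phase[i_ugf]                     # phase margin
i_180 = np.argmin(np.abs(phase + 180))         # phase = -180 deg crossing
gm    = -mag_db[i_180]                         # gain margin

print(f"ULGF ~ {f[i_ugf]/1e6:.1f} MHz, PM ~ {pm:.0f} deg, GM ~ {gm:.0f} dB")
[/code]

In a real multi-loop amplifier the arithmetic in the last few lines is the easy part; the hard part, as discussed above, is justifying that the loop gain seen at that one break point really captures all the loops.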
 
... There are several more restrictive "engineering" theorems; one of them says that if the loops have a common node at the output, then all loops can be analyzed in one step by breaking them at the output node, so the usual phase and gain margins can be determined. ...
Probably this is the result we are using in simulation with a loop probe between the output node and the common node of all feedback paths. Exactly here we now often see, e.g., phase margins lower than we would like.
In this extreme example (http://www.diyaudio.com/forums/soli...nested-miller-compensation-4.html#post3466552), the phase margin is around 60 degrees, and even in a conceptual simulation model with only ideal circuit elements it does not become much larger. On the other hand, transient simulation shows an excellent (small-signal) square-wave response *at all circuit nodes*: no overshoots in the wrong direction, no ringing ... (the margins of the outermost loop are nearly ideal: around 90 degrees, more than 20 dB).

If one simulates, on the other hand, the conceptual model of a standard topology with an idealized transconductance input stage, an integrator as VAS and a single pole in the output stage, one can also create a phase margin of 60 degrees in the global loop. In this case, however, the square-wave response does ring.

This leads me to the conclusion that the phase margins are not exactly comparable. (After all, I would expect that the complete transfer function has to be taken into account.)
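To put a number on that second case, here is a small scipy sketch of the conceptual model just described (transconductance stage into an integrator VAS plus one output-stage pole, unity feedback; the 1 MHz figure is arbitrary). The pole is placed at 1.5x the integrator crossover, which gives a 60 degree phase margin, and the closed-loop step response then overshoots by roughly 9 %:

[code]
import numpy as np
from scipy import signal

wc = 2 * np.pi * 1e6                 # integrator unity-gain frequency (arbitrary)
wp = 1.5 * wc                        # output-stage pole -> 60 deg phase margin

# Loop gain L(s) = wc*wp / (s*(s + wp)); closed loop T = L/(1+L)
T_cl = signal.TransferFunction([wc * wp], [1.0, wp, wc * wp])

t = np.linspace(0, 20 / wc, 4000)
t, y = signal.step(T_cl, T=t)
print(f"overshoot ~ {100 * (y.max() - 1.0):.1f} %")   # ~9 %: it rings
[/code]

The nested-loop circuit with the same nominal 60 degrees clearly does not behave like this second-order model, which supports the point that the margins alone are not comparable.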

Best regards,
Matthias
 
Stability & other stuff

No, I'm graduating in EE later this year.
A budding Baxandall/Blumlein/Nyquist/Bode ! :eek: We grovel at your feet, oh Guru Waly.

This stuff is exactly on topic. The main caveat with 'pure Cherry' is that some gurus, including Great Guru Baxandall, claim it is unworkable. At least one pseudo guru, ie me, says it can be made to work in real life. When theory and practice differ ... :)
_________________

Can you show a very simple example of this, say [2x2]?
I still remember matrix eigenvalue theory but would like to see how it is applied.
The most comprehensive example of this, as it pertains to us, is Cherry's 'Feedback, Sensitivity and Stability of Audio Power Amplifiers'. I think you said you had a copy, David.

I note Cherry scorns my C9 across the VAS emitter :mad: and damns with faint praise Bob's favourite MIC (hope i got da TLA rite) :D

Whatever your prejudice, his matrix method is of great use to those versed in the art. I leave it as an exercise for the reader to model his favourite TMC, TPC, apples or oranges. :cool:
_________________

matze said:
If, on the other hand, one has output devices that do not amplify at all at this frequency, then an important contribution to varying behaviour, and thus to reduced robustness, is simply excluded. All that now influences the minor loop at 10 MHz is the output network, driver and VAS, and their behaviour is much simpler to estimate, as they operate in class A. Additionally, their Ft is much higher than that of the MJLs, so the remaining current-dependent Ft changes will cause much smaller changes in the minor loop's stability margins.

Short meaning of long text:
[strange] If one manages somehow (e.g. with your C9) to achieve comparable minor-loop stability margins with slow output devices and with these fast ones, then the result will be more robust with the slow devices. (All this of course only applies to the case where the ULGF is *above* the slow device's Ft.)
Yes. Baxandall uses exactly your explanation in the Baxandall papers too.

But as I said, the practical effect is simply different compensation bits.
_______________________

STABILITY MARGINS and other Sh*t

If I may ask the question, 'Why are we obsessed with Phase & Gain Margins?'

ANSWER: we want to see if the system
  • oscillates/overshoots/peaks
  • might oscillate/overshoot/peak more if parameters/loads change
These are the real-world reasons. The numbers are only important cos they have some bearing on these 2 points.

In da old days, Phase & Gain Margins were used cos other investigative methods might lead to the device (eg an amp) giving up the Holy Smoke.

But in da 21st century, we have much better tools like LTspice.

If we want to look at overshoot/peaking today .. JUST CLOSE THE LOOP and look at overshoot/peaking :eek: This has the same info as Loop Gain Bode plots & Nyquist .. except it gives us DIRECTLY what we want.

Remember, we only want to know how close the Nyquist plot gets to (-1,0) cos this is inversely proportional to peaking.
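That inverse relation is easy to check: at each frequency the distance from the Nyquist locus to (-1,0) is |1 + L(jw)|, so the worst-case 1/|1+L| is exactly one over the closest approach, and with low margins the closed-loop gain peaks by almost the same factor, because |L| is still close to unity where the locus passes nearest to (-1,0). A quick numpy check on a deliberately marginal variant of the earlier made-up three-pole loop (second and third poles pulled down so there is visible peaking):

[code]
import numpy as np

A0, f1, f2, f3 = 1e4, 1e3, 2e6, 2e7            # made-up, deliberately marginal loop
f = np.logspace(2, 8, 20000)
s = 2j * np.pi * f
L = A0 / ((1 + s/(2*np.pi*f1)) * (1 + s/(2*np.pi*f2)) * (1 + s/(2*np.pi*f3)))

d_min = np.abs(1 + L).min()                    # closest approach to (-1, 0)
print(f"min |1+L| = {d_min:.2f} -> 1/|1+L| peaks at {1/d_min:.2f}")
print(f"closed-loop |L/(1+L)| peaks at {np.abs(L/(1+L)).max():.2f}")
[/code]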

For device and load variation, just CLOSE THE LOOP and vary your device parameters and load ... just as you would do in 'real life' for a production item. Doing this even crudely tells you useful stuff .. even if you can't do a full Monte Carlo.

And of course 21st century LTspice has .TRAN which lets you look at stuff which was definitely Holy Smoke in da old days.

My usual disclaimer: Bode/Nyquist are still useful if yus trying to improve stability .. even if the Phase/Gain margins dun haf much significance in themselves ..

Cherry 1982 talks a lot about 'insight'. Our models will never be exact. But good modelling gives us a better idea of what is going on and what to do if something is not quite right.

You know your model is becoming useful when it simulates the behaviour and the changes that you see in 'real life'.
 
(quote from post #109)
Yes. Baxandall uses exactly your explanation in the Baxandall papers too.

But as I said, the practical effect is simply different compensation bits.
Kgrlee, after reading the Baxandall paper you referred to, I give up. I no longer know what we are talking about. Probably my English is not good enough.

Thanks anyway for the interesting thread.

Best regards,
Matthias
 
Kgrlee, after reading the Baxandall paper you referred to, I give up. I no longer know what we are talking about.
Matze, it's the bits on pages 30 & 31 of The Baxandall Papers.

from bottom of page 30, 'My recent approach .... - the power transistors at least will have ceased to transist at such a high frequency, since its ft may not be above 1MHz ...'

to two-thirds of the way down page 31, '... until it ultimately becomes approx. the 20R output impedance of tr.4'

Being a Great Guru, Baxandall gives an analysis slightly more detailed than yours, but he does agree with your assertion that at VHF the VAS & driver may be dominant. Of course, being a pedant, he insists on discussing every kink in the Loop Gain curve introduced as the outputs cease to 'transist'.

Also, he is using really rotten outputs. :D

Edmond, I gather you have extended GGB's scribblings such that Bob Cordell has suggested you have the honour of naming this technique. May I ask where you consider the best exposition of this resides?
___________
Stand up and get real. What I've mentioned so far is textbook basic control theory, nothing original or outstanding.
Waly, I shall now make a prediction.

Of your large class (EE classes are usually large), there will be perhaps 3 people, probably fewer, who understand dis control theory stuff properly with your facility. :eek:

Not that I'm suggesting your 'knowledge' is complete .. but that your understanding might lead to practical advances in real life devices as you gain more experience. In case it isn't obvious, this is a compliment. :)
 
Stand up and get real. What I've mentioned so far is textbook basic control theory, nothing original or outstanding.

Right.
The beauty of control theory is that it works (every time). It is a great tool if you understand it and use it properly. Control theory (and analog design) were my favorite subjects when I studied EE, in the '70s. :)

Sometimes it seems that some EEs don't understand complex mathematics. :eek:
 
I asked in DIYAudio earlier and had no response to the question "How close is a transistor to minimum phase past Ft?"
Later I read Dennis Feucht's treatment of transistors past Ft and it is similar to your idea.
Hi David,

is Dennis Feucht's treatment available somewhere?

And a probably stupid question, but it is meant honestly: why exactly are you concerned about minimum-phase behaviour in this case?


Thanks and kind regards,
Matthias
 