Bob Cordell's Power Amplifier Book

Benefits of output triples

The benefits of using output triples go beyond increased loop gain. Indeed, in Miller-compensated amplifiers, the loop gain at most frequencies is controlled by the Miller feedback. This is especially true when a good 2T VAS is used. Compared to a double output stage, the triple simply allows loop gain to keep increasing down to lower frequencies.

The ability to deliver high current with little distortion under conditions of beta droop is one advantage. Although they might not be rated for operation at 2 ohms for thermal reasons, I like all of my amplifiers to perform well into 2-ohm loads up to the point where power supply sag or protection circuits are expected to limit power.

People are also often concerned about beta matching between the NPN and PNP output transistors, since the signal takes a different path on positive and negative half cycles. Such matching is difficult at best, and will usually not persist over a large operating range. The use of a triple greatly reduces the effect of beta mismatch of NPN to PNP devices in the output stage. Similarly, it mitigates distortion from Early effect in the output stage.

A 100-watt amplifier must deliver 40 V peak into its load. Assuming a stiff power supply, in principle it should be able to deliver that 40 V peak into 4 ohms, resulting in a peak load current of 10 A. One can argue about transistor beta, but suppose in a double the driver beta is only 50 and the output transistor beta has drooped to only 20. That is a total beta of 1000, and the 10 A into the load will require 10 mA from the VAS. That is just about equal to what we often bias a single-ended VAS at. Under worst-case conditions, for very low distortion, the signal current drawn from the VAS should never be greater than 1/10 of its quiescent bias; even 10% of the bias current is not particularly desirable.
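The current-demand arithmetic above can be sketched in a few lines (a sketch only; the beta values are the worst-case illustrations from the text):

```python
# Worst-case VAS current demand for an output double, per the example above.

def vas_peak_current(v_peak, r_load, driver_beta, output_beta):
    """Peak current the VAS must source, given the stacked current gain
    of a driver + output (double) stage."""
    i_load_peak = v_peak / r_load           # peak load current
    total_beta = driver_beta * output_beta  # current gain of the double
    return i_load_peak / total_beta

# 100 W / 8-ohm amp: 40 V peak, driven into a 4-ohm load,
# with driver beta of 50 and output beta drooped to 20.
i_vas = vas_peak_current(v_peak=40.0, r_load=4.0,
                         driver_beta=50, output_beta=20)
print(f"VAS peak current demand: {i_vas * 1000:.1f} mA")  # 10.0 mA
```

With a triple, a third beta multiplies into `total_beta`, which is why the demand on the VAS drops so dramatically.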

Cheers,
Bob
 
Member
Joined 2016
Paid Member
Hi Bob, I'm looking at one of the higher-power schematics in your book: the 200-watt amplifier at the end of chapter 3. I was looking at the two resistors that set the closed-loop gain, in this case 19K and 1K, which gives us a closed-loop gain of 20, or 26 dB. My question is this: I understand that a closed-loop gain of 20 would be required for the 50-watt amplifiers so that a 1 V input can be used.
But if the same gain is used for the 200 W amplifier, which requires about 40 V at the output for an 8-ohm load, won't we need a closed-loop gain of 40, or 32 dB, making the resistors 39.2K and 1K, if we still want to use a 1 V input level?
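The divider arithmetic above can be sketched as follows (assuming the usual non-inverting feedback network, where gain = 1 + Rf/Rg; function names are mine):

```python
import math

def closed_loop_gain(rf, rg):
    """Non-inverting closed-loop gain set by the feedback divider."""
    return 1 + rf / rg

def to_db(ratio):
    """Voltage ratio to decibels."""
    return 20 * math.log10(ratio)

av = closed_loop_gain(19e3, 1e3)              # the 19K / 1K pair above
print(f"Av = {av:.0f}, {to_db(av):.1f} dB")   # Av = 20, 26.0 dB

# For a gain of 40 (32 dB) with Rg = 1K, Rf would nominally be 39K;
# 39.2K is a nearby standard E96 value.
rf = (40 - 1) * 1e3
print(f"Rf for Av = 40: {rf / 1e3:.0f}K")     # 39K
```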
 
I think the idea is to keep the gain the same when comparing the different designs, to isolate the effects being measured. Once you change the gain, the measurement numbers change: more gain = less feedback = usually more THD. Think of gain as a constant, just like load or frequency.
But in the real world you are correct; you want that 1 V input spec to correspond to at or near the clipping output level.
 
Last edited:
Rick,

You are correct on all counts, and Stuart has raised a very reasonable question. In many commercial amplifiers, and in the BC-1 described in chapter 4, the closed-loop gain is more like 29 dB or so. Some commercial amplifiers go even higher, to perhaps 30-32 dB.

I am not aware of any standards for this. It is also not unusual for preamps to readily produce 2 V RMS or more with the volume control at a higher position. A power amplifier with a gain of 30 dB will have a voltage gain of 31.6. With a 2 V input, we get 63.2 V into 8 ohms, corresponding to 499 watts.

Cheers,
Bob
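The 30 dB example above can be reproduced with a quick script (a sketch; the gain is rounded to 31.6 first, as in the text):

```python
import math

def db_to_ratio(db):
    """Convert a gain in dB to a voltage ratio."""
    return 10 ** (db / 20)

gain = round(db_to_ratio(30), 1)     # 31.6 V/V, as quoted above
v_out = 2.0 * gain                   # 2 Vrms input -> 63.2 Vrms output
p_out = v_out ** 2 / 8               # into 8 ohms
print(f"{v_out:.1f} Vrms -> {p_out:.0f} W")   # 63.2 Vrms -> 499 W
```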
 
Member
Joined 2016
Paid Member
Thanks Bob.

So I guess that it's best to set the closed-loop gain as low as practical to achieve the desired amplifier output voltage, while taking into account the available voltage from your preamplifier and still leaving a bit of extra headroom. Maybe a value of 70% of the available preamplifier output voltage would be a starting point, if of course the preamplifier's output voltage is at least 1 volt RMS.
 
Last edited:
In the old days of LP and tape, with low signal levels, gain was more or less set by the industry. MM phono gain was set at 35-40 dB: 1.5 mV -> (phono stage) -> 150 mV -> (preamp) -> 1 V -> (power amp) -> clipping.

With the CD the numbers all changed: a CD's maximum output was the 2 V level.
You could reset the standard input sensitivity of the power amp to 2 V so that a passive attenuator could work better. In many cases it comes down to where you want the position of the volume control to be.
Now, with the wealth of media players and phones, the numbers are not so standard anymore.
 
Thanks Bob.

So I guess that it's best to set the closed-loop gain as low as practical to achieve the desired amplifier output voltage, while taking into account the available voltage from your preamplifier and still leaving a bit of extra headroom. Maybe a value of 70% of the available preamplifier output voltage would be a starting point, if of course the preamplifier's output voltage is at least 1 volt RMS.

I would not go so far as to suggest setting the closed loop gain as low as practical. I would almost never set it below 26 dB; I see no advantage in that. Also, there is little need to set it above 30 dB for anything but really high-powered amplifiers (e.g., > 500 watts at 8 ohms). So we are really only talking about a 4 dB range in gain for most power amplifiers, which is not really a lot.

Cheers,
Bob
 
I would not go so far as to suggest setting the closed loop gain as low as practical. I would almost never set it below 26 dB; I see no advantage in that.
Except, of course, when having to deal with the wondrous world of 100+ dB (horn) drivers, which are near enough headphone sensitivity. (Actually, I know of at least one headphone amp manufacturer who has had a model featuring an LM1876 chip amp with OLG drained away to support a gain of 8 dB for many years.) At some point, even passive attenuation ahead of the power amp is running into its limits.

The EEVBlog episode where Dave Jones narrows down the audible tweeter hiss in his KRK Rokit active speakers to the actual tweeter power amp illustrates the (potential) problem. Just a regular 1" dome tweeter with a modicum of waveguide. What in a passive speaker would have seen considerable attenuation by the crossover becomes a hiss buster in an active one.
 
Member
Joined 2016
Paid Member
Hi All.

I was looking for some information regarding the LTP current requirements to charge the Miller Capacitor.

I couldn't find it in Bob's book. I know it talks about calculating the Miller capacitor, but it didn't relate it back to output power. I guess that it's similar to slew-rate requirements increasing with amplifier power. Anyway, I found the following in the late Randy Slone's book, page 116.

He says "In simple terms, the input stage must have a high enough quiescent current to supply the charging needs of CC at high frequencies. The equation for calculating the peak charge current of CC (CCipk) relative to frequency is as follows:

CCipk = 6.28 x frequency x CC (farads) x EPK(VA)

Let us walk through a sample calculation for demonstration purposes. Assume you want to determine the peak current requirement from the input stage for a CC value of 100 pF at 50 kHz driving an 8-ohm load at 200 watts RMS. The product of 6.28, 50 kHz, and 100 pF comes out to 0.0000314. The peak voltage of the VA [EPK(VA)] can be assumed to be the peak output voltage needed to drive an 8-ohm load at 200 watts RMS (the OPS can be considered as providing unity voltage gain for calculation purposes). Utilizing the common Ohm's law power calculations, the peak voltage across the load, which is assumed to be the peak voltage output of the VA, is about 57 volts. The product of 0.0000314 and 57 is 1.79 mA. Therefore, the input stage will be required to supply a peak charge current (CCipk) of 1.79 mA to accommodate CC at 100 pF at 50 kHz."

My questions are:
1. Is this an accepted method of calculating the IPS current-source requirements?
2. Why did he choose 50 kHz - was this just an arbitrary figure, or was it possibly the gain crossover frequency?
3. If not, what frequency would you use to ensure that you have sufficient IPS current?
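Slone's worked example can be verified numerically; this sketch just re-runs his arithmetic (function and variable names are mine):

```python
import math

def cc_peak_current(freq_hz, cc_farads, v_peak):
    """Peak Miller-cap charging current: Ipk = 2*pi*f*Cc*Vpk."""
    return 2 * math.pi * freq_hz * cc_farads * v_peak

# 200 W RMS into 8 ohms -> 40 Vrms -> about 56.6 V peak
v_peak = math.sqrt(200 * 8) * math.sqrt(2)
i_pk = cc_peak_current(50e3, 100e-12, v_peak)
print(f"Vpk = {v_peak:.1f} V, Ipk = {i_pk * 1e3:.2f} mA")
# Vpk = 56.6 V, Ipk = 1.78 mA (Slone's 1.79 mA comes from rounding Vpk to 57 V)
```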
 
The question of "maximum full-power output frequency" is exactly equivalent to "how much slew rate is enough?" (which happens to be the slope of a sine at that particular frequency), and that question you should find discussed fairly extensively in this forum, with clearly more than one opinion. But for unimpaired distortion performance at 20 kHz you should have a safety factor of at least 3, preferably 5, some might even say 10.

For 200 W / 8 ohms or 40 Vrms (56 Vp), the minimum slew rate required at 20 kHz would be 7 V/µs (= 56 Vp * 2 Pi * 20 kHz). So you'd prefer your amplifier to have at least 21 V/µs, better 35 V/µs, maybe even 70 V/µs. G. Randy Slone with his 17.5 V/µs might even be considered a bit on the skimpy side. Then again, modern constructions would be more likely to use 47 pF than 100 pF Miller caps.

Note that this peak current must be delivered by one leg, so tail current would generally be twice that. Which in turn should explain why most everyone is using either JFET inputs (usually cascoded to offset their input capacitance, not to mention Vds rating - maybe even with a bootstrapped cascode for further reduction of nonlinear input capacitance) or degenerated BJTs.
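The slew-rate arithmetic above can be sketched as follows (the post's 7 V/µs is the rounded result; variable names are mine):

```python
import math

def sine_slew_rate(v_peak, freq_hz):
    """Peak slope of a sine wave: SR = 2*pi*f*Vpk, in V/s."""
    return 2 * math.pi * freq_hz * v_peak

v_peak = math.sqrt(200 * 8) * math.sqrt(2)       # 200 W / 8 ohms -> ~56.6 Vp
sr_min = sine_slew_rate(v_peak, 20e3) / 1e6      # convert V/s to V/us
print(f"minimum SR at 20 kHz: {sr_min:.1f} V/us")   # ~7.1 V/us
for margin in (3, 5, 10):
    print(f"with x{margin} margin: {sr_min * margin:.1f} V/us")
```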
 
Last edited:
Member
Joined 2016
Paid Member
The question of "maximum full-power output frequency" is exactly equivalent to "how much slew rate is enough?" (which happens to be the slope of a sine at that particular frequency), and that question you should find discussed fairly extensively in this forum, with clearly more than one opinion. But for unimpaired distortion performance at 20 kHz you should have a safety factor of at least 3, preferably 5, some might even say 10.

For 200 W / 8 ohms or 40 Vrms (56 Vp), the minimum slew rate required at 20 kHz would be 7 V/µs (= 56 Vp * 2 Pi * 20 kHz). So you'd prefer your amplifier to have at least 21 V/µs, better 35 V/µs, maybe even 70 V/µs. G. Randy Slone with his 17.5 V/µs might even be considered a bit on the skimpy side. Then again, modern constructions would be more likely to use 47 pF than 100 pF Miller caps.

Note that this peak current must be delivered by one leg, so tail current would generally be twice that. Which in turn should explain why most everyone is using either JFET inputs (usually cascoded to offset their input capacitance, not to mention Vds rating - maybe even with a bootstrapped cascode for further reduction of nonlinear input capacitance) or degenerated BJTs.
Thank you. Really appreciate that. So like you said, it just comes down to how much slew rate you think you need, which is of course a debatable thing. I'll try and find some further discussions on the topic.
 
AX tech editor
Joined 2002
Paid Member
It is customary to design for 5x or 10x the required slew rate to make sure you stay away from it at all times. The reasoning is that even when you don't have slew rate limiting per se, approaching slew rate limiting can already start to increase distortion.

In 1977 Walt Jung wrote a landmark paper on slew induced distortion, probably available on line. It was an article series in Audio Amateur.

It also appeared in the AES Journal, if you have access to that:

Walter G. Jung, Mark L. Stephens, Craig C. Todd, "Slewing Induced Distortion and Its Effect on Audio Amplifier Performance, with Correlated Measurement/Listening Results," AES Preprint no. 1252, May 1977


Jan
 
Last edited:
In 1977 Walt Jung wrote a landmark paper on slew induced distortion, probably available on line. It was an article series in Audio Amateur.

Old Colony sold a CD-ROM version of the entire series of 1970's "The Audio Amateur" --

(I have the paper copies going all the way back, but unless you work in an academic library, the paper copies can absorb moisture, etc.)
 
Member
Joined 2011
Paid Member
The required slew rate does indeed vary with the maximum output voltage swing. If the output voltage swing doubles (so the output power into 8 ohms quadruples), the required slew rate also doubles.

Fun fact #1: if the maximum output voltage swing is ZERO, the required slew rate is also zero.

Fun fact #2: if the maximum output voltage swing is INFINITY, the required slew rate is also infinity.
 
Won't it also depend on the amplifier's power rating as well, e.g., 50 watts vs. 500 watts?

Hi Stuart,

The maximum rate of change (slew rate) of a 1-V peak 20 kHz sinewave is 0.125 V/us.

The slew rate at 100 W into 8 ohms is thus 40 V peak * 0.125 V/us/Vpk = 5 V/us. With a generous factor of 10 for very low distortion, you get a need for 50 V/us. With the same margin, a 500 W, 8-ohm amplifier would require sqrt(5) = 2.24 times that, or about 112 V/us. With a little less margin, 100 V/us would still be very good for a 500-W amplifier.

If you have a differential input stage with a tail current of 2 mA that is loaded with a current mirror, the full 2 mA of tail current is available to charge the Miller compensation capacitor in either direction. Achievable slew rate across a capacitor is simply I/C.

A modern amplifier with a 2T VAS might typically have a 30 pF compensation capacitor, so the slew rate will be 2 mA/30 pF = 2e-3/30e-12 = 6.7e7 V/s = 67 V/us.

In this example, the input differential pair (if BJT) must be degenerated with emitter resistors to establish the gain crossover frequency (ULGF), which is often in the neighborhood of 1 MHz.
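Both calculations can be sketched together (the ULGF expression is the common first-order Miller-compensation approximation, ULGF ≈ Gm/(2·pi·Cc·Acl); the closed-loop gain of 20 is my assumption for illustration):

```python
import math

# Achievable slew rate across the Miller cap: SR = I_tail / Cc
i_tail = 2e-3        # 2 mA tail, fully steerable with a current-mirror load
cc = 30e-12          # 30 pF compensation cap
sr = i_tail / cc / 1e6                   # convert V/s to V/us
print(f"SR = {sr:.0f} V/us")             # 67 V/us

# Input-stage transconductance needed for a ~1 MHz ULGF, using the
# first-order relation ULGF ~= Gm / (2*pi*Cc*Acl):
f_ulgf = 1e6
acl = 20                                 # assumed closed-loop gain
gm = 2 * math.pi * f_ulgf * cc * acl
print(f"Gm needed = {gm * 1e3:.2f} mA/V")  # ~3.77 mA/V
```

Degenerating the BJT pair lowers its transconductance toward this target value, which is what sets the ULGF near 1 MHz.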

Cheers,
Bob
 
Member
Joined 2004
Paid Member
However, be careful about defining slew rate. You can get great numbers with the input and driver saturated but the output has little relationship to the input, especially if the recovery time is long. More meaningful would be slew rate at a particular distortion level indicating that the feedback system is still functioning.
 
The really interesting thing in audio would be to measure the slew rate under any voltage and load-current condition - for example, from 1 to 1.1 V, from 2 to 2.1 V, and so on, with infinite possibilities. This is quite difficult, and if we could easily measure it, we would find surprises in most amplifiers.

BR
 
Last edited: