Amplifier clipping...

I am wondering....

Why is it that the most common configuration is a voltage-amplification stage with higher supply rails than the output current stage... e.g. 60 V on the front end and 50 V on the output stage?

The reason for the question is that I ran simulations on a circuit... well, on many circuits in fact... and most often the clipping performance of the front end is much nicer and cleaner than that of the output stage.
So the question is: why not run the front end on lower supplies and let any voltage clipping occur there? (Of course you must consider the SOA of the output devices etc., and accept the extra dissipation from rails that are something like 5 V higher.)

But it would give you the advantage of not needing an extra, higher-voltage supply... instead you can use the voltage already there and drop it down with a high-quality shunt or some other kind of regulation...

Surely Mr Salas can come up with a good circuit exactly for that... 🙂
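If you want to play with the trade-off numerically, here is a back-of-envelope sketch in plain Python. All the rail values, the VAS saturation margin and the follower drop are illustrative assumptions, not taken from any particular amplifier:

```python
# Rough clip-point estimate for a conventional three-stage amp.
# Whichever stage's ceiling is lower sets where the amp clips.
# All numbers are illustrative assumptions, not measurements.

def clip_points(vas_rail, out_rail, vas_sat=2.0, follower_drop=3.5):
    """Output-voltage ceiling imposed by each stage."""
    vas_limited = vas_rail - vas_sat - follower_drop  # ceiling set by the VAS
    out_limited = out_rail - follower_drop            # ceiling set by the output rail
    return vas_limited, out_limited

# Conventional scheme: front end 10 V above the output stage.
print(clip_points(vas_rail=60.0, out_rail=50.0))  # (54.5, 46.5) -> output stage clips first

# The proposal: front end a few volts BELOW the output stage.
print(clip_points(vas_rail=45.0, out_rail=50.0))  # (39.5, 46.5) -> front end clips first
```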
 
Hi,
start your amplifier proposal with the same voltage rails for all stages.

Can the performance of the amplifier be improved?

Would extra smoothing on the PSU help?
Would extra decoupling at the output devices help?
Would extra RC filtering on the drivers and/or pre-drivers help?
Would extra RC filtering on the voltage amp stage help?

The list of questions goes on and on and on .....
 
The answer to your question, I think, is that in order to drive the outputs fully in most circuits (due to the drive requirements, losses, and the follower configuration), one needs more swing in the driver. However, your question is valid IMO, and it is easy enough to build an amp of this type where the driver will clip before the output does, by using a lower rail voltage for the driver section. It might be worth trying to see how it measures and sounds. I've often thought about doing just this myself. 😀

_-_-bear

PS: Keep in mind that the drive requirements are not static and depend in part on the load being driven - resistive loads are NOT totally indicative of an amp's performance in the real world.
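To put a number on "more swing", here is a minimal sketch (plain Python; the 0.7 V Vbe drops, 0.22 ohm emitter resistors and triple-EF structure are assumptions, not a specific design) of how much VAS swing a follower output stage needs for a given peak output, and how that grows into a lower-impedance load:

```python
# Swing the front end must deliver ahead of a follower output stage.
# Assumed values: 0.7 V per Vbe drop, 0.22 ohm emitter resistors.

def required_vas_swing(v_out_pk, r_load, n_followers=3, vbe=0.7, re=0.22):
    i_pk = v_out_pk / r_load                  # peak load current (resistive load)
    return v_out_pk + n_followers * vbe + i_pk * re

print(required_vas_swing(46.0, 8.0))   # ~49.4 V for 8 ohms
print(required_vas_swing(46.0, 4.0))   # ~50.6 V for 4 ohms: more current, more drive
```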
 
Hi MiiB,

the reasons are as Bear described above: firstly, to provide ample voltage swing if the output devices are lateral MOSFETs; but secondly, the higher the applied rail voltage, the higher both the second- and third-order intercept points, so both harmonic and intermodulation distortion decrease.

Lowering the supply voltage would in fact increase the distortion, both spurious and harmonically related, for the same input signal (provided the gain remains the same).
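That trend can be illustrated with a toy model. This is only a sketch: a memoryless cubic soft nonlinearity whose compression voltage is assumed to scale with the rail, which is a huge simplification of a real amplifier (and the absolute numbers are far above what any amp with feedback produces):

```python
# Toy model: y = x - x^3/(3*vc^2). For a sine of peak V the third-harmonic
# distortion is approximately V^2 / (12 * vc^2). The compression voltage vc
# is ASSUMED proportional to the rail; real amps are far more complicated.

def hd3_percent(v_pk, vc):
    return 100.0 * v_pk**2 / (12.0 * vc**2)

for rail in (40.0, 50.0, 60.0):
    vc = 0.8 * rail   # assumed: compression sets in at 80 % of the rail
    print(f"rail {rail:.0f} V -> HD3 {hd3_percent(20.0, vc):.2f} % at 20 V peak")
```

Same 20 V swing, higher rail, lower distortion, which is the direction Nico describes.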

Nico
 
An amplifier like a Leach can only come within about 7V of either rail. Using a higher voltage for the front-end helps, but can cause some 'sticking' in the slow output devices when coming out of clipping.

An easy fix is to add a Baker clamp, like the one in the APT Model 1. A fast diode is connected from the VAS stage to the rail the outputs run from. Another fast diode is inserted in series between the VAS output and the bias network, and the output stage is connected to the bias network.

The first diode pair conducts when the output voltage from the VAS exceeds the rail voltage of the output stage. The diodes in series with the bias network drop the signal voltage applied to the driver transistors enough that the front end clips before the drivers and outputs can saturate. The idea is that the fast, low-current devices in the front end recover from clipping (via the Baker clamp) far more gracefully than the slow, high-current outputs can.
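A behavioural sketch of the clamp action (Python/numpy; the ±60 V VAS rail, ±50 V output rail and 0.6 V diode drop are assumed for illustration and are not the APT's actual values):

```python
import numpy as np

# Behavioural sketch of a Baker clamp: the fast diode to the OUTPUT stage
# rail conducts once the VAS tries to swing past it, so the drivers and
# outputs are never driven into saturation. Values below are assumptions.

VAS_RAIL, OUT_RAIL, V_DIODE = 60.0, 50.0, 0.6

t = np.linspace(0.0, 1e-3, 1000)
vas = 58.0 * np.sin(2 * np.pi * 1e3 * t)    # overdriven VAS output

unclamped = np.clip(vas, -VAS_RAIL, VAS_RAIL)                        # swings to ~58 V
clamped = np.clip(vas, -(OUT_RAIL + V_DIODE), OUT_RAIL + V_DIODE)    # held to 50.6 V

print(unclamped.max(), clamped.max())   # ~58.0 vs 50.6: the clamp limits the drive
```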
 
The Leach I built and measured ran at ±58.5 Vdc when idling.
The PSU sagged to ~±56 V when delivering full power into 8r0.
The peak of the sine-wave voltage, just short of clipping, was ~52.8 Vpk (174 W into 8r0).
I wonder how much that peak voltage can rise, if the input stages are separately powered?
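Those figures tally; a quick check in Python using only the numbers quoted above:

```python
# Sanity check of the measured Leach figures above.
v_idle, v_sag, v_pk, r_load = 58.5, 56.0, 52.8, 8.0

power = v_pk**2 / (2 * r_load)                       # sine power into the load
print(f"{power:.0f} W")                              # ~174 W, as measured

print(f"rail-to-peak loss: {v_sag - v_pk:.1f} V")    # ~3.2 V lost inside the amp
print(f"PSU sag:           {v_idle - v_sag:.1f} V")  # ~2.5 V lost in the supply
```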
 
"I wonder how much that peak voltage can rise, if the input stages are separately powered? "

With a high-voltage tier, the peak voltage when driving a 4R load rises about 5 V relative to the 7 V~8 V loss without one, so the peak output voltage is only down about 2 V~3 V, not 7 V~8 V (three pairs of outputs).

On an original Leach with two pairs of outputs running on ±57 V I was able to get 52 V peak at the point of clipping into a loudspeaker, using dynamic program material. The speaker was nominally 8R, and over 20R through most of the midrange.
 
In a semi-related example, the SAE MKXXXI and the GAS Son run on the same voltage; both look roughly the same as the Leach, but with no pre-driver stage. The GAS Son, with its high-voltage tier, will drive about 4.5 V more (peak) into an 8R load (it does have a bigger VA transformer, though).
 
For my money, the best way is to avoid clipping altogether, by increasing the rail voltage (front and back) and lowering the gain. This leaves enough headroom above the maximum power output that the input sensitivity can demand. For example: an input sensitivity of 2 Vrms drives the amp to 100 W output. With the increased rail voltage the amp is capable of 125 W (or more), which leaves 25 W of headroom above the maximum input signal.

I have put this method to work in several amps with excellent results.
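For what it's worth, the arithmetic behind that example (Python; the 8 ohm load is my assumption, since the post doesn't specify one):

```python
import math

# Headroom arithmetic for the example above, ASSUMING an 8-ohm load.
r = 8.0
sens = 2.0                                 # Vrms input for rated output

v_rms_100 = math.sqrt(100 * r)             # ~28.3 Vrms for 100 W
gain = v_rms_100 / sens                    # ~14.1x voltage gain
v_pk_125 = math.sqrt(2 * 125 * r)          # ~44.7 V peak needed for 125 W

print(f"gain {gain:.1f}x ({20 * math.log10(gain):.1f} dB)")
print(f"peak swing needed for 125 W: {v_pk_125:.1f} V")
print(f"headroom {10 * math.log10(125 / 100):.2f} dB")   # ~1 dB above rated power
```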
 
Clipping is not particularly audible in the APT with its Baker clamp.

The red LED on the front panel is labeled 'distortion alert', and the amplifier has to be in clipping for about 40 ms before it lights up. This corresponds well with the onset of audible distortion, and on dynamic program material the amplifier may be driven several dB into clipping.

I have heard other amplifiers with a similar output stage sound bad the instant they are driven into clipping.
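The LED's behaviour is easy to mimic in software. A sketch only (Python/numpy): the real APT detector is an analogue circuit, and the 48 kHz sample rate, 98% threshold and integrate-total-clipped-time rule are all my assumptions:

```python
import numpy as np

# Sketch of a clip-duration detector in the spirit of the APT's LED.
# Sample rate, threshold and the "total clipped time" rule are assumptions.

FS = 48_000  # sample rate, Hz

def distortion_alert(signal, rail, margin=0.98, hold=0.040):
    # Light the LED once the signal has spent >= 40 ms at/near the rail.
    clipped_time = np.count_nonzero(np.abs(signal) >= margin * rail) / FS
    return clipped_time >= hold

t = np.arange(FS) / FS                       # 1 second of signal
clean = 40.0 * np.sin(2 * np.pi * 50 * t)    # stays below a 50 V rail
hard = np.clip(3 * clean, -50.0, 50.0)       # driven well into clipping

print(distortion_alert(clean, 50.0))   # False
print(distortion_alert(hard, 50.0))    # True - long flat tops
```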
 
"For my money, the best way is to avoid clipping altogether, by increasing the rail voltage (front and back) and lowering the gain. ... I have put this method to work in several amps with excellent results."

I will vouch for this approach.
 
"The peak of the sine-wave voltage, just short of clipping, was ~52.8 Vpk (174 W into 8r0). I wonder how much that peak voltage can rise, if the input stages are separately powered?"

"With a high-voltage tier ... the peak output voltage is only down about 2 V~3 V, not 7 V~8 V."

Our experiences of the actual amplifier do not agree.

I used a common supply and the amplifier lost ~3 V from rail to output at maximum power into 8r0.
At best it might gain an extra 1 V if the front end were powered separately.

You are saying that, taking account of PSU sag, you would see only a 2 V loss through the amplifier and PSU, and that when the front end is driven from the common PSU this 2 V increases to 7 V. That seems impossible for a Leach Triple EF.
Something is wrong.
 
"What makes you think that you can only make it 10% - a new general rule? You can make rail voltages whatever you want."

Makes me think? You assume too much! 🙄

A typical design of this type has the driver at about 10% higher rail voltage than the outputs - your comments, please, on the corresponding differential in distortion.

AND, what sort of voltage ratio do you use, or suggest, or have in mind between the driver and the output... are you saying ±400 Vdc on the driver rails and ±50 Vdc for the outputs? Just asking what you are saying.

_-_-bear
 
Hi Bear,

This is not a rule, though; it is just an observation from the development phase.

In the pre-production ELD-2, the pre-driver rails run at 120 V, series/shunt regulated, and the output-stage rails at 70 V raw, from an 800 VA transformer with 80 mF per amp.

The pre-driver follows John's earlier comment about never overdriving it: 2 V input provides the 68 V output (4 dB headroom), just a little more than half the rail.

Both THD and IMD are almost two orders of magnitude better than in the old version, which ran the whole amp on 67 V rails. The output stage is a hybrid of lateral MOSFETs and BJTs.
 
I agree with the point that you can dimension an amplifier so it doesn't run into clipping with 2 V in.
But what if you use a preamplifier, e.g. a tube-based one, where a voltage swing of more than 5 V is entirely possible? So the question was not about avoiding clipping, but about how to make the amplifier behave as gently as possible when it is driven into clipping. When simulating I have seen that the waveforms are very different, and rather dependent on topology but also on where the clipping occurs: when clipping the output stage one sometimes gets edgy corners and also (almost) vertical flanks on the curves. This is not very desirable and can lead to damage. In my world, misuse and overuse are things that you must expect and anticipate.
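What MiiB describes shows up even in a crude numerical experiment. In the sketch below (Python/numpy), a hard clipper stands in for output-stage clipping and a tanh curve for a gentler front-end limit; both stand-ins are assumptions, not circuit models:

```python
import numpy as np

# Hard (output-stage) clipping versus a rounded (front-end) limit.
# The sharp corners of the hard clip put far more energy into high-order
# harmonics - the "edgy corners" and near-vertical flanks seen in simulation.

FS = 1_000_000
t = np.arange(int(FS * 0.01)) / FS               # 10 ms
drive = 80.0 * np.sin(2 * np.pi * 1000 * t)      # 1 kHz, well past a 50 V rail

hard = np.clip(drive, -50.0, 50.0)               # output stage hits the rail
soft = 50.0 * np.tanh(drive / 50.0)              # rounded limiting

def hf_fraction(x, f_cut=10_000):
    """Fraction of signal power above f_cut (i.e. in high-order harmonics)."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / FS)
    return spec[freqs > f_cut].sum() / spec.sum()

print(f"hard clip HF energy fraction: {hf_fraction(hard):.1e}")  # much larger
print(f"soft clip HF energy fraction: {hf_fraction(soft):.1e}")
```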
 