Some questions about amplifier classes, distortion and efficiency

The gain of a Class AB amp will change slightly as the amp moves from the small-signal Class A mode into Class B if the amp has emitter resistors, which is the common case. For currents smaller than the bias setting, the two emitter resistors (of an amp with a single pair of output devices) can be regarded as effectively in parallel, but for higher currents, with the amp working in Class B, in essence only one emitter resistor has to be considered.
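To put rough numbers on that gain step, here is a minimal sketch; the emitter resistor and load values are just illustrative assumptions, and the transistors' own intrinsic resistance is ignored for the moment:

```python
# Rough size of the gain step between the Class A and Class B regions of an
# emitter-follower output stage. RE and RL are illustrative assumptions.
RE = 0.22   # each emitter resistor, ohms
RL = 8.0    # loudspeaker load, ohms

# Small signals, both halves conducting (Class A region):
# the two emitter resistors act roughly in parallel.
gain_A = RL / (RL + RE / 2)

# Larger signals, one half cut off (Class B region):
# only one emitter resistor is left in the signal path.
gain_B = RL / (RL + RE)

print(round(gain_A, 4), round(gain_B, 4))   # ~0.9864 vs ~0.9732
```

That is a gain step of a bit over one percent, which is the slight change described above.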
 
You can't subtract off distortion by subtracting off a (supposed) distortionless output. If you have eliminated D in the output (as you claim) then you can't use this perfect output to cancel D in the input, as you have no D left to use. Hence feedback can only reduce distortion, not eliminate it.
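In symbols, for the standard textbook loop with forward gain $A$, feedback fraction $\beta$, and distortion $D$ injected inside the amplifier (a generic sketch, not any particular circuit):

$$V_{out} = \frac{A}{1+A\beta}\,V_{in} + \frac{D}{1+A\beta}$$

With $A = 10$ and $\beta = 1/10$ the distortion is divided by $1+A\beta = 2$ and the closed-loop gain falls to 5; no choice of $\beta$ makes the $D$ term vanish.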

A ---Amp---> 10A + D
A - 1/10 D ---Amp---> 10A - D + D = 10A
^     ^
|     |
input - ( 1/10 output - input )

- ( 1/10 output - input ) = feedback. Update: I changed the position of the two variables.

If I have eliminated D, as I claim, then I can't use it at the input since I have no D left to use; but then, if there is no D in the output, I don't need any D at the input for correction anyway.
It seems that this way the amplifier o/p is self-correcting.
Although some part of your argument may be right as well, I believe the system will then switch between the distorted output and the distortion-free output at the speed of electrons :D, which I can filter using a capacitor.

Also, since my feedback is 1/10, if the amp amplifies by only 9X then the feedback would make the input 1.1A instead of A to adjust the output to 10X. Again the o/p switches between 9X and 10X at the speed of electrons.
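Here is a rough discrete-time sketch of the loop I mean; the gain of 10, the 1/10 feedback and a fixed error D are from my example above, while the relaxation factor alpha is just an assumption standing in for the amp's finite speed:

```python
# Crude discrete-time model of the loop: forward gain A, feedback beta,
# a fixed error D added by the amp, and a relaxation factor alpha that
# stands in for the amp's finite speed (alpha is an illustrative assumption).
A = 10.0      # forward gain
beta = 0.1    # feedback fraction (1/10)
D = 0.2       # error/distortion added by the amp
vin = 1.0     # input level
alpha = 0.1   # how fast the output moves toward its new value

out = 0.0
for _ in range(500):
    error = vin - beta * out              # input minus the fed-back fraction
    out += alpha * (A * error + D - out)  # output relaxes toward A*error + D

print(round(out, 4))                   # where the loop settles
print((A * vin + D) / (1 + A * beta))  # algebraic fixed point of the loop: 5.1
```

The second print is just the algebraic fixed point, (A*vin + D)/(1 + A*beta), for comparison with where the loop ends up.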

What do you think?
 
No worries, DF96. I am aware of what "predistortion" (which is a network compensation problem) is used for in RF - and it works out pretty well too. But RF amplification is orders-of-magnitude less sensitive to straight signal non-linearity compared to audio. Just the nature of it ... using carriers and sub-band modulation. The "data" is impressed upon far more gross signal characteristics such as "frequency" (modulation) and "phase" and "amplitude", and even "quadrature" (for modem/digital work), which averages over dozens to hundreds of carrier cycles ... compared to every twitch and wiggle of analog audio waveforms.

GoatGuy
 
Amit_112dB said:
What do you think?
I think you need to do some more reading, on electronics and algebra. Then you can do some reading on feedback.

GoatGuy said:
But RF amplification is orders-of-magnitude less sensitive to straight signal non-linearity compared to audio.
I'm not certain receiver designers would agree with that.
 
Actually I am certain that RF receiver designers would disagree with my statement, especially if young. After 4 decades of design and topology analysis, one begins to be able to separate what is factually important from the theoretical and philosophical chaff. RF "distortion" generates harmonics - which tend to leak all over the place. They are undesired, and careful layout (and good auto-leveling biasing) and good selection of amplification devices go a LONG way toward keeping everything nice and linear, and relatively free from accumulating distortions (harmonics). But the point remains: the information impressed on the RF itself is strikingly independent of the sinusoidal purity of the RF carrier. Just consider what heterodyning does to the signal! Input RF is amplified by a rather expensive (but typically single) very-low-noise, very-wide-bandwidth device. It is then mixed with a subrate carrier through a nonlinear device (such as a lightly biased diode or small tube rectifier) in order to generate harmonics in "beat frequency" form. Tank circuits that follow then eliminate the original signal frequencies, leaving the IF (intermediate frequency) signal to be passed through tight filters, increasing passband selectivity. Often the signal is again 'heterodyned' to another IF band, to further improve separation of closely spaced frequencies and overall sensitivity. Yet in this extraordinarily distorted (in frequency) signal, all the "data" impressed on it remains, as a combination of amplitude, phase and frequency shifts. It is that which is then recovered (through a "discriminator" of varying complexity) as a direct waveform of analog content.
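A minimal numerical sketch of that mixing step (the frequencies, sample rate and square-law mixer below are illustrative assumptions, not a model of any particular receiver):

```python
# Heterodyning sketch: an RF tone and a local oscillator through a
# square-law device produce sum and difference ("beat") frequencies.
import numpy as np

fs = 1_000_000                    # sample rate, 1 MHz
t = np.arange(0, 0.01, 1 / fs)    # 10 ms of samples
f_rf, f_lo = 100_000, 90_000      # RF input and local oscillator, Hz

rf = np.cos(2 * np.pi * f_rf * t)
lo = np.cos(2 * np.pi * f_lo * t)

mixed = (rf + lo) ** 2            # square-law mixer (e.g. a lightly biased diode)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
peaks = freqs[spectrum > 0.1 * spectrum.max()]
print(np.unique(peaks))           # 0, 10 kHz (the IF), 180, 190 and 200 kHz
```

The tank circuits and IF filters then keep only the 10 kHz difference term and discard everything else.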

This applies even to digital signals such as are used in cell-telephony and the like. Ultimately, a phase-amplitude quadrature signal (intrinsically an "analog" complex waveform, very much akin to a left-and-right stereo channel set of waveforms) is produced, which on being digitized and further 'sifted', becomes the streams of 1s and 0s that constitute a digital stream of data. It is clearly NOT the waveform purity of the carrier that is even marginally related to the data impressed onto the carrier. It was this magnificent abstraction that allowed the whole "radio revolution", including the frankly amazing work done in the 1940s and 1950s allowing for multiple sidebands of "data" to be impressed on an FM signal so that television could be broadcast in the frequency bands achievable with then-practical tube amplification circuitry.
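As a purely illustrative sketch of that last step, here is a generic QPSK-style decision turning a few made-up I/Q samples into bits (the modulation and values are assumptions, not specific to any system described here):

```python
# Minimal I/Q ("quadrature") to bits sketch: a QPSK-style hard decision,
# one bit from the sign of I and one from the sign of Q. Values are made up.
import numpy as np

symbols = np.array([0.9 + 1.1j, -1.0 + 0.8j, -0.9 - 1.2j, 1.1 - 0.95j])

bits = [(int(s.real > 0), int(s.imag > 0)) for s in symbols]
print(bits)   # [(1, 1), (0, 1), (0, 0), (1, 0)]
```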

GoatGuy

PS: the above, unfortunately, "dates me" back to when there were small, but active dinosaur populations running about the country. LOL ...
 
GoatGuy said:
But the point remains: the information impressed on the RF itself is strikingly independent of sinusoidal purity of the RF carrier.
That is not what I was thinking of. In RF the usual problem is not distortion of the signal affecting the signal itself, but distortion affecting everyone else on other channels (or the converse: incoming interference). That is why low distortion is needed. 'Spectral regrowth' is the nice name for a nasty phenomenon.
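A minimal sketch of the mechanism (two closely spaced tones through a weakly cubic amplifier; all values are illustrative assumptions):

```python
# Intermodulation / "spectral regrowth" sketch: two in-channel tones through
# a mildly cubic nonlinearity produce products at 2*f1 - f2 and 2*f2 - f1,
# landing right next to the wanted signal.
import numpy as np

fs = 1_000_000
t = np.arange(0, 0.01, 1 / fs)
f1, f2 = 100_000, 101_000          # two tones, 1 kHz apart

x = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
y = x - 0.05 * x**3                # weakly nonlinear amplifier

spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / fs)
band = (freqs > 95_000) & (freqs < 106_000)
print(freqs[band][spectrum[band] > 1e-3 * spectrum.max()])
# -> 99, 100, 101 and 102 kHz: the original tones plus third-order products
```

With a modulated carrier instead of two tones, the same mechanism smears energy over the adjacent channels, hence the 'regrowth'.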
 
I agree [re: amplification distortion in the RF stages of a receiver, leaking back out the antenna, thus causing interference to other devices within proximity]. If this is what you were thinking, then ... we agree, and it is settled. My take on your original comment had to do with the "data" impressed on the carrier. Clearly... different topics!

Or as they say in some sports, "no harm, no foul"

GoatGuy
 
The gain of a Class AB amp will change slightly as the amp moves from the small-signal Class A mode into Class B if the amp has emitter resistors, which is the common case. For currents smaller than the bias setting, the two emitter resistors (of an amp with a single pair of output devices) can be regarded as effectively in parallel, but for higher currents, with the amp working in Class B, in essence only one emitter resistor has to be considered.

Yes, the intrinsic emitter resistance of the transistor falls as the stage moves into large-signal operation, because gm rises with current. This resistance is in series with each emitter resistor; both halves contribute to the output at small-signal Class A levels, and only one need be considered during Class B.
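For reference, the usual small-signal approximation for that intrinsic resistance (a textbook figure, with $V_T \approx 26\,\text{mV}$ at room temperature) is

$$r_e \approx \frac{1}{g_m} \approx \frac{V_T}{I_C} \approx \frac{26\ \text{mV}}{I_C},$$

so around a 100 mA bias it is roughly 0.26 ohms per device, comparable to a typical emitter resistor, while at an amp or more it shrinks to tens of milliohms or less.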
 