• WARNING: Tube/Valve amplifiers use potentially LETHAL HIGH VOLTAGES.
    Building, troubleshooting and testing of these amplifiers should only be
    performed by someone who is thoroughly familiar with
    the safety precautions around high voltages.

Negative Feedback

Is this referring to the Putzeys "F-word" article?
Possibly; I can't find a link above. I just recall seeing a graph several times in the past that showed the higher harmonics got worse at low N Fdbk, but then got better with higher N Fdbk. My point was that this is just how the mathematics of the Discrete FFT (taken over a fixed time interval rather than as a continuous transform) represent the signal from an amplifier whose transfer function is only slightly "bent" (distorting). It doesn't sound worse, as is the usual presumption.

Take a 1.01 power law transfer function and it will look God-awful bad with the Discrete FFT, but it will sound indistinguishable from a 1.00 (linear) amplifier. The Discrete FFT needs some post-processing to determine the sharpness of any curvature or kink in the transfer function.

One could do the recursive N-Fdbk calculations for low N Fdbk and indeed arrive at those same results, reality being a continuous version of that. But the sound will be better than without the correction. Re-entrant harmonic generation is only an "apparent" problem, not a real one. Notice that most tube amplifier builders have preferred the sound of low (10 to 13 dB) N Fdbk anyway. Even at 3 dB of N Fdbk, one is not going to find any new screeching-chalkboard sounds added to the signal. A long tail of tapering-off harmonics (usually with alternating signs) is just how the Discrete FFT math models a slight deviation from a simple integer harmonic. (A continuous transform would show the 1.01 case as something like 1.01, but that's not what the sound-card FFTs are using, since that would take a long observation time per display update.)
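
A quick numerical sketch of the 1.01 power-law claim above (Python, with illustrative numbers of my own, not from any post here): pass a full-scale sine through y = sign(x)*|x|^1.01 and look at the harmonic bins. The odd harmonics form exactly the kind of long, slowly tapering tail described, yet every one of them sits far below the fundamental.

    import numpy as np

    # Spectrum of a sine passed through a barely nonlinear "1.01 power law"
    # transfer function. Sample rate, tone frequency and record length are
    # arbitrary, chosen so the tone lands exactly on an FFT bin (1 Hz bins).
    fs, f0, N = 48000, 1000, 48000
    t = np.arange(N) / fs
    x = np.sin(2 * np.pi * f0 * t)
    y = np.sign(x) * np.abs(x) ** 1.01        # odd-symmetric, so only odd harmonics appear
    Y = np.abs(np.fft.rfft(y)) / (N / 2)      # single-sided amplitude spectrum
    fund = Y[f0]                              # bin f0 is the fundamental
    for k in range(2, 16):
        print(f"H{k}: {20*np.log10(Y[k*f0]/fund + 1e-300):7.1f} dB re fundamental")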
 
John Linsley-Hood published this figure.
Yes, that's the one I've seen. Makes low Fdbk look bad (but it isn't; it's just a math issue with the Discrete FFT using a short time interval).

late edit to my above post #61:
FFT frequency resolution is roughly 1/(the time interval measured), so if you want fine frequency resolution, you wait a long time for each update. One -could- overlap many long, slightly delayed FFT intervals to get both good resolution and rapid updates, but the processing power required would be exorbitant.
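
As a worked example (my own numbers, just for scale): since delta_f ~ 1/T, resolving components 1 Hz apart needs a record of at least about 1 s, so the display can refresh at best about once per second; a 10 ms record refreshes quickly but only resolves about 100 Hz.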

For example, if the Discrete FFT uses an interval that fits only a non-integer number of some frequency's cycles, the spectrum of a chopped-off sine wave gets included, so a spray of spurious components (leakage) comes up across the display. Which explains the use of tapered windows to minimize such end effects.
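
A small Python sketch of that end effect, with assumed numbers of my own: one tone placed exactly on an FFT bin, one placed half a bin off, each analyzed with a rectangular and with a Hann window. The half-bin case sprays energy far from the tone under the rectangular window, and the tapered window pulls that spray back down.

    import numpy as np

    fs, N = 48000, 4096
    bin_hz = fs / N
    t = np.arange(N) / fs
    for label, cycles in [("integer cycles", 100.0), ("half a cycle left over", 100.5)]:
        x = np.sin(2 * np.pi * cycles * bin_hz * t)
        for win_name, w in [("rectangular", np.ones(N)), ("Hann", np.hanning(N))]:
            X = np.abs(np.fft.rfft(x * w))
            peak = int(np.argmax(X))
            far = np.delete(X, np.arange(peak - 3, peak + 4))   # ignore bins near the tone
            print(f"{label:24s} {win_name:12s} worst far-off bin: "
                  f"{20*np.log10(far.max()/X.max() + 1e-300):7.1f} dB re the peak")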
 
The John Linsley-Hood figure looks like my memory of the P.J. Baxandall paper. This was done before FFT modelling was a thing, mid-1960s? Putzeys' "F-word" article was in Linear Audio Vol 1, and was certainly modern modelling.

But the same results were predicted in the L. I. Farren paper of 1938, referenced above by MarcelvdG, and probably in the 1934 R. Feldtkeller paper referenced in Farren's. A kind soul has given me a copy of the vRF paper, and I'll leave an English translation here when finished (it'll take a while, but might be worth it). These came from first principles and not from measurement artifacts. Feedback is recursive, like so much of the living world, and therefore messy, like so much of the living world.

All the best fortune,
Chris
 
Feldtkeller calculates a normalized third-order Taylor coefficient, the one that causes third-order distortion, as a function of the loop gain when the forward path is linear, has a power of 1.5 non-linearity, a quadratic non-linearity or a third-order non-linearity.

The power of 1.5 is the theoretical curve of the voltage to current transfer of a valve according to Child's law, although practical valves don't always abide by Child's law. Loop gains around -0.7 then cause maximum third-order distortion.
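
For anyone who wants to see the re-entrant mechanism without digging up the 1934 paper, here is a generic power-series sketch in sympy (my own notation and symbols, not Feldtkeller's): write the forward path as a1*e + a2*e^2 + a3*e^3 with e = x - B*y and solve for the closed-loop series coefficients.

    import sympy as sp

    # Closed-loop power-series coefficients for y = f(x - B*y), with
    # f(e) = a1*e + a2*e**2 + a3*e**3. Generic mechanism only, not
    # Feldtkeller's exact derivation.
    x, B, a1, a2, a3, b1, b2, b3 = sp.symbols('x B a1 a2 a3 b1 b2 b3')
    y = b1*x + b2*x**2 + b3*x**3                  # assumed closed-loop series
    e = x - B*y                                   # error signal at the summing node
    residual = sp.expand(a1*e + a2*e**2 + a3*e**3 - y)
    c1 = sp.solve(residual.coeff(x, 1), b1)[0]    # match the x^1 terms
    c2 = sp.solve(residual.coeff(x, 2).subs(b1, c1), b2)[0]
    c3 = sp.solve(residual.coeff(x, 3).subs({b1: c1, b2: c2}), b3)[0]
    print('b1 =', sp.simplify(c1))                # a1/(1 + a1*B)
    print('b2 =', sp.simplify(c2))                # a2/(1 + a1*B)**3
    print('b3 =', sp.simplify(c3))                # (a3*(1 + a1*B) - 2*a2**2*B)/(1 + a1*B)**5

The -2*a2^2*B term in b3 is the re-entrant part: feedback turns second-order curvature into a third-order product. With a3 = 0 its magnitude rises from zero, peaks at a modest amount of loop gain, then falls as the feedback gets stronger, which is the same general shape as the curves discussed above.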
 
(Attached image: presumably the Feldtkeller plot described above.)
 
Servo Systems in the natural world:
Exercise versus heart rate
Sugar intake versus insulin levels increasing
Batter's motion and adjustment to hit the ball
These natural servo systems are good to have.

Bad effects:
Open Loop Insulin levels
Open Loop heart rate

Perhaps you have seen a graph of a step-test:
The test starts with the heart rate at its resting state. After the steps are started, the heart rate increases. When the steps are stopped, the heart rate decreases;
But then the decreasing heart rate actually goes below the starting resting heart rate.
Then, slowly, it increases back to the resting rate.

Negative feedback response in many circuits is very similar, if you use a square wave signal, and look at the amplifier output in the time domain.
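
A toy numerical sketch of that analogy in Python (arbitrary numbers of my own, not any particular amplifier): a slightly underdamped second-order negative-feedback loop driven by a pulse overshoots while the drive is on, and dips below the resting level after the drive stops, just like the heart-rate curve.

    import numpy as np

    # Second-order loop: y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u, driven by a 5 s pulse.
    wn, zeta = 2 * np.pi * 1.0, 0.3            # natural frequency (rad/s), light damping
    dt, T = 1e-3, 10.0
    n = int(T / dt)
    u = np.where(np.arange(n) * dt < 5.0, 1.0, 0.0)   # "stepping" for 5 s, then rest
    y = v = 0.0
    undershoot = 0.0
    for k in range(n):
        a = wn**2 * (u[k] - y) - 2 * zeta * wn * v    # acceleration from the loop equation
        v += a * dt
        y += v * dt
        if k * dt > 5.0:
            undershoot = min(undershoot, y)           # lowest point after the drive stops
    print("dip below the resting level after the 'exercise' stops:", round(undershoot, 3))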
 
The result looks like a continuous process. On the time scale of the amplifier HF limit there is some small amount of circulating re-correction to the initial distortion correction. This makes the output converge quickly to what the feedback factor predicts, not worse distortions as usually assumed.
If you can back this stuff up, the Norwegian Nobel committee for Physics will welcome you excitedly!

Jan
 
Perhaps you have seen a graph of a step-test:
The test starts with the heart rate at its resting state. After the steps are started, the heart rate increases. When the steps are stopped, the heart rate decreases;
But then the decreasing heart rate actually goes below the starting resting heart rate.
Then, slowly, it increases back to the resting rate.
This is normal operation for a feedback loop that is slightly underdamped. It's all well-known behaviour that can be modified by modifying the loop time constants.
Any 'Servo Circuits 101' textbook explains it.

Jan
 
The result looks like a continuous process. On the time scale of the amplifier HF limit there is some small amount of circulating re-correction to the initial distortion correction. This makes the output converge quickly to what the feedback factor predicts, not worse distortions as usually assumed.

System delay is well known in control system theory; it is distinct from propagation delay and unrelated to it.
https://www.mathworks.com/help/control/ug/analyzing-control-systems-with-delays.html

An ideal low-pass amplifier with a bandwidth wide compared to the input signal (for example, a decade wider) causes an integrator delay (or rate error) of the input signal that is inversely proportional to the amplifier bandwidth.

The phase response of such an amplifier is very nearly linear in frequency, and so is equivalent to a pure time delay.
The signal waveform shape is also preserved, again due to the linear phase characteristic.
See section 9.4, and figure 9.16 (below) in Dostal, Operational Amplifiers (first edition), for discussion of this delay.

The ideal low-pass (single-pole) phase response for an amplifier with bandwidth BW in Hz is phase = -arctan(f/BW) radians.
This is approximately linear for f << BW, since in this region phase = -arctan(f/BW) ~ -f/BW.
The signal time delay is then -d(phase)/d(omega) = -(1/(2*pi))*d(phase)/df ~ 1/(2*pi*BW), i.e. a constant delay inversely proportional to the bandwidth (equivalently 1/BW if the bandwidth is expressed in rad/s).
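
A quick numerical check of that estimate in Python, with an assumed bandwidth of 100 kHz (my number, purely illustrative):

    import numpy as np

    # Group delay of a single-pole low pass: -d(phase)/d(omega) should sit near
    # 1/(2*pi*BW) for f << BW.
    BW = 100e3
    f = np.linspace(1.0, BW / 10, 1000)
    phase = -np.arctan(f / BW)                    # radians
    delay = -np.gradient(phase, 2 * np.pi * f)    # -d(phase)/d(omega), in seconds
    print("predicted 1/(2*pi*BW):", 1 / (2 * np.pi * BW), "s")
    print("computed delay from", delay[0], "to", delay[-1], "s over f = 0 .. BW/10")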
 

(Attachment: figure 9.16 from Dostal, Operational Amplifiers, referenced above.)

Actually, a voltage propagates in copper around .64 of the speed of light.
But that's still very fast!

OK, at 120,000 mi./sec. that would make the wavelength of a 20 kHz sine about 6 miles. The closed loop in an amp is about 2 ft, being generous. So we get around the loop within the first 1/16,000 of the sine. Yes, very fast. Again, this demonstrates that a phase-shifted signal is not delayed; it is an instantaneous voltage difference across a device that has to be considered and managed (planned for and adjusted) to get the best use of it, or to avoid it turning into PFB (positive feedback).
 
The result looks like a continuous process. On the time scale of the amplifier HF limit there is some small amount of circulating re-correction to the initial distortion correction. This makes the output converge quickly to what the feedback factor predicts, not worse distortions as usually assumed.
If you can back this stuff up, the Norwegian Nobel committee for Physics will welcome you excitedly!

Assuming a phase-corrected amplifier, so we only have very short propagation times to deal with, and an amplifier BW well in excess of the signal:

Signal gets applied to the amplifier, and out comes a full-amplitude distorted version a moment later. Fdbk sends back the inverted output with full dist. in it, which then nulls out (say) 50% of the incoming signal and puts in a negative dist. component. The modified sum now propagates through the amplifier to give 50% signal output and much-reduced dist. at the output.

But now the feedback sends back a rather reduced dist. correction, allowing more dist. to propagate through the amp on the next loop. So the next loop sends back more dist. correction, then less, then more ... until, 5+ moments later, the system has converged to the steady-state solution predicted by the Fdbk equation.

I don't see anything earthshaking here. In a few microseconds the amp is behaving according to expectations (as long as the Fdbk is less than 100%).
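
A toy of that word picture in Python (my own illustrative numbers): treat the loop as discrete "moments" in which the amp output is recomputed from the input minus the fed-back output. With 50% feedback the successive passes converge geometrically to the value the feedback equation predicts, within the 5+ moments described.

    # Discrete "moments" version of the loop described above; A and beta are
    # arbitrary illustrative values, not a real amplifier.
    A, beta, x = 1.0, 0.5, 1.0
    target = A * x / (1 + A * beta)      # steady state predicted by the Fdbk equation
    y = 0.0
    for moment in range(8):
        y = A * (x - beta * y)           # one trip around the loop
        print(f"moment {moment}: output = {y:.6f}   (target {target:.6f})")

The successive passes overshoot and undershoot the target alternately (more correction, then less, then more), and the "less than 100% Fdbk" condition is exactly what makes the geometric series converge (|A*beta| < 1 here).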

--------------------------------------------------------------------------------------------------

More interesting is the steady state case where the Fdbk is sending back inverted signal, inverted 1st pass dist., and some tiny inverted multi-looped dist.
Now that tiny multi-looped dist. component coming out looks just like amplifier dist., and so gets reduced by the inverted Fdbk component being fed back in. So as the amplifier's inherent dist. acts on circulating dist. to make higher order harmonics, it is also being reduced by the feedback correction each time as well. Let's say the amplifier's inherent dist. is 5%, and the basic N Fdbk reduces dist. by 5x. Then each higher order of dist. generated by the amplifier non-linearity acting on that re-circulating component is also dropped by a factor of 1/20 times 1/5. So each higher-order component would drop by 100x from the previous order. As long as the amplifier is not grossly distorting and the N Fdbk is doing a reasonable reduction, these higher orders should be below audibility.
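
A rough numerical check of that back-of-envelope argument in Python, with curvature numbers I made up (a few percent of 2nd- and 3rd-order curvature in the forward path, and 1 + loop gain = 5 for the "5x" reduction). The closed-loop output is found per sample by solving amp(x - beta*y) = y with Newton iteration, then FFT'd; each successive harmonic lands tens of dB below the one before it, the steep tapering described above (the exact ratio per order depends on the curvature and the loop gain).

    import numpy as np

    def amp(e):                        # open-loop transfer with mild 2nd/3rd-order curvature
        return e + 0.05 * e**2 + 0.02 * e**3

    def amp_slope(e):                  # its derivative, for the Newton step
        return 1 + 0.10 * e + 0.06 * e**2

    beta = 4.0                         # feedback fraction; small-signal 1 + A*beta = 5
    fs, f0, N = 48000, 1000, 48000
    x = np.sin(2 * np.pi * f0 * np.arange(N) / fs)
    y = x / (1 + beta)                 # start from the linear closed-loop estimate
    for _ in range(20):                # solve amp(x - beta*y) = y at every sample
        e = x - beta * y
        y -= (amp(e) - y) / (-beta * amp_slope(e) - 1)
    Y = np.abs(np.fft.rfft(y)) / (N / 2)
    for k in range(2, 7):
        print(f"H{k}: {20*np.log10(Y[k*f0]/Y[f0] + 1e-300):7.1f} dB re fundamental")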
 
Signal gets applied to the amplifier, and out comes a full amplitude distorted version a moment later, Fdbk sends back inverted output with full dist. in it, which then nulls out (say 50%) of the incoming signal and puts in a neg dist. component. Modded sum now propagates thru amplifier to give 50% signal output and very reduced dist. at the output.

So as the amplifier's inherent dist. acts on circulating dist. to make higher order harmonics, it is also being reduced by the feedback correction each time as well.
There is nothing circulating around. The idea of a full-amplitude distorted version appearing before the NFB can act on it assumes a delayed feedback, which is not the case. The input signal and the feedback signal are present at the subtracting node at the same time.
 
The feedback is much faster than any audio wave/signal. A "full amplitude" output without feedback would clip the amplifier. Slew distortion happens when the amplifier is pushed to its speed limits. During slew limiting, yes, the feedback is unable to keep up, because the output is unable to keep up. Nearly all the delay is in the amplifier and very little in the feedback network. As soon as the output reaches the closed-loop voltage, feedback is re-established and the output settles to the feedback-defined voltage. But slew distortion should be impossible if the slew rate is adequate for full output at 20 kHz.
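
For scale (illustrative numbers of my own): the slew rate needed for an undistorted full-amplitude sine is SR = 2*pi*f*Vpeak, so a 40 V peak output at 20 kHz needs about 2*pi*20000*40 ~ 5 V/us; any output stage comfortably faster than that never slew-limits within the audio band.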

Ironically, SPICE does essentially what you suggest as it "converges" on each time increment, but that is just a numerical algorithm, and not what happens in real hardware. SPICE is not perfect, but it is vastly more accurate and, more importantly, less open to gross mistakes than grinding through a lot of differential equations by hand. This is one reason why vacuum tube amplifiers were 99% rule-of-thumb and hacked together on the bench, with very little engineering as we do it today. And that is why, years ago, I found myself correcting the bias in tube amplifiers that were way off the optimal values. I fixed the bias in the phase splitter of a Bogen amp, and the result was 100 watts undistorted instead of 60 watts.
 
He probably never read RDH-4... By the way, we have very simple and reliable simulation models for tubes because tubes evolved over about 60 to 70 years. I have a very thick book on tube design (the design of the tubes themselves) from the '50s that would leave a lot of people flat on their noses... If you think tubes were just some iron and wires put together, be ready for a shock if you try delving into the physical constraints of tube theory. The EL84 or EC8020 were the result of more than 60 years of tube design.
 
This immediate-versus-converging-looping-steps argument gives the same results math-wise either way. Recursive terms in the math equations get solved simultaneously, acting like zero-time steps.

If you have a 25 MHz scope it will look immediate. If you have a 10+ GHz scope you will see converging steps. (-Everything- looks like stair steps with such scopes at sub-nanosecond timebase settings: waves reflecting back and forth from unmatched termination impedances, plus wire and device propagation times.) Actually the bandwidth of the amplifier components will set the pace here, rather than the wire length. Tubes have electron transit times too.
 
This immediate-versus-converging-looping-steps argument gives the same results math-wise either way. Recursive terms in the math equations get solved simultaneously, acting like zero-time steps.

If you have a 25 MHz scope it will look immediate. If you have a 10+ GHz scope you will see converging steps. (-Everything- looks like stair steps with such scopes at sub-nanosecond timebase settings: waves reflecting back and forth from unmatched termination impedances, plus wire and device propagation times.) Actually the bandwidth of the amplifier components will set the pace here, rather than the wire length. Tubes have electron transit times too.
If you see steps on any scope, that is the scope's sampling, not the linear amplifier. The fastest analog scope ever built was about 500 MHz.

Sure, real engineers built a few vacuum tube amplifiers, but the vast majority of the market were copies of previous market successes and "breadboard" designs. True, that still happens today. Careful engineering on vacuum tubes was largely in vain because they age quickly and, in those days, standard resistors were 20% tolerance. Vacuum tube instruments had to be calibrated very often, and some corporations still do that as a legacy practice. When I was young, a car or other appliance was old at 5 years. Now we expect 20 years, but of course many things are useless at about 5 years anyway, and there is a lot of cheap trash on the market.