Sum and difference tones come from even-order distortion; other IM products clustered around the original input tones come from odd-order distortion. Which dominates depends on the amplifier's details.
Glad to be of assistance 😀 Also, such HF signals are going to be at much lower amplitude in music. The conclusion that Scott would draw is that these products have no practical impact, because they will be masked by the main tones due to their proximity and lower amplitude.
Can someone here explain how these two parameters relate to each other? For example, how can you have a 6 µs rise time with 0.002% THD?
These are independent, in theory.
Think of rise time as how responsive the amp is to a change of input voltage. Like how responsive a car is to putting your foot on the accelerator.
THD is a measure of how linearly the amp responds. Simplifying: linear means that the output only contains the frequencies in the input signal. Distortion adds new frequencies that were not present in the input.
An amp can be slow to respond but perfectly linear. Its frequency response is modified, but this is not the same thing as distortion.
There is a third thing called slew rate. This is a physical limit on how fast the amplifier output can change voltage. The amp may behave linearly, but once the slew-rate limit is reached it will distort badly.
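To make the distinction concrete, here is a minimal numerical sketch (Python, with made-up numbers: a 10 kHz test tone, a 20 kHz single-pole roll-off for the "slow but linear" case, and a deliberately inadequate 0.05 V/µs slew limit). The band-limited stage only scales and delays the tone, so no new frequencies appear; the slew limiter generates harmonics that were never in the input.

```python
import numpy as np

fs = 1_000_000                     # sample rate, Hz
t = np.arange(0, 0.02, 1 / fs)     # 20 ms of signal
f0 = 10_000                        # test tone, Hz
x = np.sin(2 * np.pi * f0 * t)

# 1) Slow but linear: a single-pole low-pass at 20 kHz (simple one-pole IIR filter).
fc = 20_000
a = 1 - np.exp(-2 * np.pi * fc / fs)
y_lin = np.zeros_like(x)
for n in range(1, len(x)):
    y_lin[n] = y_lin[n - 1] + a * (x[n] - y_lin[n - 1])

# 2) Slew-limited: the output can change by at most SR volts per second.
SR = 50_000                        # 0.05 V/us, deliberately too low so the limit is hit
max_step = SR / fs
y_slew = np.zeros_like(x)
for n in range(1, len(x)):
    step = np.clip(x[n] - y_slew[n - 1], -max_step, max_step)
    y_slew[n] = y_slew[n - 1] + step

def levels_at_harmonics(y):
    """Spectrum level at f0 and its first few harmonics (crude FFT measurement)."""
    Y = np.abs(np.fft.rfft(y * np.hanning(len(y)))) / len(y)
    f = np.fft.rfftfreq(len(y), 1 / fs)
    return [round(float(Y[np.argmin(np.abs(f - k * f0))]), 5) for k in (1, 2, 3, 4, 5)]

print("low-pass (linear):", levels_at_harmonics(y_lin))    # harmonics stay near zero
print("slew-limited     :", levels_at_harmonics(y_slew))   # odd harmonics appear
```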
My $0.25:
Any amp has two clipping levels: amplitude and slew. The maximum slew rate of a sine wave is 2 pi times its frequency times its amplitude, like the speed at the circumference of a rotating wheel.
Just as important, the peak slew rate of a complex signal is at most the sum of the peak slew rates of its frequency components. So if the sum of the amplitudes does not clip, then the peak slew rate is no greater than that of a full-amplitude sine at the highest frequency. In short, if a 20 kHz sine wave at maximum amplitude does not slew limit, then no combination of audio frequencies will slew limit.
The frequency at which slew rate starts to limit the output is called the power bandwidth. Now, as with amplitude limits (clipping), every amp approaches and recovers from slew limiting differently, so some margin above a 20 kHz power bandwidth is advised.
To address the original question: if the amplitude of a signal is modest, then so is its slew rate, and it can stay below the slew limit even when a full-power signal at the same frequency would not. Slew rate should not affect THD until the slew limit is reached; the slew-rate limit is like clipping, no problem until you hit it.
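The arithmetic behind this, written out as standard textbook relations (nothing specific to any particular amp):

```latex
% Peak slew rate of a single sine wave
v(t) = A \sin(2\pi f t)
\quad\Rightarrow\quad
\left|\frac{dv}{dt}\right|_{\max} = 2\pi f A

% For a sum of sines the peak slew rate is bounded by the sum of the individual peaks
\left|\frac{d}{dt}\sum_i A_i \sin(2\pi f_i t + \phi_i)\right|
\;\le\; \sum_i 2\pi f_i A_i
\;\le\; 2\pi f_{\max} \sum_i A_i

% So if the summed amplitudes stay within the clipping level, and a full-amplitude
% sine at f_max = 20 kHz does not slew-limit, no combination of audio tones will.
```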
Yep.
Example: 40V amplitude, 20kHz sinewave: max slew = 2 x 3.14 x 20000 x 40 = 5 MV/s.
More typically written as 5 V/us (microsecond).
So your typical amp needs only about 5 V/µs of slew-rate capability, especially since undistorted music rarely, if ever, slews this fast. Normally, digital music only requires these peaks from time to time; most listening is at tens of watts. Music from an LP is a different matter, where you can get spikes, unwanted HF and so on.
However, it isn't this simple because any amplifier's characteristics may change with load and current. 5V/us into an 8 ohm resistor is a heck of a lot easier for an amp than 5V/us into a 1uF capacitor. Loudspeakers have variable impedance. Amplifiers aren't always very stable.
The upshot is that 5V/us for a 100W amp of good design and easy speaker loads should be fine for digital sources. This is usually easy to achieve even with basic transistor circuits.
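Tying the numbers above together, a quick back-of-envelope sketch (Python, assuming the 100 W / 8 Ω / 20 kHz figures and the 1 µF load mentioned earlier) showing where the ~5 V/µs number comes from and why a capacitive load is much harder:

```python
import math

P, R, f = 100.0, 8.0, 20_000.0        # assumed: watts, ohms, hertz
V_peak = math.sqrt(2 * P * R)         # peak voltage of a sine delivering P into R
slew = 2 * math.pi * f * V_peak       # required slew rate, V/s
print(f"V_peak = {V_peak:.0f} V, required slew = {slew / 1e6:.1f} V/us")

# Current needed just to slew a purely capacitive 1 uF load at that rate: i = C * dV/dt
C = 1e-6
print(f"current into 1 uF at {slew / 1e6:.1f} V/us = {C * slew:.1f} A")
```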
Thank you, that helps. My practical take on rise time is your exact description, hence my query regarding slow rise time and THD. I guess my real question then is: how can an amp be slow and be perfectly linear? It appears a bit oxymoronic, in that a slow amp will have inherently higher THD than a fast one, no? (I'm referring to IMD.) Unless of course THD and IMD have no relationship. But the faster the rise time, the cleaner and more transparent the amp will sound, and thus it will have inherently lower audible distortion. Aren't IMD artifacts an addition to the input? I'm clearly confused here. 🙂
Well, my understanding is that IMD products are mixing products that occur when an amplifier tries to correct the output signal to match the input signal. This works great at low frequencies but is increasingly ineffective at higher frequencies, as the gain drops and delay starts to matter. Since each amplifier has some group delay, the global feedback is always too late at higher frequencies: trying to correct an output signal that is already history, the amplifier corrects a signal that differs from the one that needed to be corrected. And so the amplifier adds 'phantom' signals to the output that it shouldn't, i.e. IMD is happening.
If I am not mistaken, there was a discussion on that, as global feedback can do a lot of good but also a lot of harm.
I will try to hammer home a basic truth one more time (other posters have made the same point, and the thread about slew rate conveys the same message): frequency or rise-time performance has no direct effect on linearity, except through slew rate (when it is exceeded).
Otherwise, reducing the bandwidth (i.e. increasing the rise time), if properly applied and if at all possible, can in fact reduce THD by increasing the feedback at higher frequencies.
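A rough illustration of why: distortion generated inside a feedback loop is reduced by roughly a factor of (1 + loop gain), and loop gain falls with frequency. The numbers here are invented (80 dB DC loop gain, a single pole at 1 kHz, 1% open-loop distortion) purely to show the trend:

```python
import math

T0, f_pole = 10_000.0, 1_000.0        # assumed DC loop gain (80 dB) and its pole, Hz
d_open = 1.0                          # assumed open-loop distortion, percent

for f in (100, 1_000, 10_000, 20_000):
    T = T0 / math.sqrt(1 + (f / f_pole) ** 2)   # loop-gain magnitude at f
    d_closed = d_open / (1 + T)                 # classic feedback distortion reduction
    print(f"{f:>6} Hz: loop gain ~ {T:7.0f}, closed-loop distortion ~ {d_closed:.4f} %")
```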
I guess my real question then is, how can an amp be slow and be perfectly linear?
You may be confusing frequency response with linearity.
Also consider that audio is very slow compared with the speed of transistors. In theory, an amplifier does not need to work above 20kHz. If you made a perfectly linear amp with a bandwidth of just 20kHz you would be happy.
kct said: Well, my understanding is, IMD are mixing products that occur when an amplifier tries to correct the output signal to match the input signal.
No. IMD products are mixing products created by nonlinearity. They will be present whether or not feedback is used.
kct said: This works great at low frequencies but is increasingly ineffective at higher frequencies as the gain drops and delay starts to matter.
Not quite. Feedback reduces IMD. There will be less feedback at higher frequencies, so IMD will rise. Delay is not an issue for audio amplifiers.
kct said: Since each amplifier has some group delay the global feedback is always too late at higher frequencies, trying to correct an output signal that is already history, the amplifier corrects a signal that differs from the one that needed to be corrected.
No. This might be an issue if you were designing an RF PA, but not for audio, as the frequencies are far too low. It is a popular myth that the feedback is delayed by some hidden magic process.
According to Wikipedia, "distortion for a fundamental above 10 kHz is inaudible" (Total harmonic distortion - Wikipedia).
Exactly. None of the harmonics of 20 kHz are audible, so at 20 kHz you cannot hear the difference between 0.002% THD and 10% THD. For that matter, very few people can hear 15 kHz.
A fast amp has probably been designed with poor stability margins (small phase margin), so demanding a fast amp is not wise.
In any case, an amp using modern 40 MHz power transistors will not have slew problems. This was an issue 40 years ago.
No. This might be an issue if you were designing an RF PA, but not for audio, as the frequencies are far too low. It is a popular myth that the feedback is delayed by some hidden magic process.
Except for Class D? 😉
In any sort of stable feedback loop, the delay of the feedback signal must amount to less than 180 degrees at any frequency where there's loop gain, so whether you consider it a straight delay or a phase shift, stability imposes an upper bound on any delay or phase shift. So a Class D amplifier that uses feedback is also subject to these restrictions.
Delay and phase shift are two quite different things. Feedback in audio amplifiers is subject to phase shifts.
Put another way, analog filters cannot implement a pure delay, other than by using a delay line. The response is immediate, since the output is related to the input by a differential equation that involves only variables at one point in time (in a lumped-element model of the circuitry).
In reality, any true delays in circuitry need to be much shorter than the timescale of the unity-gain transition frequency. For normal circuitry, delays are measured in nanoseconds, apart from some particular effects like stored charge in BJTs and reverse recovery in diodes. Thus lumped-element analysis at audio frequencies can assume delayless operation for linear behaviour, so the standard technique of determining phase margin and gain margin works well to assess stability.
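As a concrete (and entirely hypothetical) instance of that standard technique: the sketch below evaluates a made-up loop gain (60 dB DC gain, poles at 1 kHz and 5 MHz) together with a 10 ns pure delay, finds the unity-gain crossover, and reports the phase margin. The point is that a realistic nanosecond-scale delay costs only a few degrees there; the poles dominate.

```python
import numpy as np

T0 = 1_000.0                          # assumed DC loop gain (60 dB)
p1, p2 = 1e3, 5e6                     # assumed pole frequencies, Hz
t_d = 10e-9                           # assumed pure delay, seconds

f = np.logspace(2, 8, 100_000)        # 100 Hz .. 100 MHz
s = 1j * 2 * np.pi * f
T = T0 / ((1 + s / (2 * np.pi * p1)) * (1 + s / (2 * np.pi * p2))) * np.exp(-s * t_d)

i = np.argmin(np.abs(np.abs(T) - 1))  # unity-gain crossover index
f_c = f[i]
pm = 180 + np.degrees(np.angle(T[i])) # phase margin, degrees
delay_deg = 360 * f_c * t_d           # phase lag contributed by the delay at crossover

print(f"crossover ~ {f_c / 1e6:.2f} MHz, phase margin ~ {pm:.0f} deg, "
      f"of which the 10 ns delay costs only ~ {delay_deg:.1f} deg")
```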
Mark - you touched on the important part: delays in circuitry need to be much shorter than the timescale of the unity gain transition frequency.
We're all saying the same thing, and whether you use the 'delay yardstick' or the 'phase yardstick', it's still "how much later" and you can choose any unit you'd like, degrees or microseconds.
The underlying physics of why the feedback signal is "late" has nothing to do with how we characterize our measurement of it. And, the stability criterion does not care whether the feedback signal is delayed uniformly over frequency from a pure delay or if it's the result of a phase shift from a minimum phase network - too late is still too late, and 'fast enough' is still fast enough.
My focus here is 40 years ago. The definition of a transparent amp was obviously one with a flat FR, but slew rate and rise time were very important factors among the well-versed in audiophile circles. Here it seems not so much, as "adequate" appears to be the operative word. So then, what is it about those 70's Brit amps that made them "sound" so fast in terms of transient response? THD numbers of 0.1%, yet so transparent, as if it were live, unrecorded.
Monty McGuire said: We're all saying the same thing
Perhaps, but it helps if we use the correct language. What matters for stability is phase shift, not delay. This is the opposite of what you appear to be saying. Of course, some of that phase shift may come from delay (although very little in an audio amp), but it is the phase shift which causes the problem, not the delay.
My focus here is 40 years ago. The definition of a transparent amp was obviously one with a flat FR, but slew rate and rise time were very important factors among the well-versed in audiophile circles. Here it seems not so much, as "adequate" appears to be the operative word. So then, what is it about those 70's Brit amps that made them "sound" so fast in terms of transient response? THD numbers of 0.1%, yet so transparent, as if it were live, unrecorded.
Well, it could have been distortion. In particular, 2nd-order distortion at higher frequencies is an "enhancement" deliberately added by some recording studios to voices that are too mellow (see Aphex Systems - Wikipedia).
Human hearing is designed to look for certain things, and 0% THD may not be the most pleasing sound, any more than a steak tastes best with no salt. Deliberate distortion is common practice in guitar amplifiers.