How the negative feedback really works and an alternative feedback question

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Without global negative feedback, the level of the harmonic distortion components rises with signal level (though not necessarily linearly). Also, lower-order harmonics rise more than higher-order harmonics.
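A quick numpy sketch of that level dependence (a toy memoryless cubic stage of my own invention, not a model of any real amplifier): double the drive level and the third harmonic rises roughly eightfold, not twofold.

```python
import numpy as np

def harmonic_level(amplitude, order, fs=48000, f0=1000, n=4800):
    """Level of the given harmonic for a toy cubic open-loop stage."""
    t = np.arange(n) / fs
    x = amplitude * np.sin(2 * np.pi * f0 * t)
    y = x + 0.1 * x**3                    # memoryless y = x + 0.1*x^3
    spectrum = np.abs(np.fft.rfft(y)) / n
    return spectrum[int(round(order * f0 * n / fs))]

h3_lo = harmonic_level(0.1, 3)
h3_hi = harmonic_level(0.2, 3)            # doubling the drive level
# Third harmonic scales as the cube of the level: ~8x for 2x drive
```

The 0.1 coefficient is arbitrary; any smooth odd nonlinearity gives the same cube-law growth for the third harmonic.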

I find a lot of resonance between your theory and mine :)

Where I think the problem with feedback comes in is when we use it around kinks in the transfer function. I reckon (but haven't verified) that when the instantaneous signal voltage passes through the kink there's a rise in the IMD at that voltage level. This doesn't get noticed with sine testing, as the sine doesn't dwell at that level, but when the kink turns out to be at the zero-crossing point it's the worst possible place, as music spends most of its time around there. Individual sines, though, spend an infinitesimal amount of time there.

I reckon the same effect is heard in sigma-delta DACs, because these have inadequately dithered quantizers with feedback around them. A 6-bit quantizer (for example, commonly used in DACs) is a black box stuffed full of 63 dead bands when not properly dithered :D
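A quick simulation of the kink idea (a toy dead-band nonlinearity I made up, not a model of any particular amp or DAC): feed a two-tone signal through it and third-order intermodulation products appear that the clean signal doesn't contain.

```python
import numpy as np

fs, n = 48000, 48000
t = np.arange(n) / fs
f1, f2 = 1000, 1100                       # two-tone test, exact FFT bins
x = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

def deadband(sig, width):
    """Crude 'kink': a dead zone of +/- width around the zero crossing."""
    return np.sign(sig) * np.maximum(np.abs(sig) - width, 0.0)

def spec(sig):
    return np.abs(np.fft.rfft(sig)) / n   # 1 Hz per bin at this length

clean = spec(x)
kinked = spec(deadband(x, 0.02))

im = 2 * f2 - f1                          # 1200 Hz, third-order IM product
# The kink creates IM energy at 1200 Hz that the clean two-tone lacks
```

The dead-band width (0.02) is arbitrary; the point is only that a zero-crossing kink generates IM products at levels a pure sine sweep would understate.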
 
Let's see... Door #1 - noise (IM) modulation. Yah, yah, yah... I suppose it's possible, but why then, when (some years ago) I did the output-of-tube-amp-through-attenuator-then-feeding-a-wowie-grade-semiconductor-amp experiment - an amp that was raved about as being utterly neutral and in no way showing an acoustic signature - why was the speaker tone so damned different? The IM distortion (from the tube amp) was clearly being fed INTO the other semiconductor channel, and its absolute linearity was gospel truth: just amplifying whatever came in to something larger on the output.

My tentative explanation here would be that the tube amp didn't have much IM because tubes are remarkably linear at such low levels. But it did put out harmonic distortion at higher levels, and the SS amp objected to those components and they intermodulated with the audio. Besides, who said the SS amp was utterly neutral? I'm betting they didn't test it with already significantly harmonically distorted source material :D
 
I do agree that we should look at the part of the signal that our brains do, and not so much the part that hearing disregards. THD has been the standard because it was (relatively) easy to measure. In a private conversation I had with Craig Stark (at the time he worked for Stereo Review) in 1984, he agreed that if the test does not relate well to how we hear then a new test is needed. Progress in that area has been far too slow. There's an interesting quote from J.M. Keynes on this - he had much insight about his followers. Practical men they are.
 

I'm following along intently and like the way you're thinking. WAY too many people, who seem like they should know better, apparently want to live only in the steady-state sinusoidal time-domain (or frequency domain), which is of extremely-limited usefulness in defining a system's capabilities and describing its characteristics. But, as was mentioned, it's certainly EASIER.

You are talking about crossover distortion in a push-pull amplifier, which always tends to happen near the zero-crossing, since that's where push and pull do the hand-off of the signal.

It does seem crazy to have the worst part of the response occurring at around the average value of the signal. And low-amplitude output would tend to spend a larger percentage of the time nearer to zero, exacerbating any problem there.

If that is thought to be the cause of the main problem being discussed, then a Class A amplifier should not exhibit any trace of that problem, correct? So that hypothesis can be tested.

I think that these discussions are hopelessly flawed before they begin because no one knows (or at least they do not provide) any of the data that would be needed to even form an opinion, much less move forward toward a conclusion.

My guess is that no two people here have heard the same system(s), and the systems they each did and do hear are all flawed in more than one significant way and there are too many variables to even be able to start a discussion that relies in ANY way on anyone's description of anything they ever heard. So the only possible valid discussion is in purely-theoretical (and/or strictly measurement-based) scientific terms, which probably will never get us where we would like to be, or at least not easily-enough, for most of us. There are probably individuals who have sufficient hearing ability, and who have performed enough experiments, who really DO know some of the answers. But the rest of us would have a difficult time knowing who they are and still could not really KNOW that they were correct.

Many variables would need to be eliminated to enable amplifier comparisons to have any meaning at all. For example, speakers and rooms and hearing must not be involved at all (unless that happened to be the focus of a particular type of discussion).

Maybe a standardized configurable complex-impedance dummy load would need to be used, with a standardized sampling or recording type of output measurement device, with a standard data format for the resulting measured output. The input would need to be done similarly. Both would need to be provided as the results of any type of testing. Actually, we would want to record the output from both before and after the complex part of the load (which could emulate a crossover and/or maybe speakers, or whatever makes the most sense there), or maybe with and without the load - whatever makes the most sense. (I don't know. I am just making this up as I go. We could figure it out later.)

A set of standard input waveforms could be devised that would reveal almost everything we wanted or needed to know, about any amplifier. All of the input waveforms could eventually be produced in sequence on a single track and a standard suite of software modules could be developed that would analyze the results produced by each of the test waveforms, and a standardized program could be distributed that would analyze the results of the single standardized track that contained all of the standardized test waveforms (which would certainly not all be sinusoids!).

The output sampling/recording device would, ideally, need to be able to record frequencies up to at least tens and maybe hundreds of MHz. But a DIY hobbyist version could use a sound card's outputs and inputs for the system-under-test's (SUT's) inputs and outputs.

A standardized high-power complex dummy load (or maybe something else?) would be the only required new hardware, for the average DIY'er. If we got ambitious, we could even have a separate USB (or other) control method, to automatically change the characteristics of the load during testing, under software control, if that turned out to be very desirable. The load could either have a low-power (passively voltage-divided) output, and/or, it could have built-in high-impedance (probably differential-input instrumentation-type) sensing amplifier(s). Output could be converted to a WAV file, for standardization.

Designing the test track might be the most critical part of the entire system. Writing the software to analyze it and produce meaningful results would probably also be very interesting and challenging.

Of course the test track would include everything needed to perform the standard-type THD and IMD and noise and maximum slew-rate and many other tests and measurements, and plot the frequency and phase responses. But I, for one, am also at least as interested in the accuracy of transient responses in the time domain, and the accuracy of the reproduction of "real-world" signals. So I would expect at least one or more listenable snippets of music to be included, also, and possibly many individual instrument sounds, plus impulses, step functions, pulse trains, tone bursts, maybe all with lots of different amplitudes, and those are just as examples, off the top of my head.
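As a sketch of what assembling such a track might look like (every name, amplitude and duration below is a placeholder I invented, not a proposed standard):

```python
import numpy as np

fs = 96000                                # placeholder sample rate

def sine(f, dur, amp=0.5):
    t = np.arange(int(fs * dur)) / fs
    return amp * np.sin(2 * np.pi * f * t)

def tone_burst(f, cycles, amp=0.5):
    seg = sine(f, cycles / f, amp)
    return seg * np.hanning(len(seg))     # windowed burst

def step(dur, amp=0.5):
    return np.full(int(fs * dur), amp)    # step/DC segment

silence = np.zeros(int(fs * 0.1))
track = np.concatenate([                  # one possible ordering
    sine(1000, 0.5), silence,
    tone_burst(4000, 10), silence,
    step(0.05), silence,
])
```

The real work would be in choosing the segment set and levels; concatenating them into one reference track, as above, is the easy part.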

Many people's first question might be about how any of this could then be correlated with which amplifiers SOUND the best. The answer is that at first, it would probably not provide that ability, or at least not to the extent that it eventually might. But it might be the only way to GET there. i.e. As more people acquired the needed "standardized" dummy load, or whatever ended up being devised to condition the output, they would upload their resulting WAV files, and maybe information about their amplifier, and probably the analysis results, so that everyone else could download them, and then also LISTEN to the musical portions of them. Yes, the downloaders' systems would alter the sound. But eventually enough people with good-enough systems might see and agree on patterns, and general agreements about what measurements mattered, and how they mattered, might take shape. And, all of the amplifiers could at least be compared, in terms of the analysis results. And the test track and analysis software would evolve and eventually we might have something very useful.

And, of course, an individual or organization could do the same sorts of things with it, on their own.

The hardware needed probably already exists, and is probably already described on diyaudio.com, somewhere. The main tasks for implementation are probably defining and then developing the test signals and the analysis software (and then maybe getting people to participate, if that part of the plan seems like it would be as helpful as I imagine it might be).

Sorry to have blathered-on for so long about all of that!

Cheers,

Tom
 
Interesting Tom, thanks for 'blathering' for so long. What I see as standing in the way of all that is the lack of transparent ADCs and DACs - practically all soundcards nowadays use S-D architecture converters in both directions, which introduces many 'dead bands' into the transfer function and so would likely mask precisely those things we want to see.

I'm making progress on the DAC end, after that I hope to move on to ADCs. :)
 
NFB

Setting aside the validity of empirical observations made by the same person who has introduced a known change, I'd like to suggest a third possible mechanism for audible differences in (what should be) "acoustically-identical" amplifiers.

Many years ago I was designing a column loudspeaker for a simple PA system. As part of this work, I modelled the crossover, and by chance noticed that if some resistance was deliberately added to the driving source (i.e. the amplifier output), the impedance "seen" by the driver(s) was reduced at some frequencies...

Thinking about this some more, it's not really surprising: for instance if you take a quarter-wave transformer, with a short-circuit input, the output is open-circuit.....(yes, I'm an RF guy).
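John's observation can be reproduced with a trivial lumped model (component values below are ones I picked for illustration, not from his actual crossover): looking back from the driver through a second-order low-pass section, the source impedance seen near the section's resonance collapses when series resistance is added.

```python
import numpy as np

L, C = 0.5e-3, 20e-6                      # illustrative low-pass section values
f = np.linspace(100, 5000, 4901)          # 1 Hz grid
w = 2 * np.pi * f

def z_seen(rs):
    """Impedance the driver looks back into: (rs + jwL) || (1/jwC)."""
    za = rs + 1j * w * L
    zb = 1.0 / (1j * w * C)
    return np.abs(za * zb / (za + zb))

f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))   # ~1.59 kHz section resonance
i0 = np.argmin(np.abs(f - f0))

z_ideal = z_seen(0.0)[i0]                 # near-infinite peak at resonance
z_damped = z_seen(2.0)[i0]                # series R collapses the peak
```

With a zero-impedance source the undamped L-C pair looks like an enormous source impedance to the driver at resonance; 2 ohms of added series resistance brings it down by orders of magnitude - the same counterintuitive effect as the quarter-wave example.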


John
 
modern "industrial/measurement" ADCs are challenging audio ADC specs - the AD7690 at -125 dB typical THD, spurs should be fine for audio - the S/N is actually not so bad in audio bandwidths but a "super analyzer" might parallel 4 (or as many as you can afford) for even lower noise

with good ADC you could watch your DAC output during test and "calibrate out" any error you can see

almost everything about an amplifier's output at audio frequencies into complex speaker loads can be seen with a "tug-of-war" test - a (bigger) amp on the other end of a power R load - driven with phase shifted test signal
you can also drive the other end of the power R with different frequencies, multi-tones, emulate nonlinear loads
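A phasor sketch of that tug-of-war idea (the numbers are arbitrary; the point is just the effective-load relation Z = V1*R/(V1-V2), which follows from Ohm's law across the shared resistor):

```python
import numpy as np

R = 8.0                                   # power resistor between the amps
v1 = 1.0 + 0j                             # SUT output phasor (reference)

def z_effective(k, phase_deg):
    """Load the SUT sees when the far amp drives k*v1 at the given phase."""
    v2 = k * np.exp(1j * np.deg2rad(phase_deg)) * v1
    i = (v1 - v2) / R                     # current through the resistor
    return v1 / i

z_anti = z_effective(1.0, 180.0)          # anti-phase drive: looks like R/2
z_quad = z_effective(1.0, 90.0)           # 90 deg shift: R looks reactive
```

Equal in-phase drive passes no current (an effective open circuit), anti-phase drive halves the apparent load, and a quadrature shift makes the purely resistive R present a complex impedance to the SUT - which is exactly how the test emulates reactive speaker loads.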

and I hate seeing yet another claim that "conventional engineers" are foolish for using frequency domain tools - particularly for audio relevant applications

the Fourier Transform is a Mathematical Dual of the time domain waveform - contains "every bit" of the original information - nothing is "averaged" in the sense of being "lost" (although removing DC offset beforehand is nice)
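That duality is easy to demonstrate numerically - a forward/inverse FFT pair returns the original samples at machine precision, so nothing in the waveform is lost by going to the frequency domain:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)             # arbitrary time-domain waveform

X = np.fft.fft(x)                         # frequency-domain dual
x_back = np.fft.ifft(X).real              # inverse transform

err = np.max(np.abs(x - x_back))          # round-trip error, ~1e-15
```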

I have designed scientific/industrial instrumentation electronics professionally for decades now: circuitry for noise limited amplification of a zoo of transducers, thru ADC, written DSP code...
and the practical limitations and ways to apply the FFT to time-series data are "textbook" - and they "work" - agreement between theory and practice to a degree of accuracy that is nearly unbelievable to anyone with experience in any other measurement technology/science application

I see criticism of frequency methods as misunderstanding the math, the practical advantages, often based on (bad sense) “Sophomoric” objections – or deliberate “Strawman” arguments

Frequency domain tools make some "features" of signals and amplification errors more "visible" and more easily interpreted than staring at the time-series data does
 

Where did you see anyone criticizing frequency-domain methods? I've just spent too much time looking and still can't find it.
 
I think it's one of jcx's strawmen, Tom :)

Nothing at all wrong with frequency domain methods from my pov, just they're only one side of the coin and for balance would need to be combined with time domain methods, not relied on exclusively.

To give an example of this apparent blindness to the time domain: I noticed in a paper by Lipshitz criticizing SACD that he relied exclusively on FFTs and concluded from them that there's no noise modulation occurring. However, the FFT shows only the average noise over the sample window - the noise can still be changing during the sample window. It's this reliance solely on FFTs (when, for example, wavelets could be employed to gain more time-domain insight) which I see as in part responsible for the dominance of S-D architectures in digital audio systems.
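A toy numpy illustration of that point (the 4 Hz envelope and window sizes are arbitrary choices of mine): two noise signals with the same average power look essentially identical in one long FFT, but a short-time RMS track exposes the modulation in one of them.

```python
import numpy as np

fs, n = 48000, 1 << 16
rng = np.random.default_rng(1)
t = np.arange(n) / fs

stationary = rng.standard_normal(n)
env = np.sqrt(2.0) * np.abs(np.sin(2 * np.pi * 4 * t))   # 4 Hz envelope
modulated = env * rng.standard_normal(n)                 # same average power

def avg_spectrum_db(sig):
    """One long FFT: only the window-averaged noise level survives."""
    return 20 * np.log10(np.mean(np.abs(np.fft.rfft(sig))))

def short_time_rms(sig, win=2048):
    """Time-resolved level: RMS of consecutive short frames."""
    frames = sig[: len(sig) // win * win].reshape(-1, win)
    return np.sqrt(np.mean(frames**2, axis=1))

db_gap = abs(avg_spectrum_db(stationary) - avg_spectrum_db(modulated))
spread_stat = np.ptp(short_time_rms(stationary))
spread_mod = np.ptp(short_time_rms(modulated))
# db_gap is a fraction of a dB, but spread_mod dwarfs spread_stat
```

A wavelet decomposition would do the same job with better time-frequency trade-offs; the fixed-frame RMS here is just the simplest time-resolved statistic.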
 
WAY too many people, who seem like they should know better, apparently want to live only in the steady-state sinusoidal time-domain (or frequency domain), which is of extremely-limited usefulness in defining a system's capabilities and describing its characteristics.

isn't a critique?

I think people claiming there are (by implication) "time domain only" properties of systems and signals need to show some math and measurements

if you claim you are criticising "steady-state" - doesn't any practical signal that doesn't destroy the system have a "steady-state" representation - just by repeating the signal over a sufficiently long interval for the transients to decay into the noise floor

even hysteresis is studied by pushing the "hidden state" of the system through complete cycles

are you claiming that a PSD won't show the periodicity of noise amplitude with an enveloped signal for this Delta-Sigma "modulation noise" that it seems is suddenly so fashionable to worry about
 
I think people claiming there are (by implication) "time domain only" properties of systems, signals need to show some math, measurements

There's that same strawman again. No-one is claiming 'time domain only' properties of systems. What I'm showing (not claiming) is that there are, as a corollary to what you said earlier, aspects which show up more clearly in the time domain than they do in the frequency domain. Not that they cease to exist in the frequency domain.

are you claiming that a PSD won't show the periodicity of noise amplitude with an enveloped signal for this Delta-Sigma "modulation noise" that it seems is suddenly so fashionable to worry about

Why the scare quotes? No, I'm making no claims about periodicity - merely pointing out that demonstrating that the averaged noise (shown by the FFT) is constant is no guarantee that the instantaneous noise is.
 

JCX,

Actually, I was only trying to criticize those who mostly only look at THD.

But I do also tend to think that not enough attention is given to transient performance, by many here at diyaudio.

When I was in school, I fell in love with the mathematics of both the time and frequency domains, and much more. It was the most beautiful thing I had ever seen, or thought about.

So, in my case at least, you may stand down, except that I would enjoy reading more of it, as I am trying to bring myself back up to speed after a couple of decades of neglect.

Regards,

Tom

P.S. I'm not so sure about the idea of repeating any signal until the transients decay enough, and then calling it Steady State. It would be periodic, at least, which would be useful. But Steady State seems to imply sinusoids, only. And I can easily make a signal that is periodic, but evokes the system's full transient response in every cycle, e.g. a pulse train, or a Dirac Delta train.
 
Hi, I'm totally new to this forum; I hope my question fits in right here. I have a 4 W/channel single-ended amp (driving full-range speakers) and it uses 6 dB of negative feedback (a wire from speaker ground to the input tube cathode through a 5k resistor). I intend to install a 100k pot in order to play around with the feedback, so that I hear with my own ears what it's about. I have 2 Alpha pots lying around (the amp's stock pots), but I have no idea what pot power rating is appropriate for this application. I believe the pots I have are rated for 1/4 watt. Will they do (especially in the wide-open position)? Thanks for any input.
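The power question can be bounded with a quick worst-case calculation (the assumptions are mine, since the post doesn't state them: an 8-ohm load, the full 4 W output, and the pot wired as a rheostat in series with the existing 5k feedback resistor):

```python
import numpy as np

# Assumptions (mine, not from the post): 8-ohm load, full 4 W output,
# pot wired as a rheostat in series with the existing 5k feedback resistor.
P_out, R_load, R_fb = 4.0, 8.0, 5_000.0
v_rms = np.sqrt(P_out * R_load)           # ~5.66 V rms at the speaker tap

pot = np.linspace(1.0, 100_000.0, 1000)   # rheostat setting, ohms
p_pot = v_rms**2 * pot / (R_fb + pot)**2  # dissipation in the pot element

worst_case = p_pot.max()                  # peaks where pot value equals R_fb
```

Under these assumptions the worst case is a couple of milliwatts, far below the 1/4 W rating - a different wiring scheme or load impedance would change the numbers, but not by two orders of magnitude.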
 
However the FFT shows only the average noise over the sample window - the noise can still be changing during the sample window. It's this reliance solely on FFTs (when for example wavelets could be employed to gain more time-domain insight) which I see as in part responsible for the dominance of S-D architectures in digital audio systems.

That's because you stubbornly consider noise signals somehow equivalent to deterministic signals.

Noise could, in principle, be characterized by its instantaneous value. The instantaneous values are purely random, and the probability of getting, in any time interval, a huge instantaneous value from a pure random process is about the same as that of the water in your glass spontaneously starting to boil or freeze. There are two statistical methods to determine the probability of an instantaneous noise value: the probability density function and the cumulative distribution function; unfortunately they are usually not known. Hence the common characterization method based on averages. Perhaps you are thinking of noise as something that is only "about" random? Such signals exist; fortunately they can be separated (with a certain, arbitrarily high, probability) into a pure random component and a set of deterministic signals, which can be analyzed using the well-known tools and processes.

So, from a pure random noise perspective, averaging can be over time (as in your example) or over identical systems - build a large number of identical systems, measure the instantaneous noise values, then average them. Common sense leads to the conclusion that the second method would be more precise or "realistic". In fact, the two methods are exactly equivalent, since the underlying physical noise-generating processes are ergodic. So it doesn't matter how you average the noise; the results will be consistent.

So why would one care about the instantaneous values of noise, if time averaging provides a good (and consistent) method of characterizing the noise properties of a system? The shorter the averaging window, the more the average fluctuates. With about the same probability as the water freezing in your glass, it is theoretically possible that by shortening the window the noise average could increase. But this probability is many orders of magnitude smaller than the probability of other random incidents that may audibly affect the sound.
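The ergodicity point can be sanity-checked numerically - one long time average and one instant sampled across many independent realizations give the same power estimate:

```python
import numpy as np

rng = np.random.default_rng(2)

# Time average: one long realization of a stationary Gaussian noise process
one_run = rng.standard_normal(200_000)
time_avg_power = np.mean(one_run**2)

# Ensemble average: the same instant sampled across many independent systems
ensemble = rng.standard_normal((200_000, 4))             # 200k short runs
ensemble_avg_power = np.mean(ensemble[:, 2] ** 2)        # one time index

gap = abs(time_avg_power - ensemble_avg_power)           # both estimate 1.0
```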

Provide some proof of the audibility or otherwise relevance of the instantaneous values of noise, otherwise everything you are saying qualifies as audio FUD. The radio telescope people could be very interested in your results.
 