John Curl's Blowtorch preamplifier

PMA said:
Bob,
some opamps never have identical rise and decay portions of their unity-gain step response; they are never linear at high frequencies, even for small signals. The OPA134 is one of them. You can mask this with a preceding RC, or by reducing the generator's slope.
For the OPA627 we get better behaviour (see attached).

The results shown were one of the reasons why I moved to discrete circuits and insist on a consistent step response for ALL amplitudes. The other, more important reason is the sound 😉
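
A minimal sketch of that RC-masking idea, with assumed R and C values (not PMA's actual setup), showing how a preceding low-pass limits the step slope the op-amp input sees:

```python
# Minimal sketch: how a preceding RC low-pass limits the slope of an input
# step before it reaches the op-amp. R and C values are assumed examples.
import numpy as np
from scipy import signal

R = 1e3          # assumed 1 kOhm series resistor
C = 100e-12      # assumed 100 pF shunt capacitor
tau = R * C

# First-order low-pass: H(s) = 1 / (tau*s + 1)
rc = signal.TransferFunction([1.0], [tau, 1.0])

t = np.linspace(0, 10 * tau, 1000)
t, step = signal.step(rc, T=t)            # response to a 1 V step

max_slope = np.max(np.gradient(step, t))  # steepest slope the op-amp now sees
print(f"RC = {tau*1e9:.0f} ns, 10-90% rise time ~ {2.2*tau*1e9:.0f} ns")
print(f"Peak input slope limited to {max_slope/1e6:.1f} V/us per volt of step")
```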


Hi PMA,

I think we are in strong agreement in regard to issues concerning nonlinearity and stability. Nonlinearity at HF is clearly a bad thing, particularly when EMI and such gets in. I also agree that considerable care is needed in avoiding EMI ingress. In my job I work with signals as high as 40 GHz, and know that EMI issues can be tough.

I also agree that any behavior that is symptomatic of a possible instability is a bad thing. If a wild change in group delay, in the absence of other indicators like frequency response peaking and ringing, is truly indicative of a danger of instability, then I surely agree that it must be viewed cautiously.

However, the only area where I have questioned this group delay issue is its significance in a small-signal sense, when there is no nonlinearity. I suspect that small-signal simulations showing group delay peaking, when the total group delay change below 1 MHz is less than 1 ns, are, in and of themselves, not likely to be well correlated with sonics.
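
For reference, a minimal sketch of how such a small-signal group delay figure can be computed as -dφ/dω; the two-pole unity-gain model here is an assumed example, not any particular amplifier:

```python
# Minimal sketch: small-signal group delay, -d(phase)/d(omega), from a
# transfer function. The two-pole unity-gain model is an assumed example.
import numpy as np
from scipy import signal

p1 = 2 * np.pi * 2e6                      # assumed pole at 2 MHz
p2 = 2 * np.pi * 20e6                     # assumed pole at 20 MHz
sys = signal.TransferFunction([p1 * p2], [1.0, p1 + p2, p1 * p2])

f = np.logspace(3, 6, 2000)               # 1 kHz to 1 MHz
w = 2 * np.pi * f
w, h = signal.freqresp(sys, w)

phase = np.unwrap(np.angle(h))
group_delay = -np.gradient(phase, w)      # in seconds

delta = (group_delay.max() - group_delay.min()) * 1e9
print(f"Group delay change below 1 MHz: {delta:.1f} ns")
```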

It is best if we focus on nonlinearity and stability effects as likely predictors of sound quality. I agree with you that an amplifier that has a well-behaved large-signal step response is more likely to sound good.

Cheers,
Bob
 
syn08 said:


Bob,

This is rather academic, however: assuming the circuit is minimum phase, that is, linear and time-invariant, is a pretty rough approximation of any audio circuit, at least at the level of linearity we are talking about here. Assuming the amplitude and phase are conjugate functions (and also that Titchmarsh's theorem applies) may or may not be a valid hypothesis. I was talking about the general case.


We may have a misunderstanding here, perhaps in my interpretation of what you have said.

An all-pass filter is linear and time-invariant, but it is NOT minimum phase. It looked like you were equating linear and time-invariant with minimum phase, and that is not so.

Cheers,
Bob
 
Bob Cordell said:



We may have a misunderstanding here, perhaps in my interpretation of what you have said.

Most likely. According to my definition, a linear and time invariant circuit is called a minimum phase circuit if, and only if, the circuit transfer function and its inverse are both causal and stable.

Causality and stability imply that all poles of the transfer function H(s) must be strictly inside the left half s plane. Adding the condition regarding 1/H(s) leads to the requirement that both the zeros and the poles of a minimum phase circuit must be strictly inside the left half s plane.
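
A minimal numerical sketch of that pole/zero condition, using an arbitrary example transfer function (not any audio circuit):

```python
# Minimal sketch: check that both poles and zeros lie strictly in the left
# half s-plane. The example transfer function is an arbitrary illustration.
import numpy as np
from scipy import signal

# H(s) = (s + 3) / (s^2 + 4s + 8): zero at -3, poles at -2 +/- 2j
num = [1.0, 3.0]
den = [1.0, 4.0, 8.0]
zeros, poles, _ = signal.tf2zpk(num, den)

minimum_phase = np.all(zeros.real < 0) and np.all(poles.real < 0)
print("zeros:", zeros)
print("poles:", poles)
print("minimum phase (poles AND zeros strictly in LHP):", minimum_phase)
```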

Of course, for a passive circuit (and neglecting effects like dielectric absorption) the minimum phase condition can be met for certain topologies. However, according to the definition, a nonlinear or time-variant circuit can't be minimum phase, and to the extent we are concerned here, audio circuits are both nonlinear and time-variant. As a result, the relationship of magnitude response to phase response via the Hilbert transform, (1/π) ∫ x(τ)/(t−τ) dτ with the integral taken from −∞ to +∞, does not hold.
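
For the linear minimum-phase case where that relation does hold, here is a minimal discrete-time sketch (an assumed example filter, not from the post) showing the phase being recovered from the log magnitude alone via the Hilbert transform:

```python
# Minimal sketch: for a minimum-phase (discrete-time) filter, the phase can
# be recovered from the log magnitude alone via the Hilbert transform.
# The FIR filter below is an assumed example with zeros inside the unit circle.
import numpy as np
from scipy.signal import hilbert

b = np.convolve([1.0, -0.5], [1.0, -0.3])   # zeros at 0.5 and 0.3 -> minimum phase
H = np.fft.fft(b, 4096)                     # response around the full unit circle

log_mag = np.log(np.abs(H))
phase_from_mag = -np.imag(hilbert(log_mag)) # Hilbert transform of log|H|
phase_actual = np.unwrap(np.angle(H))

err = np.max(np.abs(phase_from_mag - phase_actual))
print(f"max phase reconstruction error: {err:.2e} rad")
```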

If you are thinking of SPICE simulations, the AC analysis always runs on top of a linearized model. An AC analysis can indicate a minimum phase circuit, but this is not necessarily valid once nonlinearities/large-signal models are considered.
 
One can extend the transfer function idea to a nonlinear time-invariant system though. In the nonlinear time-invariant case, the input-output relationship of the system is given by a Volterra series expansion. The first term of the expansion after the DC offset is just a convolution integral of the input with the first-order Volterra kernel. If we take the "transfer function" to be the Laplace transform of the first-order Volterra kernel, then we could associate a transfer function with the system. It's just that this transfer function no longer completely specifies the system.
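
A minimal sketch of a truncated Volterra expansion along those lines, with arbitrarily assumed kernels:

```python
# Minimal sketch: a truncated Volterra expansion. The linear (first-order)
# term is a convolution with the first-order kernel h1; a simple separable
# second-order kernel h2(t1,t2) = eps*h1(t1)*h1(t2) supplies the lowest-order
# nonlinear term. All kernels and levels here are arbitrary assumptions.
import numpy as np

fs = 48_000
t = np.arange(0, 0.01, 1 / fs)
x = 0.5 * np.sin(2 * np.pi * 1e3 * t)             # 1 kHz test tone

tau = 50e-6                                       # assumed kernel time constant
h1 = np.exp(-np.arange(200) / (fs * tau)) / (fs * tau)

y1 = np.convolve(x, h1)[: len(x)]                 # first-order (linear) term
eps = 0.05
y2 = eps * y1 ** 2                                # second-order term for the
                                                  # separable kernel above
y = y1 + y2
print("rms linear term:   ", np.sqrt(np.mean(y1 ** 2)))
print("rms 2nd-order term:", np.sqrt(np.mean(y2 ** 2)))
# The Laplace transform of h1 plays the role of the "transfer function",
# but it clearly does not capture the y2 contribution.
```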

Now in the real world, and as you said, the system will be time-varying as well, due to thermal effects among other things. Still, such a transfer function can be very useful for things like stability analysis.
 
andy_c said:
One can extend the transfer function idea to a nonlinear time-invariant system though. In the nonlinear time-invariant case, the input-output relationship of the system is given by a Volterra series expansion. The first term of the expansion after the DC offset is just a convolution integral of the input with the first-order Volterra kernel. If we take the "transfer function" to be the Laplace transform of the first-order Volterra kernel, then we could associate a transfer function with the system. It's just that this transfer function no longer completely specifies the system.

Andy,

Absolutely agreed, with one comment: while the Taylor series can be used to approximate the nonlinear response of a system to a given input signal, the Volterra series can be used to approximate the nonlinear response of a system to a given input signal at all other times as well. Therefore, the Taylor series can be used to approximate nonlinear systems, while the Volterra series can be used to approximate systems that are both nonlinear and time-variant. But essentially they are both methods to reduce an arbitrarily complex transfer function to a form which is obviously holomorphic (therefore subject to the minimum phase criteria) and therefore (according to a famous theorem) analytic, and hence subject to pole/zero analysis.

How good such approximations are is very hard to tell. Sometimes the stability can be accurately estimated, sometimes not. In another life I read an article on safety-margin criteria in pole/zero synthesis and worst-case estimation, but I can't find any references now. I'll take a closer look in my files when I'm back home. So many years have passed since...
 
There's some possible confusion in terminology here between "time variant/varying" and memoryless/non-memoryless systems.

Just to clarify things, for time-invariant systems (linear or not), if you have an input x(t) and an output y(t), then for the input x(t-t0), the output will be y(t-t0).

For a memoryless system, the output at time t0 depends only on the input at time t0. For a system with memory, the output at time t0 depends not only on the input at time t0, but also on the input at all times prior to t0 (assuming causality).
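
A minimal sketch of both definitions, using two assumed toy systems (not from the post): delaying the input delays the output for both, but only the system with memory lets one input sample influence later outputs.

```python
# Minimal sketch of the definitions above, using two assumed toy systems:
# a memoryless nonlinearity and a time-invariant system with memory.
import numpy as np

def memoryless_nl(x):
    # Output at n depends only on the input at n
    return np.tanh(x)

def with_memory(x):
    # First-order IIR smoother: output at n depends on all inputs up to n
    y = np.zeros_like(x)
    prev = 0.0
    for n in range(len(x)):
        prev = 0.9 * prev + 0.1 * x[n]
        y[n] = prev
    return y

x = np.sin(2 * np.pi * np.arange(256) / 32)
shift = 8
x_delayed = np.concatenate([np.zeros(shift), x])   # x(t - t0), zero history

for f in (memoryless_nl, with_memory):
    # Time invariance: delaying the input just delays the output
    print(f.__name__, "time-invariant:",
          np.allclose(f(x_delayed)[shift:], f(x)))

# Memory: perturb one input sample and see how many output samples change
x2 = x.copy()
x2[100] += 0.1
for f in (memoryless_nl, with_memory):
    changed = np.flatnonzero(~np.isclose(f(x2), f(x)))
    print(f.__name__, "outputs changed by perturbing x[100]:", len(changed))
```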

I realize that you know this. I'm just trying to get a common terminology here as used in the classic texts.

According to Schetzen's The Volterra and Wiener Theories of Nonlinear Systems, the Volterra series is only valid for time-invariant systems. For LTI systems, it reduces to the convolution integral of linear systems theory. Time-varying nonlinear systems (as defined above) are outside the scope of Volterra analysis. OTOH, the Taylor series applies to the special case of memoryless nonlinear systems.

So in short, the distinction between Volterra and Taylor series analysis for nonlinear systems is that of non-memoryless vs. memoryless nonlinear systems.
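
To make that distinction concrete, a minimal sketch with assumed toy systems: a memoryless cubic (Taylor territory) produces the same relative third-harmonic level at any frequency, while putting a filter ahead of the same cubic (a Wiener-type structure, whose Volterra kernels have memory) makes the distortion frequency-dependent.

```python
# Minimal sketch: memoryless vs with-memory nonlinearity. Both use the same
# cubic term; the second system filters the input first, so its Volterra
# kernels have memory and its distortion becomes frequency-dependent.
# All filter and level values here are assumptions for the example.
import numpy as np
from scipy import signal

fs = 192_000
t = np.arange(0, 0.1, 1 / fs)
b, a = signal.butter(1, 20e3 / (fs / 2))       # assumed 20 kHz one-pole low-pass

def taylor_system(x):                          # memoryless: y = x + 0.1*x^3
    return x + 0.1 * x ** 3

def wiener_system(x):                          # with memory: filter, then cube
    return x + 0.1 * signal.lfilter(b, a, x) ** 3

def hd3_db(y, f0):
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    freqs = np.fft.rfftfreq(len(y), 1 / fs)
    fund = spec[np.argmin(np.abs(freqs - f0))]
    third = spec[np.argmin(np.abs(freqs - 3 * f0))]
    return 20 * np.log10(third / fund)

for f0 in (1e3, 15e3):
    x = np.sin(2 * np.pi * f0 * t)
    print(f"{f0/1e3:4.0f} kHz: memoryless HD3 = {hd3_db(taylor_system(x), f0):6.1f} dB, "
          f"with memory HD3 = {hd3_db(wiener_system(x), f0):6.1f} dB")
```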
 
Who cares? This topic has become some sort of IEEE 'Circuits and Systems' dialogue that means little or nothing to the advancement of audio design. I think that many here are just 'showing off' the fact that they studied this stuff in university. We all have, so what?
 
Just one of my little interjectional notes:

Understanding how to make the circuit behave is well and good, but understanding how the ear hears is ~easily~ half the battle. When you focus on that, and 'grok' it, then such understanding will begin to show how a single-ended tube amp with its rather high distortion figures can be found to sound excellent to the vast majority of listeners.

This sobering point alone shows that a discussion of purely theoretical aspects of electrical function will shed little new light on the essence of what is best in 'audio quality'. Once you get past that point and have a better understanding of what the ear desires to have presented to it... then you can target that in circuit design, and be more successful in such endeavors. What I'm saying is, don't attempt to argue with the ear, you are wasting your time. 🙂
 
There is some real truth in what you say, KBK, but just stating the problem does not really explain WHY this is true. We are not experts on the workings of the human ear, so it is difficult for us to contribute anything meaningful to this thread on that front. However, I would say that some here give the 'ear' a lot less credit than it deserves.
 
KBK said:
Just one of my little interjectional notes:

Understanding how to make the circuit behave is well and good, but understanding how the ear hears is ~easily~ half the battle. When you focus on that, and 'grok' it, then such understanding will begin to show how a single-ended tube amp with its rather high distortion figures can be found to sound excellent to the vast majority of listeners.

This sobering point alone shows that a discussion of purely theoretical aspects of electrical function will shed little new light on the essence of what is best in 'audio quality'. Once you get past that point and have a better understanding of what the ear desires to have presented to it... then you can target that in circuit design, and be more successful in such endeavors. What I'm saying is, don't attempt to argue with the ear, you are wasting your time. 🙂


Correct, but we must take it even further. The ears are just the front end, so to say, of the perceptive apparatus. What you eventually 'hear' (perceive) depends to a very great degree on the interpretation of what the ears offer up. I would say that we understand the mechanics of the ear much better than the interpretive, perceptual processing the signal is subjected to.

Compare it to a scope. The same signal can look very different depending on the settings of your timebase, AC/DC coupling, bandwidth switch, vertical attenuator, triggering, etc. It's like the attention you give to a certain attribute. If you are looking for noise, you use AC coupling and turn up the gain. If you are looking for DC drift, you'd switch to DC coupling. So what you 'see' depends not only on the signal but, to a large extent, on what you're looking for.

Jan Didden
 