Bob Cordell Interview: Error Correction

traderbam said:
I have a problem with that last equation. As I see it, the two "error"s are not equal unless "error" is independent of "in". Since the "error" is a function of "in", the "actual_in - error" will produce a different "error" at "out".

It's assumed you're computing the error in real time, as it's occurring. In the case of the sim, it's an infinite-bandwidth error correction, except for the compensation capacitor C1. Try this in the sim. Delete the compensation capacitor C1. The circuit will be marginally stable but not an oscillator. Switch to transient and do a "Simulate, Run". Choose V(out) as the output. After the sim completes, do a "View, SPICE error log". This will show the results of the THD analysis specified in the .FOUR directive. It should read zero (below the LTSpice residual).

Clearly this is an idealized case, but its intent is proof of concept.
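
For anyone curious what that .FOUR analysis is actually reporting, here is a rough Python sketch (numpy assumed; the waveform below is synthetic, not the real simulated V(out)): project the transient waveform onto the fundamental and its harmonics and take the RMS ratio.

Code:
# Rough sketch of the harmonic analysis LTspice's .FOUR directive performs.
# The waveform here is synthetic (a 1 kHz sine plus a small 3rd harmonic),
# standing in for the simulated V(out).
import numpy as np

f0 = 1e3                            # fundamental, Hz (assumed)
fs = 1e6                            # sample rate of the "transient" data (assumed)
t  = np.arange(0, 10 / f0, 1 / fs)  # ten full cycles

vout = np.sin(2 * np.pi * f0 * t) + 1e-5 * np.sin(2 * np.pi * 3 * f0 * t)

def harmonic_amplitude(v, t, f):
    # Fourier projection of v onto frequency f
    a = 2 * np.mean(v * np.cos(2 * np.pi * f * t))
    b = 2 * np.mean(v * np.sin(2 * np.pi * f * t))
    return np.hypot(a, b)

fund = harmonic_amplitude(vout, t, f0)
harm = [harmonic_amplitude(vout, t, n * f0) for n in range(2, 10)]
thd  = np.sqrt(np.sum(np.square(harm))) / fund
print(f"THD = {100 * thd:.6f} %")   # ~0.001 % for this synthetic waveform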
 
Re: Re: Odd IRF p-channel behavior

ilimzn said:


I have to admit I was not aware of this either. It is precisely the very low frequency at which the dip occurs that is more than curious. It would be possible to eliminate Cgd as a factor by reducing the drain voltage swing to near zero (cascoding, small drain resistor, current probe...). Although, if it turns out it is Cgd, it must be a truly monumentally nonlinear capacitance, something akin to a ferroelectric effect - but then this would actually account for the shelf in the gm curve, not the dip (lower output keeps the Cgd swing smaller but the DC component on Cgd larger, possibly below the threshold of serious Cgd nonlinearity). I freely admit I am no expert at semiconductor processes and MOSFET physics, but I can't recall any fundamental mechanism that would change gm this way - not with such long time constants. Not even electrostriction effects would do this, given the size of the die.
Whatever it is, it's certainly one of those things you would have great difficulty achieving on purpose :)


At the institute in Russia where I did my diploma, perfume was forbidden after one lady broke up with her boyfriend, changed her perfume, and affected production.
:cool:
 
mikeks said:
It has occurred to me that resistors 21 and 22 in Yokoyama's figure 2 may be completely superfluous.

On the other hand I could be wrong...:scratch2:


Since the error-sense LTP is a transconductance stage, it needs resistive loading to subtract the exact amount of error.
Note that R21||R22 is the same in value as R43+R44.
Again, this is more or less the same as Fig. 4 in Hawksford's paper.
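
As a generic illustration only (the numbers are made up, not Yokoyama's values): the LTP turns the sensed error into a current, so exactly one "unit" of error gets subtracted only when the gm times load-resistance scale factor works out to unity, which is presumably what the matched resistor values arrange.

Code:
# Generic illustration (made-up values, not from Yokoyama's figure 2):
# an error-sensing transconductance (LTP) stage subtracts the error exactly
# only when its gm * R_load scale factor is unity; any mismatch leaves a
# residual proportional to the scale error.
gm      = 1.0e-3        # LTP transconductance, A/V (assumed)
r_load  = 1.0e3         # resistive load on the LTP, ohms (assumed)
v_error = 0.05          # sensed error voltage, V (assumed)

v_subtracted = gm * r_load * v_error
v_residual   = v_error - v_subtracted

print(f"scale factor gm*R = {gm * r_load:.3f}")
print(f"residual error    = {v_residual * 1e3:.2f} mV")  # 0 only when gm*R == 1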
 
AX tech editor
lumanauw said:
Hi, Bob Cordell,

I've been wondering about this for a long time, but couldn't find the answer.

In your EC (Hawksford), looking at the centre of the EC there are two transistors that also work as a VBE multiplier.
The input signal (from the VAS, then to the predriver) enters the emitters of these transistors. The output signal enters the bases of these transistors. The base is more sensitive than the emitter (the emitters need a big current, while the base needs only a small current to drive the transistor). If there are anomalies (like speaker impedance nonlinearity, HF intrusion, or the loudspeaker's back-EMF), they will enter the EC system through the base (the sensitive part).

Is there any alternate design of your EC (Hawksford) where the input (from the VAS/predriver) enters a base, and the output signal enters the emitters? The key operation is still the same, namely the Vbe, but with the input/output positions swapped. I tried to draw it, but so far without success....


David,

You should look at the emitter current that results from the 'error' between the Vas and the Vout. That error current is duplicated at the collectors of the bias/EC transistors and added to (or stolen from) the drive current. This is what effects the EC: the current gain from emitter to collector is almost exactly 1, so the error current from the Vas (the current it lacks or has in excess for ideal amplification) is corrected at the output transistor bases. Hmm. Not sure this is clear....
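
To put rough numbers on it (made-up values, purely for illustration): the emitter-to-collector current transfer is alpha = beta/(beta + 1), so whatever error current the emitters sense reappears almost exactly at the collectors.

Code:
# Illustration with made-up numbers: the sensed error current is reproduced
# at the collectors with gain alpha = beta/(beta + 1), which is nearly 1.
beta  = 200                      # assumed current gain of the bias/EC transistors
alpha = beta / (beta + 1)        # emitter-to-collector current transfer

i_error      = 1.0e-3            # 1 mA of error current sensed at the emitters (assumed)
i_correction = alpha * i_error   # current actually added to (or stolen from) the drive

print(f"alpha = {alpha:.4f}")
print(f"correction = {i_correction * 1e3:.4f} mA  (shortfall {100 * (1 - alpha):.2f} %)")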

Jan Didden
 
Error correction corrected

I've figured this "error correction" system out. I've read Hawksford's 1981 paper. I'm both intrigued by the superficially seductive algebra and appalled by its complete failure to admit the impractical assumptions it relies upon. I now see why I was and still am having trouble with fig 11 in Bob's paper.

As shown, fig 11 is impossible except when e(x)=0. Sorry. :sorry:

The error in the diagram is that the output of S2 does not equal e(x) and the output of S1 does not equal x + e(x).

This is a case where Mikeks can declare "singularity" with impunity! :smirk:

IMO it is misleading, if not entirely incorrect, to call this a feed-forward system. Should feedback have been redubbed "error correction"? :spin:
 
EC Patents

SY said:
The problem is not the invalidity, but the cost to anyone being sued by patent holders to get to the point where invalidity can be considered by a judge. That's fine if you've got a few hundred thousand bucks to spare...


Sy you're right, and patent litigation can be very expensive. However, it works both ways - a company might think long and hard before actually taking someone to court if their patent is on weak ground, and spending all that money, and then risking losing any value the patent has with respect to any other licensees or others who are intimidated enough by the patent to not use a design that MIGHT infringe the patent. The whole process is unfortunately unpredictable, so it can be a game of chicken.

The presence of the Halcro patent should not make anyone think twice about using error correction, especially if they don't choose to use the rather expensive and clumsy output-based bootstrapping that Halcro claims. Hawksford and I both used what amounts to feed-forward bootstrapping of the EC circuits.

Bob
 
Re: Error correction corrected

traderbam said:
I've figured this "error correction" system out. I've read Hawksford's 1981 paper. I'm both intrigued by the superficially seductive algebra and appalled by its complete failure to admit the impractical assumptions it relies upon. I now see why I was and still am having trouble with fig 11 in Bob's paper.

As shown, fig 11 is impossible except when e(x)=0. Sorry. :sorry:

The error in the diagram is that the output of S2 does not equal e(x) and the output of S1 does not equal x + e(x).

This is a case where Mikeks can declare "singularity" with impunity! :smirk:

IMO it is misleading, if not entirely incorrect, to call this a feed-forward system. Should feedback have been redubbed "error correction"? :spin:

I think at least part of the confusion must reside in the fact that Bob calls his error "e(x)".

This implies, by mathematical convention, that "e" is a function of "x", which it most certainly is not.
 
Re: Error correction corrected

traderbam said:
Should feedback have been redubbed "error correction"? :spin:

I suspect the term "error correction" is an error. :smirk:

Error feedback would be more apt. :scratch2:

Here is figure 11 as I think it should have been, in the interest of clarity:
 

Attachments

  • fig11.png
"error feedback" is more apt but what is negative feedback if it is not this?

In your diagram, to agree with Hawksford, I think the signs of S2 are the wrong way around and the "e" term should be subtracted at the output stage summer. I understand your point and agree because you have assumed the output stage summer is linear - that the "e" term is added after and is independent of the input to the summer, which is "x + e". But the whole point is that the output stage summer is non-linear: this is the device whose distortion is meant to be reduced.

I've observed two feedback loops in this feedback system. Can you see them?
 
traderbam said:
Andy,
I've looked at the whole circuit a little. I see what it does. What were you intending it to do?

I've been acquainting myself with Bob Cordell's MOSFET with error correction paper. I'm not sure I understand fig. 11. In the text it says wrt e(x) that "This error signal is then added to the input of the power stage by summer S2 to provide that distorted input which is required for an undistorted output. Note that this is an error-cancellation technique like feedforward as opposed to an error-reduction technique like negative feedback".

But the diagram doesn't add up to me. If the input to the output stage is x + e(x) then the output must surely be x + e(x) - e{x + e(x)}, rather than x. And doesn't this assume that the e(x) function is linear itself? So I don't see how the output error can ever be cancelled unless e{x + e(x)}=0, which resident mathematicians may well offer some solutions for, other than e(x)=0.

It may be the ale, but the diagram looks unstable to me, as it were :clown:. The x and e(x) tail chase round the loop reminds me of something...
:boggled:


I should never have used the term feedforward in that article, even though I was not describing the circuit as a feedforward circuit, but rather as one that behaves like a cancellation process, as opposed to a process where an error is progressively driven smaller as the loop gain is made larger. There is literally a pot in the actual circuit implementation that can be adjusted for a distortion null. I apologize for the confusion that I may have created by using the analogy to feedforward.
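
To see how the algebra of the diagram closes, here is a small sympy sketch (the gain-plus-additive-error model of the output stage and the summing gain k are shorthand assumptions, not the paper's notation): with the sensed error fed back at exactly unity gain, which is what that pot sets, the output collapses to x no matter what the error term is, while off the null a proportional residual remains.

Code:
import sympy as sp

# Shorthand model (not the paper's notation): x = input, a = output-stage input,
# g = output-stage gain (~1), d = its additive error, k = the gain with which
# summer S2 feeds the sensed error back (the trim-pot setting).
x, a, g, d, k = sp.symbols('x a g d k')

out = g * a + d                 # output stage: gain g plus additive error d
err = a - out                   # S1: sensed error = (stage input) - (stage output)
eq  = sp.Eq(a, x + k * err)     # S2: stage input = source + k * sensed error

a_sol   = sp.solve(eq, a)[0]
out_sol = sp.simplify(out.subs(a, a_sol))

print(sp.simplify(out_sol.subs(k, 1)))   # -> x: exact cancellation at the null
print(sp.simplify(sp.diff(out_sol, d)))  # -> (1 - k)/(1 - k + g*k): leak-through off the null

Since the k = 1 result does not depend on the value of d, it holds even when d is a nonlinear function of the stage input; the catch is that k = 1 is also the condition for infinite loop gain around S1 and S2, which is where the stability discussion comes in.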

In that same paragraph, you should have also included in the quote "This technique is in a sense like the dual of feedforward....The technique of Figure 11 also tends to become less effective at very high frequencies because, being a feedback loop (albeit not a traditional negative feedback loop), it requires some amount of compensation for stability, detracting from the phase and amplitude matching."

Bob
 
Re: Error correction corrected

traderbam said:
I've figured this "error correction" system out. I've read Hawksford's 1981 paper. I'm both intrigued by the superficially seductive algebra and appalled by its complete failure to admit the impractical assumptions it relies upon. I now see why I was and still am having trouble with fig 11 in Bob's paper.

As shown, fig 11 is impossible except when e(x)=0. Sorry. :sorry:

The error in the diagram is that the output of S2 does not equal e(x) and the output of S1 does not equal x + e(x).

This is a case where Mikeks can declare "singularity" with impunity! :smirk:

IMO it is misleading, if not entirely incorrect, to call this a feed-forward system. Should feedback have been redubbed "error correction"? :spin:


I beg to differ. I don't think you've yet figured it out.

As I mentioned above, I did not call it feedforward. It's my fault - I should have written it more carefully so that people who don't read it carefully still are not misled.

Bob
 
Bob,
Is it not the case, though, that the Hawksford system, when used in feedback mode (b=0, a=1), is a negative feedback system in disguise? I say this because I see that the difference between output and input is driven to zero by virtue of the enormous forward gain that is generated by a positive feedback loop. In your fig 11, this is the regenerative loop between S1 and S2. The negative feedback from the output acts to control that positive loop.

Suppose the -ve input to S1 were disconnected. Any non-zero value of x at S2's input would cause the output of S2 to seek infinity. Not unlike how an op-amp would behave open-loop if it had infinite gain, infinite slew rate and infinite bandwidth. Hawksford's algebra neatly eliminates the output error term because there are no practical constraints in the algebra.
 
traderbam said:
Bob,
Is it not the case, though, that the Hawksford system, when used in feedback mode (b=0, a=1), is a negative feedback system in disguise? I say this because I see that the difference between output and input is driven to zero by virtue of the enormous forward gain that is generated by a positive feedback loop. In your fig 11, this is the regenerative loop between S1 and S2. The negative feedback from the output acts to control that positive loop.

Suppose the -ve input to S1 were disconnected. Any non-zero value of x at S2's input would cause the output of S2 to seek infinity. Not unlike how an op-amp would behave open-loop if it had infinite gain, infinite slew rate and infinite bandwidth. Hawksford's algebra neatly eliminates the output error term because there are no practical constraints in the algebra.


Yes, you are exactly right. One can re-draw the block diagram with the summers in such a way that the whole thing looks like an inner positive feedback system with PFB gain = 1 (resulting in infinite gain) that is then enclosed in a negative feedback system. Both ways of looking at it are equally valid, but each way lends one a different perspective on how it works and on how best to apply the technique. The whole thing works as well as it does because those two loops are implemented in a very tight arrangement with synergy from shared devices.

The key to making it work in practice, which Hawksford did not address in his original paper, is in compensating it in such a way that it is robustly stable while not giving up too much of its error-correcting ability at high frequencies (where it is needed the most). This is the best application that I know of where the higher small-signal speed of the MOSFETs really pays off, since the less excess phase they introduce into this tight loop, the more effective the error correction can be at high frequencies.
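
In symbols (a sketch only; the unity forward path, the factor p and the gain G below are an assumed model, not the actual circuit): as the inner positive-feedback factor approaches 1, the inner gain goes to infinity and the enclosing unity negative feedback pins the overall gain at exactly 1.

Code:
import sympy as sp

# Assumed model: p = inner positive-feedback factor, G = output-stage gain (~1).
p, G = sp.symbols('p G', positive=True)

A_inner = 1 / (1 - p)                                   # inner loop: unity path with positive feedback p
A_cl    = sp.simplify(A_inner * G / (1 + A_inner * G))  # unity negative feedback closed around it

print(sp.limit(A_cl, p, 1))        # -> 1: infinite inner gain forces out = in
print(sp.simplify(1 - A_cl))       # -> (1 - p)/(1 - p + G): the error left when p falls short of 1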

BTW, I think that one would have to be borderline suicidal to use this circuit without any output HF isolating inductor at all (because of the resulting unpredictable effect of the load on the excess phase). But note that I was able to use a very small inductance.

Bob
 
Bob,
I share your perspectives and I agree completely with your caution about ensuring stability of the circuit. I choose to look at it like a high feedback system and mention it here because I would like anyone who is thinking of implementing it to treat it with no less rigour than they would a conventional feedback arrangement. By considering it as a pair of feedback loops it may be easier to appreciate it and analyse it in stability terms...something that I don't think is all that obvious from the Hawksford paper.
Brian
 
traderbam said:
Bob,
I share your perspectives and I agree completely with your caution about ensuring stability of the circuit. I choose to look at it like a high feedback system and mention it here because I would like anyone who is thinking of implementing it to treat it with no less rigour than they would a conventional feedback arrangement. By considering it as a pair of feedback loops it may be easier to appreciate it and analyse it in stability terms...something that I don't think is all that obvious from the Hawksford paper.
Brian


Bingo! Amen.

Bob
 
Andy_c wrote:
Try this in the sim. Delete the compensation capacitor C1. The circuit will be marginally stable but not an oscillator. Switch to transient and do a "Simulate, Run". Choose V(out) as the output. After the sim completes, do a "View, SPICE error log". This will show the results of the THD analysis specified in the .FOUR directive. It should read zero (below the LTSpice residual).
That worked. It is interesting. Forget stability! :clown: I found that 50 pF gives a reasonable stability margin and gets 0.0003% THD with the load you supplied. As you pointed out, this looks great in the rarefied atmosphere of SPICE. Getting it to behave in practice will be no small challenge.
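
To get a feel for that trade-off, here is a rough single-pole sketch (all numbers invented, nothing to do with the actual circuit): letting the compensation roll off the EC summing gain k makes the error leak-through, (1 - k)/(1 - k + k*g) in the notation used earlier in the thread, climb toward unity at high frequencies, which is exactly where the correction is needed most.

Code:
# Rough single-pole sketch (invented numbers): rolling off the EC summing gain k
# with a compensation pole raises the error leak-through (1-k)/(1-k+k*g) at HF.
import numpy as np

g  = 0.98                     # output-stage gain, a bit below unity (assumed)
fp = 2e6                      # pole frequency set by the compensation (assumed)
f  = np.logspace(3, 8, 6)     # 1 kHz ... 100 MHz

k = 1 / (1 + 1j * f / fp)     # EC summing gain rolls off above fp
leak = np.abs((1 - k) / (1 - k + k * g))

for fi, li in zip(f, leak):
    print(f"{fi:12.0f} Hz   error leak-through = {li:.4f}")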