Wishlist: Op Amp Characterization Curves in Datasheets

Since it's the holiday season, I thought a wishlist thread would be appropriate! So what additional characterization curves do you wish to see added to op amp datasheets in the future?

I'll start the list:
THD for high source impedances to characterize input impedance linearity
FFTs of the output distortion to show which harmonics are dominant
 
. . . I'll start the list:
THD for high source impedances to characterize input impedance linearity
You could just pick from Groner's op amp distortion file:
SG-Acoustics · Samuel Groner · IC OpAmps
That's by far the best source of measured data. The last update is now several years old. I'd revise the wishlist entry to something like, "Somebody with the motivation, access to the necessary test equipment, and the time to continue Samuel Groner's work."

FFTs of the output distortion to show which harmonics are dominant
Occasionally (but not often) you see separate curves for second and third harmonics. See, for example, Figure 28 (page 13) in the THS6012 Data Sheet at http://www.ti.com/lit/ds/symlink/ths6012.pdf .

Dale
 
If the op amp datasheets are too honest, the result can be reduced sales.

A manufacturer like Analog Devices shows a lot more than many others because they are confident in the quality of their products compared to equivalents from their competitors.

This is a good point. But I want to remind everyone: I work for Texas Instruments, on the Precision Amplifiers team, so I'm well aware of the downsides of over-characterization. There's also the downside of too much data diluting the important metrics. An example of this is older op amp datasheets that have a spec table for every possible power supply voltage combination. It gets old quickly...

My intent for this thread was simply to see how we (TI) can do better in the future. There is quite a bit of analog engineering talent on here, and I felt it was prudent to reach out to the diyaudio community and get some feedback on what you would like to see in the future. I agree that Samuel Groner's work is really impressive, but is it 100% complete? Is there no other testing you would like to see done, regardless of the perceived effect it might have on op amp sales?

On the topic of curves for individual harmonics, separate curves for the 2nd and 3rd harmonics are common for high-speed amplifiers, but I actually find them misleading, because they exclude the higher-order harmonics. I often see the 5th harmonic become dominant in an op amp output stage when it is required to deliver significant current.
 
One thing I would like to see, and it really is a one-liner, is detail on which supply pin the VAS integrator capacitance is referenced to. This is really helpful when thinking about supply decoupling, as the presence of that capacitor makes this pin an additional input. The information is sometimes out there, but it can be hard to track down.

Distortion vs. load somewhere other than 1 kHz would also be appreciated; knowing the sand can drive 600R to +20 dBu is nice, knowing it can still do that at 20 kHz is nicer.
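
Just to put numbers on that (my arithmetic, not anything from a datasheet): +20 dBu is 0.775 V x 10 = 7.75 V rms, or about 11 V peak, so driving 600R to that level means roughly 13 mA rms / 18 mA peak from the output stage.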

High-frequency IMD would be interesting, as would open-loop output impedance, mainly for looking at things like reconstruction filters, where there may be significant energy present well up into the MHz region and some parts run out of GBP.
 
One thing I would like to see, and it really is a one-liner, is detail on which supply pin the VAS integrator capacitance is referenced to. This is really helpful when thinking about supply decoupling, as the presence of that capacitor makes this pin an additional input. The information is sometimes out there, but it can be hard to track down.

Distortion vs. load somewhere other than 1 kHz would also be appreciated; knowing the sand can drive 600R to +20 dBu is nice, knowing it can still do that at 20 kHz is nicer.

High-frequency IMD would be interesting, as would open-loop output impedance, mainly for looking at things like reconstruction filters, where there may be significant energy present well up into the MHz region and some parts run out of GBP.

These are fantastic inputs! To your first point, the big indicator is going to be which power supply rail has the worse PSRR. But the internal topology may not allow for a simple one-line answer in the datasheet.

Great point on the open-loop output impedance. All of our newer op amps include this curve (open loop, NOT closed loop). The effect the open-loop output impedance can have on high-frequency attenuation (especially in Sallen-Key filters) can be significant.
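
For anyone who wants to pull this curve out of a model themselves, a minimal AC fixture along the following lines works in most simulators. This is a sketch only - "OPAMP" and its pin order (+in, -in, V+, V-, out) are placeholders, so check the subcircuit header of the actual model file:

* open-loop output impedance of an op amp macromodel (sketch)
VSP vcc 0 DC 15
VSN vee 0 DC -15
XU1 inp inn vcc vee out OPAMP   ; placeholder pin order
LFB out inn 1G                  ; closes the DC bias loop, open circuit at AC
VIN inp 0 DC 0
ITST 0 out DC 0 AC 1            ; 1 A AC test current into the output
.ac dec 20 10 100meg
* with a 1 A test current, |V(out)| reads directly as Zout in ohms
.end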
 
Datasheet-measured PSRR, CMRR, and open-loop gain should follow the relation in "A General Relationship Between Amplifier Parameters, and Its Application to PSRR Improvement," E. Säckinger, J. Goette, W. Guggenbühl, IEEE Trans. Circuits and Systems, vol. 38, no. 10, Oct. 1991, pp. 1171-1181.
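
If memory serves, the relation comes from the observation that shifting both inputs and both supplies by the same increment looks, to the amplifier, like nothing happened, so the output must shift by that same increment too. Writing the gain from each terminal to the output and normalizing by the open-loop gain gives (sign conventions aside)

1/CMRR + 1/PSRR+ + 1/PSRR- = 1/Aol

so the datasheet curves for these four parameters cannot all be independent.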

This may require adopting a test fixture for the CMRR and PSRR numbers that actually measures them independently of the op amp's gain curve - Bob Pease wrote some on this.


I would like to see updated minimums for popular oldies like the LM3886, which must have moved to new processes with presumably much tighter parameter spreads since the original datasheets were written.


Manufacturer-supplied free Spice models also suck - nobody seems to even use the better macromodel techniques from their own Spice modeling app notes consistently.
And input parasitic Z and CM parts should go to the respective power supply pins instead of the fictitious Spice node 0, which has no place in an op amp model.
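
To make the complaint concrete, here's the difference in Spice terms (component values are illustrative only, not from any particular part):

* common but physically wrong: CM input parasitic returned to node 0
Ccm  inp 0   2p
* closer to the silicon: split it to the actual supply pins, so rail
* bounce couples into the input the way it does in the real part
CcmP inp vcc 1p
CcmN inp vee 1p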

Some seem to have given up altogether on modeling output Z above the unity gain intercept.
To be useful for stability analysis, Spice models should be representative up to 10x the gain intercept.
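
A quick way to check a model against that criterion - again a sketch, with "OPAMP" and its pin order as placeholders for whatever the vendor file actually uses:

* unity-follower loop-gain probe (sketch)
VSP vcc 0 DC 15
VSN vee 0 DC -15
XU1 0 fb vcc vee out OPAMP   ; +in grounded, -in is the feedback node
LBRK out fb 1G               ; loop closed at DC, open at AC
CINJ vi fb 1G                ; injection path: open at DC, short at AC
VINJ vi 0 DC 0 AC 1
.ac dec 50 10 1g
* loop gain T(f) = -V(out)/V(fb); check magnitude and phase well past
* the |T| = 1 intercept, not just at it
.end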

I have complained, and got a "response" that trimmed the Spice model's unity gain intercept phase to <1 degree of the datasheet value while being off >20 degrees just an octave away - below the intercept frequency!

Actually, open-loop gain can change noticeably with loading vs. totally unloaded - factors of 2 aren't impossible.


Macromodeling has always been hit or miss - often the "new guy" may be assigned the job to familiarize him with the products, even if he's never built a model before and hasn't even read the app notes on modeling:

http://www.analog.com/static/imported-files/application_notes/AN-138.pdf

http://www.ti.com/lit/an/sbfa009/sbfa009.pdf

http://www.ti.com/lit/an/snoa265b/snoa265b.pdf

http://www.ti.com/lit/an/snoa247a/snoa247a.pdf

Just checking the pin assignment on the op amp package shows that any model with Spice node "0" in it is suspect - particularly with any subcomponent of the model directly connecting from the op amp pins to Spice gnd.

Even the notes above often get this wrong, with CM Z and leakage parts going to gnd.

What is frustrating is that the right alternative to node "0"/gnd in modeling has been clear since before monolithic op amps and Spice - early Philbrick/Analog Devices papers show input parasitics/compensation referenced to the power supply pins of the op amp modules.

Schematics of the Spice macromodels would also make it easier for experienced analog circuit designers to judge their usefulness - just the Spice text listing is very opaque, and suggests someone is blindly computer-generating it.
 
. . . Manufacturer-supplied free Spice models also suck - nobody seems to even use the better macromodel techniques from their own Spice modeling app notes consistently. And input parasitic Z and CM parts should go to the respective power supply pins instead of the fictitious Spice node 0, which has no place in an op amp model.

Some seem to have given up altogether on modeling output Z above the unity gain intercept.
To be useful for stability analysis, Spice models should be representative up to 10x the gain intercept. . . .

Schematics of the Spice macromodels would also make it easier for experienced analog circuit designers to judge their usefulness - just the Spice text listing is very opaque, and suggests someone is blindly computer-generating it.
Stand up and say that louder!

The great majority of models I see still follow the template from the 1974 paper by Boyle et al. (Which was, in fact, excellent for its time.) One significant motivation for that work was to reduce the amount of computing resources needed to model op amps. Today, even casual hobbyists have both SPICE simulators and computing hardware to run them that must be two or three orders of magnitude more capable than what was available to even major corporations and universities in 1974. Restricting ourselves to Boyle's model is a bit like equipping a 2014 automobile with tires from the 1920s.

As "jcx" pointed out, there are papers and application notes from around 1990 that present much-improved models, but they are essentially ignored by the very organizations that published them.

Dale
 
Stand up and say that louder!

The great majority of models I see still follow the template from the 1974 paper by Boyle et al. (Which was, in fact, excellent for its time.) One significant motivation for that work was to reduce the amount of computing resources needed to model op amps. Today, even casual hobbyists have both SPICE simulators and computing hardware to run them that must be two or three orders of magnitude more capable than what was available to even major corporations and universities in 1974. Restricting ourselves to Boyle's model is a bit like equipping a 2014 automobile with tires from the 1920s.

As "jcx" pointed out, there are papers and application notes from around 1990 that present much-improved models, but they are essentially ignored by the very organizations that published them.

Dale

I agree, it's ridiculous to still be using the Boyle model for an op amp.

However, the issue manufacturers run into with Spice models is NOT computing power, but rather convergence in simulation. From our experience, the Spice engine used by TINA-TI is perhaps too forgiving and converges rather readily, but other commonly used simulators are much more rigid and may not converge with a model that worked fine in TINA. Unfortunately, as you increase Spice model complexity, the likelihood of convergence in all simulators decreases drastically.

Therefore the challenge with Spice models is to create a model that:
1. Accurately represents important op amp behavior.
2. Simulates properly in all simulators. (This is no small challenge, considering the number of simulators currently floating around...)
3. Can be rapidly developed for all new op amps.

Furthermore, the issues raised about over-characterization in datasheets affecting sales also ring VERY true for Spice models. You can lose business because a model doesn't simulate, or because it predicts improper operation in the circuit. I have personally seen opportunities where our part was initially designed out in favor of a competitor's part because the competitor's Boyle model didn't show a stability issue in simulation. Forget what reality might be - their part works fine in simulation...

I'm definitely not making excuses, though; the models can be better, much better. It's just that the priority is to model basic behavior and ensure convergence in all simulators, then add complexity and re-check, and repeat...

It's been a while since TI has put out a Boyle model, though; open up the OPA172 model for a look at a typical newer op amp model.

But this thread was supposed to be about datasheet characterization, not Spice models!
 
I don't think the Spice model is such a tangent (certainly not as far as a wishlist goes).

The SPICE model is as much a piece of published information as the datasheet is.

Obviously it would be nice to wish for a model from which you could effectively characterise any operating point of the device, making individual characterisations in the datasheet somewhat redundant.

However, reality strikes us all (in different ways).
 

Some seem to have given up altogether on modeling output Z above the unity gain intercept.
To be useful for stability analysis, Spice models should be representative up to 10x the gain intercept.

I have complained, and got a "response" that trimmed the Spice model's unity gain intercept phase to <1 degree of the datasheet value while being off >20 degrees just an octave away - below the intercept frequency!

Believe it or not, sometimes a full top-level sim won't do better, especially considering external unmodeled components.
 
I don't think the Spice model is such a tangent (certainly not as far as a wishlist goes).

The SPICE model is as much a piece of published information as the datasheet is.

Obviously it would be nice to wish for a model from which you could effectively characterise any operating point of the device, making individual characterisations in the datasheet somewhat redundant. However, reality strikes us all (in different ways).
I wish the published models would include a summary of which characteristics are, and are not, modeled. I seem to recall seeing this kind of information in several of the models from Burr-Brown, and a few from Analog Devices, but mostly it is omitted entirely.

Of course, no model will ever anticipate all the different ways an op amp (or any complex component) will be used. By definition, "engineering" is a mixture of roughly equal parts science and creativity. However, it would be reassuring to know whether (or not) the modeling process had considered some of the more common "non-standard" ways to use an op amp. Cases in point:

  • The offset-adjustment and compensation circuitry is ignored by just about every model I've ever seen. If we're going to use simulation to evaluate circuit stability, then modeling the behavior of the COMP pins would seem to be a fundamental prerequisite.
  • Further elaborating on the above point, the offset and comp pins provide useful access to the IC's internals in the NE5534 (and, I believe, a few others). I recall app notes from the 1970s and '80s where significant performance improvements were claimed by replacing the 5534's front end with an outboard discrete design interfaced through these pins. But simulating and optimizing such circuits is impossible without including those pins in the models!
  • Mark Alexander (and a few others) have cleverly used an op amp's power supply pins as output terminals. Again, it's impossible to model this kind of circuit if the (admittedly complicated) interactions between the power pins and the output pins aren't modeled.
Dale
 
Something I would like to see a lot more often in op amp datasheets is typical CMRR(f), which I believe is closely tied to common-mode distortion at higher frequencies and could thus explain the differences among various manufacturers' 5532s (and why 5532s can be distinctly average and outclassed by nominally lesser parts in inverting operation). I bet that folks building balanced line receivers would not be unhappy about this either.
 
Max supply voltage vs. recommended supply voltage

I mean the plots given for different supply voltage configurations. I have to admit I like that content, because one can see, e.g., that the LME49720NA tends to have better characteristics at higher voltage. Most likely that is true for the majority of op amps.

But once in a while I stumble across the recommended supply range vs. the max supply voltage range, and there is no conclusive plot.
Does the recommended voltage result in better measurements, or is it recommended because operation is more stable, or because the chip is less likely to degrade in a max-temperature environment? The LME49720 recommends ±17 V, but ±18 V is possible.

For instance, if it is the last reason, I would just crank the voltage to the limit (assuming it will measure better). But if it measures better at lower voltages, and that's why it is recommended, I would turn down the supply voltage.

On the other hand, I do understand the arguments against too many plots.
Maybe a footnote briefly describing the reason for the recommendation, or the consequences of being outside the recommended spec, would be very useful for squeezing out the maximum.
 
If by max supply you mean absolute maximum, then you simply should not be using the device at that voltage in the first place. Go by the recommended operating voltage range for decent reliability. There are always going to be spikes on voltage rails to worry about - you don't want a distant thunderstorm to take out all your op amps!



For most op amps with a 36 V absolute-max supply that means ±15 V supplies, and for the NE5532, with its 44 V supply range, ±17 V or ±18 V is often recommended. Thermal issues start to come into it for standard op amps running at ~4 mA per amplifier: that's about 300 mW for an NE5532 at ±18 V, which is pretty hot for a DIL8, let alone an SOIC8. This is probably why dual op amps have more or less taken over from quad op amps.
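
The arithmetic behind that, for what it's worth (rough numbers, not from any datasheet table): a dual at ~4 mA per amplifier on ±18 V dissipates about 2 x 4 mA x 36 V = 288 mW, while a quad at the same current would sit around 4 x 4 mA x 36 V = 576 mW in a single package - so the quad runs out of thermal headroom first.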
 
I know that driving the op amp at its specified limits has its drawbacks. But exactly that is my point:

Usually in engineering you cannot separate good from bad specs with a razor-sharp line; statistically that is almost impossible (Gaussian or other distributions). You define some max spec point, usually under some referenced test procedure: we (the company) guarantee the functionality of this product under the stated conditions (datasheet), and it will not fail prematurely, or fails only at some rate defined by legislation or a standard - 3 sigma, whatever.
There are always safety margins.

The company cannot know whether I use heatsinks or sophisticated surge protectors; it's impossible to cover all use cases. But if they state a max voltage, I know that I can safely use the device at that voltage. It is common sense that at the limit the part is more likely to be damaged - I guess most engineers reading these datasheets know that being at the edge of the specs means being at the edge. Educated guesses or investigations should be done to determine whether the risk is worth taking.

My idea relates to, e.g., the LME49720NA: the recommended supply voltage goes up to ±17 V, while the max supply voltage is ±18 V. Looking at the plots for ±12/±15/±17 V, one can clearly see that almost all characteristics get better with higher voltage. Only crosstalk tends to get bumpier in the ±17 V frequency plot.
By extrapolation I would argue these trends continue at ±18 V. So it generally measures better - but does TI recommend ±17 V because in some odd situations the op amp tends to oscillate and get hot, so that with an ambient temperature close to 80 C safe operation is doubtful (thermal runaway of the JFETs)? Or is ±17 V recommended because we are at the turning point where the characteristics get worse again, so there is no benefit to a higher rail-to-rail voltage even though the chip would have no issue with it?

If it measures better but a thermal issue might occur, a heatsink will be a quick fix.
If it measures worse, reducing the voltage will be the way to go.
So why the recommendation?

Btw, at the moment I'm pushing the LME49720/OPA1612/OPA1642/OPA2134/MUSES8920E with ±18 V (36 V across the rails) and they are not even close to what I'd consider hot. Putting a finger on them gives no feeling of heat or onset of pain, which means we are well below 45-50 C (depending slightly on thermal conductivity). Considering that ICs built on doped silicon seriously start to degrade around 110-120 C, we are far away from diffusion processes - and that range is, by the way, roughly the max specified ambient temperature of 85 C plus my crudely "measured" 30 C chip temperature (plus the delta of package vs. die temperature). The higher risk of some barrier breakdown at ±18 V is of course still there.

So why the recommendation - measurements? Thermal concerns? Other effects? A brief footnote would help, e.g., "for best measurements we recommend ±5 V to ±15 V...", or an inverse statement like "in hot environments we do not recommend supply voltages above ±17 V" or "for capacitive loads, supply voltages below 34 V rail-to-rail are to be preferred"...
 