Spice simulation

john curl said:
Pardon me for not knowing in advance, but is there an automatic component tester that will give accurate values for SPICE models?

Hi John,

It's been about 8 years since I last did automated test equipment stuff, and I was not doing SPICE model development at the time. However, a colleague of mine was doing this, so I can report, at least second hand, some of what was involved. I suspect Scott has an excellent handle on this.

Anyway, one item used was an Agilent network analyzer and S-parameter test set. The S-parameters are used instead of trying to measure, say, y or z directly because the 50 Ohm terminations in the test set prevent the devices from oscillating. The S-parameters are measured at a bunch of bias points and converted to h-parameters using the standard formulas for doing this. Then, for example, fT is computed from hfe. Other h-parameters may be used in the parameter extraction process as well - I'm not exactly sure.
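For anyone curious, the conversion and the fT estimate can be sketched in a few lines. This is a generic illustration, not Agilent's actual procedure; the formulas are the standard two-port S-to-h conversion for a common reference impedance Z0, and the fT estimate assumes you are measuring in the -20 dB/decade rolloff region of hfe:

```python
# Sketch: convert measured two-port S-parameters (50 ohm reference) to
# h-parameters, then estimate fT from |h21| (= hfe in common emitter).
# Standard conversion-table formulas; no measured data here.

Z0 = 50.0  # reference impedance of the S-parameter test set

def s_to_h(s11, s12, s21, s22, z0=Z0):
    """Standard two-port S- to h-parameter conversion."""
    d = (1 - s11) * (1 + s22) + s12 * s21
    h11 = z0 * ((1 + s11) * (1 + s22) - s12 * s21) / d
    h12 = 2 * s12 / d
    h21 = -2 * s21 / d
    h22 = ((1 - s11) * (1 - s22) - s12 * s21) / (z0 * d)
    return h11, h12, h21, h22

def ft_from_hfe(freq_hz, hfe_mag):
    """In the -20 dB/decade region |hfe| falls as fT/f, so fT ~ f * |hfe|."""
    return freq_hz * hfe_mag

# Sanity check: an ideal through line (S11=S22=0, S21=S12=1) should give
# h11=0, h12=1, h21=-1, h22=0, i.e. V1=V2 and I2=-I1.
print(s_to_h(0, 1, 1, 0))
```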

My colleague also used an Agilent Semiconductor Parameter Analyzer. This thing is like a super fancy digitizing curve tracer, with "personality modules" for testing the transistors in various configurations. It can do things like logarithmic Vbe sweeps for measuring Ic and the like.
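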

Then there will be miscellaneous other instruments as needed, maybe DVMs and such. These are all gathered in a rack and controlled by computer. There is special software made by Agilent called IC-CAP. I am not completely aware of the specifics of the software, but I believe it brings together the automated device measurement and the model parameter extraction.

As you can imagine, this gets expensive and complex. Some of what needs to get measured is not what you'd generally see on a device data sheet. The device data sheet parameters are not enough for a truly complete parameter extraction.

Some of the semiconductor vendors are apparently subcontracting the model parameter extraction to a company that's using a tool called MODPEX. This tool was originally developed by what is now Synopsys. Apparently the tool can scan the paper data sheets and do a "best effort" parameter extraction. As far as I can tell, one step that's missing from this process is putting the completed model back into the simulator and running a set of curves to see whether the simulated data predicted by SPICE matches the datasheet data. These kinds of tools use an optimization process to fit the parameters. Optimization itself is somewhat of a black art. The problem is that most optimizers tend to converge on a local minimum of the function being optimized (the fit), rather than the best minimum obtainable. Sometimes when this happens, the results can be completely ridiculous. This, combined with the fact that the resulting model parameters aren't being put back into a simulator to verify the performance, is a recipe for many of the problems of models that completely suck.
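To make the local-minimum trap concrete, here is a toy sketch (nothing to do with MODPEX internals): plain gradient descent on a double-well "fit error" settles into whichever valley the starting guess is nearest, and only one of the two valleys is the true best fit.

```python
# Sketch of the local-minimum trap in parameter fitting: gradient descent
# on a double-well error function lands in the valley nearest its start.

def err(x):
    # Two minima, near x = +1 and x = -1; the +0.3*x tilt makes the
    # left one the global minimum.
    return (x * x - 1.0) ** 2 + 0.3 * x

def grad(x):
    return 4.0 * x * (x * x - 1.0) + 0.3

def descend(x, lr=0.01, steps=5000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x_right = descend(0.9)   # starts near the local (worse) minimum
x_left = descend(-0.9)   # starts near the global minimum
print(x_right, x_left, err(x_right), err(x_left))
```

Both runs converge and both gradients go to zero, yet only the left result is the best obtainable fit; an optimizer that reports "converged" tells you nothing about which valley it found.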

I've got some examples of model parameter fitting on my web pages. My process is very manual and very time consuming. Teodoro has been working on automation of a similar process, and has gotten some very interesting results. But one problem is that the data on the data sheet is not enough to do a near-perfect job.
 
We used h-parameters for microwave devices in a graduate microwave design class that I took years ago. I think that the Agilent test equipment is more important with these faster devices where more parameters are required. It would of course also work for lower frequency devices, but I think that this Tektronix 370 unit is more commonly used for static parameters:
http://www.valuetronics.com/vt/assets/pdfs/TEK_370_371.PDF

Perhaps I should have said that I think many use the Tek 370 with minimal additional testing to determine ft and some of the capacitance values, when much more accurate HF results would probably be obtained with the Agilent system.

I've recently been looking into the Tek 370 for a client looking at basic testing.

Pete B.
 
I didn't know that MODPEX was from Synopsys. I've used Synopsys tools extensively, and in my experience they usually do not want to hear that they need improvement. Their tools are very expensive and they want the reputation of being the best. They had one of their PhDs come out and give us a talk, and it really opened my eyes as to how they didn't even try to apply or adapt well-known software optimization methods to hardware synthesis. They were trying to play it safe and encouraged a very low-level structural coding style, which negates many of the advantages of hardware synthesis.

Anyway, I get the impression that these Spice model services are just cranking out the models as fast as they can, with little concern about accuracy or about putting out defective models. They probably have a productivity ranking for workers based only on models per hour. I remember when ISO9000 was being introduced and metrics were the big buzzword going around. They thought it was so simple: software people should be ranked by lines of code per hour, and hardware people by pins or connections per hour. This absurd ranking valued software and hardware people who produced inefficient code/designs higher than the better engineers who provided optimized designs.

It's like Dilbert, and it was real in a multi-billion dollar company LOL!

Pete B.
 
PB2 said:
I didn't know that MODPEX was from Synopsys. I've used Synopsys tools extensively, and in my experience they usually do not want to hear that they need improvement. Their tools are very expensive and they want the reputation of being the best. They had one of their PhDs come out and give us a talk, and it really opened my eyes as to how they didn't even try to apply or adapt well-known software optimization methods to hardware synthesis. They were trying to play it safe and encouraged a very low-level structural coding style, which negates many of the advantages of hardware synthesis.

Anyway, I get the impression that these Spice model services are just cranking out the models as fast as they can, with little concern about accuracy or about putting out defective models. They probably have a productivity ranking for workers based only on models per hour. I remember when ISO9000 was being introduced and metrics were the big buzzword going around. They thought it was so simple: software people should be ranked by lines of code per hour, and hardware people by pins or connections per hour. This absurd ranking valued software and hardware people who produced inefficient code/designs higher than the better engineers who provided optimized designs.

It's like Dilbert, and it was real in a multi-billion dollar company LOL!

Pete B.

Hi Pete,

These guys from MODPEX have not only done a bad job; even worse, they have given SPICE a bad reputation.
Luckily, even a novice will soon discover that their models are just cr@p.

Cheers, Edmond.
 
andy_c said:


My colleague also used an Agilent Semiconductor Parameter Analyzer. This thing is like a super fancy digitizing curve tracer, with "personality modules" for testing the transistors in various configurations. It can do things like logarithmic Vbe sweeps for measuring Ic and the like.

I got my PhD in device/material measurements, characterization, qualification and parameter extraction in the early 90's. At that time, while HP, Keithley and Tektronix were the leading edge in characterization equipment, extracting Spice models was a cumbersome and semi-manual process. As far as I am aware, little has changed since...

There are at least two levels of complexity here. First, there are the parameter definitions and the extraction methodology. Take for example the MOSFET threshold voltage. There are several methods to define this parameter, starting with "the gate voltage for a certain drain current" (device in saturation or in the linear region? And how much current? 1uA? 1mA?) and ending with more elaborate definitions like "the X-axis intercept of the Id-Vgs extrapolation in the linear region (Vds very small)". Different definitions provide different values for the same parameter! And to add insult to injury, from a designer's perspective there is always a "best definition" depending on the application type. E.g. for CMOS design the extrapolated threshold voltage works best, while for linear (and discrete) devices the drain current in saturation (with a value depending on the application type) works best.
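A toy numerical illustration of that point (idealized square-law data, made-up constants, not any vendor's procedure): the constant-current definition and the extrapolated-intercept definition give two different thresholds for the very same device.

```python
# Sketch: two common threshold-voltage definitions applied to the same
# synthetic linear-region data give different numbers. The device model
# is an idealized Id = k*(Vgs - Vt_true)*Vds above threshold.

k = 2e-3       # A/V^2, assumed transconductance parameter
vds = 0.05     # V, small drain bias (linear region)
vt_true = 0.7  # V, the "built-in" threshold of the synthetic device

def idrain(vgs):
    return k * max(vgs - vt_true, 0.0) * vds

# Definition 1: Vgs at a fixed current criterion (here 1 uA).
i_crit = 1e-6
vgs_grid = [i / 10000.0 for i in range(0, 20001)]  # 0 .. 2 V sweep
vth_cc = next(v for v in vgs_grid if idrain(v) >= i_crit)

# Definition 2: X-axis intercept of the Id-Vgs straight line,
# extrapolated from two points well above threshold.
v1, v2 = 1.0, 1.5
slope = (idrain(v2) - idrain(v1)) / (v2 - v1)
vth_ext = v1 - idrain(v1) / slope

print(vth_cc, vth_ext)  # constant-current value sits above the intercept
```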

To stay as accurate as possible, each parameter has its own specific (and constantly evolving) extraction methodology. Most of them involve two steps: first, make sure the experimental data is correct, then extract the target parameter. Example: the MOSFET subthreshold slope. The first step is to measure and plot Id vs. Vgs around the (pre-determined) threshold voltage on a logarithmic scale. Make sure there is a linear region, identify its boundaries and, in the next step, compute the slope and get S. As simple as it looks on paper, automating such a process is very difficult, and in fact, at that time, there was no equipment able to fully extract such parameters out of the box. All of this equipment came with a) measuring hardware and b) a software library. The engineer in charge had to write the software and check the consistency of the results across wafers and batches.
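The two-step recipe above can be sketched on synthetic data (illustrative constants, my own simplification of the procedure): generate Id(Vgs) with a known slope, keep only the decades where log10(Id) vs. Vgs is actually straight, then fit the slope and recover S.

```python
# Sketch of the two-step subthreshold-slope extraction: (1) bound the
# linear log-Id region, (2) least-squares fit its slope, S = 1/slope.
import math

S_true = 0.085   # V/decade (85 mV/dec), the value we should recover
I0, vt = 1e-7, 0.6

def idrain(vgs):
    sub = I0 * 10 ** ((vgs - vt) / S_true)   # exponential subthreshold
    return min(sub, 1e-4)                     # crude clamp above threshold

# Step 1: sweep Vgs, keep points inside the straight log-Id region.
pts = []
for i in range(0, 81):
    vgs = 0.3 + 0.005 * i
    i_d = idrain(vgs)
    if 1e-12 < i_d < 1e-5:                    # region boundaries
        pts.append((vgs, math.log10(i_d)))

# Step 2: least-squares slope of log10(Id) vs Vgs, then S = 1/slope.
n = len(pts)
sx = sum(v for v, _ in pts); sy = sum(y for _, y in pts)
sxx = sum(v * v for v, _ in pts); sxy = sum(v * y for v, y in pts)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
S_mv_per_dec = 1000.0 / slope
print(round(S_mv_per_dec, 2))
```

On clean synthetic data this recovers 85 mV/dec exactly; on real measurements, automatically finding those region boundaries is the hard part syn08 describes.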

Things kept evolving, and separating the parameters from the measured data became more and more difficult. Example: as MOSFET channel lengths scaled down, the separation between subthreshold conduction, the linear region, and mobility degradation effects blurred, to the point that none of the already developed methodologies (and software) would apply. New (and integrated) methodologies were required, involving non-linear regression, etc... When I left the semiconductor business there was, to the best of my knowledge, no equipment able to fully and automatically extract a complete and consistent set of device parameters from measured data. The main problem was that the device parameter set matching the external/terminal measured data was no longer unique!
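That non-uniqueness is easy to demonstrate even on the simplest device law. In this sketch (illustrative values, diode equation rather than a full transistor model), two quite different (Is, n) pairs produce terminal currents that agree to within ordinary measurement scatter over a narrow bias window:

```python
# Sketch: over a narrow bias window, two different (Is, n) parameter
# sets fit the same diode-law terminal data almost equally well.
import math

VT = 0.02585  # thermal voltage at ~27 C

def diode_i(v, i_s, n):
    return i_s * math.exp(v / (n * VT))

# Set A, and a set B with a 5% different ideality factor whose Is is
# chosen to match set A at the middle of the window (0.65 V).
is_a, n_a = 1e-14, 1.00
n_b = 1.05
is_b = diode_i(0.65, is_a, n_a) / math.exp(0.65 / (n_b * VT))

# Compare the two models across a 20 mV measurement window.
worst = 0.0
for i in range(21):
    v = 0.64 + 0.001 * i
    ia, ib = diode_i(v, is_a, n_a), diode_i(v, is_b, n_b)
    worst = max(worst, abs(ia - ib) / ia)
print(is_b / is_a, worst)
```

Is differs by more than a factor of three between the two sets, yet the worst-case current disagreement stays around two percent; narrow-window data simply cannot pin both parameters down.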

That was the time when Spice began to fade out of the industry's focus. The industry was looking at integrating more and more functionality. From an internal design perspective, model parameter extraction was a one-time job per manufacturing process, while from an IC end user's perspective the focus was on macromodels, intended to plug into high-level design tools (so no need for device-level simulation).

The second level of complexity is the lack of proper model templates in Spice. As strange as it may look, the core device models are still those developed in the 60's! Everything that came on top is a non-native extension (like the tanh trick, a synthetic way to add something the original MOSFET models did not capture: subthreshold conduction). Or the Gummel-Poon limitations that were revealed here... There was little interest in developing and integrating more precise device templates in Spice, simply because macromodelling became the tool of choice.
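As an illustration of that kind of graft (my own sketch in the spirit of the "tanh trick", using a softplus-style form with made-up constants, not any simulator's actual code): one smooth interpolation can splice exponential subthreshold conduction onto the square law with no seam.

```python
# Sketch: grafting subthreshold conduction onto a square-law MOSFET model
# with a single smooth (softplus-style) interpolation. Below threshold the
# expression tends to an exponential; well above it, to (Vgs - Vt)^2.
import math

PHI_T, N, VT0, I_S = 0.02585, 1.3, 0.6, 2e-6  # illustrative constants

def id_smooth(vgs):
    u = (vgs - VT0) / (2 * N * PHI_T)
    return I_S * math.log(1 + math.exp(u)) ** 2

# Check the two asymptotes via current ratios:
lo = id_smooth(0.30) / id_smooth(0.25)  # should match exp(dV/(N*phi_t))
hi = id_smooth(1.60) / id_smooth(1.10)  # should approach (1.0/0.5)^2 = 4
print(lo, hi)
```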

Finally, very few manufacturers publish parameter dispersion data. Datasheet values and curves (like Ciss) are "typical", which means exactly that: these values are not monitored or guaranteed other than "on average". It is the circuit designer's duty to analyze the impact of parameter dispersion and to make provisions for worst-case scenarios.
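A minimal sketch of what that duty looks like in practice (all numbers are illustrative assumptions, not any datasheet's guarantee): assume a +/-30% part-to-part spread on a "typical" 1.8 pF input capacitance and see how far a simple RC input pole can wander.

```python
# Sketch: Monte Carlo dispersion analysis of an RC input pole when only
# a "typical" capacitance is known and a +/-30% spread is assumed.
import math, random

random.seed(1)
R = 10e3          # ohms, assumed source resistance
C_TYP = 1.8e-12   # farads, "typical" input capacitance
SPREAD = 0.30     # assumed +/-30% part-to-part dispersion

def f3db(c):
    return 1.0 / (2.0 * math.pi * R * c)

f_typ = f3db(C_TYP)
samples = [f3db(C_TYP * random.uniform(1 - SPREAD, 1 + SPREAD))
           for _ in range(10000)]
lo, hi = min(samples), max(samples)
print(f_typ / 1e6, lo / 1e6, hi / 1e6)  # corner spans roughly -23%/+43%
```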

I certainly doubt that any loud cry for better models would find much resonance in manufacturers' ears today.
 
syn08 you are my man !
You say:
The main problem was that the device parameter set matching the external/terminal measured data was no longer unique!
this is exactly what troubled me.
I can accept that a parameter set may fail to reproduce (with a reasonably good approximation) the experimental behaviour of some BJT. What drove me crazy was finding very different parameter sets giving almost exactly the same results.
Since the SGP model is a physical model, that is, one derived from a model of how a BJT is actually built, this seemed unacceptable to me.
As a counter-example, the Koren model for triodes is purely empirical, yet it works (for Vg<0), and its best-fit equations don't have many minima (just one good one, as far as I know).
So, my (still unanswered) question is: when is a set of SGP parameters good?
Thanks
 
PB2 said:
I didn't know that MODPEX was from Synopsys. I've used Synopsys tools extensively, and in my experience they usually do not want to hear that they need improvement. Their tools are very expensive and they want the reputation of being the best.

In digging into this a bit further, it looks like it was written by a company that was later acquired by Synopsys and then sold off. The web site of MODPEX is here. They might be the people who do the subcontract work for the semiconductor manufacturers. From their site:

"MODPEX was created by Symmetry Design Systems in the early 1990's. The company was later sold to Analogy, and then Analogy was purchased by Avant!, which was then purchased by Synopsys."

Anyway, I get the impression that these Spice model services are just cranking out the models as fast as they can, with little concern about accuracy or about putting out defective models.

I agree. From the modpex.com web site, it looks like it's a very small operation. Maybe it's a fixed-price contract that he's working on. That would be a disincentive to take the time necessary to do the best possible job. I don't see this situation improving anytime soon.
 
andy_c said:


In digging into this a bit further, it looks like it was written by a company that was later acquired by Synopsys and then sold off. The web site of MODPEX is here. They might be the people who do the subcontract work for the semiconductor manufacturers. From their site:

"MODPEX was created by Symmetry Design Systems in the early 1990's. The company was later sold to Analogy, and then Analogy was purchased by Avant!, which was then purchased by Synopsys."

I agree. From the modpex.com web site, it looks like it's a very small operation. Maybe it's a fixed-price contract that he's working on. That would be a disincentive to take the time necessary to do the best possible job. I don't see this situation improving anytime soon.

Yes, the big EDA companies did often buy up smaller companies; it seems that Synopsys should be the owner now. It also seems that this Dan Waterloo was a distributor of MODPEX who knows how to use it and provides support. His company actually seems to be Interface Technologies:

"Interface Technologies was founded in 1991. The original purpose was to distribute engineering software. Over the last decade, the company has evolved to now specialize in spice simulation software, training, and generation of simulation models for analog discreets and integrated circuits."

"Interface Technologies provides the only commercial support for MODPEX available today. "

Not sure if Synopsys still owns MODPEX and just lets him support the tool.

It seems to me that the Quality requirements of ISO9000 should actually require the large semiconductor companies to fix the models. I would think that there would also be potential legal issues once the defects are brought to their attention. We should probably email their QC departments.

Pete B.
 
Here's a link to a page for Interface Technologies:
http://www.i-t.com/engsw/symm/SYMM.HTM

Hmmm, interesting that Synopsys is spelled Synopsis here; the name is spelled both ways on the Interface Technologies site:
"Symmetry Design Systems has recently merged with Analogy/Avant!/Synopsis. The Modpex program is no longer commercially available, but please see the support and modeling services that we offer by clicking on the link below..."

Synopsys' home page:
http://www.synopsys.com/

I doubt that this small outfit called Synopsis would have the money to buy Modpex; I'm not sure it's even a company rather than a workgroup:
http://synopsis.fresco.org/news.html

I just spoke to a colleague/friend at Synopsys who's trying to help get an answer about Modpex.

Pete B.
 
Re: LM4562 Spice model

I tested the model for noise, and it is very accurate.

Attached is a plot.

Sigurd

syn08 said:
Not sure whether this is already old news: National released the Spice models for their high-performance audio opamps (LM4562, LME49710, LME49720, and LME49740)

http://www.national.com/models/spice/LM/LME49860.zip

The models seem to be pretty accurate.
 

Attachments

  • lme49860 noise test sr rev a.jpg (80.2 KB)
john curl said:
It was a favorite in 1968, or 40 years ago.

Thanks for your comment.
A clarification was well in order.

I understand this, John.
Anyone who reads your current posts will know:
your JFET favourites of today = the best available at a given time & space.


As from a human to another human:
A little advice from an older man ( born 1951 ) to an even older man:
------------------------------------------------------------------
- beware of becoming too much 'was'
- the world is full of audio people referring to old amplifier papers, articles and works
- i do not have to give you names, do i ?



There is another man, doing new 'papers', constantly.
F1, Zen v9, F2, F3, F4 and so it goes.
Now we have to deal with his B1, B2, B3, B4 as well.
Who knows .. there might be a Zen v10, F5 in pipeline
and some B6, B7
and maybe even one B52 ... coming from overseas and out of blue sky down upon us.

We have a saying in my country:
Always consider a good example.
Let's dooo iit!


Friendly greetings - Lineup - not only a 'hasbeen' but an 'isbeing', too
 
Fairchild KSC3503/2SC3503 Model

Hi all,

I'm having some trouble running the Fairchild SPICE model for their KSC3503 NPN transistor in LTspice. The model from Fairchild's web site is below:

* KSC3503 NPN EPITAXIAL SILICON TRANSISTOR
*-----------------------------------------------------------
* CRT DISPLAY, VIDEO OUTPUT
* High Voltage: Vceo=300V
* Low Reverse Transfer Capacitance: Cre=1.8pF at Vcb=30V
* PARAMETER MODELS EXTRACTED FROM MEASURED DATA: KSC3503-D
*-----------------------------------------------------------
* If QCO and RCO are set to zero, this model applies to
* the Gummel-Poon model LEVEL 1.
* If IBC and IBE are specified, they are used instead of IS.
*-----------------------------------------------------------
.MODEL KSC3503 NPN ( LEVEL=2
+ IS =2.0893E-14
+ BF =101.5
+ NF =1.0
+ BR =7.655
+ NR =1.007
*+ IBC =2.0893E-14
*+ IBC =2.0893E-14
+ ISE =4.3652E-14
+ NE =1.5
+ ISC =1.2598E-9
+ NC =2.0
+ VAF =717.25
+ VAR =13.16
+ IKF =0.2512
+ IKR =0.0832
+ RB =2.98
+ RBM =0.001
+ IRB =0.001
+ RE =0.5305
+ RC =0.9
+ QCO =0.05
+ RCO =50.1187
+ VO =2.476
+ GAMMA =1.8231E-7
+ CJE =6.6039E-11
+ VJE =0.7017
+ MJE =0.3253
+ FC =0.5
+ CJC =6.6072E-12
+ VJC =0.5
+ MJC =0.2439
+ XCJC =0.6488
+ XTB =1.4089
+ EG =1.2129
+ XTI =3.0 )
*---------------------------------------------------------
* FAIRCHILD PUCHUN S.KOREA CASE: TO-126 PID:KSC3503-D
* 2000-03-30 CREATION

LTspice first complains about not recognizing LEVEL 2. I can get it to sort of run if I take some stuff out, but I'm then not sure of the validity of the model. Things at issue are the LEVEL 2 statement and the following parameters:
IBC
IBC (repeated in their model)
QCO
RCO
VO

These params are not in the complementary KSA1381 model, nor does that model reference LEVEL 2.
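Since editing the card by hand is error-prone, here's a throwaway sketch (my own, not a Fairchild or LTspice tool) that strips the offending entries. The keyword list is an assumption based on the parameters listed above; GAMMA is included because it belongs to the same quasi-saturation extension:

```python
# Sketch: strip the extended-model keywords LTspice rejects (LEVEL, QCO,
# RCO, VO, GAMMA, plus the commented-out IBC lines) from a .MODEL card,
# leaving a plain Gummel-Poon parameter set.
import re

DROP = {"LEVEL", "QCO", "RCO", "VO", "GAMMA", "IBC"}

def strip_params(model_text):
    out = []
    for line in model_text.splitlines():
        # match continuation lines like "+ QCO =0.05" or "*+ IBC =..."
        m = re.match(r"\s*\*?\+\s*([A-Za-z]+)\s*=", line)
        if m and m.group(1).upper() in DROP:
            continue
        # drop "LEVEL=2" if it rides on the .MODEL line itself
        line = re.sub(r"\s*LEVEL\s*=\s*\d+", "", line)
        out.append(line)
    return "\n".join(out)

card = """.MODEL KSC3503 NPN ( LEVEL=2
+ IS =2.0893E-14
*+ IBC =2.0893E-14
+ QCO =0.05
+ BF =101.5 )"""
print(strip_params(card))
```

Whether the trimmed model still tracks the datasheet curves (especially in the quasi-saturation region those parameters describe) would of course need to be re-checked in simulation.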

When I got it to run by deleting the stuff mentioned here, the simulation ran OK when that device was used as an emitter follower driver. But when I used it as a cascode in a VAS, LTspice reached its iteration limit in the DC run. If I just swapped out the KSC3503 with a 2N5551 in the VAS cascode location, the thing converged fine.

Not sure the best way to proceed with confidence here.

Thanks in advance for any suggestions.

Cheers,
Bob