John Curl's Blowtorch preamplifier

Upupa Epops said:
Not so long ago there were no simulations, Philippe... the final arbiter was only our ears... 😉
I definitely agree with you.
Sims are sims, that's all. We don't even know precisely what to measure; I have never heard of a measurement that correlates with all the mechanical damping applied to electronics. I don't see how you would simulate that…

Regards.
Philippe.
 
Ears are still the final test. The simulation at best tells you what a given circuit is doing in terms of bias, etc. It's a good tool for saving initial bench time. For a simple circuit, it can even be fairly accurate for things like THD and harmonic structure. For the times I've followed through, Orcad and an Audio Precision Series 2 analyzer were in fair agreement for the simple single-ended JFET circuits I'm currently using. However, my ears are still the final arbiter as to whether a given circuit stays in my living room or gets replaced.
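As an aside, that kind of THD and harmonic-structure estimate is easy to sketch numerically for an idealized square-law JFET stage. The Idss, pinch-off, bias and load values below are made up for illustration (loosely 2SK170-flavoured), not taken from the circuits mentioned above:

```python
import numpy as np

# Idealized square-law N-JFET: Id = Idss * (1 - Vgs/Vp)^2, valid for Vp < Vgs < 0.
IDSS, VP = 10e-3, -0.5        # assumed, roughly 2SK170-like values
RL = 1e3                      # drain load resistor, ohms

def drain_current(vgs):
    x = np.clip(1 - vgs / VP, 0.0, None)   # zero current below pinch-off
    return IDSS * x**2

# Bias at Vgs = -0.25 V, apply a 1 kHz tone, and FFT the drain voltage.
fs, f0, n = 1_000_000, 1_000, 100_000      # coherent sampling: 100 full cycles
t = np.arange(n) / fs
vgs = -0.25 + 0.05 * np.sin(2 * np.pi * f0 * t)
vout = -drain_current(vgs) * RL

spec = np.abs(np.fft.rfft(vout))
harm = [spec[k * n * f0 // fs] for k in range(1, 6)]   # bins of H1..H5
thd = np.sqrt(sum(a * a for a in harm[1:])) / harm[0]
print(f"THD ~ {100 * thd:.2f} %")
for k, a in enumerate(harm, 1):
    print(f"H{k}: {20 * np.log10(a / harm[0] + 1e-30):6.1f} dBc")
```

With these numbers the pure square law produces essentially only second harmonic (about -26 dBc, THD around 5%), which is the "harmonic structure" one would then compare against the analyzer.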
BTW, I noticed a pretty dramatic difference between using a 100k pot and a 10k pot as the level control in a TT preamp I built for the college radio station where I DJ.
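One plausible back-of-envelope reason for such a difference is the pot's wiper source impedance working against cable and input capacitance. A sketch with an assumed 200 pF load (my number, not a measurement from that preamp):

```python
import numpy as np

C_LOAD = 200e-12   # assumed cable + input capacitance (hypothetical value)

def f3db_worst_case(r_pot):
    # Wiper source impedance peaks at R/4 with the wiper at mid-travel
    # (the two pot halves in parallel, assuming a low driving impedance).
    r_src = r_pot / 4
    return 1 / (2 * np.pi * r_src * C_LOAD)

for r in (100e3, 10e3):
    print(f"{r/1e3:5.0f}k pot: worst-case -3 dB at ~{f3db_worst_case(r)/1e3:6.1f} kHz")
```

With the assumed 200 pF, the 100k pot's worst-case corner lands right in the audio band (about 32 kHz), while the 10k pot pushes it out to about 320 kHz.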
 
John,

Welcome back.

You mentioned somewhere that self-biasing cascodes (using JFETs à la Borbely) are not always satisfactory.
I presume that is because they offer very low Vds across the cascoded JFETs, except at very low Id (like 2-3mA).

Would it still be the case if we used a cascode JFET with very high Idss (say > 100mA)?
For example, one could get 5V across a 2SK170BL at 4mA bias using a selected J111 as cascode.

Or are there other reasons that make voltage-biased MOSFETs preferable as cascodes, other than that they can take a higher voltage across DS?


Patrick
 
I would think it would depend on the current variation in the device in question. The top JFET in a cascode has limited transconductance, so a large current variation in the bottom device means that the VGS of the top device has to vary to accommodate the change in current, partially defeating the cascode function. Bipolars have higher transconductance than JFETs at modest values of drain current, so they might be better suited as cascode devices, though with a bipolar cascode you have to deal with the variation in emitter current as a source of non-linearity. I kinda like using a JFET for the top cascode device because of the low parts count.

As you have noticed, 5-6V is pretty much the limit for what you can expect for VDS on the bottom device, unless you are really limiting the current. I use a PN4391 as a cascode JFET. You get 5-6V on the bottom FET for a drain current of 5mA or so, though the caveats I mentioned above apply. You probably want to take a look at the operating conditions of the bottom device at 5-6V drain voltage to make sure you aren't messing things up. I don't have the curves in front of me, so I can't say how a 2SK170 will operate with that kind of drain voltage. I will say this: the knee point for exponential increase of gate current in a 2SK170 is at <8V, so you don't want to go overboard with the cascode voltage either, no matter what device you use on top.
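Patrick's numbers are easy to check with the square-law model: in a self-biased cascode the bottom device's VDS is simply the negative of the top JFET's VGS at the shared drain current. A sketch, with assumed (plausible, not measured) "selected J111"-like parameters:

```python
import math

def vgs(idrain, idss, vp):
    """Square-law N-JFET: Id = Idss*(1 - Vgs/Vp)^2  ->  Vgs = Vp*(1 - sqrt(Id/Idss))."""
    return vp * (1 - math.sqrt(idrain / idss))

def gm(idrain, idss, vp):
    """Transconductance of the same model: gm = 2*sqrt(Id*Idss)/|Vp|."""
    return 2 * math.sqrt(idrain * idss) / abs(vp)

# Self-biased cascode: top gate tied to bottom source, so
# Vds(bottom) = -Vgs(top) at the shared drain current.
IDSS, VP = 100e-3, -6.0      # assumed high-Idss J111-like values
for i_d in (2e-3, 4e-3, 10e-3):
    v = -vgs(i_d, IDSS, VP)
    g = gm(i_d, IDSS, VP)
    print(f"Id = {i_d*1e3:4.1f} mA: Vds(bottom) ~ {v:.2f} V, top-device gm ~ {g*1e3:.1f} mS")
```

This reproduces roughly 5V on the bottom device at 4mA, and the gm figures show the caveat above: with only a few mS in the top device, a ±1mA signal swing moves its VGS, and hence the bottom device's VDS, by well over 100mV.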
 
Of course simulations only tell part of the story, but the same is true of the ears. They are not the final test, except as a preference for euphony; they're too easily fooled by harmonic, intermodulation and noise distortions. Yes, noise can be interpreted as more detail, just as a bit of haze can enhance detail in the shadow areas of a photograph. I am not aware of any author who builds audio circuits without some instruments, first among them a scope, which can show many misbehaviours that are not audibly perceptible.
 
PMA said:
Michael K.,

the ENIAC was a great machine - don't you remember? 😀

Pavel


One of these made it to our school board for student use by the early Seventies: http://ibm1130.org/. I fondly recall many an hour re-doing punch cards softened by the rain. Writing code to simulate circuits wouldn't have been beyond an Ampex or HP in the mid-Sixties. I was simulating 2-way first-order crossovers on an HP hand calculator in the late Seventies.
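That kind of crossover arithmetic now fits in a few lines. A sketch of the textbook first-order pair (the 2 kHz crossover point is my arbitrary assumption, not rdf's actual worksheet):

```python
import numpy as np

FC = 2000.0                      # assumed crossover frequency, Hz
f = np.array([200.0, 1e3, FC, 4e3, 20e3])
s = 1j * f / FC                  # normalized complex frequency

lp = 1 / (1 + s)                 # first-order low-pass
hp = s / (1 + s)                 # first-order high-pass
total = lp + hp                  # first-order sections sum to exactly unity

for fi, l, h, t in zip(f, lp, hp, total):
    print(f"{fi:7.0f} Hz  LP {20*np.log10(abs(l)):6.1f} dB  "
          f"HP {20*np.log10(abs(h)):6.1f} dB  sum {20*np.log10(abs(t)):5.2f} dB")
```

The nice property of the first-order pair falls right out: low-pass plus high-pass sums to exactly unity, so the combined voltage response is flat.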
 
Hi rdf,
You mean a big box of darn punch cards. Ever drop one and have the cards spill all over?

How about loading and hearing "CHUNK" as the reader ate one! At Ryerson, we were hooked up to U of T's machine. The printer was always jamming too.

Dark days indeed.

-Chris
 
Hi Chris, no, those days were still ahead. This was high school, so programs had to be of reasonable size to fit in a coat pocket for the bike ride. The smart ticket was to wrap them in waxed paper held with an elastic. Not too tight or you'd be re-punching. 🙂 Since we only got the machine one night a week, we all agreed to a voluntary 1130 Code of Conduct: programs without a result after 15 minutes of CPU time were terminated. Either it's too long or you've coded an infinite loop. By the time programs were big enough to warrant a box it was university, and my thought processes were starting to parallel WATFIV syntax.

BTW, don't know if it was ever an issue but that lobby furniture that went missing from Ryerson in the late Seventies had absolutely nothing to do with film student friends of mine. And I wasn't anywhere near Toronto that night. Just wanted to clarify that.
 
When I was a wee little guy, my dad used to bring home these huge (3 x 3 ft) panels. They were like a honeycomb of sorts. He would plug patch wires into these things by the hundreds. He was programming for IBM at the time.

I believe these ultimately got shoved into the mainframes and acted as program ROM.

Anybody know what those things were?



😵
 
I first used ECAP, developed by IBM for the military, for worst-case analysis, transient analysis, and DC analysis for reliability projections. Our models were linear, but very useful for analysis. One of my responsibilities was to CHARACTERIZE the devices we used, both germanium and silicon, on a TEK 575 semiconductor curve tracer. I ran an IBM 1620 mainframe computer with punched cards. I had a tech or two to punch the cards; I ran the simulations. Later, I took courses at UC Berkeley as they were developing SPICE. I have been around SPICE for about 36 years, so far. Still, I prefer real prototypes. Maybe there is a good reason, but then maybe not.
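ECAP-style worst-case analysis on a linear model is easy to mimic today: push every component to its tolerance corners and look at the output spread. A toy sketch for a resistor divider with assumed 5% parts (not one of the actual circuits described):

```python
from itertools import product

VIN, R1, R2, TOL = 10.0, 10e3, 10e3, 0.05   # assumed values, 5% resistors

def vout(r1, r2):
    # simple linear model: unloaded resistive divider
    return VIN * r2 / (r1 + r2)

nominal = vout(R1, R2)
corners = [vout(R1 * (1 + a), R2 * (1 + b))
           for a, b in product((-TOL, +TOL), repeat=2)]
print(f"nominal {nominal:.3f} V, worst case {min(corners):.3f} .. {max(corners):.3f} V")
```

Even this trivial case shows the point of the method: two 5% parts already spread a 5.000 V nominal output over roughly 4.75 to 5.25 V.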
 
John, you mention you've been simulating for quite some time, so I assume you've been around long enough to judge whether there has been significant improvement in the available software and in the reliability of the models.

I'm aware that not every aspect of a design can be simulated, but my own experience with such programs is that reliable component models are what's lacking.

What is your experience/advice in that regard?
 
john curl: Really accurate models are the problem.

Assuming you are referring mainly to active components (transistors, JFETs/MOSFETs, etc.), should we take it that simulated plots and data match, say, 80% of the behaviour of the identical real circuit?

Another question that comes to mind: why, despite years of development of simulation software and component modelling, are we not able to bring the accuracy of simulations close to that of real designs? In other words, why is it not possible to create reliable models?

In my simulations of an open-loop cascode design, the simulated bandwidth extends to 1 MHz or more. If I were to limit the bandwidth with, for instance, a simple first-order low-pass RC filter, would such a filter at the input be the best option, or would you advise otherwise, perhaps a second- or higher-order filter?
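For reference, the corner of the first-order RC mentioned is just fc = 1/(2*pi*R*C). A quick sketch with assumed example values (mine, not from the design under discussion):

```python
import math

def f3db(r, c):
    """-3 dB corner of a first-order RC low-pass: fc = 1/(2*pi*R*C)."""
    return 1 / (2 * math.pi * r * c)

# Assumed example values: roll off well above the audio band but far below 1 MHz
r, c = 1e3, 1e-9          # 1 kOhm series R, 1 nF to ground (hypothetical)
print(f"fc ~ {f3db(r, c)/1e3:.0f} kHz")   # ~159 kHz with these values
```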
 
Do you have any idea how many components are in a model of a transistor that is used to design a real IC?

I found one answer in EDN magazine, May 11, 2006, p. 69:
"Amplifier-IC designers use detailed models to design their circuits. Transistor models can contain more than 50 parameters, reflecting the level of performance that is available from a given process."..."So why don't manufacturers give these SAME models and circuits to customers?"


I think that this answers the question.
 
Really accurate models are the problem.
What do you mean? SPICE models are very accurate for "typical" devices.

IC modelling is a whole different kettle of fish. JC is talking about the many ways in which fabrication processes can vary. It is a difficult challenge to design a new IC to be manufacturable within reliability, performance, and manufacturing-yield specifications. This is a truly complex problem in analogue ICs, and very nearly as complex in digital ICs.
 
The problem with the opamp macro models is not the level of accuracy of the individual transistor models. Those models are VERY accurate and available on the market.

The issue is that IC manufacturers do not disclose the exact internals of their chips. They may use, say, a SPICE current-dependent voltage source for a specific stage in the opamp rather than the detailed transistor circuitry involved. One good reason is that this level of macro-modelling speeds up simulation with reasonable accuracy; your SwitcherCAD III would probably run all day simulating a fully detailed opamp. Another reason may indeed be related to intellectual property.

Jan Didden
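Jan's point can be seen in miniature: replace a transistor-level stage with a single-pole controlled source and the simulation collapses to a one-line update. A sketch of that idea, with made-up gain and pole numbers rather than any vendor's actual macromodel:

```python
import numpy as np

# Behavioural opamp macromodel: a single controlled source with DC gain A0
# and one dominant pole, standing in for dozens of transistor equations.
A0, FP = 1e5, 10.0                    # assumed DC gain and dominant pole (Hz)
TAU = 1 / (2 * np.pi * FP)            # open-loop time constant
DT = 1e-7                             # simulation time step, s

def follower_step(v_in=1.0, n=1000):
    """Step response of a unity-gain follower built on this macromodel."""
    v_out, out = 0.0, np.empty(n)
    for i in range(n):
        # forward-Euler update of: TAU * dVout/dt = A0*(Vin - Vout) - Vout
        v_out += (DT / TAU) * (A0 * (v_in - v_out) - v_out)
        out[i] = v_out
    return out

y = follower_step()
print(f"settles to {y[-1]:.5f} V (ideal follower: 1 V)")
```

The run time scales with the handful of state variables in the behavioural source instead of a full transistor netlist, which is exactly the speed-versus-detail trade-off Jan describes.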
 