• WARNING: Tube/Valve amplifiers use potentially LETHAL HIGH VOLTAGES.
    Building, troubleshooting and testing of these amplifiers should only be
    performed by someone who is thoroughly familiar with
    the safety precautions around high voltages.

Question re: Williamson amplifier and stability

I've been told many times and have read many places that the Williamson push-pull amplifier topology is inherently unstable. But... I'm not sure if that's a valid blanket statement.

1. Of course it's true that the Williamson design has a lot of RC-coupled stages, with multiple poles. Also, the OPT adds another pole. If two poles fall close together in frequency, the accumulated phase shift combined with the usual 20dB of global negative feedback will surely result in oscillation. But...

- What if the poles are staggered so that no two poles are too close to each other in frequency?

- What if MOSFET source followers are added after the differential driver stage, so that the F3 of the RC network between the diff driver and output stage can be chosen so as to be far away from other F3 points in the circuit?

- What if there is much less NFB wrapped around the circuit? Let's say only 6dB of global NFB applied?

- What if *local* NFB is employed between the output stage and differential driver? Perhaps from the plates of the output tubes to the cathodes of the diff driver tubes? The F3 of the RC network in the feedback loop could be chosen so as to be far enough away from other poles, or low enough to be out of the way of the OPT LF pole.


2. It's also true that the original Williamson had not-so-well filtered power supply decoupling, so low frequency stability (motorboating) was often a problem.

- What if stiffer, higher capacitance decoupling networks are employed, or capacitance multipliers, or even voltage regulation on the driver stages? With the decoupling going down well below 1Hz, wouldn't that keep the motorboating tendencies at bay?
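
To put rough numbers on point 1, here's a quick Python sketch of the low-frequency stability picture. The pole frequencies are invented for illustration (two coupling networks plus the OPT primary), not measured from any real Williamson:

Code:
import numpy as np

def lf_mag(f, nfb_db, poles_hz):
    """|loop gain| at the low end: flat feedback gain times one
    first-order high-pass section per coupling network / OPT primary."""
    mag = 10 ** (nfb_db / 20) * np.ones_like(f)
    for fp in poles_hz:
        x = f / fp
        mag *= x / np.sqrt(1 + x * x)
    return mag

def lf_phase_deg(f, poles_hz):
    """Total phase lead: each high-pass contributes up to 90 degrees.
    180 degrees of lead while |loop gain| >= 1 means oscillation."""
    return sum(90.0 - np.degrees(np.arctan(f / fp)) for fp in poles_hz)

f = np.logspace(-2, 2, 20000)   # 0.01 Hz .. 100 Hz

for label, poles in [("clustered", [4.0, 5.0, 6.0]),
                     ("staggered", [0.5, 5.0, 50.0])]:
    for nfb_db in (20, 6):
        i = np.argmax(lf_mag(f, nfb_db, poles) >= 1.0)  # unity-gain point
        pm = 180.0 - lf_phase_deg(f[i], poles)
        verdict = "oscillates!" if pm <= 0 else f"~{pm:.0f} deg margin"
        print(f"{label} poles, {nfb_db:2d} dB gNFB: {verdict}")

With the three poles clustered at 4-6Hz, 20dB of gNFB comes out unstable; staggering the same three poles, or dropping to 6dB, restores a healthy margin. That's exactly what I'm getting at with these questions.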


Thoughts?

--
 
It is not inherently unstable. It just has one more low frequency pole than competing topologies. Most amps with 3 gain stages are similarly “unstable” at high frequencies. To fix it, you manipulate the open and closed loop response such that you reduce the amount of feedback at frequencies where there is too much phase shift. This is in general called frequency compensation. With a Williamson, you simply have to do it at both ends of the spectrum. With other topologies, you may not have to pay attention to the low end.

Reducing the overall amount of feedback is but one method. An amp with only 6 dB of feedback will have more margin to work with so less “frequency compensation” will be required. Monkeying with the RC time constants caused by the coupling caps manipulates the open loop response, so it *is* a form of frequency compensation.
 

PRR

If you make major changes, it is not a "Williamson". Williamson pushed the NFB, staggered the poles, supercharged the OPT... that was his philosophy on amplifier improvement. He didn't HAVE this MOSFET you take for granted. Regulators would drive up the cost of an already fancy amplifier.
 
In 1947 capacitors were expensive and folks wound their own transformers, if they were lucky enough to find materials. Britain still had food rationing, and would for years to come. It was a different time, and DTN Williamson pushed the boundaries. Proof of that is that his design is still challenging to make well today, despite progress and our modern wealth.


Today, with hot shot Russian military pentodes and huge cathode sweep tubes in abundance, we might do things differently, but we build on the foundation he created.


All good fortune,
Chris
 
To fix it, you manipulate the open and closed loop response such that you reduce the amount of feedback at frequencies where there is too much phase shift. This is in general called frequency compensation.

No. To fix it you reduce the gain, either by increasing the NFB, typically with a shunt C across the negative feedback R, or else by a step network across the input stage's anode load.

DTN Williamson was violently against the first technique for his amplifier.
 
Banned Sock Puppet
It was a different time, and DTN Williamson pushed the boundaries.

Proof of that is that his design is still challenging to make well today, despite progress and our modern wealth.


There is no evidence to suggest modern parts are better than in 1947; the proof being that nobody is actually able to make an exact copy of the KT66 he used in triode mode. And, much more important:


I have yet to see a transformer better than Partridge apart from LS in Chicago who made their own version of the Williamson at great expense.


Capacitors expensive in 1940-1960?

No way, the most expensive bit in all those Radford / Quad / Vortexion and classic British designs was always the transformer.


Fact is, British transformer tech from Gresham, Parmeko, Gardners, Partridge, often made to British mil spec, was and still is world-beating quality.


I have some of those transformers working to this day; that's 50+ years old.



...a bit like the team in Bristol that engineered the Concorde (the first fly-by-wire supersonic aircraft) and the Harrier, which has never been equalled or beaten since.


1966-69 was a golden era of British technology, despite all the strikes and fudging/bodging. :rolleyes:
 

45

I've been told many times and have read many places that the Williamson push-pull amplifier topology is inherently unstable. But... I'm not sure if that's a valid blanket statement.

1. Of course it's true that the Williamson design has a lot of RC-coupled stages, with multiple poles. Also, the OPT adds another pole. If two poles fall close together in frequency, the accumulated phase shift combined with the usual 20dB of global negative feedback will surely result in oscillation. But...

- What if the poles are staggered so that no two poles are too close to each other in frequency?

- What if MOSFET source followers are added after the differential driver stage, so that the F3 of the RC network between the diff driver and output stage can be chosen so as to be far away from other F3 points in the circuit?

- What if there is much less NFB wrapped around the circuit? Let's say only 6dB of global NFB applied?

- What if *local* NFB is employed between the output stage and differential driver? Perhaps from the plates of the output tubes to the cathodes of the diff driver tubes? The F3 of the RC network in the feedback loop could be chosen so as to be far enough away from other poles, or low enough to be out of the way of the OPT LF pole.


2. It's also true that the original Williamson had not-so-well filtered power supply decoupling, so low frequency stability (motorboating) was often a problem.

- What if stiffer, higher capacitance decoupling networks are employed, or capacitance multipliers, or even voltage regulation on the driver stages? With the decoupling going down well below 1Hz, wouldn't that keep the motorboating tendencies at bay?


Thoughts?

--

What if you just remove the global feedback?

The EL34 in triode mode running at 400V/50mA into 10K plate-to-plate for pure class A operation can provide 15W at 1% THD. At 1W, distortion can be as low as 0.2-0.3%. If the load provided by the speakers drops to 4R, this output stage can deliver up to 20-21W at 3% THD! Why would you need any loop feedback? For a low Zout to drive fancy speakers with roller-coaster impedance? That wouldn't work so well most of the time, despite the feedback... I mean, it's a lot of effort for a result that is not really worth the trouble compared to using suitable speakers.

So one can build the Williamson as is with no global feedback; one only needs to judge the right amount of gain for the front-end by picking the "right" voltage amplifying tubes.

Want some challenge beyond the 15W amp? Double the power! Make a parallel PP with 4xEL34 per channel. Even more challenge? What about 60W? Use 8xEL34 in triode mode into 2.5K. Is it worth it? For me, yes: you don't easily find a zero-feedback class A amp delivering 60W at 1% THD (90W at 3% THD from the same secondary tap....) at an affordable price off the shelf. If you want to use some feedback, 10% cathode feedback might be the best choice.

I am not joking about the 15-30-60W amps. It has been done and works....
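
If you want the back-of-envelope arithmetic behind those numbers, the idealized class-A push-pull limit is P = Iq^2 x Rpp / 2, with each tube swinging its full quiescent current into the plate-to-plate load. I'm assuming 50mA per tube throughout and that Rpp halves as tubes are paralleled (the post above only specifies 2.5K for the 8-tube case); real amps swing a bit past pure class A, which is why the quoted figures come out a little higher:

Code:
def class_a_pp_power(iq_amps, rpp_ohms):
    """Ideal class-A push-pull output power: P = Iq^2 * Rpp / 2,
    with Iq the total quiescent current per side."""
    return iq_amps ** 2 * rpp_ohms / 2

print(f"2x EL34, 50 mA into 10K p-p:   {class_a_pp_power(0.050, 10e3):.1f} W")   # ~12.5 W ideal vs 15 W quoted
print(f"4x EL34, 100 mA into 5K p-p:   {class_a_pp_power(0.100, 5e3):.1f} W")    # ~25 W ideal vs 30 W quoted
print(f"8x EL34, 200 mA into 2.5K p-p: {class_a_pp_power(0.200, 2.5e3):.1f} W")  # ~50 W ideal vs 60 W quoted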

No need to use fancy tubes. Selected JJ EL34s will work just fine. The only weakness of this tube is that it is mechanically delicate. Never shake it (i.e., remove it from the socket) while it's still hot... the heater might break. But that's it, really.
PP output transformers of good quality are not that expensive. A 2.5K/60W transformer is rather big, but nothing fancy either...
You will have to play around quite a bit with the front end to get such overall performance.
 

Attachments

  • VTA-The-Last.jpg
OK, some clarifying points...

1) By "Williamson" I was thinking of the *topology* (overall 'shape') of the original Williamson amplifier design, not every detail down to time constants, capacitor values, how much NFB used, tube types used, etc.

The 'Williamson topology' is basically:
- Input stage is a common cathode voltage amplifier
- Direct coupled to a split load (concertina) phase splitter
- RC-coupled to a differential pair (push-pull driver)
- RC-coupled to push-pull output tube pair
- Transformer coupled to speaker load

The originally published Williamson amplifier *design* called for push-pull KT66 triodes and 6SN7 (or 6J5) driver triodes, and a particular output transformer construction that would allow a rather high level of global negative feedback to be applied. So yes, I can see that if you modify components of the "Williamson amplifier" you no longer have that particular design. But that doesn't address the question of the basic topology, which is what I mean to address. So, can we call that the "basic Williamson topology"?

2) Back in the 1940s-50s, the other popular push-pull driver topology was the floating paraphase, but that one is generally looked down on today. It's difficult to achieve stable push-pull balance with a paraphase phase splitter/driver, so distortion will be higher.

3) Another topology introduced later in the 1950s was the one made popular by the Mullard 5-20 amplifier, which is basically:

- A common cathode voltage amplifier
- Direct coupled to a long-tailed pair (LTP) of triodes
- RC-coupled to a push-pull pair of output tubes
- Transformer coupled to speaker load

The published Mullard design used an EF86 pentode as the first stage voltage amplifier, and an ECC83 triode pair for the LTP. Many people have taken that basic topology and altered it to use a triode in the first stage and a medium-mu, lower rp triode pair for the LTP. Eico did this for the HF-87 (using 12AX7 for the input stage and 6SN7 for the LTP) and I believe the Bob Latino driver for the Dyna ST-70 uses a 12AX7 and a 12AU7. I heard this called the "Mullard" push-pull topology ca. 1990, way back in the Fi days.

4) Speaking of Dynaco, their ST35, ST70 and Mark III amps took the basic Williamson *topology* and removed the differential driver stage, leaving the common cathode input amplifier DC-coupled to a split-load phase splitter doing double duty as the output tubes' driver. (You could also look at this as a 'Mullard topology' with a split load phase splitter/driver employed instead of the LTP.) Dyna used a pentode as the first stage to get enough gain for EL34 or 6550 output tubes. This is by far the most commonly used topology for PP EL84 amplifier designs, using a twin triode as the voltage amp --> split load phase splitter/driver. What do we call this topology?

5) For output tubes that require relatively few volts swing at their grids (like 6V6, EL84 or 7591), a 'single-differential' driver can be used. Eli Duttman's "El Cheapo" is exactly this, with a 12AT7 LTP as the phase splitter/driver RC-coupled to a push-pull pair of 12AQ5/6AQ5/6V6. I remember a very successful version made by Noriyasu Komoru using Dyna ST35 iron, which used a 12AT7 LTP to push-pull UL EL84s, with some gNFB.

6) Finally, there's a more recent (only 50 or so years old) topology that I call "dual-differential", but I don't think it has a common name. It goes like this:

- LTP input splits phase
- DC-coupled to a differential pair push-pull driver
- RC-coupled to push-pull output tube pair
- Transformer coupled to loudspeaker load

Tubelab George has a PCB for this topology which he calls the Universal Driver Board. 30 years ago, J.C. Morrison had an amp which he called "Tube-o-saurus Rex", which used a 6DJ8 input LTP DC-coupled to a 6SN7 diff driver, RC-coupled to push-pull 6B4Gs. That was a very good sounding amp.

And that about covers it for the popular push-pull topologies I know of.

I was asking about the Williamson topology, not about the original Williamson amplifier design.

The reason I ask is that in simulation, a Williamson-style driver topology using 6SN7 voltage amp-split load phase splitter --> EL86-triode LTP comes out looking really good driving PP 300B or PP GU50-triode. Yes, it's incredibly wasteful of electrical power (all that heater power and plate current for a measly 15W of power delivered to the speaker) but it looks like it should sound excellent.

???
 
Rongon, imho the devil is in the detail and simple broad-brush changes/modifications aren't going to be comparable or really assessable unless all the nitty-gritty details are defined and measured.

The output transformer that Williamson prepared and designed was done circa 1944-45, as the WW articles came out a few years after he prepared the base amp, and he was only in his early 20s then as well. It was his OPT that allowed the remainder of the design/values to work and achieve a stable output. Very few wanted to make or spend on that OPT, and there wasn't enough technical nous around at that time to tell people that LF and HF step networks were needed for the many alternative OPTs to make the amp stable, and even then some OPTs just had responses and resonances that left them marginal wrt stability.

One of your initial comments about 'not-so-well filtered power supply decoupling' was actually a detailed design effort by Williamson to introduce a titch more phase margin at LF. As such, the simple naïve act of 'improving' the decoupling with more filter capacitance would make the LF stability worse.

There is an interesting 4-part article from 1962 by Bailey and Radford that set about to improve the Williamson in a new Radford amp (MA15 Mk2). That design effort ended up highlighting how specifically tailored the amp circuit values had to be to 'align' with the eccentric/optimised output transformer needed in order to achieve good stability margins. The bottom line was that if you change the output transformer you pretty much have to reassess the whole stability related design of the amp.

And I'd be cautious of simulation modelling of amps with GNFB around the output transformer, as I don't think LTspice modelling of the output transformer has enough detailed characterisation built in yet for the LF or HF ends, and even then you would have to initially characterise the output transformer you wanted to use and tweak the model and sim to show good agreement - a very onerous task (as I am characterising the Williamson-related OPTs I have right now).
 
The bottom line was that if you change the output transformer you pretty much have to reassess the whole stability related design of the amp.

Yes, exactly.

I don't put much stock in LTspice's modeling of transformers. Transformers are far too complex for spice to simulate with any accuracy — unless, as you pointed out, one wants to embark on the daunting project of creating an accurate model of a particular transformer (which is far beyond my meager level of expertise). Therefore, I don't trust simulations of amplifiers with lots of global NFB that include the OPT. I just try to get the driver stages delivering as clean drive as possible and use load lines to guess at the output stage operating points.

The basic, fundamental question is:
Was the high level of global NFB (20dB) employed in the original Williamson amp design the reason for the issues with low frequency instability? (I suspect the answer is yes.)

If you take an original Williamson amplifier and simply remove the gNFB loop, then lower the input signal by 20dB, then measure, will there be less ringing on the output? Yes, but the bandwidth will be reduced and THD will rise by 20dB. But stability issues will be reduced. Correct?

It should also be possible to use much longer time constants for the RC coupling between the split load inverter and the differential driver grids, and between the differential driver plates and the output stage grids. With these poles now very far down, the gNFB loop won't have to apply gain to those low frequencies to bring them back up to the input bandwidth. Oh... and also, it's important to restrict the bandwidth at the input of a feedback amplifier anyway (which Williamson did not do). If the input is filtered to -3dB at 10Hz, the output transformer will never see those low frequencies, so the feedback loop(s) won't have to try to 'put them back'. Yes, the low frequency response will be reduced. It's a tube amp. Low bass is not what we like tube amps for anyway.
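
The arithmetic for pushing those coupling poles down is just f3 = 1/(2*pi*R*C). A quick sketch with made-up (but plausible) part values:

Code:
from math import pi

def f3(r_ohms, c_farads):
    """-3dB corner of a simple RC high-pass."""
    return 1.0 / (2.0 * pi * r_ohms * c_farads)

def c_for_f3(r_ohms, f3_hz):
    """Capacitance needed for a given corner into a given resistance."""
    return 1.0 / (2.0 * pi * r_ohms * f3_hz)

# Interstage coupling into a hypothetical 470K grid resistor:
print(f"470K + 0.22uF -> f3 = {f3(470e3, 0.22e-6):.2f} Hz")   # ~1.5 Hz
print(f"470K + 1.0uF  -> f3 = {f3(470e3, 1.0e-6):.2f} Hz")    # ~0.34 Hz

# Input filter ahead of the loop, -3dB at 10Hz into a 100K volume pot:
print(f"10Hz corner into 100K needs {c_for_f3(100e3, 10.0)*1e6:.2f} uF")  # ~0.16 uF

So 1uF into 470K puts a coupling pole down around a third of a hertz, well away from the input's 10Hz corner and the OPT's LF pole.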

I figure if I use a triode with much lower gain for the differential driver stage (e.g., EL86-triode), and reduce the gain of the output stage by using local NFB (not encompassing the OPT), low frequency stability can be rendered less of an issue (requiring less fiddling with the exact decoupling poles to keep the amplifier from motorboating, etc.). With the reduced gain, I won't need quite as much gain reduction as was necessary in the original Williamson design, so less gNFB will be required to set the gain to a reasonable level (about 0.2V rms to full power).

Hopefully that will allow me to apply only 6dB of gNFB and still get reasonable performance and gain.

Maybe.
 
The other possibility would be to remove one set of RC time constants, by DC coupling the differential driver stage plates to the output stage grids. This would require that a center-tapped choke be used as the plate load for the diff/driver pair, and that the cathodes of the output tubes be raised at least a couple dozen volts higher than the diff/driver plates. I have done that by 'stacking' the driver stages' plate DC supply on top of the output tubes' plate DC supply, but with output tubes running 400V plate-to-cathode, the total supply voltage would get up to about 800V DC. That's getting dangerous.

I did that once with a two-stage push-pull 2A3 amp. The 2A3s had a 300V plate supply resulting in 258V plate-cathode, with a shared 350-ohm cathode resistor (grid-cathode was -42V) with its 'ground' referenced to the driver tubes' plate supply (150VDC).

The driver was a 5687 LTP, with a 150VDC supply with its ground referenced to 0V. The driver tubes' plate load was an EL84 PP OPT with its secondary left disconnected. The transformer had enough inductance to get decent bass response from the 5687 LTP and had low enough DC coil resistance to drop only a few volts.
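
For anyone following along, the supply-stacking bookkeeping works out like this (assuming roughly 60mA per 2A3, which is what the 42V drop across the shared 350 ohm resistor implies):

Code:
driver_supply = 150.0          # driver B+, referenced to 0V ground
output_supply = 300.0          # output B+, stacked on top of the 150V rail
i_cathode = 2 * 0.060          # two 2A3s sharing one cathode resistor
r_k = 350.0

v_cathode = driver_supply + i_cathode * r_k    # 150 + 42 = 192V
v_plate = driver_supply + output_supply        # 450V above true ground (ignoring OPT drop)
print(f"plate-to-cathode: {v_plate - v_cathode:.0f} V")        # 258V
print(f"grid-to-cathode:  {driver_supply - v_cathode:.0f} V")  # -42V; grids sit near the
                                                               # 150V rail via the DC-coupled
                                                               # driver plates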

The input of that amp was fed by a 5687 common cathode stage configured as a line preamp (with selector switch and volume control). If you think about it, that's sort of the 'Mullard topology' broken up into pieces, with its DC coupling moved to the driver stage/output stage junction (where it belongs!).

However, doing that with a quasi-Williamson topology might be a daunting task.

Another way of looking at it would be to divide the system up into parts. Looked at from the speaker back to the source:

1) A differential driver stage is DC coupled to a push-pull triode output stage, transformer coupled to the speaker load. This can be in one chassis. Input sensitivity would be very low, perhaps 5V rms signal required to drive this to full power.

2) Look at SY's Impasse Preamplifier. SY designed that to drive a Nelson Pass amplifier that had very low input sensitivity and required balanced output from the preamp.

3) Or, consider an LTP made from a twin triode, with a CCS tail load, with RC coupled outputs.

2) or 3) could drive the differential inputs of 1).

???
 
No. To fix it you reduce the gain, either by increasing the NFB, typically with a shunt C across the negative feedback R, or else by a step network across the input stage's anode load.

DTN Williamson was violently against the first technique for his amplifier.

Shunt C across the feedback R does increase NFB and provides phase lead, but at the expense of closed loop bandwidth. A zero in the loop gain, but a pole in the feedback factor. The step network across the input stage anode is bog standard lag compensation - making that set the dominant pole in the open loop gain. Both ultimately limit the GBW that you end up with. Nothing is free, there is a maximum stable gain at high frequency, period.

I usually end up using both to frequency compensate ANY amplifier with two (or three) gain stages plus a power stage. Tubes or transistors. The use of lead compensation allows the dominant pole to be pushed a little further out before you run out of phase margin.

The low end requires additional consideration but the same general ideas apply. You can only get so much distortion correction out of any given circuit at the frequency extremes.
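
To make that concrete, here's a toy numeric version of both moves. Every pole and zero below is invented for illustration; in a real amp you get these numbers by measuring the open loop:

Code:
import numpy as np

def tf(f, zeros_hz, poles_hz, k=1.0):
    """Gain k times first-order zeros and poles at the given frequencies."""
    s = 2j * np.pi * f
    h = k * np.ones_like(f, dtype=complex)
    for fz in zeros_hz:
        h *= 1 + s / (2 * np.pi * fz)
    for fp in poles_hz:
        h /= 1 + s / (2 * np.pi * fp)
    return h

def phase_margin(f, t):
    i = np.argmax(np.abs(t) <= 1.0)   # HF unity-loop-gain crossing
    return f[i], 180.0 + np.degrees(np.angle(t[i]))

f = np.logspace(3, 7, 40000)   # 1 kHz .. 10 MHz

# Bare loop: 20 dB of NFB around three illustrative HF poles.
t_bare = tf(f, [], [30e3, 80e3, 200e3], k=10.0)

# Step network across the input anode load: lag compensation,
# pole at 5 kHz with a zero at 50 kHz (sets the dominant pole).
t_lag = t_bare * tf(f, [50e3], [5e3])

# Shunt C across the feedback R: a lead zero near the crossing buys
# extra phase, at the cost of a pole in the closed-loop response.
t_both = t_lag * tf(f, [150e3], [1.5e6])

for name, t in [("bare", t_bare), ("lag", t_lag), ("lag + lead", t_both)]:
    fc, pm = phase_margin(f, t)
    print(f"{name:10s}: crossing {fc/1e3:6.1f} kHz, phase margin {pm:5.1f} deg")

The bare loop crosses unity with only a dozen degrees in hand; the step network alone gets you to around 50 degrees, and the lead zero buys another ten or so on top. Same story at the low end, mirrored.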
 
1) 2) 3) 4) 5) 6)

7) Citation II topology, "local" N Fdbk

All this Williamson stability stuff can be fixed with some gain-eating "local" N Fdbk loops, keeping the global loop gain down for stability. Various "local" loops are possible. ("local" here meaning just not thru the OT)

Thorsten Loesch was a proponent of "local" N Fdbks back to the driver stage grids, similar to the Citation II.

The crossed feedbacks required to do that have the interesting property that the driver stage currents are synced with the related output stage currents, due to the crossing paths. When the outputs are increasing their gm, so are the related drivers, providing a little boost in N Fdbk when output gain is up. This current-tracking synchronism helps remove higher harmonics with finite local loop gain, not just the 2nd harmonic you get with a typical inverted driver cancellation.

You can make better power triodes today using UnSET on pentodes (or high mu triodes) than Williamson had access to.

Taking any of the "local" feedback types from the output tube plates requires a well interleaved and balanced OT. The usual mediocre OTs tend to have high leakage L on one side (outer winding half) which can mess up balance at HF.

An alternative is to take those "local" feedbacks from the UL taps instead, since those tend to be better balanced and better coupled to the secondary as well. The risk is possible phase shift introduced by the partial OT insertion versus the plate connections. At least use some halfway decent OT, not some power supply xfmr. One can add an RC snubber between plate tap and UL tap if needed for local stability, but best performance is when it can be left out.

Once the UL taps are being used for "local" N Fdbks, there is some advantage in NOT using a triode output device. High Zout pentodes will drive right thru OT leakage L transparently (except for eating a tiny amount of B+ at HF). The UL feedbacks still produce a NET low output Z, overcoming OT distributed capacitance too. OT performance will be maximized.

Modest global N Fdbk can clean up the residual low order distortion and lower output Z further. However, it is not hard to sample the output tube currents and use that to produce a little neg. R to cancel the constant R in the OT windings (attenuated feedbacks back to driver cathodes). Helps to have a balanced-design OT here. Don't overdo the R cancellation to try and fix the speaker driver, since that may lead to oscillation under some signals.

Use some tubes that can handle CURRENT, so a low primary Z OT can be used. Better OT performance, and Bass slam for sure.

If one is adventurous, one can try crossed "local" N Fdbks to the driver screen grids, instead of to the driver grids. This avoids lowering the input Z of the driver stage. The driver stage needs linear pentodes (the internal triode) to do this, and either MOSFET follower drivers for the driver screens, or setting up the local loop gain so that the driver screens stay at a constant V fraction of the driver plates, so that they will look like constant resistive loads (instead of non-linear loads). {Jan E Veiset pioneered this scheme} You'll need some good test equipment to do this.
 
The basic, fundamental question is:
Was the high level of global NFB (20dB) employed in the original Williamson amp design the reason for the issues with low frequency instability? (I suspect the answer is yes.)

If you take an original Williamson amplifier and simply remove the gNFB loop, then lower the input signal by 20dB, then measure, will there be less ringing on the output? Yes, but the bandwidth will be reduced and THD will rise by 20dB. But stability issues will be reduced. Correct?

A lot less. You just have more "noise gain". Noise gain at low frequencies isn't really a problem like it is at the high end (i.e., audible). What I've done successfully is add a low frequency time constant in the NFB to effectively remove the NFB and let it run close to open loop below band. Ringing is gone, even running the amp without a load. The closed loop gain "roll up" at low frequency (single digit hertz) doesn't seem to cause any noise issues because noise at the low end is low to begin with. I don't see the woofer moving in and out nor the bias drifting around any more than it normally would. And the zero can be cancelled by the input coupling cap high pass, which is outside the feedback loop, to restore square wave response.
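
A toy version of the numbers (invented values: ~20dB of NFB in band, feedback-return corner at 3Hz, input cap matched to it):

Code:
import numpy as np

f = np.logspace(-2, 3, 6)              # spot frequencies, 0.01 Hz .. 1 kHz
s = 2j * np.pi * f

A0, beta0, f1 = 200.0, 1 / 21.0, 3.0   # forward gain, feedback fraction, corner
hp = (s / (2 * np.pi * f1)) / (1 + s / (2 * np.pi * f1))   # first-order high-pass

beta = beta0 * hp                      # series RC kills the NFB below ~3 Hz
a_cl = A0 / (1 + A0 * beta)            # closed loop: shelves up toward A0 at LF
total = hp * a_cl                      # input cap (outside the loop) cancels the shelf

for fi, g1, g2 in zip(f, np.abs(a_cl), np.abs(total)):
    print(f"{fi:8.2f} Hz: closed-loop {g1:6.1f}x, after input cap {g2:5.1f}x")

The closed-loop column rolls up from ~19x in band toward the open-loop 200x below a hertz; with the matched input cap in front, the overall response is just a clean first-order rolloff below ~0.3Hz, which is why the square wave comes back.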
 

45

If you take an original Williamson amplifier and simply remove the gNFB loop, then lower the input signal by 20dB, then measure, will there be less ringing on the output? Yes, but the bandwidth will be reduced and THD will rise by 20dB. But stability issues will be reduced. Correct?

If you remove the global feedback completely, the instability will be... not just reduced but removed, unless the amp is badly designed/made.
Besides, even lower-cost, reasonably made output transformers, especially for PP application, do not have the bandwidth issues of 60-70 years ago, when typical affordable transformers could barely make 10 kHz flat.
Distortion: there is no practical difference between 0.02% and 0.2% at 1W.

I still do not understand what your goal is. Is this just a technical exercise or is there an actual reason?
 
The idea of applying NFB locally, without including the OPT, is very attractive. The OPT is going to be reactive, a real problem for feedback to correct. Why bother, since the sound of the OPT is a lot of what makes the sound of a tube amp?

It would be interesting to make a two-stage push pull 'speaker driver block' using a differential pentode driver stage to push-pull pentode output stage with plate-grid or screen-grid local feedback, with no global NFB applied.

I've experimented a little with 12HL7 and 12GN7A pentodes, and I'd like to try 6e5P too. These should make good driver tubes. High gm, basically small power pentodes with frame grids. Or would higher plate resistance be needed, rather than max gm?

For output tubes, I think I'd try 6AV5GA first, since they're easy to use (no plate cap), cheap enough, and don't require a super-high plate voltage. Maybe 400V on the plates and 150V on the screens.

Run local NFB from the 6AV5GA plate to the 12GN7A plate (which is the same as saying from the plate to the grid of the 6AV5GA).
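
The gain arithmetic for that plate-to-grid loop is just the inverting-amplifier formula. A sketch with made-up values: Rf is the feedback resistor, Rs the effective driving impedance at the 6AV5GA grid node (roughly the driver's rp in parallel with its plate load), and A the bare grid-to-plate gain with Rf's loading already folded in:

Code:
def schade_gain(a_open, rf, rs):
    """Closed-loop gain of plate-to-grid (Schade-style) feedback,
    using the inverting-amp approximation with finite bare gain A:
    Acl = -(Rf/Rs) / (1 + (1 + Rf/Rs) / A)."""
    ideal = rf / rs
    return -ideal / (1 + (1 + ideal) / a_open)

# Hypothetical values: 470K feedback R, ~47K effective source impedance.
for a in (10, 20, 50):
    print(f"bare gain {a:3d}x -> stage gain {schade_gain(a, 470e3, 47e3):6.2f}x")

With modest bare gain you don't get all the way to the ideal -10x, but the gain is stabilized and Zout drops, all without the OPT inside the loop.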

Put that in an enclosure with 3-pin XLR input sockets.

Then make a 'line stage with balanced outputs' to supply the necessary gain to make the whole system's sensitivity around 200mV RMS to full power. It might be fun to do a 12AT7 voltage amp-split load inverter, with plate-grid feedback on the input voltage amp to adjust the gain to the desired level. It would have to be a series RC network from plate to grid with a large series resistor from input to grid, but oh well. I've tried that with a standalone (single-ended 6DJ8) line stage and it works well, sounds good, and is not noisy at line level.

???
 
I still do not understand what your goal is. Is this just a technical exercise or is there an actual reason?

I was thinking of making a push-pull amp using the Williamson 'general topology', but that seems to be a very unpopular idea because it's "unstable". I am questioning that idea as applied to a design that follows the general shape ('topology') of the Williamson design but deviates from the parts list, RC time constants, amount of NFB applied, even how the NFB is applied.

The answer is becoming clear that the topology is not inherently unstable in and of itself. One has to consider the many details of the individual implementation of the topology.

When I suggested someone check out a Williamson driver for push-pull 2A3s, the idea was dismissed out of hand because the Williamson 'has too many RC time constants', which makes it 'unstable'. I question that assumption.
 

45

I was thinking of making a push-pull amp using the Williamson 'general topology', but that seems to be a very unpopular idea because it's "unstable". I am questioning that idea as applied to a design that follows the general shape ('topology') of the Williamson design but deviates from the parts list, RC time constants, amount of NFB applied, even how the NFB is applied.

The answer is becoming clear that the topology is not inherently unstable in and of itself. One has to consider the many details of the individual implementation of the topology.

When I suggested someone check out a Williamson driver for push-pull 2A3s, the idea was dismissed out of hand because the Williamson 'has too many RC time constants', which makes it 'unstable'. I question that assumption.

I actually have a Williamson amp at my parents' place. It was sold as a kit in the '90s. I have not used it for a long time, but I clearly remember that it's a typical 40W UL design with EL34s and an all-ECC82 front-end. I have never noticed instability issues. Global feedback applied is 12 dB, from memory, with one compensation at the input stage and the typical RC in the loop. "Vintage" sound.... :)