• WARNING: Tube/Valve amplifiers use potentially LETHAL HIGH VOLTAGES.
    Building, troubleshooting and testing of these amplifiers should only be
    performed by someone who is thoroughly familiar with
    the safety precautions around high voltages.

Newbie question: Why are classic negative bias supplies so poorly filtered?

I've been analyzing many direct grid-bias schematics from the '50s, '60s, and even current designs (I'm talking about classic amplifiers with unregulated HT and unregulated bias, so the HT and bias can both track any line voltage variations).

I noticed that the negative bias supply is very often just a half-wave rectifier with a capacitor, followed by the divider and bias-adjustment pot network. When I do come across something that bothers to use full-wave rectification, there is still often no multi-stage pi filtering. I found a few that capacitively couple a second bridge to the HT bridge, reference the cathode side of that second bridge to the HT negative, and take its anode side as the bias. In all these cases nobody seems to bother with much filtering on the negative bias beyond a single C filter, and maybe another C in the upper leg (HT-negative side) of the voltage divider at the end...

Question: Why does the negative bias supply not require much in the way of filtering? Is it because the current draw for grid bias is so very small that its ripple gets swamped by the incoming signal? Should I bother to pi-filter an unregulated bias supply like this with a couple of RC stages, or is there probably not enough current flow to make that worthwhile?
 
I think you've answered your own question: the bias current requirements are so low that one filter capacitor does the job well enough. And remember that manufacturers, back then as now, didn't want to spend anything more than necessary. It's not really a case of the signal cancelling anything out; the bias is mostly just a static charge on the grid. The voltage divider is really the only (minimal) current draw. Not counting a gassy tube, of course.
 
A bias tap off the HV winding is cheap and requires only a half-wave rectifier.
So the voltage ripple is at 50/60 Hz, not at 100/120 Hz.
But Dynaco used a CRC bias filter in their tube power amplifiers.

RC filters operate based on time constants (R × C), not on the level of DC current.
In fact, the filter operation is independent of whether there is any DC current present,
other than the power rating of the resistors involved.
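That point can be sanity-checked numerically. A minimal sketch (the 10k/47µF values are my own picks for illustration, not from any schematic in this thread) showing that the attenuation of one RC stage depends only on frequency, R, and C:

```python
import math

def rc_attenuation(r_ohms, c_farads, f_hz):
    """Ripple attenuation of a single RC low-pass stage at frequency f_hz.
    |H| = 1 / sqrt(1 + (2*pi*f*R*C)^2) -- note: no DC-current term appears."""
    w = 2 * math.pi * f_hz
    return 1 / math.sqrt(1 + (w * r_ohms * c_farads) ** 2)

# Illustrative values only: 10k into 47uF, at 60 Hz half-wave ripple
a = rc_attenuation(10e3, 47e-6, 60.0)
print(f"ripple reduced to {a * 100:.2f}% per stage")  # ≈ 0.56% per stage
```

The DC current through the stage never enters the expression; it only sets the DC voltage drop across R and the resistor's required power rating.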
 

Ripple voltage depends on load current discharging the filter cap(s).
Without a load current there's no ripple at all.
So low current means low ripple.

But there's another reason:
To the power tubes in a PP amp, the residual bias ripple is a common-mode signal.
Because of the way the OT primary is wired, in-phase plate signals cancel and don't appear at the secondary.
That is, as long as the tubes are balanced.
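The cancellation can be illustrated with a toy model (amplitudes are arbitrary and purely illustrative; the transformer is idealized):

```python
def ot_secondary(v_plate_a, v_plate_b, turns_ratio=1.0):
    """Idealized push-pull output transformer: the secondary sees only the
    difference between the two plate voltages."""
    return (v_plate_a - v_plate_b) / turns_ratio

signal, bias_ripple = 10.0, 0.5   # arbitrary amplitudes for illustration
# Plates are driven in anti-phase; bias ripple arrives in-phase on both:
out = ot_secondary(+signal + bias_ripple, -signal + bias_ripple)
print(out)  # 20.0 -- the common-mode ripple term has cancelled
```

If the tubes (or OT halves) are unbalanced, the two ripple terms no longer match and a residue of the difference appears at the secondary.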
 
Another tangentially related fact: Electrolytic capacitors used to be A LOT MORE expensive than they are now. Today you can buy a 150 microfarad, 200 Volt electrolytic capacitor for less than 2 dollars. This wasn't true when vacuum tubes ruled the audio industry; big capacitors were very expensive. So designers used the cheapest, smallest capacitors they could get away with. And that meant poor filtering on bias supplies. Which, it turned out, was acceptable. For the reasons mentioned in other posts of this thread.
 
Dynaco has about 3mA of DC current in their Stereo 70 amplifier bias circuit (see post #3),
which gives about 1V pp of voltage ripple on the first 47uF bias filter capacitor.
That's out of 35 to 45 VDC of bias voltage. I certainly would filter that ripple.

And so did Dynaco, who sold their preamp and amplifier kits with genuine Mullard
and Telefunken tubes, and custom transformers, for very reasonable prices that
even students like me could afford. That extra 47uF capacitor didn't break them.

In fact, Dyna sold about 350,000 Stereo 70 amplifiers alone, not to mention all the other
tube amp models they sold. They didn't do that by "over-engineering" their circuits.
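The ~1 V pp figure quoted above follows from the usual back-of-envelope reservoir-cap estimate; a quick check using the same numbers (3 mA, 47 µF, 60 Hz half-wave ripple):

```python
def ripple_pp(i_load_amps, c_farads, f_ripple_hz):
    """Peak-to-peak ripple on a lightly loaded reservoir cap: the load draws
    a near-constant current for one full ripple period, so V = I * T / C."""
    return i_load_amps / (f_ripple_hz * c_farads)

# 3 mA bias-circuit current, 47 uF cap, 60 Hz (half-wave) ripple
v = ripple_pp(3e-3, 47e-6, 60.0)
print(f"{v:.2f} V pp")  # 1.06 V pp -- matching the ~1 V figure
```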
 
Thanks! Knowing now that a CRCRC stage can be effective irrespective of current, and since I have the room, it's worth doing! I've never tried PSUD2 at such low currents before, but it will be interesting to model it tomorrow. I would think the voltage divider current is the only draw I'd need to calculate and model, since the grid's draw is so insignificant (the bias being applied either as grid-leak bias or via an interstage transformer secondary)?
 
Thanks, if I ever build a direct-bias amp in the future, B+ delay will definitely be part of it. Ramp-up time of the bias wasn't so much of a problem in these old amps, which had small electrolytics and poor filtering, and a tube rectifier gave a natural B+ delay anyway. So I guess a damper tube after the diodes would work too, at the expense of 25 volts or so. I'd have to time it experimentally to see whether the bias gets there before the tubes warm up.
 
Back to Windcrest's post #1: there is another reason. In 1950s/60s valve/tube amp builds, the hum & noise spec was often quoted at mediocre, ad hoc output levels, with power supplies fitted with low-value paper/oil smoothing caps. On the main amp one would be lucky to hit hum & noise -60 dB down, even with 15 dB of global feedback thrown in. (And that isn't even counting the pre-amp's contribution.)
However, any noise or ripple present on the output stage's negative bias rail would be, by comparison, a negligible contribution. So the cheap half-wave concept worked fine. Later designs quote the hum & noise spec in dB below full output, which is more meaningful.
 
As said, it depends on the effective source resistance.
There are circuits where the bias is derived from the HT winding using a large value series dropper before the diode.
Such a circuit may have an effective source resistance around 50k.
The time constant with a 1000µ filter cap would be 50s.
If the input voltage was stepped DC, it would take around 4 time constants to get close to the final value.
But with half-wave rectified AC it seems to take even longer.

I recommend using PSUD2 to simulate the ramp-up.
I just simulated with a source resistance of 2k and a 1000µ cap showing a ramp-up time of around 30s.
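For anyone without PSUD2 handy, the half-wave ramp-up can be approximated with a few lines of numerical integration. This is a deliberately crude sketch (ideal diode, no load on the cap; the 50 V peak is arbitrary since only the fraction of peak matters), using the 2k/1000µ values from above:

```python
import math

def ramp_up_time(r_ohms, c_farads, v_pk=50.0, f_hz=60.0,
                 target=0.9, dt=1e-4, t_max=300.0):
    """Seconds for an unloaded cap to reach `target` fraction of the peak,
    charging through R from an ideal half-wave rectified sine."""
    vc, t = 0.0, 0.0
    while vc < target * v_pk and t < t_max:
        vin = max(0.0, v_pk * math.sin(2 * math.pi * f_hz * t))
        if vin > vc:                      # diode conducts only while vin > vc
            vc += (vin - vc) / (r_ohms * c_farads) * dt
        t += dt
    return t

t90 = ramp_up_time(2e3, 1000e-6)
print(f"~{t90:.0f} s to reach 90% of peak")
# For comparison, a stepped-DC source would need only ~2.3*RC = 4.6 s to 90%
```

The result is far longer than the plain RC time constant suggests, because the cap can only charge during the shrinking part of each cycle where the rectified input exceeds the cap voltage.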
 
As said, it depends on the effective source resistance.
There are circuits where the bias is derived from the HT winding using a large value series dropper before the diode.
Such circuit may have an effective source resistance around 50k.
...

Yeah, even with my limited knowledge I don't like the idea of that big resistor dropping the HT voltage. I prefer the method of AC-coupling a second bridge rectifier to the HT winding, then taking the negative side of that bridge as the bias and tying its positive side to the HT ground. By using two small MKP capacitors (100 nF to 500 nF, experimentally selected and matched) you can drop the voltage right at the HT AC to just above what you need at the end, with no voltage-dropping resistor. Using 100 nF drops the voltage to about 1/10th without a resistor, so I assume you can hit other reduced voltages near what you need with standard-value caps as well. This is described in fig. 8-13 of Merlin Blencowe's great book "Designing Power Supplies for Tube Amplifiers".
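As a footnote on the series-cap dropper: what limits the voltage is the capacitor's reactance at mains frequency, and the final division ratio depends on the rectifier's load current, which is why the caps end up being selected experimentally. A quick look at the reactances involved (illustrative values, 60 Hz mains assumed):

```python
import math

def x_c(c_farads, f_hz=60.0):
    """Magnitude of a capacitor's reactance at mains frequency."""
    return 1 / (2 * math.pi * f_hz * c_farads)

# Series reactance doing the voltage dropping, for a few standard values:
for c in (100e-9, 220e-9, 470e-9):
    print(f"{c * 1e9:.0f} nF -> {x_c(c) / 1e3:.1f} kohm at 60 Hz")
```

So a 100 nF series cap behaves like a lossless ~26 kΩ impedance at 60 Hz: it drops voltage like the big resistor would, but without dissipating the power.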
 
Yes, many "weird/unusual" design choices can be explained by manufacturers buying something by the containerload and then using it everywhere.
This is the reason why I quit repairing audio gear from the '80s/'90s
(unless the fault is very obvious and easy to fix).

It feels like every electronics engineer was going wild with quirky designs. Even worse for guitar amplifiers.

One of the things I hate most is cleaning up other people's messes.
Even more so when you know the whole concept is bad to begin with.