• WARNING: Tube/Valve amplifiers use potentially LETHAL HIGH VOLTAGES.
    Building, troubleshooting and testing of these amplifiers should only be
    performed by someone who is thoroughly familiar with
    the safety precautions around high voltages.

tube regulators

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Mains voltages can vary over quite small timescales. Let's assume a variation of 1%, so perhaps a couple of volts peak-to-peak on the HT rail. Let us assume a first-stage voltage gain of 30, so the LF sensitivity at the anode is 30 x 1.5mV = 45mV. We have to ensure that a volt of signal on the HT rail results in much less than 45mV at the anode. How much less? As a wild guess I would say at least -40dB, and -60dB if we can; some might argue for -70dB. Pick the middle and say we want -50dB below peak signal, so then we need a PSRR of roughly 80dB in the first stage in the 20Hz region.
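That budget can be checked with a quick sketch (plain first-order dB arithmetic; the figures are the assumptions stated in the post, and note that 30 x 1.5mV works out to 45mV):

```python
import math

def db(ratio):
    """Convert a voltage ratio to decibels."""
    return 20 * math.log10(ratio)

# Assumptions from the post: ~1 V of slow mains wander on the HT rail,
# a first stage with voltage gain 30 and 1.5 mV input sensitivity.
rail_wander_v = 1.0
gain = 30
sensitivity_v = 1.5e-3
anode_signal_v = gain * sensitivity_v        # 45 mV peak signal at the anode

# Target: rail-induced signal at the anode 50 dB below peak signal.
margin_db = 50
allowed_at_anode_v = anode_signal_v / 10 ** (margin_db / 20)

# Required attenuation from rail to anode, in dB.
psrr_db = db(rail_wander_v / allowed_at_anode_v)
print(round(psrr_db), "dB PSRR needed")
```

This arithmetic gives about 77 dB, in line with the roughly 80 dB figure used in the thread.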
You don't need 50 dB down, nothing like it. You only need it to be down far enough that the cone flapping is well within the speaker's and amplifier's power-handling capacity. Intermod products are going to be quite inaudible due to the ear's close-in masking.

Yes, big caps would do it for the 20Hz region. What about 1Hz? Well, you can reasonably hope that the LF rolloff in the system would compensate for the lack of attenuation provided by the decoupling. So two LF rolloffs at 20Hz (ish) would be required - but many systems only have one dominant rolloff.
This is the tubes forum. We are talking about tube systems. There is always one rolloff MUCH higher than 1 Hz - the output transformer. There will be at least one grid coupling cap in the power amp. Most often, there's another in the preamp output. So there are three rolloffs. If the amplifier is competently designed, just one of those rolloffs will indeed be dominant. To ensure that excessive distortion does not arise in the output transformer (which rolls off below 15 Hz or so for cost reasons), the grid coupling needs to roll off at 30 Hz or higher.
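As a minimal sketch of why a 30 Hz grid-coupling corner protects a transformer that misbehaves below about 15 Hz (assuming simple first-order behavior; the corner frequencies are the post's numbers):

```python
import math

# First-order high-pass loss of a 30 Hz grid-coupling corner, evaluated
# at 15 Hz, where (per the post) a cost-constrained output transformer
# starts to distort.
f, f3 = 15.0, 30.0
loss_db = 20 * math.log10((f / f3) / math.sqrt(1 + (f / f3) ** 2))
print(round(loss_db, 1), "dB at 15 Hz")
```

So the coupling cap alone already knocks about 7 dB off the signal by the time the transformer gets into trouble, and more below that.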

There won't be any gain to speak of at 1 Hz.


(You need) ... just one dominant LF rolloff in the system
For stability and low distortion, there should always be just one dominant roll-off - the amp grid coupling capacitor.

Heavy decoupling with significant series resistance can sometimes lead to motorboating.
Only if the designer is incompetent. You just need to choose the HT filtering to roll off at a frequency sufficiently higher than the interstage rolloff. Or have just one gain stage.
 
Why do you need a regulator for a preamp?

This question is answered by yourself

Tube circuits are AC coupled, so all that is required in a HT supply is a low impedance and low hum.

And with a well designed regulator, low noise too.

Gas tubes are inherently noisy.

Totally agree.

It seems to me that using a regulator is a simple and cheap way of achieving the goal.

Totally agree. Hooray, at last once. :D

Regulated power supplies aren't normally used with preamps.

I have never built a preamp without a regulated PSU.

Yes, really. Except maybe for some misguided souls making home-built amps. They come under the same category as the few folk who insist on CCS cathode biasing, SRPP preamp stages, and other snake-oil stuff. Those few make a lot of posts on diyAudio though.

Then I am a misguided soul, making SRPP preamp stages, heretically powered by regulated PSUs. :D

BTW, they sound amazingly transparent. ;)
 
You seem very confused, Popilin.

I advanced the case for not bothering with a regulator.

An amplifier being "transparent" (that is, you can easily recognise what instruments were played, and the diction of singers is clear) comes from having low signal intermodulation. Not the impairment we are discussing here, which is speaker cone excursion at very low frequencies.

That distortion products fall away with signal level, and thus are a non-issue at the low levels encountered in preamps in tube-based systems, is a mathematical fact, like 1 + 2 = 3. It arises because a curved line (the tube's input-output transfer) looks more and more like a straight line as you shorten the excursion over that curve. In a tube amp, the output stage contributes most of the distortion, with some from the driver stage.

Thus SRPP can have some value in driving output stages in some designs, but is a useless fad in preamps.

This fact (percent intermod falls rapidly as signal level falls) is used by communication radio engineers, as is well known. By inserting a resistive attenuator between the aerial and the radio input, you (counter-intuitively) improve the ability to resolve a weak signal in the presence of a multitude of strong signals. Radio engineers use the concept of 2nd- and 3rd-order intercept: the signal level at which the intermod products rise to the same level as the signal. If you know the intercept level you can draw a graph showing the (lower) intermod levels at any lower signal level, without knowing anything about the circuit.
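The intercept rule sketched above can be written down directly: third-order products rise 3 dB for every 1 dB of signal, meeting the signal at the intercept point. A minimal sketch (the +20 dBm intercept is a hypothetical figure, not from the thread):

```python
def im3_level_db(signal_dbm, ip3_dbm):
    """Third-order intermod product level (dBm) from the intercept rule:
    IM3 = 3*P - 2*IP3, so IM3 rises 3 dB per dB of signal and equals
    the signal level when P reaches the intercept IP3."""
    return 3 * signal_dbm - 2 * ip3_dbm

# Hypothetical stage with a +20 dBm third-order intercept.
ip3 = 20.0
for sig in (0.0, -10.0, -20.0):
    im3 = im3_level_db(sig, ip3)
    print(sig, "dBm signal ->", im3 - sig, "dBc intermod")
```

Dropping the signal 10 dB improves the relative intermod by 20 dB, which is why the attenuator trick works.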
 
famousmockingbird said:
Or are you talking about the change of mains voltage causing a change of voltage on the HV secondaries, and this voltage change can be looked at as an alternating current?
Yes. It is a low frequency signal on the supply rail.

Is this sinusoidal in nature?
No, but so what?

This change in voltage is much smaller on the anode due to the potential divider effect of the load resistor and anode resistance?
Yes, if you have a conventional grounded cathode stage. But even then not "much" smaller, just smaller - maybe -14dB before you add decoupling. If you have an SRPP, only -6dB.

Since we are dealing with such small signal voltages coming from the turntable, these small voltages from the unregulated power supply showing up on the anode get amplified and cause the woofer flap?
Yes.

Can you show me the math on how you got the 80s for the time constant?
Just a first-order low-pass filter. We want -80dB (i.e. a voltage ratio of 0.0001) at 20Hz, so we want 2 pi f C R = 10 000, so CR = 80s.
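That figure can be reproduced in a couple of lines (first-order RC, well above the corner, so attenuation is approximately 1/(2*pi*f*CR)):

```python
import math

# The post's target: -80 dB (voltage ratio 1e-4) at 20 Hz from a
# first-order RC low-pass. Well above the corner, attenuation ~ 1/(w*CR),
# so w*CR must be 1e4.
f = 20.0
target_ratio = 1e-4              # -80 dB
cr = 1 / (2 * math.pi * f * target_ratio)
print(round(cr), "s")            # ~80 seconds
```

E.g. 800k of series resistance would need 100 uF of decoupling capacitance to get there.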

Keit said:
Regulated power supplies aren't normally used with preamps. They are used in applications where a power tube screen requires a steady voltage, or in electronic instruments, oscilloscopes, etc., where DC precision is required.
Regulation can be used anywhere a stable voltage is required. Sometimes this is because a particular voltage is required, but a phono preamp doesn't care too much what the voltage is, provided it is stable. Regulation is like decoupling, except that it works down to DC - exactly what we want.
 
Yes I planned on going two section anyway with the regulator. I was just asking about the 80s time constant which I see now.


DF96 makes a good argument for why a regulated power supply is good for a phono section. Thanks for your help.

He makes the reverse argument actually. In any reasonable system there is going to be a 2-section RC filter anyway.

But how is an electronic regulator, comprising two vacuum tubes, two tube sockets, a gas tube, and sundry resistors and a loop compensation capacitor, going to be as cheap as a simple RC or RCRC filter? Or the equivalent in silicon be as cheap? It won't be, of course.

This seems to be the same argument as using CCS anode loads - super high impedance. Since more is better, the thinking is that still more is still more better. Ain't right. Elegant design isn't just added complexity; elegance is achieving all that is required with the simplest option.
 
No, not at all. Just make sure you include a 1.0M resistor between the DC rail and the bottom VR tube.

Another trick I've used works well with a choke input supply. Set it up so that the critical current isn't reached until the VR tubes strike and add their current draw. So before the VR tubes strike the voltage will rise and continue to rise to that of a cap input supply, then drop back down to normal upon striking.

Works like a charm - I have a 0A3 and 0D3 in series on each channel and never a problem with striking.
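The striking trick above hinges on the choke-input "critical current": below it, the output rises toward the cap-input peak voltage, which helps the VR tubes strike. A rough sketch using the common full-wave rule of thumb I_min ~ V_dc / (6*pi*f*L); the supply values here are made up for illustration, not taken from the post:

```python
import math

def min_current_a(v_dc, f_mains, l_henry):
    """Minimum load current for a choke-input filter to stay in
    choke-input mode, from the full-wave rule of thumb
    I_min ~ V_dc / (6 * pi * f_mains * L)."""
    return v_dc / (6 * math.pi * f_mains * l_henry)

# Hypothetical supply: 300 V DC out, 60 Hz mains, 10 H choke.
i_min = min_current_a(300, 60, 10)
print(round(i_min * 1000, 1), "mA minimum load")
# If the bleeder plus idle load draws less than this, the voltage rises
# toward the cap-input peak - until the VR tubes strike and add their draw.
```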
 
You seem very confused, Popilin.

I was born confused, but do not worry, I am medicated now. :D

I advanced the case for not bothering with a regulator.

Ah, silly me, I thought that this thread was about tube regulators…

An amplifier being "transparent" (that is, you can easily recognise what instruments were played, and the diction of singers is clear) comes from having low signal intermodulation. Not the impairment we are discussing here, which is speaker cone excursion at very low frequencies.

Although the transparency issue was only anecdotal, your approach seems to me very simplistic. To cite just one example among many: if we add ripple to a sensitive stage (i.e. a phono stage, as the OP proposes) whose PSRR is low, goodbye transparency.

That distortion products fall away with signal level, and thus are a non-issue at the low levels encountered in preamps in tube-based systems, is a mathematical fact, like 1 + 2 = 3. It arises because a curved line (the tube's input-output transfer) looks more and more like a straight line as you shorten the excursion over that curve. In a tube amp, the output stage contributes most of the distortion, with some from the driver stage.

We all know that valve curves (transfer curve included) follow a 3/2 power law.

I agree that the output stage contributes most of the distortion; nonetheless a pre-distorted signal is not the best-case scenario, and it is also a fact that many preamps sound horrible.

Thus SRPP can have some value in driving output stages in some designs, but is a useless fad in preamps.

A competent designer builds an SRPP stage with a fixed load, which allows the SRPP stage to be balanced for ridiculously low distortion; as a final stage you can choose some cathode follower, e.g. Allen Wright's SLCF.

DF96 makes a good argument for why a regulated power supply is good for a phono section.

Agree.
 
Although the transparency issue was only anecdotal, your approach seems to me very simplistic. To cite just one example among many: if we add ripple to a sensitive stage (i.e. a phono stage, as the OP proposes) whose PSRR is low, goodbye transparency.

That is the case with power supply rectification ripple (100 Hz or 120 Hz). But DF96 wasn't talking about rectification ripple. He was talking about much lower frequencies arising from AC mains instability.

Rectification ripple reduces transparency because it intermodulates with the signal. E.g. if the signal is 1000 Hz and you live in a 50 Hz power country, the intermod products will be at 900 and 1100 Hz. You can hear that. But if you have 1 Hz ripple, the spurs are at 999 and 1001 Hz. Your ears cannot resolve that.
 
We all know that valve curves (transfer curve included) follow a 3/2 power law.

I agree that the output stage contributes most of the distortion; nonetheless a pre-distorted signal is not the best-case scenario, and it is also a fact that many preamps sound horrible.

That's muddy thinking. With typical gains, the tube-curve traverse in preamp stages is so small that distortion is very much less than in the output stage or output driver. Not a little bit less, a whole lot less.

Say the output stage distortion is 1%. If the preamp predistorts another 0.01%, your ears are still going to perceive 1%. Your ears won't tell you 1.01% is worse. Your ears won't even distinguish 1.1% from 1%.

If a tube preamp sounds horrible, the designer must be completely incompetent.
 
That is the case with power supply rectification ripple (100 Hz or 120 Hz). But DF96 wasn't talking about rectification ripple. He was talking about much lower frequencies arising from AC mains instability.

Please, do not mix the discussions; I talked about transparency because you did.

Rectification ripple reduces transparency because it intermodulates with the signal. E.g. if the signal is 1000 Hz and you live in a 50 Hz power country, the intermod products will be at 900 and 1100 Hz. You can hear that.

Totally agree.

Similar reasoning favors a well regulated PSU for heaters too.

But if you have 1 Hz ripple, the spurs are at 999 and 1001 Hz. Your ears cannot resolve that.

My ears cannot resolve that, but you transfer the problem to the speakers.

That's muddy thinking. With typical gains, the tube-curve traverse in preamp stages is so small that distortion is very much less than in the output stage or output driver. Not a little bit less, a whole lot less.

Not necessarily, there are very low distortion power amps out there.

Say the output stage distortion is 1%. If the preamp predistorts another 0.01%, your ears are still going to perceive 1%. Your ears won't tell you 1.01% is worse. Your ears won't even distinguish 1.1% from 1%.

Although your numbers are a bit tendentious, again you treat distortion as the only cause of bad sound.

A phono stage is the major contributor of noise; even the volume pot before the line stage can generate an order of magnitude more noise than the valves per se, not to speak of some HF oscillation problems, very difficult to see with a scope.

Unfortunately, preamps have their own sound signatures, and they are far from absolutely transparent.

If a tube preamp sounds horrible, the designer must be completely incompetent.

The existence of completely incompetent designers is also a fact.

No amplifier is better than its power supply; an incompetent design can be even worse.
 
Please, do not mix the discussions; I talked about transparency because you did.
No. You mentioned transparency first, at the end of your post #22, in which you implied that the transparency of your system is due to HT regulation.

My ears cannot resolve that (intermod 1 Hz away from 1000 Hz), but you transfer the problem to the speakers.
No. The result of intermodding 1000 Hz with a bit of 1001 Hz is a tiny bit of 1 Hz (inaudible) and a tiny bit of 2001 Hz, masked by the speaker's harmonic distortion. In any case the speakers should be operating in their linear range. If they are not, intermod from HT drift artifacts will be the least of your concerns.

You assign to distortion as the only cause of bad sound.
No I didn't.

A phono stage is the major contributor of noise; even the volume pot before the line stage can generate an order of magnitude more noise than the valves per se, not to speak of some HF oscillation problems, very difficult to see with a scope.
You seem very confused again. Pots contribute noise, but their contribution to distortion is negligible.

If you have a preamp that oscillates, it's either faulty or the designer was incompetent.

The existence of completely incompetent designers is also a fact.
So, avoid them and their products.
 
This seems to be the same argument as using CCS anode loads - super high impedance. Since more is better, the thinking is that still more is still more better. Ain't right. Elegant design isn't just added complexity; elegance is achieving all that is required with the simplest option.

Two other reasons for using a CCS: power supply isolation, and getting close to the maximum mu of a tube without very high B+ supplies and large resistors.
Both of these are relevant when designing phono stages.
As for your thought that 30 Hz is good enough: when you design for 30 Hz, the rolloff starts well before that number. Design for a 5 Hz rolloff and you can have a circuit with good frequency and transient response at 20 Hz. Most record-cutting lathes in the stereo age were/are able to properly cut 10 Hz signals. This isn't opinion; it's well documented. I wouldn't consider a preamp that only went down to 30 Hz competently designed. Apparently neither would almost all commercial manufacturers.
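The point about the rolloff starting well before the corner is easy to quantify for a first-order high-pass (a sketch; the 5 Hz and 30 Hz corners are the figures under discussion):

```python
import math

def highpass_loss_db(f, f3):
    """Loss of a first-order high-pass with corner f3, at frequency f, in dB."""
    r = (f / f3) / math.sqrt(1 + (f / f3) ** 2)
    return 20 * math.log10(r)

# Response at 20 Hz for the two design corners discussed.
loss_5 = highpass_loss_db(20, 5)     # designed for a 5 Hz rolloff
loss_30 = highpass_loss_db(20, 30)   # rolls off at 30 Hz
print(round(loss_5, 2), "dB at 20 Hz with a 5 Hz corner")
print(round(loss_30, 2), "dB at 20 Hz with a 30 Hz corner")
```

A 5 Hz corner leaves 20 Hz essentially untouched (about a quarter of a dB down), while a 30 Hz corner is already about 5 dB down there.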
 
Two other reasons for using a CCS: power supply isolation, and getting close to the maximum mu of a tube without very high B+ supplies and large resistors.
Both of these are relevant when designing phono stages.
This is the old "more is better, so still more is still more better" argument. Woolly thinking.

You get as close as desired to max mu as long as the anode load is large compared to the tube's anode impedance. You don't need a CCS for that.

As for your thought that 30 Hz is good enough: when you design for 30 Hz, the rolloff starts well before that number. Design for a 5 Hz rolloff and you can have a circuit with good frequency and transient response at 20 Hz. Most record-cutting lathes in the stereo age were/are able to properly cut 10 Hz signals. This isn't opinion; it's well documented. I wouldn't consider a preamp that only went down to 30 Hz competently designed. Apparently neither would almost all commercial manufacturers.

The lathe cutter head could generally do 10 Hz and even lower. But the cutting amplifiers mostly could not. Cutting engineers didn't like deep bass because it requires too much groove spacing. And in the UK, the performance royalty system encouraged long-duration songs, so groove spacing had to be kept down. Also, records were made to be playable on the cheapest record players, and too much bass makes for tracking issues. And mixer desks used to cut off at 50 Hz too.

In the UK, much of what was released on vinyl was recorded on tape recorders that could barely meet the consumer-grade specs of the 1970s.

Heck, a lot of music recorded in British studios in the 1950s and 1960s was mastered on the original Wright and Weaire 1/4-inch tape recorder - the same one that the BBC did not regard as really good enough for music on AM radio. EMI reserved their professional BTR series for musicians who earned them millions, e.g. Cliff Richard, the Beatles.

The specified response of the Wright and Weaire (later sold as Ferrograph) was 70 Hz to 12,000 Hz +/- 1.5 dB and 50 Hz to 10,000 Hz +/- 2 dB - considered quite good in its day for a 1/4-inch tape recorder, but not hi-fi. It was popular with studios because it was easy to operate, did not seem to lead to accidental tape erasure, was rugged and reliable, and had a flexible arrangement for inputs and outputs. And it was a fraction of the cost of a BTR or an Ampex.

In the USA, big outfits like RCA had fully professional studio gear that was of the highest grade and really excellent. But, as in England, a lot of studios (e.g. the famous Sun Studios) didn't want to spend the money and used standard radio-station equipment - mike amps, mixers, tape recorders, cutting lathes - all specified for 50 Hz to 10,000 Hz per radio-station practice.
 
Two other reasons for using a CCS: power supply isolation, and getting close to the maximum mu of a tube without very high B+ supplies and large resistors.

Agree, but the most important benefit of a CCS anode load is valve linearity, even if some want to cover the sun with one hand.

Both of these are relevant when designing phono stages.
As for your thought that 30 Hz is good enough: when you design for 30 Hz, the rolloff starts well before that number. Design for a 5 Hz rolloff and you can have a circuit with good frequency and transient response at 20 Hz. Most record-cutting lathes in the stereo age were/are able to properly cut 10 Hz signals. This isn't opinion; it's well documented. I wouldn't consider a preamp that only went down to 30 Hz competently designed. Apparently neither would almost all commercial manufacturers.

Totally agree.
 