Another goofball idea

Here is your chance to tell me why this wouldn't work.

In most commercial amplifiers, distortion is controlled in the cheapest way possible, i.e. NFB. Given that the DIYer is not bound by all of the economic constraints of mass-market electronics, we are free to try other methods of reducing the non-linearity of power transistors, even some goofy ones. If some NFB is still required, at least we will have lowered the amount needed.

My thinking goes thusly (only SE amplifier topology considered here)... Please correct me at any place that I go astray.

Much of the non-linearity of the bipolar power transistor is a result of the variation of forward current gain with collector current. One way to reduce the variation in Hfe, then, is to limit the current swing needed to produce a particular output power. Paralleling a boatload of output devices, taking proper precautions to avoid current hogging and to provide adequate heatsinking, is a typical way to do this.

Now I wonder if there might not be another way. To develop a particular power output we can either run a high AC current, or a lower current at a high AC voltage. So what if we were to use a transistor with a rather high maximum collector working voltage and a high AC load impedance, so that the current swing is reduced and with it the variation in Hfe. In addition, we bias the device at a point on the Hfe curve where the variation is minimized.

Now to provide the high dynamic impedance we insert our old friend the output transformer into the collector circuit. The transformer would be wound to provide the highest practical AC load to the output device.
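For a rough sense of the numbers: an ideal transformer reflects the secondary load to the primary by the turns ratio squared. A minimal sketch (the 25:1 ratio and 8-ohm speaker are illustrative values, not from this post):

```python
# Sketch: the AC load an ideal output transformer presents to the
# collector. Turns ratio and speaker impedance here are assumptions
# chosen for illustration only.
def reflected_load(turns_ratio: float, z_secondary_ohms: float) -> float:
    """An ideal transformer reflects impedance by the turns ratio squared."""
    return turns_ratio ** 2 * z_secondary_ohms

# An 8-ohm speaker behind a hypothetical 25:1 OPT:
print(f"{reflected_load(25.0, 8.0):.0f} ohms")  # 5000 ohms
```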

OK. So let me have it. I have the asbestos suit handy. :hot: :D Why wouldn't this work? Assume earlier stages are linearized as much as possible by local NFB or emitter degeneration.

mike
 
I am also kicking around tube ideas (in fact I will probably end up doing mostly tube stuff) but I am always keeping an eye on improving transistors, since I have a lot of sand lying around that I wouldn't mind using. :)

Since some of the best amps in the world use OPTs, it doesn't seem like the tranny itself is such a lo-fi device, but if high-voltage transistors are crappy then it kind of 86s the whole idea. I don't know enough about MOSFETs to even guess whether such an idea is applicable to them.

mike
 
yup
Sometimes I think tube and SS designs differ mainly in philosophy, i.e. tubes are mostly used in their very linear region with low gain and low feedback, while transistors are forced to give lots of gain, often outside their linear region, with loads of feedback (both local and global).
Time to mix it!!
regards
 

PRR

Member
Paid Member
2003-06-12 7:04 pm
Maine USA
www.diyaudio.com
> Much of the non-linearity of the bi-polar power transistor is a result of the variation of forward current gain with collector current.

Only in bad designs. And Beta is so cheap that this is rarely a problem.

The first-order nonlinearity is the variation of forward voltage gain with collector current, or Transconductance.

> One way to reduce the variation in ... is then to limit the current swing needed to produce a particular ... power.

Yes. This is routine in input stages: standing current much higher than signal current.
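That idea can be sketched with the ideal exponential BJT model, where gm = Ic/Vt: the smaller the signal swing is relative to the standing current, the less gm varies over a cycle. The bias and swing numbers below are illustrative assumptions, not from this post:

```python
# Sketch: why a standing current much larger than the signal current
# linearizes a BJT stage. Ideal exponential model, gm = Ic / Vt.
VT = 0.0257  # thermal voltage at room temperature, volts

def gm(ic_amps: float) -> float:
    """Small-signal transconductance of an ideal BJT at collector current ic."""
    return ic_amps / VT

bias = 10e-3  # 10 mA standing current (illustrative)

# Small signal: 1 mA peak swing around the bias point
small = gm(bias + 1e-3) / gm(bias - 1e-3)
# Large signal: 9 mA peak swing around the same bias point
large = gm(bias + 9e-3) / gm(bias - 9e-3)

print(f"gm spread, 1 mA swing: {small:.2f}x")  # ~1.22x
print(f"gm spread, 9 mA swing: {large:.2f}x")  # ~19x
```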

> insert our old friend the output transformer

Does not change the basic problem. If you swing 1A to 3A, or 1mA to 3mA with a 1000:1 transformer, same transistor distortion, plus iron cost/weight/color, plus the generally poorer performance of high-volt devices.

Where you seem to be headed is getting 10 Watts out of a 100 Watt amp. Distortion is lower, about 3X lower because current swing is 3X lower. And of course, that's what many people do: run high-power amplifiers at modest powers.
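The "about 3X" figure follows from power scaling with the square of the swing into a fixed load, so the swing used scales as the square root of the power ratio. A one-line sketch:

```python
# Sketch: current (or voltage) swing used when running a big amplifier
# at modest power into a fixed load. P is proportional to swing squared.
import math

def relative_swing(p_used_watts: float, p_max_watts: float) -> float:
    """Fraction of full-power swing used to deliver p_used out of p_max."""
    return math.sqrt(p_used_watts / p_max_watts)

print(f"{relative_swing(10, 100):.2f}")  # ~0.32, i.e. roughly 3X lower
```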

If you read Doug Self, you will see that most transistor distortions have solutions. And that there are several non-obvious distortion mechanisms. Applying all that gets the THD number well down into the noise level.
 
Excellent, here is some meaty information.

>> Much of the non-linearity of the bi-polar power transistor is a result of the variation of forward current gain with collector current.

> Only in bad designs. And Beta is so cheap that this is rarely a problem.

Just want to clarify. Are you saying that a good design actually reduces the variation in Hfe with Ic or that a good design reduces the effect of non-linear Hfe?

> The first-order nonlinearity is the variation of forward voltage gain with collector current, or Transconductance.

So the distortion is a result of the voltage/current relationship at the base-emitter junction rather than a variation due to the collector current itself?

>> insert our old friend the output transformer

> Does not change the basic problem. If you swing 1A to 3A, or 1mA to 3mA with a 1000:1 transformer, same transistor distortion, plus iron cost/weight/color, plus the generally poorer performance of high-volt devices.

What I am not clear on is why this would be the case. Is distortion not reduced in an amplifier by using multiple output devices each delivering less AC current (i.e. less collector current swing)?
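One way to see PRR's point is a sketch under the ideal exponential (Shockley) model, which ignores Hfe droop at very high currents: the Vbe excursion, and hence this nonlinearity, depends only on the ratio of the collector currents, not their absolute size. The numbers follow PRR's 1A-to-3A vs 1mA-to-3mA example:

```python
# Sketch: under the ideal exponential model, the Vbe change for a given
# collector-current excursion depends only on the current *ratio*, so
# scaling the current down through a transformer leaves this
# nonlinearity untouched.
import math

VT = 0.0257  # thermal voltage at room temperature, volts

def delta_vbe(i_low: float, i_high: float) -> float:
    """Base-emitter voltage change for a collector-current excursion."""
    return VT * math.log(i_high / i_low)

print(f"{delta_vbe(1.0, 3.0) * 1000:.1f} mV")    # ~28.2 mV at amps
print(f"{delta_vbe(1e-3, 3e-3) * 1000:.1f} mV")  # identical at milliamps
```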

> Where you seem to be headed is getting 10 Watts out of a 100 Watt amp. Distortion is lower, about 3X lower because current swing is 3X lower. And of course, that's what many people do: run high-power amplifiers at modest powers.

Well actually, what I was proposing is a higher signal voltage and a lower signal current for the same power output. Looks like I need to do some more reading. It is possible that multiple output devices would be a better approach, but you seem to be implying that the current swing is not the real problem, which would indicate that more devices would not help either.

> If you read Doug Self, you will see that most transistor distortions have solutions. And that there are several non-obvious distortion mechanisms. Applying all that gets the THD number well down into the noise level.

Will see if I can look this up.
Thanks for your very helpful post.

mike
 
I did find a Self article. A lot of it didn't seem to apply; he is specifically addressing diff input, PP, class AB with GNFB, rather than SE class A with local FB only. However, one thing I was getting from his article is that large-signal distortion in the EF is a result of the non-linear current load on the driver stage due to the variable Hfe of the output device. In other words, the decrease in Hfe with increased current causes a greater increase in the needed driver current, which causes a voltage droop in the driver. Am I reading that correctly? If so, minimizing the output impedance and maximizing the current capability of the driver should help, right? Would adding a CC source to the driver be helpful?
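A toy illustration of that mechanism (the Hfe values below are made up purely to show the shape of the problem, not taken from any datasheet):

```python
# Sketch: the driver must supply roughly I_out / Hfe, so if Hfe droops
# at high collector current, the driver current demand grows faster
# than the output current does.
def driver_current_ma(i_out_amps: float, hfe: float) -> float:
    """Base current the driver must deliver, in milliamps."""
    return i_out_amps / hfe * 1000.0

print(f"{driver_current_ma(0.5, 100):.1f} mA")  # 5.0 mA at low current
print(f"{driver_current_ma(5.0, 60):.1f} mA")   # 83.3 mA: >10x for 10x output
```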

mike
 
mashaffer said:
I did find a Self article. A lot of it didn't seem to apply; he is specifically addressing diff input, PP, class AB with GNFB, rather than SE class A with local FB only. However, one thing I was getting from his article is that large-signal distortion in the EF is a result of the non-linear current load on the driver stage due to the variable Hfe of the output device. In other words, the decrease in Hfe with increased current causes a greater increase in the needed driver current, which causes a voltage droop in the driver. Am I reading that correctly? If so, minimizing the output impedance and maximizing the current capability of the driver should help, right? Would adding a CC source to the driver be helpful?

mike


Mike,

You seem to know very well what you're talking about. The issue with the driver being influenced by the output device's Hfe is well known. By the same mechanism, the variations in the load (speaker, xover) with frequency also influence the driver; it is as if the load and its variations are 'reflected' back to the driver. One approach often taken is to make the driver able to deliver enough current and give it a very low Zout, so not a CS but a VS.

One idea in the direction of your thinking is to use multiple speakers in series. If you use, say, four nominal 8-ohm drivers instead of a single 8 ohms, your load will be a nominal 32 ohms, and you could use a higher voltage and a lower current. You wouldn't need the tranny. Of course, you would need that speaker designed for your amp, but it is an option.
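A quick sketch of the voltage/current trade Jan describes; the 50 W figure is an arbitrary example power:

```python
# Sketch: same power into 8 ohms vs four 8-ohm drivers in series (32 ohms).
# P = V^2 / Z, so V = sqrt(P * Z) and I = V / Z.
import math

def rms_v_and_i(power_watts: float, z_ohms: float):
    """RMS voltage and current needed to deliver a given power into a load."""
    v = math.sqrt(power_watts * z_ohms)
    return v, v / z_ohms

v8, i8 = rms_v_and_i(50, 8)     # 20.0 V, 2.50 A
v32, i32 = rms_v_and_i(50, 32)  # 40.0 V, 1.25 A: twice the voltage, half the current
print(f"8 ohms:  {v8:.1f} V, {i8:.2f} A")
print(f"32 ohms: {v32:.1f} V, {i32:.2f} A")
```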

Jan Didden
 
mashaffer said:
Here is your chance to tell me why this wouldn't work!
---

My thinking goes thusly (only SE amplifier topology considered here)

Much of the non-linearity of the bi-polar power transistor is a result of the variation of forward current gain with collector current.
--
Now I wonder if there might not be another way.

To develop a particular power output we can either run high AC current or a lower current at high AC voltage.

So what if we were to use a transistor with a rather high maximum collector working voltage and a high AC load impedance,
so that the current swing is reduced and with it the variation in Hfe.

In addition we bias the device at a point of the Hfe curve where the variation is minimized.

:cool:
Now to provide the high dynamic impedance we insert our old friend the output transformer into the collector circuit. The transformer would be wound to provide the highest practical AC load to the output device.

OK. So let me have it. I have the asbestos suit handy. :hot: :D Why wouldn't this work? Assume earlier stages are linearized as much as possible by local NFB or emitter degeneration.

mike
SY said:
It's been done, but lord knows why.

OPTs are pretty awful devices and HV transistors have low Hfe and fT; it's a wonderful case of the cure being worse than the disease.
Duo said:
This is an interesting line of theory. The whole idea of running the transistors in such a gentle current situation is wonderful.

The transformer on the other hand.... :eek:
darkfenriz said:
If so...
... then why stick to SS and not take a tube?
Most advantages of a transistor seem to 'blur' and 'fade' in this configuration.
This way you get an SE triode amp + output transformer (as vintage as vodka + cucumber).


mike, mashaffer,
I remember when you joined www.diyaudio.com
and some good discussion we had, in Solid State


Your idea/question is a very good one.
Looking for some alternatives to the common amplifier thinking
in the mass audio amplifier industry.

As you put it so well:
In most commercial amplifiers distortion is controlled in the cheapest way possible, i.e. NFB.

Another goofball idea :cool:



Speaking for myself .........................................................
I have done some paperwork, simulation & experimenting with very high voltage preamplifiers.
The results don't look bad.
There are some practical issues, however, when using very high voltages with transistors.
Most components/devices commonly available
are made for those cheapest ways possible = lower voltages, load impedances etc.

This goes for not only transistors, but also resistors, capacitors .... etc.

I see you dwell 99% in the Tubes Forum now
... maybe this is as good an answer as any, to your question in this topic ;)


Regards :) lineup
..................................................................................


