cviller said:
How hard are those output devices biased? ~5W per device?
And how does this sound different to fewer devices biased harder?
I'm a bit puzzled... the mantra in this forum seems to be higher current equals lower distortion, but does this still apply when the current is shared between many devices... ?
Now there's an interesting question. As with most things, you can hit it quickly and move on or take a deep breath and submerge yourself in things for a while...
Set aside the rail voltage question for the moment and assume that all other factors are held constant. The more current you can supply, the better. In a class A amp, you have to plan on delivering all the current from the quiescent bias--going to class B is cheating.
I glanced at the owner's manual and saw that the amp doubles into 4 Ohms. To stay in class A into that load, you need twice the bias that 8 Ohms would require. In a 30W/ch amp that's not really all that intimidating, because the rails are fairly low and the heat dissipation is correspondingly less.
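To put rough numbers on it (the 30W/8 Ohm and doubling-into-4 figures come from the manual as quoted above; the factor-of-two rule for push-pull class A is the usual rule of thumb, not anything specific to this amp):

```python
import math

def peak_current(p_watts, r_ohms):
    """Peak load current for a sine wave delivering p_watts into r_ohms."""
    return math.sqrt(2.0 * p_watts / r_ohms)

def min_class_a_bias(p_watts, r_ohms, push_pull=True):
    """Minimum quiescent bias to keep the output stage in class A.
    A push-pull stage can swing roughly twice its bias into the load;
    a single-ended stage is limited to the bias current itself."""
    ipk = peak_current(p_watts, r_ohms)
    return ipk / 2.0 if push_pull else ipk

# 30 W into 8 ohms needs ~2.74 A peak, so ~1.37 A of push-pull bias.
# Doubling into 4 ohms (60 W) needs ~5.48 A peak, so ~2.74 A of bias.
for p, r in [(30, 8), (60, 4)]:
    print(f"{p} W / {r} ohm: Ipk = {peak_current(p, r):.2f} A, "
          f"bias >= {min_class_a_bias(p, r):.2f} A")
```

That's why the 4 Ohm spec doubles the bias requirement: peak current goes up by root two from the power and another root two from the halved load.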
Okay, that's the quick answer. Now go a level deeper.
I believe Nelson has said that he feels that it's the overall level of bias that matters--not the bias per device. That's a slightly different way of saying the same thing I said above. Treat the amp as a classical Black Box and ignore the innards. Run the bias up as high as you dare and you don't have to worry about low impedance loads or dips in the impedance curve (which aren't a problem until you happen to hit that frequency, granted, but...).
Lotsa devices versus few devices is another question entirely. There are at least a dozen ways to look at this. Part of Nelson's view is--has to be, really, from a businessman's point of view--reliability. He puts a small army of devices in the output to make things more reliable in case of operator error. Day-to-day reliability improves, too, since each device dissipates less heat.
I've been thinking about this and I'm leaning the other way at the moment: fewer output devices with more bias per device. It's not that hard to argue high bias per device, so I'll leave that alone. There's a more subtle point in that if you think about matching devices, there's always a little slop in the result. We, as DIYers, don't have the resources to buy as many devices as someone who does this for a living. If you're limited to a matching pool of 20 units, you're not going to find perfect matches unless you've been a very, very good boy and the audio gods are smiling upon you. The practical implication is that if you're building something like an Aleph 2, which uses six pairs of devices per channel, each device is conducting a little more or a little less than its neighbors while the music is playing. To some greater or lesser degree, that causes distortion.
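Here's a toy square-law model of that slop. Every number in it--the transconductance constant, the source resistor, the Vth offsets, the gate drive--is made up for illustration, not taken from an Aleph. It just shows how leftover Vth mismatch turns into a current mismatch between paralleled devices, and how source degeneration pulls it back in:

```python
import math

def drain_current(v_gate, vth, k=2.0, rs=0.47):
    """Solve Id = k*(v_gate - Id*rs - vth)^2 for a MOSFET in saturation
    with source degeneration rs. Closed-form physical root of the quadratic."""
    v = v_gate - vth
    if v <= 0:
        return 0.0
    if rs == 0:
        return k * v * v
    return (2*k*rs*v + 1 - math.sqrt(4*k*rs*v + 1)) / (2.0 * k * rs * rs)

# Six "matched" devices with a few tens of mV of leftover Vth slop:
vths = [2.000, 2.012, 1.995, 2.021, 1.988, 2.007]
vg = 2.65  # common gate drive, picked to land near a plausible bias point
ids = [drain_current(vg, vth) for vth in vths]
print(["%.0f mA" % (i * 1000) for i in ids])
print("spread: %.1f%%" % (100 * (max(ids) - min(ids)) / (sum(ids) / len(ids))))
```

Run it with rs=0 and the spread gets noticeably worse--which is exactly why the source resistors are there, and why the residual mismatch still wiggles around as the signal moves each device along its curve.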
I can just see the next fifteen posts wailing about how they matched their devices to .00001V etc. etc. etc. Follow that with another fifteen posts from people worried that somehow their Alephs suddenly sound worse than they did yesterday. Not the case, guys. Your Alephs sound just as good as they always did.
What I'm talking about is the idea that if I can get two devices to match then I'm better off than having four or six...assuming that the devices can take the heat. So it's a balancing act. And since the devices are running harder, you have to consider the possibility that they will pop sooner rather than later. A not-so-inconsequential benefit is that the cumulative Gate capacitance seen by the front end drops.
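The capacitance point is easy to put numbers on: total input capacitance scales with device count, and together with the driving stage's output impedance it sets a pole. The 3000 pF and 100 Ohm figures below are round illustrative values, not measured Aleph numbers:

```python
import math

def input_pole_hz(n_devices, ciss_farads=3000e-12, r_drive_ohms=100.0):
    """-3 dB frequency of the RC formed by the driver's output impedance
    and n paralleled gate capacitances (ignoring Miller effect)."""
    return 1.0 / (2.0 * math.pi * r_drive_ohms * n_devices * ciss_farads)

for n in (6, 2):
    # roughly 88 kHz for six devices vs 265 kHz for two
    print(f"{n} devices: pole at {input_pole_hz(n) / 1e3:.0f} kHz")
```

Going from six devices to two moves that pole up by a factor of three--the front end simply has an easier load to drive.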
Note also that the pictures seem to show the output devices to be IRF parts. Given that the IRF P-ch devices have that odd frequency-related gain thing, Nelson just might be rolling out his stock of IRF outputs with an eye towards using, say, Fairchild in future designs. What better way than to overbuild the outputs in current product? Mind you, I'm not saying that this is the case, just that it's a possibility.
There are a few other things I was going to throw in but I'm needed elsewhere. It's taken me three or four hours even to get this far. Bummer.
Grey