Bridging power amps

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Yes, my theory, if that is the way you want to see it.
I see myself as being the messenger. The information is all in the Forum; there is no need for me to invent new theories, especially since I know I am not capable of that.
I also see the very big difference between measured power fed into a resistive load and the stress imposed on an amplifier trying to feed a reactive load. I know they are not the same, and I adjust my models to take some account of that difference.

Some folk blindly think: "Ahh, that amp can deliver 57.5 W into a 4R0 resistor, so it must be just perfect for driving a two-way, 50 W, 4 ohm, 87 dB/W @ 1 m speaker with an internal passive crossover full of correction components that flatten the in-room response to something the manufacturer thinks I will like."

BTW,
I have repeatedly stated my opinion about the crippling current limitations of the National chipamps, and particularly what to design for when attempting to build an amplifier for a 4 ohm speaker. 68 W into 8 ohms and ~40 W to 50 W into 4 ohms give similar stress values for the internal components of the IC.
Asking a 3886 to deliver 68 W into a 4 ohm reactive speaker is nonsense. That's why National only quote resistive loads throughout the datasheet. National, like all the other manufacturers, hides the data they don't want you to see.
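As a rough sketch of the arithmetic behind that derating (the wattage figures are the ones quoted above, not datasheet values), the peak output current for 68 W into 8 ohms and for 50 W into 4 ohms comes out in the same ballpark, which is roughly what "similar stress" implies for a resistive load:

```python
import math

def peak_current(power_w, load_ohm):
    """Peak sinusoidal output current for a given average power into a resistor."""
    v_rms = math.sqrt(power_w * load_ohm)
    return math.sqrt(2) * v_rms / load_ohm

i_8r = peak_current(68, 8)   # ~4.1 A peak
i_4r = peak_current(50, 4)   # ~5.0 A peak
print(round(i_8r, 2), round(i_4r, 2))
```

A reactive 4 ohm speaker can demand a multiple of the resistive figure, which is the whole point of the EPDR discussion further down the thread.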
 
Yes, one thing we really could use is a better metric for the load a given speaker poses to an amplifier. An "8 ohm" speaker is very different from an 8R0 resistor and from any other "8 ohm" speaker. Such a metric should include impedance magnitude, phase angle, and efficiency, since what we are really interested in is the sound, not the heat. Ideally, it should also weight different frequencies in a way similar to the spectrum of typical programme material. I suspect that no manufacturer will step up to develop such a metric, though. It is convenient to have many different measurements, so everyone can find some metric that makes their product look good on paper.
 
The trouble with finding such a model is that there are too many different speakers around and 'the' typical speaker does not exist.
The only thing all speakers have in common is a resistive component, and the vast majority of speakers fall into an impedance range that roughly resembles an average load of four to eight ohms. That led to the convention of quoting the maximum output power into four and eight ohm resistors.
If that makes National's amp ICs look good, it does the same for all other amps.
 
It's a different article from the one previously linked in this Forum, but the message is very similar.
The effective "resistance" seen by the amplifier is much less than the measured Re of the speaker when fast transients are fed through the system.

I note the 0.01% of time that the EPDR reduction exceeds a factor of 4 in the Table 2 results.
That 0.01% could be ten clipping/limiting incidents, each lasting an average of 100 µs, occurring in every 10 second section of the replayed music programme. Or one long clipping incident lasting 1 ms every 10 seconds. Is that acceptable when high quality sound reproduction is expected and/or designed for?
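The two scenarios above are just two ways of slicing the same total clipping time; a quick sanity check on the arithmetic:

```python
# 0.01% of a 10 second window is 1 ms of total clipping time:
# e.g. ten incidents of 100 us each, or one incident of 1 ms.
window_s = 10.0
fraction = 0.01 / 100                 # 0.01% as a fraction
total_clip_s = window_s * fraction    # total clipping time per window

assert abs(total_clip_s - 10 * 100e-6) < 1e-12   # ten 100 us events
assert abs(total_clip_s - 1e-3) < 1e-12          # or one 1 ms event
print(total_clip_s)
```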

As for the ~0.5% of clipping or limiting incidents during the replay of programme material: I suspect that will sound really bad.

Note that Table 2 and the EPDR are power referenced.
What usually matters is the current capacity of the output stage when the devices have a particular Vce.
The 3 factor I use for current is equivalent to an EPDR of 9 when applied to the nominal impedance of the speaker. It would be a different EPDR if the Re value were used instead.
 
Those numbers are for the B&W 802, which has, in words from the article, "a reputation for being an amplifier ball-breaker". Not all speakers are that cruel to amps.

Ideally, I would like to scale the EPDR metric with the efficiency of the loudspeaker to understand the absolute magnitudes involved, not just the relative ones. That scaling gives us a curve for how much current is needed to produce a particular SPL at a particular frequency, given the complex impedance of the loudspeaker. For instance, a 97 dB @ 1 W speaker with an EPDR of 1 ohm will need 1 A to deliver 1 W and 97 dB. An 87 dB speaker needs 3.2 A to produce the same 97 dB, since power increases as the square of the current and this speaker needs 10 W to do the same job.
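The scaling described above can be put into a small sketch (the function name and signature are mine, just for illustration):

```python
import math

def current_for_spl(target_spl_db, sensitivity_db_1w, epdr_ohm):
    """Amplifier current needed to reach a target SPL, given the speaker's
    sensitivity (dB @ 1 W @ 1 m) and its EPDR at the frequency of interest."""
    power_w = 10 ** ((target_spl_db - sensitivity_db_1w) / 10)
    return math.sqrt(power_w / epdr_ohm)

# 97 dB/W speaker, 1 ohm EPDR: 1 W and 1 A for 97 dB.
# 87 dB/W speaker, same EPDR: 10 W and ~3.2 A for the same 97 dB.
print(current_for_spl(97, 97, 1))   # 1.0
print(current_for_spl(97, 87, 1))   # ~3.16
```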

Finally, I would calculate the weighted average, using the spectral content of representative programme material, perhaps just 1/f (pink noise), as the weighting function. Gnarly impedance curves in the bass will draw more current than equally gnarly impedance curves in the treble, simply because there is more signal there. Then we would have a "figure of merit" that says a bit more about how heavy a load a particular speaker poses to the amp. The metric would probably be in units of dB @ 1 m per ampere.
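A minimal sketch of that proposed figure of merit, assuming a made-up EPDR curve and sensitivity (none of the numbers below are measured data):

```python
import math

def spl_per_amp(sensitivity_db_1w, epdr_ohm):
    """SPL produced by 1 A: at 1 A, power into the EPDR is EPDR watts."""
    return sensitivity_db_1w + 10 * math.log10(epdr_ohm)

def pink_weighted_merit(freqs_hz, epdr_curve_ohm, sensitivity_db_1w):
    """1/f (pink) weighted average of SPL per ampere across the band."""
    weights = [1.0 / f for f in freqs_hz]
    total = sum(weights)
    return sum(w * spl_per_amp(sensitivity_db_1w, z)
               for w, z in zip(weights, epdr_curve_ohm)) / total

# Toy EPDR curve: a dip in the bass is weighted more heavily than one
# in the treble, matching the reasoning above.
freqs = [50, 100, 500, 1000, 5000, 10000]
epdr = [1.5, 3.0, 6.0, 2.0, 6.0, 6.0]
print(round(pink_weighted_merit(freqs, epdr, 87), 1))  # dB @ 1 m per ampere
```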

For instance, I have a pair of Beyma TPL150 tweeters for my next project. These have 99 dB efficiency and constant 5 ohm impedance from 1.5 kHz up. To the amp, they look like a 5R0 resistor. Pretty much any amp can drive those, since 1 A current will produce 5 W power or somewhere around 106 dB @ 1 m. I will not need more than that very often, and I wouldn't worry too much about powering these with bridged chipamps.
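The TPL150 arithmetic above, worked through: 1 A into a flat 5 ohm load is 5 W, and with 99 dB @ 1 W sensitivity that lands at roughly 106 dB @ 1 m.

```python
import math

current_a = 1.0
z_ohm = 5.0                                   # flat impedance above 1.5 kHz
power_w = current_a ** 2 * z_ohm              # 5 W
spl_db = 99 + 10 * math.log10(power_w)        # ~106 dB @ 1 m
print(round(power_w, 1), round(spl_db, 1))
```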

Of course, this is a digression from the question in the opening post, but perhaps useful to understand in what applications a bridged chipamp is a reasonable idea and where it is not.
 
Any comments on the percentage of time that the music programme causes apparently massive reductions in effective speaker impedance?

Does 0.48% or 0.01% mean anything? Do we ignore these "events" as not likely to happen to me?
Or are they very relevant to what we do to our systems on a regular basis?

In my previous post, the last paragraph
Note that Table 2 and the EPDR are power referenced.
What usually matters is the current capacity of the output stage when the devices have a particular Vce.
The 3 factor I use for current is equivalent to an EPDR of 9 when applied to the nominal impedance of the speaker. It would be a different EPDR if the Re value were used instead.
mixed up resistances and powers. Do not bother trying to understand what I was trying to say.
 
Does 0.48% or 0.01% mean anything? Do we ignore these "events" as not likely to happen to me?
Or are they very relevant to what we do to our systems on a regular basis?

That depends on how loud you usually listen, and on the purpose and speaker for which you design an amplifier. There will always be a compromise. The equivalent EPDR of 9 for your factor of 3 is surpassed by all three speakers in the article at low frequencies. So will you increase that factor of 3 further to make your amp all-speaker-proof? Or will you stick to it, because you find it gives the best investment/performance ratio or because it is good enough for the speakers you have? If you do, what is wrong with somebody else deciding on a different, maybe lower factor?

And what does it actually mean for a chip amp? The LM3886 has a worst case peak current of 7 A, i.e. ~5 A rms. If you take the equivalent impedance of 1.5 ohm as reference, the amplifier would be limited to ~7.5 V output. That is nearly 8.5 dB louder than the output at 2.83 V. Even with an insensitive speaker that is quite loud. Should that not make us happy about the built-in hearing protection such an amp offers?
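Working through that limit arithmetic (the 7 A figure is the worst-case datasheet number quoted above; the 1.5 ohm equivalent impedance is the assumption from the article):

```python
import math

i_peak = 7.0                                  # LM3886 worst-case peak current
i_rms = i_peak / math.sqrt(2)                 # ~4.95 A rms
v_max = i_rms * 1.5                           # ~7.4 V rms into 1.5 ohm
headroom_db = 20 * math.log10(v_max / 2.83)   # ~8.4 dB above the 2.83 V level
print(round(v_max, 1), round(headroom_db, 1))
```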

And then there are the speakers that don't pose such difficult loads.
 
Any comments on the percentage of time that the music programme causes apparently massive reductions in effective speaker impedance?

Does 0.48% or 0.01% mean anything? Do we ignore these "events" as not likely to happen to me?
Or are they very relevant to what we do to our systems on a regular basis?

Frankly, I'm not sure these percentages tell us anything that is not already in the EPDR curve. The B&W 802D has an EPDR well below two ohms in two different frequency bands. Every time the programme content has most of its energy in one or both of those bands, current will be more than four times higher than what you would expect from an 8R0 resistor. To me, these percentages only say something about the fraction of time the recording has most of its energy in these narrow worst-case bands for this particular speaker. The recording in question is a bass solo, so no wonder it has quite a bit of energy in the 50-90 Hz band. The flute and soprano pieces tickle the 600-1000 Hz minimum instead. I think the main purpose of those tables was to get an independent confirmation of the EPDR as a metric.

We could easily build a loudspeaker that never dips below an EPDR of 5 ohms, for instance using nominal 16 ohm drivers, and combine this with 95+ dB efficiency. "Events" at the problem frequencies for the 802D would be a non-issue with such a speaker, even if we still get nearly four times the "nominal" current at some frequencies. It would still play ear-shatteringly loud at quite moderate currents. That would be a much more tube- and chipamp-friendly speaker than the B&W 802D.
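To put numbers on that hypothetical amp-friendly speaker (95 dB @ 1 W sensitivity and an EPDR floor of 5 ohms are the assumed figures from the paragraph above, and 110 dB @ 1 m is my arbitrary "ear-shattering" target):

```python
import math

sensitivity_db = 95
epdr_min_ohm = 5.0
target_spl_db = 110

power_w = 10 ** ((target_spl_db - sensitivity_db) / 10)   # ~31.6 W
current_a = math.sqrt(power_w / epdr_min_ohm)             # ~2.5 A worst case
print(round(power_w, 1), round(current_a, 2))
```

Two and a half amps worst case is well within reach of a tube amp or a single chipamp, which is the contrast being drawn with the 802D.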
 