How to limit amp's output power when connected to small speakers?

Status
Not open for further replies.
Thanks to all who replied and provided insight. I think I am going to go with the bulb solution as I really need something easy. My electronic skills are also very limited so messing with gain is definitely beyond my knowledge.

AP2, since I am a noob in electronics, by "in series with the load" do you mean just before the speakers?
 
Thanks to all who replied and provided insight. I think I am going to go with the bulb solution as I really need something easy. My electronic skills are also very limited so messing with gain is definitely beyond my knowledge.

AP2, since I am a noob in electronics, by "in series with the load" do you mean just before the speakers?

Yes, the bulb connected as a "fuse" in the speaker wire (or inside the speaker).
 
Hi,
A very simple, very good system is a 6 V, 1.5 A lamp in series with the load (or try other current ratings).
Because tungsten changes resistance with current, this ensures an excellent automatic adjustment of the current 🙂

Regards

Hi...

BOSE uses this method in their systems for the woofer and satellites, and it works flawlessly.

regards,
Savu Silviu
 
Limiting the gain is useless. The extra gain is required because different recordings may vary in level by as much as 10 dB.


So then set the 0dB point at your peak level (with digital, peak level is equal to your highest possible average level) and let the -10dB fall where it may. This is not rocket science ... you can't do a digital recording without figuring this stuff out.

And, although it depends somewhat on what you're listening to, it's an extremely rare modern digital recording that is not pegged pretty much at 0dB with, at most, 3 dB of dynamic range over most of the track (ie +0, -3dB). Most, in fact, clip the digital waveform and basically just sit at 0dB for the entire song. Pop in your favourite CD (or, if you must, an mp3) and look at some waveforms. No dynamic range at all. Zero. Nada.

So, open any sound recording app on a computer, make a 0 dB CD-R, and pop it into a CD player connected to your amp and speakers. Set the level. Done. The level will be lower on any other source, including a CD with actual dynamic range (if you can find one), so there is nothing to worry about.
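If you'd rather generate such a full-scale test file yourself than hunt for one, here is a quick Python sketch using only the standard library; the filename, tone frequency, and length are arbitrary choices:

```python
import math
import struct
import wave

RATE = 44100          # CD sample rate
FREQ = 440            # test tone frequency (arbitrary choice)
SECONDS = 5
FULL_SCALE = 32767    # max positive value for 16-bit PCM, i.e. 0 dBFS

# Build a full-scale (0 dBFS) sine tone.
samples = [
    int(FULL_SCALE * math.sin(2 * math.pi * FREQ * n / RATE))
    for n in range(RATE * SECONDS)
]

with wave.open("zero_db_test.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)   # 16-bit samples
    w.setframerate(RATE)
    w.writeframes(struct.pack("<%dh" % len(samples), *samples))
```

Burn the resulting WAV to a CD-R with any burning tool and you have a repeatable 0 dB reference.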


Thanks to all who replied and provided insight. I think I am going to go with the bulb solution as I really need something easy. My electronic skills are also very limited so messing with gain is definitely beyond my knowledge.

Hey, you can use the bulb. It's safe and will work. So what if it wastes power for no apparent reason? So what if it presents a load that the amp almost certainly was not built to drive? That's why you asked people who know, right? So you could choose the lamest possible way to do it. But, according to you, you don't have the skills to do it right.
 
And, although it depends somewhat on what you're listening to, it's an extremely rare modern digital recording that is not pegged pretty much at 0dB with, at most, 3 dB of dynamic range over most of the track (ie +0, -3dB). Most, in fact, clip the digital waveform and basically just sit at 0dB for the entire song. Pop in your favourite CD (or, if you must, an mp3) and look at some waveforms. No dynamic range at all. Zero. Nada.

That's actually not quite correct. Although it's true that modern recordings are much louder than older recordings, and thus have vastly reduced dynamic range, most will still have an average level at -10 to -9 dB, which is today's de facto standard.

In the "good ol' days" before the "loudness war", recordings followed the RIAA guidelines, which dictated between -18 and -12 dB depending on the desired dynamic range, where -18 dB was called uncompressed and -12 dB maximum compression.

Please note that we are talking electrical dB here, where each +3 dB is a doubling in level, and not acoustic dB (Sound Pressure Level), where each +6 dB is a doubling in level.

Today's de facto standard for testing amps reflects this in that they are tested against Gaussian pink noise, usually normalized to -1 dB max peak, which gives a -10 dB average level relative to the RMS value.

So effectively you can pretty much always divide the max RMS output power of an amp by 8 or 10 to get the actual maximum average power output with music signals.
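As a sanity check on that divide-by-8-or-10 rule, here is the dB arithmetic in a few lines of Python (the 100 W rating is just a made-up example):

```python
# Convert a level in dB to a power ratio: ratio = 10 ** (dB / 10).
def db_to_power_ratio(db):
    return 10 ** (db / 10)

amp_max_rms_watts = 100   # hypothetical amplifier rating
avg_level_db = -10        # typical average music level, per the post above

avg_power = amp_max_rms_watts * db_to_power_ratio(avg_level_db)
print(avg_power)          # about 10 W: the max power divided by 10
```

A -9 dB average gives a factor of about 8, which is where the 8-to-10 range comes from.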

This is why I say that unless he wants to play continuous sine waves, he doesn't need to lower the output at all, or even protect the speakers in any way. They won't see more power than they're rated for.
 
Please note that we are talking electrical dB here, where each +3 dB is a doubling in level, and not acoustic dB (Sound Pressure Level), where each +6 dB is a doubling in level.

This is a misconception. First of all, "level" already means a dB value, so that phrase is a self-contradiction. Further, there is no "acoustic dB" and "electrical dB" (or level) as such. The two different kinds of levels you could have heard about are levels related to amplitude quantities (voltage, current, pressure, etc.):

L = 20 · log10(A / A0) dB

and levels related to power quantities (power, intensity, etc.):

L = 10 · log10(P / P0) dB

One of the most important questions is: what is the reference value (A0, P0)? In the case of CD Audio it can be the physically allowable maximum (2 V peak in analog, full scale in digital), or the maximum of the actual song. Both are useful, but for different purposes...
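For anyone following along, the two definitions translate directly into code:

```python
import math

def level_db_amplitude(a, a0):
    """Level of an amplitude quantity (voltage, current, pressure)."""
    return 20 * math.log10(a / a0)

def level_db_power(p, p0):
    """Level of a power quantity (power, intensity)."""
    return 10 * math.log10(p / p0)

# Doubling an amplitude gives +6.02 dB; doubling a power gives +3.01 dB.
print(level_db_amplitude(2.0, 1.0))   # ~6.02
print(level_db_power(2.0, 1.0))       # ~3.01
```

The two agree, of course, since power goes as the square of amplitude into a fixed resistance.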

which gives a -10 dB average level relative to the RMS value.

Level of what, and RMS value of what? You compared two undefined values which differ in many aspects (but you mentioned none of them).
 
That's actually not quite correct. Although it's true that modern recordings are much louder than older recordings, and thus have vastly reduced dynamic range, most will still have an average level at -10 to -9 dB, which is today's de facto standard.

I suggest you look at some waveforms of your favourite CDs, or even better an LP transcribed to digital. On older, tape-mastered disks you can normalize at -13 dB average and avoid clipped samples on 90% of the music recorded prior to 1990. Those that cannot be normalized at that level will have increased dynamic range (i.e. you would have to normalize lower than -13 dB to avoid clipped samples), not less.

On modern digitally recorded CDs the overall level will be reduced significantly if you attempt to normalize at that average level, indicating that even that small amount of dynamic range doesn't exist in the recording. [However, when playing back music, the modern and the older recording would have the same perceived loudness if normalized at the same average level, which is much more pleasant when playing multiple songs in a random order; it's what iTunes does when you enable "Sound Check", although Sound Check does a poorer job, sonically, than manually editing the file would.]

Eg: Katy Perry's "I kissed a girl"
Maximum dynamic range 4.9 dB (peak level measurement); -6.1 dB (average level measurement). In other words, you could normalize at -6.1 dB average and not clip any samples (that were not clipped on the original disk).
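If you want to measure those numbers on your own rips, the arithmetic is simple. A sketch for 16-bit PCM sample values; the square-wave input is made up purely to exercise the three measurements:

```python
import math

def peak_db(samples, full_scale=32767):
    """Peak level relative to digital full scale (0 dBFS)."""
    return 20 * math.log10(max(abs(s) for s in samples) / full_scale)

def average_db(samples, full_scale=32767):
    """RMS (average) level relative to digital full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms / full_scale)

def clipped_count(samples, full_scale=32767):
    # Samples pinned at full scale are (almost certainly) clipped.
    return sum(1 for s in samples if abs(s) >= full_scale)

# Illustrative extreme: a full-scale square wave has 0 dB peak AND 0 dB
# average level, with every sample "clipped" -- zero dynamic range.
square = [32767, -32767] * 1000
print(peak_db(square), average_db(square), clipped_count(square))
```

Run the same functions over a decoded track and the peak-minus-average difference is the dynamic range figure discussed above.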


This is a misconception. First of all, "level" already means a dB value, so that phrase is a self-contradiction. Further, there is no "acoustic dB" and "electrical dB" (or level) as such. The two different kinds of levels you could have heard about are levels related to amplitude quantities (voltage, current, pressure, etc.):

L = 20 · log10(A / A0) dB

and levels related to power quantities (power, intensity, etc.):

L = 10 · log10(P / P0) dB

One of the most important questions is: what is the reference value (A0, P0)? In the case of CD Audio it can be the physically allowable maximum (2 V peak in analog, full scale in digital), or the maximum of the actual song. Both are useful, but for different purposes...

Level of what, and RMS value of what? You compared two undefined values which differ in many aspects (but you mentioned none of them).

I just assumed you understood that a recording level is one thing and electrical signals are another.

With an analog recording, the 0 VU level would be set based on an electrical level; although that level was a standard, the standard varied depending on whether your meter was calibrated to consumer or professional levels. Exceeding 0 VU was possible at the expense of increased distortion when recording a session to tape, but the engineer would use his good judgement, so large variance in average levels wasn't common. LP records required that certain constraints be adhered to, to avoid the cutter making extreme excursions and creating an unplayable record, but that would be accounted for in mastering.

Broadly speaking, an LP record had a fairly consistent level due to the desire to maximize level versus the practical needs of the cutter and the physical limitations of the LP. If large excursions (deep bass) were to be encoded in the LP, for example, the length per side had to be reduced. So the length of the LP desired by the label would limit overall levels and probably involve equalization to limit LF content. If an album had 25 minutes per side, you knew there could not possibly be much low bass encoded. If it had perhaps 18 minutes per side, it could possibly have extended LF information. And so on.

With a digital recording, 0dB is the maximum record level. This level cannot be exceeded regardless of how you try; all that will result is clipped samples and a higher average level (less dynamic range). All levels on a digital recording are referenced to that level; so -6dB is six decibels down from the maximum level possible. In contrast to an LP record there is no practical limit on LF response since there is no penalty to include it that matters to the artist, engineer, or label ... you can always create a CD Master of the same length, for example, with or without extended LF response.

In practical terms analog recordings were mastered for LP, so the cassette would use the same master, as would the CD release. Similarly, modern digital recordings are created for CD and the mp3 would use the same master, although there are some engineers who do master for each format independently today.

"In the case of CD Audio it can be the physically allowable maximum ... or the maximum of the actual song."

On a modern mastered disk, those values will be the same.

For more on dynamic range and current mastering practices, see:
http://www.cdmasteringservices.com/dynamicdeath.htm

Note that the most recent example in that article is from 2000, I can assure you it's become worse in the last decade.

When played back by hardware, that level will correspond to some maximum output level electrically. Exactly what voltage that is depends on the device and the load it's driving.

Since most CD players are supposed to output 2 V, and since no other conventional line-level signal is that high (e.g., an AM/FM tuner is typically 0.775 V), you can use the output of your CD player as your maximum expected output level.

All other line level signals will be lower. It's best to use your own CD player since few actually output exactly 2V and there is variance from unit to unit (it's extremely common for it to be above 2V), and you want to set the level necessary based on the gear you will be expected to use with the amplifier and speakers.
 
The obvious (to me) answer is to lower the rail voltage to the chipamp. Reducing it to about 2/3 will give less than half the unclipped output power.

Another way is to put a power resistor in series with each speaker. This makes a voltage divider with the speaker. Half the voltage gives 1/4 the power. This will affect the frequency response of the speaker if the speaker impedance is not flat across frequencies (it never is). Given the lo-fi nature of this system, it should not be a problem.
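The divider arithmetic is easy to check. A small Python sketch, treating the speaker as a flat 8 ohm resistance (which, as noted, it never really is):

```python
def speaker_power_fraction(r_series, r_speaker):
    """Fraction of the amp's output voltage the speaker sees, and the
    resulting power fraction, for a speaker modeled as a flat resistance."""
    v_frac = r_speaker / (r_series + r_speaker)
    return v_frac, v_frac ** 2   # power goes as voltage squared

# Equal series resistor: half the voltage, a quarter of the power.
v, p = speaker_power_fraction(r_series=8.0, r_speaker=8.0)
print(v, p)   # 0.5 of the voltage -> 0.25 of the power
```

Note the series resistor also has to dissipate real power, so a chunky wirewound part is needed, not a small film resistor.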
 
The obvious (to me) answer is to lower the rail voltage to the chipamp. Reducing it to about 2/3 will give less than half the unclipped output power.

Another way is to put a power resistor in series with each speaker. This makes a voltage divider with the speaker. Half the voltage gives 1/4 the power. This will affect the frequency response of the speaker if the speaker impedance is not flat across frequencies (it never is). Given the lo-fi nature of this system, it should not be a problem.

Lowering the voltage rails would work. The only reason to consider not doing so, and limiting the input sensitivity instead, is that most Class D amplifiers have better distortion characteristics at all output levels when supplied at near optimum voltage rail levels. With more conventional amplifiers, eg Class AB, that would be less of a problem in many cases.
 
I think the series light bulb idea is nothing but a compressor with a lot of distortion and will seriously affect the frequency response by increasing the impedance driving the speakers and in a very non-linear way. A series power resistor is a much better way IMHO.

But nobody mentioned a parallel resistor. By placing about a 4 ohm resistor in parallel with the 8 ohm speaker the resistor will consume 2/3 of the power (the speaker and resistor will see the same voltage) and the amp will be forced to current limit if you attempt to exceed the rating of the amp, thus protecting the speakers. This assumes overcurrent protection or clipping of the amp to limit the power.

This method wastes power but it won't screw up the frequency response (assuming the amp can drive 3 ohm loads) and it won't distort like the light bulb.
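Here is the arithmetic for that parallel-resistor split, assuming a flat 8 ohm speaker and a hypothetical 30 W of total amplifier output:

```python
def parallel_split(r_par, r_speaker, total_power):
    """Split total power between a parallel resistor and the speaker.
    Both see the same voltage, so power divides in proportion to
    conductance (i.e. inversely with resistance)."""
    g_par, g_spk = 1 / r_par, 1 / r_speaker
    frac_par = g_par / (g_par + g_spk)
    return frac_par * total_power, (1 - frac_par) * total_power

# 4 ohm resistor across an 8 ohm speaker: the resistor burns 2/3 of
# the power, and the amp sees a 2.67 ohm combined load.
p_res, p_spk = parallel_split(4.0, 8.0, total_power=30.0)
print(p_res, p_spk)   # ~20 W in the resistor, ~10 W to the speaker
```

The combined load of 4 || 8 is about 2.7 ohms, which is why the amp must be comfortable driving roughly 3 ohm loads for this to be safe.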

A light bulb is good for a compressor, but even then sized too small and it will sound horrible. Sized too big you have no real protection. Finding the perfect bulb will be very difficult IMHO. If you want to make a limiter using a light bulb you really need an adjustable output to size the output power to the light bulb, not the other way around. I'll bet you either end up with distorted sound or no effective protection unless you're very lucky.

I've been playing around with incandescent compressor/limiters and it's tricky to get it right. Using only an op-amp and a flashlight bulb can yield a useable compressor, but the signal amplitude has to be just right. There's also the problem of more bass compression than treble. The filament has a thermal time constant and won't respond to fast transients as well as it will to bass, so the sound can get harsh without a treble pre-emphasis/de-emphasis before and after the bulb, so that the treble has enough energy to heat up the filament.

But for limiting speaker power, ditch the light bulb IMHO. A fuse would be your best bet if you want to keep it low tech, but you'll need spares. My experience is that the sound will degrade and distort well before the peak power in the speaker is reached if the bulb is sized to protect the speaker.
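To illustrate why the filament compresses sustained bass but lets fast transients through, here is a toy first-order thermal model in Python. Every constant in it is invented for illustration; a real bulb's cold/hot resistance and time constant would have to be measured:

```python
# Toy first-order model: filament "heat" tracks dissipated power with a
# thermal time constant, and resistance rises from cold toward hot value.
def simulate_filament(signal, r_cold=1.0, r_hot=2.0, tau=0.05, dt=0.001):
    heat, resistances = 0.0, []
    for v in signal:
        power = v * v                      # instantaneous dissipation
        heat += (power - heat) * (dt / tau)  # first-order thermal lag
        resistances.append(r_cold + (r_hot - r_cold) * min(heat, 1.0))
    return resistances

# A sustained tone heats the filament (resistance nearly doubles);
# a 5 ms transient barely moves it, so transients escape the limiting.
sustained = simulate_filament([1.0] * 500)
transient = simulate_filament([1.0] * 5 + [0.0] * 495)
print(sustained[-1], transient[-1])
```

This matches the earlier observation that the bulb's resistance swings only between roughly 1R and 2R, and only for prolonged power, not peaks.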

:2c:
 
I think the series light bulb idea is nothing but a compressor with a lot of distortion and will seriously affect the frequency response by increasing the impedance driving the speakers and in a very non-linear way. A series power resistor is a much better way IMHO.

But nobody mentioned a parallel resistor. By placing about a 4 ohm resistor in parallel with the 8 ohm speaker the resistor will consume 2/3 of the power (the speaker and resistor will see the same voltage) and the amp will be forced to current limit if you attempt to exceed the rating of the amp, thus protecting the speakers. This assumes overcurrent protection or clipping of the amp to limit the power.

This method wastes power but it won't screw up the frequency response (assuming the amp can drive 3 ohm loads) and it won't distort like the light bulb.

A light bulb is good for a compressor, but even then sized too small and it will sound horrible. Sized too big you have no real protection. Finding the perfect bulb will be very difficult IMHO. If you want to make a limiter using a light bulb you really need an adjustable output to size the output power to the light bulb, not the other way around. I'll bet you either end up with distorted sound or no effective protection unless you're very lucky.

I've been playing around with incandescent compressor/limiters and it's tricky to get it right. Using only an op-amp and a flashlight bulb can yield a useable compressor, but the signal amplitude has to be just right. There's also the problem of more bass compression than treble. The filament has a thermal time constant and won't respond to fast transients as well as it will to bass, so the sound can get harsh without a treble pre-emphasis/de-emphasis before and after the bulb, so that the treble has enough energy to heat up the filament.

But for limiting speaker power, ditch the light bulb IMHO. A fuse would be your best bet if you want to keep it low tech, but you'll need spares. My experience is that the sound will degrade and distort well before the peak power in the speaker is reached if the bulb is sized to protect the speaker.

:2c:

Hi all,
I think you are mixing up the old-school use of the bulb as a compressor with the actual behavior of a bulb used at the output. Yes, R can change, but only between about 1R and 2R at most, and the response is slow; in this case it reduces the power to the speaker gradually (peaks are not dangerous, only prolonged periods of power are). A change between 1R and 2R does not cause any distortion, since the resistance changes only while the power is still high. (It would be worse if it changed very quickly, because then the amplifier would constantly see a load with characteristics different from 4R.) For small power corrections, or to protect the speaker, the equivalent would be a very complex circuit (if it is to limit the output current with low added distortion). (It is very important to choose the bulb's current rating precisely.)
As for recording levels ... today they vary a lot. 🙂

Regards
 
(It is very important to choose the bulb's current rating precisely.)
As for recording levels ... today they vary a lot. 🙂

Regards

That's my main point. Choosing the bulb for optimal performance is tricky - good luck.

I've tried this method and the results I got were poor. Perhaps I was too conservative in choosing the lamp rating and should have used a higher power lamp.

I've also experimented with line-level devices that use a flashlight bulb in the same way but with a purely resistive load. The drive (and consequently the amount of compression) can be adjusted. It's an unusual type of compression. I find it doesn't sound too good on mixes, but on some solo instruments it can sound good.

This is not the same as the "optical" compressors that use lights or an LED to illuminate an LDR. I'm talking about compressors that function solely by using the filament resistance of an incandescent bulb. They DO work!
 