Advantage in designing an amplifier to accept higher than average input voltages?


Is there a difference implied... an amplifier only designed to handle a 15 Vpk input, as opposed to one actually fed 15 Vpk?

I think lower THD and higher SNR are possible, but not automatic.
But someone better qualified can give you a more complete answer.

You are correct. A low-gain design doesn't necessarily mean that the amplifier will have lower noise or lower distortion. However, there are many advantages to building an amplifier with low gain.

At Benchmark we designed the AHB2 with low gain so that it could accept full studio-level balanced inputs without the use of a pad. Most power amplifiers have way too much gain and this makes it difficult to properly match the gain with the upstream device.

Studio D/A converters are normally calibrated to produce +24 dBu when fed with a full-scale (0 dB FS) digital input. +24 dBu is 12.28 Vrms and is much higher than the 2 Vrms unbalanced output used on most consumer products. Studios use high-amplitude interconnects to reduce noise in the signal chain.
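
For reference, 0 dBu is 0.7746 Vrms (1 mW into 600 Ω), so these figures are easy to check. A quick sketch of the conversion:

```python
import math

DBU_REF = 0.7746   # Vrms; 0 dBu corresponds to 1 mW into 600 ohms

def dbu_to_vrms(dbu: float) -> float:
    return DBU_REF * 10 ** (dbu / 20)

def vrms_to_dbu(v_rms: float) -> float:
    return 20 * math.log10(v_rms / DBU_REF)

print(f"+24 dBu = {dbu_to_vrms(24.0):.2f} Vrms")   # ~12.28 Vrms
print(f"2 Vrms  = {vrms_to_dbu(2.0):+.1f} dBu")    # ~+8.2 dBu, typical consumer maximum
```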

We designed the AHB2 to clip at an input level of +24 dBu. This leaves 2 dB of headroom for the D/A converter, a comfortable operating point that makes nearly full use of the converter's dynamic range.

We included a gain switch on the AHB2 that can be used to boost the gain when the amplifier is fed from lower-level sources. Keeping the internal signal levels high and the amplifier core gain low helps to keep the noise low.

The real key to low-noise design is understanding thermal noise (also known as Johnson noise). Every component contributes some thermal noise, and this contribution shrinks when impedances are kept very low. Higher signal levels keep the signal further above the thermal noise floor. In order to achieve the 132 dB A-weighted SNR of the AHB2, the noise contribution of every device in the signal path needed to be calculated. We do these noise calculations using our own custom software because we have not been able to find an off-the-shelf simulator that does everything we need.
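
The post doesn't show the math, but the underlying formula is the standard one, v_n = sqrt(4kTRB). A minimal sketch (this is not Benchmark's software, just the textbook calculation) showing how both lower impedance and higher signal level buy SNR:

```python
import math

K = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # K, roughly room temperature
BW = 20_000.0      # Hz, audio bandwidth

def johnson_noise_vrms(r_ohms: float) -> float:
    """RMS thermal-noise voltage of a resistor over the audio band."""
    return math.sqrt(4 * K * T * r_ohms * BW)

for r in (100.0, 1_000.0, 10_000.0):
    vn = johnson_noise_vrms(r)
    snr = 20 * math.log10(12.28 / vn)   # against a +24 dBu (12.28 Vrms) signal
    print(f"{r:>7.0f} ohm: {vn * 1e9:7.1f} nV rms -> {snr:.0f} dB SNR")
```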
 
a really quiet home listening room might get down to NC-20; middling high-sensitivity 90 dB/W speakers and kW amplifiers still give only ~100 dB S/N at the listening position

132 dB might be a cool number for the marketing guys, but that much dynamic range is never present in a commercial music recording reproduced in the home
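
The arithmetic behind that estimate is straightforward (rough, illustrative numbers):

```python
import math

sensitivity = 90.0   # dB SPL @ 1 m for 1 W
amp_power = 1000.0   # W, a "kW amplifier"
room_noise = 20.0    # dB SPL, a very quiet (NC-20-ish) room as a single number

peak_spl = sensitivity + 10 * math.log10(amp_power)   # 90 + 30 = 120 dB SPL
print(f"peak ~{peak_spl:.0f} dB SPL, usable S/N ~{peak_spl - room_noise:.0f} dB")
```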
 

I agree that the AHB2 amplifier is way quieter than necessary for speakers with a 90 dB/W sensitivity, but the situation is different for people who have high-efficiency horns.

We have a customer who has Avantgarde Trio Omega horns with a 109 dB/W efficiency. Obviously this is unusually high efficiency, and it presents some unique problems when selecting an amplifier.

With these speakers driven from the AHB2, the amplifier noise at 1 m is -6 dB SPL and is absolutely inaudible. His two prior amplifiers produced noise at 23 dB SPL and 10 dB SPL; he described the first as being "very objectionable", and the second as being "clearly audible". He has a quiet listening space, but I am quite certain that the noise in the room was significantly higher than the noise produced by either of his old amplifiers. Nevertheless, the amplifier noise from his old amplifiers was audible.
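
Working backwards from those SPL figures (assuming a nominal 8 Ω load, which is not stated in the post) gives a feel for the output noise voltages involved:

```python
import math

sensitivity = 109.0   # dB SPL @ 1 m for 1 W (Avantgarde Trio, per the post)
load_ohms = 8.0       # assumed nominal impedance

for noise_spl in (-6.0, 10.0, 23.0):   # AHB2 and the two earlier amplifiers
    p_noise = 10 ** ((noise_spl - sensitivity) / 10)   # noise power delivered, W
    v_noise = math.sqrt(p_noise * load_ohms)           # rms voltage at the output
    print(f"{noise_spl:+5.0f} dB SPL -> {v_noise * 1e6:6.1f} uV rms")
```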

As you may know, the noise in the room has a masking effect, but it is a common misconception to think that no noise (or music) is audible below the level of the room noise. For example, a 3 kHz tone is audible at a level as low as 30 dB below a white-noise masking signal. This is easy to demonstrate, and it can also be calculated from masking theory. The low-level sensitivity of our hearing peaks at about 3 kHz.
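
A rough way to see why: only the noise inside the ear's critical band around the tone does the masking. (The ~350 Hz critical bandwidth near 3 kHz is an assumed textbook value; real masking models are more involved.)

```python
import math

total_bw = 20_000.0   # Hz, bandwidth of the white-noise masker
cb_3k = 350.0         # Hz, assumed critical bandwidth near 3 kHz

# Noise power falling inside the critical band, relative to the total:
in_band_db = 10 * math.log10(cb_3k / total_bw)
print(f"critical-band noise sits {-in_band_db:.1f} dB below the total noise power")
# A tone detectable several dB below the in-band noise can therefore sit
# roughly 20-30 dB below the broadband noise level and still be heard.
```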

Furthermore, the noise floor of many power amplifiers is dominated by line-related tones (typically 180 Hz). These AC hum tones can be audible down to the listener's threshold of hearing (if the room is relatively quiet). Audible AC line-related hum can certainly detract from the listening experience.

To keep the amplifier noise inaudible, it is not a bad idea to target 0 dB SPL at 1 meter. There is a Dolby AES paper showing that 3 dB SPL was audible to some listeners at 3 kHz.
 
the masking argument doesn't help you - it only says we can already hear some tones below the broadband noise number; absent other masking sounds, only the noise in the narrower critical band and a few adjacent bands may matter
but it doesn't say we need a lower broadband electrical noise floor in a typical home audio reproduction scenario - the narrow-band limit still runs into the absolute noise floor of human hearing in many systems
and it's rather foolish to use the anechoic-chamber hearing threshold curve, which is only reached after many minutes of accommodation, for predicting what may interfere with the reproduction of commercial recorded music in a domestic listening room



the hum point is a better argument - a higher signal voltage can cut through some system noise influences, but a power amp with power-supply noise at the threshold of human hearing in its output may only sometimes improve with lower gain and a higher input level
if the noise comes from poor internal signal, power-supply and feedback ground routing, it may be nearly independent of the amp's closed-loop gain - if your amp makes too much noise with a shorted input and insanely sensitive speakers, you may have to replace the amp or the speakers
 
Since I could still hear what ought to amount to ~7 dB SPL, I can understand the Avantgarde owner's experience. Ambient noise tends to have a power spectrum that decays significantly towards the higher frequencies, while amplifiers usually contribute noise that is essentially white, so the two levels are not directly comparable. These are the perils of cramming complex things into a single number, be it noise levels or distortion.

However, there are considerably more efficient ways of solving a problem like this. After all, nobody truly needs an instantaneous dynamic range of >130 dB, which is easily sufficient to cover everything from silence to ear-damaging volumes. As such, the top 20 dB or so are just going to be wasted now. Chances are that the inclusion of some attenuation between pre and power amp would have dropped noise to inaudible levels while still maintaining maximum levels in excess of 100 or even 110 dB... and an attenuator tends to be an awful lot cheaper than a reference-class preamp.
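
For the record, a simple resistive divider is all such a pad fundamentally needs. A minimal sketch (a real pad also has to respect source and load impedances, and balanced lines need a symmetric network):

```python
def divider_for_attenuation(atten_db: float, r_total: float = 10_000.0):
    """Series/shunt resistors for a simple voltage divider with atten_db of loss."""
    ratio = 10 ** (-atten_db / 20)   # output/input voltage ratio
    r_shunt = r_total * ratio
    r_series = r_total - r_shunt
    return r_series, r_shunt

rs, rsh = divider_for_attenuation(20.0)
print(f"-20 dB: {rs:.0f} ohm series, {rsh:.0f} ohm shunt")   # 9000 / 1000
```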

Of course doing things the "brute force" way still has its uses. Lower levels in the preamp = lower distortion after all (even if only measurably). Any tuner frontend designer will know this "trick". And occasionally there's some non-audio application which will benefit from hardware like this.
It is, however, a good thing that it is possible to "cheap out", otherwise our portable audio devices would be a lot more power-hungry than they are. High instantaneous dynamic range e.g. in a DAC output stage is always tied to power consumption in one way or another, be it due to higher supply voltages or higher current consumption at the same distortion levels. (Shouldn't be surprising since dynamic range is a power ratio after all.) If you can get by with less dynamic range that can be shifted up and down as needed, significant power savings are likely to result.
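
To put a number on it: once the noise floor is pinned near its physical limits, each extra dB of dynamic range must come from a larger maximum swing, hence a higher supply voltage:

```python
# Dynamic range is a power ratio: with the noise floor fixed, extra DR
# requires proportionally more maximum voltage swing from the supply.
for extra_db in (6, 12, 20):
    scale = 10 ** (extra_db / 20)
    print(f"+{extra_db:>2} dB DR -> ~{scale:4.1f}x the voltage swing")
```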
 
Cherry has recommended an input sensitivity of around 0.3 Vrms instead of the more standard 1 Vrms to lower any common-mode distortion.

A low-gain amp has to handle a high input voltage, so it needs an input stage that can do so gracefully. Either an inverting input or a cascode stage whose common-base voltage is referred to the input tail fulfills this requirement.

If a voltage of less than 8 Vrms (many professional processors like BSS, DBX, XTA deliver this at FSD) is sufficient to feed the drivers (which is often the case for midrange drivers and tweeters), the power amp output can be a simple voltage follower.
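
As a quick sanity check on the unity-gain idea (nominal driver impedances assumed):

```python
v_rms = 8.0   # Vrms, full-scale output of the processors mentioned above
for z_ohms in (4.0, 8.0, 16.0):
    print(f"{z_ohms:>4.0f} ohm driver: {v_rms ** 2 / z_ohms:5.1f} W")
# 16 W / 8 W / 4 W -- ample for most midrange drivers and tweeters
```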
 
There are a few renowned designers who recommend the opposite for best amplifier performance.
Lower input voltages move the input stage over a smaller range of voltages and currents and thus achieve better linearity (lower distortion).

If the differential amplifier that compares the input signal to the output signal sees a common mode signal, and this differential amplifier also has a common-mode distortion problem, then this is true. But, if an inverting input buffer and an inverting main stage are used, this problem is eliminated. It is also relatively easy to design a differential amplifier that doesn't have a common-mode distortion problem.

High signal levels make it easier to get a low-noise signal from one audio box to the next. For this reason, high level balanced interconnects are universally used in professional applications. High-level signals also make it easier to get a low noise signal across a printed circuit board.

But SNR is a function of power. A high SNR can be achieved with high current and low voltage if you don't want to use high voltage signaling. Either way, you will burn the same amount of power in a given resistor for a given SNR due to Johnson noise.
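
A quick sketch of that equivalence: a high-voltage/high-impedance link and a low-voltage/low-impedance link with the same dissipated power give the same SNR against Johnson noise:

```python
import math

K = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # K
BW = 20_000.0      # Hz, audio bandwidth

def snr_db(v_rms: float, r_ohms: float) -> float:
    """SNR of a signal against the Johnson noise of its source resistance."""
    vn = math.sqrt(4 * K * T * r_ohms * BW)
    return 20 * math.log10(v_rms / vn)

# Equal dissipated power (V^2/R = 10 mW in both cases) -> equal SNR:
print(f"10 V into 10 kohm: {snr_db(10.0, 10_000.0):.1f} dB")
print(f" 1 V into 100 ohm: {snr_db(1.0, 100.0):.1f} dB")
```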
 