when you decrease gain, you decrease noise
Yes, because you don't amplify the noise as much.
because the same modulated noise will be carried by a lower voltage.
What voltage are you talking about? The signal amplitude, or the power supply / available output swing voltage?
If you lower the gain, it will of course lower the output signal amplitude / voltage *for the same input signal setting*, but it doesn't lower the *available* voltage swing. And neither "voltage" corresponds directly to the noise level.
If you want to reduce the output level of a DAC, you have 3 solutions:
I don't think we were talking about reducing the output level of a DAC specifically, we were talking about reducing the overall gain of the chain.
1- Use an attenuator. (But you will increase the output impedance after it.)
Agree.
2- Reduce the level in the digital domain, but it will reduce the definition (fewer bits).
And that only applies if your only source is a DAC. (A quick sketch after this list puts numbers on the resolution cost.)
3- Reduce the analog voltage reference: each bit will have a lower voltage, but you will keep the definition.
I guess that applies to a DAC. It does not apply to a nCore.
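A quick sketch of the resolution cost in solution 2 (the 20 dB attenuation figure is just an assumed example, not a number from the thread):

import math

attenuation_db = 20.0                 # assumed amount of digital attenuation
scale = 10 ** (-attenuation_db / 20)  # linear scale factor applied to samples
bits_lost = math.log2(1 / scale)      # each halving of level costs one bit
print(f"{attenuation_db:.0f} dB of digital attenuation costs about {bits_lost:.1f} bits")
# -> about 3.3 bits, so a 24-bit path behaves more like a ~20.7-bit one

Solution 3 avoids this because scaling the reference voltage shrinks the step size instead of discarding steps.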
Your number is a little exaggerated 🙂
Most speakers lose 99% of whatever you feed them.
Damn right. People fight everywhere over THD, while the annoying distortions are the various modulation ones (IM and PM)... it seems the overall sport in hi-fi engineering these days lies in solving errors everywhere other than where they occur in the first place.
I wonder too why so much progress has been made on electronics and sources since the 70s, while the worst part of any audio system (the electro-acoustic transducers) has just decreased in overall quality.
Julf, if people complain about audible noise from an amplifier with a signal-to-noise ratio better than 120 dB, with high-efficiency loudspeakers, it is obvious the amp has both too much gain and too much available power.
Because if they can hear the noise in their very quiet 30 dB SPL listening room, it means the max level will be a lot above 120 dB SPL, the maximum level human ears are able to suffer.
The first reasonable thing is to choose a less powerful amp with less gain.
I just said that reducing the voltage applied to the switching device of a class D amp provides both at the same time, without reducing the definition.
I also said I would prefer to use a passive attenuator between the amp and the compression driver, with some benefits.
No need to go in circles on this subject.
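To put rough numbers on that argument (the noise voltage, sensitivity, and output voltage below are assumptions for illustration, not measured Ncore figures):

import math

sensitivity_db = 113.0   # assumed driver sensitivity, dB SPL for 2.83 V at 1 m
noise_v = 50e-6          # assumed amp output noise voltage, 50 uV
max_v = 25.0             # assumed amp output voltage at full power

noise_spl = sensitivity_db + 20 * math.log10(noise_v / 2.83)
max_spl = sensitivity_db + 20 * math.log10(max_v / 2.83)
print(f"noise floor ~{noise_spl:.0f} dB SPL, max ~{max_spl:.0f} dB SPL at 1 m")
# -> roughly 18 dB SPL of hiss (near audibility in a 30 dB SPL room)
#    and roughly 132 dB SPL at full output, well past what ears tolerate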
Isn't the basic mechanism the same (non-linearity, that is), just different ways of looking at it?
what do you mean that drivers have gotten worse?
Julf, if people complain about audible noise from an amplifier with a signal-to-noise ratio better than 120 dB, with high-efficiency loudspeakers, it is obvious the amp has both too much gain and too much available power.
Too much gain, yes. Too much power? Don't know - some people prefer to have headroom reserves.
Because if they can hear the noise in their very quiet 30 dB SPL listening room, it means the max level will be a lot above 120 dB SPL, the maximum level human ears are able to suffer.
I guess that is their choice and their ears 🙂
I just said that reducing the voltage applied to the switching device of a class D amp provides both at the same time, without reducing the definition.
Still not clear: are you speaking of the voltage that gets switched, the comparator reference voltage in your simplified description of a class D amp, the input voltage to the switching device, or something else? All those voltages are "applied to the switching device".
If you reduce the voltage that gets switched, then you only lower the clipping point, not the gain.
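A minimal way to see the distinction, assuming a conventional closed-loop class D topology (not necessarily the exact Ncore loop): the closed-loop gain is set by the feedback network, while the clipping point follows the switched rail,

$$A_{\mathrm{cl}} \approx 1 + \frac{R_f}{R_g}, \qquad V_{\mathrm{out,max}} \approx D_{\max}\, V_{\mathrm{rail}},$$

so halving $V_{\mathrm{rail}}$ halves the available swing but leaves the gain untouched. (In an open-loop modulator the gain does scale with the rail voltage, which may be the scenario the other poster has in mind.)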
We forgot one solution: doubling the listening distance will reduce the noise level by 6 dB!😀😀😀😀
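Joking aside, that is just the inverse-square law for a point source in free field:

$$\Delta L = 20 \log_{10}\!\left(\frac{d_2}{d_1}\right) = 20 \log_{10} 2 \approx 6\ \mathrm{dB}.$$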
You do not need headroom with compression drivers:
"5.1.1 - Throat Power Vs. Size Vs. Frequency
For what it's worth (and because you'll find very little on the Net), the maximum acoustic power into the throat depends on several factors. First is the relationship of actual frequency to the horn's cutoff frequency. As the ratio of f/fc (frequency divided by cutoff frequency) increases, so does distortion for a given acoustic power per unit of throat area. A sensible upper limit for throat acoustic power is around 6-10mW/mm², meaning that a 25mm (1") throat should not be subjected to more than 3-5W. A 50mm throat can take 4 times that power, or 12-20W acoustic (see graph [1]). The amount of acoustic power that can be accommodated decreases as frequency increases. For horns intended for operation from (say) 800Hz and above, the normal rolloff of amplitude with frequency (as found in most music) means that the acoustic power also falls with increasing frequency.
If the conversion efficiency of a compression driver is (say) 25%, this means there is absolutely no point supplying more than 20W (electrical) to a compression driver on a 25mm throat, or 80W for a 50mm throat, allowing for a sensible distortion of 2%. Past a certain limit (which varies with frequency vs. horn cutoff), supplying more power creates no increase in SPL, but simply creates more and more distortion. The maximum power must be reduced as frequency increases. CD horns require HF boost, so can easily be pushed much too hard at high frequencies, resulting in greatly increased distortion.
Quite obviously, any horn that has a small throat must have limited power capability, and providing amplifiers that are (much) larger than needed for "headroom" is a completely pointless exercise. It is both convenient and accurate to consider the effect as "air overload".
According to a technical note from JBL [2], the situation is actually worse than the above graph shows. A 200Hz horn at 10kHz can readily generate 48% second harmonic distortion, with as little as 2.5W (electrical) input - a mere 0.75 acoustical Watts. As noted in references 1 and 2, this information was first determined in 1954, but over time seems to have been lost. As you can see, I'm determined that this will not happen."
Source:
PA Systems
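The quoted figures are easy to sanity-check with a little arithmetic (the 25% conversion efficiency is the article's own assumed number):

import math

throat_d_mm = 25.0                           # 1" throat
area_mm2 = math.pi * (throat_d_mm / 2) ** 2  # ~491 mm^2
for w_per_mm2 in (6e-3, 10e-3):              # the quoted 6-10 mW/mm^2 limit
    p_acoustic = area_mm2 * w_per_mm2
    p_electrical = p_acoustic / 0.25         # 25% efficiency assumption
    print(f"{p_acoustic:.1f} W acoustic -> ~{p_electrical:.0f} W electrical")
# -> about 2.9-4.9 W acoustic, i.e. roughly 12-20 W electrical,
#    matching the article's 3-5 W and 20 W figures for a 25 mm throat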
"5.1.1 - Throat Power Vs. Size Vs. Frequency
For what it's worth (and because you'll find very little on the Net), the maximum acoustic power into the throat depends on several factors. First is the relationship of actual frequency to the horn's cutoff frequency. As the ratio of f/fc (frequency divided by cutoff frequency) increases, so does distortion for a given acoustic power per unit of throat area. A sensible upper limit for throat acoustic power is around 6-10mW/mm², meaning that a 25mm (1") throat should not be subjected to more than 3-5W. A 50mm throat can take 4 times that power, or 12-20W acoustic (see graph [1]). The amount of acoustic power that can be accommodated decreases as frequency increases. For horns intended for operation from (say) 800Hz and above, the normal rolloff of amplitude with frequency (as found in most music) means that the acoustic power also falls with increasing frequency.
If the conversion efficiency of a compression driver is (say) 25%, this means there is absolutely no point supplying more than 20W (electrical) to a compression driver on a 25mm throat, or 80W for a 50mm throat, allowing for a sensible distortion of 2%. Past a certain limit (which varies with frequency vs. horn cutoff), supplying more power creates no increase in SPL, but simply creates more and more distortion. The maximum power must be reduced as frequency increases. CD horns require HF boost, so can easily be pushed much too hard at high frequencies, resulting in greatly increased distortion.
Quite obviously, any horn that has a small throat must have limited power capability, and providing amplifiers that are (much) larger than needed for "headroom" is a completely pointless exercise. It is both convenient and accurate to consider the effect as "air overload".
According to a technical note from JBL [2], the situation is actually worse than the above graph shows. A 200Hz horn at 10kHz can readily generate 48% second harmonic distortion, with as little as 2.5W (electrical) input - a mere 0.75 acoustical Watts. As noted in references 1 and 2, this information was first determined in 1954, but over time seems to have been lost. As you can see, I'm determined that this will not happen."
Source:
PA Systems
You do not need headroom with compression drivers
The distortion caused by compression in the driver/horn might still be softer/more pleasant than the amp hard clipping.
A 200Hz horn at 10kHz can readily generate 48% second harmonic distortion, with as little as 2.5W (electrical) input
But would you feed 2.5W of 10 kHz signal to a 200 Hz horn? Even if you have a full-range horn, you might need power and headroom in the lower frequency range, despite the fact that your music doesn't contain much energy at 10 kHz.
Page 4, fig. 9:
http://www.jblpro.com/BackOffice/ProductAttachments/tn_v1n08.pdf
what do you mean that drivers have gotten worse?
Comparing the overall quality of old JBL, Altec, Klipsch, Tannoy etc. with all those cheap modern devices, even from the same manufacturers... I don't see any real major improvement.
Well, I don't think we are on topic with this subject here.
http://www.hypex.nl/component/weblinks/weblink/24-datasheets/28-nc400-datasheet.html
See para 9.2, THD+N vs power...
If you use a 113 dB driver, the Ncore will always operate at the left side of the THD+N vs power diagram. The Ncore has the lowest THD in the range from 10-50 W. So it is better to do some power soaking after the amp!!!!!😛
THD+Noise will be lowest in the 10-50 W range, but below 10 W there is just noise, and really no distortion. In reality, THD will be lower at lower power outputs.
Exactly. And not only that - it is also somewhat funny to use an extremely efficient amp, and then burn off a fair bit of power in a resistor - when the result can easily be accomplished by reducing the gain.
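A small sketch of why the curve behaves that way (the 30 µV noise figure is an assumption, not an NC400 spec):

import math

noise_v = 30e-6                          # assumed constant output noise voltage
load_ohms = 4.0
for power_w in (0.1, 1.0, 10.0, 50.0):
    signal_v = math.sqrt(power_w * load_ohms)
    thdn_pct = 100 * noise_v / signal_v  # distortion term taken as ~0
    print(f"{power_w:>5.1f} W: THD+N ~ {thdn_pct:.5f} %")
# The fixed noise floor dominates the ratio, so THD+N falls as power rises,
# until real distortion takes over near clipping.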
Can we please put an end to this useless debate by agreeing to the fact that lowering the gain makes sense for some users, while others might choose to go for other solutions?
According to Bruno it does not do anything to the sound quality:
"The amp certainly isn't going to get worse if you decrease gain (other than SNR it's not going to get better either). "
"The amp certainly isn't going to get worse if you decrease gain (other than SNR it's not going to get better either). "
According to Bruno it does not do anything with the soundquality:
"The amp certainly isn't going to get worse if you decrease gain (other than SNR it's not going to get better either). "
Just to pick nits, I think it is not true to say "it does not do anything to the sound quality", as SNR will improve and nothing else will get worse.
Better start a debate with Bruno then; those are his words, and he designed this amp.
I agree with your quote from Bruno, but not with your conclusion/summary.
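For what it's worth, Bruno's "other than SNR" caveat follows directly if the output noise is dominated by input-stage noise scaled by the gain (an assumption about where the noise comes from, not a measured fact about the Ncore):

$$\mathrm{SNR_{max}} \approx \frac{V_{\mathrm{out,max}}}{A \cdot e_{n,\mathrm{in}}}$$

For a fixed maximum output, halving the gain $A$ buys about 6 dB of SNR, provided the source can deliver the extra input level.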
Your number is a little exaggerated 🙂
However sad it might be, it's true nonetheless: direct-radiator loudspeakers have an efficiency of around 1%.
It puts a few things in perspective.
And that is sadly primarily the case for PA drivers.
So-called high-end low-distortion hi-fi drivers are typically an order of magnitude or two less efficient...
No DSP or other fancy-schmancy additives can unfortunately make up for that bottleneck.
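To put a number on that, a standard half-space approximation ties 1 W / 1 m sensitivity to efficiency (the two efficiency values below are just the ballpark figures from this exchange):

import math

# sensitivity (dB SPL, 1 W / 1 m, half space) ~ 112 + 10*log10(efficiency)
for eff in (0.01, 0.001):            # 1% (PA) and 0.1% (typical hi-fi)
    sens = 112 + 10 * math.log10(eff)
    print(f"{eff*100:.1f}% efficient -> ~{sens:.0f} dB SPL @ 1 W / 1 m")
# -> 1% ~ 92 dB and 0.1% ~ 82 dB, which is why "an order of magnitude or
#    two less efficient" lands squarely in typical hi-fi sensitivity territory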