If a multi-second clip is analysed, the harmonics in a Fourier spectrum would be fractions of 1Hz. So we're not really talking about musical 'harmonics' per se, just a very closely spaced series.
I think a part of the confusion is that there are 2 parts to it:
1, being able to determine that distortion has occurred at all.
2, locating it and somehow figuring out that it was not present in the original recording.
The latter seems like some "fuzzy logic" / magical thinking, from mixing and matching various sources of possible interference.

So 1% distortion gives, simplified, 1% of the fundamental as added volume to the harmonics. No wonder it is difficult to hear in real music.
In the equations above I used x. This equation shows what happens if a sine is squared: sin²(x) = 1/2 − 1/2·cos(2x). You get a cosine at double the frequency and a DC component. That cosine can also be seen as a shifted sine (x + 90 degrees).
The mathematically inclined will of course think this is simple, but I find it intriguing and it gives me a clearer view of what HD is.
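The identity above is easy to check numerically. A minimal numpy sketch (sample rate and frequency are arbitrary choices for the demo): square a unit sine, take the FFT, and the spectrum shows exactly a DC term of 1/2 and a component of 1/2 at twice the frequency, with nothing left at the fundamental.

```python
import numpy as np

fs = 8000          # sample rate in Hz (arbitrary for the demo)
f0 = 100           # fundamental in Hz
n = fs             # one second of signal -> 1 Hz FFT bin spacing
t = np.arange(n) / fs

x = np.sin(2 * np.pi * f0 * t)
y = x * x          # sin^2(x) = 1/2 - 1/2*cos(2x)

spec = np.abs(np.fft.rfft(y)) / n * 2   # one-sided amplitude spectrum
spec[0] /= 2                            # the DC bin must not be doubled

print(round(spec[0], 3))       # DC component: 0.5
print(round(spec[2 * f0], 3))  # component at 2*f0: 0.5
print(round(spec[f0], 3))      # nothing at f0 itself: 0.0
```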
If a multi-second clip is analysed, the harmonics in a Fourier spectrum would be fractions of 1 Hz. So we're not really talking about musical 'harmonics' per se, just a very closely spaced series.

Of course. But the discussion is HD, and that is instantaneous. The ear doesn't integrate over seconds when hearing HD or normal harmonics; it's at least as instantaneous as the note itself.
Also, phase of each harmonic has some effect on the resulting time domain waveform. In some cases the phase relationships can be audible, which is more or less in keeping with the idea of a threshold of audibility for group delay.

I agree, if we are looking at signals with more than 1% distortion; then phase shift and other artifacts occur. But at lower levels, that is, when the distortion has less amplitude than 1/100 of the original signal, the distortion component is too weak to affect the phase of other signals (that can be heard). Remember masking, and that the distortion signal is 40 dB under the fundamental. (MP3 works, in my opinion.) Another example is a crossover for a speaker: a driver 40 dB down no longer affects the acoustic phase or amplitude of the speaker in any meaningful way.
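The "1% is 40 dB under the fundamental" figure is just 20·log10 of the amplitude ratio. A one-liner maps the distortion percentages that keep coming up in this thread to relative levels in dB:

```python
import math

# Distortion given as a percentage of the fundamental, converted to dB below it
levels = {pct: 20 * math.log10(pct / 100) for pct in (1.0, 0.1, 0.01, 0.001)}
for pct, db in levels.items():
    print(f"{pct}% -> {db:.0f} dB")   # 1% -> -40 dB ... 0.001% -> -100 dB
```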
That is, when the distortion has less amplitude than 1/100 of the original signal, the distortion component is too weak to affect the phase of other signals (that can be heard).

Not sure where you get that from? I mean, some people can hear if a CD was produced without dither. In that case the distortion is around −93 dBFS. The point is that in some cases very low levels of distortion can be audible to at least some fraction of the population, and phase would seem likely to be a factor in that audibility. The time-domain waveform has a sharply defined staircase waveshape.
Jan,

The one with the narrow OL bandwidth will have much more loop gain at lower frequencies, thus less distortion at lower frequencies than the wide-bandwidth thing. That's good.
Nice essay! The point of input stage distortion and its dependence on the loop gain is well taken. I need to sign up for the newsletter!
Two observations though:
1. Less distortion at lower frequencies has the sonic signature of unnaturally tight and impossibly controlled bass (often attributed to a massive power supply) coupled with "glassy" mid-highs. So technically, yes, lower LF distortion is better, but sonically, it adds character. Some love it, some don't.
2. The high open-loop gain that rolls off at a low frequency and moderate loop gain that stays flat up to 20 kHz are not the only two choices. In particular, you can have high open-loop gain up to 20 kHz and then quickly roll it off. Hypex NCore is one example, with 50 dB+ of loop gain across the audio band. Omicron is another, with 100 dB+.
Less distortion at lower frequencies has the sonic signature of unnaturally tight and impossibly controlled bass (often attributed to a massive power supply) coupled with "glassy" mid-highs. So technically, yes, lower LF distortion is better, but sonically, it adds character. Some love it, some don't.

There are all kinds of people who love all kinds of things.
And that's fine, to each their own; we can't all be married to the same wife.
But I don't think it adds character. The lower the nonlinearity, the more faithful the reproduction.
The character is added by the equipment that adds or subtracts stuff: the amp with less linearity, i.e. more distortion.
Jan
1, being able to determine that distortion has occurred at all.
2, Locating it and somehow figuring out that it was not present in the original recording.
The latter seems like some "fuzzy logic" / magical thinking, from mixing and matching various sources of possible interference.

That's a good point, because all musical instruments generate tones with massive harmonics, except that musicians call them overtones.
In fact, it's the overtones/harmonics that determine the type of instrument we hear. The same basic tone on a piano and say a contrabass can only be discerned because of the difference in their harmonic spectrum. Take the harmonics away and they will sound the same!
But if an amplifier generates 2nd-harmonic distortion, it does that not only to the fundamental of the instrument but to all tones in the instrument's spectrum.
So it really is 'distorting' the instrument's music.
Jan
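Jan's point is easy to see numerically. A hedged sketch (a toy square-law nonlinearity standing in for a real amplifier, amplitudes and frequencies chosen just for the demo): feed a two-partial "instrument" tone through y = x + a·x², and new components appear above every partial, plus sum and difference products between them, not just above the fundamental.

```python
import numpy as np

fs, n = 8000, 8000            # one second at 8 kHz -> 1 Hz FFT bins
t = np.arange(n) / fs

# A toy "instrument" tone: fundamental at 100 Hz plus one overtone at 300 Hz
x = 1.0 * np.cos(2 * np.pi * 100 * t) + 0.5 * np.cos(2 * np.pi * 300 * t)

a = 0.02                      # small second-order nonlinearity (illustrative)
y = x + a * x * x

spec = np.abs(np.fft.rfft(y)) / n * 2
peaks = [f for f in range(1, 1000) if spec[f] > 1e-4]
print(peaks)   # the square term adds components at 200, 400 and 600 Hz
```

The 400 Hz component is the sum product of the 100 Hz and 300 Hz partials: the same nonlinearity distorts every tone in the spectrum and also intermodulates them.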
The same basic tone on a piano and say a contrabass can only be discerned because of the difference in their harmonic spectrum.

Nonsense. That oversimplified model would be like the earliest and crudest attempts at synthesizing instrument sounds. Didn't fool many people. What about ADSR envelopes, along with many other things?
Thank you very much for posting that table! It clearly shows that there is HD without IMD, again, thank you for confirming that!

Of course. IMD needs multiple frequencies to, ehh, intermodulate with each other.
Intermodulation is nothing more than generating sum and difference frequencies.
The usual HF IMD test uses 18 and 19kHz tones and looks for the 1kHz difference frequency. Because the difference is 1kHz it's easy to measure.
The sum freq of 37k is much more difficult to measure with simple/low cost equipment.
A sound card with a 48 kHz sample rate can measure up to 22 kHz on a sunny day before the anti-alias filter kicks in.
Edit: ... and of course the IMD is generated by the exact same nonlinearity as the HD.
The amp doesn't suddenly change its behaviour because you look at it differently. Unlike people. 😎
Jan
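The twin-tone test described above can be sketched in a few lines of numpy. This is a toy model, not a measurement: a weak square-law nonlinearity applied to 18 kHz + 19 kHz tones, with a high sample rate chosen so the 37 kHz sum product is also visible in the FFT. The same x² term produces the 1 kHz difference tone, the 37 kHz sum tone, and the second harmonics, confirming that HD and IMD come from one and the same nonlinearity.

```python
import numpy as np

fs, n = 192_000, 192_000          # high rate so the 37 kHz sum tone fits
t = np.arange(n) / fs

# CCIF-style twin-tone stimulus: 18 kHz + 19 kHz
x = np.sin(2 * np.pi * 18_000 * t) + np.sin(2 * np.pi * 19_000 * t)

a = 0.01                          # weak second-order nonlinearity (illustrative)
y = x + a * x * x

spec = np.abs(np.fft.rfft(y)) / n * 2
print(round(spec[1_000], 4))      # difference tone, 19k - 18k = 1 kHz: 0.01
print(round(spec[37_000], 4))     # sum tone, 18k + 19k = 37 kHz: 0.01
print(round(spec[36_000], 4))     # 2nd harmonic of 18 kHz: 0.005
```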
1, being able to determine that distortion has occurred at all.
2, Locating it and somehow figuring out that it was not present in the original recording.

This is particularly interesting, as "distortion" (is it?) is an integral part of many modern music sounds.
If a solo electric guitar starts playing, it's nearly impossible to determine whether it was distorted prior to recording or whether the playback system distorts.
But as soon as other instruments join in, it becomes obvious, depending on the IMD products.
Edit: just saw that Jan was quicker ...
This thread is going in circles. Everyone needs to be correct in their own analysis. Maybe we should just say everyone is right and start a new ambiguous thread, like "what is beyond our universe?"
Not sure where you get that from? I mean, some people can hear if a CD was produced without dither. In that case the distortion is around −93 dBFS. The point is that in some cases very low levels of distortion can be audible to at least some fraction of the population, and phase would seem likely to be a factor in that audibility. The time-domain waveform has a sharply defined staircase waveshape.

I think we have a little different approach to maths and the complex multiplying or summing of signals.
I state that 1% makes very little difference. You state that 0.001% can make a difference that can be heard by some.
As we talk of HD, there must be a signal present, and the harmonic noise is either masked by the ear or has too small a value to make any phase change to the original signal.
So I still stand firm on the 1%, maybe down to 0.1%.
I have worked with an MP2 codec, so that is why I believe masking exists.
I also believe there are badly implemented ditherings around that give artifacts that can be heard.
But if an amplifier generates 2nd harmonic distortion, it does that not only to the fundamental of the instrument but to all tones in the instrument's spectrum.

I agree it changes the tones, but to what degree? It's not like the 16th harmonic gets huge because it receives the distortion from the 2nd, 4th and 8th harmonics. And is it recursive, so it generates a 2nd harmonic that again generates a 4th harmonic, which again generates an 8th harmonic, and so on?
The case I can think of is heavy clipping that makes a kind of square wave. Those high harmonics can fry the tweeter.
An example is this piano note
a1 and a2 are about the same. I agree that the harmonic noise doubles at 1% distortion in the amp, but will that harmonic noise go far above, say, 2%?
And how many like-valued amps are needed to get to 10% harmonic noise from 1% distortion in an amp?
I guess the question is when the HD changes the spectrum more than the natural change between two notes at the same volume.
That said, there is of course no need to discuss this if any form of compression is applied; then the spectral signature is changed anyway.
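The cascading question can be simulated. In this hedged sketch (a toy square-law stage, chosen so a single stage gives about 1% 2nd-harmonic distortion on a unit sine; no real amplifier is modelled), identical stages are chained and HD2 is measured from the FFT. The 2nd harmonic grows roughly linearly, so it takes on the order of ten such 1% stages to approach 10%, and the recursion into 4th and 8th harmonics stays tiny because those products scale with higher powers of the small distortion coefficient.

```python
import numpy as np

fs, n = 8000, 8000
t = np.arange(n) / fs
a = 0.02              # per-stage square-law term; ~1 % HD2 on a unit sine

def stage(x):
    """One toy amplifier stage with a weak square-law nonlinearity."""
    return x + a * x * x

def hd2(n_stages):
    """2nd-harmonic level relative to the fundamental after n cascaded stages."""
    x = np.sin(2 * np.pi * 100 * t)
    for _ in range(n_stages):
        x = stage(x)
    spec = np.abs(np.fft.rfft(x)) / n * 2
    return spec[200] / spec[100]

for k in (1, 2, 5, 10):
    print(k, "stages:", round(100 * hd2(k), 2), "%")  # grows roughly as k * 1 %
```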
When low HD is low enough?