What do harmonics do to sound?

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Why not? Take all the samples at 44.1 kHz you have and transform them with an FFT into a spectrum (you'll have a helluva lot of spectral lines, as many as there are samples).

Hmm, correction, now I see what you mean. If you take ALL the samples over, say, half an hour, you are indeed able to reproduce that half an hour. But I don't see why you should go through the hassle of first taking the FFT of half an hour and using that to define a huge number of sines if you can use the samples directly.
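A minimal numpy sketch of the point being made here (one second of signal stands in for the half hour; all values are illustrative, not from the posts above): the FFT is exactly invertible, so the huge set of spectral lines does encode the whole recording, but the round trip gains nothing over the raw samples.

```python
import numpy as np

# The FFT of a block of samples can be inverted to recover those samples
# exactly, so a (huge) set of sines does encode the recording -- but the
# round trip buys nothing over using the samples directly.
rng = np.random.default_rng(0)
fs = 44100                      # 44.1 kHz sample rate
x = rng.standard_normal(fs)     # one second of "audio" instead of half an hour

X = np.fft.rfft(x)              # spectrum: one line per frequency bin
x_back = np.fft.irfft(X, n=len(x))  # resynthesize by summing those sines

print(np.allclose(x, x_back))   # reconstruction is exact
```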

But this is far off from the intention of my first remark about this issue.

Cheers ;)
 
Hmm, correction, now I see what you mean. If you take ALL the samples over, say, half an hour, you are indeed able to reproduce that half an hour. But I don't see why you should go through the hassle of first taking the FFT of half an hour and using that to define a huge number of sines if you can use the samples directly.

With some additional tricks such things are used for image compression. :cool:

Regards

Charles
 
Pjotr said:


Hmm, correction, now I see what you mean. If you take ALL the samples over, say, half an hour, you are indeed able to reproduce that half an hour. But I don't see why you should go through the hassle of first taking the FFT of half an hour and using that to define a huge number of sines if you can use the samples directly.

You can also slice the signal into small windows (for example, one millisecond) and perform a Fourier transform on those. After that, rip out the least significant coefficients and compress the rest by means of some compression algorithm and voilà, you have something that resembles an MPEG audio codec :)
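The slice-transform-truncate idea can be sketched in a few lines of numpy. This is only a toy illustration, not a real MPEG coder (those use an MDCT filterbank plus a psychoacoustic model), and the window size, test signal, and threshold below are all arbitrary:

```python
import numpy as np

# Toy sketch: transform short windows, zero the smallest coefficients,
# and resynthesize. A real audio codec would also entropy-code the
# surviving coefficients and use a psychoacoustic threshold.
fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.05 * np.sin(2 * np.pi * 7000 * t)

win = 441                       # ~10 ms windows (1 ms would be 44 samples)
y = np.zeros_like(x)
for start in range(0, len(x), win):
    X = np.fft.rfft(x[start:start + win])
    keep = np.abs(X) >= 0.1 * np.abs(X).max()   # rip out small coefficients
    y[start:start + win] = np.fft.irfft(X * keep, n=win)

# Most of the energy survives even though most coefficients were dropped.
err = np.sqrt(np.mean((x - y) ** 2)) / np.sqrt(np.mean(x ** 2))
print(f"relative RMS error: {err:.3f}")
```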

But this is far off from the intention of my first remark about this issue.

Surely it is ;). However, some MPEG compression software (I don't remember the name) is claimed to use a psychoacoustic model for dropping the least significant information. I don't know whether the authors are just relying on the frequency-dependent sensitivity of human hearing. If they aren't, it suggests that there is something interesting going on...

Anyhow, I will be checking Steven's link as soon as I have some spare time.
 
It is almost impossible to answer this question. Remember, we are concerned here with sound REPRODUCTION. This implies that we don't want ANY change of the audio harmonics. However, sometimes a little 2nd harmonic can bring a compromised recording to life. Still, we cannot depend on the need for added distortion to make everything sound its best.
 
Agreed. Harmonics and distortions (intended ones) should be in the CD itself. The power amp has nothing to do but reproduce it CLEANLY.

The reason I raised the question: some amps are said to be nice (like tubes), some are said to be too sterile (ones with very good measurement figures). Mr. John Curl, what is your personal opinion on tubes (with their big second harmonic)?

I suspected two things: TIM related to harsh sound, and a certain kind of harmonic (not intentionally added, it just pops out of the circuit) that makes different amps sound different.

So the question becomes how to design an amp that sounds good, given the harmonics that will occur.

A differential pair is said to cancel even harmonics. But I see Mr. John Curl likes to use a full complementary differential, while JLH uses no differential at all, just a single transistor for the front end.

BTW, Mr. John Curl, is it really you in http://www.diyaudio.com/forums/showthread.php?threadid=28065goto=newpost ?
 
I recall reading somewhere that IM and TIM tend to correlate roughly with THD. I.e., two topologies whose THD differs by a significant degree, say an order of magnitude (0.1% vs. 0.01%), will tend to have IM and TIM measurements that stand in a similar relative relationship. I've no idea if this is something that has been verified or if it was just an opinion. Anyone familiar with this and whether there is anything to it?
 
sam9 said:
I recall reading somewhere that IM and TIM tend to correlate roughly with THD. I.e., two topologies whose THD differs by a significant degree, say an order of magnitude (0.1% vs. 0.01%), will tend to have IM and TIM measurements that stand in a similar relative relationship. I've no idea if this is something that has been verified or if it was just an opinion. Anyone familiar with this and whether there is anything to it?


No, IM will correlate with THD, but TIM will not.
Intermodulation distortion is caused by the same nonlinear behaviour of the amplifier that causes harmonic distortion. It can happen in amplifiers with and without feedback.

Transient intermodulation distortion is caused when an amplifier with feedback is driven by a high-frequency signal and is slew rate limited. This slew rate limiting results in a large differential signal around the difference amplifier that compares the input signal with the feedback signal. If that difference amplifier gets overloaded as a result, we call it TIM. So TIM can only happen in feedback amplifiers. But if you make your amplifier with an input stage that can deal with a large enough differential input signal (i.e. input signal minus feedback signal), no TIM will occur; still the amplifier can have bad THD or IMD figures. It also works the other way around: an amplifier that uses a lot of negative feedback, and heavy lag compensation because of that, can have very low THD and IMD (at low frequencies), but chances are that it will produce a lot of TIM.

Steven
 
phase_accurate said:
Sometimes 2nd harmonic distortion is even intentionally added BEFORE the music gets on a record !!!

Or during the recording process. During the '60s and '70s it was a habit to intentionally push the analog tape far into its headroom to enrich the sound; it was done mainly in Britain and is part of what was called the "British" sound. In the US, using up the tape headroom was avoided as much as possible, giving a much "cleaner" sound.

Cheers ;)
 
Steven said:
No, IM will correlate with THD, but TIM will not. Intermodulation distortion is caused by the same nonlinear behaviour of the amplifier that causes harmonic distortion. It can happen in amplifiers with and without feedback.
Transient intermodulation distortion is caused when an amplifier with feedback is driven by a high frequency signal and is slew rate limited. This slew rate limiting results in a large differential signal around the difference amplifier that compares the input signal with the feedback signal. If that difference amplifier gets overloaded because of that, the result is called TIM. So TIM can only happen in feedback amplifiers. But if you make your amplifier with an input stage that can deal with a large enough differential input signal (i.e. input signal minus feedback signal), no TIM will occur. Still the amplifier can have bad THD or IMD figures. Also the other way around: maybe an amplifier uses a lot of negative feedback and heavy lag compensation because of that, then it can have very low THD and IMD (at low frequencies), but chances are that it will produce a lot of TIM.

I read somewhere that TIM is the problem in power amp reproduction, while harmonics are sometimes desired.

Steven, to reach low TIM, is there any particular front stage configuration? What about a simple differential+CCS?

Maybe this is why there is an infinite number of different power amplifier brands and models. In the published measurement figures, all of them have to perform well. (But we cannot see the harmonic properties and TIM behavior.)

But why do people like one amp over another, when the figures are not so different? What makes the sound "better" to the consumer's ear, in spite of the average measurement figures? Some amps sell better than others. Are perception and brand more important to the consumer than the reproduction itself?
 
Thanks, Steven, for the TIM vs. IM explanation. If I understand the issue correctly, a sufficient slew rate should avoid TIM.

But now a further question: take a given maximum required peak-to-peak swing and a slew rate which is adequate to swing that up to 20 kHz. If that is not fast enough for a 30 Hz signal, will that create TIM artifacts in the sub-20 kHz band? I suspect I know the answer to this but would like to be sure I'm thinking right.
 
take a given maximum required peak-to-peak swing and a slew rate which is adequate to swing that up to 20 kHz. If that is not fast enough for a 30 Hz signal, will that create TIM artifacts in the sub-20 kHz band?

If you really mean 30 Hz then it will for sure. If you mean 30 kHz then it won't.
One simple thing to reduce TIM is an input lowpass filter.

Regards

Charles
 
Actually, analog tape saturation adds predominantly third order harmonic distortion, although with smaller signals superimposed on larger ones, it may be, in some cases, effectively even order with the polarity changing depending on the phase of the larger signal.

I also wonder if the phase of the harmonic distortion relative to the fundamental affects the perception of its nature. For instance, air nonlinearity in compression driver horn throats results in primarily second-order harmonic distortion, but I've never heard anybody say anything good about its effect on the reproduced sound. In fact, it's regarded as a significant limitation in HF horn SQ at higher SPLs. However, I have seen evidence on an oscilloscope that at least part of this distortion leads the fundamental by 90 degrees, tending to turn a sine wave into more of a sawtooth, as compared to the straight, in-phase amplitude-compression second harmonic distortion that an amplifier may produce near clipping, which simply 'squashes' one of the peaks.

The mechanism I heard proposed for the horn distortion is that the compression half of the waveform causes minor localized heating of the air, which slightly accelerates the speed of sound in that microenvironment. Accumulated over several wavelengths and at higher SPLs, the effect is to push the compression peak ahead in time and the rarefaction peak back in time, thus the tendency toward a sawtooth. I don't know how prevalent this mechanism might be in horn throats vs. simple compression air nonlinearity, which would seem to cause an effect more like that of an SE power amp near clipping.

Perhaps the more negative opinion of horn 2HD may be partly due to the use of CD horns in pro sound. These tend to maintain a more constant radiation pattern with rising frequency, require more EQ to sound flat on axis compared to a standard flare horn, and thus may be prone to something approximating a 6 dB/octave increase in 2HD at constant SPL once equalized (since the acoustic wavelengths become shorter, there are more cycles within the critical area for the distortion mechanism to operate on at higher frequencies).

Also, the above makes me wonder idly whether the 2HD produced by an SE amp would give a different effect depending on whether it occurred more on the compression or the rarefaction half of the sound ultimately produced in the room by the speaker.
 
lumanauw said:

Steven, to reach low TIM, is there any particular front stage configuration? What about a simple differential+CCS?

Actually, the demand on the input stage is quite simple if TIM is to be avoided, since the input stage is also the stage where the global feedback signal is applied. As long as the input stage will not clip for an input signal that is twice as high as the maximum peak-to-peak input signal for full output, no TIM will occur. This is simply because no slew rate limiting will occur, even in this absolutely worst-case scenario. I think it was Leach who derived this in some AES paper. The Miller-C of the VAS will then not be charged infinitely fast, but neither with a fixed maximum current. The result is a normal high-frequency roll-off, instead of sine waves turning into triangles.
Having an input stage that can handle such a large differential input signal (a couple of volts) requires a lot of emitter or source degeneration. This is not good for noise, but just acceptable in power amplifiers. An advantage of this degeneration is that the loop gain is reduced and the lag compensation can be smaller, so the maximum slew rate is increased, which in itself lowers the risk of TIM. This makes input stage degeneration a double-edged sword.
The worst-case differential voltage of the input stage (the error voltage) will be a lot smaller if the input signal itself is already high-frequency limited, because then the output stage has fewer problems following the input, the error voltage will be smaller, and the demands put on the voltage capability of the input stage will be lower.
Another way to avoid or minimize TIM is to have most of the voltage gain after the dominant pole of the feedback amplifier, but that is difficult in practice, at least for power amplifiers.
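The Miller-cap argument above can be checked with rough numbers. In a conventional Miller-compensated front end, the input stage can deliver at most its tail current into the compensation capacitor, so the slew limit is roughly I_tail / C_c. The component values below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

# Rough check of the slew-rate limit of a conventional Miller-compensated
# front end: SR_max ~= I_tail / C_c. All component values are hypothetical.
i_tail = 2e-3        # 2 mA input-stage tail current
c_miller = 100e-12   # 100 pF compensation capacitor
sr_max = i_tail / c_miller          # V/s

v_peak = 40.0                       # assumed full-output peak voltage
f_max = sr_max / (2 * math.pi * v_peak)   # highest full-swing sine frequency
print(f"slew limit: {sr_max / 1e6:.0f} V/us")
print(f"full power bandwidth: {f_max / 1e3:.1f} kHz")
```

Any full-level signal above that frequency would demand more slew than the stage can deliver, which is exactly the overload condition Steven describes.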

sam9 said:
Thanks, Steven, for the TIM vs. IM explanation. If I understand the issue correctly, sufficient slew rate should avoid TIM.

But now a further question: take a given maximum required peak-to-peak swing and a slew rate which is adequate to swing that up to 20 kHz. If that is not fast enough for a 30 Hz signal, will that create TIM artifacts in the sub-20 kHz band? I suspect I know the answer to this but would like to be sure I'm thinking right.

Avoiding TIM in a feedback amplifier is a combination of large enough closed loop bandwidth, sufficient error voltage handling of the input stage and low pass filtering at the input.

If an amplifier is TIM-free for a full-output 20 kHz sine, this is no guarantee that it will not create TIM for a 30 kHz full-output sine wave. Therefore, low-pass filtering at the input is good practice.

But to avoid linear phase distortion, the corner frequency of the LPF should be above 100 kHz (for a first-order filter) to keep the phase shift below 10 degrees in the band up to 20 kHz; more than that is said to be audible. This already requires a pretty fast amplifier and an input stage with some degeneration.
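As a quick sanity check of the 100 kHz figure: a first-order low-pass shifts phase by arctan(f/fc), so with fc = 100 kHz the lag at 20 kHz is already slightly over 10 degrees, and exactly 10 degrees requires a corner near 113 kHz. A small sketch:

```python
import math

# Phase lag of a first-order low-pass: phi = atan(f / fc).
def phase_deg(f, fc):
    """Phase lag in degrees at frequency f for a first-order LPF with corner fc."""
    return math.degrees(math.atan(f / fc))

print(phase_deg(20e3, 100e3))                 # lag at 20 kHz for fc = 100 kHz
print(20e3 / math.tan(math.radians(10)))      # corner for exactly 10 deg at 20 kHz
```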

Steven
 
It seems to me you should be able to neutralize most of the audible-band phase and amplitude deviation of a first-order-pole bandwidth-limited amplifier, up to around half that corner frequency, by adding a 2nd-order LP filter at the input of the amplifier, with a Q of about 1 and a cutoff (from mental guesstimation) at or a little above the closed-loop -3 dB point. The slight rise in both amplitude response and group delay of such a filter, from DC up to about half its corner frequency, should largely complement that of the feedback pole within the audible range. The result should be an amplifier that sounds similar to one with several times its closed-loop bandwidth, with the additional advantage of a rapid rolloff beyond the audible band.
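This guesstimate can be checked numerically: cascading a first-order pole at f0 with a 2nd-order low-pass of Q = 1 at the same corner gives a combined magnitude of 1/sqrt(1 + (f/f0)^6), i.e. a third-order Butterworth shape, which is far flatter below f0/2 than the pole alone. A sketch with an assumed 200 kHz closed-loop corner (the value is arbitrary):

```python
import numpy as np

# Compare flatness below f0/2: first-order closed-loop pole alone vs.
# the same pole cascaded with a 2nd-order LPF of Q = 1 at the same corner.
f0 = 200e3                          # assumed closed-loop -3 dB point
f = np.linspace(1e3, f0 / 2, 200)   # band up to f0/2
x = f / f0

h1 = 1 / (1 + 1j * x)               # first-order feedback pole
h2 = 1 / (1 - x**2 + 1j * x / 1.0)  # 2nd-order low-pass, Q = 1, fc = f0

dev_alone = np.max(np.abs(20 * np.log10(np.abs(h1))))
dev_casc = np.max(np.abs(20 * np.log10(np.abs(h1 * h2))))
print(f"max deviation, pole alone:   {dev_alone:.2f} dB")
print(f"max deviation, with Q=1 LPF: {dev_casc:.2f} dB")
```

(The Q = 1, fc = f0 combination is exactly the real pole plus complex pair of a 3rd-order Butterworth, which is why the cascade works out so flat.)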


Having an input stage that can handle such a large differential input signal (a couple of Volts) requires a lot of emitter or source degeneration.

Or a differential long tailed 12ax7 type input stage like I plan to use in my current 6 channel hybrid amp project, heheh.
 
Could well be. In most cases just a passive filter is used at the input, and if we only use R's and C's (no L's) we get a low Q (0.5 or lower). Low-Q filters start shifting phase quite early. My example was for a first-order passive low-pass. Higher-Q filters have a steeper phase change in a narrower frequency band around the corner frequency, but they need to be active or RLC.

Steven
 