A friend of mine who went to (tertiary) music school once presented some tracks they made, and they applied compression liberally, just because it was available as one of the many mastering tools. Presumably, it was originally used to boost AM reception, but the original reasoning was long forgotten and replaced with B...S... mythology about selling more records.
I find this the most offensive distortion in the whole recording/playback chain. You don't need to have a particularly good hifi system to easily distinguish between a good recording and a poor one.
Bob Cordell has long advocated that if you want to measure HD, measure it at 20 kHz instead of at 1 kHz.
Amplifiers that measure the same at 1kHz but sound different often measure (very) differently at 20kHz.
Jan
What can I say, other than I believe you "hit the nail on the head" with "Presumably, it was originally used to boost AM reception, but the original reasoning was long forgotten and replaced with B...S... mythology about selling more records."
Last year, I sort of systematically went through my FLAC collection of about 2000 albums and listened to high-crest-factor tracks (i.e., using the DR Database ratings as calculated by the foobar2000 plugin), listening for the point at which it becomes clear that I'm hearing a "recording" instead of something more like "live music". I found that once the DR ratings dropped below a crest factor of ~17 dB--and I found over 670 such tracks in my collection that I felt were representative of non-noise tracks--I began to be constantly aware that I was listening to a recording instead of something like "the real thing". I know that many genres of music do not inherently have crest factors of 17 dB (that is, before compression is applied in mastering or on microphone outputs), but the tracks that I consistently could identify as "near-live sounding" mostly seemed to have this 17 dB crest factor rating or higher. At lower crest factors, not nearly as much. (YMMV.)
For reference, the DR ratings of music from the AM radio days typically correspond to crest factors of 10-12 dB.
Chris
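For anyone who wants to poke at their own files, the number being discussed is essentially a peak-to-RMS ratio expressed in dB. Below is a minimal Python sketch of that basic crest-factor computation; it is not the actual foobar2000 DR-meter algorithm (which uses a more elaborate windowed recipe), and the test signals are synthetic stand-ins rather than real tracks.

```python
import numpy as np

def crest_factor_db(samples: np.ndarray) -> float:
    """Plain peak-to-RMS ratio in dB - the basic quantity behind DR-style ratings
    (not the full DR-meter recipe)."""
    peak = np.max(np.abs(samples))
    rms = np.sqrt(np.mean(np.asarray(samples, dtype=np.float64) ** 2))
    return 20.0 * np.log10(peak / rms)

# Synthetic sanity checks: a pure sine sits at ~3 dB, and hard limiting/clipping
# drives the figure toward 0 dB - the direction heavy mastering pushes real music.
t = np.linspace(0.0, 1.0, 48_000, endpoint=False)
sine = np.sin(2 * np.pi * 440.0 * t)
limited = np.clip(3.0 * sine, -1.0, 1.0)   # crude stand-in for brickwall limiting

print(f"sine:           {crest_factor_db(sine):4.1f} dB")
print(f"clipped 'loud': {crest_factor_db(limited):4.1f} dB")
```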
A 20 kHz fundamental has no audible harmonics. Most people can't even hear the fundamental itself.
True, unless perhaps some very nonlinear process produces subharmonics. However, isn't the point that measuring HD at 20kHz may give some useful insight into circuit behavior as loop gain falls off? Maybe something about that will be audible to someone, even if it isn't exactly POHD (Plain Old Harmonic Distortion)? Maybe ultrasonics will affect some real-world downstream gear?
True, but music playback involves countless fundamentals and harmonics, and harmonics and fundamentals above 20 kHz can very much become audible as IMD during playback of music.
Nelson Pass's paper on distortion and feedback has some nice examples of what happens to distortion when several sine-wave tones are played at the same time. https://www.passdiy.com/project/articles/audio-distortion-and-feedback
So keeping harmonics low for several octaves above our hearing limit is essential for low IMD. In my opinion, anyway.
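To make that concrete, here is a minimal numerical sketch (my own illustration, not taken from the Pass article): two tones that are both above 20 kHz are pushed through a hypothetical weak second/third-order nonlinearity, and the resulting distortion products land squarely in the audible band.

```python
import numpy as np

fs, dur = 192_000, 0.1                  # 192 kHz sample rate, 0.1 s -> 10 Hz FFT bins
t = np.arange(int(fs * dur)) / fs
f1, f2 = 21_000.0, 27_000.0             # both source tones are above the hearing limit
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Hypothetical weak 2nd/3rd-order nonlinearity standing in for an amplifier stage
y = x + 0.01 * x**2 + 0.01 * x**3

spec = np.abs(np.fft.rfft(y)) / len(y)

def bin_of(f):
    return int(round(f * dur))          # bin spacing is 1/dur = 10 Hz

ref = spec[bin_of(f1)]                  # one of the (inaudible) source tones
for f in (f2 - f1, 2 * f1 - f2):        # 6 kHz (2nd order) and 15 kHz (3rd order) IMD
    print(f"{f:7.0f} Hz product: {20 * np.log10(spec[bin_of(f)] / ref):6.1f} dB re. the 21 kHz tone")
```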
Gerald Stanley & David McLaughlin of Crown emphasized the importance of IMD over THD way back in the 70's.
I fully agree with the statement, and please let me share some thoughts on this.
IMO we have inherited that strong desire for 0.000...% THD from the era of magnetic recording technologies - tapes and phono.
In those days it was common practice to measure THD at 1 kHz and IMD with two tones: one around 60 Hz and the other a few kHz.
Despite similarly good figures, some devices sounded good while others did not. Why?
Magnetic technologies have an inherent property: the pickup device (magnetic head or MM/MC cartridge) produces a higher output voltage at high frequencies.
Thus the input circuits of head/cartridge preamps have to be very sensitive in the low band while also coping with relatively high-level signals in the HF region - the RIAA curve is self-explanatory. If this dilemma is not addressed properly by the engineers, the preamp may have respectable figures for THD and IMD (when measured as described above) but still a "dirty", unpleasant sound.
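For a feel of the numbers: from the standard RIAA time constants (3180 µs, 318 µs, 75 µs) you can compute how much the playback EQ must cut, and hence how much hotter the high frequencies arrive at the preamp input relative to 1 kHz - roughly +20 dB at 20 kHz. A minimal sketch (just the idealized curve, ignoring cartridge loading and the later IEC amendment):

```python
import numpy as np

T1, T2, T3 = 3180e-6, 318e-6, 75e-6      # standard RIAA time constants, seconds

def riaa_playback_db(f):
    """Idealized RIAA playback (de-emphasis) magnitude in dB, un-normalized."""
    w = 2 * np.pi * f
    num = 1 + (w * T2) ** 2
    den = (1 + (w * T1) ** 2) * (1 + (w * T3) ** 2)
    return 10 * np.log10(num / den)

ref = riaa_playback_db(1000.0)           # normalize to 0 dB at 1 kHz
for f in (20, 100, 1000, 10_000, 20_000):
    eq = riaa_playback_db(f) - ref       # what the playback EQ applies
    print(f"{f:6d} Hz: playback EQ {eq:+6.1f} dB, so the input signal sits at {-eq:+6.1f} dB re. 1 kHz")
```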
When the phenomenon was later understood, dual-tone HF IMD measurements were introduced (e.g. 19+20 kHz); they effectively reveal this flaw, giving a huge IMD figure when the input stage cannot cope with HF bursts.
But what did we have before dual-tone IMD measurement became widespread? Devices whose engineers invested in input-stage linearity might exhibit only moderately better THD (say 0.03% vs. 0.1%) but dramatically better sound, because the real IMD (visible only with a dual-tone measurement) was lower by orders of magnitude!
This still holds true for power amps - old power transistors were slow and amplifiers were heavily compensated to ensure stability with NFB, so feedback was shallow in the HF region. Again, we could have good THD @ 1 kHz and lots of "hidden" IMD not revealed by the old measurement methods. But when the engineer invests in bandwidth and linearity, you can get much better sound with only a modest improvement in THD.
Well, the world has changed. Now we have DACs with vanishingly low distortion and power amps hitting 100 kHz power bandwidth. For me, there is no reason to stick with THD measurement @ 1 kHz anymore; a dual-tone IMD test gives a comprehensive estimate of linearity.
Also, I don't think I would try to "squeeze" better than 0.01% IMD (100 ppm) out of any circuit; BTW, my hearing is declining with age )))
https://www.gammaelectronics.xyz/audio_02-1972_imd.html
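In the same spirit, here is a minimal sketch of what a 19+20 kHz dual-tone test reads out (my own toy model, not the method from the linked Crown article): a hypothetical stage with weak 2nd/3rd-order nonlinearity is fed the twin tones, and the 1 kHz and 18 kHz products are reported as percentages of one of the tones.

```python
import numpy as np

fs, dur = 192_000, 0.5                      # 2 Hz FFT bin spacing
t = np.arange(int(fs * dur)) / fs
f1, f2 = 19_000.0, 20_000.0                 # classic twin-tone pair, 1 kHz apart
x = 0.5 * (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t))

def bin_of(f):
    return int(round(f * dur))

def stage(v, a2, a3):
    """Hypothetical transfer curve v + a2*v^2 + a3*v^3 standing in for a real circuit."""
    return v + a2 * v * v + a3 * v ** 3

for a2, a3 in ((0.002, 0.002), (0.02, 0.02)):
    spec = np.abs(np.fft.rfft(stage(x, a2, a3))) / len(t)
    ref = spec[bin_of(f2)]                      # reference: one of the twin tones
    d2 = 100 * spec[bin_of(f2 - f1)] / ref      # 2nd-order product at 1 kHz
    d3 = 100 * spec[bin_of(2 * f1 - f2)] / ref  # 3rd-order product at 18 kHz
    print(f"a2=a3={a2}: 1 kHz product {d2:.3f}%, 18 kHz product {d3:.3f}%")
```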
It's a measurement, not a listening session.
Different things.
The point is that THD at 20 kHz correlates much better with how an amp sounds than THD at 1 kHz.
Jan
"I don't believe that this is really the main point to be made, but rather the relative levels of harmonic and modulation distortion... (see first article below). Modulation distortion is much more audible."
You (and the second article) are conflating different things. The article "Modulation Distortion In Loudspeakers" discusses both amplitude modulation distortion (caused in any device due to a nonlinear transfer curve) and frequency modulation distortion. This last one ONLY happens in loudspeakers due to the speed of cone movement becoming significant in relation to the speed of sound. It never happens in electronic devices.
What gave you the impression that I was talking about electronics...?
Chris
A couple of observations:
When discussing HD, the relative amplitudes of successive harmonics are as important as the THD figure. For example, if the 2nd is below the 3rd and 5th, there is a propensity toward harshness. A monotonically decreasing harmonic series tends to be perceived as more pleasant.
As far as detectable levels go, there was a comparison between a straight wire and an op-amp with very low distortion, using matched levels and played back with foobar2000. I don't remember who posted the files. I participated in the test and was astonished when I was able to tell the difference between the files with over 90% confidence. My hearing sucks, and I was only able to do this with an O2 headphone amp and Sennheiser HD600 headphones. I could not tell the difference with my stereo system (Klipsch Heresy 1977, Onkyo TX-NR838). My understanding was that the op-amp's THD was well below -100 dB.
YMMV.
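For anyone wanting to sanity-check a result like that "over 90% confidence": the confidence figure usually quoted for an ABX run is just one minus a one-sided binomial probability of doing that well by pure guessing. A minimal sketch (the trial counts below are made up for illustration):

```python
from math import comb

def abx_confidence(correct: int, trials: int) -> float:
    """1 minus the one-sided binomial probability that pure guessing (p = 0.5)
    would score at least this many correct answers."""
    p_guess = sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials
    return 1.0 - p_guess

# Made-up example runs: 8/10 and 12/16 both clear the 90% mark; 14/16 is ~99.8%.
for correct, trials in ((8, 10), (12, 16), (14, 16)):
    print(f"{correct}/{trials} correct -> {100 * abx_confidence(correct, trials):.1f}% confidence")
```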
"Well, the whole thread was about electronics until you posted."
I think that a post that I had written got inadvertently deleted. Sorry about that. It's been a very bad week here.
The thread of rationale that I was projecting was this:
1) Harmonic distortion (HD) is not the form of distortion that is really heard subjectively; rather, it is the modulation distortion (MD) that goes along with direct-radiating drivers in loudspeakers that becomes the issue, so the discussion is really about the audibility of modulation distortion, not HD.
2) Both HD and MD are tied together in terms of their underlying mechanisms.
and here's where I diverged...
3) Loudspeaker modulation distortion typically exceeds electronic modulation distortion by orders of magnitude, so if you're worried about the subjective effects of HD/MD, circuits aren't really the place to start; the loudspeakers are.
4) There are choices you can make in loudspeaker design that virtually eliminate the audible effects of modulation distortion (but not harmonic distortion). What you have left over when using this different loudspeaker technology is harmonic distortion, not modulation distortion. When you hear these loudspeakers, the harmonic distortion is so inoffensive that few people bother to even consider the subjective effects of HD (lower-order HD, that is).
So the logic of talking only about loudspeakers and not circuits was rooted in the knowledge that the real source of subjective sound-quality problems attributed to HD is not HD at all, but modulation distortion. And if you select a certain loudspeaker technology (i.e., horn loading), then the modulation distortion basically disappears from the scene, and all you have left is HD, which, when divorced from modulation distortion, ceases to be a real issue.
Circuitous logic, admittedly, but not invalid.
Again, my apologies for the "missing link" (not explaining the logic).
Chris
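To put rough numbers on point 3): the frequency-modulation part of loudspeaker MD scales with cone velocity, since the moving cone Doppler-shifts everything else it radiates by roughly Δf/f = v/c. A back-of-envelope sketch (the excursion figures are made-up illustrations, not measurements of any particular driver):

```python
import math

C = 343.0  # speed of sound in air, m/s, at roughly room temperature

def doppler_fm_pct(excursion_peak_m: float, bass_freq_hz: float) -> float:
    """Peak frequency modulation (in %) imposed on any co-radiated higher tone by a
    cone reproducing a bass note: peak velocity v = 2*pi*f*x, and df/f ~ v/c."""
    v_peak = 2.0 * math.pi * bass_freq_hz * excursion_peak_m
    return 100.0 * v_peak / C

# Direct radiator working hard at 50 Hz vs. a horn-loaded driver doing the same job
# with an order of magnitude less excursion (illustrative numbers only).
for label, x_peak in (("direct radiator, 5 mm peak", 5e-3),
                      ("horn loaded, 0.5 mm peak", 0.5e-3)):
    print(f"{label}: ~{doppler_fm_pct(x_peak, 50.0):.2f}% peak FM of everything above it")
```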
Basically a good point about 20k vs 1k measurements, with the added note that 0.01% is likely the threshold, and actually only audible to people with very good ears.
Generally speaking, an amp that is actually stable and actually achieves 0.001% distortion at 1k will more than likely only manage 0.01% at 20k. So basically, if your amp hits 0.01% at 20k, you're all done; beyond that it's the numbers game on paper, for bragging rights.
It's the difference between a 35 mph / 56 kph speed limit in a used car with old tires or a million-dollar supercar with the greatest tires ever: you're still going 35 mph, light to light to light. And I just don't care.
As far as TIM, WHIM, ZIM, GLIM, BLIM, IMD, IUD or any other distortion stuff goes, it eventually leads to slew rate: there is a minimum, and then bragging rights.
When I see amplifiers that actually do 0.001% at 20k, I'm not imagining or making up krap about why it sounds better. It is just very impressive, because it is hard to make an amplifier stable at that point. I'm not worried about making up krap to explain "it's better" - you don't hear that. It is just impressive as far as design goes, and respected as an accomplishment.
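Since the thread keeps jumping between percentages and dB, here is the trivial conversion for reference (nothing amplifier-specific, just dB = 20*log10 of the distortion fraction):

```python
import math

def thd_pct_to_db(pct: float) -> float:
    """Distortion expressed as a percentage -> level relative to the fundamental, in dB."""
    return 20.0 * math.log10(pct / 100.0)

for pct in (1.0, 0.1, 0.01, 0.001):
    print(f"{pct:>6}% = {thd_pct_to_db(pct):6.0f} dB")
# 0.01% is -80 dB and 0.001% is -100 dB, which is how the "well below -100 dB"
# op-amp figure earlier in the thread lines up with these percentages.
```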
I get it.
After just hearing a basic metal dome for the first time, there was a significant difference. Then hearing the clarity and low distortion of a good horn - heck, even a good planar or AMT - went far, far beyond imaginary amplifier magic specs.
Heck, I was pretty opinionated about early (and still many) class D amps not doing much better than 0.01 to 0.1% distortion, and about "hearing" the difference. I told myself many times that I could; in reality, most of the time many can't tell the difference either, and I finally accepted that it is just not very audible.
And yes, I realized the big difference was the speakers. Most of the time they were closer to 4% to 6% distortion, even in very good systems. And once you hear that drop down to 1 or 3% with the detail of drivers that exceed our hearing, it is rather impressive.
Re the 20k measurement... I think one needs to realise that 20-20k is not a closed box... design-wise we need to make room for this range so that double the range still behaves very well. That's why I think it sounds plausible that, in order for an amp/DAC to sound effortless, it needs to be very good not just up to 20k but probably to 40k.
//
"3) Loudspeaker modulation distortion typically exceeds electronic modulation distortion by orders of magnitude, so if you're worried about the subjective effects of HD/MD, circuits aren't really the place to start; the loudspeakers are."
The amplifier and speaker performances are also intertwined. There are big improvements to be had by raising the output impedance of the amplifier. The speaker itself tends to have stray inductance, which gets modulated by changes in cone position, among other things. Crucially, it is modeled in series with its main resistance. What this means is that voltage control across the speaker can do nothing to fix it.
To improve it, there needs to be another impedance added in series, so that the stray inductance gets divided down to a smaller fraction of the whole. Those who have taken a course on electronics may recall that you can model a current source by starting with a voltage source and adding a large resistor in series. That's all there is to it.
But at this point, the whole thing feels a bit surreal. It's like you've discovered this major bug in Windows 95, and Microsoft's helpline responds with "thank you for bringing this to our attention. Windows 95 is now out of support and we won't be fixing this." (But Windows 11 still has the exact same bug!!.. Microsoft: "Yeah, we don't care.") I can understand Esa Meriläinen's frustration.
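A minimal sketch of that series-impedance argument, with made-up but typical component values (a 6 ohm voice-coil resistance and a 0.3 mH inductance that swings +/-30% with cone position; motional impedance and back-EMF ignored for simplicity): as the source impedance in series with the driver goes up, the coil current - and hence the force on the cone - becomes almost independent of the inductance modulation.

```python
import math

R_VC = 6.0            # voice-coil resistance, ohms (assumed typical value)
L_NOM = 0.3e-3        # nominal voice-coil inductance, henries (assumed)
F_TEST = 5_000.0      # test frequency where the inductance term matters
W = 2.0 * math.pi * F_TEST

def current_spread(r_source: float) -> float:
    """Relative spread of coil-current magnitude as L swings +/-30% with cone
    position, for a given source impedance in series with the driver."""
    currents = [abs(1.0 / (r_source + R_VC + 1j * W * L))       # 1 V drive, phasor math
                for L in (0.7 * L_NOM, L_NOM, 1.3 * L_NOM)]
    return (max(currents) - min(currents)) / currents[1]

for r_s in (0.0, 10.0, 100.0, 1000.0):    # from pure voltage drive toward current drive
    print(f"source impedance {r_s:6.0f} ohm -> current modulation {100 * current_spread(r_s):7.3f}%")
```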
The weird thing is, I can sort of see how some people may also prefer higher IMD, especially in the 10 kHz+ range, because it gives better audibility. Sibilant 's' sounds and crash cymbals get more 'bite' because some of the higher frequencies get modulated down and make things sound fuller. At least that's my guess.