How much tweeter distortion is audible?

breez said:
Anyone using foobar's ABX tool who has concerns about loudness differences may run ReplayGain on the files and enable ReplayGain processing during ABX.

ReplayGain analyzes audio based on a model of perceived loudness and generates a gain value to match different audio files. Offline tools are also available, so anyone supplying a distortion comparison could preprocess the files to have the same perceived loudness.



Foobar comes with a mono-to-stereo converter DSP, which simply copies the track to the other channel (not a phony pseudo-stereo algorithm). The Foobar DSP chain can be used with the ABX tool too.


This is precisely what I did to discover the gain difference, and it is how I replayed the tracks during the tests. I tried it both ways; the time I got the better-than-chance result was, I believe, without ReplayGain, but I don't recall, and I can't find it in the printout.

Dr. Geddes, I got my GF to try it because it challenged her. I told her there was a difference she should be able to hear, but that I bet she couldn't. I showed her how it worked, and she made it a sort of mission to try to find the differences. Since she isn't as into audio as I am, I first let her play around on her own; I wanted to see whether she would pick it up unaided. Then I played the ReplayGained difference track so she could hear just the distortion: I showed her which file had the distortion and let her listen, and then the file that didn't, and let her listen to that. Then I loaded it into FooABX. The best she ever did was 12 out of 20. I also managed to convince a few office mates, my brother, my sister, and some friends to try it.

As for the Shure headphones, the distortion isn't as low as on the HD600s, but the third harmonic is still at about -60 dB (0.1%), so I don't imagine that's a big issue. The highs are more rolled off than on the HD600s, but the isolation is considerably better.

For my own tests I used the headphone amp built into my processor, which is really quite good. It's an op-amp based unit of pretty considerable size, given that it was just thrown in on the pre/pro. In general, the signal path is quite clean done this way. As for my brother's setup, he was using either Audio-Technica ATH-A700s or Etymotic ER-4Ps with a PIMETA headphone amp I built him. His source would probably be his Mac laptop running Windows and Foobar2000. I'd imagine his setup is pretty clean as well.
 
Matt

I couldn't get Lidia to help me with a test unless she could get a publication out of it!:) (That's tenure-track academia for you.)

I'd also like to say that there is a big difference between trying to find the maximum detection capability for nonlinear distortion and trying to find a useful limit or target. The two can be very, very different. For example, it's fairly easy to detect THD on a pure sine wave when the playback system is very clean and the test is done at a low SPL in a very quiet environment; the detection threshold could easily be below 1%. But this is simply not a good target for a practical design, because it's an idealized threshold.

What one needs is a "scale", not a threshold: a number at which people with average or normal hearing will find nonlinearity audible on typical program material. I think it's absurd to chase the "absolute". You will get all tied up in things that matter very little and miss those things that matter most.
 
breez said:
Anyone using foobar's ABX tool who has concerns about loudness differences may run ReplayGain on the files and enable ReplayGain processing during ABX.

ReplayGain analyzes audio based on a model of perceived loudness and generates a gain value to match different audio files. Offline tools are also available, so anyone supplying a distortion comparison could preprocess the files to have the same perceived loudness.


Ah, that's like walking out into the swamp with no map or compass. Gain matching of material that actually contains the same energy, just shifted around, will give different results depending on the spectral content and crest factor of the material at hand. Also, I bet no two such programs are designed the same.

I don't doubt that some level matching will make the files harder to tell apart, but I find it hard to see that as a clever approach, since degradation takes place in every link of the recording and reproduction chain. You could also turn down the volume, put on the washing machine, run some water, and turn on the TV while testing; that would make it even harder to tell the files apart. ;-)

Foobar comes with a mono-to-stereo converter DSP, which simply copies the track to the other channel (not a phony pseudo-stereo algorithm). The Foobar DSP chain can be used with the ABX tool too.

Sorry, but no software can make a stereo file out of a mono file. All stereo information is lost when you start with either the left or the right channel, or when you mix the two together.

Recordings are done carefully (sometimes :) in order to capture a sense of space and ambience, and also to spread sources across the soundstage. Sure, you can slap some stereo reverb on a mono file, but that is not what I want from a classical recording that has already been mastered and printed in stereo.

Edit: I just want to make clear that it's important to gain match/calibrate when testing gear, in order to avoid false detection due to gain errors. Preferably better than 0.1 dB, say 0.05 dB if possible.

This should be done by printing a 1 kHz sine before the test material and calibrating with that... or just feeding a live 1 kHz signal, depending on what kind of test is being performed.
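
Roughly, the check amounts to something like the sketch below (my own illustration, not any particular tool; it assumes two WAV captures that both start with the printed 1 kHz tone, and uses Python with the soundfile library):

```python
import numpy as np
import soundfile as sf   # assumed WAV I/O library


def tone_level_db(path, tone_seconds=2.0):
    """RMS level (dBFS) of the printed 1 kHz calibration tone at the start of the file."""
    x, fs = sf.read(path)
    if x.ndim > 1:
        x = x.mean(axis=1)                     # fold to mono for the level check
    tone = x[: int(tone_seconds * fs)]
    return 20 * np.log10(np.sqrt(np.mean(tone ** 2)))


# hypothetical capture file names
diff = tone_level_db("device_A.wav") - tone_level_db("device_B.wav")
print(f"gain mismatch: {diff:+.3f} dB")
if abs(diff) > 0.05:
    print("re-trim the levels before running the listening test")
```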


/Peter
 
And I thought that audio was about reproducing the original recording as faithfully as possible...


I agree with the statement that everything that alters the original signal is important... some factors might have less impact than others, but I don't see how we can accept distortion if it can be controlled/reduced without degrading other factors.



Shouldn't there theoretically be only one "perfect" solution? Or at least one best compromise within the current state of technology... so why so many different designs and approaches?

Pardon my ignorance... I am a humble apprentice at best...
 
breez said:
Anyone using foobar's ABX tool who has concerns about loudness differences may run ReplayGain on the files and enable ReplayGain processing during ABX.

ReplayGain analyzes audio based on a model of perceived loudness and generates a gain value to match different audio files. Offline tools are also available, so anyone supplying a distortion comparison could preprocess the files to have the same perceived loudness.
I really do think that preprocessing with ReplayGain should NOT be done. In reality, at low levels the loudspeaker will not distort, and at high levels, when it reaches Xmax, the distortion rises quickly and compression occurs.

If you were to compare this loudspeaker with another one that can play loud more easily without reaching Xmax, you would experience more dynamics from the latter.

Since the compression is an integral aspect of the distortion, you shouldn't use ReplayGain. The first half of the Beethoven sample is essentially the same in the distorted version; there is no way someone will be able to detect a gain difference there. So when compression occurs in the second half of the sample and you are able to hear it, then you are able to detect the distortion. It is as simple as that.
 
jeroen_d said:
Since the compression is an integral aspect of the distortion, you shouldn't use ReplayGain. There is no way someone will be able to detect a gain difference. So when compression occurs in the second half of the sample and you are able to hear it, then you are able to detect the distortion. It is as simple as that.

I don't think it's that simple at all, because detecting level is different from detecting distortion, and if it's the level that is being used to find differences, then one cannot state that one hears the distortion. I wrestled with this for a long time and came to the conclusion that as much effort as necessary had to be made to normalize the "loudness" of the two passages in order to yield the most important data. The pure third harmonic maximizes the level-detection problem, but all orders have this issue, and it is truly a confounding variable.
 
So that would mean you would leave the first half of the sample unaltered and amplify the second half to account for compression. You propose an expansion function to mitigate the detectability of distortion, driving the loudspeaker even further beyond its Xmax?? Not practical and, IMHO, ridiculous.

To me it doesn't matter whether I'm able to hear the 3rd harmonic in music (it's easy on a sine wave). If a loudspeaker had 5% 3rd harmonic distortion, it would sound compressed, and that's what matters.
 
jeroen_d, a matched loudness level doesn't mean an expansion function; it is a fixed amount of gain being applied.

I agree with Geddes that detecting level is different from detecting the distortion/compression. Matching loudness doesn't make the compression go away: if there is an audible difference in dynamics, it will be heard under loudness-matched conditions too. The peaks will be higher in the original.
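
To illustrate with a toy example (my own, not the thread's actual files): a fixed ReplayGain-style correction scales every sample by the same factor, so the peak-to-RMS ratio is untouched, whereas a compressing nonlinearity pulls the peaks down relative to the average level.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(48_000) * np.hanning(48_000)   # stand-in "musical" burst

fixed_gain = 0.9 * x                     # what a fixed loudness-match gain does
compressed = np.tanh(x) / np.tanh(1.0)   # crude soft-clip stand-in for Xmax compression


def crest_db(y):
    """Crest factor: peak level relative to RMS, in dB."""
    return 20 * np.log10(np.max(np.abs(y)) / np.sqrt(np.mean(y ** 2)))


print(f"original crest factor: {crest_db(x):.2f} dB")
print(f"after fixed gain:      {crest_db(fixed_gain):.2f} dB")   # unchanged
print(f"after compression:     {crest_db(compressed):.2f} dB")   # noticeably lower
```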

Pan said:
Sorry, but no software can make a stereo file out of a mono file. All stereo information is lost when you start with either the left or the right channel, or when you mix the two together.

Oh, that wasn't the intention at all. The mono-to-stereo conversion was offered as a fix for those ASIO drivers that won't accept a mono track. The mono track doubled into both channels works just like using DirectSound, but without the potential degradation from the Windows mixer.
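
In other words, the DSP just duplicates the single channel into both outputs, roughly like this sketch (hypothetical file names, and assuming the Python soundfile library for I/O):

```python
import numpy as np
import soundfile as sf   # assumed WAV I/O library

mono, fs = sf.read("distortion_sample_mono.wav")   # hypothetical file name
stereo = np.column_stack([mono, mono])             # left and right are identical copies
sf.write("distortion_sample_stereo.wav", stereo, fs)
```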
 
Jeroen_d

Let's not complicate matters by talking about the specific situation in loudspeakers. It hasn't been ironed out in general and complicating it by talking about specific loudspeaker effects won't make it any easier.

There are far more significant compression effects in loudspeakers than nonlinearity, and these, I would contend and have stated all along, are the more audible effects. When you lump compression from all effects in with "nonlinearity", you will come to a false conclusion about the nonlinearity. "Compression" absolutely IS a major, if not the major, audible effect in a loudspeaker, BUT the portion of it that is due to nonlinearity is small compared to that due to the other (dominantly thermal) effects. Unless you separate the compression from the perception of the nonlinearity, you won't see this critical point.
 
I indeed do not see this critical point.

If compression is a more or less steady-state effect, caused by increased voice coil resistance once the coil has heated up, then it won't be audible, because it will just result in a gradual loss of gain (with increasing temperature). I would think this effect is much slower than the almost instantaneous dynamics of the music.

If I'm wrong, and compression is a dynamic effect, then it should result in nonlinearities. I cannot understand how you could have almost instantaneous compression effects without nonlinearities.
 
jeroen_d said:
I indeed do not see this critical point.

If compression is a more or less steady-state effect, caused by increased voice coil resistance once the coil has heated up, then it won't be audible, because it will just result in a gradual loss of gain (with increasing temperature). I would think this effect is much slower than the almost instantaneous dynamics of the music.

If I'm wrong, and compression is a dynamic effect, then it should result in nonlinearities. I cannot understand how you could have almost instantaneous compression effects without nonlinearities.


Actually, what you doubt is exactly what does happen. The thermal effects are dynamic, just as the signal is. They act almost fast enough to cause a true nonlinearity, and it has been shown that in some cases they CAN act fast enough to cause nonlinear effects, mostly sub-harmonics, but the VC has to be pretty small for this to occur. While the magnet is fairly steady-state in temperature, the VC is not: it tracks the signal RMS with only a few ms of lag, which is strongly dependent on the details of the VC. But it's anything but steady state!!
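
To put rough numbers on those time scales, here is a first-order sketch of a voice coil tracking the signal RMS (illustrative values of my own choosing, not measured data or a validated model):

```python
import numpy as np

fs    = 48_000      # sample rate (Hz)
tau   = 0.005       # assumed VC thermal time constant: a few milliseconds
Re0   = 6.0         # assumed cold voice-coil resistance (ohm)
Rth   = 10.0        # assumed thermal resistance, coil to magnet (K/W)
alpha = 0.0039      # temperature coefficient of copper (1/K)

t = np.arange(0, 0.5, 1 / fs)
# test signal: a quiet passage followed by a loud burst (volts at the terminals)
v = np.where(t < 0.25, 1.0, 8.0) * np.sin(2 * np.pi * 200 * t)

T, dt = 0.0, 1 / fs
temp = np.empty_like(v)
for i, vi in enumerate(v):
    Re = Re0 * (1 + alpha * T)      # resistance rises with coil temperature
    p = vi ** 2 / Re                # dissipated power (W), ignoring Le and motional impedance
    T += dt / tau * (p * Rth - T)   # first-order thermal low-pass
    temp[i] = T

print(f"peak temperature rise: {temp.max():.1f} K")
print(f"sensitivity loss at the end of the burst: {20 * np.log10(1 + alpha * temp.max()):.2f} dB")
```

With a time constant of a few milliseconds, the coil temperature (and hence the gain) follows the programme envelope rather than some long-term average.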
 
I was going to post this earlier, but I feel it's been covered. While, in an ideal world, it might make sense to minimize or outright eliminate every conceivable problem in the sound system, given the reality of things (time, money, the effect of one thing on another, etc.) it doesn't make sense to actually do that. I think it would be naive to believe that a speaker designer isn't constantly trying to balance all of the problems, since there seems to be no single solution that deals with every problem independently of all the others. Certain problems have additional costs disproportionate to their actual audible impact. Given a certain budget or product price, you have to pick the problems you solve; it makes sense to spend most of that budget on the things that matter most, and less on efforts that matter less.

My interpretation from all of this discussion, Dr. Geddes's earlier works and comments, etc., is that while thresholds for distortion can, in absolute terms, be quite low, in real-world terms they "can" be quite a bit higher. If we were to draw a conclusion about the average person with 95% confidence, we would have to say from these files that 3% distortion, even with the 1% gain difference, is not audible; we don't have evidence to suggest otherwise. We can add that under certain ideal conditions certain sensitive listeners can detect this difference. Since the cost of making a speaker with truly low THD and IMD (across the board) is extremely high, but the audible impact compared with other things is fairly small (statistically insignificant for the average person), it seems a mistake to make it a major priority.
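
For what it's worth, the statistics behind those statements are just the binomial distribution; a quick check of the 12-out-of-20 score mentioned earlier (Python, standard library only):

```python
from math import comb


def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: probability of scoring at least
    `correct` out of `trials` by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials


print(f"12/20: p = {abx_p_value(12, 20):.3f}")   # ~0.25, nowhere near the usual 0.05 criterion
print(f"15/20: p = {abx_p_value(15, 20):.3f}")   # ~0.021, about what a 95% criterion requires
```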

I also recall Klippel coming up. If you read Klippel's papers, it's pretty clear he believes that current distortion metrics are wrong; he claims that under some conditions even 30% harmonic distortion is inaudible (which sounds like a claim similar to Dr. Geddes's), and that paying huge amounts of attention to a speaker's distortion figure is not very sensible. Most of his work is of course about speaker manufacturing and QC, but the point seems pretty clear to me for final designs too: other things will matter more to sound quality than a single percentage can tell you.

Dr. Geddes, normally I'm in the same boat when it comes to getting my GF to participate in my interests, but this served a sort of mild academic purpose. That, and she is very competitive. We are both in the same PhD program, and it seems a pretty common trait. I ran cross-country track in undergrad; I used to run a shorter dirt-path course of close to 3 miles and averaged something like 6-7 minutes per mile, with one sub-6-minute mile in one of my runs. I mentioned my best time in the event to her, and now she wants to run a 5K with me. Mind you, I probably couldn't manage 10 minutes a mile today.
 
I agree that you should, from an academic viewpoint, look at the addition of harmonics and the linear lowering effect it has on the stimulus as separate factors with respect to audibility. However, since most of us are speaker designers, the two are inseparable and might in practice just as well be regarded as a single phenomenon.
 
Yes, but Earl made a point that is very enlightening to me. It is easy to keep THD well under 1% with good system design. Drivers are available with 0.3% 2nd and 3rd harmonic distortion at 95 dB SPL, with the 5th harmonic 20 dB below that, e.g. the small 5.5" Peerless 831882 midwoofer. In these cases, compression due to nonlinear distortion should not be audible.

But the thermal compression issue bothers me a lot, if it really acts within milliseconds. I now remember that I once performed measurements and found a gain loss when I used a louder MLS burst signal, even though the MLS burst was quite short. At the time I thought it was a measurement fault.

I think that I'm going to perform those measurements again on my system, to find out at what SPL level compression starts.
 
keyser said:
However, since most of us are speaker designers, the two are inseparable and might in practice just as well be regarded as a single phenomenon.

The point is that the two are not inseparable; they come from completely different aspects of a design.

jeroen_d said:
But the thermal compression issue bothers me a lot, if it really acts within milliseconds. I now remember that I once performed measurements and found a gain loss when I used a louder MLS burst signal, even though the MLS burst was quite short. At the time I thought it was a measurement fault.

I think that I'm going to perform those measurements again on my system, to find out at what SPL level compression starts.

This has to be done carefully, because it's not an easy measurement to get right. I posted a signal on my web site aimed at just the kind of measurement you propose. This signal randomly modulates the RMS level at frequencies below those of any signal content, but definitely within the bandwidth of the thermal response. If you excite a speaker with this signal and then cross-correlate the input envelope with the output envelope, it will yield the thermal impulse response. From this you can determine the time constants of all of the thermal masses in the motor.
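
For anyone who wants to try it, the correlation step might look roughly like the sketch below (my own sketch only; the modulation signal itself and any scaling details would come from Earl's site, and the 50 Hz envelope cutoff is an assumption):

```python
import numpy as np
from scipy.signal import butter, filtfilt, fftconvolve


def rms_envelope(x, fs, f_cut=50.0):
    """Short-term RMS envelope: square, low-pass well below the audio band
    (but above the thermal bandwidth of interest), then take the square root."""
    b, a = butter(2, f_cut / (fs / 2))
    return np.sqrt(np.clip(filtfilt(b, a, x ** 2), 0.0, None))


def thermal_impulse_response(v_in, v_out, fs):
    """Cross-correlate the input and output RMS envelopes (means removed)
    to estimate the envelope-to-envelope, i.e. thermal, response."""
    e_in, e_out = rms_envelope(v_in, fs), rms_envelope(v_out, fs)
    e_in, e_out = e_in - e_in.mean(), e_out - e_out.mean()
    # cross-correlation via convolution with the time-reversed input envelope
    r = fftconvolve(e_out, e_in[::-1], mode="full")
    lags = np.arange(-len(e_in) + 1, len(e_out)) / fs
    return lags, r
```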

You could also look at the impedance while driving the speaker with this signal and track just Re (which would have to be fitted from the data) and its changes. From this you would be able to calculate the dynamic temperature of the voice coil, and by comparing that with the RMS level of the input you could also calculate the thermal time constants.
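
For that second step, converting a fitted Re back into a coil temperature is just the copper temperature coefficient (a small helper, assuming a copper winding with a tempco of about 0.0039 per kelvin):

```python
def coil_temperature(Re_hot, Re_cold, T_cold=25.0, alpha=0.0039):
    """Estimate voice-coil temperature (deg C) from hot vs. cold resistance,
    assuming a copper winding with temperature coefficient alpha."""
    return T_cold + (Re_hot / Re_cold - 1.0) / alpha


print(coil_temperature(Re_hot=7.2, Re_cold=6.0))   # about 76 deg C for a 20% rise in Re
```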

I've been meaning to do these tests, but I just have not gotten around to it. Thermal dynamics have not been looked at in any detail. The steady state stuff has, but not the dynamic aspects.
 
gedlee said:


The point is that the two are not inseparable; they come from completely different aspects of a design.


Then I must have a faulty idea of what non-linear distortion is. I thought it was caused by the peaks of a waveform being lowered. Could you explain why the lowering of those peaks and the overtones are completely different aspects of a design?

Also, I too used to think thermal compression was more or less a steady-state effect. If it is indeed dynamically related to the dissipated power, then it wouldn't just cause linear distortion (lower gain for the whole signal) but also non-linear distortion (sub-harmonics being the most obvious, the way I see it). Nonlinear distortion shouldn't be too hard to measure, should it?
 