How much tweeter distortion is audible?

Basically the transducer needs to be OK, but beyond that the system design is the more dominant factor. Obviously a "broken" transducer (high distortion, etc.) is not going to work, but I have found that excessively expensive drivers don't sound any better for a given system design. The idea of obsessing over low-order THD curves is, to me, ridiculous.
 
This is the transfer curve with a -20 dB third-order negative cosine term. In the piano sample, no IM grunge is audible.
 

Attachments

  • transchar-20db.png
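A minimal sketch of how such a curve can be realized (not the exact curve in the attachment): one common way to add a single third harmonic at a prescribed level is a Chebyshev polynomial term, since T3(cos θ) = cos 3θ. The level and the cubic form below are assumptions made for illustration.

```python
# Sketch (assumption, not the curve from the attachment): a memoryless transfer
# curve that adds one inverted 3rd harmonic at -20 dB to a full-scale cosine.
import numpy as np

def transfer(x, level_db=-20.0):
    """y = x - a*T3(x), with T3(x) = 4x^3 - 3x (3rd Chebyshev polynomial).
    For x = cos(theta), T3(x) = cos(3*theta), so a full-scale cosine picks up
    exactly one 3rd harmonic, 'level_db' below the fundamental, sign inverted."""
    a = 10.0 ** (level_db / 20.0)          # -20 dB -> 0.1
    return x - a * (4.0 * x**3 - 3.0 * x)

# Quick check: distort a 1 kHz cosine and measure the 3rd harmonic.
fs = 48000
t = np.arange(fs) / fs
spec = np.abs(np.fft.rfft(transfer(np.cos(2 * np.pi * 1000 * t)))) / fs
print("H3 re H1: %.1f dB" % (20 * np.log10(spec[3000] / spec[1000])))
```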
gedlee said:
Basically the transducer needs to be OK, but beyond that the system design is the more dominant factor. Obviously a "broken" transducer (high distortion, etc.) is not going to work, but I have found that excessively expensive drivers don't sound any better for a given system design. The idea of obsessing over low-order THD curves is, to me, ridiculous.

Thanks for those insights. So there definitely is such a thing as "good enough"; SL mentioned this as well, I think.

Very nice to know that something good is possible without $600 tweeters :smash:

Is this valid for dynamic drivers vs. planars as well? I do not have planar speakers yet, but people seem to report on their "transparency" etc., which is then attributed to lower distortion.
 
gedlee said:


I have a MathCAD program that will do that, but you need MathCAD to run it.


Earl,

I would be interested in seeing this, and yes I have MathCAD.

OT question - Have you made any headway on your efforts to test the desirability (or the opposite) of early reflections that we discussed with Toole during ALMA? If I remember correctly, Floyd was going to help you and Lidia set the test up.

If so, are you posting? I'd be interested to know what the results show thus far...

Deon
 
Hey Deon

I've bounced some things back and forth with Floyd. No consensus though. I see his own data as contradictory on this point. He claims that all reflections are good and of equal importance, but this is contradicted by the fact that they also found that the room had a significant effect on loudspeaker rankings. If the latter is true, then the former seems questionable, because I don't see how they can both be true.
 
gainphile said:

Is this valid for dynamic drivers vs. planars as well? I do not have planar speakers yet, but people seem to report on their "transparency" etc., which is then attributed to lower distortion.


This is precisely the point: it is the system design that matters most. Waveguides sound significantly different from piston speakers, and planars/electrostats are another option that is significantly different. But different drivers on a waveguide, or different transducers in a piston speaker, make far less of a difference. It's the basic topology of the system that matters most; the drivers are a factor, but a far smaller one.
 
gedlee said:
This is a major problem because the harmonics generated by digitally processing a file will alias if they go above the Nyquist frequency. And since they alias during the calculations, you can't just LP filter after the calculations.


A key point!
Checking the .pdf on Keith Howard's site, it states that the input wave file is upsampled 24x before processing, which means the 20th harmonic of 22 kHz should still be in band.

great discussion!
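A rough sketch of the upsample-before-distorting idea: run the nonlinearity at the raised sample rate so the harmonics it creates stay below the new Nyquist frequency, then band-limit and decimate back. The 24x factor is the figure quoted above from Keith Howard's notes; the cubic nonlinearity here is just a placeholder.

```python
# Sketch: apply a nonlinearity at an upsampled rate so its harmonics stay
# below the (raised) Nyquist frequency, then anti-alias filter and decimate.
import numpy as np
from scipy.signal import resample_poly

def distort_alias_free(x, up=24):
    x_hi = resample_poly(x, up, 1)        # interpolate to up * fs
    y_hi = x_hi + 0.1 * x_hi**3           # harmonics land well below up*fs/2
    return resample_poly(y_hi, 1, up)     # anti-alias filter, then decimate

# Applying the same cubic directly at 44.1 kHz would let harmonics of content
# near 20 kHz fold back into the audio band before any filter could remove them.
```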
 
It has nothing to do with room variations because all of my studies were done with headphones to eliminate these kinds of variables.

The listening was done by recording the signal in the PWT and playing it back over Etymotic ER-4 research earplugs.

I am probably way behind the play here, but I have a few questions (and excuse any idiocy).

Is it valid to extrapolate from headphones (or the like) to speakers?

There has been mention in some earlier posts of sending a sound file containing 'x' amount of distortion. Is that valid? What I mean is, on a recording, any amount of added distortion would be across all frequencies.

But a loudspeaker would have differing amounts of distortion at varying frequencies surely?

As an example (this came up in a thread on AVS; needless to say, eyebrows were raised at any suggestion that distortion measurements 'were over-rated'), this speaker shows (large?) distortion at some frequencies. Due to the non-uniform nature of these distortions (as opposed to a 'blanket' x amount of distortion on a recording), is it possible that its distortion will be audible?

http://www.soundstagenetwork.com/measurements/speakers/yg_anat_ref_main_module/

Would not the only true way of testing this be to have a speaker with a 'distortion knob' (going up to eleven!), with the ability to select the frequency at which it operates? (Like a parametric filter, maybe: Q (distortion), amplitude and frequency.)

I fully accept that I could have this totally backwards!!
 
Headphones are speakers, and distortion is frequency dependent. What Dr. Geddes has done in his experiments is what you are suggesting, but with the appropriate amount of control. Headphones help to remove potential room effects and improve the test sensitivity. The only good reason to use speakers in a real room would be if you were interested in how distortion in speakers impacted the perceived sound field. Distortion is a non-linearity; it's a change to the waveform that isn't uniform.
 
It's not backwards at all, and there are some valid points. The thing is that simulating a real loudspeaker type of nonlinearity is very difficult. So what we did was much simpler, namely a frequency-independent nonlinearity. It would have been ideal to have simulated speakers directly, but in those first studies we were just trying to understand if there was any kind of correlation between perception and THD. The fact that there was none indicates that a more complex and refined model based on THD was not going to work either. What was needed was a new metric - one that did correlate with perception for "simple" nonlinearities - and then this approach could be further developed for the more complex situation of a loudspeaker.

That WAS our intention, but there was no interest in a better metric. The feeling was "sure, we ALL know THD is not relevant, but it's so easy to do and customers like it, so it's fine with us".

The technology is there to do a better job for speakers, but nobody cares, because the more we study it the more we conclude that nonlinearity in a loudspeaker is a secondary effect. So what advantage is there to refining a measure of it?
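For reference, a minimal sketch of how a THD figure for a memoryless, frequency-independent curve is obtained: distort a sine and sum harmonic power relative to the fundamental. The tanh curve below is only an example for illustration, not one of the curves from the study.

```python
# Sketch: THD of a memoryless (frequency-independent) transfer curve.
import numpy as np

def thd(transfer, fs=48000, f0=1000, n_harm=10):
    t = np.arange(fs) / fs
    spec = np.abs(np.fft.rfft(transfer(np.sin(2 * np.pi * f0 * t))))
    fund = spec[f0]                                   # f0 lands on an exact bin
    harms = [spec[k * f0] for k in range(2, n_harm + 1)]
    return np.sqrt(np.sum(np.square(harms))) / fund

# Example: a soft-clipping curve (illustrative only).
print("THD = %.2f%%" % (100 * thd(lambda x: np.tanh(1.5 * x) / np.tanh(1.5))))
```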
 
Hi Dr. Geddes, I've talked with JJ, who was with AT&T and is now with DTS, as well as one of the researchers behind MPEG, and both seemed to indicate that doing such research in a room would only be important if knowing the effects on imaging or sound staging were important. I've always had a problem with null results in conditions like that, since the number of potential confounding variables increases. There are labs which would probably work fine, but a lot of the AES research I've read was not done in such labs, but in real rooms.

The metric we use in statistics to understand the effect of this change (headphones to speakers in a room) on test selectivity will increase the denominator by a minimum of 1 (but probably much more) while the numerator stays the same. So if the number of experimental independent and dependent variables is 2, but the number of independent confounding variables increases from 1 or 2 to something like 4 or more, you would need to increase your sample size by something around 4 times what you originally had to reach the same level of selectivity. And the thing about that metric is that you never know exactly how many denominator variables really exist in a study like this. You can do your best to find them and control them, but you really never know.
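A rough illustration of the sample-size point (the numbers are made up, not from any study): if uncontrolled room variables add noise variance, the standardized effect size shrinks, and the N needed for the same statistical power grows roughly with the square of that shrinkage.

```python
# Hypothetical power calculation: smaller effect sizes (more uncontrolled
# variance) require disproportionately larger samples for the same power.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()
for d in (0.8, 0.4, 0.2):    # assumed effect sizes as confounds pile up
    n = power.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print("effect size %.1f -> about %d subjects per group" % (d, round(n)))
```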
 
pjpoes said:
Headphones are speakers, and distortion is frequency dependent.


Yeah, I realise headphones are speakers, but just checking basics, ya know? I mean, all of these points came out of Earl checking basic assumptions. And let's face it, headphones and full-range speakers are very different sonic experiences (else we wouldn't have both), and not everything is directly scalable.

'Distortion is frequency dependent' - is that the point I was missing? I just assumed adding 'ten %' distortion was across the board (all frequencies).

I was trying hard to come up with an analogy that would explain what I meant (it's unclear in my own head, so it's very probably unclear to those outside it!). If we were watching TV, and you trundled off to make a coffee, and while you were gone I turned down the brightness a tad, when you came back you probably wouldn't notice any difference in the picture (my understanding of adding 'ten %' distortion to the recording). So the entire screen has been affected across the board.

BUT if instead only two vertical bands on the screen (say) had the brightness turned down by the same amount (two bands with 10% distortion, analogous to the two big distortion peaks in the speaker graph above), and the rest of the screen was left at its original brightness, then you probably would notice straight away. That is, the recording is left untouched but the speaker has distortion at certain frequencies only.




gedlee said:
So what we did was much simpler, namely a frequency-independent nonlinearity.

Is that saying the same thing as pjpoes??


The technology is there to do a better job for speakers, but nobody cares, because the more we study it the more we conclude that nonlinearity in a loudspeaker is a secondary effect. So what advantage is there to refining a measure of it?

You probably understood my question earlier about the recording, but if not did my crappy analogy make better sense?

Of course that analogy is based on my assumption that adding distortion to a recording IS distortion 'across the board', but I am a bit confused about that now! (And that speaker distortion varies across the board, i.e. more at some frequencies than others, as in the (extreme?) example posted above - or is it not that extreme for speakers?)
 
First, in our tests the subjects compared a distorted signal to an undistorted one. So in your example there would be two TVs: one I change while you are out of the room and the other is always left alone. Now you can easily tell if one has had the brightness turned down by comparing it to the other.

We used insert earphones because they are known to have very low distortion, so that their distortion was not a confounding variable.

Distortion "across the board" is what electronics tends to do. It tends to be independent of frequency. But in a loudspaeker the distortion is mostly excursion dependent and the excursion is greatest at resonance. This can be handled by converting the signal into its excursion spectrum, distorting that and then going back to the sound spectrum. Its just a lot more work that was not justified in a first test.
 
gedlee said:
We used insert earphones because they are known to have very low distortion, so that their distortion was not a confounding variable.

May I ask what earphones you used?

Those I have seen measure about 30-40 dB worse than the best circumaural headphones, and even worse than the best loudspeakers.

Distortion "across the board" is what electronics tends to do. It tends to be independent of frequency.

Nope, most electronics show rising distortion towards higher frequencies due to decreased open-loop gain.

But in a loudspeaker the distortion is mostly excursion dependent, and the excursion is greatest at resonance.

Nope. In a bass reflex speaker the excursion is greatly reduced at the box resonance, which means reduced distortion compared to above and below resonance, so it depends. Also, speakers typically have increased distortion towards higher frequencies, not only towards lower frequencies where excursion is higher.


It's not backwards at all, and there are some valid points. The thing is that simulating a real loudspeaker type of nonlinearity is very difficult.

Klippel seems to do that fine, and he has a slightly different take on the matter than you.


/Peter
 
pjpoes said:
Headphones help to remove potential room effects and improve the test sensitivity. The only good reason to use speakers in a real room would be if you were interested in how distortion in speakers impacted the perceived sound field. Distortion is a non-linearity; it's a change to the waveform that isn't uniform.

You can't take that for granted without looking at how we hear things. Speakers can have higher sensitivity than headphones depending on what you're testing.

You must look at binaural masking and such effects.


/Peter
 
Pan said:


You can't take that for granted without looking at how we hear things. Speakers can have higher sensitivity than headphones depending on what you're testing.

You must look at binaural masking and such effects.


/Peter

It's my understanding that such good psychoacoustic testing as has been done has shown that headphones offer a far more sensitive measure. I would argue that all too many of the researchers who made this claim lack the statistical understanding to see why it is so. Test selectivity is greatly impacted by the number of variables controlled vs. not controlled in the model, and a room opens up a lot of uncontrolled variables. Sample size is also very important and often under-estimated. I'm sure that the sample is largely chosen based on what can be assembled, but it will dictate which models can and can't be run. I've seen statistical models (when they even bother to present them) which should not have been used, and the non-significant results are really not surprising.

I mostly posted that as I was hoping Dr. Geddes would give us his opinion on the pluses and minuses of headphones and speakers.

Nonetheless, as I understood Dr. Geddes' article, it wasn't intended to prove or disprove the audibility of distortion (its threshold) at various frequencies; it was designed to disprove the idea that THD and IMD were metrics with a correlational association to the perception of good sound. That's a completely different type of study, and while having controls is important, in that situation all you have to ensure is that within a certain range distortion can reliably be detected; making sure that range stretches down to 0.1 or 0.00000001% really doesn't matter. From there you just check people's perception of good/bad/indifferent against the percentage (THD or IMD) and look for the linear relationship. In addition, because of the way Dr. Geddes collected the data, they could also show precisely why there was no good correlation (some combinations of distortion will drive up the percentage without having a strong impact on audibility, while other things don't drive the distortion figure up but are far more audible).
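A sketch of the kind of check described here: one THD (or IMD) figure and one subjective rating per stimulus, then look for a linear relationship. The arrays below are placeholders for illustration, not data from the study.

```python
# Hypothetical check for a linear THD-vs-perception relationship.
import numpy as np
from scipy.stats import pearsonr

thd_percent = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
rating = np.array([4.2, 4.5, 3.9, 4.4, 4.1, 3.8, 4.3])   # made-up ratings

r, p = pearsonr(thd_percent, rating)
print("Pearson r = %.2f, p = %.2f" % (r, p))   # weak r, high p: no useful correlation
```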
 
pjpoes said:


I mostly posted that as I was hoping Dr. Geddes would give us his opinion on the pluses and minuses of headphones and speakers.

Nonetheless, as I understood Dr. Geddes' article, it wasn't intended to prove or disprove the audibility of distortion (its threshold) at various frequencies; it was designed to disprove the idea that THD and IMD were metrics with a correlational association to the perception of good sound. That's a completely different type of study, and while having controls is important, in that situation all you have to ensure is that within a certain range distortion can reliably be detected; making sure that range stretches down to 0.1 or 0.00000001% really doesn't matter. From there you just check people's perception of good/bad/indifferent against the percentage (THD or IMD) and look for the linear relationship. In addition, because of the way Dr. Geddes collected the data, they could also show precisely why there was no good correlation (some combinations of distortion will drive up the percentage without having a strong impact on audibility, while other things don't drive the distortion figure up but are far more audible).


Matt

You have it exactly correct. No claim about the threshold of audibility of distortion was ever made in our first study. We were only looking at the measurement reliability and repeatability of THD and IMD. We found them repeatable, but they lacked a correlation with perception.

Headphones are very good sources for a great many tests as they take out a lot of confounding variables. Etymotic ER-4 are known as "research ear plugs" for just this reason.
 
Dr. Geddes, are you aware of anyone treating the trials within each person as nested and using nested multilevel modeling techniques? I can see a lot of advantages for more sensitive testing, but it would require fairly sizable samples, both in the number of trials per person and the number of people. As I'm picturing such a study (one like you did, or even a step 2 of it), I could see the benefit being that confounding variables are maximally controlled in the model. If I've read the various AES articles correctly, they seem to aggregate all the trials together - sort of averaging the trials across participants. It's an older method that was largely abandoned in contemporary high-level social science research. I say high-level because when the samples are too small, you cannot run MLM models like HLM without losing too much power.
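For concreteness, a sketch of the nested (multilevel) approach being asked about: trials nested within listeners, fitted with a random intercept per listener instead of averaging trials across participants. The data frame and column names are hypothetical.

```python
# Hypothetical mixed-effects fit: rating ~ distortion metric, with a random
# intercept for each listener (trials nested within listeners).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({                    # one row per trial
    "listener": ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
    "metric":   [0.1, 1.0, 5.0, 0.1, 1.0, 5.0, 0.1, 1.0, 5.0],
    "rating":   [4.5, 4.0, 2.5, 4.8, 4.4, 3.0, 4.2, 3.9, 2.8],
})

model = smf.mixedlm("rating ~ metric", df, groups=df["listener"])
print(model.fit().summary())
```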
 
pjpoes said:
Dr. Geddes, are you aware of anyone treating the trials within each person as nested and using nested multilevel modeling techniques? I can see a lot of advantages for more sensitive testing, but it would require fairly sizable samples, both in the number of trials per person and the number of people. As I'm picturing such a study (one like you did, or even a step 2 of it), I could see the benefit being that confounding variables are maximally controlled in the model. If I've read the various AES articles correctly, they seem to aggregate all the trials together - sort of averaging the trials across participants. It's an older method that was largely abandoned in contemporary high-level social science research. I say high-level because when the samples are too small, you cannot run MLM models like HLM without losing too much power.

We basically did have a small sample size. It's difficult to find subjects, and good ones are extremely hard to find. There is no way that you can get anybody to do a test for more than about 30-40 minutes. You have to work with minimal numbers, which seriously limits the analysis that you can do.
 