The only ''definitive'' answer in this Subjective world is...

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Member
Joined 2014
Paid Member
I don't follow "parse".

'Higher-order THD' doesn't make sense; total is total. Higher-order HD means something, what Mark said doesn't. I believe he put in an excess 'T'.

Your amplifier comment does not mean anything as long as you are talking about THD levels (which I have to assume you are here), because THD does not correlate with perception, so using it as a benchmark/metric/ruler is pointless. I have measured amplifiers that have very low THD plots but high levels of very audible crossover distortion.
I meant any measurable parameter that the golden ears believe they can hear. Solved problem, transparent, anything else is in the head of the listener not the shiny box.
 
Markw4

I think that you missed much of the point that I was trying to make. Since THD is not meaningful, any test that uses it is not meaningful either. Until one has a metric that correlates well with perception, no test like the one you suggest can be made that will have any relevant outcome. Hence all the tests that you are talking about have a meaningless result.

I don't think I missed the point exactly. It's not clear that a better metric will emerge without further hearing research occurring first.

In the meantime, any published research should probably describe any test signals used and whatever playback equipment was used. Sufficient detail should be provided so that any given study could in theory be replicated. We do know enough about human subject research to know that replication is necessary before one can be confident of the findings.
 
I would also add that we have people still designing and building amplifiers and other audio electronics according to THD measurements. What should they do instead to assure high quality designs? It seems wasteful to keep doing the same old thing while waiting for some new metric to be developed. What can and should be done in the meantime?
 
Disabled Account
Joined 2012
Speakers, on the other hand, almost never generate significant levels of higher-order distortion, because they are mechanical systems and higher orders require higher forces. Since speakers are so inefficient, high forces are hard to come by. Hence, the nonlinearities in loudspeakers tend to be insignificant - which is why smart companies like JBL don't publish them. They don't mean anything.

It may be that higher orders are hard to generate... but it doesn't follow that the distortion which does exist is undetectable or insignificant to me. I want to see the distortion curves -- 2H and 3H, IM, etc. vs frequency and vs power.


THx--RNMarsh
 
Member
Joined 2014
Paid Member
I don't think I missed the point exactly. It's not clear that a better metric will emerge without further hearing research occurring first.

In the meantime, any published research should probably describe any test signals used and whatever playback equipment was used. Sufficient detail should be provided so that any given study could in theory be replicated. We do know enough about human subject research to know that replication is necessary before one can be confident of the findings.

I assume from that you are not aware of the work of Earl & Lidia on this (also known as Dr Geddes and Dr Lee).
GedLee LLC
I would read some of these as they do offer a 'better way', but as Earl mentioned they were received with a bit of a 'meh'. Possibly because too many emperors didn't want their nudity revealed.
 
Bill, Actually I have seen the webpage and looked at some of the papers. I have lots and lots of questions about them, but for brevity I thought it might be possible to have a conversation here a bit at a time as Dr. Geddes is willing.

We may get into lots of things or find we are more or less in agreement pretty quickly. The G metric looks like it may be a good start, but until it is really run through the wringer with listeners at the 95th percentile of distortion and sound-quality perception (such as the listeners people like Ed Simon and JC depend on, in addition to taking measurements with instrumentation) I don't know.

Also, regarding human subject testing, it looks like there were 37 participants in one study. That is a pretty low number by current standards for human subject testing. Maybe there is some more information somewhere else that I haven't seen.

May as well chat with an expert while the opportunity exists.
 
Markw4 said:
I would also add that we have people still designing and building amplifiers and other audio electronics according to THD measurements.
Do we? A problem in these debates is that some people use "THD" as shorthand for measured nonlinear distortion; it is actually just one simple way of combining harmonic distortion figures into one number. It is used (mainly in marketing, not design) because it is simple to understand and fairly simple to get a 'good' number. Those who reject "THD" fall mainly into two camps:
1. People who don't measure nonlinear distortion at all (or anything else); they often end up preferring some added distortion. They often mention THD, and seem to imagine that the second camp are obsessed about it.
2. People who measure things, including nonlinear distortion, but know that THD is a poor way of combining measurements. Some of these may use alternative metrics, such as Shorter(?) or Geddes.
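For concreteness, here is what that "one number" amounts to -- a minimal sketch in plain Python (the function name and the two example spectra are hypothetical, chosen for illustration) showing why THD is a crude way to combine harmonic measurements: two spectra with very different audible character can produce the identical figure.

```python
import math

def thd(amps):
    """THD as commonly defined: RMS sum of harmonics 2..N divided by
    the fundamental amplitude (amps[0])."""
    return math.sqrt(sum(a * a for a in amps[1:])) / amps[0]

# Two hypothetical spectra: energy concentrated in low orders (usually
# benign) vs. the same energy pushed into high orders (usually harsh).
benign = [1.0, 0.009, 0.004, 0.001, 0.0005]
harsh  = [1.0, 0.0005, 0.001, 0.004, 0.009]
print(thd(benign), thd(harsh))  # identical THD, very different audibility
```

The squared-sum throws away exactly the information (which orders carry the energy) that perceptual work suggests matters most.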
 
Do we? A problem in these debates is that some people use "THD" as shorthand for measured nonlinear distortion;

Sure, I understand there are other measurement techniques such as FFT, multi-tone IMD, etc., and I am using THD or the word distortion as a kind of shorthand for how we measure nonlinear distortion.

However, some people also perform listening tests with trusted listeners and others don't, so far as I know. I would like to get into what, if anything, is being found by trusted listeners that instrumentation tests may not be good at picking up.

I also would like to find out if what we think is a low enough level of distortion that 95% of listeners won't hear it is really an accurate number (allowing for different types of nonlinearities and their corresponding sensitivities under different conditions). My concern here is that if ABX testing was used, and if ultra-low-distortion test systems were not used, then we may be seeing some influence of testing methodology more than hearing sensitivity up around what is believed to be the 95th-percentile level of the population. Has anyone really looked at that carefully to make sure we are really doing it right? I can't tell from what I have read so far.

It seems to me some exceptionally good trusted listeners might be needed for test system verification. I don't know if that has ever been done, and if so where I can read about it.
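On the ABX-methodology point above, one thing is at least easy to quantify: how many correct trials it takes to beat pure guessing. A minimal sketch (plain Python, exact one-sided binomial test; the function name is mine, not from any standard):

```python
from math import comb  # Python 3.8+

def abx_p_value(correct, trials):
    """One-sided exact binomial p-value: the probability of scoring at
    least `correct` out of `trials` ABX trials by guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 correct is just significant at the conventional 0.05 level:
print(round(abx_p_value(12, 16), 3))  # -> 0.038
```

This only addresses statistical chance, not the separate concern above that the test system itself may mask small differences.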
 
Bill, Actually I have seen the webpage and looked at some of the papers. I have lots and lots of questions about them, but for brevity I thought it might be possible to have a conversation here a bit at a time as Dr. Geddes is willing.

We may get into lots of things or find we are more or less in agreement pretty quickly. The G metric looks like it may be a good start, but until it is really run through the wringer with listeners at the 95th percentile of distortion and sound-quality perception (such as the listeners people like Ed Simon and JC depend on, in addition to taking measurements with instrumentation) I don't know.

Also, regarding human subject testing, it looks like there were 37 participants in one study. That is a pretty low number by current standards for human subject testing. Maybe there is some more information somewhere else that I haven't seen.

May as well chat with an expert while the opportunity exists.

37 people in a blind study is enormous for audio. Most studies reported on around here use a sample size of one - non-blind ("I can hear it!!!"). 37 was certainly enough to prove the points of our paper, which were:

1) THD and IMD are meaningless measures of audible distortion
2) better metrics can be developed and should be

I contacted many measuring equipment companies about implementing better metrics. They all said the same thing "THD is the standard ... no one is asking for anything better." "But the light is better over here."

My assessment of the situation is this: the more experts have looked at the distortion situation, the more they have concluded that it is a non-issue. So why invest in a better metric? In loudspeakers, my main interest, I have concluded the same thing, along with several experts at JBL. Unless a driver is driven beyond its design intent, its nonlinear distortion is not a significant factor.

The only case where I have found nonlinearity to be significant where it has not been clearly identified is in amplifier crossover distortion. Using a custom test that I developed that dropped the signal level down and tracked the harmonics below the noise floor, I was able to identify significant differences in amplifiers. These differences were all masked by the standard array of tests done on amplifiers. In other words, using standard manufacturers data all the amps looked to be the same. Using my revised test there were differences.

For loudspeakers I have never found this kind of effect to be the case. The numbers for nonlinear distortion just don't mean anything and are not significant.
 
It seems to me some exceptionally good trusted listeners might be needed for test system verification. I don't know if that has ever been done, and if so where I can read about it.

One interesting effect of our study that we did not report on was the following:

In our test, samples were repeated many times. If a subject was unable to repeat their responses to within a reasonable level of consistency, then the trials were flagged and that subject was not used - their responses were not reliable enough to be valid.
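A minimal sketch of that kind of repeat-trial screen (the threshold, data, and function name here are illustrative assumptions, not the actual procedure or numbers from the study): a subject whose repeated ratings of the same sample spread too widely gets flagged.

```python
import statistics

def flag_unreliable(ratings_by_sample, max_spread=1.0):
    """ratings_by_sample: {sample_id: [rating given on each repeat]}.
    Flags the subject if the average spread (population std. dev.) of
    their repeated ratings exceeds max_spread rating-scale units.
    Threshold is hypothetical, for illustration only."""
    spreads = [statistics.pstdev(r)
               for r in ratings_by_sample.values() if len(r) > 1]
    mean_spread = statistics.mean(spreads) if spreads else 0.0
    return mean_spread > max_spread

consistent   = {"s1": [3, 3, 4], "s2": [5, 5, 5]}
inconsistent = {"s1": [1, 5, 2], "s2": [5, 1, 4]}
print(flag_unreliable(consistent), flag_unreliable(inconsistent))  # -> False True
```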

We went to a well-known manufacturer and set up our test using only this company's "expert listeners". The experts were less reliable at assessing the samples than the general public was. My take on this is that they were trying to second-guess the samples to get better scores, but weren't very good at it.

Basically, to me, an "expert listener" is someone who thinks that they hear something that probably isn't there. In my years I have only ever come to discount "experts" as unreliable. Others have found this to be the case as well.
 
The only case where I have found nonlinearity to be significant where it has not been clearly identified is in amplifier crossover distortion. Using a custom test that I developed that dropped the signal level down and tracked the harmonics below the noise floor, I was able to identify significant differences in amplifiers.
I don't understand. Did the harmonics below the noise floor indicate the presence of something else?
 
Yes. In order to diagnose crossover distortion one must look at very low-level signals. The harmonics of these signals usually fall below the noise floor unless one uses techniques like synchronous averaging that lower the noise floor while enhancing the harmonics. But synchronous averaging is not easy to do - it requires the signal to be synced to the FFT time base and vice versa.
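A minimal sketch of the idea (not Dr. Geddes's actual test; all parameters are illustrative): average frames whose length is locked to the signal period, so the fundamental and its harmonics add coherently while uncorrelated noise power falls roughly as 1/N.

```python
import math
import random

def synchronous_average(signal, period, n_frames):
    """Average n_frames consecutive frames of `period` samples each.
    Components locked to the frame (fundamental and harmonics) add
    coherently; uncorrelated noise power drops by ~1/n_frames."""
    avg = [0.0] * period
    for k in range(n_frames):
        frame = signal[k * period:(k + 1) * period]
        for i, x in enumerate(frame):
            avg[i] += x / n_frames
    return avg

# Demo: a -60 dB 3rd harmonic buried well below broadband noise.
random.seed(0)
period, frames = 256, 400
sig = [math.sin(2 * math.pi * i / period)                 # fundamental
       + 1e-3 * math.sin(2 * math.pi * 3 * i / period)    # -60 dB 3rd harmonic
       + random.gauss(0, 0.05)                            # noise above the harmonic
       for i in range(period * frames)]
avg = synchronous_average(sig, period, frames)

# Correlate the averaged frame against the 3rd-harmonic basis function:
h3 = 2 / period * sum(avg[i] * math.sin(2 * math.pi * 3 * i / period)
                      for i in range(period))
print(h3)  # recovered amplitude on the order of 1e-3
```

Averaging 400 frames lowers the noise floor by about 26 dB (sqrt(400) = 20x), enough here to pull the harmonic out of the noise.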
 
diyAudio Moderator
Joined 2008
Paid Member
Conventional designs lead to 'audiophile neutrality' when done to perfection but they sound wrong.
Translation: "I don't like high fidelity sound reproduction; I prefer the sound to be modified to suit my personal taste".
No. Conventional boxes and driver selections have some issues that cannot be managed by pointing a mic at them and crossing them flat.
 
Basically, to me, an "expert listener" is someone who thinks that they hear something that probably isn't there. In my years I have only ever come to discount "experts" as unreliable. Others have found this to be the case as well.

I suspect we could identify skilled listeners through measurements. The recent Hi Res listening test thread in the forum here produced some interesting results suggesting how to identify suitable people and how to make sure they maintain accuracy over time through repeated practice and testing.

Also, it should probably be noted that brains have many biases, including certain listening mistakes, such as focusing attention on particular features of a sound in ways that can lead to errors. Much work in the field of cognitive psychology has been done to identify bias effects unknown to the affected individual.

Debiasing is currently a very active area of research. I suspect listening errors in skilled listeners could effectively be debiased to a large extent, therefore making them much more reliable.

Maybe a good idea to include a cognitive psychologist in a listening test research team.
For one interesting example of applied cognitive psychology there is the Good Judgement Project which is described in the book, Superforecasting.
Superforecasting: The Art and Science of Prediction: Philip E. Tetlock, Dan Gardner: 9780804136716: Amazon.com: Books

Similar methods could probably be applied to skilled listening optimization and error reduction.

Maybe it is as you seem to be saying: the problem is not that we can't devise better distortion metrics and also make skilled listening much more reliable; the problem is that nobody is interested in doing it.

If so, I could see there might not be commercial interest, but the area seems ripe for academic research. Just need to find a grant funding source with interest in making progress. Perhaps a rich audiophile.
 
Disabled Account
Joined 2012
With test equipment, one can learn what 2H, 3H, and other distortions sound like.

And later you can recall a sound and know what you are hearing. I use the new Quad speakers for testing because their distortion is very low: 0.1% or less depending on SPL. They also have a very flat frequency response, and I listen in near-field conditions.

One day I wanted to know how much distortion I could hear on MY system. I used a generator with an amplifier on which I could vary bias and distortion... measure it and listen. I could get some 2H and 3H and move them up and down to have one or the other dominate. Then I just listened for a detectable difference when I switched bias levels and distortion.

I could detect 0.1% of 2H or 3H, but then that is about the same as the speakers', so maybe with better speakers, or a better listener, even lower is detectable. But using 0.1% for ME and MY system, I design electronics to be 10 times better, or <0.01% THD.

If I want my entire system's distortion level to be undetectable, I need each piece of gear to be much better... maybe 5-10 times better again, or <0.001-0.005%. If the whole chain -- recording, production of the CD, and playback equipment all together -- is to stay below 0.01-0.1%, then each stage the signal went through must be very low in distortion.
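The kind of test signal described above is easy to synthesize digitally. A minimal sketch (plain Python; the function name, sample rate, and parameters are my assumptions, not RNMarsh's actual setup): a tone with a single added harmonic at a controlled level, plus a check that the level comes out as intended.

```python
import math

FS = 48000  # sample rate, assumed for this sketch

def tone_with_harmonic(freq, order, level, dur=0.1):
    """A sine plus one added harmonic at `level` times the fundamental
    (0.001 = 0.1%). A digital stand-in for varying amplifier bias to
    make 2H or 3H dominate, as described above."""
    n = int(FS * dur)
    return [math.sin(2 * math.pi * freq * t / FS)
            + level * math.sin(2 * math.pi * order * freq * t / FS)
            for t in range(n)]

# 1 kHz tone with 0.1% third harmonic -- the threshold level in the post:
sig = tone_with_harmonic(1000, 3, 0.001)

# Verify: project onto the 3rd-harmonic basis. Whole cycles fit in the
# buffer, so the projection is exact up to float rounding.
n = len(sig)
h3 = 2 / n * sum(sig[t] * math.sin(2 * math.pi * 3000 * t / FS)
                 for t in range(n))
print(round(h3, 6))  # -> 0.001
```

Switching `order` between 2 and 3 gives the "2H-dominant vs. 3H-dominant" comparison from the listening experiment.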

The quick version...

THx-RNMarsh
 