Nature of Distortion

wimms said:
When it comes to loudness, I believe it has to do with the total acoustic energy delivered. There is a special mechanism in the ear that suppresses excessively loud sounds, giving an adaptive dynamic range.

My limited understanding is that there are loudness thresholds built into the inner ear (three?)... the perception of loudness per se is unrelated to that.

IMHO, sound with reasonably low levels of distortion (your loud mid-fi, for example) sounds "loud" far sooner than a much higher-SPL, low-distortion studio monitor (or similar low-distortion speaker system) does.

I believe we can tell recorded from the real thing thanks to another matter: there are almost no recordings these days that do not apply dynamic compression.

Untrue. There are many CDs made with no compression whatsoever - not pop CDs, but many on boutique labels; Chesky and Dorian are two that come to mind. Of the latter I am 100% certain, knowing them personally; of the former, 98%, having spoken to them. There are others...

When you go to live events, there is adequate sound reinforcement and little to no compression used. And it does sound more natural, despite all the horror stories of PA quality. When CDs are played through the same systems, they sound like ducks singing.

Not true. AFAIK, there is not one "pro" level sound system that does not use soft limiters to prevent system clipping, plus, usually, compression/limiting/de-essers/feedback suppressors etc. on the vocal mics and maybe on all the line feeds. What you are hearing is a combination of PA + stage amps... if you found a band that played only through the PA, I guarantee that you could not tell the difference between a digital recording of them on that stage and them playing live... (assuming there were not two passes through the effects/signal-conditioning chain)...

_-_-bear :Pawprint:
 
Francis_Vaughan said:
Worth reading.

http://www.its.caltech.edu/~musiclab/feedback-paper-acrobat.pdf

A rather nicely done paper. They are pretty careful with their conclusions too. A model of how to do it. The authors are an interesting pairing. This is the sort of stuff you can do when you are eminent in your own field 🙂

Hi,

The paper concludes with a discussion on page 41...

In all cases, with and without feedback, intermodulation terms
dominate the harmonic distortion terms of the same order. We all
know this must be true, but we usually forget it.

I thought this was good independent verification of the results in the Czerwinski paper. The latter goes into the math behind why IM is much worse. The math wasn't that bad, at least in the middle sections, I think because the authors did such a good job of explaining it.

reminders on live vs recorded sound:

The reason the human ear detects the difference between live and recorded so easily and automatically is that the recorded sound has passed through a non-linear transformation (with all the associated IM and HD products), while the live sound hasn't. The ear picks up all these signal-correlated products and dissonances in the recorded sound (even if rather low in level) and concludes it must not be real.

The reason a recorded sound is "louder" at lower volumes than the same sound played live is all the extra signal-correlated IM and HD products, which are rather dissonant to the ear. The same effect occurs with the sound of a musical instrument: a brass instrument (for example) playing at louder levels produces more higher-order (read: dissonant) harmonics in addition to the increased volume level.

The reason most of these signal-correlated products have escaped our view is that we didn't have the proper conceptual framework, and we didn't have a good enough magnifying glass (multitones) 🙂.
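
To make that concrete, here is a small numpy sketch (my own toy model: a generic weak polynomial nonlinearity, not any particular amplifier). Feed two tones through it and every HD and IM product lands at a predictable, signal-correlated frequency:

```python
import numpy as np

fs, n = 48000, 48000                      # 1 second at 48 kHz -> 1 Hz bins
t = np.arange(n) / fs
f1, f2 = 1000, 1300                       # two-tone probe
x = 0.3 * np.sin(2*np.pi*f1*t) + 0.3 * np.sin(2*np.pi*f2*t)

y = x + 0.1 * x**2 + 0.05 * x**3          # weak 2nd- plus 3rd-order nonlinearity

spec = np.abs(np.fft.rfft(y * np.hanning(n)))
spec = 20 * np.log10(spec / spec.max() + 1e-12)

def level(f):                             # spectrum level near f, dB re the carriers
    b = round(f * n / fs)
    return spec[b-2:b+3].max()

for name, f in [("f2-f1", 300), ("2f1-f2", 700), ("2f2-f1", 1600),
                ("2f1", 2000), ("f1+f2", 2300), ("3f1", 3000)]:
    print(f"{name:6s} ({f:4d} Hz): {level(f):6.1f} dB")
```

Every one of those lines moves with the input tones - exactly the kind of signal-correlated product the ear can latch onto.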

Mike
 
Hi Mike,

A couple of years back, one of our UK Hi-Fi magazines reported on a blind A-B test of live versus recorded music.
I believe it was Hi-Fi News; I think one channel and one electrostatic panel per string-quartet instrument were used, possibly with a master tape recording.
Must have been a fortune in equipment.
The experienced listening panel could not tell the difference!


When sound is reproduced at lower volumes, surely any audible difference is due to our ears having different high/low frequency sensitivities at the playback level, as compared to live instrument playing, where the full-bandwidth amplitude cannot be linearly turned down!


First-watt listening can be satisfactory because our ears remain deaf to errors that have not risen above their threshold of audibility at a given frequency. Satisfactory - maybe; realistic - not.



Thus it is clever to talk about harmonic distortion masking, but is it sensible?
Who is going to set the 'standard' listening level at which this masking is going to be applicable?
Headphone listening, bedroom listening, lounge listening, theatre listening, arena listening; all requirements are different.
Indeed, are the CD 'standards' not where the clever people have already let us all down?


Experienced Hi-Fi shops sell amplifiers to match specific loudspeakers and a specific customer's requirement. I don't think any more theoretically based standards are going to make the slightest difference.


Cheers ........... Graham Maynard.
 
Thus it is clever to talk about harmonic distortion masking, but is it sensible?

I think overall it is, although there are a few assumptions being made. The most important one is that the shape of the masking curve is level-independent. This is not too unreasonable, but it does take a bit of a leap of faith.

The origin of the entire masking function is, it seems, very much open. Personally I am of the opinion that it, and some other phenomena described by a template model, all have their roots in the process of learning to hear. That is, masking is a learnt mechanism, and results from continuous exposure to real-life sounds.

The interaction with distortion mechanisms (at least as posited by Cheever) is that if we match that template (implicitly by matching the masking function's shape) the ear/brain will elide the harmonics, because the template will still fit. One assumes the mechanism involves the ear/brain level shifting the template to a point where the harmonics are least in evidence. This is not too unreasonable an idea.

There is an ethically impossible experiment that could be used to verify the learnt nature of the template. Fit a newborn child with earphones, and play them all the sounds they would ordinarily hear, but processed in such a way that the frequency spectrum is stretched. I would predict that once they had learned to hear and talk they would function perfectly, but that if you removed their headphones and tested their hearing, they would perceive tonality the same way as we do, yet only find consonance in their stretched spectrum. We would have programmed a different template into their hearing. Total speculation on my part, and I don't have enough knowledge of the field to even know whether this is an old idea.

Back to the issue of sound level. There does often appear to be a sweet spot for the level of reproduction of music. However, I think there are a great number of factors that come into play. Friday night I went to a performance of Beethoven's 9th (with a full 100-strong chorus). Wonderful. But I was late in getting a ticket, so I had the worst of the good seats, and was some distance away. As good as it was, it wasn't nearly as loud as I play it at home in my headphones. But there was no way one could imagine it was coming out of any HiFi system I have ever heard. And interestingly enough, a few of the attributes that give away the live nature of the sound are the lack of some of the attributes many people seem to like to ascribe to really good HiFi. For instance, the idea that the sound comes out atop a velvet blackness of silence. It doesn't. The reverberation time of a real venue will never let that happen. And pinpoint imaging. Nope. Real music in real venues does not sound like HiFi. But a great deal of this is rooted in our inability to reproduce the real 3D sound field. That is a whole other discussion.
 
a passing thought on live v recorded...

There isn't a microphone that doesn't sound like a microphone.

Microphones don't hear like ears.
Speakers can't help but impart their own imprint.

So, no matter what, the result is going to be different, even if you ignore the limitations of "stereo" or even multi channel reproduction.

The best case to consider might be a digital recording of a band that plays direct to the board (no amps) vs. the same performance in the same venue "live." But even that comparison is fraught with uncertainties and is hard to make blind.

Graham, as far as those "can't tell the difference" tests go, that harkens back to the famous AR tests at Grand Central Station (NYC) back in the late 50s or early 60s (can't recall now). No one could tell the difference. BTW, Edison did the same thing. (Really.)

It is vaguely possible to "get close" in a very controlled environment, I suppose. But I strongly doubt that anyone has actually pulled off the "trick" thus far... despite the published reports. 😀

My own pet hypothesis has long been that the difference between "live sound" & a great "stereo" vs. a "not so great stereo" can be seen as a difference in the effort needed to decode the sound.

One doesn't just "hear" - it's an active process of integration over time of raw data by the brain. The fewer "brain cycles" it takes to first recognize and identify what is going on, the more processing can take place on the subtleties, which are all transient (time limited). What we perceive is not what the "sound" actually is - it's an "image" created and synthesized from cues extracted from the raw sound data that our ears pick up.

To that extent, I agree it is a learned process to a great extent, but there are also built-in, primordial and primitive "subroutines" at play. Those presented sounds that *fit* closest to the underlying "subroutines" are processed most effortlessly and quickly - leaving, in essence, time to literally pay attention to the other details going on.

For this and other reasons, people like professional musicians actually hear music very differently than the average joe. Similarly, these effects can be readily experienced when you tune your car radio into the middle of an otherwise well-known piece and don't quite know what it is instantly - it takes a few seconds of listening before your mind can provide the necessary context that permits you to make sense of just what that sound is!

IMHO, simply stated, the definition of good sound or a good system is directly correlated with how little processing it takes the brain to make sense of it, because the cues are in their proper and natural "positions" (and not interfered with or masked).

Discovering the details of how this mechanism works, I think is what we're really talking about.

_-_-bear :Pawprint:
 
Hi Graham,

I can't quite say what makes the piano so relatively different from other instruments. On my system, for instance, string instruments sound perfectly sweet and quite realistic, and voices too. I used to dislike the sound of strings on CD on a typical HiFi system, but here at least my homebrew system satisfies me.

One thing I have heard about the detectability of any instrument: even experienced musicians have serious difficulty identifying an instrument if you cut off the initial transient of a recording. That means people identify the sound from the harmonics within the first transient - and in many instruments the harmonics come close to or exceed the fundamental in volume. Taking the equal-loudness curves into account, they would often sound louder than the fundamental anyway.

So if an instrument has a very complicated harmonic structure, as the piano does, and its sound consists mainly of transients anyway, we have the case where the correctness of the harmonic elements of the first transient matters most. In other words, the piano may stand as the worst-case scenario for a reproduction system.

Re: Cheever paper:

I found the Cheever thesis very interesting, but I also have trouble with it. Given the narrow dataset he presents, it makes a good read, but many questions remain, and his passionate tone glosses over things a bit too much. For instance, I can see how the absence of lower-order distortion would lead to clear audibility of the higher-order components (masking), but that leaves open the solution of a little lower-order distortion and none of higher order - nothing to mask, no problem. Also, in one of his figures you can see the harmonic envelope of the 30-35 dB feedback case returning to his "ideal" - yet at much lower levels, which should after all make for the most transparent sound.

And finally, he does not seem to consider the equal-loudness curves and their implications for distortion detection, frequency-wise. Higher-order distortion may become less and less relevant once the equal-loudness curve tilts back up. Higher harmonics should therefore do the most damage for fundamentals in the low hundreds of Hz, because then the higher harmonics fall into the critical 2000-4000 Hz range. In other words, high-order HD of a 1000 Hz fundamental may not matter much at all; it may matter most for 200-500 Hz fundamentals.
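
That argument takes only a couple of lines to sanity-check (a sketch of mine, taking 2-4 kHz as the ear's most sensitive band per the equal-loudness curves):

```python
# Which harmonic orders of a given fundamental land in the assumed
# 2-4 kHz maximum-sensitivity band of the ear?
for f0 in (100, 200, 500, 1000, 3000):
    orders = [k for k in range(2, 26) if 2000 <= k * f0 <= 4000]
    print(f"{f0:5d} Hz fundamental -> harmonics in the sensitive band: {orders or 'none'}")
```

The 200-500 Hz fundamentals drop their mid- and high-order harmonics squarely into that band; for a 1000 Hz fundamental only the benign low orders land there.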

Assuming low THD in an amp leads to better transparency (desirable) but has a trade-off in a high-order HD mess (undesirable for the midrange only), that leads to a solution I have long thought about for those who, like me, use bi- or tri-amping.

The tweeter will reproduce the higher-order HD pretty much outside the hearing range, so one could use a standard amp with low THD, never mind the high-order byproducts, for transparency reasons.

The woofer would not be capable of rendering the high order HD of its low frequency range anyway so one could also use a standard amp.

So that leaves the mid amp and driver with the greatest vulnerability. Solution 1 would be to use a "low-order HD" amp, maybe class-A SE etc. Solution 2 would be to use a standard amp, but to combine it with a passive LP after the power amp as part of the crossover, in spite of the multi-amping. That would take out the high-order HD the power amp produces in the mid band. Since the tweeter has its own amp, the high-order HD would just dissipate as heat.

I still intend to try that one day to check whether it makes a difference. Right now I use boring identical chip amps for all 3 bands...
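
For what it's worth, sizing solution 2 is a one-screen job. A sketch with generic numbers of my own (8 Ω mid driver, mid band topping out at 5 kHz, 2nd-order Butterworth, resistive load assumed - a real driver's impedance will shift everything):

```python
import math

R, fc = 8.0, 5000.0                       # assumed mid-driver impedance, top of mid band
w = 2 * math.pi * fc
L = math.sqrt(2) * R / w                  # 2nd-order Butterworth LP: series coil...
C = 1 / (math.sqrt(2) * R * w)            # ...plus shunt cap across the driver
print(f"L ~ {L*1e3:.2f} mH, C ~ {C*1e6:.2f} uF")

# How much that filter knocks down the amp's high-order HD of a 1 kHz tone:
for k in (5, 7, 9, 15):
    att = -10 * math.log10(1 + (k * 1000.0 / fc) ** 4)
    print(f"H{k}: {att:5.1f} dB")
```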

As good as it was, it wasn't nearly as loud as I play it at home in my headphones. But there was no way one could imagine it was coming out of any HiFi system I have ever heard. And interestingly enough, a few of the attributes that give away the live nature of the sound are the lack of some of the attributes many people seem to like to ascribe to really good HiFi. For instance the idea that the sound comes out atop a velvet blackness of silence. It doesn't. The reverberation time of a real venue will never let that happen. And pinpoint imaging. Nope.

I completely agree. "Good" HiFi imaging has something very artificial about it compared to a real large-scale concert. Besides, in a large-scale concert only a tiny fraction of the SPL you hear comes to your ears directly; you actually hear almost exclusively the reverberated sound field. And this problem has no solution in HiFi: for such material, you'd need an ambient recording played back by speakers with no room interaction at all to make it realistic. Maybe the audiophile preference for small-scale music comes from this issue - here, the imaging does a realistic job. Since for small-scale music / single instruments / studio recordings you'd need omnidirectional speakers with maximum room interaction, we can only seek some kind of compromise to make all sorts of recordings and material sound reasonably good.

In the same vein, I wonder why nobody (to my knowledge) has ever brought up the following massive absurdity of stereo reproduction:

Think about it - if you listen from the ideal position, an equilateral triangle with respect to the speakers, the center instrument images perfectly in the middle, etc. Then you move to the left, or just wiggle your head. What happens? Suddenly the sound seems to come more from the *left*!

In stereo, as you move "away" from any sound's phantom image, you start hearing it louder, and from the wrong direction. I find that really vexing, because to me another artificiality of reproduced listening consists in sitting still in your seat (lest the whole sound quality you worked so hard for gets corrupted).
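
To put a number on the head-wiggle, here is a toy geometry of my own (speakers 2.3 m apart and 2 m away, listener sliding left out of the sweet spot):

```python
import math

spk_l, spk_r = (-1.15, 2.0), (1.15, 2.0)    # speaker positions, metres
c = 343.0                                   # speed of sound, m/s

for dx in (0.0, 0.1, 0.3, 0.5):             # how far the head has moved left
    ear = (-dx, 0.0)
    lead_ms = (math.dist(ear, spk_r) - math.dist(ear, spk_l)) / c * 1e3
    print(f"{dx:3.1f} m off-centre: left speaker leads by {lead_ms:5.2f} ms")
```

By 0.3 m the left speaker already leads by nearly a millisecond - enough, via the precedence effect, to drag a centre phantom into the left box, on top of the level change.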

Just some ramblings...
 
Hi,

Graham Maynard said:
A couple of years back, one of our UK Hi-Fi magazines reported
on a blind A-B test of live versus recorded music. I believe it was
Hi-Fi news; I think one channel and electrostatic panel per string
quartet instrument were used; possibly Master tape recording.
Must have been a fortune in equipment. The experienced
listening panel could not tell the difference!

When sound is reproduced at lower volumes, surely any audible
difference is due to our ears having different high/low
sensitivities at the playback level, as compared to live instrument
playing where the full bandwidth amplitude cannot be linearly
turned down!
...

Graham, Bear, Others...

Although it seems the discussion has moved on to masking and imaging, I'm still stuck back on IM distortion and how it has been relatively neglected. I do admit that my previous post oversimplified the nature of live vs. recorded, but I'd like to try to keep focused on IM and HD for at least one more point.

The example above provided me with another AHA regarding IM. Thanks for that. These types of realizations hit me about once every couple of years. Two in the same thread (thanks jcx for the first) is bizarre.

I like this experiment because it allows imaging, localization, and reverberation to work as they normally would (as opposed to listening to a recording through two channels in a smallish room).
Each sound source is placed where a player would normally sit on
stage, and the test is conducted from a seat in the hall. Thus
the hall can contribute as it normally would to the overall sound.
The distance from the listener to the sound source can reduce
the effects of imaging and localization caused by the source
coming from a speaker versus an instrument.

That is how I understand this test to work.

There are probably a number of factors at work which fool the audience into accepting the speakers as real sources.

With the hall contributing as it normally does, as Bear says, the ear/brain doesn't have to work very hard at deciphering this type of information, so it can get on with examining other aspects of the sound.

With the placement of the speakers, the imaging and localization are approximately the same as for a real source, so the ear doesn't have to work very hard at deciphering these either, and can examine other aspects of the sound.

To get back to IM... if this test were done through a pair of speakers, we would have all instruments playing through the same non-linear transfer function. Recall from a previous post that IM is influenced by 3 things:

1) the order of nonlinearity in the transfer function
2) the number of tones on the input
3) the amplitude of the input

A pair of loudspeakers has one non-linear transfer function with N tones playing through it. This test has N non-linear transfer functions with 1 tone playing through each (a fundamental plus its overtones counts as 1 tone here).

Thus the number of IM products will be greatly reduced. So the test eliminates another large chunk of the deciphering the brain would otherwise have to do, by eliminating IM effects.

From Czerwinski:

It is interesting to see how the number of distortion products
grows with a further increase of the number of initial tones. For
nth-order nonlinearity and r number of initial tones, the number
of all combinations of multinomial terms is described by the
expression [103]

(A1 + A2 + ... + Ar)^n

...where A1, A2,...Ar represent the initial tones...


For this test we have instead:

A1^n + A2^n + ... + Ar^n

- each source passing through its own nonlinearity - which is far fewer IM products.
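
The difference in product count is easy to verify by brute force. A sketch assuming an idealised memoryless nth-order term (frequencies chosen arbitrarily):

```python
from itertools import product

def output_freqs(freqs, n):
    """Distinct output frequencies of an nth-order term: every |f_i1 +/- f_i2 ... +/- f_in|."""
    signed = list(freqs) + [-f for f in freqs]
    return {abs(sum(c)) for c in product(signed, repeat=n)} - {0.0}

tones = [440.0, 554.0, 659.0]                 # three "instruments", one tone each
order = 3                                     # 3rd-order nonlinearity

together = output_freqs(tones, order)         # all sources through ONE speaker pair
separate = set().union(*(output_freqs([f], order) for f in tones))

print(f"shared nonlinearity:    {len(together)} distinct products (HD + IM)")
print(f"one speaker per source: {len(separate)} distinct products (HD only)")
```

With three sources and only a 3rd-order nonlinearity, the shared path already produces more than three times as many spectral lines as the per-source paths combined.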

So I think that this test provides more supporting evidence that
IM elimination has a lot to do with making a recording sound real.

Mike
 
I think the issue of how much processing effort the brain needs to sort out the sound is a very good point. It is something I see quite often in a related field: we do a bit of visual virtual-reality work here. We have a couple of large rear-projected systems with stereo vision. Using different tricks, the eyes see the appropriate channel and synthesise a 3D visual field. It can be very impressive. However there is no doubt that it is also rather false, and after a while it gets at first tiring, then eventually one gets a headache. The reasons are well understood. One thing that is missing is any focus-related depth cue: even though an object might be placed a short distance in front of you, or right on the virtual horizon, the eye still focuses on the screen. One of the systems uses a rather clever set of interference filters to split the light into thin spectral bands, different for the left and right eyes. However it cannot do a perfect job, because the eye's colour receptors overlap in response, so it is intrinsically impossible to avoid the two eyes seeing the object in different shades.

Despite this, the illusion is very good. But it takes a little time to adjust to, and really does take a lot of effort to keep up.

I find the same thing with some recorded music, and certainly some HiFi systems. It is a difficult thing to quantify, but there is a long history of issues with systems that one simply finds hard to listen to for extended periods.

Fooling the ear/brain into finding pinpoint stereo imaging seems to fit into this. Intrinsically, a two-channel system cannot contain enough information to reproduce the sound field, and so we end up synthesising the effect with a reduced set of cues - little more than level and, to a limited extent, phase. So many recordings do nothing more than pan a source to one point. Much like the visual stereo effects: the gross mechanism is taken care of, but all the remaining cues are simply missing.

Distortion artefacts may well fit in here too. The brain may spend a lot of time battling to make sense of conflicting cues about the nature of the sound. The gross sound may meet all the requirements, but the second level of cues may be all over the shop, and so eventually the illusion breaks down, simply through fatigue.
 
In feedback-paper-acrobat.pdf, local degeneration (RE) is treated as feedback.

What is the difference between this feedback and feedback as used in a differential pair (returned to the base)? Is the latter worse than the former in terms of IM, or do they both produce the same IM artifacts?

If the harmonic pattern the ear likes has a different form than the harmonic pattern naturally developed in electronics (microphones, power amplifiers), this means a recorded performance will never be exactly the same as a live performance - limited by the natural behavior of the electronic components themselves.
 
The feedback is essentially the same thing; if you build the circuit from first principles you should see that in the end the EB voltage simply has the feedback imposed across it. There is a simplification made (which is OK) whereby the feedback is assumed to have already corrected any imbalance - this allows some second-order terms to be dropped. But the simple answer is that, for the purposes of this discussion, the nature of the feedback is the same no matter how it is applied.
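
The paper's headline effect is also easy to see numerically. A toy model of my own (not the authors' code): a stage whose open-loop error is purely 2nd order, once wrapped in feedback, grows 3rd and higher harmonics in closed loop:

```python
import numpy as np

def forward(v, a=0.1):
    """Open-loop stage: unity gain plus a pure 2nd-order error term."""
    return v + a * v**2

def closed_loop(x, beta=0.5, iters=50):
    """Solve y = forward(x - beta*y) per sample by fixed-point iteration."""
    y = np.zeros_like(x)
    for _ in range(iters):
        y = forward(x - beta * y)
    return y

fs, f0, n = 48000, 1000, 4800              # 0.1 s of a 1 kHz sine
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * f0 * t)

spec = np.abs(np.fft.rfft(closed_loop(x) * np.hanning(n)))
spec = 20 * np.log10(spec / spec.max() + 1e-15)
for k in range(2, 6):                      # H2..H5 relative to the fundamental
    b = k * f0 * n // fs
    print(f"H{k}: {spec[b-2:b+3].max():6.1f} dB")
```

Set beta to 0 and H3-H5 collapse into the numerical noise floor; the loop manufactures them from the 2nd-order term, which is exactly the redistribution the paper measures.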

As to the pattern of harmonics - exactly, that is the entire point of the Cheever hypothesis. However, there is a further corollary that can be made. If the reproduction chain contains a low level of harmonics that are unbalanced with respect to the ear, an amplifier that produces significantly larger quantities of balanced harmonics might overwhelm the preceding harmonics enough to sound OK again. I'm sure this will be the nub of the single-ended-triode adherents' viewpoint. 🙂 It might even be right :bigeyes:
 
Or, the speaker itself may contribute the necessary "dirt" in the lower harmonics 😉

I want to add that higher-order harmonics per se don't necessarily sound bad. My violinmaker friend once told me that the better Italian violins have more of the "middle harmonics" - in the context of violin making, that means the 10th to 15th harmonics (!!!). The "higher" harmonics in violin making would be the 15th to 25th.

Essentially those better violins actually sound worse up close, to the extent that inexperienced musicians have problems with them, but sound better from a distance. They also "carry" further, i.e., they sound louder in a room.

That supports the thesis that high-order HD makes a system "louder". I can easily see how the equal-loudness curve by itself explains why that would happen - the 10th harmonic of 440 Hz, e.g., falls nicely in the ear's most loudness-sensitive region. I can also easily see the severe consequences for close-miking recording techniques - in essence, counterproductive.
 
Much fun to be had here.

I have always felt that there was a correlation between the nature of an amplifier or speaker and the sort of instrument that sounded most "live" on it. Funnily enough, horns sound good on horns, and I have always been amazed at the realism of drums on Magnaplanars. Overall it is perhaps not too surprising: in a sense those speakers were likely adding distortion components that mimicked those instruments. Nothing really startling here, and I'm sure I'm not alone in such observations.

Getting back to Cheever, the thesis is much clearer: the right profile of harmonics is magical - if the system has it, the brain elides the distortion.

Now there is an interesting further implication. Say we had a SET amplifier, and believed that it swamped the existing harmonic profile with its own, so the brain hears only the fundamental. On music played on instruments with a very subtle harmonic nature, the brain might suddenly perceive an instrument with no harmonic structure at all! The counter-argument is that we would never have designed such an instrument, because it would have sounded lifeless played live anyway. It would suggest that real musical instruments must all actually avoid the masking profile of harmonics in order to have any individual harmonic timbre, and that comparative timbre analysis is best done after the masking profile is subtracted.

More food for thought.
 
I'm a bit behind the postings, but here goes.

Hi Francis,

Maybe I should have phrased my earlier question differently.

When you listen to a good resistor loaded amplifier via headphones it sounds clean.
However, when you simultaneously drive a dynamic loudspeaker, the phones reveal the effect the loudspeaker has upon the waveform, and thus the loading and reaction it causes upon and within the amplifier itself.

Given that any family of harmonics at a 0.001% THD figure is humanly undetectable, does it make sense to talk about distortion masking when it is dynamically induced amplifier-loudspeaker interface distortion that is most significant? For it is the reactive loudspeaker that alters the drive, and thus the harmonic nature of the amplifier distortion being generated.

Do any of the suggested tests mention loading an amplifier with various crossover/cable/loudspeaker loads, and if not, does this not suggest that the proposed testing is not sensible?


Hi MBK,

Does the equilateral listening head movement scenario outlined in your Post #67 not show that we really are capable of detecting high frequency timing relationships as we establish a binaural image, ie. phase relationships above 5kHz?


Hi Mike, (mfc)

Yes, multi-tone amplifier testing does illustrate product generation, and yes, we do need to get round to doing this with realistic loudspeaker loads, or at least with good models made up from passive components. Maybe then the test results will start relating to what our ears tell us anyway.

See the reviews in Hi-Fi mags, how different amplifier characteristics are ascribed to chassis that commonly measure <0.01% THD into a resistor load, and how recommendations are often given regarding the nature of the loudspeaker they would best suit.
A Sugden A21SE has just been reviewed in extremely favourable terms - pure class-A - yet it only manages 0.3% at 10kHz, 33W into 4R. What value is <0.01% with specific loudspeaker recommendations, then?
Also, as Nelson Pass has just suggested in the 'Distortion Microscope' column, simulated examinations will not accurately predict real-world results.


Cheers .......... Graham.
 
Hi Graham,

Does the equilateral listening head movement scenario outlined in your Post #67 not show that we really are capable of detecting high frequency timing relationships as we establish a binaural image, ie. phase relationships above 5kHz?

Hard to tell - it may come from a lower-frequency perception. I just wanted to point out the in-principle "wrongness" of 2-speaker stereo reproduction per se.

Francis, you brought up another in-principle flaw of stereo recordings - panning the source between channels ignores phase and timing cues.

So we have 3 possible cues for 3D perception:

1 - loudness
2 - phase
3 - timing (group delay - mostly acoustic, the real distance to the speaker)

Stereo uses mostly #1 for the stereo effect. #2 gets a chance at the reverberant effect ("spaciousness"). #3 in stereo reproduction depends highly on listener position and makes the system very vulnerable to listening artefacts, as I pointed out before.

Unfortunately you can't even simply mix your stereo down to mono, because you lose the out-of-phase half of the reverberant signal, making the sound flat.
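
A three-line check of that (white noise standing in for the direct sound and for reverb that arrives in anti-phase between the channels):

```python
import numpy as np

rng = np.random.default_rng(0)
direct = rng.standard_normal(48000)       # in-phase (panned) direct sound
reverb = rng.standard_normal(48000)       # reverb component, anti-phase between channels

left, right = direct + reverb, direct - reverb
mono = 0.5 * (left + right)

print(np.allclose(mono, direct))          # True: the anti-phase reverb cancels entirely
```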

You can only get around these artefacts, IMO, with a completely reflecting system a la Bose 901 (hear hear!) - at the price of maximum room interaction and colouring, and thus minimum detail resolution.

Francis, good points about listener fatigue, and about "stunning" effects that wear off with time due to fatigue as well. I believe this heavily affects our listening experience. Massive detail and imaging that demand considerable listener concentration and immobility in one physical location will get you tired after the initial awe wears off.

I suspect that the same holds true for amp distortion. The usual arguments about THD and insignificant differences under ABX testing show that most standard amps sound fairly similar under typical ABX conditions - low significance levels, if any difference is detected at all. They all get the "gross" elements right.

In fact I mostly hear differences between amps only for the first few tones. Then I get used to the new sound and it sounds "the same" - I often can't detect differences in blind tests afterwards anymore. But long-term listening, at least in my experience, sets the record straight again - some amps evidently do sound better than others.
 
How about this: textbooks say that humans can hear 20 Hz-20 kHz. That is for young, healthy, normal people; with age, the bandwidth narrows. This is also why audio CDs brickwall their bandwidth at 22.05 kHz.

What if people can feel (not hear) bandwidth beyond that, say from 1 Hz to 100 kHz? Maybe that's what is missing in reproduced material. A tube SET is not completing the original harmonics, but re-creating a new set of harmonics that somehow matches the pattern the human ear likes, yet differs from the original musical instrument's harmonics.

Has anyone tried recording live music at the full 1 Hz-100 kHz bandwidth (capturing all the musical instruments plus their complete harmonics), storing it in a very good data system, and then replaying it with a very fast digital amp (MHz or GHz speed, able to reproduce every detail up to 100 kHz without adding any harmonics or modulation AT ALL)? (PMA should design this amp 😀)

Maybe before long we could buy a perfect, like-live reproduction system for home use.
 
Not sure the frequency extension matters as such. I can't consciously hear above 14 kHz, and I had my first hearing test at 17, having never gone to loud clubs before that. Still, I notice some HF distortions that bother me.

I see only one good reason for frequency extension in the highs and lows beyond the hearing range: phase shift. To minimize it, one should have about 3x the usable frequency range. It could potentially matter; however, I am afraid the vast majority of recorded material has already undergone dramatic phase shifts of all sorts, including ADC filters, so we can't hope for much added realism here. An "extreme frequency range" system will sound different, sure, but probably not more realistic than the 20-20000 Hz systems of today...
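
The 3x rule of thumb is easy to check for a single-pole rolloff (a back-of-envelope sketch; real ADC anti-alias filters are far steeper, and far worse for phase):

```python
import math

# In-band phase lag at 20 kHz for a single-pole rolloff placed at the
# band edge versus 3x beyond it:
for fc in (20000.0, 60000.0):
    lag = math.degrees(math.atan(20000.0 / fc))
    print(f"pole at {fc/1000:.0f} kHz: {lag:4.1f} deg lag at 20 kHz")
```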

Plus, don't forget that the high-order harmonics will also be beautifully reproduced...

:cannotbe:
 
lumanauw:

To be honest I don't buy the ultrasonic hearing ideas. There is neither any demonstrated physical mechanism in our ears, nor any reliable test that has shown an ability to hear past 20kHz. Indeed a great many speakers don't go much past 20kHz either, which makes worrying about an amplifier going much past it a bit moot anyway. There have been tests where some sort of effect was perceived, but typically these effects are quite adequately explained by either intermodulation effects or experimental error (usually incorrect level matching, because the comparison of complex waveforms was misunderstood).

There are issues about brickwall filtering, or any sort of filter that results in either passband ripple, or group delay artefacts in the audio passband. These can be ameliorated by moving the filter further away, but they do not provide evidence of perception of ultrasonics themselves.

With a SET amplifier you would want to be able to show that these ultrasonics can make it through the transformer too.

Infrasonics, however, are sort of real. You feel them with your stomach 🙂 There have been persistent conspiracy-theory stories about infrasonic weapons. For music there is scant reason to think them useful: you would want to find a musical instrument that can create infrasonics, and no, a very large pipe organ isn't it. Interestingly, and harking back to where this discussion came in, the very low registers of a pipe organ create the vast majority of their energy at harmonics of the notional root note.

Graham:

I think the issue of an appropriate load is often undervalued. It is interesting to notice that the Baxandall comparison test does indeed specifically give the option of placing the amplifier under test on a real speaker. Other than that, most of the other test regimes are simply silent on the idea of a real-world load. There is no intrinsic reason why they should not be performed with one, although it may muddy the waters so much that the results are hard to interpret - which rather supports your thesis in the first place. However a minimum-phase convolution of the output of an amplifier - which is really what a lumped RLC in parallel will become - has usually been argued to be benign. Indeed there was some work about a decade ago deriving the complex impedance of speakers from the published measured impedance graphs in some of the magazines, and it showed that, in general, most speakers presented a quite mild load. This work essentially proved that the idea of a need for massive over-provisioning of low-impedance capability (i.e. Krell) in order to satisfy the "peak loads" of a real speaker was a myth.
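
If anyone wants to try such a load in simulation, a lumped approximation is trivial. A sketch with generic driver values of my own (not from that work): voice-coil resistance and inductance in series with the mechanical resonance reflected as a parallel RLC:

```python
import numpy as np

Re, Le = 6.0, 0.5e-3                     # voice-coil resistance and inductance
Rres, Cres, Lres = 40.0, 400e-6, 15e-3   # parallel RLC giving a ~65 Hz resonance

f = np.array([20.0, 65.0, 200.0, 1000.0, 5000.0, 20000.0])
w = 2 * np.pi * f
Z = Re + 1j*w*Le + 1 / (1/Rres + 1j*w*Cres + 1/(1j*w*Lres))
for fi, zi in zip(f, Z):
    print(f"{fi:7.0f} Hz: |Z| = {abs(zi):5.1f} ohm, phase = {np.degrees(np.angle(zi)):6.1f} deg")
```

Even this crude model shows the familiar bass resonance hump and the inductive rise at the top - mild, as that study found, but a long way from the 8 Ω resistor on the test bench.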

I'll still stick to my guns about THD and the claim that 0.001% is undetectable. I don't think either the absolute value, or indeed the metric itself, is useful.

I think the issue with binaural imaging - at any frequency - is that there is simply not enough information in a stereo recording to do anything but grossly approximate the real sound field. Maybe at one particular spot relative to the speakers a particular image does gel, but there is no evidence it is anything more than a coincidental artefact of the listening room. In a real venue there is a constantly changing 3D sound field that we sample as we move our head within it. It is this sampling as we move our head that has been shown to provide significant spatial clues. (And it does not require synchronous resolution of supra-5kHz tones, since each ear is independently sampling the standing waves.)

MBK:

The issue of listener fatigue is very interesting. It would be quite possible to build tests to quantify it - I wonder if there is much in the literature. There are arguably two distinct but overlapping effects. For reproduction that has a difficult-to-resolve harmonic pattern - the fatiguing pattern - we can posit that the ear just gets tired and wants to stop. The other effect is a learned template: as you listen to more and more tones, the ear may start to adjust to the sonics and build a new masking template, one which normalises them - and as it does so, you lose the ability to distinguish. This idea is in accord with the accusations placed at the door of many of the "burn in" claims.

A test for these effects would be a really interesting and valuable thing. Real science for a change. 🙂
 