Feedback artifacts, cars and semantics

Christer said:


I'm tired too, but wouldn't an LP filter make the comparison fairer? If some amps start to fall off too much already in the audio band it won't help, but then, isn't that a problem with them that should show up in the test?

Unfortunately, the phase-difference residuals between the two subtracted channels are almost always bigger than the distortion residuals, so the result of the subtraction is difficult to evaluate.
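To see how badly a tiny alignment error can dominate a null test, here is a minimal numpy sketch (my own illustration with assumed values, not anything from this thread). It compares the raw subtraction residual of 0.1 % third-harmonic distortion against that of a mere 0.1 degree phase error, and then shows one common remedy: fit the best linear gain-and-phase match at the test frequency before subtracting, so only the nonlinear residual remains.

```python
import numpy as np

fs = 48_000                        # sample rate in Hz (assumed)
t = np.arange(fs) / fs             # one second of samples
f0 = 1000.0                        # test-tone frequency in Hz

ref = np.sin(2 * np.pi * f0 * t)
# 0.1 % third-harmonic distortion, perfectly phase-aligned:
distorted = ref + 0.001 * np.sin(2 * np.pi * 3 * f0 * t)
# no distortion at all, just a 0.1 degree phase error at f0:
shifted = np.sin(2 * np.pi * f0 * t + np.deg2rad(0.1))

def residual_db(out):
    """RMS of the raw subtraction residual, in dB relative to the reference."""
    r = out - ref
    return 20 * np.log10(np.sqrt(np.mean(r ** 2)) / np.sqrt(np.mean(ref ** 2)))

print(residual_db(distorted))      # ~ -60 dB: the distortion we want to see
print(residual_db(shifted))        # ~ -55 dB: the tiny phase error already swamps it

def nonlinear_residual_db(out):
    """Subtract the best linear (gain + phase) fit at f0 instead of the raw
    reference, leaving only the nonlinear part of the residual."""
    c = 2 * np.mean(out * np.exp(-2j * np.pi * f0 * t))   # complex amplitude at f0
    linear_part = np.real(c * np.exp(2j * np.pi * f0 * t))
    r = out - linear_part
    return 20 * np.log10(np.sqrt(np.mean(r ** 2)) / np.sqrt(np.mean(ref ** 2)))

print(nonlinear_residual_db(distorted))  # still ~ -60 dB: real distortion survives
print(nonlinear_residual_db(shifted))    # numerical floor: the phase error is gone
```

With a real amplifier the linear fit has to be estimated per frequency (a transfer-function measurement) rather than at a single tone, but the principle is the same.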

We have to distinguish between linear distortion (frequency response, phase response) and non-linear distortion (THD, IMD, SID, TIM, etc.).
 
I wonder how many of you have given any serious thought to the lessons learned in the audio compression world. A glimpse at the issues discussed here would make one dead sure that compression codecs are completely out of the question when it comes to accurate audio reproduction, that they are in a completely different league. Yet the developments there have been big, and the lessons learned might be useful and applicable even to high-end audio.

Perceptual codecs rely on a psychoacoustic model of human hearing. A lot of research has gone into it, and although our understanding is still far from perfect, a lot of interesting things have been learned.

I don't know how many of you have tried blind ABX'ing between an original source and material that has gone through the best lossy codecs, but the general state of the art seems to be that we now have codecs that are completely transparent in listening tests, even though the waveforms differ enormously.

And this is one important lesson: distortion does not necessarily result in an audible difference. In fact, the lossy codecs that try to reproduce the original waveform most closely have all failed (note: _lossy_) in comparison with codecs that don't even try to reproduce the original waveform. It might be hard to accept that different waveforms cannot reliably be distinguished, but that apparently seems to be the case. SACD, for example, seems to make heavy use of this.

A lot of what was learned during these developments now seems useful in assessing which distortions we can hear and which we can't. This in itself suggests that there is no reason to discard this knowledge when evaluating high-end equipment; instead, we should use it to gain further understanding.

http://www.personal.uni-jena.de/~pfk/MPP/audiocoder_english.html

This codec is generally praised for its transparency at high bit rates, supported by blind tests in which listeners scored no better than chance at distinguishing it from the original, even on very good equipment and with well-trained ears.
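"No better than chance" has a precise meaning in an ABX test: the listener's score is compared against the binomial distribution for guessing. A short self-contained sketch (my illustration, not taken from the linked page):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: the probability of getting at least `correct`
    answers right out of `trials` ABX trials by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(12, 16))   # ~0.038: unlikely to be guessing
print(abx_p_value(9, 16))    # ~0.40:  entirely consistent with chance
```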

Now that lossy compression is getting close to truly transparent, it becomes useful to apply the knowledge gained there to comparing equipment, by weighting its measured performance through a psychoacoustic model of human hearing. What remains can then be compared on a "magnified" scale: an electronic equivalent of the human ear, so to speak, covering not the average population but the sharpest listeners. Taking the physiology of the ear into account like this can bring out what really matters among the distortions in equipment, and perhaps hint at which measurements are needed to evaluate it, finally bringing objective measurements up to the same level as subjective listening tests. Attempts at creating a "digital ear" have been made, see e.g. http://www.ff123.net/earmodel.html, where the application was comparing lossy codecs. I wonder whether the same has been tried on analog equipment?
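The simplest possible version of such a weighting is to compare each measured residual component against the absolute threshold of hearing. Below is a sketch using Terhardt's well-known approximation of the threshold in quiet; the residual levels in the example are made up, and real ear models (including the one linked above) add masking on top of this.

```python
import numpy as np

def threshold_in_quiet_db(f_hz):
    """Terhardt's approximation of the absolute hearing threshold in dB SPL.
    Components below this curve are inaudible to an average young listener;
    a 'sharpest ears' model would shift the whole curve downwards."""
    f = np.asarray(f_hz, dtype=float) / 1000.0          # frequency in kHz
    return (3.64 * f ** -0.8
            - 6.5 * np.exp(-0.6 * (f - 3.3) ** 2)
            + 1e-3 * f ** 4)

# Hypothetical distortion residuals from a measurement (dB SPL at the ear):
freqs = [100, 1000, 3500, 10000, 16000]
levels = [10.0, 5.0, 0.0, 15.0, 25.0]

# Only components above the threshold are even candidates for audibility
# (masking by the music itself, not modelled here, removes more of them).
for f, l in zip(freqs, levels):
    verdict = "audible?" if l > threshold_in_quiet_db(f) else "below threshold"
    print(f"{f:6d} Hz: {verdict}")
```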

When predicting the audible performance of equipment from instrument measurements, it is impossible to avoid overlaying the physiological peculiarities of human hearing onto the measurement results. So basically, we should measure not what the equipment does, but what the human ear would hear.


One peculiarity of the human ear is that it does not hear instantaneous frequencies, phases, or events; instead it needs a cumulative effect (as if gathering data for analysis), and it is constantly adapting in time. Thus any measurement with static content is really futile. Masking makes a lot of material inaudible to us, and yet the dynamics of the content can accumulate above the hearing floor. Combined with the brain's processing, things that don't happen in reality stick out and distract far more than you would ever guess by looking at a scope.
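A crude sketch of this "cumulating effect": below an integration time of very roughly 200 ms, the detection threshold of a tone burst rises by about 10 dB per decade of shortening (approximately constant-energy detection). The numbers are textbook approximations, not a calibrated loudness model:

```python
import numpy as np

def threshold_shift_db(duration_s, tau=0.2):
    """Approximate rise in detection threshold for a tone burst shorter
    than the ear's integration time `tau` (~200 ms): about 10 dB per
    decade of shortening, i.e. roughly constant-energy detection."""
    d = min(float(duration_s), tau)
    return 10 * np.log10(tau / d)

for d in (0.2, 0.02, 0.002):
    print(f"{d * 1000:6.1f} ms burst: threshold raised by ~{threshold_shift_db(d):4.1f} dB")
# 200 ms -> 0 dB, 20 ms -> ~10 dB, 2 ms -> ~20 dB: a brief glitch must be
# far louder than a sustained tone before we hear it at all.
```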

The ear does not hear a single event if it is shorter than some threshold; we need repetitive events in order to hear them. That must include the nonlinearities of amps as well. The signal constantly passes through different regions of an amp's transfer curve, and this repetition is a function of the source signal. From this follows a simple idea: to get around an amp's nonlinearities, we would want to make sure that no harmonic of the source dwells long enough on any particular nonlinear slope of the amp to become audible. The obvious solution is to make the amp linear. The less obvious solution is to make the nonlinearity psychoacoustically inaudible.

Is this possible? How about mixing strong ultrasonic noise into the signal before the amp and cancelling it out afterwards? This might perturb the harmonics produced by the amp itself so much that they no longer form repetitive events long enough to be perceivable. In effect, the amp's additions are smeared into background noise.

Just a freak idea. I wonder whether it makes sense, whether anyone has tried it, and what effect it might have on the sonics?
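As a sanity check on the freak idea, here is a toy numerical experiment (entirely my own construction, with made-up values). A dead-zone nonlinearity stands in for crossover distortion in the amp; a small signal passes through it with and without strong band-limited ultrasonic noise that is subtracted again afterwards. This is essentially dither linearization moved to the analog domain: the discrete harmonics collapse at the price of a raised noise floor, and it works best exactly where crossover distortion is worst, at low signal levels. A real amp would of course not allow perfect cancellation of the noise afterwards.

```python
import numpy as np

fs = 192_000                                   # high rate so the ultrasonic band fits
t = np.arange(fs) / fs                         # one second -> 1 Hz FFT bins
sig = 0.02 * np.sin(2 * np.pi * 1000 * t)      # small 1 kHz tone, worst case for crossover

# Band-limited "ultrasonic noise", 25-35 kHz, synthesised in the frequency domain:
rng = np.random.default_rng(0)
spec = np.zeros(fs // 2 + 1, dtype=complex)
spec[25_000:35_000] = rng.standard_normal(10_000) + 1j * rng.standard_normal(10_000)
dither = np.fft.irfft(spec)
dither *= 0.1 / np.sqrt(np.mean(dither ** 2))  # scale to 0.1 RMS

def crossover(x, dead=0.01):
    """Toy dead-zone nonlinearity standing in for the amp's crossover region."""
    return np.sign(x) * np.maximum(np.abs(x) - dead, 0.0)

def level_db(x, f_hz):
    """Level of the component in the 1 Hz bin at f_hz, in dB full scale."""
    mag = np.abs(np.fft.rfft(x)) / (len(x) / 2)
    return 20 * np.log10(mag[int(f_hz)] + 1e-15)

plain = crossover(sig)
dithered = crossover(sig + dither) - dither    # "cancel it out after" (idealised)

for f in (1000, 3000, 5000):
    print(f"{f} Hz: plain {level_db(plain, f):7.1f} dBFS, "
          f"dithered {level_db(dithered, f):7.1f} dBFS")
```

In this sketch the coherent harmonics at 3 kHz and 5 kHz drop sharply with the dither applied, while the fundamental comes back up toward its undistorted level; what remains in-band is noise-like rather than repetitive, which is exactly what the idea aims for.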
 
wimms said:
but the general state of the art seems to be that we now have codecs that are completely transparent in listening tests, even though the waveforms differ enormously.
Big claim. Who was listening, what reproduction equipment was used?
I was once assured that MD compression was inaudible. I took part in a blind test running a CD and MD through the same DAC. I was able to spot which was the MD within 3 seconds of any music program.
I agree that certain types of distortion are not so important to the ear/brain. However, I think you would have to drug me up before I could believe that a codec can do surgery accurate enough not to degrade the music.
 
wimms said:
[snip] One peculiarity of the human ear is that it does not hear instantaneous frequencies, phases, or events; instead it needs a cumulative effect (as if gathering data for analysis), and it is constantly adapting in time. Thus any measurement with static content is really futile. Masking makes a lot of material inaudible to us, and yet the dynamics of the content can accumulate above the hearing floor. Combined with the brain's processing, things that don't happen in reality stick out and distract far more than you would ever guess by looking at a scope.

The ear does not hear a single event if it is shorter than some threshold; we need repetitive events in order to hear them. That must include the nonlinearities of amps as well. The signal constantly passes through different regions of an amp's transfer curve, and this repetition is a function of the source signal. From this follows a simple idea: to get around an amp's nonlinearities, we would want to make sure that no harmonic of the source dwells long enough on any particular nonlinear slope of the amp to become audible. The obvious solution is to make the amp linear. The less obvious solution is to make the nonlinearity psychoacoustically inaudible.

Hi Wimms,

Coincidentally, I am currently reading up on these phenomena. What you say about the build-up of a perception taking some time is heavily supported by research. The bandwidth of the brain's sound perception is much lower than what the ear can throw at it. So all these inputs are funnelled together and lead to what is often called a "landscape" of sound perception. Whenever you can recreate this landscape, you have the same perception, even if the original inputs leading to the landscape were (totally) different. This is in line with your remarks on perceptual codecs.

There is another aspect to this. Memory, and most importantly anticipation, can also generate such a landscape autonomously. So, if you are anticipating a certain sound, the brain already moves towards the expected landscape, and only a few supporting inputs are needed to complete it. To facilitate that, the brain heavily increases the gain, so to speak, of those input channels that would be required to fire to complete the anticipated landscape, AND it also turns down the gain of those channels that would NOT support the anticipated landscape. If you project this type of processing onto subjective listening, you can readily accept that you quite literally hear what you want to hear, and fail to hear what you don't want to hear. And all this has only a loose connection with the actual physical activity of the ear itself.

I like your way of thinking in coupling these phenomena to listening tests. It's quite a new view; it needs a lot of digestion, I think.

Jan Didden
 