Linkwitz Orions beaten by Behringer.... what!!?

But I don't think that any one of us can be fooled into believing that a reproduced instrument through the entire playback system has the same sound as a live instrument in a room. Close perhaps, but there are just those subtle clues that always give it away.
If you don't have the direct comparison, the illusion can be darn good. But as soon as you hear the real thing next to it, you know. We should be glad the illusion works as well as it does. I, for one, certainly enjoy it. :)

Funny related story:
I was working in a new convention center & expo hall last week. They were very proud of the ceiling speakers and had been blasting them all day. Imagine what that sounded like in a big, concrete hall. A not very happy soup of sound.

Next morning I came in looking for breakfast and they were blasting the ceiling speakers with jazz, again. But they sounded better than they had the day before and I was a bit puzzled, wondering what they had fixed. It was at that point I pushed thru a curtain and almost tripped over the stand-up bass player. :eek: It was a live jazz trio. I had thought it was just the speakers sounding really nice that morning.
 
I think that this misses the point. It is an experiment that helps to explain "image" and how lateral reflections and cross-talk can have a strong effect. It is, of course, a completely contrived effect, not one that the producers intended, so it's no different really than "EQing to taste", but it is an interesting experiment.

Hi Earl, I think something like what this guy has in mind would be a lot more "WAF-y" than the barrier:

Title: Subjective evaluation and electroacoustic theoretical validation of a new approach to audio upmixing
Creator: Usher, John S.
Date: 2006
Abstract: Audio signal processing systems for converting two-channel (stereo) recordings to four or five channels are increasingly relevant. These audio upmixers can be used with conventional stereo sound recordings and reproduced with multichannel home theatre or automotive loudspeaker audio systems to create a more engaging and natural-sounding listening experience. This dissertation discusses existing approaches to audio upmixing for recordings of musical performances and presents specific design criteria for a system to enhance spatial sound quality. A new upmixing system is proposed and evaluated according to these criteria and a theoretical model for its behavior is validated using empirical measurements.

The new system removes short-term correlated components from two electronic audio signals using a pair of adaptive filters, updated according to a frequency domain implementation of the normalized-least-means-square algorithm. The major difference of the new system with all extant audio upmixers is that unsupervised time-alignment of the input signals (typically, by up to +/-10 ms) as a function of frequency (typically, using a 1024-band equalizer) is accomplished due to the non-minimum phase adaptive filter. Two new signals are created from the weighted difference of the inputs, and are then radiated with two loudspeakers behind the listener. According to the consensus in the literature on the effect of interaural correlation on auditory image formation, the self-orthogonalizing properties of the algorithm ensure minimal distortion of the frontal source imagery and natural-sounding, enveloping reverberance (ambiance) imagery. Performance evaluation of the new upmix system was accomplished in two ways: Firstly, using empirical electroacoustic measurements which validate a theoretical model of the system; and secondly, with formal listening tests which investigated auditory spatial imagery with a graphical mapping tool and a preference experiment. Both electroacoustic and subjective methods investigated system performance with a variety of test stimuli for solo musical performances reproduced using a loudspeaker in an orchestral concert-hall and recorded using different microphone techniques. The objective and subjective evaluations combined with a comparative study with two commercial systems demonstrate that the proposed system provides a new, computationally practical, high sound quality solution to upmixing.

DigiTool - Results - Full
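For a rough feel of the adaptive-filter idea in that abstract, here is a toy time-domain NLMS sketch in Python. Note this is only an illustration of the principle of removing the short-term correlated component: the actual Usher system works in the frequency domain with ~1024 bands and a non-minimum-phase filter, and the signals, tap count and step size below are made-up test values.

```python
import numpy as np

def nlms_decorrelate(x, d, n_taps=64, mu=0.5, eps=1e-8):
    """Adaptively predict d from x with NLMS and return the residual,
    i.e. the short-term *uncorrelated* part of d (the 'ambience')."""
    w = np.zeros(n_taps)
    e = np.zeros(len(d))
    for n in range(n_taps, len(d)):
        x_vec = x[n - n_taps + 1:n + 1][::-1]   # most recent sample first
        y = w @ x_vec                           # prediction of d[n] from x
        e[n] = d[n] - y                         # residual = ambience estimate
        w += (mu / (eps + x_vec @ x_vec)) * e[n] * x_vec   # NLMS update
    return e

# toy stereo: a common (correlated) source plus independent "ambience"
rng = np.random.default_rng(0)
src = rng.standard_normal(8000)
amb = 0.3 * rng.standard_normal(8000)
left, right = src, src + amb
resid = nlms_decorrelate(left, right)
# after convergence the residual is dominated by the ambience term
print(np.var(resid[4000:]) < 0.5 * np.var(right[4000:]))
```

The residual signals are what would then be weighted and sent to the rear loudspeakers.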
 
And you have proven this in blind tests? Otherwise it's just a biased opinion and likely intractable subjective effects. No one else has been able to hear this stuff in blind tests.
I agree this is a big problem. Only in totally optimum circumstances, at the moment anyway, can one achieve this high standard of subjective replay; as Pano said, it requires heroic efforts. Which means one can't pull an "example" of such out of the hat when one feels like it, let alone set it up in a test situation when called upon to do so.

What I've proven to myself is that the slightest aberration in the electronics is sufficient to damage the effect -- all I have to do is resolve the electrical issue to restore the SQ ... I've been doing this for 25 years, on and off, so I guess I have a tendency to believe in the thing ... :D

Frank
 
In what way does it make no sense? Care to elaborate?


With pleasure. Some of it is based on accepted science, and some on conjecture which can be made plausible with a small experiment.

Everybody not aware of the Haas effect should read up or Google it, but the crux is that direct sound suppresses reflected sound, much like the masking effect between nearby frequencies. This is hard-wired auditory processing by the brain. We cannot perceive many of the signals that enter our ears, for the simple reason that much is filtered out before these inputs reach levels in the brain of which we are aware.

In that sense the quote mentioned earlier in this thread that "reality is the best illusion" is completely correct. What we have learned to experience as reality stands apart from the raw sensory information we receive. The neural filtering that converts raw auditory input into perceived reality is not a complete white spot on the map. For sound, a central role is played by the superior olivary complex in the brain stem, in close proximity to the cochlea. It cross-links the inputs from both ears, comparing phase and loudness differences. This area is the best candidate for explaining the Haas effect in neurological terms.

Redundant information will be filtered out, most likely at this point in the brain, and gets completely lost to all other auditory centres that process sound further up in the brain. You just cannot perceive a masked tone, nor a first reflection; the information is no longer there by the time it reaches the levels where perception takes place. Now, it is good to realize that this filtering process combines inputs from both ears.

Here the little experiment comes in. Close one ear. A lot of the auditory filtering that enables intelligibility suddenly gets lost, almost completely. You immediately start to hear the room reflections, room modes, echoes and reverberant noise, and all of this easily starts to overwhelm the primary sound source. Listen with both ears, and the acoustic reality, or the illusion of reality that is created, returns with a snap.

Now back to the crosstalk between the ears when listening to stereo. The ear closest to the speaker will receive the signal first, which will inhibit perception at the other ear. It's like the Haas effect, but in this case the time delay arises from one ear to the other, not from a reflection.
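To put a number on that inter-ear delay: a spherical-head (Woodworth) estimate, assuming a typical 8.75 cm head radius, gives roughly a quarter of a millisecond for a stereo speaker at +/-30 degrees.

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth's spherical-head estimate of the interaural time
    difference for a distant source at the given azimuth.
    head_radius (m) and speed of sound c (m/s) are typical assumed values."""
    th = math.radians(azimuth_deg)
    return head_radius / c * (th + math.sin(th))

# stereo speaker at +/-30 degrees: the far ear receives the same
# signal about a quarter of a millisecond later (the crosstalk delay)
print(f"crosstalk delay ~ {woodworth_itd(30) * 1000:.2f} ms")
```

That delay is far shorter than the several-millisecond reflection delays the Haas effect is usually demonstrated with, which is worth keeping in mind when comparing the two situations.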
 
Pano,
Thank you for that last reference to a horn player in the room and nothing sounding quite like that in reality. Even a horn player played back through an all-horn system doesn't have that same transient response, the fast rise times and the directivity of a real instrument. When you can get an instrument, let alone an entire band, to have that sound, you will be a world closer to the live event. But I don't think that any one of us can be fooled into believing that a reproduced instrument through the entire playback system has the same sound as a live instrument in a room. Close perhaps, but there are just those subtle clues that always give it away.
Kindhornman, you and others are gonna hate me, but I'll beg to differ. Those subtle clues are what it's about, and they can be eliminated. Again, hard, heroic, etc, etc, ... but achievable ...

Two key things: distortion has to be subjectively low through the whole volume range. Obviously this needs to be the case at conventional levels, but it still needs to be so during those transients, and this is where the electronics must work properly, to deliver the dynamic peaks cleanly.

A saxophone blast in your living room should subjectively feel as if it's ripping the top of your head off, the bite of it should have that impact. Now, that is something that is quite straightforward for an audio system to do, but the electronics must be up to it! I've heard hundreds of systems where the electricals are not good enough, and so they always fall short ...

Frank
 
I think that this misses the point. It is an experiment that helps to explain "image" and how lateral reflections and cross-talk can have a strong effect. It is, of course, a completely contrived effect, not one that the producers intended, so it's no different really than "EQing to taste", but it is an interesting experiment.

It hints at what is possible, and shows that there is more information on a two channel recording (stereo, even!) than one normally hears with a stereo triangle.

Here is a preprocessed file of mine for 'Ambio'-configured speakers, from the original binaural version on Freesound: Freesound.org - dwareing

https://www.dropbox.com/s/k0dzhjli29ghlxo/bonfire ambio.mp3

It does not quite make it to the 'intended' 3D on speakers, but it should image well beyond the speakers..
 
It hints at what is possible, and shows that there is more information on a two channel recording (stereo, even!) than one normally hears with a stereo triangle.

Here is a preprocessed file of mine for 'Ambio'-configured speakers, from the original binaural version on Freesound: Freesound.org - dwareing

https://www.dropbox.com/s/k0dzhjli29ghlxo/bonfire ambio.mp3

It does not quite make it to the 'intended' 3D on speakers, but it should image well beyond the speakers..

Cool site there David!
At what angle should the speakers be placed? 20? 30?
 
And you have proven this in blind tests? Otherwise it's just a biased opinion and likely intractable subjective effects. No one else has been able to hear this stuff in blind tests.
I like Douglas Self's view in that the goal of audio design is to reduce the problems to a minimum. Some things did not matter to me during listening tests until I reduced problems in other areas.

I remember at one time feeling that certain recordings sounded better in reversed polarity; having taken some effort to improve the electronics, all music now sounds best in non-inverting polarity.
 
This is one reason I feel earphones have greater potential. With speakers, it makes no sense to design for one listening position.
 
Now back to the crosstalk between the ears when listening to stereo. The ear closest to the speaker will receive the signal first, which will inhibit perception at the other ear. It's like the Haas effect, but in this case the time delay arises from one ear to the other, not from a reflection.

I agree with everything you said, but I am not sure you can apply it to CTC.. have you tried the "board" experiment? As Markus said, the problem is.. it works!
 
I like Douglas Self's view in that the goal of audio design is to reduce the problems to a minimum. Some things did not matter to me during listening tests until I reduced problems in other areas.
I second that ... getting good sound is an iterative process: you do something that allows one to increase the volume, say, without the sound subjectively worsening, then you realise that there is another form of subtle distortion intruding that you were completely unaware of before. And so the bell rings for the next round of the audio "fight" ...

Frank
 
Constant directivity is also very important when you want a consistent experience when shifting listening locations, since it allows the ITD and IID relationships to remain the same over the audio spectrum.

Some impression of compression can be caused by various harmonic combinations as well as phase shifts. Some ideas are going into designs just to verify this, and hopefully they will yield improvements.
 
Cool site there David!
At what angle should the speakers be placed? 20? 30?

I think it is the old 3-sample delay. It is a long time since I played with it.
Anyhow, I have the speakers about one foot apart and about three feet away. If you outstretch your arm and fingers, you should get a tweeter at your thumb and little finger! There is little added reverb, as the recording is very dry, and it would probably work better outdoors where it was recorded. There is a fair bit of Zuccarelli, cos I like it! It sounds a bit crisper and better than the original, with your head sandwiched between the speakers, but then it is not so easy to sit down..

There are some good binaural recordings by others on that site.
Mine are public domain so they can be used for whatever. Others may not be.
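The "3-sample delay" David mentions is the heart of recursive ambiophonic crosstalk elimination (RACE): each output channel recursively receives an inverted, attenuated, delayed copy of the opposite channel, so the wavefront that would leak around the head to the far ear gets cancelled. A minimal Python sketch follows; the attenuation value is an assumed illustrative figure, not necessarily the setting used for the file above.

```python
import numpy as np

def race(left, right, delay=3, atten=0.85):
    """Toy RACE-style crosstalk canceller: feed each output an
    inverted, attenuated, delayed copy of the opposite *output*
    (the recursion is what keeps cancelling the cancellation
    signal's own crosstalk)."""
    out_l = np.zeros(len(left))
    out_r = np.zeros(len(right))
    for n in range(len(left)):
        fb_l = -atten * out_r[n - delay] if n >= delay else 0.0
        fb_r = -atten * out_l[n - delay] if n >= delay else 0.0
        out_l[n] = left[n] + fb_l
        out_r[n] = right[n] + fb_r
    return out_l, out_r
```

An impulse fed to the left input produces a decaying train of alternating-sign clicks hopping between the channels, which is exactly the behaviour that makes the narrow speaker spacing and fixed head position matter.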
 
I agree this is a big problem. Only in totally optimum circumstances, at the moment anyway, can one achieve this high standard of subjective replay; as Pano said, it requires heroic efforts. Which means one can't pull an "example" of such out of the hat when one feels like it, let alone set it up in a test situation when called upon to do so.

What I've proven to myself is that the slightest aberration in the electronics is sufficient to damage the effect -- all I have to do is resolve the electrical issue to restore the SQ ... I've been doing this for 25 years, on and off, so I guess I have a tendency to believe in the thing ... :D

Frank

But didn't you also say that you never had that "magic moment" again? So maybe, just maybe it's not the electronics?
 
Now back to the crosstalk between the ears when listening to stereo. The ear closest to the speaker will receive the signal first, which will inhibit perception at the other ear. It's like the Haas effect, but in this case the time delay arises from one ear to the other, not from a reflection.

This would suggest that putting a barrier in front of a listener would have no effect at all ("crosstalk is filtered out"), but that is not the case.
If what you say were true, summing localization based on time (phase) differences couldn't work, but if I listen to a sound over headphones, delaying one channel will move the sound accordingly. This proves that there's no inhibition process.
Natural sounds create 2 (delayed) signals at our ears; 2-speaker stereo creates 4. The barrier removes the 2 wrong ones.
Maybe I just mis-read your post?

Have you tried the barrier technique?
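The headphone observation above is easy to try yourself: delay one channel of a mono signal and the image shifts toward the leading ear. A small sketch, where the sample rate and delay are illustrative values (at 44.1 kHz, 26 samples is about 0.6 ms, close to a fully lateralized interaural time difference):

```python
import numpy as np

def itd_pan(mono, delay_samples):
    """Build a stereo pair from a mono signal with an interaural time
    difference: the right channel is delayed by delay_samples, so over
    headphones the image pulls toward the leading (left) ear."""
    delayed = np.concatenate([np.zeros(delay_samples), mono])[:len(mono)]
    return mono.copy(), delayed

fs = 44100
t = np.arange(fs // 10) / fs
tone = np.sin(2 * np.pi * 500 * t)
left, right = itd_pan(tone, 26)   # 26 samples ~ 0.59 ms at 44.1 kHz
```

Sweeping the delay from 0 up to ~0.7 ms moves the image smoothly from centre to hard left, which is the summing-localization behaviour being argued here.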
 