Linkwitz Orions beaten by Behringer.... what!!?

Markus,

the Sengpiel site is a very good read; once again I am impressed by the rational approach to engineering questions typically taken by our good neighbours :). It's an immediate bookmark for future reference. Unfortunately for many, most of it is in German, but at least this handy list is in English too:
www.sengpielaudio.com/ComparativeRepresentationOfSoundFieldQuantities.pdf

As to your remark that "Let's not forget that interchannel time panning doesn't go along well with the notion that amplitude panning already results in interaural phase differences caused by interaural crosstalk.", I am not that convinced. It is a topic that came up before in this thread, but I think there are good reasons to believe that interaural crosstalk is suppressed by the Haas effect. In addition, I fail to understand how interaural crosstalk results in interaural phase differences.
 
As to your remark that "Let's not forget that interchannel time panning doesn't go along well with the notion that amplitude panning already results in interaural phase differences caused by interaural crosstalk.", I am not that convinced. It is a topic that came up before in this thread, but I think there are good reasons to believe that interaural crosstalk is suppressed by the Haas effect. In addition, I fail to understand how interaural crosstalk results in interaural phase differences.

It's described in Blumlein's "stereo patent". Another description is given here (see 4.3.2):
http://www.phaedrus-audio.com/stereosonic.pdf
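The mechanism can be sketched numerically. A hedged illustration only: the 300 Hz test frequency, the ~250 µs far-speaker path delay, and the 6 dB pan value are my own assumed numbers, not taken from Blumlein's patent or the linked paper.

```python
import numpy as np

# How crosstalk turns an interchannel LEVEL difference into an
# interaural PHASE difference at low frequencies, where the head
# barely shadows the far speaker. All numbers are illustrative.
f = 300.0           # test frequency, Hz
w = 2 * np.pi * f
itd = 250e-6        # extra path delay from the far speaker, s (~30 deg setup)
gL, gR = 1.0, 0.5   # amplitude-panned source, left channel 6 dB louder

# Each ear hears the sum of BOTH speakers; the far speaker arrives
# `itd` later. Represent each arrival as a phasor at frequency f.
ear_left = gL * np.exp(0j) + gR * np.exp(-1j * w * itd)
ear_right = gL * np.exp(-1j * w * itd) + gR * np.exp(0j)

ipd = np.angle(ear_left) - np.angle(ear_right)  # interaural phase difference
print(ipd / w * 1e6)  # equivalent interaural delay, roughly 85 microseconds
```

A pure level difference between the channels thus produces a nonzero interaural phase difference at the ears, which is the point of contention above.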
 
You mean his speakers don't create enough reflections that could mask the inherent problems of stereo?

Stability of the image starts with the direct sound. The problem is distinct reflections/diffraction from the speaker itself.

Reflections do not improve the phantom image. Coherent reflections are easily masked out by perception because they correlate with the direct sound, whereas reflections whose phase differs across the spectrum are not recognized as part of the direct sound. This ties directly to the fine structure of the speaker's polar response. For a fairly typical stereo listening triangle, the listener's ears span an angle of <2 degrees from the perspective of any point on the speaker. From each ear's perspective the sound is the vector sum of all radiating points of the speaker. The more the speaker diverges from a point source, the more different the sound each ear receives, leading to the perception of a speaker in space instead of just the intended signal.

This is easy to measure. Place a measurement microphone at the listening position, record an IR, rotate the speaker 2 degrees and measure a 2nd IR. Alternatively, leave the speaker fixed and move the microphone to a 2nd location corresponding to the 2nd ear. To look at spectral/phase differences, the IR is gated/windowed to remove all room reflections. With a good DAW it is possible to look at the relative timing between zero crossings of the two measured IRs in the region prior to contamination with room reflections.
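The gating and timing comparison can be sketched in a few lines. A hedged illustration: the function names and the synthetic impulse responses are mine (a real measurement would load two recorded IRs instead), and the delay is read from the cross-correlation peak rather than from individual zero crossings.

```python
import numpy as np

def gate_ir(ir, fs, gate_ms=5.0):
    """Keep only the first gate_ms after the IR peak (decaying Hann
    window), discarding later room reflections."""
    peak = int(np.argmax(np.abs(ir)))
    n = int(fs * gate_ms / 1000)
    out = np.zeros_like(ir)
    seg = ir[peak:peak + n]
    out[peak:peak + len(seg)] = seg * np.hanning(2 * len(seg))[len(seg):]
    return out

def relative_delay(ir_a, ir_b, fs):
    """Delay of ir_a relative to ir_b in seconds, from the peak of
    their cross-correlation (positive = ir_a arrives later)."""
    xc = np.correlate(ir_a, ir_b, mode="full")
    lag = int(np.argmax(xc)) - (len(ir_b) - 1)
    return lag / fs

# Synthetic stand-ins for the two measured IRs: direct arrival at
# sample 100, the second position delayed by 5 samples, plus a fake
# room reflection at sample 600 that the gate must remove.
fs = 48000
ir1 = np.zeros(1024); ir1[100] = 1.0; ir1[600] = 0.5
ir2 = np.zeros(1024); ir2[105] = 1.0; ir2[600] = 0.5
dt = relative_delay(gate_ir(ir2, fs), gate_ir(ir1, fs), fs)
print(dt * 1e6)  # ~104 microseconds (5 samples at 48 kHz)
```

The 5 ms gate here keeps only the first 240 samples after the peak, so the fake reflection at sample 600 never enters the comparison.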


A small tweeter on a circular baffle produces a severe diffraction pattern along the listening axis. Conical horns are very similar in this regard.

At the lower cutoff frequency of a horn, the horn has little directivity control and the effects of volume expansion take over. Yes, a 1" compression driver mounted in a conical horn with a 12" circular opening is not all that different from a 1" tweeter mounted in the center of a 12" circular baffle over the 750 Hz-1.5 kHz range.
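A quick sanity check of that comparison, assuming c ≈ 343 m/s (the specific numbers are mine): over the quoted band a 12" opening is only about two thirds to one and a third wavelengths across, so neither the horn mouth nor the baffle is acoustically large there.

```python
import math

c = 343.0          # speed of sound in air, m/s (assumed)
a = 0.3048 / 2     # radius of a 12" mouth or baffle, m
for f in (750.0, 1500.0):
    ka = 2 * math.pi * f * a / c     # normalized acoustic size
    diam_wl = 2 * a * f / c          # mouth diameter in wavelengths
    print(f"{f:.0f} Hz: ka = {ka:.1f}, diameter = {diam_wl:.2f} wavelengths")
```

ka comes out around 2 at 750 Hz and 4 at 1.5 kHz, i.e. small enough that edge diffraction, not horn loading, dominates the radiation pattern.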
 
Reflections do not improve the phantom image. Coherent reflections are easily masked out by perception because they correlate with the direct sound, whereas reflections whose phase differs across the spectrum are not recognized as part of the direct sound. This ties directly to the fine structure of the speaker's polar response. For a fairly typical stereo listening triangle, the listener's ears span an angle of <2 degrees from the perspective of any point on the speaker. From each ear's perspective the sound is the vector sum of all radiating points of the speaker. The more the speaker diverges from a point source, the more different the sound each ear receives, leading to the perception of a speaker in space instead of just the intended signal.

Why would a speaker performing as a perfect point source be less localizable? I think the opposite is true. A point source is easier to localize because spurious localization cues are removed.
In my mind this is the reason why such a speaker is capable of projecting a sharp phantom image. The ear has to deal with fewer spurious cues.
 
Your last 3 examples clearly demonstrate how important coherent panning of all frequencies is. It doesn't help to obsess about higher frequencies in speaker design when there are large spatial errors at lower frequencies.
Yes, it is sad if people refuse to look into Griesinger's papers and then give counterproductive advice regarding time panning of this experiment. :rolleyes:

Once again, Griesinger's argument in a nutshell:
Every harmonic is "glued" in phase to its fundamental frequency. You can see the "beat" of the fundamental frequency in each and every filter bank of the basilar membrane where a harmonic exists. This means that the brain can (and will) reconstruct the frequency and phase of the fundamental (and possibly the lowest harmonics too) from the higher harmonics, if the fundamental is missing or of doubtful phase.
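That "beat" can be demonstrated numerically. A hedged sketch: full-wave rectification stands in crudely for the envelope a single auditory filter would see, and the 200 Hz fundamental with harmonics 2-5 is my own choice of example.

```python
import numpy as np

# A 200 Hz harmonic complex with the fundamental MISSING:
# only harmonics 2..5 (400, 600, 800, 1000 Hz) are present.
fs = 48000
f0 = 200
t = np.arange(0, 0.1, 1 / fs)
x = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(2, 6))

# Crude envelope: full-wave rectify and remove the mean.
env = np.abs(x) - np.mean(np.abs(x))

# Autocorrelate the envelope and look for the strongest repeat in a
# 100-400 Hz pitch range: it falls at the period of the missing 200 Hz
# fundamental, even though no energy exists at 200 Hz itself.
r = np.correlate(env, env, mode="full")[len(env) - 1:]
lo, hi = fs // 400, fs // 100
lag = lo + int(np.argmax(r[lo:hi]))
print(fs / lag)  # ~200 Hz
```

The envelope repeats at the period of the absent fundamental, which is the regularity the basilar-membrane argument above appeals to.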

By first changing the loudness relation (which is a change in phase already) and then changing the phase relation between <700 Hz and >700 Hz, we have done our best to destroy this mechanism. :eek:

Rudolf
 
Yes, it is sad if people refuse to look into Griesinger's papers and then give counterproductive advice regarding time panning of this experiment. :rolleyes:

Rudolf

Funny, I understood it completely.

One point on the sphere absorption calculation presented earlier: it's wrong. The SPL reaching the boundary of the larger sphere has dropped by a large amount (depending on the radius), so taking 1% of this smaller amount results in a smaller loss of energy, not a greater one. The whole derivation is wrong.
 
Too cryptic for my simple brain.

Yeah, I don't know what that means either. :xeye:

I thought of this:

I only quickly went through Greisinger's website, because as soon as a writer displays a genuine lack of knowledge, there are better things to do with my time. ...
Pano, do you know what the SOTA is when it comes to panning not just on loudness, but also on time delay? If you have the ability to pan loudness and time delay simultaneously, it would make me happy to do some simple math and provide you with a recipe.
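Not an answer on the SOTA side, but a minimal sketch of what panning on both loudness and time delay could look like; the function name, sign convention and sample-quantized delay are my own assumptions, not any console's pan law.

```python
import numpy as np

def pan_left(x, fs, level_db, delay_us):
    """Favour the left channel of a mono signal x two ways at once:
    boost the left channel by level_db, and delay the right channel
    by delay_us so the left channel also leads in time."""
    d = int(round(delay_us * 1e-6 * fs))  # delay rounded to whole samples
    left = x * 10 ** (level_db / 20)
    right = np.concatenate([np.zeros(d), x])[:len(x)]
    return np.column_stack([left, right])

# Example: pan 3 dB and 300 microseconds toward the left channel.
fs = 48000
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
stereo = pan_left(x, fs, level_db=3.0, delay_us=300.0)
```

Note that the delay is quantized to whole samples (300 µs becomes 14 samples ≈ 292 µs at 48 kHz); a fractional-delay filter would be needed for finer control.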
 
On the Griesinger topic, I had a long conversation yesterday with a couple of well-informed audio engineers about this exact topic. They were not nearly as negative about the idea as many of those here, but we did conclude something that has not come up here at all: in music, the waveforms are changing constantly, impulsively. This means that good localization has to be done "as fast as possible". The longer it takes, the more quickly reverberation and reflections will swamp out the effect and create what Griesinger calls "muddiness" - something that I implicitly understand. In this context, even a small advantage in how fast we can localize could become a major asset to our "perception".

So the question is not "can we localize in this or that frequency range", but "which frequency range does this fastest". I think there is only one answer to that question.
 
Yes, it is sad if people refuse to look into Griesinger's papers and then give counterproductive advice regarding time panning of this experiment. :rolleyes:

Once again, Griesinger's argument in a nutshell:
Every harmonic is "glued" in phase to its fundamental frequency. You can see the "beat" of the fundamental frequency in each and every filter bank of the basilar membrane where a harmonic exists. This means that the brain can (and will) reconstruct the frequency and phase of the fundamental (and possibly the lowest harmonics too) from the higher harmonics, if the fundamental is missing or of doubtful phase.

By first changing the loudness relation (which is a change in phase already) and then changing the phase relation between <700 Hz and >700 Hz, we have done our best to destroy this mechanism. :eek:

Rudolf

Since I now understand this was directed at me, let me again state that Griesinger is kitchen sink science, and this quote only proves it. No way each harmonic is glued to its fundamental frequency since all speaker systems exhibit phase shifts. Experiments, even with excessive phase shifts, have not demonstrated any detrimental effect on localization.

But what I really don't understand is how my advice on phase panning can be seen as counterproductive, because Pano's .wav files demonstrate I was right on the money there.

Let me suggest you read the Sengpiel site, which explains many of these matters in clear and well-founded terms. I prefer not to spend my time on gobbledygook.

The link to the article on Blumlein is something that requires a bit more study on my side, but as far as I can tell he is not talking about the same thing.