Linkwitz Orions beaten by Behringer.... what!!?

Griesinger talks about natural sound sources. Now compare this to 2-speaker stereo. Huge difference!
Markus,
our ears don't know about stereo. Stereo has not been part of the evolution of hearing. Stereo only works as long as it can fake "nature". If it can't fake it, it doesn't work. Our brain will go some way to comprehend the anomalies of stereo as "distorted" natural sound, but only so far. It doesn't matter if there is a huge difference between natural sound and stereo. What matters is the point where our brain no longer accepts the stereo fake. Can you explain where this threshold is for vertical reflections?
 
Rudolf - do you see the Greisinger papers as significant as I do?
Earl,
I'm not sure if you see them as significant as I do. ;)
At least you still have to learn how to write the name of the man correctly. :p:D
OK, that last sentence was a bit naughty. :eek: :)

Do you remember the studio monitor requirement paper posted by bodhanxxxx?
http://www.diyaudio.com/forums/multi-way/103872-geddes-waveguides-586.html#post3429123
On that AES committee writing group you find Günther Theile next to David Griesinger.
In his doctoral thesis from 1980, Theile already explained why summing localization etc. is not as simple as the early work of Haas, Blauert and Zwicker (to name only some of the German contenders) suggests. There is much more top-down processing and masking involved than the old experiments with "clinically clean" signals in anechoic rooms could resolve.
While the "old men" might have said "listening is hearing", I believe that Griesinger would define it as "listening is masking". That's what I feel comfortable with.

Couple the Griesinger papers with the paper that you posted on resolution of amplitude modulations in the ear and one begins to see how potent the Greisinger hypothesis could be.
In my advertising job I liked the joke about the agency that was asked to develop a campaign aimed at blind customers. After two months of internal debate, the agency had to turn down the job: the creatives could not agree on whether to shoot the TV spot in color or in black and white.

We all should concentrate even more on the receiver side of listening (the hearing process) than on the source/transmitter side.

Rudolf
 
Rudolf -

"I before e" - I keep saying that to myself and I keep getting it wrong. :)

For most of my career I did not pay much attention to perception. Loudspeakers back then were so bad that getting the engineering right was about all that could be handled. But now things have changed and we are at a point where we need to decide what the right approach is and, most importantly, the right compromise (sorry Markus, compromise is part of the problem).

While Griesinger and I are both physicists, he went straight into the perceptual stuff while I was doing the loudspeaker engineering stuff, so compared to him I am quite new to the perceptual end of the problem. I was thinking of doing a second edition of my book, and it dawned on me that it really should lead off with what we know about perception and then lead into how one designs for those requirements. I came to realize that I was a little unclear on several things, things that the Griesinger papers make very clear to me now. The whole process is now clear as a bell in my mind, and I feel confident in writing the first chapter using Griesinger's theories; I was stalled until I read this work. Of course not everyone will agree, that never happens, but I see that even Toole is softening his position on a lot of the things I had trouble with, things that Griesinger has now explained.

I still don't understand the 10 µs figure in that AES paper. How can that hold for multiple listeners?

By the way, I was good friends with Prof. Zwicker. He was a guest at my home a few times and I spent a week with him as his host on a US lecture tour. He was a very nice man whom I had tremendous respect for. I also know Klaus Genuit quite well. Both of them were from my noise control days. Klaus is a really sharp guy, and very wealthy now, as I understand it.
 
More frequency shifting

OK, I've done some more testing, this time with a voice I'm sure you'll recognize. I have attached four files below; they are zipped MP3 files.

File A is mono for calibration.
Files B,C,D are panned L-R or R-L in different ways.
There is one pan of the entire spectrum
One pan of only the content >700Hz
One pan of only the content <700Hz
I'll not tell which is which, but you should be able to hear it easily.

The pans consist of the same two spoken phrases, both starting at center; the first phrase pans in one direction, the second pans the opposite way.

I can definitely hear what Griesinger is talking about with the >700Hz dominance, but there is more to it than that. Listen for yourself.
 

Attachments

  • A.zip (418.2 KB)
  • B.zip (893.4 KB)
  • C.zip (893 KB)
  • D.zip (887.9 KB)
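
For anyone who wants to reproduce this kind of test, here is a minimal sketch of a band-split pan in Python, assuming a mono WAV input and a 700 Hz Butterworth split; the file names, filter order, and pan law are illustrative stand-ins, not Pano's actual processing chain:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

fs, mono = wavfile.read("voice_mono.wav")              # hypothetical input file
mono = mono.astype(np.float64) / np.max(np.abs(mono))

# Split the signal at 700 Hz with 4th-order Butterworth low/high passes
sos_lo = butter(4, 700, btype="lowpass", fs=fs, output="sos")
sos_hi = butter(4, 700, btype="highpass", fs=fs, output="sos")
low, high = sosfilt(sos_lo, mono), sosfilt(sos_hi, mono)

# Constant-power pan: theta = pi/4 is center, 0 is hard left
t = np.linspace(0.0, 1.0, len(mono))                   # pan position over time
theta = (np.pi / 4) * (1.0 - t)                        # sweep center -> hard left

# Pan only the >700 Hz half; keep the <700 Hz half centered
left = np.cos(theta) * high + np.cos(np.pi / 4) * low
right = np.sin(theta) * high + np.sin(np.pi / 4) * low

stereo = np.stack([left, right], axis=1)
stereo /= np.max(np.abs(stereo))                       # avoid clipping
wavfile.write("voice_hi_panned.wav", fs, (stereo * 32767).astype(np.int16))
```

Swapping which half gets the time-varying angle yields the <700 Hz version, and applying it to the full signal gives the broadband pan.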
Markus,
our ears don't know about stereo. Stereo has not been part of the evolution of hearing. Stereo only works as long as it can fake "nature". If it can't fake it, it doesn't work. Our brain will go some way to comprehend the anomalies of stereo as "distorted" natural sound, but only so far. It doesn't matter if there is a huge difference between natural sound and stereo. What matters is the point where our brain no longer accepts the stereo fake. Can you explain where this threshold is for vertical reflections?

I was talking exactly about those distortions that are inherent to 2-speaker stereo sound reproduction.
 
You're welcome!
Basically the takeaway was:
  • Neither of us had any trouble locating full-range noise. No real surprise.
  • Neither of us had trouble locating band-limited noise all the way down to <50 Hz when the upper ranges were not present. Not the conventional wisdom.
  • I was much better at locating bass <120 Hz when the upper registers were present.
  • Widely differing bass and upper-register locations were confusing to me.

Further testing will involve splitting noise and music signals in half at 700 Hz and moving the two halves separately. I do not expect the top half to be difficult to locate. I don't know whether the bottom half will be. A lower split point could also be used.
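
As an aside, band-limited noise stimuli like those used in this test are straightforward to generate; a rough sketch, with the band edge, filter order, and file name as illustrative assumptions:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

fs, dur = 48000, 5.0
noise = np.random.randn(int(fs * dur))                 # white noise

# Steep 8th-order low-pass so essentially nothing above 50 Hz remains
sos = butter(8, 50, btype="lowpass", fs=fs, output="sos")
lf = sosfilt(sos, noise)
lf /= np.max(np.abs(lf))                               # normalize

wavfile.write("noise_sub50Hz.wav", fs, (0.9 * lf * 32767).astype(np.int16))
```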
I am not a fan of noise that isn't related to what we experience from identifiable objects, but your list is similar to what I would expect. I would be interested, though, in what excursions the drivers were driven to relative to their BL curves, and what harmonics can arise as a result.
 
Thank you Pano,
this sounds like a valid test. Probably I should not write which is which, but it seems clear to me. I can't identify the content <700 Hz as moving per se, but I hear that something weird is happening.

Now we have to ask ourselves if there is ever a good reason in "real" recordings to pan one part of the spectrum in such an extreme way against the other part of the spectrum.

Rudolf
 
Thank you Pano,
this sounds like a valid test. Probably I should not write which is which, but it seems clear to me. I can't identify the content <700 Hz as moving per se, but I hear that something weird is happening.

With my fantastic Geddes speakers I can hear it moving ;)

Now we have to ask ourselves if there is ever a good reason in "real" recordings to pan one part of the spectrum in such an extreme way against the other part of the spectrum.

The point is that such spatial distortion happens because of modes and boundary effects. It's not an error in the recording but an error induced by the listening room.
 
OK, I've done some more testing, this time with a voice I'm sure you'll recognize. I have attached four files below; they are zipped MP3 files.

File A is mono for calibration.
Files B,C,D are panned L-R or R-L in different ways.
There is one pan of the entire spectrum
One pan of only the content >700Hz
One pan of only the content <700Hz
I'll not tell which is which, but you should be able to hear it easily.

The pans consist of the same two spoken phrases, both starting at center; the first phrase pans in one direction, the second pans the opposite way.

I can definitely hear what Griesinger is talking about with the >700Hz dominance, but there is more to it than that. Listen for yourself.
Ha, the imbalance of my ears makes me feel that you mixed the spectrum of the pans in a single track. The recording sounds as if there was a shield between the mic and the speaker, to keep the mic clean? Oh, the bit rate is really low. Maybe that's what's wrong.
 
The point is that such spatial distortion happens because of modes and boundary effects. It's not an error in the recording but an error induced by the listening room.
I agree that bass can be "positioned" by small-room acoustics. But it will stay in place regardless of volume changes. It might move with frequency. In both cases we would have to ask again: do we hear it when listening to music/speech in a reverberant room?

Rudolf
 
That's not the "physics of the situation". Greisinger posts ILDs and ITDs for the head somewhere on his site (no, I don't have the link!). These functions get vanishingly small below 700 Hz. Nothing at all compared to >700 Hz. That IS the physics.

I only went quickly through Griesinger's website, because as soon as a writer displays a genuine lack of knowledge, there are better things to do with my time. The quotes that did it for me are: "For example, the frequency selectivity of the basilar membrane is approximately 1/3 octave (~25% or 4 semitones), but musicians routinely hear pitch differences of a quarter of a semitone (~1.5%). Clearly there are additional frequency selective mechanisms in the human ear." and "The fundamentals of musical instruments common in Western music lie between 60 Hz and 800 Hz, as do the fundamentals of human voices. But the sensitivity of human hearing is greatest between 500 Hz and 4000 Hz, as can be seen from the IEC equal loudness curves. In addition, analysis of frequencies above 1 kHz would seem to be hindered by the maximum nerve firing rate of about 1 kHz. Even more perplexing, a typical basilar membrane filter above 2 kHz has three or more harmonics from each voice or instrument within its bandwidth. How can we possibly separate them?"

The basilar membrane is the wrong place to look to understand the frequency selectivity of the ear; you have to look at the organ of Corti, let's say the pick-up element sensing the basilar membrane. It consists of an array of hair cells constituting band-pass filters. We have even come to know with great precision what the filter slopes of these tuned filters are (very steep and asymmetrical). If Griesinger missed out on this, what could I ever learn from him?
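
For reference, the textbook selectivity numbers are easy to check. The sketch below uses the Glasberg & Moore (1990) ERB approximation of auditory-filter bandwidth, which is standard psychoacoustics rather than a claim from either side of this exchange:

```python
# Compare auditory-filter bandwidth (ERB) with a 1/3-octave band
def erb_hz(f):
    """Equivalent rectangular bandwidth of the auditory filter at f (Hz),
    per Glasberg & Moore (1990)."""
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def third_octave_hz(f):
    """Bandwidth of a 1/3-octave band centered at f (Hz)."""
    return f * (2 ** (1 / 6) - 2 ** (-1 / 6))   # ~0.232 * f

for f in (125, 500, 1000, 4000):
    print(f"{f:5d} Hz: ERB = {erb_hz(f):6.1f} Hz "
          f"({100 * erb_hz(f) / f:4.1f}% of f), "
          f"1/3 octave = {third_octave_hz(f):6.1f} Hz")
```

Above roughly 500 Hz the ERB comes out well under 1/3 octave, which is the selectivity figure under dispute here.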

This being said, Earl, let's come back to your statement on the physics of the situation.

Interaural time delay is, from a physical point of view, completely unrelated to frequency. It is fully determined by the shape of the triangle formed by the two ears and the sound source, together with the speed of sound.

ITD is exactly the same for all frequencies. It is just that at higher frequencies the half-wavelength at some point becomes shorter than the acoustic path difference between the ears, and the interaural phase becomes ambiguous. At this point, localization by ITD breaks down. At lower frequencies, however, ITD becomes the dominant localization mechanism.

Also, let's not forget that 700Hz is already 5 octaves into the audible frequency range. Do you really think the brain throws away half of the audible spectrum for localization? No way.
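
It is easy to put numbers on the geometry argument. Here is a back-of-envelope sketch using the classic Woodworth spherical-head approximation (an assumed model with a typical 8.75 cm head radius, not a figure from this thread):

```python
import math

a = 0.0875     # head radius in m (typical assumption)
c = 343.0      # speed of sound in m/s

def itd_s(azimuth_deg):
    """Woodworth ITD for a distant source at the given azimuth."""
    th = math.radians(azimuth_deg)
    return (a / c) * (th + math.sin(th))

max_itd = itd_s(90)                     # source fully to one side
f_ambiguous = 1.0 / (2.0 * max_itd)     # half a period equals the max ITD
print(f"max ITD = {max_itd * 1e6:.0f} us")
print(f"interaural phase becomes ambiguous above ~{f_ambiguous:.0f} Hz")
```

Notably, the resulting ambiguity frequency lands right around the 700 Hz figure being debated.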
 
I agree that bass can be "positioned" by small-room acoustics. But it will stay in place regardless of volume changes. It might move with frequency. In both cases we would have to ask again: do we hear it when listening to music/speech in a reverberant room?

Rudolf

Yes, we hear it; it's part of a room's distinct acoustics. But the goal of sound reproduction is to enable the recording to project different aural spaces, and spatial errors induced by the listening room prevent that from happening. The effect is not black and white; there are rooms that are better and rooms that are worse.
 
vacuphile, I think it is not reasonable to consider ITD with steady-state waves. Rather, it is a sudden change of spectral content that is associated with ITD. Once the spectral content is more or less steady, the shift in time should give an impression of a change in sound rather than a change in localization, and the intensity projects the left/right location. Now, if ITD and IID do not match up for the same recorded instrument, the image will seem to shift and sway.
 
Pano, it is an interesting test, which to my ears proves that panning either the low end or the high end can make the image move. Yet there is a fair amount of cognitive dissonance in those cases. On the other hand, this is the natural state of the auditory system; it always has to select, from ambiguous cues, where a sound source originates. It is not just some automatic process that happens in the brain on the basis of auditory cues: head movements play an important role, and so do visual perceptions. The ventriloquist effect is well researched.

Pano, do you know what the SOTA is when it comes to panning not just on loudness, but also on time delay? If you have the ability to pan loudness and time delay simultaneously, it would make me happy to do some simple math and provide you with a recipe.

p.s. Sorry my earlier post became such a typographic mess, but I found no way to knock it into shape.
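
On the question of panning simultaneously on loudness and time delay, here is a minimal sketch of one way to do it; the constant-power pan law, the ~600 µs maximum delay, and the integer-sample delay are all illustrative choices, not an established state-of-the-art recipe:

```python
import numpy as np

def pan_level_and_delay(mono, fs, pan):
    """pan in [-1, 1]: -1 = hard left, 0 = center, +1 = hard right.
    Constant-power level pan plus an ITD-style delay on the far ear."""
    theta = (pan + 1.0) * np.pi / 4.0        # 0 (hard left) .. pi/2 (hard right)
    g_left, g_right = np.cos(theta), np.sin(theta)
    d = int(round(0.0006 * abs(pan) * fs))   # up to ~600 us interaural delay
    left = np.pad(g_left * mono, (d if pan > 0 else 0, 0))    # delay far ear
    right = np.pad(g_right * mono, (d if pan < 0 else 0, 0))
    n = max(len(left), len(right))
    left = np.pad(left, (0, n - len(left)))                   # equalize lengths
    right = np.pad(right, (0, n - len(right)))
    return np.stack([left, right], axis=1)   # (samples, 2) stereo array
```

Sweeping the pan value over time, as in Pano's files, would mean applying this blockwise or with a time-varying fractional delay.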