Putting the Science Back into Loudspeakers

Surely it's due to interaural crosstalk. Even the slightest head movements reveal the speaker locations, or at least their directions, because distance perception can be skewed depending on the source material. Anyway, be it a location or a direction, it is no good, and something should be done about it.

It is a direction, but yes, something should be done, and I think that lowering the IACC is the way. A flooder as such helps, but it needs good positioning too.
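For anyone who wants to put a number on it, here is a minimal sketch of how IACC is commonly computed from a binaural (dummy-head or in-ear) recording: the peak of the normalized cross-correlation between the two ear signals within roughly +/-1 ms of lag. The function name and the 1 ms window are my assumptions, not something from this thread:

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Peak of the normalized interaural cross-correlation within
    +/- max_lag_ms (about the largest possible interaural delay)."""
    full = np.correlate(left, right, mode="full")   # correlation at all lags
    zero_lag = len(right) - 1                       # index of lag 0
    max_lag = int(fs * max_lag_ms / 1000)
    window = full[zero_lag - max_lag : zero_lag + max_lag + 1]
    denom = np.sqrt(np.dot(left, left) * np.dot(right, right))
    return np.max(np.abs(window)) / denom           # 1.0 = identical ear signals
```

Lower values mean the two ear signals are less alike, which is what a flooder or a diffuse set of early reflections is trying to achieve.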

Of course nothing can beat a Stereolith-like single bipolar in that regard, with the direct sound blocked, with or without an additional center channel.

But I wouldn't put too much emphasis on visual cues here, because in the other experiments, where one creates an appropriate set of early reflections, the speaker becomes impossible to localise even if it is just 2 m in front of you and you are staring at it. For some people it might be hard to override the visual cues, but I think I'm not one of them.
- Elias

Not when the visual localisation cue and the auditory direction cue reinforce each other - because you can see a speaker in that direction; with eyes closed you wouldn't hear the sound coming from the floor - from the speakers.
 
Fortunately music is composed of transients, not just ongoing tones.

I just set up a demo in our warehouse of a very wide-screen projection system. The image was the priority, but I added audio to complete the effect. It was amazing how little you noticed the warehouse acoustics on music. When dialogue came on it was instantly "a warehouse", but music was poor at revealing the heavily blurred acoustics.

I have noticed the same thing with electronic ambience synthesizers: you add what you think is a low or subtle amount on music, and if an announcer comes on (say, from FM radio) then he is swimming in reverb.

I find the same when I go to classical concerts. 90% of the time you can't hear the acoustics of the hall; the continuity and slow decay of the music hide them. You have to wait for just the right percussive bits to sense any decay.

This is also why acousticians have switched over to EDT (Early Decay Time: the decay rate over the first 10 dB, extrapolated to a full 60 dB decay) as the universal measure of reverberance.
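For the curious, a minimal sketch of how EDT falls out of a measured impulse response, assuming the usual Schroeder backward integration; the function name and the bare-bones handling are mine:

```python
import numpy as np

def early_decay_time(ir, fs):
    """EDT: time for the Schroeder decay curve to fall its first 10 dB,
    scaled by 6 to express it as an equivalent 60 dB decay time.
    Assumes the impulse response is long enough to decay past -10 dB."""
    energy = np.asarray(ir, dtype=float) ** 2
    schroeder = np.cumsum(energy[::-1])[::-1]            # backward integration
    decay_db = 10 * np.log10(schroeder / schroeder[0])   # 0 dB at t = 0
    t10 = np.argmax(decay_db <= -10.0) / fs              # first -10 dB crossing
    return 6.0 * t10
```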

Voice is more revealing because it is more intermittent, and also because we are used to listening to a talker from 3 ft away. Raise its level 20 dB and project it from a PA 60 ft away and it can't sound natural, no matter how accurate the system is.
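A quick sanity check on those numbers with plain inverse-square arithmetic (free field assumed, so a real room only makes it worse):

```python
import math

near_ft, far_ft = 3.0, 60.0                  # talker at 3 ft vs. PA at 60 ft
drop_db = 20 * math.log10(far_ft / near_ft)  # inverse-square level drop
print(f"{drop_db:.1f} dB")                   # ~26 dB
```

So the PA already has to make up roughly 26 dB just to match the close-talker level, and the direct-to-reverberant balance has shifted long before that.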

Regards,
David S.
 
For this to work the source has to be very close to one's head. I once had a dipole sub directly behind the listening seat. Very deep and clean bass (nearly no modal effects). The problem is, as soon as you increase the SPL, modal effects ruin the response.



I would have to re-read the Griesinger paper. I guess you're talking about "Loudspeaker and listener positions for optimal low-frequency spatial reproduction in listening rooms"? All I remember is a lot of "ifs".

Within about a foot and a half, where you can see the driver, and lifted off the floor a bit with a platform stand. You still get modal effects; that's not the point. The point is that the stereo separation is reasonably presented. The frequency response can always be EQ'ed flat, and besides, you need to compensate for delay anyway.


As for the other method - I experimented with that before Griesinger's paper, and it wasn't exactly the same as what he mentions (if at all). I do remember though that it sounded "similar", and *might* provide a plausible explanation. I think that's one of the papers.
 

Problem is, even if you EQ the response of each sub, if there are strong modal effects then you won't get much stereo anymore because of how the two sources (and the room) combine.
 

No.

The proximity (at this range) trumps modal effects for stereo presentation.

While the modes will coincide with the output, they are, practically speaking, "on top" of the source (i.e. NOT providing an alternative, higher-intensity "source" that competes with the direct sound). While there will be other areas in the room providing alternative "sources", their intensity and timing will be different enough to be distinguished and "resolved" by the listener.
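To put rough numbers on the proximity claim, a back-of-envelope sketch: the 0.5 m listening distance matches the "foot and a half" mentioned earlier, while the 3.5 m reflection path is my assumption, and boundary absorption is ignored (it would only favour the direct sound further):

```python
import math

C = 343.0                                  # speed of sound, m/s
direct = 0.5                               # ear-to-sub distance, m (~1.5 ft)
reflected = 3.5                            # assumed shortest wall-bounce path, m

level_adv_db = 20 * math.log10(reflected / direct)  # direct-sound advantage
delay_ms = (reflected - direct) / C * 1000          # reflection arrival lag

print(f"direct leads by {level_adv_db:.1f} dB and {delay_ms:.1f} ms")
# -> direct leads by ~16.9 dB and ~8.7 ms
```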

Again, (if possible) try it (..with a proper recording).


Perhaps another "tack" would be to ask *why* room modes detract from stereo separation?
 
Scott,
It would seem that we are discussing the stereo image being localized by the low frequencies rather than by the higher frequencies. I agree that, with a source very close to the ears, the first wavefronts are going to be perceived before the reflected modal waveforms, and even then the reflected wave will have been attenuated to some degree. Perhaps at a high SPL in a steady-state condition it would become harder to discern the directionality of the low frequencies, but are we talking about that or about a normal listening level?
 
I agree that, with a source very close to the ears, the first wavefronts are going to be perceived before the reflected modal waveforms, and even then the reflected wave will have been attenuated to some degree.

A reflected wave at low frequencies? Even before a single period has passed, we already perceive the room response. Direct and indirect sound have to be perceived as one in acoustically small rooms. That's why I don't really follow Scott's argument.
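A back-of-envelope illustration of the scale involved, assuming the usual 343 m/s for the speed of sound:

```python
C = 343.0                        # speed of sound, m/s
f = 40.0                         # a typical subwoofer frequency, Hz

period_ms = 1000.0 / f           # one cycle lasts 25 ms
travel_m = C / f                 # wavefront travel per cycle: ~8.6 m

print(f"one {f:.0f} Hz cycle = {period_ms:.0f} ms; sound travels {travel_m:.1f} m")
```

Every boundary of a domestic room lies well inside 8.6 m, so the first reflections are back at the ear within a fraction of a single cycle.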
 
Markus,
What I am not sure about is whether we need to hear a complete cycle before we can identify the rate of rise of the waveform. If we need the completed waveform then your assumption would hold true. If we can perceive the rate of rise of the physical waveform then we don't have to wait for the completed wave to identify the directionality. Can someone answer this question? I will be gone for a few hours and will look for the answer.
 
Direct and indirect sound have to be perceived as one in acoustically small rooms.

That's why I don't really follow Scott's argument.

Is that correct for open-air headphones? (Or closed-back headphones "pulled" away from your head?)

In the proximal region, under a meter away (and particularly at near-90-degree angles "facing" the ear), it's no longer about diffraction from HRTFs but rather about intensity relative to the distance between the ears. (Which is a long-winded way of saying that intensity from the direct sound dominates at distances of less than a meter.)
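A crude way to see that intensity argument: ignore diffraction entirely and treat each ear as a point, with an assumed 0.18 m near-ear/far-ear path difference (both simplifications are mine):

```python
import math

PATH_DIFF = 0.18                   # assumed near-ear/far-ear path difference, m

def distance_only_ild(near_m):
    """Interaural level difference from the inverse-square law alone,
    for a source hard to one side; head diffraction is ignored."""
    return 20 * math.log10((near_m + PATH_DIFF) / near_m)

for d in (0.25, 0.5, 1.0, 3.0):
    print(f"{d:4.2f} m -> {distance_only_ild(d):.1f} dB")
# 0.25 m -> 4.7 dB, 0.50 m -> 2.7 dB, 1.00 m -> 1.4 dB, 3.00 m -> 0.5 dB
```

The distance-ratio ILD only amounts to anything in that proximal region; by a few meters it has collapsed to a fraction of a dB, which is the sense in which intensity "takes over" close up.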


Of course there are also several errors in treating body diffraction as the primary indicator of interaural level differences at non-head-shadowing frequencies as well, but that's beyond current theory.
 
Scott,
I wasn't thinking about headphones when I posted my question, but it does go to what I was saying. Obviously there is neither room in a headphone nor distance to the ear for a 40 Hz waveform to develop. We detect the frequency and pressure wave just the same. So I would have to agree that we are going to hear the proximate near-field wave before the reflections from any other surface. And I do think that the secondary waveform will be attenuated to some degree, as there are no perfectly rigid boundary layers in a room. If there were, we could block all sound transmission out of a room, and we all know that is not possible even in an isolation chamber.
 
Scott,
I wasn't thinking about headphones when I posted my question, but it does go to what I was saying. Obviously there is neither room in a headphone nor distance to the ear for a 40 Hz waveform to develop.

We detect the frequency and pressure wave just the same.

Yes, headphones can provide a useful "check" under certain circumstances.

While our detection results in the same apparent manner, the current thought is that the way we resolve it differs at lower frequencies between near-field and mid-field.

I think I remember it (the way we primarily resolve location at lower frequencies, as an "overlay" to intensity) being expressed as a head/torso diffraction element in the mid-field, and an ear-distance "pinging" (largely a timing) element in the near-field... or something like that.

IMO, while both have some basis for inclusion, neither is really correct. :eek:
 