Localizing Bass

But why

It will be a slowly rising and falling signal, much like bass in music.

Steady state is just far too easy a test. Music is never steady state.

and

And try the test not in the steady state.


Because

I think that a strong case could be made that we hear LF ONLY in the steady state.

and

I only ever look at steady state signals at LF because I am convinced that this is all that we can perceive.


:scratch:
 
Art

Reasonable comments.

But I fail to understand the mechanism by which one could localize a LF sound. There are virtually no SPL differences at the ears (maybe there are, but how could that be?) and the phase differences have to vanish. So what is the mechanism? If it's true, then the theory of sound localization as I know it cannot be correct.
 
For anyone interested in low freq localisation, take a look at

Human Sensitivity to Interaural Phase Difference for Very Low Frequency Sound, M. Irfan Jan Mohammed and Densil Cabrera, Proceedings of ACOUSTICS 2008.

Their results show localisation capability down to about 30 Hz.


- Elias
 
Elias

You are confusing time windows here. 2 s IS steady state in a small room as far as a measurement is concerned, but it is NOT steady state as far as the difference between a signal being on for 2 seconds versus being on continuously in a localization test. Different tests, different time-window principles.

One comment refers to the audibility of room reflections of a few ms at low frequencies and why we cannot detect those reflections; the other refers to the duration of a sound in a localization experiment. They are not the same thing, and completely different time intervals apply.
 
Elias

ABSTRACT
Recent studies using subwoofers have provided consistent evidence that localisation along the left-right axis can occur for sound in the frequency range below 100 Hz, and even includes signals in the lowest octave of human hearing (however, front-back localisation fails for low frequency sound). If such left-right localisation is possible, the most likely explanation is a surprisingly acute sensitivity to interaural time or phase difference in the very low frequency range. The present study investigates this hypothesis using stimuli presented via headphones in a quiet anechoic room. Stimulus signals consisted of 1/3-octave noise bands centred on frequencies from 20 Hz -100 Hz with interaural time differences ranging between ±650 microseconds. The stimulus duration was 800 ms and was multiplied by a hanning window resulting in a smooth fade-in and fade-out (with the two channels faded together, regardless of the interaural time difference - hence this might be thought of as a frequency-dependent linear phase shift rather than a simple time difference). Tested on a head and torso simulator, the presentation sound pressure level was 40 dB(A), and distortion and background noise were both negligible. The subjects' task was to identify, on a scale from left to right, the location of the auditory image (i.e. the task was 'lateralisation' rather than localisation). Results show mild lateralisation for frequencies at and above 31.5 Hz with the lateralization of the image becoming clearer in the higher frequencies and the higher time delays across the frequency range tested, and so support the hypothesis.

Doesn't quite say what you are saying it does, but it is interesting and a very recent finding. Even the authors seem to find it "surprising", just as I do.
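The stimulus construction the abstract describes (1/3-octave noise, the ITD applied as a frequency-dependent linear phase shift, one shared fade window over both channels) can be sketched roughly as follows. The sample rate, the 2^(1/6) band-edge factor, and the brick-wall FFT filtering are my assumptions, not the paper's actual method.

```python
import numpy as np

FS = 48000  # sample rate in Hz (assumed; the abstract does not state one)

def stimulus(fc_hz, itd_us, dur_s=0.8, fs=FS):
    """1/3-octave noise band with the ITD applied as a linear phase shift,
    then one shared hanning fade over both channels (per the abstract)."""
    n = int(dur_s * fs)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spec = np.fft.rfft(np.random.randn(n))
    # Brick-wall 1/3-octave band limits: fc / 2^(1/6) .. fc * 2^(1/6).
    lo, hi = fc_hz / 2 ** (1 / 6), fc_hz * 2 ** (1 / 6)
    spec[(freqs < lo) | (freqs > hi)] = 0
    left = np.fft.irfft(spec, n)
    # A frequency-dependent linear phase shift equals a pure delay of itd_us.
    right = np.fft.irfft(spec * np.exp(-2j * np.pi * freqs * itd_us * 1e-6), n)
    w = np.hanning(n)  # identical fade on both ears keeps them time-aligned
    return left * w, right * w

L_ch, R_ch = stimulus(fc_hz=31.5, itd_us=650)  # 650 us: the abstract's maximum
```

Fading both channels with the same window, as the authors note, means the envelope carries no interaural delay; only the fine structure inside the band does.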
 
Art

Reasonable comments.

But I fail to understand the mechanism by which one could localize a LF sound. There are virtually no SPL differences at the ears (maybe there are, but how could that be?) and the phase differences have to vanish. So what is the mechanism? If it's true, then the theory of sound localization as I know it cannot be correct.
Good question, and I don’t have a definitive answer as to the mechanism of LF localization.

Although there is little LF SPL difference between the ears, there is still a time difference of arrival between them, set by the distance the sound must travel around the head.
On my head, that distance is 14.5" over the top, 11.5" around the face, and 9.7" around the back. Arrival time and order, more than level, make localization possible.
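As a quick sanity check of that arrival-time argument: the path lengths are the measurements quoted above, and c = 343 m/s (room temperature) is my assumption.

```python
# Interaural time difference (ITD) from the extra path around the head.
SPEED_OF_SOUND = 343.0  # m/s, assumed room-temperature value
INCH = 0.0254           # metres per inch

def itd_us(extra_path_inches):
    """Interaural time difference in microseconds for a given extra path."""
    return extra_path_inches * INCH / SPEED_OF_SOUND * 1e6

# For a source hard to one side, the far ear's extra path is roughly half
# the 14.5" around-the-head distance.
print(round(itd_us(14.5 / 2)))  # -> 537 microseconds
```

That lands squarely in the ±650 microsecond range of interaural delays used in the Jan Mohammed and Cabrera study quoted earlier, so the geometry and the experiment are at least consistent.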

Your speakers were just congratulated for their PRAT (pace, rhythm and timing, or self-righteous dumbass, LOL) factor; as you have noted for some time, the hearing system is very sensitive to arrival time differences.

Sound from different angles takes different time intervals to arrive at the ears. The differing path lengths around the head, together with ear shape, make three-dimensional location from two sensors possible.
The ears are “cross wired” to opposite sides of the brain.
Low frequency sound is often said to be "felt" as much as heard; certainly there are chest resonances and vibrations in our feet that can be felt, and the time differential to those parts of the body may also be linked in the brain to provide LF location clues, in tandem with the ears' cross-wiring between brain hemispheres.

Anyway, I don't know exactly why I can't hear above 16 kHz, why I have the same threshold of hearing for 30 Hz tones as for 4 kHz, or why I can identify LF directionality, but not knowing doesn't change those facts.
While I am curious about the exact mechanism that makes LF localization possible, I resign myself to the fact that there are many things I will never fully understand.

Art Welter
 
Elias

You are confusing time windows here.

No, I'm not confusing anything. I just cannot accept "steady state is all we hear" type of claims.


Elias



Doesn't quite say what you are saying it does, but it is interesting and a very recent finding. Even the authors seem to find it "surprising", just as I do.


Well, the duplex theory of hearing by Rayleigh, which describes ITD, has been around for more than 100 years. I would not call it very recent, but I guess it depends on the perspective.


- Elias
 
Doesn't quite say what you are saying it does, but it is interesting and a very recent finding. Even the authors seem to find it "surprising", just as I do.

It's interesting to see this studied, but a couple of possible flaws in the study (at least based on the quoted abstract) stood out to me:

1) If an anechoic chamber was used as a quiet test environment, why was the test done with headphones and simulated ITD variations instead of simply setting up a circle of subwoofers hidden behind a curtain?

It's well known that a significant portion of the sense stimuli for low bass frequencies comes from vibrations picked up by the rest of the body, not just the eardrum; who is to say that this is not also related to, and important for, localisation of low frequencies?

In-room bass pressure waves also propagate into the breathing passages that link up with the Eustachian tubes behind the eardrum, whereas bass delivered only into the ear canals does not; between the two factors, bass from headphones never sounds the same as in-room bass.

If they're trying to isolate whether ITD at the eardrums on its own is enough for localisation at low frequencies, fair enough; but if the object is to determine whether low frequencies are localizable at all, and to what degree of accuracy at what frequency, the test seems flawed if done only in a simulated fashion with headphones. The fact that they got any sort of positive result is encouraging, though.

2) Even if a study is carried out with headphones and/or subwoofers in an anechoic chamber to establish the innate ability of our hearing system to localize very low frequencies, what relevance, if any, does it have to localising bass in a living room where we are surrounded by a multitude of reflections?

If the mechanism at work in an anechoic environment is very small ITDs, it seems likely that this weak cue will be completely overwhelmed by the mass of conflicting reflections and by the spurious inter-ear amplitude differences at some bass frequencies caused by lateral standing waves.

In other words, it's an interesting experiment, but it may have no real relevance to bass localisation in a small room.
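The lateral standing waves behind those inter-ear amplitude differences sit at the width-axis mode frequencies f_n = n * c / (2W), so a listener whose ears straddle different points of a mode shape sees different SPL at each ear. A quick tabulation; the 5 m room width is purely an illustrative assumption, not from the post.

```python
# Width-axis (lateral) axial mode frequencies: f_n = n * c / (2 * W).
C = 343.0  # speed of sound, m/s (assumed)

def lateral_mode_freqs(width_m, count=4):
    """First `count` lateral axial mode frequencies for a room of given width."""
    return [n * C / (2 * width_m) for n in range(1, count + 1)]

print([round(f, 1) for f in lateral_mode_freqs(5.0)])  # [34.3, 68.6, 102.9, 137.2]
```

So for an ordinary living-room width, several lateral modes fall right in the subwoofer band, which is why they can mask or mimic genuine ITD cues there.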
 
It's interesting to see this studied, but a couple of possible flaws in the study (at least based on the quoted abstract) stood out to me:

1) If an anechoic chamber was used as a quiet test environment, why was the test done with headphones and simulated ITD variations instead of simply setting up a circle of subwoofers hidden behind a curtain?

For one thing, most anechoic chambers aren't anechoic below 60 Hz. A 20 Hz wave is almost 60 feet in length. 😱
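The wavelength figure is easy to verify with lambda = c / f, assuming c of about 343 m/s:

```python
# Wavelength check: lambda = c / f, converted to feet.
def wavelength_ft(freq_hz, c=343.0):
    """Wavelength in feet for a given frequency, c in m/s (assumed 343)."""
    return c / freq_hz / 0.3048  # 0.3048 m per foot

print(round(wavelength_ft(20)))  # -> 56 ft, i.e. "almost 60 feet"
```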
 
Art

Reasonable comments.

But I fail to understand the mechanism by which one could localize a LF sound. There are virtually no SPL differences at the ears (maybe there are, but how could that be?) and the phase differences have to vanish. So what is the mechanism? If it's true, then the theory of sound localization as I know it cannot be correct.
Earl,
I would say that one should not restrict our perception to only what enters the ear canals directly; rather, we should assume that body perception might play a role in directional cues. Assuming good L/R symmetry for LF signals at the listening position, we can construct wavefields with two (or more) speakers (non-mono, of course) that literally shake our head and body differently, even though the SPL magnitude and phase differences at the eardrums might be below thresholds. Technically, thinking in MS signals, our head sees front-to-back rocking forces from the M content and side-to-side forces from the S content. This might be noticeable and could be a valid cue, after a sort of teach-in period.

Additionally, with crosstalk cancelling we can extend the angle of perception significantly even at low MF, and I see no reason why this should stop working further down. Even more so, the smallest change in SPL magnitude from some minuscule amount of successful cancelling might suffice as a localisation cue, simply because with real-world single bass sources such a situation will never occur.
If you have a room that is ill-balanced at LF between the L and R signals at the listening position (e.g. from an asymmetric setup), you can hear exactly how strong this effect gets with out-of-phase bass signals when one ear happens to find itself in a deep null.


Of course chuffing and distortion will be the most typical invalid reasons for sub location issues, but we should not assume "mono bass" source material by default, at least in the 50-150 Hz range. At 20 Hz, I won't argue. And if this leads to a bad LF response as the compromise to be made, I won't support the idea of stereo bass and give away the known good effect of multisubs for nothing. A good way could be 2->3 rematrixing (à la Gerzon Trifield) for the bass: one could try to balance the side-to-centre relative levels (for a mono source) for optimum LF response and retain a bit of any directionality built into the source material, too.

- Klaus
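A minimal sketch of the two ideas in the post above: mid/side (MS) decomposition, and a static 2->3 front rematrix. The 0.5 centre gain is an illustrative assumption; a real Trifield-style matrix is frequency-dependent and considerably more involved.

```python
# Mid/side decomposition of a stereo pair (works on samples or numpy arrays).
def mid_side(left, right):
    mid = 0.5 * (left + right)   # M: shared content (front-back rocking force)
    side = 0.5 * (left - right)  # S: differential content (side-to-side force)
    return mid, side

def two_to_three(left, right, c_gain=0.5):
    """Assumed static 2->3 matrix: bleed shared (mid) content into a centre
    feed and remove half of it from each side, preserving the L-R difference."""
    mid, _ = mid_side(left, right)
    centre = c_gain * mid
    return left - 0.5 * centre, centre, right - 0.5 * centre

print(mid_side(1.0, 1.0))      # mono input: all mid, no side -> (1.0, 0.0)
print(two_to_three(1.0, 1.0))  # mono input spreads to (0.75, 0.5, 0.75)
```

Balancing the side-to-centre levels, as the post suggests, corresponds to tuning `c_gain` for the smoothest mono LF response while the L-R difference, and hence any directionality in the source, survives in the side feeds.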
 
Another interesting paper on LF localization :

SPATIAL AUDITORY DISPLAY USING MULTIPLE SUBWOOFERS IN TWO DIFFERENT REVERBERANT REPRODUCTION ENVIRONMENTS

ABSTRACT
Spatial auditory displays that use multichannel loudspeaker arrays in reverberant reproduction environments often use single subwoofers to reproduce all the low frequency content to be presented to the listener, consistent with consumer home theater practices. However, even in small reverberant listening rooms, such as those of the typical home theater, it is possible to display a greater variety of clear distinctions in resulting spatial auditory imagery when using laterally positioned subwoofers to present two different signals. This study investigated listeners' ability to discriminate between correlated and decorrelated low-frequency audio signals, emanating from multiple subwoofers located in two different reverberant environments, characterized as "home" versus "lab." Octave-band noise samples, with center frequencies ranging in third-octave steps from 40 Hz to 100 Hz, were presented via a pair of subwoofers positioned relative to the listener either in a left-right (LR) orientation, or in a front-back (FB) orientation. When delivered via subwoofers in the FB orientation, in each of the two reproduction environments, discrimination between correlated and decorrelated low-frequency signals was at chance levels (i.e., the discrimination was effectively impossible). When delivered via the laterally positioned subwoofers (orientation LR) in the acoustically-controlled laboratory environment, the signals could be perfectly and easily discriminated. In contrast, when tests were run in the small and highly reverberant (i.e., home) environment, the decorrelated signals were not so easily distinguished from those that were correlated at the subwoofers, with performance gradually falling to chance levels as the center frequency of the stimulus was decreased below 50 Hz.
 
The possibility of structural cues for LF sound is indeed interesting - I had not thought of that, although I am well aware of the physical nature of LF sounds from studies of LF sound perception in automobiles. Structural cues may use an entirely different process for detection.

I agree that the study that was done in Aus does have some potential flaws. The study above is pretty much in line with my thinking: that our localization abilities diminish as the frequency falls. Maybe they don't vanish as I once thought, but they certainly are not very pronounced either. They appear to be only lateral, which would emphasize the mechanical aspects that are being alluded to.
 
Of course chuffing and distortion will be the most typical invalid reasons for sub location issues, but we should not assume "mono bass" source material by default, at least in the 50-150 Hz range. At 20 Hz, I won't argue. And if this leads to a bad LF response as the compromise to be made, I won't support the idea of stereo bass and give away the known good effect of multisubs for nothing. A good way could be 2->3 rematrixing (à la Gerzon Trifield) for the bass: one could try to balance the side-to-centre relative levels (for a mono source) for optimum LF response and retain a bit of any directionality built into the source material, too.
If you want to retain multi-sub modal smoothing but enable a stereo bass response, you could just use four subs instead of three: one near each room corner, with those on the left side of the room reproducing the left channel and those on the right reproducing the right channel.

For "mono" bass signals (the majority) you would get the exact same modal smoothing properties that you would with four subs all driven by summed L+R; however, any bass concentrated in only the left or right channel will be reproduced "correctly" only from that side of the room, with the possibility of left-right lateralisation.

Modal smoothing will no doubt be less effective on bass present in only one channel, as only three drivers along one side (counting the main speaker) will be active, but it should still be a lot smoother than a single main speaker on its own, especially when a sub at the rear corner of the room, far from the main, is active.

Three asymmetrically located subs may be sufficient for modal smoothing of a mono bass signal, but four are needed for stereo. It's on my "to try" list one day, as I'm not a fan of summing stereo bass into mono before feeding it to all subwoofers in the room; I think some of the lateralisation of low frequencies that might be perceived can be lost.

Such a set-up could also be used to evaluate the importance of mono vs stereo bass reproduction, by simply switching all the subwoofer feeds to a summed L+R signal without any other changes, which would give an identical result for central mono bass signals but not offset or out of phase bass.
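The routing described above can be sketched as follows; the sub names and the mode switch are hypothetical. In "stereo" mode the left-side subs carry the left channel and the right-side subs the right; in "mono" mode every sub gets the (L+R)/2 sum, as in the comparison proposed.

```python
# Hypothetical four-sub feed routing for the stereo-vs-mono bass comparison.
def sub_feeds(left, right, mode="stereo"):
    """Return per-sub feed levels for one sample (or array) of L/R bass."""
    if mode == "mono":
        m = 0.5 * (left + right)  # summed L+R feed to every sub
        return {"front_left": m, "rear_left": m,
                "front_right": m, "rear_right": m}
    return {"front_left": left, "rear_left": left,
            "front_right": right, "rear_right": right}

# Centred (mono) bass: the two modes produce identical feeds, so switching
# between them only changes offset or out-of-phase bass, as the post argues.
assert sub_feeds(1.0, 1.0, "stereo") == sub_feeds(1.0, 1.0, "mono")
```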
 
Another interesting paper on LF localization :

SPATIAL AUDITORY DISPLAY USING MULTIPLE SUBWOOFERS IN TWO DIFFERENT REVERBERANT REPRODUCTION ENVIRONMENTS

Does Martens present new data from his 2004 paper? This was Toole's comment:

"Another recent investigation concludes that the audible effects benefiting from channel separation relate to frequencies above about 80 Hz (Martens et al., 2004). In their conclusion, the authors identify a "cutoff-frequency boundary between 50 Hz and 63 Hz," these being the center frequencies of the octave bands of noise used as signals. However, when the upper-frequency limits of the bands are taken into account, the numbers change to about 71 Hz and 89 Hz, the average of which is 80 Hz. This means, in essence, that it is a "stereo upper-bass" issue, and the surround channels (which typically operate down to 80 Hz) are already "stereo" and placed at the sides for maximum benefit."
 
The possibility of structural cues for LF sound is indeed interesting - I had not thought of that, although I am well aware of the physical nature of LF sounds from studies of LF sound perception in automobiles. Structural cues may use an entirely different process for detection.

I agree that the study that was done in Aus does have some potential flaws. The study above is pretty much in line with my thinking: that our localization abilities diminish as the frequency falls. Maybe they don't vanish as I once thought, but they certainly are not very pronounced either. They appear to be only lateral, which would emphasize the mechanical aspects that are being alluded to.
Living near Albuquerque, I have become accustomed to the sight of hot air balloons; during the Balloon Fiesta there are as many as 1,000 flying.

When the hot air burners are lit, they emit a pink-noise-like sound; at a distance of some miles, the air (and the balloon envelope) attenuates the HF to the point where the sound has little left but VLF.

The balloons are completely silent until the heaters go on.
When I hear the LF sound, my ears (or senses) guide me directly to the lit balloon, visually verified by the burner flame amid a hemisphere of otherwise silent balloons.

The LF localization phenomenon does not appear to be only lateral to me, though a small room's reverberant field can obscure the LF location.

Art
 
Another interesting paper on LF localization :

SPATIAL AUDITORY DISPLAY USING MULTIPLE SUBWOOFERS IN TWO DIFFERENT REVERBERANT REPRODUCTION ENVIRONMENTS

I have always wondered whether using two laterally located velocity sources as subs could help create an artificially heightened sense of low frequency envelopment. This paper indicates that it might be possible.

Dave
 
I have done some recent experiments with decorrelated LF sources with a small reverb tail - which is what decorrelates them. This has a surprising effect on the bass perception. There are a huge number of variables involved and I am trying to sort out how to arrive at an optimum. Too many variables for brute force.
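One way to sketch that reverb-tail decorrelation idea numerically: convolving one sub feed with a short decaying-noise tail lowers its correlation with the other feed. The tail length, decay rate, and correlation metric here are all my assumptions, not Earl's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                      # coarse sample rate is enough for a sub band
dry = rng.standard_normal(fs)  # 1 s of noise standing in for a bass signal

# 100 ms impulse response: unit direct sound plus a decaying noise tail.
tail = np.zeros(int(0.1 * fs))
tail[0] = 1.0
t = np.arange(1, tail.size)
tail[1:] = 0.3 * rng.standard_normal(t.size) * np.exp(-t / (0.03 * fs))

wet = np.convolve(dry, tail)[:dry.size]  # the "decorrelated" second feed

def corr(a, b):
    """Normalized cross-correlation at zero lag."""
    return float(np.corrcoef(a, b)[0, 1])

print(corr(dry, dry))  # identical feeds: ~1.0
print(corr(dry, wet))  # tailed feed: strictly below 1.0
```

The tail gain and decay time become two of the "huge number of variables" the post mentions; even this toy version shows how quickly the parameter space grows.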
 
I have done some recent experiments with decorrelated LF sources with a small reverb tail - which is what decorrelates them. This has a surprising effect on the bass perception. There are a huge number of variables involved and I am trying to sort out how to arrive at an optimum. Too many variables for brute force.

Sounds similar to hall synthesis, which would make sense towards a perception of decorrelation.
 