Major Frequency Ranges

And about Earl Geddes's quote, I interpret it differently than you: it is a case of coupling to the room acoustics, not an absolute frequency range of importance.

In this case (the acoustics and loudspeaker combination), Earl's quote makes total sense (RFZ treatments are usually effective from 1 kHz and up, sometimes an octave lower, but this is rare). But if you set the acoustics aside (which your statement seems to do) and define some frequency range as being more important than others, then the intelligibility range makes much more sense (300 Hz to 4.5/6 kHz, depending where you find your info), because this is where the understanding of speech resides (telephone bandwidth).

I worded things wrongly. I should have asked what should be emphasized in the different frequency ranges. For example, we can't discern imaging in the low frequency ranges, but a flat frequency response is important. That tells me not to worry about trying to build low frequencies into the front main channels because it doesn't help. Geddes indicates our ability to discern imaging isn't very good in the mid-bass. So even when we *can* start localizing, we aren't very good at it until we get into higher frequency ranges.

We could build gigantic waveguides to control frequencies down to 150Hz in a search for perfect imaging but it wouldn't add value because we aren't good at imaging in that frequency range anyway.
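Just to put numbers on "gigantic", here is a quick back-of-envelope sketch. It assumes the common rule of thumb that a waveguide only controls directivity down to roughly the frequency whose wavelength matches its mouth dimension; that simplification is mine, not Geddes's exact design criterion.

```python
# Rough sketch: how big a waveguide mouth would need to be to hold pattern
# control down to a given frequency. Assumes the rule of thumb that control
# is lost below the frequency whose wavelength is comparable to the mouth
# dimension (a simplification, not an exact design method).

C = 343.0  # speed of sound in air, m/s (approx., 20 degC)

def wavelength_m(freq_hz: float) -> float:
    """Wavelength in metres for a given frequency."""
    return C / freq_hz

for f in (150, 500, 1000, 2000):
    lam_cm = wavelength_m(f) * 100
    print(f"{f:>5} Hz: wavelength ~ {lam_cm:.0f} cm "
          f"-> mouth on the order of {lam_cm:.0f} cm across")

# 150 Hz comes out around 230 cm -- hence "gigantic waveguides".
```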

Consequently, I think Geddes is resolving the physics of wave acoustics with psychoacoustics in a way that helps design good loudspeakers without doing stupid stuff like building giant waveguides.
 
For example, we can't discern imaging in the low frequency ranges but a flat frequency response is important. That tells me not to worry about trying to build low frequencies into the front main channels because it doesn't help.
There are two camps, if you like: one side thinks that bass should be delivered by subwoofers and crossed out of the mains regardless of whether they can produce it, and the other side thinks it is better to use all of the bass sources available to smooth the response in the room.

There are compelling reasons to choose either approach.

Which gives the best result will depend a great deal on whether a single listening position is being optimised or a wider area is being targeted, the speakers involved, the room, the ability to EQ and manipulate the sources separately, etc.
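For a concrete feel of what "use all the sources" means, here is a tiny synthetic sketch: at one listening position the sources add as complex pressures, so per-source gain and delay reshape the combined bass response. The distances, the idealised flat sources, and the brute-force search are all made up for illustration; this is not the procedure from the guide linked below.

```python
# Two idealised flat bass sources summing at one listening position.
# Adjusting the sub's gain and delay changes the combined magnitude response.
import numpy as np

freqs = np.linspace(20, 120, 201)   # bass band, Hz
c = 343.0                           # speed of sound, m/s

def source_response(distance_m, freqs):
    """Idealised source: level falls with distance, phase from travel time."""
    delay = distance_m / c
    return (1.0 / distance_m) * np.exp(-2j * np.pi * freqs * delay)

main = source_response(3.0, freqs)  # main speaker, 3 m away (made-up)
sub = source_response(2.0, freqs)   # subwoofer, 2 m away (made-up)

def combined_db(sub_gain, sub_delay_ms):
    shifted = sub * np.exp(-2j * np.pi * freqs * sub_delay_ms / 1000.0)
    total = main + sub_gain * shifted
    return 20 * np.log10(np.abs(total) + 1e-12)

# Brute-force a gain/delay combination that minimises ripple across the band.
best = min(
    ((g, d) for g in np.linspace(0.5, 2.0, 16) for d in np.linspace(0, 10, 41)),
    key=lambda gd: np.ptp(combined_db(*gd)),
)
print(f"flattest sum: sub gain {best[0]:.2f}, sub delay {best[1]:.2f} ms, "
      f"ripple {np.ptp(combined_db(*best)):.1f} dB")
```

In this toy case the search simply finds the electrical delay that time-aligns the two path lengths; in a real room the per-source responses are measured, not ideal, which is what the guide below walks through.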

Here is a good walkthrough where all the available sources are used; there are more parts, but this is the first one:
Bass Integration Guide – Part 1
 
Forgot to add: there is a hard lower limit of 80 Hz. Even in a large room or outside in open space, humans can't localize frequencies below 80 Hz. So that would be another limit below which you could safely ignore efforts at directivity.

At what volume would the physical sensation of sub-bass let you approximate this below the limit at which you can manage it audibly?

My home setup is loud enough that I can identify the individual speakers by "feel" from my listening position if I need to, which helps me set the volume for each low channel after I've inevitably tweaked something and can't be bothered to measure anything.
 
I forgot where I had seen this before. Finally found it again: Jim Griffin's line array paper, Design Guidelines for Practical Near Field Line Arrays.


But, if we relax the c-t-c criterion, more secondary lobes would appear in the 10 to 20 kHz frequency range. Fortunately, in this octave the ear is less sensitive (per Fletcher Munson curves) so any secondary lobes likely would be less audible to the listener. Thus, if one wavelength spacing at 10 kHz is adopted as a compromise, then tweeter spacing would need to be 34.4 mm (1.35”) c-t-c apart. While more off axis secondary lobes would be generated in the far field, small flange tweeters are available to meet this dimension. The tradeoff is possible sound degradation from comb lines near 20 kHz.

In this case, the compromise is made above 10,000 Hz.
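For anyone checking the arithmetic, the 34.4 mm figure is just one wavelength at 10 kHz; Griffin's numbers fall out if you take c = 344 m/s.

```python
# Quick check of the c-t-c figure quoted above: one wavelength at 10 kHz.
C = 344.0      # speed of sound, m/s (value that reproduces Griffin's 34.4 mm)
f = 10_000.0   # Hz

spacing_mm = C / f * 1000
print(f"one wavelength at {f/1000:.0f} kHz = {spacing_mm:.1f} mm "
      f"({spacing_mm / 25.4:.2f} in)")   # ~34.4 mm, ~1.35 in
```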
 
Sound Reproduction, Toole. Chapter 14.2.3 The Effect of Propagation Distance - A Side-Channel Challenge

"The fact that localization in complex sound fields is substantially determined by high-frequency transients means that one method of reducing the localization distraction is to reduce the level of high frequencies by aiming the speakers so that the more directional high frequencies fire over the heads of listeners, or to use loudspeakers that attenuate the high frequencies—the so-called dipole surround loudspeaker being one example."
 
Premium Home Theater Design and Construction
Earl Geddes with Lidia Lee, 2003

"Since an overwhelming amount of the sounds in our environment
lie in the region of 300–3000Hz (virtually all speech) our hearing
is most acute in this region. Music also tends to be dominated in its
energy content in this frequency band. As a result, aberrations in
this band are more detrimental than aberrations outside of this band.
Being the band of speech this is the most common usage of our
hearing mechanism." - page 57
 
Premium Home Theater Design and Construction
Earl Geddes with Lidia Lee, 2003

re: Baffle edge diffraction & frequency range

"At frequencies above the point
where its largest dimension is about the size of a wavelength (remember
that a is the radius not the diameter), the enclosing object will begin to
affect the source in ways which depend on its shape. Below this frequency
an object’s shape is relatively unimportant. For example, at frequencies
below ka=π/2 (f = c /4a), things like cabinet edge diffraction depend
almost exclusively on the enclosures volume and not on its shape. There is
always diffraction, even at low frequencies, but the amount depends on
much larger features than the cabinet edges. Above this frequency, an
object’s shape has a much larger effect and the baffle edges become major
points of diffraction." - Page 66
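To put rough numbers on that quote, here is a small sketch of the f = c/4a transition. The cabinet dimensions are made up purely for illustration, and a is taken as half the largest dimension as the quote specifies.

```python
# Worked example of the ka = pi/2 (f = c / 4a) transition quoted above.
# Cabinet sizes below are hypothetical, chosen only to show the trend.
C = 343.0   # speed of sound, m/s

def diffraction_transition_hz(largest_dimension_m: float) -> float:
    """f = c / (4a), where a is the *radius*, i.e. half the largest dimension."""
    a = largest_dimension_m / 2.0
    return C / (4.0 * a)

for dim_m in (0.3, 0.6, 1.0):   # small bookshelf, stand-mount, large tower
    print(f"largest dimension {dim_m:.1f} m -> edge shape starts to matter "
          f"above ~{diffraction_transition_hz(dim_m):.0f} Hz")
```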
 
Psychoacoustic transition zone around 700 Hz.

Earl Geddes https://www.diyaudio.com/forums/roo...ng-toole-geddes-stereo-setup.html#post6542945

3) I think that the precedence effect is widely misused, so I will avoid this term and its implied effects. Our hearing processes signals in vastly different ways as the frequency changes. This is due to neural response as well as cochlear effects. Our ears' neural processing cannot keep up with a signal above about 700 Hz synchronously, and the nerves begin to fire randomly at HFs. Thus there is a transition from temporal detection to place theory, where tone is based on the location on the cochlea. So the processing is completely different above and below this transition. This is why there is such a steep change in loudness below about 500 Hz.

The best way to understand my point is to look at the Gammatone filter responses for a human ear. You will see that they are very short at HFs and very long at LFs. At HFs a delayed signal does not "fuse" with the direct sound unless it is < a ms or so, while at LFs this fusion occurs for tens of ms. There simply isn't a clear transition for a broadband signal. This is why one cannot state a delay gap that is accurate across the bandwidth. The greater the delay gap, the lower the frequency down to which fusion will fail to occur. Simply put, all reflections are relevant but not always of the same importance. Of course at 30 ms no reflections fuse; at 1 ms virtually all of them do. In between is a mix that will depend highly on the characteristics of the signal.
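To see what "very short at HFs and very long at LFs" looks like numerically, here is a small sketch using the standard 4th-order gammatone envelope with the Glasberg & Moore ERB formula. The -40 dB cutoff used to define "length" is my own arbitrary choice, not something from Geddes's post.

```python
# Sketch of the gammatone-filter-length point: auditory-filter impulse
# responses are short at high frequencies and long at low frequencies.
# Uses the 4th-order gammatone envelope t^3 * exp(-2*pi*b*t) with
# b = 1.019 * ERB(fc); the -40 dB cutoff is an arbitrary "length" definition.
import numpy as np

def erb_hz(fc):
    """Equivalent rectangular bandwidth (Glasberg & Moore, 1990)."""
    return 24.7 * (4.37 * fc / 1000.0 + 1.0)

def envelope_duration_ms(fc, floor_db=-40.0):
    """Time for the gammatone envelope to fall floor_db below its peak."""
    b = 1.019 * erb_hz(fc)
    t = np.linspace(1e-6, 0.2, 200_000)            # 0..200 ms grid
    env = t**3 * np.exp(-2 * np.pi * b * t)
    env_db = 20 * np.log10(env / env.max())
    past_peak = t > t[np.argmax(env)]
    return 1000 * t[past_peak][env_db[past_peak] < floor_db][0]

for fc in (100, 250, 700, 2000, 8000):
    print(f"{fc:>5} Hz auditory filter rings for ~{envelope_duration_ms(fc):.1f} ms")

# Low-frequency filters ring for tens of ms, high-frequency ones for a
# couple of ms, which is why no single "fusion delay" covers the whole band.
```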

The direct sound does "trump" the reflections as far as the perception of direction goes, but strong reflections will tend to blur that image making it less precise in location. The "precedence effect" does not consider "blurring" only primary direction. In other words it does not speak to imaging quality at all.