The Advantages of Floor Coupled Up-Firing Speakers

graaf said:
Therefore, for an early reflection to be sorted out by our hearing with the help of the precedence effect, it is not the energy spectrum of the reflection that should resemble that of the direct sound. It must be - or so it seems - the shape of the first wavefront, because that is all there is in the time frame of the precedence effect.

... seems very hard to bring in line with the acoustics of speech and musical instruments even in "non complex" acoustic scenarios like "single human speaker", "girl and guitar", "solo flute"...

The boundary of about 1-2 ms, at which summing localization turns into the precedence effect, seems by far insufficient to recognize even the onset of a particular instrument's sound, or to recognize most speech sounds, which also have longer durations.

Thus the time interval for the precedence effect to occur is an interval observed in structuring the acoustic input, but it is surely not the time interval needed for processing, which takes much longer.

If certain spectral components of e.g. a flute's onset are tolerated to vary by up to, say, 50 ms until fully developed in the direct sound (and the sound is still recognized as a flute ...), then it seems completely implausible that the earbrain relies on a "close match of transients" in "every reflection from the room" to make the precedence effect work.

Interband correlations in envelope and pitch detection are the mechanisms to look for ...
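To make that concrete, here is a minimal sketch of what "interband envelope correlation" could mean computationally - the filter bank, band edges and toy onset signal below are my own arbitrary illustration, not a published model:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000
t = np.arange(fs) / fs                      # 1 s of signal
rise = np.minimum(t / 0.05, 1.0)            # 50 ms onset ramp, shared by all partials
x = rise * (np.sin(2*np.pi*1000*t) + 0.5*np.sin(2*np.pi*3000*t))

def band_envelope(sig, lo, hi):
    """Hilbert envelope of one auditory-style frequency band."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, sig)))

e1 = band_envelope(x, 800, 1200)            # band around 1 kHz
e2 = band_envelope(x, 2700, 3300)           # band around 3 kHz

# envelopes rising together -> high correlation -> evidence for "same source"
print(np.corrcoef(e1, e2)[0, 1])            # close to 1.0 for this toy signal
```

Envelopes that rise and fall together across bands are exactly the kind of cue that survives spectral alteration by a reflection.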

If spectral variations in single room reflections prevented the earbrain from associating a reflection with a specific source, then it would not even detect the onsets I presented here as "belonging to the same class of instrument".

Source separation and precedence occur due to analysis at a rather narrowband level.

Maybe precedence already occurs at the level of constituent spectral components and does not even rely on the "recognition of sound" being fully established. Most probably it is an interactive process, "bottom up" (stimulus driven) and at the same time "top down" (knowledge driven), as is usually the case in perceptual processes.

But even if so:

> 1 kHz here, amplitude rising
> 1 kHz here too, amplitude rising, but different DOA
> try suppressing for a while, seems to be a reflection
> 3 kHz here, amplitude rising fast
> Top down: Hmm, maybe it is a flute's sound? Look for a component at 5 kHz ...
> no 5 kHz here, maybe wait some ms, I am alert
> I have 7 kHz here, want to make a flute sound of it?
> Top down: you are not paid for thinking here, just do your job or you will be kicked out ...

> ... etc.


The earbrain will never rely on "copylike" reflections from a room's walls. Especially in stereo listening to coherent loudspeakers, those reflections may in fact be detrimental, especially when their level is too high.

Copylike reflections of high level will just cost "processing capacity" (sorting out irrelevant correlations from the environment) in source separation, which turns into "listening fatigue" very soon ...
 
graaf said:
"
If we record an instrument sound and cut its initial transient, a saxophone may not be distinguishable from a piano, or a guitar from a flute This is because the so-called "quasi-steady state" of the remaining sound is very similar among the instruments. But their tone beginning is very different, complicated and so very individual. "


Surely transients are important in recognizing the sounds of instruments and speech; I would never doubt that.

But transients are not just that "initial wavefront" or "peak in the time domain", as you put it before. Transients from instruments and speech often have considerable duration in time and a complex spectral structure varying over time. This contributes to the "processing time" needed for recognition ... e.g. because of the astonishing variance within the same "class of sounds", so to speak ...

This is what the examples of "flute onsets" were meant to demonstrate.
 
Last edited:
Also "transient oscillation" in the onset sound of musical instruments (organ, flute, violin, ...) is not covered by "a first wavefront".

Precedence effect - Wikipedia, the free encyclopedia

"The precedence effect appears if the subsequent wave fronts arrive between 2 ms and about 50 ms later than the first wave front. "


Thinking this way in "wave fronts" or "first wave front" is somewhat misleading, because typical oscillating transients from musical instruments will produce many "wavefronts" during a transient.

In the case of a violin, the wavefronts radiated toward the ceiling and arriving at the listening seat by reflection have different properties than those radiated directly to the listening seat: even the spectral content differs.

The German label "Gesetz der 1. Wellenfront" ("law of the first wavefront") for the precedence effect is maybe a "catchy" label, just as "first arrival" is.

But it by far does not characterize the processes contributing to source separation, localization and precedence.

Listening to orchestral music is not like listening to silence and then having "a first wavefront" ... there are continuously "oscillating transients" from many instruments overlapping in time.

Those labels, "first arrival" or "a room reflection is like a copy" (as used by Siegfried Linkwitz), are just labels; they are neither descriptions of the underlying physical phenomena (see the "onset of a flute" example above) nor descriptions of the perceptual processes involved when recognizing and localizing certain (groups of) instruments in an orchestra playing a phrase, e.g.

We simply cannot, IMO, discuss those phenomena based just on the "catchy" labels some people gave them in history, because humans needed a keyword for looking those phenomena up in the dictionary.

The peanut butter inside the jar is something completely different from the label on the outside of the jar ...
 
A thought experiment:

- Suppose the (tolerated) variance in time delay for some spectral components in the onset of a flute's sound (until fully developed) is e.g. 20 ms for it still to be recognized by some example earbrain as being within the class "flute sound".

This would be the intra-class variance in time delay for "flute sounds" in the direct sound.


- Is it plausible, then, that the class "reflections of flute sounds" would have a variance of just, say, 0.25 ms in the relative delay of spectral components, not even tolerating a single reflection from a wall with diffusive structures 0.08 m deep?

The class "reflections of flute sounds" being about 100 times more restrictive in spectral onset pattern than the underlying class "flute sounds" ?

If you construct a recognition system this way - be it biological or technical - it will, IMO, even have difficulty finding its way to the waste paper basket of evolution autonomously: someone from outside will have to throw it into that basket.

graaf said:
Ok, but what's the point? I don't get it

It was about the plausibility of the "intra-class variance" of "flute sounds" vs. "room reflections of flute sounds".

I hope I was able to make my point clearer now ...
 
... seems very hard to bring in line with the acoustics of speech and musical instruments even in "non complex" acoustic scenarios like "single human speaker", "girl and guitar", "solo flute"...

The boundary of about 1-2 ms, at which summing localization turns into the precedence effect, seems by far insufficient to recognize even the onset of a particular instrument's sound

Well, I do not claim anything like "1-2ms seems sufficient to recognize the onset of a particular instrument's sound".


Thus the time interval for the precedence effect to occur is an interval observed in structuring the acoustic input, but it is surely not the time interval needed for processing, which takes much longer.

Surely, I haven't written anything to the contrary.


Interband correlations in envelope and pitch detection are the mechanisms to look for ...

Perhaps, but are there any studies, or is it just Your educated guess?


The earbrain will never rely on "copylike" reflections from a room's walls. Especially in stereo listening to coherent loudspeakers, those reflections may in fact be detrimental, especially when their level is too high.


But why? What is the mechanism of this detrimental effect?


Copylike reflections of high level will just cost "processing capacity" (sorting out irrelevant correlations from the environment) in source separation, which turns into "listening fatigue" very soon ...

I don't know. All I can say is that I have never experienced such "listening fatigue" with my KEF UniQ based FCUFS, which produce a whole lot of correlated "copylike" reflections. Nor have I ever heard of anybody complaining about such fatigue when using omni, quasi-omni or similar designs producing such correlated "copylike" reflections. Never ever.
 
A thought experiment:

- Suppose the (tolerated) variance in time delay for some spectral components in the onset of a flute's sound (until fully developed) is e.g. 20 ms for it still to be recognized by some example earbrain as being within the class "flute sound".

Surely not. Not "recognized" in any conscious way as a particular kind of something. I believe I made myself clear in my previous posts that this is not my point (see the long post with many defunct links).

But "recorded" and then processed as "within a possible class of something" - why not?

And from an acoustical perspective, 20 ms is a lot of time. Even 1 ms corresponds to the first 90 degrees of a 250 Hz soundwave.
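A quick check of that figure:

```python
f = 250.0                     # Hz
period_ms = 1000.0 / f        # 4 ms per cycle at 250 Hz
print(1.0 / period_ms * 360)  # 90.0 degrees covered in the first 1 ms
```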

- Is it plausible, then, that the class "reflections of flute sounds" would have a variance of just, say, 0.25 ms in the relative delay of spectral components, not even tolerating a single reflection from a wall with diffusive structures 0.08 m deep?

Well, all I can say is that this is not my point.
 
The earbrain will never rely on "copylike" reflections from a room's walls. Especially in stereo listening to coherent loudspeakers, those reflections may in fact be detrimental, especially when their level is too high.

Copylike reflections of high level will just cost "processing capacity" (sorting out irrelevant correlations from the environment) in source separation, which turns into "listening fatigue" very soon ...

How, then, can it be explained that early reflections reportedly improve speech intelligibility?

At least this is what I read in chapter 10 of Floyd Toole's "Sound Reproduction":
https://books.google.pl/books?id=sG...YPMz6gKgN&ved=0CDYQ6AEwAw#v=onepage&q&f=false

On page 162:
Summarizing the evidence from these studies, it seems that in small listening rooms, some individual reflections have a negligible effect on speech intelligibility, and others improve it, with the improvement increasing as the delay is reduced.
 
graaf said:
I don't know. All I can say is that I have never experienced such "listening fatigue" with my KEF UniQ based FCUFS, which produce a whole lot of correlated "copylike" reflections.

graaf - excuse me - but you should consider that you are discussing with a person who is rather well informed about currently available technology. So please don't tell me imaginary stories ...

Response of the KEF "Blade" model (using the KEF UniQ driver) at various angles:

http://img2.audio.de/KEF-Blade-f960x960-ffffff-C-9736e6b9-71243224.jpg


KEF "UniQ" is a coaxial driver (family of design) surely avoiding discontinuities in radiaton pattern of some common coaxial drivers to some extent. But the energy response is clearly falling from upper midrange on upwards.

Room reflections will typically lack energy in the highs ...

The more we discuss these issues, the more I come to the conclusion that your sticking to your preferred setup is due to habituation ...
 
This one is also of interest here:

Duke's little A/B comparison (switching the upward-facing "flooders" on and off) at the RMAF confirmed what I already suspected; the apparent spectral balance didn't change much, but the spatial "size" of the ambience became much larger, growing from a straight line between the speakers to larger than the room itself, with no change in the spatial locations of the instruments. If anything, they were more firmly located in space, since you could more easily separate the wash of reverb from the instruments.
 
graaf - excuse me - but you should consider that you are discussing with a person who is rather well informed about currently available technology. So please don't tell me imaginary stories ...

Response of the KEF "Blade" model (using the KEF UniQ driver) at various angles:

http://img2.audio.de/KEF-Blade-f960x960-ffffff-C-9736e6b9-71243224.jpg


KEF "UniQ" is a coaxial driver (family of design) surely avoiding discontinuities in radiaton pattern of some common coaxial drivers to some extent. But the energy response is clearly falling from upper midrange on upwards.

Room reflections will typically lack energy in the highs ...

The more we discuss these issues, the more I come to the conclusion that your sticking to your setup is due to habituation ...

C'mon Oliver - I mean "copylike" in the sense of being "correlated", i.e. "bad" according to Your theory.

After all, I was answering Your post:
Copylike reflections of high level will just cost "processing capacity" (sorting out irrelevant correlations from the environment) in source separation, which turns into "listening fatigue" very soon ...

bolds are mine

I have even used quotation marks around "copylike".

Or are You telling me that it suffices for a reflection to be low-pass filtered to become "decorrelated" and "good"? You are not, are You? :)

Besides, You have to take into account the horizontal position of the speaker in FCUFS. Then lateral reflections may appear to be more "copylike" than You think ;)
 
First of all, I think you should cite Lynn Olson's take on that more closely to the topic:

...
Net result, most loudspeakers are more directional at HF than they are at LF, and they are aligned for flat response on-axis. This means total power into a sphere (or the room) has a falling characteristic, with some ripple in directivity around the crossover frequency. The FR of each of the first reflections (floor, ceiling, front wall, nearest side wall, and the two-bounce images) is very different than the on-axis response, and the later multi-bounce reflections are different as well. Is this ideal for music, particularly music that relies on a strong ambient impression of a hall space? That's where disagreement comes in.

I take a stance that if wider dispersion is chosen in the MF and HF region, it shouldn't be at the expense of first-arrival transient integrity. Even though the off-axis radiation of the "flooder" drivers might be 20 dB down compared to the first-arrival sound, I feel it should still be time-aligned with the first-arrival sound. I go to some trouble to suppress diffraction, and that's usually 6~20 dB lower than the direct sound from the drivers.

There's a little measurement trick I shared with Duke LeJeune at the RMAF. When you do an FFT, it's typical to use damping to absorb the floor bounce, and then set the measurement window between 4 to 7 mSec, setting the gate just before another reflection arrives. That's the basis for computing the first-arrival frequency response.

You can also measure an equivalent to a real-time analyzer (RTA) by opening the time domain out to 100 mSec, and applying 1/3 or 1/6th octave smoothing to the computed FR. That basically gets all the room reflections, along with the direct sound.

This is all standard stuff. What I told Duke was a minor variation; when you make the long-duration 100 mSec measurement, deliberately leave out the direct-sound; that is, start the gate between 3 and 5 mSec after the first sound, and let it run out to 100 mSec. This omits the direct sound, but captures all of the room reflections.

You can then overlay the FR charts for the direct-sound and the summed room reflections, and see where they diverge. By looking at the divergence between the two curves, you can then apply equalization to the "flooder" drivers to get the two curves more closely aligned. In effect, you're changing the overall spectra of the summed room reflections so they're a better match for the on-axis sound. This should make the ambient impression more true-to-life, and less like a loudspeaker.
...
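To make Lynn's trick concrete, here is a minimal sketch, assuming the loudspeaker's impulse response has already been captured to a mono WAV file (the file name, the rectangular gating, and the 1/6-octave smoothing helper are my own illustrative simplifications, not his code):

```python
import numpy as np
from scipy.io import wavfile

fs, ir = wavfile.read("speaker_ir.wav")      # hypothetical measured mono IR
ir = ir.astype(np.float64)
t0 = int(np.argmax(np.abs(ir)))              # sample index of the direct arrival

def gated_db(start_ms, stop_ms):
    """Magnitude response of one rectangular gate of the impulse response.
    (Real measurement software tapers the gate edges; kept simple here.)"""
    a = max(t0 + int(start_ms * 1e-3 * fs), 0)
    b = t0 + int(stop_ms * 1e-3 * fs)
    spec = np.abs(np.fft.rfft(ir[a:b], n=fs))  # zero-pad to 1 Hz bins
    return 20 * np.log10(spec + 1e-12)

freqs = np.fft.rfftfreq(fs, 1.0 / fs)

direct = gated_db(-0.5, 5.0)    # first arrival, gated before the first reflection
room = gated_db(5.0, 100.0)     # direct sound left out, reflections kept

def smoothed(db, frac=6):
    """1/frac-octave smoothing, evaluated on a log frequency grid."""
    eval_f = np.geomspace(20.0, 20000.0, 200)
    out = np.empty_like(eval_f)
    for i, f in enumerate(eval_f):
        band = (freqs >= f * 2 ** (-0.5 / frac)) & (freqs <= f * 2 ** (0.5 / frac))
        out[i] = db[band].mean()
    return eval_f, out

# overlay these two curves; where they diverge is where the up-firing
# driver's EQ can pull the summed room sound toward the on-axis sound
f_d, d = smoothed(direct)
f_r, r = smoothed(room)
```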


It seems to me that you are comparing very different approaches, like

- multiway designs having (just or additional) tweeters "firing" toward the ceiling
(I truly hate this kind of wording ... it is kind of "bull***t bingo" to me)

- widebanders "firing" toward the ceiling

- coaxials "firing" toward the ceiling


Furthermore, you are collecting evidence for "your preferred concept"

- which of course is not a concept, because the parameters and properties postulated are by far too diverse -

from every study or textbook that seems to provide "some crumbs" likely to meet your (technically underdetermined) personal preferences.

I personally like a more centered discussion, staying with a certain topic for some time. Switching topics and subtopics rather fast also seems to hide, to some extent, that you are already caught in some serious contradictions.

I suggest discussing Moulton's approach to control rooms first, and why, IMO, the representation of his concept on the website does not match the true properties of such an implementation.


The "ceiling flooder" is a different kind of topic, that can be boiled down to the

Question: "How do i get a non dull sounding reverberant field without having the right transducers to achive that ?"

Answer: "Let's pimp the highs in the reverberant field by using 'side firing' or 'upfiring' drivers."

Question: "What to do if i can only achvieve that - using my preferred drivers (e.g. by belief or by their "looks") - when tolerating a falling response in the direct sound ?"

Answer: "Just get accustomed to it."


You won't even find Moulton or anybody else among the "accepted authorities" you cite so diligently (Toole, Griesinger, ...) agreeing with your strategy for listening room setup, that is for sure.

But we could e.g. discuss Moulton's take on control rooms and why I think it doesn't work the way he claims on his website ...


We should IMO also make a distinction in further discussion regarding the design criteria for

- musical venues (concert halls e.g.)

- classrooms

- conference rooms

and

- control rooms and private listening rooms for (stereo) loudspeaker reproduction.

Because it doesn't help at all to have all these different topics mixed up all the time, as is typical of this "very special" thread, it being a "long run, no results" thread ...

And I think it is obvious why this is the case.


Kind Regards
 
as is typical of this "very special" thread, it being a typical "long run, no results" thread ...

No "results"? I am quite happy with the "results". And some other people too.

The aim of this thread is to find an explanation, so as to be able to make the results perhaps even better - and also just for intellectual satisfaction.



Furthermore, you are collecting evidence for "your preferred concept"

I am collecting all possibly relevant data for the discussion.

Because I am looking for a concept explaining what I can hear and what seems to be unexplainable by standard thinking.


It seems to me that you are comparing very different approaches

- multiways having (just or additional) tweeters "firing" toward the ceiling
(I hate this kind of wording ... it is kind of "bull***t bingo" to me)

- widebanders firing toward the ceiling

- coaxials firing toward the ceiling

First of all - does the big box in the corner look like an "additional upfiring tweeter"?

[attached photo: ct6a1192.jpg]



I suggest discussing Moulton's approach to control rooms first, and why his representation of the concept on the website does not match the factual properties of such an implementation.

Why? And how can You know it doesn't?

We can also discuss an actual implementation of the concept.

This is one of the best examples of the David Moulton design approach:
https://www.gearslutz.com/board/stu...61074-now-something-completely-different.html

A renowned producer's opinion (also posted on the gearslutz forum) on how the Moulton approach works in reality:

The imaging in the bare room was holographic, among the best I've ever heard


The ceiling flooder is a different kind of discussion, one that can be boiled down to the question:

"How do I get a loudspeaker having a smooth energy response without having the transducers to achieve that?"

Answer: "Let's pimp the highs in the reverberant field by using "side firing" or "upfiring" tweeters."


Question: "What to do if I can only achieve that - using my drivers - while tolerating a falling response of the direct sound?"

Answer: "Just get accustomed to it."

What "falling response of the direct sound"? What kind of problem You mean?


But we could e.g. discuss his take on control rooms and why I think it doesn't work the way he claims on his website ...

It's quite possible that "it doesn't work in the way he claims on his website".

The question is then: how does it work?

Same applies to FCUFS - it is very probable that it doesn't work in the way I hypothesise in this thread because I am not a technical person and my technical knowledge is limited. I am just intuitively guessing.

Question remains - how does it work then?
 
graaf said:
What "falling response of the direct sound"? What kind of problem You mean?

When using the "under larger angle" (e.g. 60 degree) response for the direct sound e.g. in Kef "UniQ", you get a frequency response falling with frequency for the direct sound (as being the new "on axis response).

http://img2.audio.de/KEF-Blade-f960x960-ffffff-C-9736e6b9-71243224.jpg


Do you EQ for a flat "on axis" response at the listening seat?

Seemingly you don't; at least this was not suggested in this thread in all the years before ...
 
When using the "under larger angle" (e.g. 60 degree) response for the direct sound e.g. in Kef "UniQ", you get a frequency response falling with frequency for the direct sound (as being the new "on axis response).

http://img2.audio.de/KEF-Blade-f960x960-ffffff-C-9736e6b9-71243224.jpg


Do you eq for flat "on axis response" at the listening seat ?

Seemingly you don't, at least this was not suggested in years before in this thread ...

You haven't read it thoroughly enough :p

Nowadays, in these digital days, frequency equalization is a piece of cake.

Anyway, it is something preferably done at line level, so I do not even consider it a speaker design issue at all. And it is so easy to achieve that it is just a non-issue. Nothing to discuss. If there is a need, then You equalize. That's all.

I myself use an old school analog parametric equalizer for that purpose :) so I do not equalize just the on-axis "direct sound".

Anyway - I don't see any point in equalization based just on the direct on-axis response. Of course, perhaps there is one. I just can't see it.
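For illustration, a single parametric band really does take only a few lines digitally - a minimal sketch of a peaking biquad following the well-known RBJ audio-EQ-cookbook formulas (the example frequency, gain and Q values are made up):

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q):
    """One parametric band: RBJ audio-EQ-cookbook peaking biquad."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], x)

fs = 48000
x = np.random.randn(fs)                               # stand-in for 1 s of audio
y = peaking_eq(x, fs, f0=8000.0, gain_db=4.0, q=0.7)  # e.g. +4 dB around 8 kHz
```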
 
graaf said:
I myself use an old school analog parametric equalizer for that purpose so I do not equalize just the on-axis "direct sound".


OK, fine. Do you have any gated/ungated measurements indicating

- "on axis" response (gated in direction to the listening seat) as well as
- ungated responses at the listening seat

and maybe a sketch or photo of how you currently have the UniQ driver set up?