The Advantages of Floor Coupled Up-Firing Speakers

Etienne88 said:

If I remember correctly, Stig Carlsson said in his paper that the drivers should be pointed so that their axes cross behind the listener. It is somewhere in this paper... From that you can construct speakers adapted to your own listening room! :cool:


Thanks, Etienne!

It seems to depend on the driver as well. The fullrange driver I use has to cross in front of the listener, independent of the vertical orientation. I think I will build an equilateral triangle for the front baffle. What seems to be essential is the absorber behind the driver. The difference is astonishing. I have a thick carpet in my workshop, so there is no need for a bottom absorber.

Regards,
Oliver
 
Oliver,

Carpet is not the best absorber. It absorbs quite well at 4 kHz, with an absorption coefficient around 0,6 – 0,8, but as the frequency goes down, so does the absorption coefficient… It yields around 0 – 0,1 at 125 Hz, which is very close to nothing!
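For illustration, a minimal sketch using the two coefficients above (treated as energy absorption coefficients) of how little a carpet actually attenuates the floor reflection:

```python
import math

# Absorption coefficients from the figures above (upper end at 125 Hz, mid value at 4 kHz)
alpha = {125: 0.1, 4000: 0.7}

for freq_hz, a in alpha.items():
    # The reflected energy fraction is (1 - alpha), so the reflection is only
    # attenuated by 10*log10(1/(1 - alpha)) dB relative to a hard floor.
    loss_db = -10 * math.log10(1 - a)
    print(f"{freq_hz:>5} Hz: alpha = {a:.1f} -> reflection attenuated by {loss_db:.1f} dB")
```

That is well under 1 dB at 125 Hz and only about 5 dB at 4 kHz, which is why a carpet alone is no substitute for a proper absorber behind the driver.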

Nevertheless, according to the following quote from Lynn Olson's website, floor reflections would assist localization.
the floor reflection actually assists in localization and the natural perception of timbre. (The BBC did a series of experiments of listening to stereo in an anechoic chamber, and found to their surprise that adding a plywood "floor" panel between the speakers and auditioner improved localization and gave a substantially more natural sound quality.)
I did not find any information about the set up used by the BBC in the anechoic chamber, but I guess that they used traditional front firing loudspeakers.
From that you can draw your own conclusions! ;)

Regards,
Etienne
 
Graaf,

I did not forget you. It is summer here and I prefer to spend some time outside before darkness falls again on Sweden!

I think you want to be right because the setup is your idea… No offence intended, but sometimes it is hard to admit your own errors!

My hobby, obviously yours as well, is called high fidelity. I like to listen to music very much, but as an engineer I like the technique as well, so I make my life a little more complicated to ensure that the equipment I use reproduces the recorded material (not the original performance!) with the highest accuracy possible. I also include the room in the reproduction chain. This means that if I send an impulse to my preamp, an impulse should come out of the speakers. Of the two impulse responses I posted earlier, one almost respects that criterion and the other one is very far from it. I agree that both are a long way from being perfect impulse responses. In the first chart you see an impulse (which actually looks like an impulse), then some problems before the first millisecond. After that it is “more or less” quiet. On the second chart, I do not recognise the typical impulse shape right after 0 ms… and the rest is far less quiet than in the other impulse response.

The analysis of the impulse responses is quite straightforward to me. One of the things that might be questionable is the way I measured them, since I am still learning how to use a microphone. Nevertheless, I followed the instructions that came with the measurement software, and the only thing I changed between the two measurements was the position of the loudspeakers.
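Not Etienne's actual procedure, just a minimal sketch of one way to put a number on "more or less quiet after the first millisecond": compare the energy of the direct-sound window with the energy arriving in the next few milliseconds. The file name, window lengths and sample rate are assumptions for the example, on the premise that the measurement software can export the impulse response as plain text.

```python
import numpy as np

fs = 48_000                                # assumed sample rate of the measurement
h = np.loadtxt("impulse_response.txt")     # hypothetical text export of the measured IR

t0 = int(np.argmax(np.abs(h)))             # main peak = arrival of the direct sound
direct = h[t0 : t0 + int(0.001 * fs)]                  # first 1 ms after the peak
late = h[t0 + int(0.001 * fs) : t0 + int(0.020 * fs)]  # 1 ms .. 20 ms after the peak

# Energy ratio in dB: a "quieter" impulse response gives a larger number
ratio_db = 10 * np.log10(np.sum(direct**2) / np.sum(late**2))
print(f"direct-to-early-reflection energy ratio: {ratio_db:.1f} dB")
```

Running this on both measurements would give a single number to compare instead of eyeballing the charts.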

Regards,
Etienne
 
Etienne88 said:
I think you want to be right because the setup is your idea… No offence intended, but sometimes it is hard to admit your own errors!


Dear Etienne,
how can I assure You that this is not the case?
I am too old for this

what "errors"? what "my idea"?

the setup is essentially borrowed from Carlsson; it is the Carlsson setup combined with the Beveridge idea of positioning the speakers at opposite walls

I am tired I have to admit
so I decided to leave this discussion and this thread as it appears to be pointless, there is simply no communication, no genuine interest

maybe someone will pick this discussion up sometime but I doubt...

my last word for You, Etienne - do Yourself a favour: contact the owner of a typical Carlsson speaker (perhaps the OA 52.3 - Stig Carlsson's personal favourite); it should be easy for You in Sweden, perhaps through carlssonpanet? Listen to it in a typical setup recommended by Carlsson and then take an impulse response measurement of that setup

look at Your results and then ask Yourself some questions - starting with "was Stig Carlsson a competent sound engineer and scientist?" and so on

good bye and good luck,
graaf
 
tired

graaf, I am a diy'er following this with interest. I have been experimenting and I am impressed with the results so far. Please don't assume there is no interest. I am sure that there are others like me as well. I have no technical ability to contribute. I intend to build various arrangements to try out. An OB firing vertically, combined with a compression tweeter on a waveguide aimed more or less toward the listener but off axis and angled toward the ceiling, seems very promising (all this low to the floor and against the wall). Thanks for your efforts. Jim
 
Oliver,
A 2 cm Persian might make a difference, yes! :D

Graaf,
I agree with you that our last discussion (from when I posted the impulse response) was not very productive/constructive.
Nevertheless, I found most of the discussion we had very interesting and instructive. I am thankful you started this thread; it opened my eyes to the reflection issues, which we don't seem to tackle from the same angle! ;)
I sincerely hope those will not be your last words to me and I am sorry if I expressed myself too harshly.

Regards,
Etienne
 
Re: tired

Jim G said:
graaf, I am a diy'er following this with interest. I have been experimenting and I am impressed with the results so far. Please don't assume there is no interest. I am sure that there are others like me as well. I have no technical ability to contribute. I intend to build various arrangements to try out. An OB firing vertically, combined with a compression tweeter on a waveguide aimed more or less toward the listener but off axis and angled toward the ceiling, seems very promising (all this low to the floor and against the wall). Thanks for your efforts. Jim

thanks Jim :)
sincerely I think that almost everything has already been said in this thread
I mean everything needed for an interested diy'er like You to start His own experiments, to have fun and to find the sound He likes better
I choose not to persuade anybody anymore
after all I am not marketing any product ;)
just sharing my experience and thoughts

I wrote above "almost everything has been said" because there is one more thing that should be "hypothesised" to make the reasoning behind my proposition complete
early lateral reflections are indeed detrimental to the sound quality in case of most typical front firing speakers
they seem not to be detrimental to the sound quality in case of omnidirectional speakers, especially of the Carlsson type
why? this is the question

best regards,
graaf
 
Etienne88 said:
I found most of the discussion we had very interesting and instructive. I am thankful you started this thread

and I thank You for Your contribution, especially for the measurements as I am not able to post any

Etienne88 said:

I sincerely hope those will not be your last words to me and I am sorry if I expressed myself too harshly.

no problem, no offence taken

best,
graaf
 
Etienne88 said:
Oliver,

Carpet is not the best absorber. It absorbs quite well at 4 kHz, with an absorption coefficient around 0,6 – 0,8, but as the frequency goes down, so does the absorption coefficient… It yields around 0 – 0,1 at 125 Hz, which is very close to nothing!

Nevertheless, according to the following quote from Lynn Olson's website, floor reflections would assist localization.

I did not find any information about the set up used by the BBC in the anechoic chamber, but I guess that they used traditional front firing loudspeakers.
From that you can draw your own conclusions! ;)

Regards,
Etienne

As I recall once reading, the BBC dip was developed with the LS3/5 monitor. While testing the design and developing its crossover, they came up with this theory. Actually, I had always assumed it was the LS3/5, but given that they were just doing the research and development for the overall concept of a BBC standardised studio monitor, it could really have been anything. Nonetheless, I agree that it probably was a normal front-firing speaker, and would add that it was probably a very early form of the LS3/5.
 
Re: Re: tired

graaf said:

there is one more thing that should be "hypothesised" to make the reasoning behind my proposition complete
early lateral reflections are indeed detrimental to the sound quality in case of most typical front firing speakers
they seem not to be detrimental to the sound quality in case of omnidirectional speakers, especially of the Carlsson type
why? this is the question

there is certainly something in Linkwitz's reasoning:

It is possible to reproduce a stereo recording in an ordinary living room such that listeners have the illusion that the two loudspeakers have disappeared and when they close their eyes, they can easily imagine to be present at the recording site.
The vast majority of loudspeakers that have been sold - the typical box speakers - can only produce this effect to a limited degree because of a fundamental limitation: they radiate sound into the room with different intensity at different frequencies and angles, though flat on axis. Thus the many reflections from room boundaries and surfaces become sonically colored in a way that is characteristic for this type of loudspeaker and we always recognize the sound as coming from a box rather than being live. It is the generic loudspeaker sound.

Loudspeakers with frequency independent, constant directionality such as omni, dipole or cardioid loudspeakers, create delayed replicas of the direct sound in a room and fewer colored reflections. Our ear/brain perceptual apparatus does not get confused by replicas. Instead it relegates them to the earlier learned acoustic behavior of the room and readily blankets that information and thereby the room. This automatic response is part of the Precedence Effect in psychoacoustics and it is essential for creating the illusion of "being there" and not just in your living room.

It has been a fascinating journey for me to come to this understanding. Early on, electrostatic panel loudspeakers had intrigued me because they seemed to do something fundamentally right when properly set up and this despite their obvious limitations. I can see in hindsight that a few loudspeaker designers had pointed to the benefits of omni-directional loudspeakers.

I wonder though whether it is a question of frequency domain or rather of time domain?
whether the reflections ought to be "delayed replicas of the direct sound" in terms of frequency content, or rather in terms of their time structure, that is, "delayed replicas" of the original (direct, the first) transient waveform?

best,
graaf
 
Interesting quote Graaf!
IMHO and to answer your questions, it is both: the reflections have to be delayed enough in time and they should have the same frequency content as the direct sound. The first criterion is there to assure good localisation. As Earl once said: "the principle direction is set very early 1-2 ms, but the stability (...) of that image is strongly influenced by the next 8-10 ms". Concerning the second criterion, both Earl and Siegfried are in agreement there: the power response curve and the direct response curve should match in order to avoid coloration.
One step in either direction is surely a step forward, but it will not get you to the summit! :)
The set up you proposed in this thread does delay some of the reflections. But the problem is that the kind of wide-range driver we use will start to beam somewhere around 1 or 2 kHz, with the front lobe becoming thinner and thinner as the frequency goes up, all that without mentioning the side lobes... In other words a full ranger will not have the same direct and power response curves!
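As a side note on the delay criterion: the extra delay of any reflection is simply its extra path length divided by the speed of sound. A minimal sketch with assumed room geometry (not Etienne's actual room) shows how strongly the driver height controls the floor-bounce delay:

```python
import math

c = 343.0   # speed of sound, m/s

def reflection_delay_ms(direct_path_m, reflected_path_m):
    """Extra arrival time of a reflection over the direct sound, in milliseconds."""
    return (reflected_path_m - direct_path_m) / c * 1000.0

# Assumed geometry: listener 3 m away, ears 1 m above the floor.
# Conventional speaker with the driver 1 m up - floor bounce via the mirror image:
d_direct = 3.0
d_bounce = math.hypot(3.0, 1.0 + 1.0)
print(f"driver at 1.0 m: floor bounce {reflection_delay_ms(d_direct, d_bounce):.2f} ms late")

# Same listener, driver only 0.2 m above the floor (floor-coupled):
d_direct2 = math.hypot(3.0, 1.0 - 0.2)
d_bounce2 = math.hypot(3.0, 1.0 + 0.2)
print(f"driver at 0.2 m: floor bounce {reflection_delay_ms(d_direct2, d_bounce2):.2f} ms late")
```

With the driver 1 m up, the floor bounce arrives roughly 1.8 ms after the direct sound; at 0.2 m it arrives within about 0.4 ms, well inside the first millisecond.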

Siegfried seems to say that dipole speakers have the same direct and power response curves, which I don't believe is true. According to his own website (here), dipole speakers show constant directivity (after equalisation for the 6 dB/octave roll-off) below v/2D (v being the speed of sound and D the shortest path between the front and the back side of the driver). Above that, the polar pattern looks like a flower with 4 petals at v/D, and then it changes again to I don't really know what... :confused: Nevertheless, I am sure that whatever driver is mounted as a dipole will start to beam at a certain frequency, thus ensuring that the direct and power response curves will not be the same. That is assuming one single driver. Then, if you consider a 2-, 3- or more-way dipole speaker and you cut the first driver low enough for it not to beam, the second driver will not show constant directivity unless you narrow the board on which it is mounted. To have CD up to 20 kHz, the path between the front and the back of the driver should theoretically be 8,5 cm... Which is not what he does!? what am i missing here???
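A quick sanity check of that last figure, taking the v/2D expression quoted above at face value (with v ≈ 343 m/s assumed):

```python
c = 343.0                     # speed of sound, m/s ("v" in the quote above)

f_upper = 20e3                # wanted upper limit of the constant-directivity range, Hz
D = c / (2 * f_upper)         # solve f = v / (2*D) for D
print(f"D for CD up to 20 kHz: {D * 1000:.1f} mm")             # about 8.6 mm

D_85cm = 0.085                # the 8,5 cm figure from the post above
print(f"CD limit for D = 8.5 cm: {c / (2 * D_85cm):.0f} Hz")   # about 2 kHz
```

So the front-to-back path for CD up to 20 kHz comes out in millimetres rather than centimetres, and an 8,5 cm path would keep constant directivity only up to roughly 2 kHz; that mismatch may be part of the confusion.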

This little reflection makes me wonder why Earl does not use a dipole for the frequencies below the ones covered by the waveguide... the WG has a 90º coverage and so do dipoles for frequencies below v/2D. Maybe the fact that the polar response of a dipole looks like a figure of eight is the reason for that: the back lobe might be unwanted? Earl, if you are still around, your knowledge would be appreciated! :D

Regards,
Etienne
 
Graaf,

I have a question to you!
In another thread, you mention that you have some resonance problems with your Fostex. You call the resonances "howling" and locate them around 2,8 kHz.
I personally experienced nasty resonances as well when I had my speakers lying down. I don't know what to call them nor where to place them on a frequency axis. But I can say that they occurred on female voices and on electric guitars when the latter were holding a note. When my speakers are standing, I don't experience this phenomenon...

Here comes the question part ;)
Have you listened to your speaker firing forward?
If yes, are you still hearing the resonances in such a configuration?
I wonder if the resonances are not related to the positioning of the driver, which implies a different direction for gravity, for example.

Regards,
Etienne
 
Etienne88 said:
Interesting quote Graaf!
IMHO and to answer your questions, it is both: the reflections have to be delayed enough in time and they should have the same frequency content as the direct sound.

well, I am not sure
first, by "time structure" I mean preserving the original (direct, the first) transient waveform
second, the frequency content of reflected sound depends not only on the polar response of the loudspeaker but first of all on the sound absorption characteristics of the reflecting surface

therefore I doubt whether it is really necessary for reflections to have the same frequency content as the direct sound
WHY?
because humans don't live and have never lived (during evolution) in perfectly reverberant spaces but rather in acoustically diversified environments with frequency-dependent sound absorption - reflections rarely have the same frequency content as the direct sound
but the precedence effect has to work in all those environments for us to be able to make any practical use of our sense of hearing

therefore, as a result of evolution in diversified environments, our hearing works in time domain

and we don't hear any "direct sound" AT ALL - the so called "direct sound" (= the first wavefront) establishes only the sense of direction of sound source

what we HEAR AS SOUND is in reality A SUM of the whole time-locked acoustic energy we receive during 30-50 ms after the arrival of the first wavefront

Etienne88 said:

In other words a full ranger will not have the same direct and power response curves!

but WHAT direct?
two points:
first, we are not listening "on axis",
second (and more important) - in fact we don't hear any direct sound at all, NEVER EVER
"direct response curve" is totally irrelevant from the perspective of physiology of human hearing
this is simply not how our human hearing mechanism works

Etienne88 said:

what am i missing here???

I don't know! :)
and I think that it is not that important
IMHO Linkwitz has interesting intuitions but that is all
because He is not into psychoacoustics
He is very reliable, very honest in what He does, He is even willing to openly question His own assumptions, to admit that He "was wrong" or that He "doesn't know"
this is very rare
BUT He still thinks like an engineer, and being even an extremely competent and knowledgeable engineer is not enough to understand what audio is all about, IMHO

Etienne88 said:

This little reflection makes me wonder why Earl does not use a dipole for the frequencies below the ones covered by the waveguide

because according to the results of Dr Geddes' research it is not needed
CD is critically important for 500 Hz and up

Etienne88 said:

I personally experienced nasty resonances as well when I had my speakers lying down.
(...)
Have you listened to your speaker firing forward?
If yes, are you still hearing the resonances in such a configuration?
I wonder if the resonances are not related to the positioning of the driver, which implies a different direction for gravity, for example.

yes indeed, You are right, my experience is the same
why does it happen? why do resonances become more audible in such a positioning?
I have a hypothesis
I think that Dr Toole explains this:
Reverberation alone could increase our sensitivity to a medium or low-Q resonance by about 10 dB - a huge effect. This latter fact explains why music is so much more satisfying in a reverberant space than outdoors - timbrally richer because we can hear more of the resonant subtleties. It also explains why the toughest test for loudspeaker accuracy is in a room with some reflections, and why headphones (which have no added reflections) have an inherent advantage, and can sound acceptable when measurements indicate that there really are resonant problems. Killing all early reflections with absorbers not only changes imaging, it also makes loudspeakers with poorly controlled directivity sound better. All interesting stuff for audio folk.

see: http://www.audioholics.com/educatio...an-hearing-phase-distortion-audibility-part-2

yes, the same mechanism that gives loudspeakers in such a positioning an advantage over conventional speakers in reproducing the "resonant subtleties" in the recording at the same time reveals more of their faults - their own resonances :(
in other words - it gives better results but at the same time is more demanding in terms of loudspeaker driver quality

best,
graaf
 
graaf said:
therefore, as a result of evolution in diversified environments, our hearing works in time domain

and we don't hear any "direct sound" AT ALL - the so called "direct sound" (= the first wavefront) establishes only the sense of direction of sound source ...
... second (and more important) - in fact we don't hear any direct sound at all, NEVER EVER
"direct response curve" is totally irrelevant from the perspective of physiology of human hearing
this is simply not how our human hearing mechanism works

I'm not sure about that. Hearing in the time domain surely has been helpful in the hostile environments of yesterday. But dialogue, which is a very important part of human evolution too, is more about the frequency domain. Distinguishing one person's voice from another does not rely on early or late reflections, but on differences in frequency, IMHO. How else do you explain our ability to distinguish/separate a certain voice from a talking crowd?
 
Rudolf said:

I'm not sure about that. Hearing in the time domain surely has been helpful in the hostile environments of yesterday. But dialogue, which is a very important part of human evolution too, is more about the frequency domain. Distinguishing one person's voice from another does not rely on early or late reflections, but on differences in frequency, IMHO. How else do you explain our ability to distinguish/separate a certain voice from a talking crowd?

good question
"our ability to distinguish/separate a certain voice from a talking crowd" depends on precedence (Haas) effect which is a time effect

hypothesis that "Distinguishing one persons voice from another (...) rely on (...) differencies in frequency IMHO" is completely false

this is all basic psychoacoustics lamentably ignored by most audio engineers

identification of a voice, not unlike identification of a musical instrument and of all other imaginable sound sources, depends on time factors: the shape of the initial transient, resonances, and envelope variations
as they say: "timbre is not static"
there is not only the initial transient; voices and sounds of musical instruments are wholly transient in character
frequency as such (time factors set aside) is irrelevant, it tells nothing in terms of identification, literally:

If we record an instrument sound and cut its initial transient, a saxophone may not be distinguishable from a piano, or a guitar from a flute. This is because the so-called "quasi-steady state" of the remaining sound is very similar among the instruments. But their tone beginning is very different, complicated and so very individual.

see: http://www.aip.org/149th/bader_microrhythm.htm

what is individual and as such identifiable depends wholly on time factors

You should also remember that the identification process - from the first wave arriving at the ears to the human being becoming aware of a particular known, identified sound source - is not simple and takes a substantial amount of time:

The ear has three integration times in terms of musical sound. After about 5 milliseconds (ms), we are able to perceive more than just a click. This is related to the human inner ear capacity of building up the critical bandwidth for frequency discrimination. The second important time is about 50ms. Here, we begin to hear distinct frequencies, not very accurately, but our ear gives us the chance to perceive a pitch. Then after about 250ms, the whole sound is perceived very well.

add to this that the direction of sound is established in the first millisecond

250 ms from the first wave arrival at the ears to the initial phase of sound source identification (the human being becoming aware of a particular, identified sound source)
250 ms is the "Amount of recordable time in echoic memory; that is, chunks of sound stimuli are recorded in echoic memory at this length of time"

see: http://www.humdrum.org/Music839D/Notes/timeline.html

from the perspective of our consciousness this is a very short time, we are simply not aware of what is going on until after 250 ms
therefore we think that we can identify a sound source immediately ;)
but from the perspective of acoustics and of human hearing physiology it is quite a long time, many things happen well before we become aware of our friend talking in a crowded room

time factors are critical for hearing and as such for sound reproduction as well
from the perspective of information theory the steady-state signals (such as sine waves) used by audio engineers to test the quality of sound reproduction equipment are useless, completely irrelevant
as a steady-state signal carries no information
and hearing is about processing information

interesting comment: www.celticaudio.co.uk/articles/science.pdf

best regards,
graaf
 
graaf,
I am learning from your reaction that my previous comment was quite misleading. My fault!:rolleyes:

What I wanted to argue in the first place was not frequency domain against time domain, but the irrelevance of the direct sound (first wave front) for the listening experience.

You have gathered an impressive amount of scientific proof for the relevance of transient effects in the listening experience. No question about the results. Let's postulate that those 250 msec are needed to perceive a sound in its "full richness". My belief is that, starting from the arrival of the first wavefront, the ear/brain will scan all following acoustic events for their relevance with regard to that first wavefront. Only if the brain can relate later spatiotemporal, directional and frequency events (reflections etc.) logically to that first wavefront will they be integrated into the "complete" sound. Otherwise we would not be able to distinguish different sounds which are less than 250 ms apart.

So I feel that the first wavefront is highly relevant - not just with regard to the direction of the sound source. It does not give the full picture, but it "sharpens" the subsequent combination of all other acoustic cues in the brain.

This seems in line with my personal experience regarding non-direct sound. To me, the multidirectional radiation from omnipoles or dipoles always sounds less sharp and less defined (but spatially more interesting) than that from a direct radiator.

What do you think about it?
 
Rudolf said:
graaf,
What I wanted to argue in the first place was not frequency domain against time domain, but the irrelevance of the direct sound (first wave front) for the listening experience.
You have gathered an impressive amount of scientific proof for the relevance of transient effects in the listening experience. No question about the results. Let's postulate that those 250 msec are needed to perceive a sound in its "full richness". My belief is that, starting from the arrival of the first wavefront, the ear/brain will scan all following acoustic events for their relevance with regard to that first wavefront. Only if the brain can relate later spatiotemporal, directional and frequency events (reflections etc.) logically to that first wavefront will they be integrated into the "complete" sound. Otherwise we would not be able to distinguish different sounds which are less than 250 ms apart.
So I feel that the first wavefront is highly relevant - not just with regard to the direction of the sound source. It does not give the full picture, but it "sharpens" the subsequent combination of all other acoustic cues in the brain.
This seems in line with my personal experience regarding non-direct sound. To me, the multidirectional radiation from omnipoles or dipoles always sounds less sharp and less defined (but spatially more interesting) than that from a direct radiator.
What do you think about it?

basically I agree with You

my proposition that:
we don't hear any "direct sound" AT ALL - the so called "direct sound" (= the first wavefront) establishes only the sense of direction of sound source

has to be understood as a direct response to Etienne88's proposition that:
the reflections (…) should have the same frequency content as the direct sound

all I wanted to say was that human hearing does not discriminate between "frequency content of the direct sound" and "frequency content of the (early) reflected sound"

I absolutely agree with You that the "integration" of the reflected sound into the experienced "complete sound" coming from a certain direction requires a particular relation of the reflected sound to the direct sound, in the sense of preserving a certain common characteristic

but I believe that it is not the "frequency content"
WHY?
simply because it is NOT YET "established" IN physical REALITY!

it takes some time for the real tone from a voice or a musical instrument to build up and sound out as defined sound event with particular spectral (frequency) content

"Shortest possible length of a spoken English consonant (voiced stop consonants)" is 30 ms
"Fastest perceptual musical separation possible" and also "the time needed to cortically process musical elements" is 100 ms
"Shortest vowel length in normal speech" is 200 ms
see: http://www.humdrum.org/Music839D/Notes/timeline.html

moreover:
In the first 50 to 100 milliseconds of an instrumental sound, its spectrum is very unstable. This is due to inertia, the law of motion that states that things at rest tend to stay at rest unless acted upon by an external force. For example, the air column inside a saxophone has a certain amount of inertial resistance that must be overcome before it will vibrate properly. During the first 50 milliseconds of a note, that inertial battle produces wild spectral fluctuations called the initial transient. Pitch goes haywire, chaos ensues, and then you hear "saxophone."
see: http://emusician.com/mag/emusic_spectral_vistas/

so the "direct sound (first wave front)" is not even analyzed by the brain in terms of frequency content because the frequency content is not yet established in the sound source itself! There are only "wild spectral fluctuations called the initial transient"

what can possibly serve as a common characteristic enabling the brain to compare, analyse and integrate reflected waves as relating to the first wave is perhaps the "initial waveform", the shape of the first "transient attack"
perhaps this is not a very reliable mechanism and can be confusing, so the brain takes more samples before even the sense of direction is established?
we all know that in real life it takes some time and concentration to point to the direction of a sound source in darkness or with eyes closed
but in real life it is also very important that:
"Fastest perceptual musical separation possible" and also "the time needed to cortically process musical elements" is 100 ms

so the brain can take its time and many samples before the cortical process leading to conscious experience really starts
and as it has been noted, for us to become fully aware of all this around 250 ms is needed!

this is more than the RT60 of a typical furnished living room (typical listening room), which IIRC is around 200 ms
it means that a short sound can have practically stopped (the sound pressure can be -60 dB relative to the sound pressure of the initial wave) BEFORE we become aware of what it is!

how about that! :D
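The arithmetic behind that claim, as a one-line sketch (the RT60 and the 250 ms figure are the values quoted above; a simple exponential decay and a very short sound are assumed):

```python
rt60_ms = 200.0       # assumed RT60 of a furnished living room, as above
awareness_ms = 250.0  # time quoted above for becoming aware of an identified sound

decay_rate_db_per_ms = 60.0 / rt60_ms                  # 0.3 dB of decay per millisecond
level_db = -decay_rate_db_per_ms * awareness_ms
print(f"reverberant level at the moment of 'awareness': {level_db:.0f} dB")   # about -75 dB
```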

the timeline of the process of hearing speech and music in a typical living room seems to be as follows:
1) the brain starts immediately (<<1 ms) at the arrival of the initial wavefront at the ear (sensing the fluctuation of sound pressure) to collect all the data that might be relevant
2) then the sound event – a tone from a voice or instrument – gradually builds up in a process of fighting inertia, a process that can last even 100 ms (or more?); the brain continues to collect all data, comparing and selecting what is relevant. The selection is based not only on the evolutionarily formed physiology of the sense of hearing but also on the particular person's previous experience!! THIS IS VERY IMPORTANT! I have a 3-week-old baby at home. Does my little son hear sounds as I can hear them? I doubt it :) He rather learns to hear. Well, a newly born child learns to see as well. Hard to believe, but they don't see as we do. This is why they cross their eyes so often. Perhaps they "cross-eye" with their ears as well ;)
3) then a short sound can end around 200 ms, that is, before we become aware of what it is at around 250 ms after the start of the whole process

well, sort of ;)
this is not a real "theory"
I was just thinking while I was writing :)

so
"direct sound" is VERY relevant

BUT "frequency content" of "direct sound" corresponding to "frequency response on axis" is COMPETELY irrelevant
WHY?
because there is no such thing really :D
the sense of "where" is established well before the sense of "what"
the so-called "localization phase" of the hearing process ends before the first millisecond after the arrival of the initial transient at the ear - this is a consequence of binaural hearing and of the typical size of the human head (the distance between the ears)

so the sense of the "direct sound", as a basis for comparison so that reflections can be compared and selected for "integration into one perceived sound event", is formed in < 1 ms, that is, when we cannot even talk about any defined frequency content of the physical sound source!
there are only "wild spectral fluctuations called the initial transient"
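As an aside, the "before the first millisecond" figure follows from simple geometry. A back-of-the-envelope sketch, with an assumed typical acoustic path difference between the two ears:

```python
c = 343.0          # speed of sound, m/s
head_path = 0.21   # assumed path difference between the ears for a sound arriving
                   # from the side, in metres (roughly head width plus diffraction)

max_itd_ms = head_path / c * 1000.0
print(f"maximum interaural time difference: {max_itd_ms:.2f} ms")   # roughly 0.6 ms
```

So the largest interaural time difference the brain ever has to evaluate for localization is on the order of 0.6 ms, well under one millisecond.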

in my hypothesis, what can serve as a basis for comparison is something that could perhaps be called the "initial transient wave attack curve shape", or "the shape of the transient impulse rise curve"

which is the very thing corrupted by every crossover filter in a multiway loudspeaker, where the midrange lags behind the tweeter and the woofer lags behind the midrange :(

I wonder whether the time response (as seen in a step response measurement) of a multiway loudspeaker off axis differs significantly from its time response on axis?
does the kind of filter used affect this?

I suppose "yes and yes" but I really don't know
If "yes", then we have the answer why reflections in the case of multiway loudspeakers are very often detrimental to the quality of the spatial reproduction of sound
the precedence effect and the "integration" of reflected sound cannot work properly because the reflected sound wave is significantly different from the first one
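For illustration, the crossover point above can be simulated. A minimal sketch with a generic textbook 4th-order Linkwitz-Riley crossover (the 2 kHz corner and 48 kHz sample rate are arbitrary assumptions, not any particular speaker discussed here): the two branches sum to a flat magnitude response, yet the summed step response is no longer a step, because the tweeter branch leads and the woofer branch lags.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 48_000        # sample rate, Hz (assumed)
fc = 2_000.0       # crossover frequency, Hz (assumed)

def lr4(kind, x):
    """4th-order Linkwitz-Riley branch: two cascaded 2nd-order Butterworth filters."""
    b, a = butter(2, fc / (fs / 2), btype=kind)
    return lfilter(b, a, lfilter(b, a, x))

step = np.ones(int(0.005 * fs))      # a 5 ms unit step as a crude "transient attack"

woofer = lr4("low", step)
tweeter = lr4("high", step)
summed = woofer + tweeter            # idealised acoustic sum at the listener

# The sum settles to 1 (flat magnitude response), but its leading edge is smeared:
# the first samples are carried almost entirely by the tweeter branch, and the
# woofer branch only catches up later, so the step shape is not preserved.
print("summed value after 0.1 ms:", round(float(summed[int(0.0001 * fs)]), 3))
print("summed value after 3 ms:  ", round(float(summed[int(0.003 * fs)]), 3))
```

Whether this smearing also differs between on-axis and off-axis is exactly the open question above; the sketch only shows that the transient shape is already altered on axis.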

what do You think?

best regards,
graaf
 
graaf,
first and OT: congratulations on your latest "loud speaker". :cloud9:
He might well be the most demanding one you have ever built! :cheers:
I should know, because mine has already changed from tweeter to woofer - a long time ago. :D

Back to topic: You have presented a really big and valuable amount of information in your last posts. I appreciate that very much. Have to dig deeper into it.

I would like you to think about two observations that came to my mind when reading your argumentation: one cycle of a 343 Hz wave takes about 3 msec, one cycle of a 3.4 kHz wave about 0.3 msec. So your initial transient attack (if we talk about ~50 msec) is quite far away from the common step or impulse response transient, which even for a fullrange speaker is mostly done in less than 5 msec. In 50 msec the "intonation" process of an instrument could move from a few cycles of 343 Hz through several cycles of 850 Hz to dozens of cycles of 3.4 kHz. I don't know how many cycles the cochlea needs to identify a single frequency, but even in this "initial transient attack" time frame we are already speaking about frequency response. I have played the flute in younger days and - yes - a trained ear can actually hear the buildup of a tone in those 250 msec. Not in a rational sense of "understanding" or "analysing", but in a cognitive way.
In my view you are still arguing in the realm of "transient" response when it is already "changing frequency" response. :confused:
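A trivial check of those numbers (the period of a tone is just 1/f; the frequencies are the ones mentioned above):

```python
# Period of each tone and how many full cycles fit into the ~50 ms window
for f_hz in (343.0, 850.0, 3400.0):
    period_ms = 1000.0 / f_hz
    print(f"{f_hz:6.0f} Hz: period {period_ms:.2f} ms, about {50.0 / period_ms:.0f} cycles in 50 ms")
```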

Second observation: My dipole speakers work as dipoles up to ~ 2 kHz, where the midrange cone drivers cross to dome tweeters. If I equalize for a linear response in the nearfield, it sounds ok in the nearfield. But at the listening position the highs are a bit dull and they measure - as is to be expected - some dBs attenuated, because there is no contribution from the back of the tweeter.
If I equalize for a linear response at the listening position, the highs get too bright for my ears. I think that's because now the highs are emphasized in the direct response.
Only when I give the reverberant field the same frequency response as the direct field (by positioning added tweeters to the rear) will this mismatch be cured - I believe.