The Advantages of Floor Coupled Up-Firing Speakers - Page 22 - diyAudio
#211 - 13th July 2008, 07:27 AM
graaf (diyAudio Member, Poland; joined Jun 2007)
Re: Re: tired

Quote:
Originally posted by graaf

there is one more thing that should be "hypothesised" to make the reasoning behind my proposition complete:
early lateral reflections are indeed detrimental to the sound quality in the case of most typical front-firing speakers
they seem not to be detrimental to the sound quality in the case of omnidirectional speakers, especially of the Carlsson type
why? that is the question
there is certainly something in Linkwitz's reasoning:

Quote:
It is possible to reproduce a stereo recording in an ordinary living room such that listeners have the illusion that the two loudspeakers have disappeared and when they close their eyes, they can easily imagine to be present at the recording site.
The vast majority of loudspeakers that have been sold - the typical box speakers - can only produce this effect to a limited degree because of a fundamental limitation: they radiate sound into the room with different intensity at different frequencies and angles, though flat on axis. Thus the many reflections from room boundaries and surfaces become sonically colored in a way that is characteristic for this type of loudspeaker and we always recognize the sound as coming from a box rather than being live. It is the generic loudspeaker sound.

Loudspeakers with frequency independent, constant directionality such as omni, dipole or cardioid loudspeakers, create delayed replicas of the direct sound in a room and fewer colored reflections. Our ear/brain perceptual apparatus does not get confused by replicas. Instead it relegates them to the earlier learned acoustic behavior of the room and readily blankets that information and thereby the room. This automatic response is part of the Precedence Effect in psychoacoustics and it is essential for creating the illusion of "being there" and not just in your living room.

It has been a fascinating journey for me to come to this understanding. Early on, electrostatic panel loudspeakers had intrigued me because they seemed to do something fundamentally right when properly set up and this despite their obvious limitations. I can see in hindsight that a few loudspeaker designers had pointed to the benefits of omni-directional loudspeakers.
I wonder though whether it is a question of the frequency domain or rather of the time domain?
whether the reflections ought to be "delayed replicas of the direct sound" in terms of frequency content, or rather in terms of their time structure, that is, "delayed replicas" of the original (direct, first) transient waveform?

best,
graaf
#212 - 14th July 2008, 08:15 PM
Etienne88 (diyAudio Member; joined Apr 2007)
Interesting quote, Graaf!
IMHO and to answer your questions, it is both: the reflections have to be delayed enough in time and they should have the same frequency content as the direct sound. The first criterion ensures good localisation. As Earl once said: "the principle direction is set very early 1-2 ms, but the stability (...) of that image is strongly influenced by the next 8-10 ms". Concerning the second criterion, both Earl and Siegfried are in agreement: the power response curve and the direct response curve should match in order to avoid coloration.
One step in either direction is surely a step forward, but it will not get you to the summit by itself!
The set-up you proposed in this thread does delay some of the reflections. But the problem is that the kind of wide-range driver we use will start to beam somewhere around 1 or 2 kHz, with the front lobe becoming thinner and thinner as the frequency goes up, and that is without even mentioning the side lobes... In other words, a full-ranger will not have the same direct and power response curves!

Siegfried seems to say that dipole speakers have the same direct and power response curves, which I don't believe is true. According to his own website (here), dipole speakers show constant directivity (after equalisation for the 6 dB/octave roll-off) below v/2D (v being the speed of sound and D the shortest path between the front and the back side of the driver). Above that, the polar pattern looks like a flower with 4 petals at v/D, and then it changes again to I don't really know what... Nevertheless, I am sure that any driver mounted as a dipole will start to beam at a certain frequency, ensuring that the direct and power response curves will not be the same. That is assuming a single driver. Then, if you consider a 2-, 3- or more-way dipole speaker and cut the first driver low enough for it not to beam, the second driver will not show constant directivity unless you narrow the baffle on which it is mounted. To have CD up to 20 kHz, the path between the front and the back of the driver should theoretically be about 8.5 mm... Which is not what he does!? What am I missing here???
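For what it's worth, the v/2D relation is easy to check numerically. A minimal sketch (assuming c = 343 m/s; the function names are mine, for illustration only):

```python
# Dipole constant-directivity limit f = c / (2 * D), where D is the
# front-to-back acoustic path length, as described on Linkwitz's site.
C = 343.0  # speed of sound in air, m/s (assumed value)

def dipole_cd_limit_hz(path_m: float) -> float:
    """Highest frequency at which the dipole pattern stays constant."""
    return C / (2.0 * path_m)

def path_for_cd_m(f_hz: float) -> float:
    """Front-to-back path needed for constant directivity up to f_hz."""
    return C / (2.0 * f_hz)

print(f"D = 8.5 cm   -> CD up to about {dipole_cd_limit_hz(0.085):.0f} Hz")
print(f"CD to 20 kHz -> D of about {path_for_cd_m(20_000) * 1000:.1f} mm")
```

Note that an 8.5 cm path keeps constant directivity only to about 2 kHz; reaching 20 kHz needs a path of well under a centimetre.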

This little reflection makes me wonder why Earl does not use a dipole for the frequencies below the ones covered by the waveguide... the WG has 90° coverage, and so do dipoles for frequencies below v/2D. Maybe the fact that the polar response of a dipole looks like a figure of eight is the reason: the back lobe might be unwanted? Earl, if you are still around, your knowledge would be appreciated!

Regards,
Etienne
#213 - 14th July 2008, 08:29 PM
Etienne88 (diyAudio Member; joined Apr 2007)
Graaf,

I have a question for you!
In another thread, you mention that you have some resonance problems with your Fostex. You call the resonances howling and locate them around 2.8 kHz.
I personally experienced nasty resonances as well when I had my speakers lying down. I don't know what to call them, nor where to place them on a frequency axis. But I can say that they occurred on female voices and on electric guitars when the latter were holding a note. When my speakers are standing, I don't experience this phenomenon...

Here comes the question part:
Have you listened to your speakers firing forward?
If yes, do you still hear the resonances in such a configuration?
I wonder if the resonances are not related to the positioning of the driver, which implies a different direction for gravity, for example.

Regards,
Etienne
#214 - 15th July 2008, 01:57 PM
graaf (diyAudio Member, Poland; joined Jun 2007)
Quote:
Originally posted by Etienne88
Interesting quote Graaf!
IMHO and to answer your questions, it is both: the reflections have to be delayed enough in time and they should have the same frequency content as the direct sound.
well, I am not sure
first, by "time structure" I mean preservation of the original (direct, first) transient waveform
second, the frequency content of reflected sound depends not only on the polar response of the loudspeaker but first of all on the sound absorption characteristics of the reflecting surface

therefore I doubt whether it is really necessary for reflections to have the same frequency content as the direct sound
WHY?
because humans don't live and have never lived (during evolution) in perfectly reverberant spaces but rather in acoustically diversified environments with frequency-dependent sound absorption - reflections rarely have the same frequency content as the direct sound
but the precedence effect has to work in all those environments for us to be able to make any practical use of our sense of hearing

therefore, as a result of evolution in diversified environments, our hearing works in the time domain

and we don't hear any "direct sound" AT ALL - the so-called "direct sound" (= the first wavefront) establishes only the sense of direction of the sound source

what we HEAR AS SOUND is in reality A SUM of the whole time-locked acoustic energy we receive during 30-50 ms after the arrival of the first wavefront

Quote:
Originally posted by Etienne88

In other words a full ranger will not have the same direct and power response curves!
but WHAT direct sound?
two points:
first, we are not listening "on axis",
second (and more important) - in fact we don't hear any direct sound at all, NEVER EVER
the "direct response curve" is totally irrelevant from the perspective of the physiology of human hearing
this is simply not how our hearing mechanism works

Quote:
Originally posted by Etienne88

what am i missing here???
I don't know!
and I think that it is not that important
IMHO Linkwitz has interesting intuitions but that is all
because He is not into psychoacoustics
He is very reliable, very honest in what He does, He is even willing to openly question His own assumptions, to admit that He "was wrong" or that He "doesn't know"
this is very rare
BUT He still thinks like an engineer, and it is not enough to be even an extremely competent and knowledgeable engineer to understand what audio is all about IMHO

Quote:
Originally posted by Etienne88

This little reflection make me wonder why Earl does not use dipole for the frequencies below the ones covered by the waveguide
because according to the results of Dr Geddes' research it is not needed
CD is critically important for 500 Hz and up

Quote:
Originally posted by Etienne88

I personally experienced nasty resonances as well when I had my speakers lying down.
(...)
Have you listen to your speaker firing forward?
If yes, are you still hearing the resonances in such a configuration?
I wonder if the resonances are not related to the positioning of the driver which implies different direction for gravity, for example.
yes indeed, You are right, my experience is the same
why does it happen? why do resonances become more audible in such positioning?
I have a hypothesis
I think that Dr Toole explains this:
Quote:
Reverberation alone could increase our sensitivity to a medium or low-Q resonance by about 10 dB - a huge effect. This latter fact explains why music is so much more satisfying in a reverberant space than outdoors - timbrally richer because we can hear more of the resonant subtleties. It also explains why the toughest test for loudspeaker accuracy is in a room with some reflections, and why headphones (which have no added reflections) have an inherent advantage, and can sound acceptable when measurements indicate that there really are resonant problems. Killing all early reflections with absorbers not only changes imaging, it also makes loudspeakers with poorly controlled directivity sound better. All interesting stuff for audio folk.
see: http://www.audioholics.com/education...ibility-part-2

yes, the same mechanism that gives loudspeakers in such positioning an advantage over conventional speakers in reproducing the "resonant subtleties" in the recording at the same time reveals more of their faults - their own resonances
in other words - it gives better results but at the same time is more demanding in terms of loudspeaker driver quality

best,
graaf
#215 - 15th July 2008, 03:13 PM
Rudolf (diyAudio Member, Germany; joined Mar 2003)
Quote:
Originally posted by graaf

therefore, as a result of evolution in diversified environments, our hearing works in time domain

and we don't hear any "direct sound" AT ALL - the so called "direct sound" (= the first wavefront) establishes only the sense of direction of sound source ...
... second (and more important) - in fact we don't hear any direct sound at all, NEVER EVER
"direct response curve" is totally irrelevant from the perspective of physiology of human hearing
this is simply not like our human hearing mechanism works
I'm not sure about that. Hearing in the time domain surely has been helpful in the hostile environments of the past. But dialogue, which is a very important part of human evolution too, is more about the frequency domain. Distinguishing one person's voice from another does not rely on early or late reflections, but on differences in frequency IMHO. How else do you explain our ability to distinguish/separate a certain voice from a talking crowd?
__________________
www.dipolplus.de
#216 - 16th July 2008, 07:16 AM
graaf (diyAudio Member, Poland; joined Jun 2007)
Quote:
Originally posted by Rudolf

I'm not sure about that. Hearing in the time domain surely has been helpful in the hostile environments of the past. But dialogue, which is a very important part of human evolution too, is more about the frequency domain. Distinguishing one person's voice from another does not rely on early or late reflections, but on differences in frequency IMHO. How else do you explain our ability to distinguish/separate a certain voice from a talking crowd?
good question
"our ability to distinguish/separate a certain voice from a talking crowd" depends on the precedence (Haas) effect, which is a time effect

the hypothesis that distinguishing one person's voice from another relies on differences in frequency is completely false

this is all basic psychoacoustics, lamentably ignored by most audio engineers

the identification of a voice, not unlike the identification of a musical instrument and all other imaginable sound sources, depends on time factors: the shape of the initial transient, resonances, and envelope variations
as they say: "timbre is not static"
there is not only the initial transient - voices and sounds of musical instruments are wholly transient in character
frequency as such (time factors set aside) is irrelevant, tells nothing in terms of identification, literally:

Quote:
If we record an instrument sound and cut its initial transient, a saxophone may not be distinguishable from a piano, or a guitar from a flute. This is because the so-called "quasi-steady state" of the remaining sound is very similar among the instruments. But their tone beginning is very different, complicated and so very individual.
see: http://www.aip.org/149th/bader_microrhythm.htm

what is individual and as such identifiable depends wholly on time factors

You should also remember that the identification process - from the first wave arrival at the ears to the human being becoming aware of a particular known, identified sound source - is not simple and takes a substantial amount of time:

Quote:
The ear has three integration times in terms of musical sound. After about 5 milliseconds (ms), we are able to perceive more than just a click. This is related to the human inner ear capacity of building up the critical bandwidth for frequency discrimination. The second important time is about 50ms. Here, we begin to hear distinct frequencies, not very accurately, but our ear gives us the chance to perceive a pitch. Then after about 250ms, the whole sound is perceived very well.
add to this that the direction of sound is established in the first millisecond

250 ms from the first wave arrival at the ears to the initial phase of a sound source identification (human being becoming aware of a particular, identified sound source)
250 ms is "Amount of recordable time in echoic memory; that is, chunks of sound stimuli are recorded in echoic memory at this length of time"

see: http://www.humdrum.org/Music839D/Notes/timeline.html

from the perspective of our consciousness this is a very short time - we are simply not aware of what is going on until after 250 ms
therefore we think that we can identify a sound source immediately
but from the perspective of acoustics and of human hearing physiology it is quite a long time; many things happen well before we become aware of our friend talking in a crowded room

time factors are critical for hearing and as such for sound reproduction as well
from the perspective of information theory, the steady-state signals (such as sine waves) used by audio engineers to test the quality of sound reproduction equipment are useless, completely irrelevant
as a steady-state signal carries no information
and hearing is about processing information

interesting comment: www.celticaudio.co.uk/articles/science.pdf

best regards,
graaf
#217 - 16th July 2008, 09:53 AM
Rudolf (diyAudio Member, Germany; joined Mar 2003)
graaf,
I am learning from your reaction that my previous comment was quite misleading. My fault!

What I wanted to argue in the first place was not the frequency domain against the time domain, but the irrelevance of the direct sound (first wavefront) for the listening experience.

You have gathered an impressive amount of scientific proof for the relevance of transient effects in the listening experience. No question about the results. Let's postulate that those 250 msec are needed to perceive a sound in its "full richness". My belief is that, starting from the arrival of the first wavefront, the ear/brain will scan all following acoustic events for their relevance with regard to that first wavefront. Only if the brain can relate later spatiotemporal, directional and frequency events (reflections etc.) logically to that first wavefront will they be integrated into the "complete" sound. Otherwise we would not be able to distinguish different sounds which are less than 250 ms apart.

So I feel that the first wavefront is highly relevant - not just with regard to the direction of the sound source. It does not give the full picture, but it "sharpens" the subsequent combination of all other acoustic cues in the brain.

This seems in line with my personal experience regarding non-direct sound. To me the multidirectional radiation from omnipoles or dipoles always sounds less sharp and less defined (but spatially more interesting) than that from a direct radiator.

What do you think about it?
__________________
www.dipolplus.de
#218 - 16th July 2008, 07:22 PM
graaf (diyAudio Member, Poland; joined Jun 2007)
Quote:
Originally posted by Rudolf
graaf,
What I wanted to argue in the first place was not the frequency domain against the time domain, but the irrelevance of the direct sound (first wavefront) for the listening experience.
You have gathered an impressive amount of scientific proof for the relevance of transient effects in the listening experience. No question about the results. Let's postulate that those 250 msec are needed to perceive a sound in its "full richness". My belief is that, starting from the arrival of the first wavefront, the ear/brain will scan all following acoustic events for their relevance with regard to that first wavefront. Only if the brain can relate later spatiotemporal, directional and frequency events (reflections etc.) logically to that first wavefront will they be integrated into the "complete" sound. Otherwise we would not be able to distinguish different sounds which are less than 250 ms apart.
So I feel that the first wavefront is highly relevant - not just with regard to the direction of the sound source. It does not give the full picture, but it "sharpens" the subsequent combination of all other acoustic cues in the brain.
This seems in line with my personal experience regarding non-direct sound. To me the multidirectional radiation from omnipoles or dipoles always sounds less sharp and less defined (but spatially more interesting) than that from a direct radiator.
What do you think about it?
basically I agree with You

my proposition that:
Quote:
we don't hear any "direct sound" AT ALL - the so called "direct sound" (= the first wavefront) establishes only the sense of direction of sound source
has to be understood as a direct response to Etienne88 proposition that:
Quote:
the reflections () should have the same frequency content as the direct sound
all I wanted to say was that human hearing does not discriminate between the "frequency content of the direct sound" and the "frequency content of the (early) reflected sound"

I absolutely agree with You that "integration" of the reflected sound into the experienced "complete sound" coming from a certain direction requires a particular relation of the reflected sound to the direct sound, in the sense of preserving a certain common characteristic

but I believe that it is not the "frequency content"
WHY?
simply because it is NOT YET "established" IN physical REALITY!

it takes some time for a real tone from a voice or a musical instrument to build up and sound out as a defined sound event with particular spectral (frequency) content

"Shortest possible length of a spoken English consonant (voiced stop consonants)" is 30 ms
"Fastest perceptual musical separation possible" and also "the time needed to cortically process musical elements" is 100 ms
"Shortest vowel length in normal speech" is 200 ms
see: http://www.humdrum.org/Music839D/Notes/timeline.html

moreover:
Quote:
In the first 50 to 100 milliseconds of an instrumental sound, its spectrum is very unstable. This is due to inertia, the law of motion that states that things at rest tend to stay at rest unless acted upon by an external force. For example, the air column inside a saxophone has a certain amount of inertial resistance that must be overcome before it will vibrate properly. During the first 50 milliseconds of a note, that inertial battle produces wild spectral fluctuations called the initial transient. Pitch goes haywire, chaos ensues, and then you hear "saxophone."
see: http://emusician.com/mag/emusic_spectral_vistas/

so the "direct sound (first wavefront)" is not even analyzed by the brain in terms of frequency content, because the frequency content is not yet established in the sound source itself! There are only "wild spectral fluctuations called the initial transient"

what can possibly serve as a common characteristic enabling the brain to compare, analyse and integrate reflected waves as relating to the first wave is perhaps the "initial waveform", the shape of the first "transient attack"
perhaps this is not a very reliable mechanism and can be confusing, so the brain takes more samples before even the sense of direction is established?
we all know that in real life it takes some time and concentration to point out the direction of a sound source in darkness or with eyes closed
but in real life also is very important that:
"Fastest perceptual musical separation possible" and also "the time needed to cortically process musical elements" is 100 ms

so the brain can take its time and many samples before the cortical process leading to conscious experience really starts
and as has been noted, for us to become fully aware of all this, around 250 ms is needed!

this is more than the RT60 of a typical furnished living room (typical listening room), which IIRC is around 200 ms
it means that a short sound can practically stop (the sound pressure can be -60 dB relative to the initial wave sound pressure) BEFORE we become aware of what it is!

how about that!
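the arithmetic behind this can be sketched in a few lines (a minimal sketch, assuming an idealised straight-line decay of 60 dB per RT60; the function name is mine):

```python
# Idealised linear decay in dB: the reverberant level falls 60 dB
# over one RT60, so a 200 ms room leaves a short sound 75 dB down
# by the 250 ms mark, before conscious identification completes.
def decay_db(t_ms: float, rt60_ms: float = 200.0) -> float:
    """Level (dB re the initial wavefront) t_ms after the sound stops."""
    return -60.0 * t_ms / rt60_ms

for t in (50, 100, 200, 250):
    print(f"t = {t:3d} ms -> {decay_db(t):6.1f} dB")
```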

the timeline of the process of hearing speech and music in a typical living room seems to be as follows:
1) the brain starts immediately (<<1 ms) at the arrival of the initial wavefront at the ear (sensing a fluctuation of sound pressure) to collect all the data that might be relevant
2) then the sound event - a tone from a voice or instrument - gradually builds up in a process of fighting inertia, a process that can last even 100 ms (or more?); the brain continues to collect all data, comparing and selecting what is relevant. The selection is based not only on the evolutionarily formed physiology of the sense of hearing but also on the particular person's previous experience!! THIS IS VERY IMPORTANT! I have a 3-week-old baby at home. Does my little son hear sounds as I hear them? I doubt it. He rather learns to hear. Well, a newborn child learns to see as well. Hard to believe, but they don't see as we do. This is why they cross their eyes so often. Perhaps they "cross-eye" also with their ears
3) then the short sound can end around 200 ms, that is, before we become aware of what it is at around 250 ms after the start of the whole process

well, sort of
this is not a real "theory"
I was just thinking while writing

so
"direct sound" is VERY relevant

BUT the "frequency content" of the "direct sound" corresponding to the "frequency response on axis" is COMPLETELY irrelevant
WHY?
because there really is no such thing
the sense of "where" is established well before the sense of "what"
the so-called "localization phase" of the hearing process ends within the first millisecond after the initial transient arrives at the ear - this is a consequence of binaural hearing and of the typical size of the human head (the distance between the ears)
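the sub-millisecond figure follows directly from head geometry; a minimal sketch using the classic Woodworth spherical-head approximation (head radius and speed of sound are assumed typical values):

```python
import math

C = 343.0               # speed of sound, m/s (assumed)
HEAD_RADIUS_M = 0.0875  # typical adult head radius, m (assumed)

def itd_us(azimuth_deg: float) -> float:
    """Woodworth interaural time difference, in microseconds:
    ITD = (a / c) * (theta + sin(theta)) for a spherical head."""
    theta = math.radians(azimuth_deg)
    return HEAD_RADIUS_M / C * (theta + math.sin(theta)) * 1e6

print(f"ITD straight ahead:    {itd_us(0):.0f} us")
print(f"ITD fully to one side: {itd_us(90):.0f} us")
```

even the largest interaural delay comes out well under one millisecond, which is why the direction can be settled so early.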

so the sense of the "direct sound", as a basis against which reflections can be compared and selected for "integration into one perceived sound event", is formed in < 1 ms - that is, when we cannot even talk about any defined frequency content of a physical sound source!
there are only "wild spectral fluctuations called the initial transient"

in my hypothesis, what can serve as a basis for comparison is something that perhaps can be called the "initial transient wave attack curve shape", or "the shape of the transient impulse rise curve"

which is the very thing corrupted by every crossover filter in a multiway loudspeaker, where the midrange lags behind the tweeter and the woofer lags behind the midrange

I wonder whether the time response (as can be seen in a step response measurement) off axis of a multiway loudspeaker significantly differs from its time response on axis?
does the kind of filter used affect this?

I suppose "yes and yes" but I really don't know
if "yes", then we have the answer why reflections in the case of multiway loudspeakers are so often detrimental to the quality of the spatial reproduction of sound:
the precedence effect and "integration" of the reflected sound cannot work properly because the reflected sound wave is significantly different from the first one
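the on-axis part of this lag is at least easy to illustrate. Below is a sketch of the electrical step response of a hypothetical 4th-order Linkwitz-Riley crossover at 2 kHz (scipy assumed; this models only the filters, not real drivers or off-axis behaviour):

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 48_000  # sample rate, Hz
FC = 2_000   # hypothetical crossover frequency, Hz

# A 4th-order Linkwitz-Riley section is two cascaded
# 2nd-order Butterworth sections of the same cutoff.
b_lp, a_lp = butter(2, FC, btype="low", fs=FS)
b_hp, a_hp = butter(2, FC, btype="high", fs=FS)

step = np.ones(1024)
woofer = lfilter(b_lp, a_lp, lfilter(b_lp, a_lp, step))   # low-pass branch
tweeter = lfilter(b_hp, a_hp, lfilter(b_hp, a_hp, step))  # high-pass branch
total = woofer + tweeter

# The high-pass branch carries almost all of the first sample of the
# step, the low-pass branch almost none, yet the sum settles to unity:
print(f"first sample: tweeter {tweeter[0]:.3f}, woofer {woofer[0]:.5f}")
print(f"summed step after {len(step) / FS * 1e3:.1f} ms: {total[-1]:.3f}")
```

so even with a textbook crossover, the "transient attack" reaching the listener is tweeter-first, woofer-later rather than a single one-sided rise.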

what do You think?

best regards,
graaf
#219 - 17th July 2008, 07:05 PM
Rudolf (diyAudio Member, Germany; joined Mar 2003)
graaf,
first and OT: congratulations on your latest "loud speaker".
He might well be the most demanding one you have ever built!
I should know, because mine has already changed from tweeter to woofer - a long time ago.

Back to topic: You have presented a really big and valuable amount of information in your last posts. I appreciate that very much. I will have to dig deeper into it.

I would like you to think about two observations that came to my mind when reading your argumentation: One cycle of a 343 Hz wave takes about 3 msec, one cycle of a 3.4 kHz wave about 0.3 msec. So your initial transient attack (if we talk about ~50 msec) is quite far away from the common step or impulse response transient, which even for a fullrange speaker is mostly done in less than 5 msec. In 50 msec the "intonation" process of an instrument could move from seven cycles of 343 Hz through seventeen cycles of 850 Hz to thirty-four cycles of 3.4 kHz. I don't know how many cycles the cochlea needs to identify a single frequency, but even in this "initial transient attack" time frame we are already speaking about frequency response. I played the flute in younger days and - yes - a trained ear can actually hear the buildup of a tone in those 250 msec. Not in a rational sense of "understanding" or "analysing", but in a cognitive way.
In my view you are still arguing in the realm of "transient" response when it already is "changing frequency" response.
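the cycle arithmetic is easy to check (T = 1/f) - a trivial sketch:

```python
# Period of a sine wave and the number of cycles completed in 50 ms.
def period_ms(f_hz: float) -> float:
    return 1000.0 / f_hz

for f_hz in (100, 343, 850, 1000, 3400):
    t = period_ms(f_hz)
    print(f"{f_hz:5d} Hz: period {t:5.2f} ms, {50.0 / t:6.1f} cycles in 50 ms")
```

even a 343 Hz tone completes many full cycles inside a 50 ms "attack" window, which supports the point that this window is already frequency territory.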

Second observation: My dipole speakers work as dipoles up to ~2 kHz, where the midrange cone drivers cross over to dome tweeters. If I equalize for a linear response in the nearfield, it sounds OK in the nearfield. But at the listening position the highs are a bit dull, and they measure - as is to be expected - some dB attenuated, because there is no contribution from the back of the tweeter.
If I equalize for a linear response at the listening position, the highs get too bright for my ears. I think that's because the highs are then emphasized in the direct response.
Only when I give the reverberant field the same frequency response as the direct field (by positioning added tweeters to the rear) will this mismatch be cured - I believe.
__________________
www.dipolplus.de
#220 - 18th July 2008, 02:36 AM
adason (diyAudio Member, Maryland, United States; joined Nov 2004)
Great web page Rudolf, there is nothing like dipole midrange, nothing even close.
