Putting the Science Back into Loudspeakers

Status
Not open for further replies.
Those same reflections did not exist in the original "event" (unless it was recorded in that room, in which case they already exist in the recording), so how can adding something created by your speakers in your room TO the recording be necessary, or present more than the recording contained?

That is partly why I urge people to set their stereos up outdoors, or to try tiny point-source full-range speakers on a large baffle, because until you do that there is no point of reference for what it's like with no reflections (outdoors) or weak ones (with a large flat baffle).

So far as I am aware, Toole's work at Harman etc. did not include a comparison to "no reflections" and was limited to just what was possible in their listening rooms and with the directivity of the speakers available.
Best,
Tom
 
Hi Kindhornman
They used to say if everyone agreed, then no one is thinking very hard.
In that vein, indulge my explanation for a bit and consider conducting the experiment I suggest...

Best,
Tom
Very nice. I noticed you didn't mention the contrast in falloff rate vs wavefront geometry for spherical, cylindrical, and planar...but that's ok, no need to get that deep...

I ran a setup outdoors with two speakers 6 feet up on tripods, 60 feet apart, and listened for a while to the nice imaging despite the source angle ranging from 180 degrees down to about 90.

With all due respect: in light of the research presented and discussed, for example, by Toole, this general statement is simply UNTRUE.
I tend to disagree with you. Tom wrote well and accurately, I found fault with nothing stated.

jn
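As an aside on the falloff-versus-geometry contrast jn mentions: under idealized free-field assumptions, spherical wavefronts lose about 6 dB per doubling of distance, cylindrical wavefronts about 3 dB, and planar wavefronts essentially none. A minimal sketch (the function name and the lossless-propagation assumption are mine, not from the posts above):

```python
import math

def falloff_db(geometry: str, d1: float, d2: float) -> float:
    """Level change (dB) going from distance d1 to d2, by wavefront geometry.

    Spherical: pressure ~ 1/r        -> 20*log10(d1/d2)
    Cylindrical: pressure ~ 1/sqrt(r) -> 10*log10(d1/d2)
    Planar: no geometric spreading loss.
    """
    ratio = d1 / d2
    if geometry == "spherical":
        return 20 * math.log10(ratio)
    if geometry == "cylindrical":
        return 10 * math.log10(ratio)
    if geometry == "planar":
        return 0.0
    raise ValueError(f"unknown geometry: {geometry}")

# Doubling the distance (1 m -> 2 m):
for g in ("spherical", "cylindrical", "planar"):
    print(g, round(falloff_db(g, 1.0, 2.0), 1))  # -6.0, -3.0, 0.0
```

Real speakers and rooms add absorption and boundary effects, so these are upper bounds on how cleanly the three cases separate.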
 
That is partly why I urge people to set their stereos up outdoors, or to try tiny point-source full-range speakers on a large baffle, because until you do that there is no point of reference for what it's like with no reflections (outdoors) or weak ones (with a large flat baffle).

Fundamentally, what is the difference between this arrangement, and headphones? I'm guessing there is bleeding from both channels to each ear, and some effects from the shape of your ears and head on the directional wave fronts..? But taking it to its limits, are you ultimately aiming to get the same sound as wearing headphones?
 
Those same reflections did not exist in the original "event" (unless it was recorded in that room, in which case they already exist in the recording), so how can adding something created by your speakers in your room TO the recording be necessary, or present more than the recording contained?

That is partly why I urge people to set their stereos up outdoors, or to try tiny point-source full-range speakers on a large baffle, because until you do that there is no point of reference for what it's like with no reflections (outdoors) or weak ones (with a large flat baffle).

So far as I am aware, Toole's work at Harman etc. did not include a comparison to "no reflections" and was limited to just what was possible in their listening rooms and with the directivity of the speakers available.
Best,
Tom

some reflections are just like second harmonic in the example that You gave:
For example, mid-band, 10% 2nd harmonic is inaudible even with a single pure tone, and even less so with music, which is mostly even-harmonically related in its content...

and some are, so to speak, indispensable - "no reflections" is a non-option, and not merely because it is difficult to achieve. I know it is quite counterintuitive, but still:
 

Attachments

  • in-head localisation.jpg
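As a side note on the harmonic-distortion figure quoted above: an amplitude fraction converts to dB relative to the fundamental as 20*log10(fraction), so 10% 2nd harmonic sits 20 dB below the fundamental. A quick sketch (the function name is illustrative):

```python
import math

def distortion_db(fraction: float) -> float:
    """Level of a harmonic relative to the fundamental, in dB.

    Example: a harmonic at 10% of the fundamental's amplitude
    is 20 dB below it.
    """
    return 20 * math.log10(fraction)

print(round(distortion_db(0.10)))  # -20
print(round(distortion_db(0.01)))  # -40 (1% distortion)
```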
Fundamentally, what is the difference between this arrangement, and headphones? I'm guessing there is bleeding from both channels to each ear, and some effects from the shape of your ears and head on the directional wave fronts..? But taking it to its limits, are you ultimately aiming to get the same sound as wearing headphones?

Almost. With headphones it is lateralization; with speakers it is localization.

When headphones are given identical L-R signals, the brain perceives the sound inside the head. Not so with speakers; in that case the image is directly in front, at some distance.

graaf...nice text. Certainly consistent with headphones, but extremely susceptible to L-R differences. Any reflections at all would spoil that effect.

jn
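For what it's worth, the interaural time difference (ITD) that drives headphone lateralization is often approximated with the Woodworth spherical-head model. A sketch, assuming a typical head radius and speed of sound (both values are assumptions, not from the posts above):

```python
import math

HEAD_RADIUS = 0.0875    # m, assumed average head radius
SPEED_OF_SOUND = 343.0  # m/s, assumed room temperature

def woodworth_itd(azimuth_deg: float) -> float:
    """Interaural time difference (seconds) for a distant source.

    Woodworth spherical-head approximation:
        ITD = (a / c) * (theta + sin(theta))
    where a is head radius, c the speed of sound, theta the azimuth
    off the median plane.
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source dead ahead gives zero ITD - the "centered" image jn describes;
# a source at 90 degrees gives the maximum, roughly 0.66 ms.
print(woodworth_itd(0.0))                    # 0.0
print(round(woodworth_itd(90.0) * 1e3, 2))   # ~0.66 (ms)
```

Identical L-R headphone signals mean zero ITD and zero level difference, which is exactly the ambiguous condition the brain resolves as "inside the head".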
 
It's absurd to suggest anechoic conditions as a reference. Real domestic rooms are not anechoic. Not even close.

And besides, crosstalk will reveal the speaker positions whenever the direct sound is allowed to dominate. This happens especially at high frequencies.

It is acknowledged that Toole's studies are quite limited, but in them the speakers producing more early reflections are ranked highest in preference. That should indicate the trend.

There is evidence in the literature that audio professionals have strongly biased preferences leaning towards dry sound with minimal reflections - the complete opposite of ordinary listeners.
The obvious problem arises when professionals design speakers for ordinary people, who then find them unnatural-sounding. Should people submit themselves to that? Of course not.


- Elias
 
Tom,
I follow your arguments well and you state them very concisely. I have done many things with PA sound systems where, yes, we had nothing causing a reflective wave to affect the sound. But since those were distributed sound systems, it is hard to correlate that with a single point-source loudspeaker. I don't consider any of the current line-source arrays really perfect in their radiation pattern or free of comb filtering, so those wouldn't meet your specifications either.

What we get in a room, as Elias is saying, is not an anechoic response. We have to take into account the room responses; even with a flush-mounted system in a wall we still have floor, ceiling and corner reflections, no matter the room treatment we apply. So we have to decide what is acceptable for the average listener and the average room. Anything else just seems to be a scientific experiment in what could be if we had transparent walls.

I do agree with most of what you are saying; I just don't think it is applicable most of the time. Comb filtering is an obvious problem, and there are as many attempts to overcome it as there are loudspeaker designs. A careful selection of both crossover frequency and device size is important. But what of things like MTM or WMTMW systems? These have incredible comb-filter anomalies, and yet some people love them. I understand about using a device in its optimum frequency range; size is relevant not just to comb filtering but also to cone breakup at frequencies whose wavelengths are smaller than the diameter of the device.

I think we can agree that any loudspeaker and crossover design, even with active filtering, is a set of compromises that the designer has to evaluate while attempting to do the least harm. That is why there is no ultimate loudspeaker; they all have their pluses and minuses. We can both find fault with any design; the thing is to find what we like, and in most instances we also have to meet a price expectation.
I work from waveguides to loudspeakers and I can find fault everywhere I look; that is just the nature of the beast we call loudspeakers.

Steven
 
The difference is the "in my head" location of the soundstage of headphones versus the externalized soundstage of speakers, which sounds more like a big picture hung in front of you.

Except for the quoted text in graaf's post above, which suggests that in anechoic conditions you also get the 'in-head' sound with speakers.

What made me ask the question was Tom Danley's suggested experiment:

Obtain a pair of small full range drivers like some Fostex offer.
Mount them on a large flat baffle. Position them in the normal stereo configuration, but much closer than normal, so that you are not close to side walls; this also makes the SPL requirement easier.

where, presumably, the logical conclusion of "much closer" is right next to your ears, like headphones.

If two speakers are playing a mono signal in an anechoic chamber, it seems to me that both ears will receive identical signals, hence the in-head effect. The signal at each ear won't be the same as that emerging from each speaker, however, because it will comprise a mixture of paths from both speakers, with the listener's head in the way of one of them more than the other. But both ears will receive this combined signal.
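The symmetric-crosstalk reasoning above can be sketched with a toy delay-and-attenuate model; all gains and delays below are illustrative assumptions, not measurements:

```python
# Toy model: each ear hears the near speaker plus a delayed, attenuated,
# head-shadowed copy of the far speaker. With a mono signal and symmetric
# geometry, the two ear signals come out identical - consistent with the
# centered / "in-head" image described above.

def ear_signal(near_gain, far_gain, far_delay, mono):
    """Combine near- and far-speaker contributions at one ear.

    `mono` is a list of samples; the far path is delayed by
    `far_delay` samples and scaled by `far_gain`.
    """
    out = []
    for n, s in enumerate(mono):
        far = mono[n - far_delay] if n >= far_delay else 0.0
        out.append(near_gain * s + far_gain * far)
    return out

mono = [0.0, 1.0, 0.5, -0.5, 0.0, 0.25]

# Symmetric setup: both ears see the same near/far gains and delay.
left = ear_signal(near_gain=1.0, far_gain=0.6, far_delay=2, mono=mono)
right = ear_signal(near_gain=1.0, far_gain=0.6, far_delay=2, mono=mono)
print(left == right)  # True: identical ear signals, centered image
```

Breaking the symmetry (moving the head, or adding a room reflection to one side only) makes `left` and `right` differ, which is exactly what externalizes or smears the image.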
 
I believe I read that quote from Toole in connection with looking at graphical data on loudspeakers.

He points out that a person looking at such data and seeing a sharp spike or dip would automatically assume that the speaker was inferior to one that had a smoother looking curve.

Listening tests, however, show that such peaks and dips are hardly audible if not too prevalent, whereas a series of low-Q resonances is.

Then there are the tests done with the speaker systems concealed and not concealed.

Listeners come up with a different set of preferences depending upon whether they can see the speakers or not, and speakers in classier-looking boxes "sound" better when you can see them.

Manufacturers know this, of course, and the likes of Bose spend a great deal of effort on reducing the parts count and quality of the internal works so that they can put on a more stylish exterior, because they know that what most people perceive as how something sounds has as much to do with how it looks as with how it sounds.
rcw
 
Except the quoted text in Graaf's post above which suggests that in anechoic conditions you also get the 'in-head' sound with speakers. ...

I've done Tom's "suggested experiment", though I didn't realise it at the time.

I was playing with a set of laptop speakers. They're very small, approximately the size and shape of a teacup. (Like these, but with a matching size battery powered amplifier):
SBA1500/97 Philips Portable Speaker System SBA1500 - Philips Support

I had them set up each side of my laptop, about 18 inches apart and 24 inches away. I was playing some music selections that included centre-panned mono vocals and acoustically stereo-recorded instruments, and was amazed by how pinpoint sharp the location images were. I could move my head several inches from side to side without affecting the image locations. When I moved the speakers further back and apart, to the limit of their cables, the images became less precise. (More reverberant sound from the room?) I moved to the backyard and repeated the process, and found that the images remained sharp when I moved the speakers back. Granted, the backyard isn't an anechoic chamber, but a large lawn makes a pretty good half space with no significant reflections at mid to high frequencies.

I've listened to the same music via headphones, and the mono voices just seem to be "in my head". Some of the instruments do sound "outside my head", but their locations are random. For example, there's a cowbell that very definitely sounds like it's above and behind my head to the left.

Anyway, it's definitely worth experimenting with.
 
Hi All
Jn, thanks for the words, I tried to tell it like I see it.

Coppertop, when you have headphones on, you have the unnatural experience of sound entering straight in, as if it were coming from sources 180 degrees apart. In that case your outer ear has little effect; the reflections which cause the pinna response do not happen, so your hearing/brain assembles the "in your head" image lacking those cues. When you have speakers that radiate few spatial clues and have reflections suppressed low enough, you can get a similar image, except that it is in front of you or between the two speakers.

In the late '80s and early '90s, Don Davis developed an "in the ear" recording technique where a volunteer allowed tiny tubes to be snaked into the ear canals and externally connected to tiny condenser microphones. After the effect of the tube length was EQ'd out, what you got had many of the direction-related artifacts that everyone's ears add and that allow us to hear height. In other words, the recordings had many of the cues that give us our spatial hearing.
To reproduce this, he placed a pair of speakers on the ground, facing up towards your ears while you sat in "the" chair. One major drawback was that there was only one sweet spot, and it was only large enough for one person.
Anyway, at the time my acoustic world was focused above 20 kHz, on the acoustic levitators I was working on for a job, and below 100 Hz, on the motor-driven subwoofers (Servodrive subwoofers, they were called) I developed because I loved low bass. I sent Don our smallest subwoofer, one called a Contrabass, and while Don had figured a 16 Hz low corner would make no difference in his recordings, he insisted I come down and hear what it sounded like.
The recording he played for me was made by a fellow walking around at the Indy 500 during time trials. I swear, it made the hair on the back of my neck stand up; it was by far the most realistic capture I have ever heard. And it was not because of my subwoofer.
It worked because the sound arrived at an angle where your ears alter it very little, and so what was encoded was preserved as the directional information.
Doug Jones, a guy I am pleased to call a friend and who now works at our company, made a demonstration recording called LEDR ages ago by artificially generating these pinna cues. There is a nice write-up, and the recordings are here:

Online LEDR - Listening Environment Diagnostic Recording Sound Test

Hi graaf. If we are to see what prevents us from making a completely real reproduction of an acoustic environment, it can be helpful to remove the things which are variable, where we can, to see how much effect they had. Hearing anything live outdoors, or reproducing it outdoors, you don't have lateral reflections or ones from above.
Hearing anything live or reproduced in an anechoic chamber, where you don't even have reflections from a lower boundary, is a very foreign experience; we aren't floating in air that often.
You're right, too, that if a reflection is low enough in level it is lost beneath the direct signal. In large-scale sound, to preserve the intelligibility of words it is desirable to have the direct sound at least 10 dB over the reflected/reverberant sound level. Understand, I am not saying reflected sound is the only thing that can harm stereo imaging; it is just one of a number of things.

Hi Kindhornman. The direction I hoped to convey was that we can break down all of the things like comb filtering, like radiating an interference pattern for other reasons, like a speaker being dispersive in time, its stored energy, its "free sound" (distortion and mechanical noise etc.) and so on. All of these flaws or sources of signal alteration stem from different issues, but all of them can be eliminated if you find a solution that satisfies all the constraints.
The more faithful the speaker is to the input signal, the more like the input signal it sounds when you record it with a measurement mic, and the more generations one can record through the mic and speaker before it sounds lame, each generation becoming an increasing caricature of what is wrong. I am a horn guy too, and I can tell you that it is possible to make a multi-way system that acts like it has one driver.

Hi Coppertop. No, not that close; ideally you want to suppress the reflected sound by at least 10 dB or more. Sound level decreases at -6 dB per doubling of distance, so one can estimate the level of any reflected sound relative to the direct signal by comparing path lengths, the direct path being much shorter than the reflected one. If the reflected sound has to travel twice as far (from the source to the reflector and then to your ears) as the direct sound, it will only be about -6 dB down; but if it were four times as far, it would be down about -12 dB. Alternately, do it outdoors, where there are no side or upper reflections. In recording studios, when you see tiny monitors on the mix-desk meter bridge, that is what they are going for: no close side or upper reflections, i.e. "near field" monitoring.
Best,
Tom Danley
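Tom's path-length estimate can be checked numerically: assuming only spherical spreading and a lossless reflector, a reflection's level relative to the direct sound is 20*log10(direct path / reflected path). A sketch (the function name is mine):

```python
import math

def reflection_level_db(direct_path_m: float, reflected_path_m: float) -> float:
    """Level of a reflection relative to the direct sound, in dB.

    Assumes only inverse-square (spherical) spreading and a lossless
    reflector: 20*log10(direct / reflected). Real reflections lose
    additional energy at the reflecting surface.
    """
    return 20 * math.log10(direct_path_m / reflected_path_m)

# Reflection travelling twice the direct path: about -6 dB;
# four times the direct path: about -12 dB.
print(round(reflection_level_db(1.0, 2.0)))  # -6
print(round(reflection_level_db(1.0, 4.0)))  # -12
```

So hitting Tom's 10 dB suppression target from geometry alone needs the reflected path to be over three times the direct path, which is why very close ("near field") listening helps so much.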
 
Listening through loudspeakers is generally limited by interaural effects no matter what you do, and the SPL is also limited by driver capability. Once you start to put in more drivers to make up the SPL, you trade off detail. There are even problems with single full-range drivers: the larger the driver, the more focus and detail is lost. I think earphones may have a better chance of realistic presentation, but this is OT.
 
Tom,
I have to agree with most everything that you say, and that is actually a nice thing for a change. I try to think outside the box that we often see in loudspeaker design, and that does require thinking about all the things you are talking about to produce a device that doesn't harm the original waveform. A clean, clear waterfall response without ringing, good off-axis polar response, a fast and clean impulse response without cone breakup, and reduced reflected waves in the transition from the cone to the surround are areas that I take very seriously. I have thrown out the idea that I have to follow convention.
I hear what you say, and without going into too much detail I will continue to design outside the box, using alternative solutions and materials to achieve the results that I think we actually agree upon. I think we would have some very interesting conversations over a bottle of wine or a glass of beer outside this forum. And yes, I do believe that waveguides can be designed to blend without the destructive comb filtering that is so common; I hate honky horns and endeavor not to make them.
 
Also, the way a musical instrument radiates, its radiation balloon, is not what we are trying to reproduce; our ears occupy only two points in space, and as a result we cannot hear or be affected by sound radiated off axis, to the sides or rear, in that outdoor condition.

The problem we have is the same as in optics: we are attempting to recreate a phase field (a set of "radiation balloons" and their "native" reflections) with a phase-free source. (Phase in this context really means direction of propagation.)

Consider a television: even if the colours and brightness were correct (they are not), it has no phase information. It's flat. So-called "3-D" televisions fake their way around this by delivering different information to each eye (again phase-free, just like headphones), hoping we won't notice that everything is at the same focal depth. It's not a 3D image (i.e. a hologram).

As Tom Danley points out, an individual microphone does not (meaningfully) distinguish phase. Audio recording goes downhill from there.

(To any optical signal processing people or sonar operators - please forgive my oversimplification)
 
thoglette,
That is what I was trying to get at. An instrument, or any other sound source, is usually not a point-source radiator. The sounds radiate from multiple surfaces, and our brain puts all of these multidimensional sources together to form a picture. The reason we can detect the distance and direction of sounds so well is not that we can pinpoint a single point source, but the tiniest differences in the sounds detected at both ears, and also through bone conduction. I think that most of the time we try to think too simply; our processing is much more than that. It is the time differences, phase shifts and reflected and delayed wavefronts that give us the three-dimensional sound that we are always trying to reproduce with point-source reproducers. As you say, this does not create a holographic sound as in nature; we have just learned to accept this as close enough for now.
 
(To any optical signal processing people or sonar operators - please forgive my oversimplification)

Interestingly, the hero of this thread, John "putting the science back" Watkinson, is an optical-processing expert too; see e.g.:
http://www.3dmedia2010.com/sites/de...training_programs_of_3d_stereo_media_2010.pdf

He has even written an "industry bible" that is "The Art of Digital Video":
Amazon.com: The Art of Digital Video, Fourth Edition (9780240520056): John Watkinson: Books

so perhaps he really knows what he is talking about
 
thoglette,
That is what I was trying to get at. An instrument, or any other sound source, is usually not a point-source radiator. The sounds radiate from multiple surfaces, and our brain puts all of these multidimensional sources together to form a picture.

That's not really the point I was making: the point I was making is that the sound field at a point in space and time is a vector, not a (single) number.

You don't have enough degrees of freedom in two point sources to reproduce that. Period.


And while our ultimate sound receptors are sensitive only to scalar (rather than vector) pressure, our whole system has evolved (and self-trained) to detect vector information for certain classes of signals.


The more faithful the speaker is to the input signal, the more like the input signal it sounds when you record it with a measurement mic, and the more generations one can record through the mic and speaker before it sounds lame, each generation becoming an increasing caricature of what is wrong.

And on that topic I am in general agreement with Messrs Danley (I have been seen standing in the garden with "the stereo") and Olson (I try to spend as much time as I can listening to what it is I'm trying to reproduce; finding specific, repeatable audible differences and then trying to understand what's going on).

However I'm not as perceptive, persistent, practiced or productive as either!

ps - was listening to a full range Stuart and Sons piano at close range today. The hype is certainly justified - it's an impressive beast!
 
It is the time differences, phase shifts and reflected and delayed wavefronts that give us the three-dimensional sound that we are always trying to reproduce with point-source reproducers. As you say, this does not create a holographic sound as in nature; we have just learned to accept this as close enough for now.


What do you mean by "holographic" (perceptually, not technically) in this context?


Also, I believe that the problem is not with point-source reproducers but with how we use them
 