designing for imaging, soundstage

This may be a dumb question, but what the heck, it's never stopped me before!

Regarding narrow baffles and diffraction: from what I can gather, the theory is that it is the diffraction that 'smears' the image (hope I get the terms correct), and from the link to the Murphy site the diffraction shows up clearly in the FR. We can also see the difference (in the FR) when the diffraction is lessened by the roundover.

So, and this is where my poor wording might come into play so please bear with me: is the improvement given by the roundover (seen in the FR, presumably with an improvement in imaging associated with it) simply a matter of 'correcting the FR'?

To explain: let's say that we did not do a roundover but by some other method were able to account for the diffraction off the baffle. Would we still get the same benefit? I use a DEQX, but of course others could equally use other computer-based programs to do the same thing, and I can measure the response of the setup and, via the magic of DSP, get as flat a response as possible.

That would show an equal amount of improvement to what is seen in the graphs, BUT does that imply I have 'removed/lessened the diffraction'?

Or is it always best to have the roundovers AND a corrected FR? I.e., are there extra effects of having a roundover that are not necessarily reflected in the FR?

Hope you can decipher what I'm asking.
 
The most critical time interval from an image localization standpoint is the first 0.68 milliseconds of sound, corresponding to a path length of about nine inches. This is roughly the distance around the head from one ear to the other. Reflections and diffraction within this time interval will be fused with the first-arrival sound from a localization standpoint and will degrade the imaging. The Precedence Effect or Haas Effect kicks in after this initial 0.68 millisecond window, whereby the ear/brain system from then on largely ignores repetitions of the original signal (reflections) as far as image directional cues are concerned.

Now, the later within this critical 0.68 millisecond window a reflection occurs, the greater the detriment to imaging, because your ear/brain system interprets it as a larger angular discrepancy. So if reflections or diffraction happen early on (like from a narrow baffle), the imaging is less disrupted. On the other hand, a very wide baffle can also image quite well, as it can push the edge diffraction beyond the 0.68 millisecond window. If combined with a gentle curvature, this works extremely well. An example of a wide-baffle speaker that imaged extremely well was the sadly discontinued Snell Type A and its variants.
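
For a rough feel of these numbers, here is a small back-of-envelope sketch in Python. The 0.68 ms window and the nine-inch figure come from the post above; the speed of sound and the two example driver-to-edge distances are my own assumptions, and the edge-delay estimate is only a simple far-field approximation (extra path roughly equal to the driver-to-edge distance).

```python
# Back-of-envelope check of the figures above (assumes c ~ 343 m/s at room temperature).
SPEED_OF_SOUND = 343.0  # m/s

def delay_to_path_inches(delay_ms):
    """Path length (inches) corresponding to a given delay in milliseconds."""
    metres = SPEED_OF_SOUND * delay_ms / 1000.0
    return metres / 0.0254

def edge_delay_ms(driver_to_edge_m):
    """Rough extra arrival delay (ms) of edge diffraction relative to the direct sound,
    approximated as driver-to-edge distance divided by c (on-axis, distant listener)."""
    return driver_to_edge_m / SPEED_OF_SOUND * 1000.0

print(delay_to_path_inches(0.68))  # ~9.2 inches, the 'around the head' distance
print(edge_delay_ms(0.10))         # ~10 cm to the edge (narrow baffle) -> ~0.29 ms, inside the window
print(edge_delay_ms(0.30))         # ~30 cm to the edge (wide baffle)   -> ~0.87 ms, outside the window
```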

Duke
 
Well, diffraction is diffraction. One may be able to equalise on-axis, but this would also have an effect off-axis. This is the problem I have with offsetting of drivers on the baffle. It only works from the point of observation (which is usually on-axis).

I think that it is best to deal with diffraction by using curved or bevelled baffle surfaces. The reason being that one is then able to achieve a smoother power response (providing one gives attention to off-axis response at the same time).

For me the main problem that diffraction presents is the distortion of the polar response: it causes lobing. We hear (listen to) both the direct sound and the early reflections. These two sound sources blend to give us the final speaker sound. If the off-axis response is ragged, the early reflections will be ragged too, and so will the final frequency response, depending on the mix ratio of direct to reflected sound.

Some here have suggested absorbing the early reflections. That is one option. However, the materials employed to do this have their own effective bandwidth, so we see another frequency response distortion. An alternative method is to ensure that the furnishings and room boundaries immediately surrounding the loudspeakers are symmetrical. Thus the frequency response of the early reflected sound from left and right will be similar. It is important for correct soundstaging that the left and right frequency responses be equal.

Furthermore, early reflections help in setting up phantom images. The images thus obtained may or may not be similar to what the original recording engineer intended (it depends on his studio layout and the recording method). I think that this comes down to personal taste.

I don't think that small speakers have less diffraction than larger ones. Different, certainly. The one thing that is more likely to happen on a large baffle than on a small baffle is the offsetting of drivers. This may yield a smooth on-axis response, but will give a lopsided polar response. If the reflective surfaces around such a speaker are not geometrically symmetrical, the imaging will be poor. Here is where applying acoustically absorbent materials can help.
 
Shaun said:

And wide baffles offer poor imaging? I can't imagine why this might be so. What is the science behind this?

I believe the basic aim of narrow baffles is to keep ripples in the response at various angles above the midrange (where imaging arguably matters most). That said, a wide baffle could easily achieve the same thing (by pushing them below the midrange), and optimal driver placement and listening angle can also address these issues.
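
To get a rough feel for how baffle width shifts where these effects sit, here is a small sketch using the commonly quoted f3 ≈ 115/W baffle-step estimate (width W in metres); treat it as a rule of thumb, not gospel. The widths are arbitrary illustrative examples, and real speakers will vary with edge treatment and driver placement.

```python
# Rough baffle-step transition frequency for a few baffle widths,
# using the rule-of-thumb f3 ~ 115 / W (W in metres). Ballpark figures only.

def baffle_step_f3(width_m):
    return 115.0 / width_m

for width_cm in (15, 20, 30, 45):
    f3 = baffle_step_f3(width_cm / 100.0)
    print(f"{width_cm} cm wide baffle -> transition around {f3:.0f} Hz")
```

A narrower baffle pushes the transition (and the diffraction ripple that sits around and above it) up in frequency, while a wide baffle moves it down below the midrange, which matches the two approaches described above.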
 
Naturally they become significant at higher frequencies than with bigger speakers, so they have a benefit for localization cues in the mids. The lower highs tend to sound rougher most of the time in plain-vanilla minis.
 
We can look at this in two different domains, i.e. the frequency domain and the time domain.

In the frequency domain, diffraction causes frequency response irregularities which are minimum phase and can therefore be corrected with EQ. With a square box you may achieve a flat FR on axis, but that is no guarantee of a flat FR off axis; offsetting the tweeter does not solve this problem, and the FR is often rough. A roundover with a radius from 40 mm to 100 mm helps smooth the FR significantly. The radius of the roundover should be chosen with reference to around 1/8 to 1/2 of a wavelength at the XO frequency. Woofers have a large radiating surface, so the distances from the source to the edges vary and therefore do not cause much FR irregularity. It is the tweeter that is the concern.
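
To put rough numbers on that 1/8 to 1/2 wavelength guideline, here is a small sketch. The crossover frequencies are arbitrary examples of my own, and the speed of sound is assumed to be about 343 m/s.

```python
# Roundover radius range suggested by the 1/8 to 1/2 wavelength guideline at the crossover frequency.
SPEED_OF_SOUND = 343.0  # m/s

def roundover_range_mm(xo_freq_hz):
    wavelength_mm = SPEED_OF_SOUND / xo_freq_hz * 1000.0
    return wavelength_mm / 8.0, wavelength_mm / 2.0

for xo in (1800, 2500, 3500):
    lo, hi = roundover_range_mm(xo)
    print(f"XO at {xo} Hz: radius roughly {lo:.0f} mm to {hi:.0f} mm")
```

For typical tweeter crossover points this lands in roughly the same region as the 40 mm to 100 mm figure mentioned above.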

In the time domain, errors cannot be corrected with EQ. So a narrow baffle wins.

I built a narrow-baffle speaker about 5 years ago. I never got the imaging, so I cannot agree with you when you say that it's easy to get the image.
No... no.

I wish that diffraction were the only thing affecting the sound image!
 
MaVo said:
The best possible soundstage would be created by two point sources without reflections, for example headphones or speakers in free space. When you put speakers into a room, you get reflections from boundaries in the room. This messes up the image, as it provides additional information, which was not in the original.

I disagree with this. You are correct that the room will add sounds and reflections that were not in the original, but these reflections are necessary for enhancing imaging. Sound source localization in the brain depends on reflections, though too many reflections will be confusing and render a blurry soundstage.

Have you ever been inside an acoustically dead room, such as a good wind tunnel? In there, you will have incredible difficulty localizing sound sources because reflections are completely absent. It sounds so strange and unnatural.

Also, most studio recordings are done in acoustically dead rooms. When playback is also in an acoustically dead room, there will be no reflections captured in the recording or added by the room. Therefore your brain will not have sufficient information to precisely localize the sound source.

However, too many reflections are certainly a problem. Therefore I agree that a constant-directivity system is a good solution. I would prefer a dipole for this, because it can achieve a pretty flat power response over the entire frequency range, even at the lower frequencies.
 
a_tewinkel said:

You are correct that the room will add sounds and reflections that were not in the original, but these reflections are necessary for enhancing imaging.

Wrong.

That would be like looking at blue with yellow glasses on and wondering why it's green.

The intended localization cues are captured as intended on the recording. Adding room reflections to the playback is distorting the playback.

This technique was (and in some cases still is) used to create artificial ambience from a dry (anechoic) recording by placing a speaker in a reflective room, then recording the direct and reflected sounds by stereo miking. It's called "echo chamber(ing)".

Sound source localization in the brain depends on reflections

Yes. If it's a stereo recording, then the intended reflections (or lack thereof) are on the recording. Otherwise you are echo-chambering an ambient recording and distorting it.


Also, most studio recordings are done in acoustically dead rooms.

Wrong.

In pro studios, most instruments are recorded in large spaces with acoustics that can be tailored, but are very ambient. They are called "live" rooms.

You do have "iso" rooms, but these are generally used for vocals and solo instruments that are isolated to give the mixing engineer greater control, so that these tracks can be post-processed and placed in the proper place in a stereo mix by adding digital reverbs and so forth.

Recording instruments in an ambient manner can be very tricky, and often you cannot undo the ambience, because it's pre-processed.

Common misconceptions about the hows and whys of studio recording.

Cheers
 
The ear is good at masking in the frequency domain, but poor at masking in the time domain. Translation: It's not the little ripples caused by diffraction that are audibly significant; it's the fact that the diffraction arrives at the ears later in time than the original signal, so the relatively low-level diffraction energy is not masked. We don't perceive it as a separate sound; we perceive it as a level-dependent distortion (gets subjectively worse - more harsh - as the loudness increases).

On another topic, I don't know that much about studio recording techniques but subjectively my preference is for a well-energized reverberant field in the listening room.

Duke
 
audiokinesis said:
The ear is good at masking in the frequency domain, but poor at masking in the time domain. Translation: It's not the little ripples caused by diffraction that are audibly significant; it's the fact that the diffraction arrives at the ears later in time than the original signal, so the relatively low-level diffraction energy is not masked. We don't perceive it as a separate sound; we perceive it as a level-dependent distortion (gets subjectively worse - more harsh - as the loudness increases).

audiokinesis, would you have a reference for this at all?

I can agree with what daygloworange is saying from the recording side. Everything I have recorded in a Classical or Jazz genre has been recorded in reverberant rooms; the aim in those cases has been to capture the original performance (including the venue).

When recording contemporary stuff I have generally put the percussion in a nice-sounding room and kept the natural reverb. Other instruments and vocals are placed in dead rooms, and artificial reverb is added later. I usually DI bass guitar to avoid room interactions altogether.
 
Noodle_snacks, I don't really have a comprehensive reference that I can give you a link to. My source is mostly conversations with Earl Geddes wherein he translated some of his findings about distortion perception into language that I could understand.

I think some relevant information can be found in an unfinished book that Earl was working on, and fortunately drafts of a few chapters are still online. Here's a link to his chapter on psychoacoustics:

http://www.gedlee.com/downloads/Chapter3.pdf
 
Roundovers

Just as a point of curiosity, I would like to know what the effect of a large roundover is with respect to baffle diffraction. I imagine that if one wants to attenuate the effect of the late-arriving diffracted sound, the larger the roundover, the better. Is there an optimum radius? Maybe a point of diminishing returns?

This raises the question: for a given baffle width (say w=9") will a given roundover radius (say r=1") have the effect of diffusing the diffracted sound while retaining the diffraction frequency of a 9" square-edged baffle, or will it effectively widen the baffle face (maybe to w=10", or maybe 11"?), or something in between?
 
Everything I've read, including the Geddes paper above, suggests the direction of a sound is imprinted by the outer ear. So even though the room reflections are recorded into the music, I think the ear/brain recognizes that the sound is not coming from, say, the walls or ceiling (or trees, or whatever environment such an ability evolved in), but instead coming from a point source (the speaker). Now we can do some neat recording tricks to increase the perceived spaciousness, but I don't think the ear is easily fooled; it recognizes these tricks as slightly unnatural. We need REAL reflections to sound real.

One way to test this: play a recording where great effort was made to include the ambience of the hall it was recorded in, and listen to it in an anechoic chamber. My guess is it will sound unnatural.
 
When I showed the step response of a Jordan full-range driver to Herr Manger, he told me that the voltage curve from the microphone should rise simultaneously even off axis. I haven't seen such a response from any speaker, and this leads to microsecond-level accuracy of time alignment, acoustic lenses, etc. This is, in "theory", important for localisation. Would you agree with that? Some researchers claim 13 microsecond resolution for the interaural time difference. I'm not sure if that matters.
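
For scale, a quick bit of arithmetic (assuming a speed of sound of about 343 m/s, my assumption) shows how small a path difference 13 microseconds actually corresponds to:

```python
# Path-length difference corresponding to a 13 microsecond interaural time difference.
SPEED_OF_SOUND = 343.0  # m/s

itd_us = 13.0
path_difference_mm = SPEED_OF_SOUND * itd_us * 1e-6 * 1000.0
print(f"{itd_us} us ITD -> about {path_difference_mm:.1f} mm of path difference")  # ~4.5 mm
```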
 