How do you get good imaging?

But what is the typical crossover point from subs to mains? As long as it is below, say, 70 Hz or so, all the higher-frequency location cues from a bass player panned to, say, the right channel still go there. So in a stereo setup a panned bass player will be where he is supposed to be even if the sub channel is summed to mono.
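A minimal sketch of that kind of bass management, assuming Python/scipy and an arbitrary 70 Hz Linkwitz-Riley 4th-order crossover (the crossover point, slope, and function name are illustrative choices, not any particular product's implementation):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def bass_manage(left, right, fs, fc=70.0):
    """Sum the stereo pair to a mono sub below fc; mains keep stereo HF cues."""
    # LR4 = two cascaded 2nd-order Butterworth sections
    lp = butter(2, fc, btype="low", fs=fs, output="sos")
    hp = butter(2, fc, btype="high", fs=fs, output="sos")
    # Sub channel: mono sum, lowpassed twice (24 dB/oct)
    sub = sosfilt(lp, sosfilt(lp, 0.5 * (left + right)))
    # Mains: highpassed twice, stereo preserved above fc
    main_l = sosfilt(hp, sosfilt(hp, left))
    main_r = sosfilt(hp, sosfilt(hp, right))
    return main_l, main_r, sub
```

The point of the sketch: everything carrying location cues above fc stays in its original channel; only the band below fc is summed.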

Rob:)
 
"That's basically the point, isn't it? That the HF cues of the LF signals go to the correct location regardless of where the LFs go, so there is no disruption of the image."

Well, yes, but then:

"But bass is never located in the middle, but might not matter at sub frequencies"

So no, it doesn't matter - or am I missing something?

Rob:)
 
That the HF cues of the LF signals go to the correct location regardless of where the LFs go
I understand the "window" of auditory fusion (re Haas) is not precisely defined, with 35 ms to 50 ms an accepted range.
A 35 ms window contains 35 cycles of a 1000 Hz tone, but only 7 cycles at 200 Hz, and fewer than 4 (3.5) at 100 Hz.
The ear also has decreased sensitivity at LF (Fletcher-Munson), and the wavelength at 100 Hz is 10x the wavelength at 1 kHz. The distance involved for a given amount of phase shift is also 10x larger.
These mechanics of human perception and physiology appear to determine that bass (below roughly 80-100 Hz) is perceived as omnidirectional.
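A quick back-of-the-envelope check of those numbers in Python (assuming a 35 ms fusion window and a speed of sound of ~343 m/s):

```python
# Cycles inside a 35 ms auditory fusion window, and wavelength, per frequency.
for f in (1000, 200, 100):
    cycles = 0.035 * f       # cycles completed inside the window
    wavelength = 343.0 / f   # metres
    print(f"{f:4d} Hz: {cycles:4.1f} cycles in 35 ms, wavelength {wavelength:4.2f} m")
# 1000 Hz: 35.0 cycles, 0.34 m
#  200 Hz:  7.0 cycles, 1.72 m
#  100 Hz:  3.5 cycles, 3.43 m
```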
 
Well, my experience with recording is probably influenced by listening to a lot of classical, where a forest of "spotlight" multi-mikes is used, along with arrays to pick up the "hall" sound. These recordings are notorious for spatially randomized bass, unless special steps are taken to sum the bass below a certain frequency. Multitracked mono, as commonly used in rock recordings, only uses spatially dispersed stereo microphone pickup for a few instruments - drums and piano come to mind - everything else is close-miked and pan-potted on the console to the desired location.
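One common form of those "special steps" is an elliptical-filter or M/S-shuffler pass that collapses the mix to mono below a corner frequency. A hedged sketch of the idea in Python/scipy (the 120 Hz corner, filter order, and function name are assumptions for illustration, not any specific mastering tool):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def mono_below(left, right, fs, fc=120.0):
    """Highpass only the Side channel, so content below fc collapses to mono."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    hp = butter(2, fc, btype="high", fs=fs, output="sos")
    side = sosfilt(hp, side)       # strip LF from the difference signal
    return mid + side, mid - side  # back to L/R; the bass is now identical in both
```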

I guess a lot depends on whether the band was really playing together in the studio all at the same time, or if different instruments were "phoned in" from different locations around the world as part of the sweetening process. If they are all sitting together in the same location, some bleedthrough from mike to mike is inevitable, regardless of recording technique. Get enough bleedthrough, and some interesting things start to happen in the bass range.

Movie soundtracks, of course, are wholly synthetic. There may be a real orchestra, there may be a lot of samples mimicking an orchestra, and intensive signal processing to prepare the signal for the heavy compression ratio of Dolby Digital is a given. Throw too much at a lossy compressor and artifacts start to become audible; since the destination soundtrack of a movie is going to be Dolby Digital, DTS, and SDDS, the mix is adjusted until the various compressors are artifact-free. Since different algorithms generate different artifacts, it's not surprising that they might even get different mixes.

All of this impacts the bass; if the destination is going to be lossy compression, particularly at high ratios, it doesn't make sense to leave a lot of partially correlated phase information in the mix. That just overtaxes the compressor and throws away valuable bits on something that can't be heard in the theater anyway. A simple mix is an easy-to-compress mix; rapidly fluctuating interchannel phase information is not going to be handled well by a high-ratio lossy compressor. If you're lucky, you get momentary mono; if not so lucky, artifacts.

So the roundabout answer about spatial bass is that it very much depends on the recording; the days of summed mono bass to cater to a 1959 RCA console phonograph that is prone to groove-skipping are long gone, but now many recordings are essentially compilations of many different mono tracks, with a little bit of real stereo here and there for flavoring. Large-ensemble recordings are probably the last refuge of true stereo (or at least having all of the musicians in one place at the same time).
 
I should mention in passing that noise - that is, fully random, noncorrelated noise - is not compressible, either via lossy or lossless compression. This is true of both visual and auditory compressors.

With a visual compressor, the noise problem appears as grain that is different from one movie frame to the next. There is no correlation between frames, of course, and the locations of the grain clouds are fully random. So a visual compressor can end up using a lot of the bit allowance on something as dumb as the clear sky, which can occupy a lot of the frame in outdoor shots. We don't see the film grain whizzing by at 24 frames a second, but the compressor sure does, and it wastes a lot of power and bandwidth trying to find correlations where none exist. In practice, film has to be signal-conditioned before compression, particularly to remove useless and bandwidth-consuming film grain. Get rid of the grain, and the compressor puts its power to work finding edges and determining motion paths from frame to frame.
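As a toy illustration of that grain-removal step, a per-pixel temporal median across neighbouring frames (real film restoration pipelines are far more sophisticated; this is only a sketch of the principle):

```python
import numpy as np

def temporal_median(frames):
    """frames: (N, H, W) array of grayscale frames, N >= 3.

    Grain is uncorrelated frame to frame, so a per-pixel median across
    temporal neighbours suppresses it while static image content survives.
    """
    out = frames.copy()
    out[1:-1] = np.median(
        np.stack([frames[:-2], frames[1:-1], frames[2:]]), axis=0)
    return out
```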

In audio, we have a problem with spaced microphone artifacts and reverb tails. Simply walking between a pair of spaced microphones generates an astoundingly complex phase picture. Put several musicians between the spaced microphones and the phase just turns into a ball of yarn on the vectorscope. Reverb tails, although correlated to the music for a human listener, look like noise to the compressor, and if bits are running short (dense spectrum), they will be discarded as the reverb descends into the noise floor. In order to discard 90% or more of the bitstream coming in, the compressor has to use a variety of algorithms, and decide on the fly which algorithm to use. When the bit allocation is running low, the compressor will gradually discard more and more of the stereo information (the L-R channel), until you end up with steered mono.
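A cartoon of that last step in code: encode the stereo pair as Mid/Side and shrink the Side channel as the bit budget tightens. Real codecs do this per frequency band under psychoacoustic control; this only shows the audible end result (the function name and the single global weight are illustrative simplifications):

```python
import numpy as np

def steer_toward_mono(left, right, side_weight):
    """side_weight 1.0 = full stereo; 0.0 = pure mono (L-R fully discarded)."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right) * side_weight
    return mid + side, mid - side
```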

It's in the interest of the skilled mixdown engineer to avoid compressor artifacts; it's not a major concern if the target is an uncompressed Compact Disc, but if the target is high-ratio compression - Dolby Digital or MP3 - it would be a mistake to let those artifacts become audible. This is where using a vectorscope and paying attention to possible correlations between spaced microphones can make a big difference to the success of the compressor.
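The number a phase-correlation meter shows next to the vectorscope can be approximated like this (a sketch; the 100 ms window is an arbitrary choice). Readings near +1 mean near-mono, near 0 the spaced-mic "ball of yarn", and negative values out-of-phase material a lossy compressor will struggle with:

```python
import numpy as np

def phase_correlation(left, right, fs, window_s=0.1):
    """Windowed normalized correlation between the two channels."""
    n = int(fs * window_s)
    corrs = []
    for i in range(0, min(len(left), len(right)) - n, n):
        l, r = left[i:i+n], right[i:i+n]
        denom = np.sqrt(np.sum(l * l) * np.sum(r * r))
        corrs.append(np.sum(l * r) / denom if denom > 0 else 0.0)
    return np.array(corrs)  # one reading per window
```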
 
HK26147 said:
These mechanics of human perception and physiology appear to determine that bass (below roughly 80-100 Hz) is perceived as omnidirectional.


Quite correct. Furthermore, because of these same physics, the perception of bass is almost a steady-state situation where we cannot perceive reflections, etc. We hear steady-state signals at LFs, not transients.
 
Nicely summarized! I'm glad we're drawing a distinction between early and late reflections.

One thing I've noticed with waveguides is that you can bend the rules a bit. For example, you wouldn't want a conventional speaker too close to the walls. But with a waveguide it's possible if the wall forms a boundary with the waveguide.

For instance, I used to have my speakers set up in a very small room. With the speakers perpendicular to the rear wall, the image lacked spaciousness. But the sense of spaciousness increased dramatically when I oriented the speakers so that the mouth of the waveguide formed a boundary with the side wall.

It was quite amazing to turn the lights off and hear a soundstage which extended beyond the confines of the room!

Of course this isn't ideal; I really needed a bigger room. But if you're cursed with a small room, waveguides can offer unique advantages if you use the room boundaries to full effect. (The same thing applies to cars, which is the reason waveguides sound excellent in a vehicle.)

I need to share this. I recently built omnis and was having some "fun" with placement. Let me first get the obvious out of the way and say that they sound best in the position recommended by Linkwitz for the Plutos.

I made an unusual discovery when I placed them in the room corners. The image was incredible! The wall in front of me disappeared and the performance was happening in place of it! The lips of the singer were wrapped around my room and I was "seeing" directly inside of her throat! The soundstage was extremely well defined, more so than in the recommended spot.

I probably don't need to say that the overall sound quality was bad in the corner so they didn't stay there long. The tonal balance was off, resolution and detail were diminished and there was a blurring of the highs that made transients sound lame.
 
I made an unusual discovery when I placed them in the room corners. The image was incredible! The wall in front of me disappeared and the performance was happening in place of it! The lips of the singer were wrapped around my room and I was "seeing" directly inside of her throat! The soundstage was extremely well defined, more so than in the recommended spot.
That is not what I would call "good imaging" :no:
 
You'll get good imaging if the stereo channels do not intermix before arriving at the listener.
If you have computer speakers on your table, put them at the sides of the table (on the floor, or on small stands lower than the table's height). You'll get great imaging. Floorstanding speakers usually have an equipment rack or a TV between them, which also helps imaging.
I assume that speakers with directional sound don't need physical separation between them to achieve good imaging, though I'm not sure.
Diffraction and early reflections may color the sound, but I didn't find them to affect imaging.
 
MisterTwister - I have to disagree.

What you are talking about is "headphone" style imaging, which is, yes, rather precise, but lacks any kind of room ambiance. Getting the room involved adds to the feeling of spaciousness, but also makes imaging much more difficult. The "equipment rack or a TV between" the main speakers is a real degradation in imaging due to the diffraction from these objects. And I would disagree completely with the latter part of this statement: "Diffraction and early reflections may color the sound, but I didn't find them to affect imaging." Diffraction and early reflections have a significant effect on imaging.

Maybe we are not talking about or looking for the same things. I am not looking for "headphone" imaging in my room - the "you are there" effect. I want the sound to appear to be in the room with me, not the other way around.
 