How do you get good imaging?

Patrick Bateman said:
Wayne,

I have my speakers set up in the manner described in your link. Being able to listen anywhere in the room is delightful. For instance, my listening room also doubles as my office, and I really appreciate being able to listen in the sweet spot, but also getting good sound at my desk, which is a few feet behind the sweet spot. Highly recommended!

If I'm not mistaken this is also the setup described here:
http://www.gedlee.com/downloads/Cum laude.pdf

You're right. Earl recommends that same setup and so does Duke at AudioKinesis. We each have speakers that provide a constant 90 degree pattern, which is a requisite for this setup.

I learned about this by accident a long time ago, way back in the 1970's. The Pi cornerhorn always seemed to do something magical, even more than just boundary loading and constant directivity would explain. The imaging and room coverage was much better than anything else I had heard.

Back then, I guessed it was mostly due to the limited early reflections inherent in the design. It provides uniform coverage and the radiation angle is ultimately limited by its corner placement. The sound doesn't reflect from the side walls, rather, it is directed into the room by them, much like a very large horn. This is true down to the Schroeder frequency anyway, where room modes start to come into play.
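For anyone who wants to put a number on that Schroeder frequency, the usual approximation is fs ≈ 2000·√(RT60/V), with V in cubic metres. A minimal Python sketch, with the room size and reverberation time assumed purely for illustration:

```python
# Rough estimate of the Schroeder frequency, below which discrete room
# modes dominate.  The room dimensions and RT60 here are assumptions,
# not measurements of any particular room.
import math

def schroeder_frequency(rt60_s, volume_m3):
    """f_s ~ 2000 * sqrt(RT60 / V), the usual approximation (Hz)."""
    return 2000.0 * math.sqrt(rt60_s / volume_m3)

volume = 5.0 * 4.0 * 2.7   # 5 m x 4 m x 2.7 m listening room (assumed)
rt60 = 0.4                 # reverberation time in seconds (assumed)
print(f"Schroeder frequency ~ {schroeder_frequency(rt60, volume):.0f} Hz")
# -> roughly 170 Hz for this example room
```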

What I didn't realize until later was how the geometry of having the forward axes cross in front of the listener tends to compensate the stereo balance when the listener is offset to one side or the other. Moving towards one speaker takes you further off-axis from it, but more on-axis with the more distant speaker. The result is a larger area where the two speakers sound approximately equally loud.

You can walk around the room and you'll notice consistent spectral balance everywhere, because the speakers are CD. But stereo imaging is best when you're equidistant from each speaker, at the place where the forward axes cross or ahead of it. As you move away from the speakers, past where the forward axes cross, the "sweet spot" becomes larger.
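To put rough numbers on that geometric compensation, here's a small Python sketch. The speaker and listener positions, and the smooth off-axis rolloff (about -6 dB at the edge of a nominal 90-degree pattern), are assumptions for illustration, not measurements of any particular speaker:

```python
# A rough numerical illustration of the geometric compensation described
# above: distance loss vs. off-axis loss for an off-centre listener.
import numpy as np

def level_db(speaker, axis_deg, listener):
    """Relative level at the listener: inverse-distance term plus an assumed
    smooth directivity rolloff of -6 dB at 45 degrees off axis."""
    speaker, listener = np.array(speaker), np.array(listener)
    vec = listener - speaker
    r = np.linalg.norm(vec)
    axis = np.array([np.sin(np.radians(axis_deg)), np.cos(np.radians(axis_deg))])
    off_axis = np.degrees(np.arccos(np.clip(np.dot(vec, axis) / r, -1.0, 1.0)))
    return -20.0 * np.log10(r) - 6.0 * (off_axis / 45.0) ** 2

left, right = (-1.5, 0.0), (1.5, 0.0)   # speaker positions, forward is +y
listener = (-1.0, 3.0)                  # listener offset 1 m toward the left

toed = level_db(left, +45, listener) - level_db(right, -45, listener)
parallel = level_db(left, 0, listener) - level_db(right, 0, listener)
print(f"L-R level difference, 45 deg toe-in: {toed:+.1f} dB")
print(f"L-R level difference, parallel aim : {parallel:+.1f} dB")
# With the assumed rolloff, toe-in keeps the channels within a couple of dB
# even a metre off centre, while parallel aiming favours the near speaker by
# roughly 6-7 dB.
```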

I displayed Pi cornerhorns at the Midwest Audiofest in 2003. Duke LeJeune was there and commented on the sound. He talked about how uniform the sound was throughout the room and stated an interest in the approach. He has since introduced speakers of his own that provide uniform directivity, and he recommends that they be toed in 45 degrees.

Similarly, in 2005 at the Great Plains Audiofest, Earl Geddes unveiled his Summa loudspeakers. They provide uniform directivity using his oblate spheroidal waveguide and patent pending HOM reducing foam. His setup is just like mine, with 45 degree toe in.

I was very pleased, because prior to Geddes' and AudioKinesis' entry, my loudspeakers were somewhat alone in the market. The craze back then seemed to be tractrix round horns like the Edgars and Avantgardes. When I would talk about constant directivity, tractrix horn lovers tended to be somewhat skeptical. In particular, the use of CD equalization in the tweeter circuit was seen as something "impure". To them, horn loudspeakers that provided constant directivity were for PA use only. To me, the most impure thing was using a non-CD horn that provided acoustic EQ at the expense of polar response.

Nowadays, I see Geddes, AudioKinesis and Pi Speakers as having a lot in common. All make speakers that are CD, all recommend the same placement and all suggest using multiple subs to smooth the response below the Schroeder frequency. We have minor differences, but the main things are the same. Geddes is the only one of us to use a foam insert to reduce internal reflections, for example. My first introduction to multi-subs was Welti's papers, and so that's what I initially adopted. Geddes proposed a slightly different arrangement. I think all of us agree that however the subs are arranged, using a number of them is better than using one.

I use horns that limit vertical coverage, which in turn limits ceiling reflection. It also allows the null angle to be placed outside the HF pattern. Geddes and AudioKinesis don't bother with that. But in general, we all three embrace the same principles, and this is very encouraging to me. I see it as corroboration between manufacturers, something rarely seen in the loudspeaker market.
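As a side note on that vertical-null placement: for two drivers that are in phase at the crossover point, the first vertical null falls where the inter-driver path difference reaches half a wavelength. A rough sketch, with the spacing and crossover frequency assumed for illustration (real crossovers and driver offsets shift the exact angles):

```python
# Back-of-the-envelope vertical null angle for two in-phase drivers.
import math

C = 343.0  # speed of sound, m/s

def first_null_angle_deg(spacing_m, crossover_hz):
    """First off-axis null angle; returns None if the spacing is too small
    to produce a null at all (d < lambda/2)."""
    s = C / (2.0 * spacing_m * crossover_hz)   # sin(theta) = lambda / (2 d)
    return math.degrees(math.asin(s)) if s <= 1.0 else None

print(first_null_angle_deg(0.20, 1600))   # ~32 deg off axis
print(first_null_angle_deg(0.35, 1600))   # ~18 deg: closer to the axis, harder
                                          # to keep outside the vertical pattern
```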
 
Thank you Dr. Geddes:

I did indeed mean the delta (the difference added by the other two sides of the triangle).

The ear appears to be optimized around 3 kHz, where the energy of consonants lies, perhaps because humans need to locate and communicate with other humans by vocalization.
The ability to focus, discriminate and reject also seems to be adaptive: being able to pick out one familiar voice in a crowd and locate it.
Could part of what is attributed to listener fatigue be due to the brain tiring from having to focus on, and reject or discriminate out, these delayed arrivals?
The 170 Hz to 17 kHz window of preference (Sept 2008 SB, p. 17) makes more sense in this context as well.
Because the binaural pattern of the ears favors the forward direction, if reflections from behind create a problematic delta distance, how much does that come into play?
Thanks again
 
It may seem a little obvious, but one thing not mentioned is phase-match between the stereo pair. One technique for confirming good phase and level match is to place the pair close to each other, put the measurement microphone 1 meter away and on the exact centerline between the two, reverse the phase of one of the speakers, and measure the resulting spectra of the uncancelled residue.

This was a technique we used at Audionics for all of our speakers - we aimed for at least 25~30 dB of cancellation in the range from 100 Hz to 5 kHz. This degree of match isn't easy to do, by the way - most commercial speakers have more level and phase mismatch than this.
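For anyone who wants to try this, here's a sketch of the arithmetic, assuming you already have the two complex frequency responses exported from your measurement software; the synthetic mismatch values at the end are only there to show how tight the tolerances are:

```python
# A sketch of the reversed-polarity null test described above, assuming the
# two complex responses H_L and H_R were measured at the same mic position
# on the centreline between the speakers.
import numpy as np

def null_depth_db(freqs, h_left, h_right, f_lo=100.0, f_hi=5000.0):
    """Depth of the uncancelled residue (one channel inverted), in dB
    relative to the average single-speaker level, over f_lo..f_hi."""
    band = (freqs >= f_lo) & (freqs <= f_hi)
    residue = np.abs(h_left[band] - h_right[band])        # one speaker reversed
    reference = 0.5 * (np.abs(h_left[band]) + np.abs(h_right[band]))
    return 20.0 * np.log10(residue / reference)

# Quick feel for the tolerances: a pure level error (dB) and a pure phase
# error (degrees) between otherwise identical speakers.
for level_err_db, phase_err_deg in [(0.25, 0.0), (0.0, 2.0), (0.5, 10.0)]:
    g = 10 ** (level_err_db / 20.0)
    phi = np.radians(phase_err_deg)
    residue = abs(1.0 - g * np.exp(1j * phi))
    depth = 20.0 * np.log10(residue / ((1.0 + g) / 2.0))
    print(f"{level_err_db} dB / {phase_err_deg} deg mismatch -> {depth:.1f} dB null")
# Roughly: 0.25 dB alone or 2 degrees alone already limits the null to about
# -30 dB, which is why 25~30 dB of cancellation from 100 Hz to 5 kHz is a
# tough target.
```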

As you might imagine, phase spread between the stereo pair results in image-localization spread in the part of the spectrum where it occurs. One cause, not generally suspected, is mismatch of uncorrected peaks in the tweeter or midrange driver, or slightly out-of-tune notch filters in the crossover. This can result in very steep phase mismatches over narrow parts of the spectrum.
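A quick way to see how steep this can get is to model two notch filters that are only slightly mistuned relative to each other; the centre frequencies and Q below are arbitrary choices for the demo:

```python
# Two notches only ~2% apart in frequency produce a steep, narrow-band
# phase difference between the channels.
import numpy as np
from scipy import signal

fs = 48000
freqs = np.linspace(2000, 4000, 2000)

b1, a1 = signal.iirnotch(3000.0, Q=8.0, fs=fs)   # "left channel" notch
b2, a2 = signal.iirnotch(3060.0, Q=8.0, fs=fs)   # "right channel" notch, 2% off

_, h1 = signal.freqz(b1, a1, worN=freqs, fs=fs)
_, h2 = signal.freqz(b2, a2, worN=freqs, fs=fs)

phase_diff_deg = np.degrees(np.angle(h1 * np.conj(h2)))
worst = np.argmax(np.abs(phase_diff_deg))
print(f"worst interchannel phase error: {phase_diff_deg[worst]:.0f} deg "
      f"near {freqs[worst]:.0f} Hz")
```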

What the BBC called "detenting" is a separate issue. This isn't about image broadening, but asymmetries in the overall reverberant sound field - particularly when it "piles up" around the speakers, and is shallow or deficient elsewhere. This is mostly an artifact of diffraction and energy storage in the drivers (and horn, if applicable). The quicker the settling time, the less this is a problem.

As mentioned in the posts above, any reflections from nearby objects are very undesirable - a 10 ms reflection is not a good thing, but a 5 ms or shorter reflection is much worse. Reflections that are very close, 1-3 ms, tend to merge with speaker artifacts, since many commercial speakers have substantial energy storage in this range.
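For reference, the arithmetic behind those timings is just path-length difference divided by the speed of sound, about 34 cm per millisecond; the distances below are assumed examples:

```python
# Reflection delay = (reflected path - direct path) / speed of sound.
C = 343.0  # speed of sound, m/s

def reflection_delay_ms(direct_m, reflected_m):
    return 1000.0 * (reflected_m - direct_m) / C

print(reflection_delay_ms(3.0, 3.5))   # ~1.5 ms: merges with speaker artifacts
print(reflection_delay_ms(3.0, 4.5))   # ~4.4 ms: the troublesome short-reflection zone
print(reflection_delay_ms(3.0, 6.5))   # ~10 ms: later, and somewhat less damaging
```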
 
Lynn Olson said:
One technique for confirming good phase and level match is to place the pair close to each other, put the measurement microphone 1 meter away and on the exact centerline between the two, reverse the phase of one of the speakers, and measure the resulting spectra of the uncancelled residue.

Great technique, thanks for sharing!
That would also be interesting to do with the speakers placed at their normal location in the room, so that the effect of the room and furniture could also be taken into account for the EQ (preferably linear phase EQ, I suppose).
 
gedlee said:


John - not what I would expect, that's for sure. Maybe a smoother bass helps? I don't know. If what you say is true, then the image should be level-dependent. Is that true?

Well I had three subwoofers, each with a $25 MCM 55-2421 woofer. Paired with the Summas, it's a recipe for disaster, as these cheap little woofers can't keep up.

It's a GOOD woofer, but it's a cheap woofer, and there's only so much you can do with an eight.

When I played the subs by themselves, the distortion was obvious, and impossible to miss.

Once I fired up the Summas, though, the distortion was masked by the mains. But it was still there, and my hypothesis is that it was muddying the image.

Either way, it sounds a lot better with seven subs now. I should have the eighth finished soon.
 
It was the JBL concept that originally got me interested. It is a great idea and works well when properly implemented. Those older horns lack the "quality" of the newer ones, however, which is why, I think, the idea never caught on. It makes perfect sense, but requires a waveguide to do it, and those have historically not had the best sound quality. I used to toe in my 4430s, but always objected to the horn coloration from its diffraction device.
 
pos said:


Great technique, thanks for sharing!
That would also be interesting to do with the speakers placed at their normal location in the room, so that the effect of the room and furniture could also be taken into account for the EQ (preferably linear phase EQ, I suppose).


I tried it that way for a day, out of curiosity.

It still sounds good, but the size of the soundstage narrows. Also, the image collapses to quite a degree when you move to the left or the right.

Set up in the way described above, the image is stable whether you move left or right.
 
Patrick Bateman said:



I tried it that way for a day, out of curiosity.

It still sounds good, but the size of the soundstage narrows. Also, the image collapses to quite a degree when you move to the left or the right.

Set up in the way described above, the image is stable whether you move left or right.
You mean the phase-inversion rejection test, or the 45° toe-in?
 
pos said:
JBL also used that principle in 1985 for the Everest:
http://audioheritage.org/vbulletin/showthread.php?t=5671
http://audioheritage.org/html/profiles/jbl/everest.htm

That horn has a built-in toe-in and is meant to achieve exactly what you describe.

You're right, and I also should have mentioned the design of the JBL 4430 and 4435 speakers, which were very influential for me. You can see that thinking reflected in the post below. The early Pi Speaker models borrowed some concepts from the Klipschorn and combined them with other design principles used in the JBL 4430, namely uniform directivity and controlled spacing of vertical nulls.

Take those ideas and use horns that reduce internal reflections and I think you really have a winning combination, especially when combined with multiple subs.
 
pos said:

Great technique, thanks for sharing!

That would also be interesting to do with the speakers placed at their normal location in the room, so that the effect of the room and furniture could also be taken into account for the EQ (preferably linear phase EQ, I suppose).

I'm not sure that would result in a valid measurement, since it mixes two entirely separate problems - phase mismatch in the speaker/crossover system, and the collection of reflections between the speaker and the listener. They exist in two separate domains - phase mismatches in the direct-arriving waves from the speakers, and many discrete time-domain reflections coming from all different directions. Not the same thing, and not a good idea to try to correct a problem in one domain with a solution from the other.

Better to fix them at the level where they occur - phase-match the driver/crossover system as closely as possible, then re-arrange the furniture after the speaker system has been optimized. I also have mixed feelings about trying to correct for room acoustics with EQ - this works fairly well below 300 Hz, but not at higher frequencies.

By the way, the aim-the-speakers-at-a-point-several-feet-in-front-of-the-listener approach dates back to Blumlein and the Wireless World articles by the BBC in the late Fifties. I first heard Quad ESL57s set up like that in Hong Kong in 1962. The current audiophile fad for nearly parallel aiming seems to come from Stereophile and Absolute Sound magazines in the early Eighties. Crossing the speakers' axes so they converge several feet in front of the listener is simply a reversion to correct stereo, or the Blumlein method. The difference now is that we have a much better idea why it works - it decreases the magnitude of mid/high reflections from the adjacent side wall, and improves Haas-effect time/energy tradeoffs for off-axis listeners.
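A small geometric sketch of the sidewall argument: compute the angle, off the speaker's forward axis, at which sound has to leave the speaker to reach the listener via the near sidewall, for 45-degree toe-in versus parallel aiming. The room layout is an assumption for illustration:

```python
# Image-source geometry for the near-sidewall bounce.
import numpy as np

def launch_angle_deg(speaker, listener, wall_x, axis_deg):
    """Angle between the speaker's forward axis and the ray that reaches the
    listener via a specular bounce off the wall at x = wall_x."""
    image = np.array([2 * wall_x - speaker[0], speaker[1]])     # image source
    t = (wall_x - image[0]) / (listener[0] - image[0])          # hit point on wall
    bounce = image + t * (np.array(listener) - image)
    ray = bounce - np.array(speaker)
    axis = np.array([np.sin(np.radians(axis_deg)), np.cos(np.radians(axis_deg))])
    cosang = np.dot(ray, axis) / np.linalg.norm(ray)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

left_speaker = (-1.8, 0.5)   # metres; forward into the room is +y (assumed layout)
listener     = (0.0, 3.5)
left_wall_x  = -2.5

for name, aim in [("45 deg toe-in", 45.0), ("parallel aim", 0.0)]:
    ang = launch_angle_deg(left_speaker, listener, left_wall_x, aim)
    print(f"{name}: sidewall bounce leaves the speaker {ang:.0f} deg off axis")
# With this layout the toed-in speaker launches the sidewall bounce ~90 deg
# off axis (well outside a nominal 90-degree pattern), versus ~45-50 deg when
# the speakers are aimed straight ahead.
```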
 
OT: Lissajous patterns

Lynn Olson said:
It may seem a little obvious, but one thing not mentioned is phase-match between the stereo pair. One technique for confirming good phase and level match is to place the pair close to each other, put the measurement microphone 1 meter away and on the exact centerline between the two, reverse the phase of one of the speakers, and measure the resulting spectra of the uncancelled residue.

That's an interesting study, Lynn.

I noticed something way back when that is somewhat related. Back when I was a young man, I would regularly connect my oscilloscope to the left and right channels to display Lissajous patterns. It made a fun little display, something like the laser light shows popular in the 1990s.

Now this was before CDs existed, and the source was most often a turntable. That may be significant. I sometimes consider doing this with a CD to see if it occurs there too. The reason is this: what I saw on turntables was that, most of the time, the right and left channels were not in phase at LF; more often they were 20-60 degrees apart, sometimes more than 90 degrees.

It was more noticeable on some material. One memorable song in particular is Pink Floyd's Welcome to the Machine. The interchannel phase on bass notes is actually modulated, going from maybe 45 degrees to about 135 degrees. Each successive note shifted from one phase to the other, and stayed relatively consistent through each note.

If you clicked the amp to mono or drove both channels with the same signal, you got the pure diagonal line, with amplitude modulated by the signal. That is to be expected, and proved that the scope or something else wasn't causing this. No, in fact, it was a true phase difference between channels.

At high frequency, you would expect this because the different material on each channel interacted in a complex way, and created lots of phase relationships. But at low frequency, you would probably expect the signal to be more correlated than that. Common thinking is that bass signals are essentially monophonic.

They aren't, at least not on all, or even many, recordings. The difference is within 90 degrees on most recordings, so for all practical purposes I guess some shift doesn't matter much. But the bass definitely had phase shift between the two channels, and sometimes it was modulated.

I think of that often when I see Earl talk about decorrelated bass sources. I remember, way back when, seeing those Lissajous patterns. At least some content was decorrelated at the source. I wonder if it was a result of the recording, the mix or the transfer to vinyl. I still have an oscilloscope and a copy of Wish You Were Here on CD, so one of these days I mean to hook up the scope and see if it happens there too. But with a 2-year-old running around, stuff like that usually gets forgotten about by dinnertime. :)
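The scope experiment can also be done in software these days. Here's a sketch that band-limits the two channels of a stereo rip and estimates the interchannel phase of the bass from the cross-spectrum; the filename is hypothetical, and any 2-channel WAV will do:

```python
# Low-pass the two channels of a stereo file and estimate the interchannel
# phase of the bass region, roughly what the Lissajous display was showing.
import numpy as np
from scipy import signal
from scipy.io import wavfile

fs, audio = wavfile.read("welcome_to_the_machine.wav")   # hypothetical file
left, right = audio[:, 0].astype(float), audio[:, 1].astype(float)

# Keep only the bass region before comparing phase.
sos = signal.butter(4, [40, 120], btype="bandpass", fs=fs, output="sos")
l_bass = signal.sosfiltfilt(sos, left)
r_bass = signal.sosfiltfilt(sos, right)

# Cross-spectrum phase: ~0 deg means the LF content is effectively mono;
# anything substantially away from 0 is shifted/decorrelated bass.
f, pxy = signal.csd(l_bass, r_bass, fs=fs, nperseg=8192)
band = (f >= 40) & (f <= 120)
phase_deg = np.degrees(np.angle(pxy[band]))
print(f"mean LF interchannel phase: {phase_deg.mean():.0f} deg "
      f"(spread {phase_deg.std():.0f} deg)")
```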
 
Re: OT: Lissajous patterns

Wayne Parham said:

Now this was before CDs existed, and the source was most often a turntable. That may be significant. I sometimes consider doing this with a CD to see if it occurs there too. The reason is this: what I saw on turntables was that, most of the time, the right and left channels were not in phase at LF; more often they were 20-60 degrees apart, sometimes more than 90 degrees.

It was more noticeable on some material. One memorable song in particular is Pink Floyd's Welcome to the Machine. The interchannel phase on bass notes is actually modulated, going from maybe 45 degrees to about 135 degrees. Each successive note shifted from one phase to the other, and stayed relatively consistent through each note.

Two reasons: spaced microphones in the original recordings, which will always cause quasi-random interchannel phase shifts at LF, and dissimilar vertical/lateral compliances in the cutterhead and in the arm/phono-cartridge playback system. Both are more common than you might think. Pink Floyd was also experimenting with quadraphonic encoding - I have an EMI SQ-encoded "Dark Side of the Moon" we used for Shadow Vector demonstrations at the Chicago CES.

Working with the Shadow Vector quadraphonic decoder in the early Seventies, we had to track down and remove a lot of this stuff - and pre-filter the decoder direction-sensing circuits so they wouldn't be thrown for a loop by record warps and various playback asymmetries. As it was, the decoder had two set-screws that electronically rotated the sensing coils in the cartridge so they'd be exactly at 45/45, so the cartridge could attain maximum separation, at least in the midband. Real-world phono cartridges typically have the cantilever (and stylus) rotated off-axis by 2 to 3 degrees (sometimes much worse), and the alignment of the sensing coils in the body of the cartridge is usually no better. The result is interchannel separation in the 20 to 25 dB range. Twiddle it electronically (or go crazy with the azimuth setting and a test disc), and it can be raised to 35 dB or so.

After going through the mill with a prototype direction-sensing variable-matrix SQ decoder with 6-pole all-pass networks and 12 low-distortion VCAs, speaker design seemed like a relatively easy transition - little did I know!
 
Makes sense. Might also mean it's good to leave the bass split out as stereo sources, to provide some decorrelation in a multi-sub setup. Of course, then you gotta decide which channel goes to which sub...

...but then again, common sense would probably tell you to run the left sub output to subs closest to the left mains, and right to right. Unless of course it measures and/or sounds better to switch them. :)
 
Lynn

Your comment was well covered over at the AV forums. The overwhelming opinion is that the bass is mono on the vast majority of recordings. I know that it is on all that I've ever tested. So "summing the bass" is already done in the mix; it's not an option. There probably are some rare recordings where this is not true, but again, the opinion in the thread that I was involved in was that "spatial bass", i.e. separate bass in all channels, was not a significant - one could almost say not even audible - improvement.

This is contrasted with the stark improvement that everyone reports from the use of multiple subs. Hence, to me, summing the bass and using this signal to drive multiple subs results in the best compromise available.
 