OK, will try to do that, too.
My main obstacle (other than time) is how to do dual channel measurements. I want to record left and right at the same time, to see the timing differences. It's easy to record a stereo signal, but I'm not sure how to do two simultaneous measurements with any measurement software I've used. ARTA has a dual channel mode, but I don't know how it works. Usually the second channel is meant as a reference, not a second sweep.
Any ideas for direct two channel measurements?
I was actually wondering about that myself. I know REW works like ARTA for simultaneous measurements, treating the second channel as a reference stream. No idea yet; we may have to fool the application somehow.
It would be easier to find one that can log/record 2 different sweeps at once though.
I may be wrong here, but what if you ran each head mic into audio editor software, on separate tracks, and used tone bursts at incremental frequencies as the test signal? You might be able to measure phase and amplitude differences from there (with a dual-trace oscilloscope looking at the audio editor playback?), and then compare that with an L+R of the head mics.
Here's a different angle:
The electronic inter-aural crosstalk canceller (Carver Hologram, for example) spreads imagery out beyond the speakers quite effectively (at the sweet spot), but leaves you with a weak or "phasey" center image. So I have to wonder if doing the opposite of the Hologram circuit might instead make the center image more clear... sort of like when George Costanza decided to say the opposite of whatever came into his head... but I'm not sure how that would work without shrinking the stereo effects...
Maybe if it was restricted to the frequencies above about 1 kHz? So instead of cancelling the crosstalk signals above 1 kHz (which may even be theoretically questionable), you'd be reinforcing them. Carver uses a 125 µs all-pass delay circuit, by the way, if anyone is curious. Not sure how that fits in with the math, but it works really well for me with my roughly 60 degree angle (me to the speakers). Of course, then you might create something that only works well at the exact sweet spot...
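For anyone who wants to play with that idea in software, here is a minimal sketch (Python/NumPy) of a single-tap crossfeed. The 125 µs delay is the Carver figure quoted above; the -3 dB tap level and the bare one-tap topology are my own placeholder assumptions, not the actual Hologram circuit (which is recursive and frequency-shaped). Flipping the sign of the tap switches between "cancel the crosstalk" and the "do the opposite" idea.

```python
import numpy as np

FS = 48000            # assumed sample rate
DELAY_S = 125e-6      # the 125 us all-pass delay mentioned above
ATTEN_DB = -3.0       # tap level -- an illustrative guess, not a Carver spec

def crossfeed(left, right, fs=FS, delay_s=DELAY_S, atten_db=ATTEN_DB, invert=True):
    """Add a delayed, attenuated copy of the opposite channel to each channel.

    invert=True  -> inverted copy: a crude single-tap crosstalk "canceller"
    invert=False -> non-inverted copy: the "opposite of the Hologram" idea
    """
    n = int(round(delay_s * fs))                      # 125 us is ~6 samples at 48 kHz
    g = 10 ** (atten_db / 20) * (-1.0 if invert else 1.0)
    pad = np.zeros(n)
    l_delayed = np.concatenate([pad, left])[:len(left)]
    r_delayed = np.concatenate([pad, right])[:len(right)]
    return left + g * r_delayed, right + g * l_delayed
```

Restricting the tap to the band above ~1 kHz, as suggested, would just mean band-passing the delayed copies before mixing them in.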
I wonder if that's what the Bozak speaker with the two vertical line arrays of tweeters was trying to achieve? Those Bozak tweeters being rear mounted may have had too much "cavity effect" (baffle hole reflection interaction) for that experiment to work well. Plus it may have only helped at a tiny sweet spot.
If I can't figure out how to do dual measurements of a sweep, I'll have to record a stereo track and post-process it, as Bob mentions. That has its own problems, but they should not be insurmountable.
I can post in the ARTA thread to see if anyone knows how to do the dual measurement. I don't know where to post to ask for REW.
Not hard to do this actually! It's the same way you would create a stereo (or multichannel) impulse response for capturing a room's reverb.
- Use Voxengo Deconvolver. Generate a sweep and import it into a DAW.
- Use the DAW to play back the file through the speakers, while recording the input from the microphones on two mono tracks.
- Select those files (the new recordings) in Deconvolver and process them against the sweep you used - this will yield an impulse response.
- Import the impulse responses into REW or whatever.
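If anyone wants to do the deconvolution step without Deconvolver, it is only a few lines. A rough sketch in Python (NumPy plus the soundfile library; "sweep.wav" and "capture.wav" are hypothetical file names) that divides the recording's spectrum by the sweep's spectrum to get one impulse response per recorded channel:

```python
import numpy as np
import soundfile as sf   # pip install soundfile

# Hypothetical file names -- substitute your own sweep and 2-channel capture.
sweep, fs = sf.read("sweep.wav")
rec, fs_rec = sf.read("capture.wav")
assert fs == fs_rec, "sweep and recording must share a sample rate"
if sweep.ndim > 1:
    sweep = sweep[:, 0]              # use a mono copy of the sweep

n = len(rec) + len(sweep)            # FFT length long enough to avoid circular wrap-around
S = np.fft.rfft(sweep, n)
eps = 1e-10 * np.max(np.abs(S))      # crude regularisation so we never divide by ~zero

for ch in range(rec.shape[1]):
    R = np.fft.rfft(rec[:, ch], n)
    ir = np.fft.irfft(R / (S + eps), n)    # spectral division = deconvolution
    sf.write(f"ir_ch{ch}.wav", ir.astype(np.float32), fs)
```

Because both channels were recorded against the same playback, the two impulse responses keep their relative timing, which is exactly the left/right difference being chased here.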
As for the distance to speakers for nearfield mixing - it remains the same equilateral triangle, just closer. 1.5 m?
Wesayso - I don't understand how the perceived sound of one channel (with the shuffler) would be any different from the other. Can you elaborate?
Yes I can. I can't tell them apart by listening. But what one ear gets to process is different from what the other gets, due to the timing difference.
After all, you first hear the direct sound wave before the combing starts, after about 0.27 ms.
Because the phase shifts are each other's opposite, we get slightly different results. Remember the SPL charts showing the dips that moved?
So I tried to optimize that picture with slightly different phase settings etc., but it wasn't until I couldn't find a reason why I failed to adjust the tonality to my liking that I started to look at very early waterfall plots of the combined result of left + (delayed right 0.27 ms) and right + (delayed left 0.27 ms).
Here's what that looks like:
Left + (delayed right 0.27 ms)
Right + (delayed left 0.27 ms)
See the difference? Yet this shuffler did do something that I could not replicate with EQ alone. I might have found another way, though, one that gives the top waterfall result at both ears after that 0.27 ms delay. Initially it does what the shuffler did, but without the strange tonal errors I was getting.
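For anyone who wants to reproduce the gist of those plots without a measurement rig: the "wrong ear" signal being discussed is roughly the direct channel plus the opposite channel arriving about 0.27 ms later. A quick sketch (Python/NumPy) of the resulting comb-filter magnitude, under the simplifying assumption of equal-level crosstalk (a real head attenuates and filters it, so the real dips are shallower):

```python
import numpy as np

TAU = 0.27e-3          # crosstalk delay from the discussion above (0.27 ms)
G = 1.0                # crosstalk level relative to the direct sound -- a simplification

f = np.linspace(20, 20000, 2000)
h = 1 + G * np.exp(-2j * np.pi * f * TAU)          # direct + delayed crosstalk
mag_db = 20 * np.log10(np.abs(h) + 1e-12)

# Dips land at odd multiples of 1/(2*TAU): ~1.85 kHz, ~5.6 kHz, ~9.3 kHz, ...
print(f"first dip near {1/(2*TAU):.0f} Hz, repeating every {1/TAU:.0f} Hz")
```

Giving the two channels opposite phase shifts (what the shuffler does) moves where those dips land, which is why the two ears end up seeing slightly different pictures.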
Thanks! I will look into this. All the software I've used has always been in single channel mode, or single + reference. I have not seen a way to link the analysis of two channels.
That's very odd. Is it because of the opposite directions of the phase shift on each channel? Good catch, I've got to look at this...
Yes it is... on one side it actually does as advertised and the other side gets a little worse. I couldn't believe my ears told me this 😀. I was pretty happy with the theory I came up with, but my ears told me something was fishy.
Right now I'm doing it by inserting a delayed sound at -14 dB in the mid stream, but looking at the early waterfall plot of 2x the left shuffle:
Left + (delayed Left 0.27 ms)
There's bound to be a phase recipe that can fix this without adding an extra stream...
It would have an equal result left and right, and no inter-channel time differences. Still shifting time a bit, though.
We will get there 🙂.
It might be as easy as mimicking my current phase trace:
(compare this to my original phase plot and the difference would be ~ the right target)
Just found out I can do the dual channel measurement in SMAART, which we have at work.
Will give that a whirl.
That's good news Pano! That should give us some insight...
All this time trying to figure out the shuffler, and I already posted about it:
Let's try and bring this back on subject a bit. I stumbled over a couple of older files from Professor Edgar Choueiri, known for his BACCH 3D sound implementation.
These are older files, when he was still on the Ambiophonic route.
What was interesting in these files is the phase swings:
Direct sound IR
Cross talk IR (this one is inverted)
I haven't figured out why these phase swings are used in this case yet. The right IRs look exactly the same as the left, but he swapped both IR pulses.
Here's the phase of both in one plot:
Professor Choueiri abandoned the Ambiophonic route shortly after and continued to pursue a more complicated algorithm, but the targets he had in his early work are nearly identical to "our shuffler".
A link to his later work is here: https://www.princeton.edu/3D3A/Publications/BACCHPaperV4d.pdf
Guess what... I finally figured out Professor Edgar Choueiri's Phase wiggles 😀... It was in front of us the whole time...
I'll try to re-make the phase-only part of this one, as he uses crosstalk cancellation (the negative pulse) with a very short delay and adjusted the FR to accommodate it. But Prof. Choueiri figured out the shuffling part, probably in a nice anechoic room with a dummy head. All data indicates that this one was made for a conventional 60 degree stereo angle.
Glad to have you on the case! I do not have the time to do much of anything with this right now. Might be a while before I do.
I'll do what I can 🙂...
After some preliminary tests, the IR from Choueiri isn't shuffling phase only; there's some FR manipulation (over time) as well to make it cancel, or better worded: postpone the crosstalk dips. I focused on the 44100 version; the high-res ones work a bit differently. It did not work out to let JRiver convert on the fly (from 96000 to 44100) and preserve the cancellation feature of the high-res one. I did not try it the other way around, so for now: no guarantees.
The few examples I have from Choueiri are designed to work together with the out of phase part you see in the graphs in the opposite channel. Not what we (or at least I) were after. So I manipulated the FR to flat as best I could while preserving the canceling/postponing mechanism. I'll attach this first example for those willing or wanting to run some tests with it.
I haven't done anything with it yet, except some raw tests in REW. I can see this one working though. Hoping to test it some time soon.
I love what you are trying to do; improving the center image with only two speakers seems impossible to me. Anything you do seems to create other compromises. But the battle is admirable, and great insights often come from such explorations. Even if a method worked, it might only work for a tiny sweet spot, much like the inter-aural cancellation process.
If there was a way to separate out the L+R from L and R (there is, to some degree, using the L-XR function working against the L+R function), process that alone, and then mix it back in with L and R... As for the L+R "process", I'm thinking multiple very short delays (60 µs, 125 µs, 250 µs?) that might have the effect of decorrelation (giving it a sense of separateness), but not damage the in-between images much (since L and R would be barely, if at all, directly affected).
If that didn't work, I'd try delays in the low millisecond range (forget theory for the moment and just try it). With certain ratios between the delays (1:1.41 or 1:1.62, for starters?), you should be able to get most if not all cancellations in the final outputs to be mostly filled in/averaged out. This might cause a slight increase in clarity for the center image.
Delays in that range correlate with the phase shifter, flanger and possibly even the chorus guitar effect pedals (as in: the coloration is very audible). But I think with enough additional delays that are strategically related to each other (five delays might be nice), the effect becomes more of a general enhancement than the relatively severe cancellation sound you get with only one or two delays. I'd want to attenuate the delays some (maybe 3-6 dB) to help minimize the comb-filter ripple, but have enough left so the enhancement is audible. This might be something you'd fine-tune by ear.
It would amount to a super short reverb, not perceived as a reverb, perhaps similar to a vinyl record vibrating from the inertia of the needle flying around in the groove, creating a slight sense of depth.
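For what it's worth, here is one way that idea could be wired up as a sketch (Python/NumPy): extract the mid (L+R), run it through a handful of short attenuated delays, and add the result back to both otherwise untouched channels. The two longer taps and the -4.5 dB level are placeholders of my own, meant to be tuned by ear as described above.

```python
import numpy as np

FS = 48000
# The 60/125/250 us starting points from above, plus two longer taps roughly
# following the 1:1.41 ratio idea -- illustrative values only.
DELAYS_S = [60e-6, 125e-6, 250e-6, 350e-6, 500e-6]
TAP_DB = -4.5      # per-tap level, in the spirit of the 3-6 dB attenuation mentioned

def center_enhance(left, right, fs=FS):
    """Mix a multi-tap, short-delay copy of the mid (L+R) back into both channels."""
    mid = 0.5 * (left + right)                 # simple mid extraction (a simplification)
    g = 10 ** (TAP_DB / 20) / len(DELAYS_S)    # keep the summed taps modest in level
    taps = np.zeros_like(mid)
    for d in DELAYS_S:
        n = int(round(d * fs))
        taps += g * np.concatenate([np.zeros(n), mid])[:len(mid)]
    return left + taps, right + taps
```

Because the same processed mid is added to both sides, the difference (L-R) content is left untouched; only the center information gets the extra short-delay taps.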
Well from my experiments so far I know it has the potential to work, but only in the sweet spot. But the beauty is the rest of the listening positions remain stereo as before, no real degradation there, we are just trying to create that clearer sweet spot.
I've used your proposal, or at least a variant thereof: inserting a band-passed L+R copy at -14 dB, delayed 0.270 ms, and another, inverted signal at -19 dB a further 0.270 ms later to cancel out the pulse I just added. It has the same objective, and it does give a clearer sweet spot.
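For anyone wanting to try the same variant, this is roughly what that insert looks like as a sketch (Python/NumPy/SciPy). The delays and levels are the ones just quoted; the 1-8 kHz Butterworth band-pass is my own assumption, since only "band passed" is stated.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 48000
DELAY_S = 0.270e-3       # 0.270 ms, as above

def _delay(x, seconds, fs=FS):
    n = int(round(seconds * fs))
    return np.concatenate([np.zeros(n), x])[:len(x)]

def mid_insert(left, right, fs=FS):
    """-14 dB band-passed L+R delayed 0.27 ms, plus a -19 dB inverted copy
    another 0.27 ms later, mixed back equally into both channels (the mid)."""
    mid = left + right
    b, a = butter(2, [1000, 8000], btype="bandpass", fs=fs)   # assumed band edges
    mid_bp = lfilter(b, a, mid)
    tap1 = 10 ** (-14 / 20) * _delay(mid_bp, DELAY_S, fs)
    tap2 = -(10 ** (-19 / 20)) * _delay(mid_bp, 2 * DELAY_S, fs)
    return left + tap1 + tap2, right + tap1 + tap2
```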
But I don't like adding it this way. One side of Pano's rePhase shuffler-2 was very effective at cancelling the crosstalk; the other side wasn't. But the part that works actually helps the entire stage.
Look at this picture:
This is the shuffler vs no shuffler after the time it takes for the signal to reach the wrong ear...
Now let's review the rePhase shuffler-2 under the same circumstances. So you either get the above at both ears or:
One ear gets this:
And the other gets this:
That's why the shuffler worked, but up to a point. If we can improve that.... like in the picture above it...
I do not want decorrelation, just less crosstalk. Achieve that and you really get that 3D deep, enveloping and stable stage (I promise). I've had that 3D feel, just not with the right tonal balance yet. But it is a nice enough "feature" for me to want to hunt it down.
I have a seamless stage from left to right. I would not want to mess with that. I'm not touching the stereo mechanism, just enhancing it. The ambient channels do get a bit of reverb, way way down in level and outside of the Haas limit and it does help to hide the room.
Had a first listen today. It definitely does something! First impression though, I need way more time with it to know if it's all good. I combined it with my mid/side EQ settings and it got me a very persuasive 3D stage. 😱
The tonality remained very balanced through the songs I played today. I only had about an hour to play, though.
No fatigue within this hour. I'm quite positive. 🙂
Now that it's all set up I can listen for a few days to see how it holds up.
What exactly are you auditioning now?