Why is the tweeter normally wired inverted to correct the phase?

We are going off the rails here. The question was why it is always the tweeters that are inverted, and the answer is that there is no particular reason -- you do whatever works best for phase overlap through the crossover region. Now we are talking about the audibility of so-called "transient perfect" crossovers. Of course, if the phase shift changes the on- or off-axis response, it is going to be audible. If you want to test the audibility of phase in crossovers, you should do it using all-pass filters, i.e., only the phase changes; the frequency response doesn't. And you wouldn't be the first person to do this. It has been tested numerous times by many researchers, and the conclusion has always been that phase changes of the magnitude typically found in crossovers are never audible; of course, if you start delaying things by a lot, it becomes audible.
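For anyone who wants to try that test, here is a minimal sketch in Python (assuming numpy and scipy are available; the 2 kHz centre and Q of 0.7 are illustrative values, not anyone's actual crossover). It builds a standard RBJ-cookbook second-order all-pass, whose numerator is the reversed denominator, so the magnitude is exactly flat and only the phase rotates:

```python
import numpy as np
from scipy import signal

fs = 48000
f0, Q = 2000.0, 0.7                      # illustrative "crossover-like" values
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)

# RBJ cookbook 2nd-order all-pass: numerator = reversed denominator,
# so |H(f)| = 1 at every frequency -- only the phase changes.
b = [1 - alpha, -2 * np.cos(w0), 1 + alpha]
a = [1 + alpha, -2 * np.cos(w0), 1 - alpha]

f, h = signal.freqz(b, a, worN=2048, fs=fs)
print("max magnitude deviation (dB):",
      np.max(np.abs(20 * np.log10(np.abs(h)))))   # ~0 dB everywhere
idx = np.argmin(np.abs(f - f0))
print("phase at f0 (deg):",
      np.degrees(np.angle(h[idx])))               # about 180 deg of rotation
```

Run program material through `signal.lfilter(b, a, x)` and compare it with the unfiltered version: any audible difference is then attributable to phase alone.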
 
I don't think it's that off topic. The question was why, and we have the reason why, but it's a good discussion to ask whether that is always the best approach, and whether it is audible. It gets at one of the aspects that many multi-way passive-XO speaker designers and commercial manufacturers have overlooked for many years: phase coherency and, related to coherency, absolute phase magnitude. Even a highly esteemed speaker designer on this forum has only recently admitted he hadn't paid attention to phase coherency until now, and that it looks like it may be important.

In the commercial world, you can probably count on one hand how many multi-way speakers are phase coherent and transient perfect.

It's not an easy task, but I am convinced from my own listening tests that it matters. And as Wesayso says, it's up to the individual to try it out and see (or hear) for himself. It is important to do it with longer-term listening rather than ABX, as many sources of material, as Weltersys has pointed out, are recorded with mics and amps inverted willy-nilly. I have a few recordings that are simple live recordings, made with a mic setup that allows these transients to come through clearly.
 
We are going off the rails here. The question was why it is always the tweeters that are inverted, and the answer is that there is no particular reason -- you do whatever works best for phase overlap through the crossover region. Now we are talking about the audibility of so-called "transient perfect" crossovers. Of course, if the phase shift changes the on- or off-axis response, it is going to be audible. If you want to test the audibility of phase in crossovers, you should do it using all-pass filters, i.e., only the phase changes; the frequency response doesn't. And you wouldn't be the first person to do this. It has been tested numerous times by many researchers, and the conclusion has always been that phase changes of the magnitude typically found in crossovers are never audible; of course, if you start delaying things by a lot, it becomes audible.

Audible on what, headphones as the test medium?
I'll tell you that you can perceive a difference. I perceived a difference between linear phase and minimum phase throughout the pass-band of my speakers. That change should have been subtle (at least that's what I was expecting); it wasn't. I preferred the minimum-phase one: not the straight phase, as if the FR went down to DC, but the one following the FR curve's minimum phase below 30-40 Hz. That came as a big surprise to me. I figured I'd like the linear one more, but didn't. The FR was the same in this test.

I do wonder how some of these tests are done, though. In one of my experiments I inserted all-pass filters too, on my time-coherent setup. It changed the imaging when I applied it to the mid band only, using mid/side processing.

You guys may continue to believe the research; for me, proving it to myself was far more important than blindly following others. I do not regret one minute spent on the subject. Make up your own minds; that will teach you how you feel about it. If you're going to do it with rePhase, please use a frequency dependent window on your exported FR to ensure you're not fixing the wrong phase plot. Without that, you're bound to get skewed results.
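For what it's worth, here is a crude sketch of what a frequency dependent window does, in Python with numpy. This is my own simplified version of the idea, not REW's or rePhase's actual algorithm, and the six-cycle width is just a common choice. Each analysis frequency only "sees" a fixed number of cycles of the impulse response around its peak, so late room reflections stop contaminating the high-frequency magnitude and phase:

```python
import numpy as np

def fdw_response(ir, fs, cycles=6, fmin=20, fmax=20000, nbins=200):
    """Crude frequency-dependent window: for each analysis frequency,
    Hann-window the IR (centred on its peak) to `cycles` periods,
    then evaluate the DFT at that single frequency."""
    t = (np.arange(len(ir)) - np.argmax(np.abs(ir))) / fs
    freqs = np.geomspace(fmin, fmax, nbins)
    H = np.empty(nbins, dtype=complex)
    for i, f in enumerate(freqs):
        half = cycles / (2 * f)                      # half-width in seconds
        w = np.where(np.abs(t) < half,
                     0.5 + 0.5 * np.cos(np.pi * t / half), 0.0)
        H[i] = np.sum(ir * w * np.exp(-2j * np.pi * f * t))
    return freqs, H

# Toy demo: a direct spike plus a 3 ms reflection at half amplitude.
# The long low-frequency windows keep the reflection's comb filtering;
# the short high-frequency windows progressively exclude it.
fs = 48000
ir = np.zeros(4800)
ir[100], ir[100 + 144] = 1.0, 0.5
freqs, H = fdw_response(ir, fs)
mag_db = 20 * np.log10(np.abs(H) + 1e-12)
```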
Just my opinion though.
 
It's fine to think that you have stumbled upon something that others have overlooked for years. But as engineers, we are trained to find the optimum solution: do the things that matter more, and put the things that matter less lower on the list of priorities. So if engineers have ignored so-called "phase coherency" for years, we shouldn't dismiss them as idiots; there are very good reasons for it. The research has been done. Yes, it measures better; square waves look better, triangle and step responses look better. But does it matter to our hearing? As Wesayso says, test it for yourself. But do the right test. Don't change the frequency response and come back to tell us that you can hear phase changes. Don't change the phase response by delaying it by a second and come back to tell us you can hear phase changes.

If I were you, I'd start with rePhase. It's a brilliant piece of software for this sort of test.

EDIT: I see I cross-posted with Wesayso. Wesayso, I tested it using rePhase too. For a while, I thought I was sure I could hear it on my horn system -- but the setting without phase changes was the one that was on. Since then, I have given up.

I am not following what you did with linear phase versus minimum phase. Do you happen to have the two phase responses?
 
Let's not compare speakers to sound recorded by mics that may or may not be flipped. If you listened to live, unamplified drums, sitting in the audience say 15 ft away, do you think a speaker playing a recording of that same performance would sound more realistic if correct absolute polarity were observed?


In live sound we will typically start with the kick drum and set the polarity (not phase) of the mic input to give the best gain-before-feedback response for the drummer's monitor speaker. Then everything else is set up based on the kick setting.
A kick drum with a mic at the front has the beater moving towards the mic -- this is the reference. The snare and toms have the stick hitting the skin moving away from the mic, so these will be set with the opposite polarity to however we decided to set the kick mic.


Dave Rat can explain it better than me:

https://m.youtube.com/watch?v=tPxxzswyoVg



Steve.
 
I find the whole discussion around absolute phase, and even phase coherence, a bit irrelevant. Mics may be set to try to produce a certain initial phase, but that doesn't necessarily match how your ear hears things. Is the initial wavefront that hits your ears from a snare drum at a concert positive or negative in pressure? Who knows. Who cares. Are there delays on multiple mics set so that the initial wavefronts from multiple tom-toms are coherent? Of course not.

After the recording process, with mics and wires and preamps, who knows if the initial polarity has been preserved. Who knows if the phase, relative to what your ear would have received at a given location at the live event, is anything like coherent across multiple microphones.

And then it goes through who knows what combination of effects processors, EQ, etc. during mixing and mastering. What is the phase now?

So while you MIGHT be able to convince me that in ideal test conditions, with test signals, I could hear absolute phase differences, what does that matter? There really is no gold-standard reference at that point to say one is right and the other is wrong. And while phase coherence might be easier to hear in ideal conditions, I'm not convinced even that matters, because, as per the above, the signal has already undergone any number and manner of phase-scrambling events relative to what a single ear would have heard at the live event. So a little more phase scrambling... what's to say that makes it worse, or better? It doesn't matter. It may sound subtly different, but there is no wrong or right, just what you like or don't like.

Phase coherence through the crossover region is important because of its effects on the amplitude response, polar response, and power response when multiple driver outputs are summed at a specific location. Maintaining the shape of a square wave, when no such thing survives from microphone to speaker in a real recording? Meh. Whatever. Too many compromises get added in the attempt to linearize phase for a subtle and arbitrary audible change.
 
Phase coherence through the crossover region is important because of its effects on the amplitude response, polar response, and power response when multiple driver outputs are summed at a specific location. Maintaining the shape of a square wave, when no such thing survives from microphone to speaker in a real recording? Meh. Whatever. Too many compromises get added in the attempt to linearize phase for a subtle and arbitrary audible change.

You clearly haven't looked at the waveforms of Amy Winehouse's Rehab:
amy.jpg

Doesn't that look like square waves to you? 😀
 
Steve Smith,

In live sound we will typically start with the kick drum and set the polarity (not phase) of the mic input to give the best gain-before-feedback response for the drummer's monitor speaker. Then everything else is set up based on the kick setting.

Thanks for that clarification -- it's good that there is a reference for the polarity, based on the kick drum with the mic in front of the drum. It's not all willy-nilly then, as some may like to believe (well, at least not with a properly set up recording mic system).
 
Steve Smith,



Thanks for that clarification -- it's good that there is a reference for the polarity, based on the kick drum with the mic in front of the drum.


Yes, but my point was that it has nothing to do with the reproduced sound the audience hears, and everything to do with getting the best performance from the drummer's monitor. As far as what the audience hears is concerned, sometimes the kick will be normal polarity; other times it will be reversed.

For recorded music there will be no apparent difference, but there could be some advantage in experimenting with the polarities of mics relative to each other to minimise phase cancellations.


Steve.
 
Can you share more? I would be truly amazed if you could detect 1 kHz from just 2 cycles. You are saying you heard a sound for 2 milliseconds and could identify what it was. Any sound heard for only 2 ms is not a tone; it's a click. Of course, I could be wrong, but I would like to see proof.

Exactly, and I was truly amazed too. In any DAW, produce a 1 kHz sine wave (or 6 kHz, or 80 Hz for that matter). Then precisely cut and export 1/2 cycle, 1 cycle, 2 cycles, 4 cycles, 8 cycles, and so on if you wish. You won't need fades, because each cycle ends at zero, so there's no click. You'll find yourself able to recognize a tone with 2 cycles, maybe 1 and a half. Less, and it becomes a click. Pretty impressive.
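If you'd rather script it than slice in a DAW, a sketch like this (Python with numpy and scipy assumed; the file names are just examples) produces the bursts, each starting and ending on a zero crossing so no fades are needed:

```python
import numpy as np
from scipy.io import wavfile

fs = 48000
f = 1000.0                                  # also try 80 Hz and 6 kHz
for cycles in (0.5, 1, 2, 4, 8):
    n = int(round(cycles * fs / f))         # samples in `cycles` periods
    t = np.arange(n) / fs
    burst = np.sin(2 * np.pi * f * t)       # starts/ends at a zero crossing
    pad = np.zeros(fs // 2)                 # half a second of silence after it
    wavfile.write(f"burst_{cycles}cyc.wav", fs,
                  np.concatenate([burst, pad]).astype(np.float32))
```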
 
.....Maintaining the shape of a square wave, when no such thing survives from microphone to speaker in a real recording? .....

I think the work and the compromises are worth it, and they ensure a high standard of quality control, because a square wave is the most demanding wave shape to replicate; getting it right ensures the acoustic output is as close to the input wave shape as possible at the design axis.
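Both sides of that argument can be put into numbers with a sketch like this (Python with numpy/scipy assumed; it reuses the cookbook all-pass from earlier in the thread, with illustrative values). The all-passed square wave has essentially the same magnitude spectrum but a visibly different shape and peak level, which is exactly what the audibility debate is about:

```python
import numpy as np
from scipy import signal

fs, f0 = 48000, 200
t = np.arange(fs) / fs                      # exactly 1 s, so rfft bin k = k Hz
square = signal.square(2 * np.pi * f0 * t)

# RBJ 2nd-order all-pass centred between the 1st and 3rd harmonics
fc, Q = 400.0, 0.7
w0 = 2 * np.pi * fc / fs
alpha = np.sin(w0) / (2 * Q)
b = [1 - alpha, -2 * np.cos(w0), 1 + alpha]
a = [1 + alpha, -2 * np.cos(w0), 1 - alpha]
scrambled = signal.lfilter(b, a, square)

S1 = np.abs(np.fft.rfft(square))
S2 = np.abs(np.fft.rfft(scrambled))
for k in (1, 3, 5, 7):                      # odd harmonics of the square wave
    print(f"{k * f0} Hz: {S1[k * f0]:.0f} vs {S2[k * f0]:.0f}")  # near-identical
print("peak before:", square.max(), "peak after:", round(scrambled.max(), 2))
```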

Here are some on-topic visuals of how XSim sees it, for a two-way second-order LR XO.

Common to all five systems are stop-band filters of BW2 at 20 Hz and BW2 at 22 kHz.

1. Two-way LR2 XO at 200 Hz, tweeter polarity reversed.
2. Two-way LR2 XO at 200 Hz, woofer polarity reversed.
3. Two-way LR2 XO at 2000 Hz, tweeter polarity reversed.
4. Two-way LR2 XO at 2000 Hz, woofer polarity reversed.
5. One-way with an exemplary good full-ranger :) or, let's say, a two- or three-way system with a FIR XO.
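The same behaviour can be reproduced outside XSim. A minimal sketch (Python with numpy/scipy assumed, ideal flat drivers, crossover values from cases 3 and 4) shows the textbook LR2 result behind this whole thread: summed in phase, the two LR2 legs null at the crossover; invert either one and the sum becomes a flat first-order all-pass:

```python
import numpy as np
from scipy import signal

npts, fx = 512, 2000.0                      # crossover at 2 kHz, as in cases 3/4
w = 2 * np.pi * np.geomspace(200, 20000, npts)

# LR2 = two cascaded 1st-order Butterworths, analog prototypes
b_lp, a_lp = signal.butter(1, 2 * np.pi * fx, 'low', analog=True)
b_hp, a_hp = signal.butter(1, 2 * np.pi * fx, 'high', analog=True)
_, lp = signal.freqs(np.polymul(b_lp, b_lp), np.polymul(a_lp, a_lp), worN=w)
_, hp = signal.freqs(np.polymul(b_hp, b_hp), np.polymul(a_hp, a_hp), worN=w)

in_phase = 20 * np.log10(np.abs(lp + hp))   # deep notch at fx
inverted = 20 * np.log10(np.abs(lp - hp))   # flat: sum is a 1st-order all-pass
print("worst dip, in phase: %.1f dB" % in_phase.min())
print("worst dip, inverted: %.1f dB" % inverted.min())
```

Whether you invert the tweeter (case 3) or the woofer (case 4) only flips the sign of the difference, so the summed magnitude is the same either way -- which is the "no particular reason" from earlier in the thread.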
 

Attachments

  • 1.png - 5.png (XSim screenshots for cases 1-5 above)
It's fine to think that you have stumbled upon something that others have overlooked for years. But as engineers, we are trained to find the optimum solution: do the things that matter more, and put the things that matter less lower on the list of priorities. So if engineers have ignored so-called "phase coherency" for years, we shouldn't dismiss them as idiots; there are very good reasons for it. The research has been done. Yes, it measures better; square waves look better, triangle and step responses look better. But does it matter to our hearing? As Wesayso says, test it for yourself. But do the right test. Don't change the frequency response and come back to tell us that you can hear phase changes. Don't change the phase response by delaying it by a second and come back to tell us you can hear phase changes.

If I were you, I'd start with rePhase. It's a brilliant piece of software for this sort of test.

EDIT: I see I cross-posted with Wesayso. Wesayso, I tested it using rePhase too. For a while, I thought I was sure I could hear it on my horn system -- but the setting without phase changes was the one that was on. Since then, I have given up.

I am not following what you did with linear phase versus minimum phase. Do you happen to have the two phase responses?

The best I can do right now is post two graphs, which are simulations; I'd like to redo them using APL_TDA at some point (both left channel only):
linearphase.jpg

Example of linear-phase correction; the grey plot is REW's minimum-phase calculation.

minimumphase.jpg

Same plot with minimum phase response correction.

This is how that last one looks in APL_TDA:
APL_Demo_Wesayso2D.jpg

This isn't a simulation; it's the measurement of the stereo pair at the listening position.

I'd like to point out one engineer who strove for time coherency his whole career: John Dunlavy. He didn't ignore it like most others did; he tried to find his best way of doing it. Quite the achievement, if you ask me. And I believe I know why he found it to be important. I agree with him.

If you ever experiment with it again, try using APL_TDA; it will show you whether you managed to get it right, more so than REW currently can, in my humble opinion. The demo version can show it to you. The above was done with a screen grab of the demo. You can't save anything, but you do get a clear picture. As above in 2D, or in 3D:
TDA_3D.jpg


I should also point out that APL has a suite that can get you there, or at least close to it. APL_TDA is the measurement suite, while APL_TDA_EQ can do FIR correction based on multiple measurements to get it right for you. As I have only used the demo of APL_TDA so far, I cannot tell you how powerful it is. I will probably try it out some time in the near future.
 
The best I can do right now is post two graphs, which are simulations.

If I'm reading your plots right, the only significant phase difference between the two occurs below 80 Hz. At those frequencies, a few radians of phase lag imply pretty significant time delays, which is not true at higher frequencies. In my experience, phase issues at low frequencies are detectable, but not at higher frequencies.
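To put numbers on that: a fixed phase lag corresponds to a delay of t = phi / (2*pi*f), so the same few radians mean a hundred times more delay at 50 Hz than at 5 kHz. A trivial sketch (Python assumed):

```python
import numpy as np

phi = np.pi                                  # half a cycle of phase lag
for f in (50, 500, 5000):
    # delay in milliseconds implied by `phi` radians at frequency f
    print(f"{f} Hz: {1000 * phi / (2 * np.pi * f):.2f} ms")
# 50 Hz: 10.00 ms   500 Hz: 1.00 ms   5000 Hz: 0.10 ms
```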

Also, can you post the impulse responses rather than the phase responses for those two versions? The minimum-phase response will be tighter than the linear-phase version. The vast majority of people aren't overwriting the compact, minimum-phase nature of their woofers' low-end response with something DSP-generated.
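For illustration only (not Wesayso's actual filters): a generic scipy comparison of a linear-phase low-pass and a minimum-phase filter with approximately the same magnitude. Note that scipy's `minimum_phase` (homomorphic method) returns a half-length filter approximating the square root of the original magnitude, so it is convolved with itself here to make the comparison fair. The minimum-phase impulse packs its energy at the start, with no pre-ringing:

```python
import numpy as np
from scipy import signal

fs = 4000
lin = signal.firwin(301, 100, fs=fs)          # linear-phase 100 Hz low-pass
mp_half = signal.minimum_phase(lin, method='homomorphic')
minp = np.convolve(mp_half, mp_half)          # magnitude ~ original, min. phase

for name, h in (("linear", lin), ("minimum", minp)):
    p = np.argmax(np.abs(h))
    pre = np.sum(h[:p] ** 2) / np.sum(h ** 2)  # fraction of energy pre-peak
    print(f"{name}-phase: peak at tap {p}, "
          f"{100 * pre:.1f}% of the energy before the peak")
```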

I think most people feel that at the low end the room dictates the response so much that phase shifts become unimportant. If you've gone to great lengths to diminish the impact of the room, you may be in a rare situation where that isn't the case. Or maybe the low-end phase response is more important than people realize.

Based on my own experiments, I have long considered that only one crossover in my system might benefit from phase accuracy: the crossover from the subwoofer to the woofer.
 
I got in on the audibility-of-phase discussion; I have no crossovers in my setup.
So I could post impulse plots, but they wouldn't apply to the original question in this thread.
My results were posted in my thread: http://www.diyaudio.com/forums/full-range/242171-making-two-towers-25-driver-full-range-line-array.html along with many impulse plots and step responses. ray wanted to know what I was talking about when I mentioned minimum phase vs linear phase. That was the intent of those plots. 🙂
(A little clarification: "ray" is supposed to be ra7.) 😛
 
Thanks for the plots, Wesayso! Are you saying you could reliably hear the difference between the two corrected phase responses? I would be shocked if I could hear the difference; there is almost no change in the mid frequencies. What was different between the two, according to you?

Did you run the test blind? I know you mentioned running long-term tests, but it has been shown that switching back and forth quickly produces more reliable results than long-term listening (our memories aren't very reliable). It's very easy to convince yourself that you can hear a difference; I know I had convinced myself, but I was wrong.
 
Thanks for the plots, Wesayso! Are you saying you could reliably hear the difference between the two corrected phase responses? I would be shocked if I could hear the difference; there is almost no change in the mid frequencies. What was different between the two, according to you?

Did you run the test blind? I know you mentioned running long-term tests, but it has been shown that switching back and forth quickly produces more reliable results than long-term listening (our memories aren't very reliable). It's very easy to convince yourself that you can hear a difference; I know I had convinced myself, but I was wrong.

First you've got to try to get it like this; no small task, I'll tell you. 🙂
I've run them both for longer periods; ABX isn't really possible with the delay involved here.
One sounds natural. The other sounds hasty, for lack of a better description, as if someone is "pushing or pressuring you".
The natural one lets you get into the music. The other keeps you on your toes, never letting you get completely comfortable. It's mainly the difference down low, but I've switched between them many times. I was convinced the linear one would be my preferred one. So much for expectation bias.

The difference is clear though, clearer than I expected from such a small change. In September I ran the linear-phase filter. I was pleased and proud, but something was bugging me, especially in some songs, like "September in Montreal", which has quite low and strong content. I was always looking around me while listening, caught in a "what was that, was that right?" chase I couldn't quite place.

After I switched to the minimum-phase setup, it immediately sounded different to me. I tried it for a few days and tested the two against each other. I even started to ask questions about it here and there, wondering if I was crazy. Only the phase is different; everything else is the same.

From the end of October till now I have run the minimum-phase version. It hasn't bugged me again: a much more natural sound. It's only a small change in timing. I have run listening tests again in the meantime with the same changes between linear and minimum phase, and my preference has always stayed with the minimum-phase version, the one seen in the APL_TDA plot.
I've found a few more people who have come to similar conclusions. But I can't tell you this is the (only) way to go; you've got to try it to know what it is. It is for me, though, even against my conviction: I was so sure I needed flat phase. In practice this means the tweeter is timed a bit earlier and the low end a bit later. The tweeter being early isn't even visible in a group-delay plot; the low end is more obvious, but smaller in sound than I had expected.

Have you ever noticed how good music makes you move spontaneously? Makes you tap your feet? The linear-phase one doesn't (as much); I can't fault it for being bad. Get the minimum-phase filter in line and you can't help it: you'll start to move to the beat. That's not only heard but felt just as much at these low frequencies.
 
I don't know whether to trust you or not 😀 My personal experience has been different. I know what you mean by the music making you move; my arrays made me experience that for the first time. But I would attribute it to the lack of muddying floor and ceiling reflections.

Could you not load the two corrections into JRiver one at a time and switch back and forth? Ideally you'd not know which one is playing, or whether anything has been changed at all. At the very least, it is important not to know which version is on, to avoid biasing yourself unconsciously.
 
The fact that my preference went against my expectations told me enough; I don't need more convincing. The difference in feel is more than enough for me to accept that I was wrong about wanting flat phase as if there were output down to DC. I've accepted it as my preference, reached while testing even though I expected the opposite result. The smaller cues, like feeling more comfortable and tapping my feet again, all helped me make the choice.
 