The phase shift continuously increases due to the time delay from the speaker to the mic position. Put the mic just in front of the cone and you will see less; put it further away and you will see more.
Jan
Thanks, I was thinking it would be useful/possible to correct it at the listening position...

So, in order to correct my two-way loudspeakers, must I position the mic capsule at the junction of the tweeter and midbass lobes?
How can I make a good choice for this position?
How does it compare with the speaker on the other side? If they are the same, perhaps there is no issue. It looks like a smooth phase response to me.
There is nothing to correct. It is how physics works: you have a source, you observe it from a distance, and depending on how many cycles 'fit' in that distance you see a continuing phase rotation.
Jan
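To put numbers on that "how many cycles fit" idea, here is a minimal sketch (plain Python; the 1 m mic distance and 343 m/s speed of sound are just assumed for illustration):

```python
# How many wave periods 'fit' in the speaker-to-mic distance, and the phase
# rotation that implies. All numbers are illustrative assumptions.
SPEED_OF_SOUND = 343.0   # m/s, assumed
DISTANCE = 1.0           # m, assumed speaker-to-mic distance

for freq in (100, 1_000, 10_000):            # Hz
    wavelength = SPEED_OF_SOUND / freq       # m
    cycles = DISTANCE / wavelength           # periods that fit in the path
    phase_deg = -360.0 * cycles              # accumulated phase rotation
    print(f"{freq:>6} Hz: {cycles:6.2f} cycles -> {phase_deg:9.1f} deg")
```

At 1 m this gives roughly -105° at 100 Hz but about -10,500° at 10 kHz, which is why the unwrapped phase falls faster and faster with frequency, and falls more the further the mic is from the cone.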
Yes, it looks barely the same
Yes, it is great to explore physics. I've convolved with a rePhase impulse file in order to linearise the loudspeakers' phase (with the microphone at a distance of 5 cm from the midwoofer/tweeter).
And I don't hear the difference 😕
Perhaps I must retry 🙄
See if REW has a tool to extract minimum phase, or "unwrap" the excess phase. If not, you might try 'frequency response blender'.
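If you want to try that split outside REW, here is a minimal sketch (NumPy only; the toy impulse response, a delayed damped 1 kHz burst, stands in for a real measurement export) of reconstructing the minimum-phase part from the magnitude and looking at the excess phase that is left over:

```python
# Minimal sketch: split a response into minimum phase + excess phase using the
# real-cepstrum (homomorphic) method. 'ir' is a toy impulse response standing
# in for a REW export.
import numpy as np

fs = 48000
n = 8192
t = np.arange(n) / fs
delay_samples = 240                       # 5 ms of fake propagation delay
ir = np.zeros(n)
ir[delay_samples:] = np.exp(-2000 * t[:n - delay_samples]) * np.cos(
    2 * np.pi * 1000 * t[:n - delay_samples])

H = np.fft.fft(ir)
mag = np.abs(H)

# Minimum-phase spectrum reconstructed from the magnitude alone.
cep = np.fft.ifft(np.log(np.maximum(mag, 1e-12))).real
fold = np.zeros(n)
fold[0] = cep[0]
fold[1:n // 2] = 2.0 * cep[1:n // 2]
fold[n // 2] = cep[n // 2]
H_min = np.exp(np.fft.fft(fold))

# Excess phase = measured phase minus minimum phase; here it is essentially
# the pure 5 ms delay.
excess = np.unwrap(np.angle(H)) - np.unwrap(np.angle(H_min))
freqs = np.fft.fftfreq(n, 1 / fs)
print(excess[1:10] / (2 * np.pi * freqs[1:10]))   # ~ -0.005 s at every bin
```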
I'm confused by "it looks barely the same"; does that mean they are the same (or not)?
This looks like a normal, smooth phase response to me. Nothing weird. What matters most is whether whatever you're crossing it into aligns at the crossover region.
What you want is for the SOURCE to have a linear phase characteristic. This is NOT the same as the sound measured at the listening position, which includes both the phase of the source and the delay from travelling through the air.
Delay manifests itself as a frequency-dependent phase rotation. The amount of rotation increases with frequency because, for a given amount of time (the propagation delay), the number of rotations (wave periods) increases with frequency. Another way to say the same thing: in 1 millisecond there are more periods of a high frequency than of a low one. Delay gives the unwrapped phase response that characteristic "falling off a cliff" shape: it decreases slowly at low frequencies, and its slope keeps increasing with frequency.
So if you want to see whether your system is linear phase, you need to remove the propagation delay first in order to see the phase characteristic of the source (your loudspeaker).
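As a minimal sketch of that delay removal (NumPy only; the "measurement" here is a fake delayed impulse, and the crude peak pick stands in for a proper delay estimate such as the one REW provides):

```python
# Minimal sketch of removing the propagation delay before judging the phase of
# the source. 'ir' is a fake measurement (a pure delayed impulse).
import numpy as np

fs = 48000
ir = np.zeros(4096)
ir[500] = 1.0                              # ~10.4 ms of fake time of flight

delay_s = np.argmax(np.abs(ir)) / fs       # crude delay estimate from the peak

H = np.fft.rfft(ir)
freqs = np.fft.rfftfreq(len(ir), 1 / fs)

measured_phase = np.unwrap(np.angle(H))                       # includes -2*pi*f*delay
source_phase = measured_phase + 2 * np.pi * freqs * delay_s   # delay removed

print(np.max(np.abs(source_phase)))   # ~0: with the delay removed, the phase is flat
```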
Thank you all 😎
I've linearized my FR with 61 peaking filters in Equalizer APO.

The effect on the perceived tonal equilibrium is huge; it is better by far (with my configuration)... I was not expecting anything extraordinary from the phase linearisation.
But I decided to linearize it above 100 Hz (the subwoofer phase is smooth without any correction) by exporting measurements from REW into rePhase, correcting them manually, and then generating a WAV file (32-bit, 48 kHz).
So I put it into an Equalizer APO convolver, and again it seems to be better by far; the voice on the first recording I tested is a lot less diffuse.
Imaging seems to be strongly affected by phase linearity!
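For anyone wanting to sanity-check such a correction offline, here is a minimal sketch (SciPy assumed; the file names are hypothetical, and mono 32-bit float WAV exports at 48 kHz are assumed) that convolves the measured impulse response with the rePhase FIR and compares how far each is from linear phase above 100 Hz:

```python
# Minimal sketch for checking the correction offline: convolve the measured
# impulse response with the correction FIR and compare the deviation from
# linear phase (a pure delay) before and after.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

fs_ir, ir = wavfile.read("speaker_ir_from_REW.wav")           # hypothetical file
fs_fir, fir = wavfile.read("rephase_correction_32b_48k.wav")  # hypothetical file
assert fs_ir == fs_fir == 48000, "sample rates must match"

corrected = fftconvolve(ir, fir)

def deviation_from_linear_phase(x, fs, n_fft=65536, f_lo=100.0, f_hi=15000.0):
    # Unwrapped phase minus its best straight-line fit (a pure delay),
    # i.e. how far the response is from linear phase in the chosen band.
    spec = np.fft.rfft(x, n_fft)
    freqs = np.fft.rfftfreq(n_fft, 1 / fs)
    phase = np.unwrap(np.angle(spec))
    band = (freqs > f_lo) & (freqs < f_hi)
    slope, intercept = np.polyfit(freqs[band], phase[band], 1)
    return phase[band] - (slope * freqs[band] + intercept)

before = deviation_from_linear_phase(ir, fs_ir)
after = deviation_from_linear_phase(corrected, fs_ir)
print("max deviation before:", np.max(np.abs(before)), "rad")
print("max deviation after :", np.max(np.abs(after)), "rad")
```

If the correction is doing its job, the "after" deviation should be much smaller than the "before" one in the corrected band.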
Thanks for your explanation!