Well - knowing the audible limits for group delay is already quite a challenge! 😉
Regards
Charles
That's why I provided a link to a review paper on the subject! 😛
As long as you only sum in phase, that is, on axis and not off axis like a spinorama measurement, and without the IIR knee problem.
But I agree that group delay is a limit you run into really easily in the bass region.
Not much high-Q filtering can be done before GD gets too high.
So there, long latency and FIR can be a real benefit.
But the summing in the bass region can be a real mess, with lots of reflections and path differences, so I would be wary of going over 24 dB/oct even with linear-phase IIR/FIR filtering.
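To put rough numbers on how quickly GD grows down low, here is a small pure-Python sketch (illustrative values only; the 80 Hz crossover frequency is my assumption) of the group delay of a 24 dB/oct Linkwitz-Riley highpass, computed as the numerical derivative of the phase:

```python
import cmath
import math

def lr4_highpass(f, f0):
    # 4th-order Linkwitz-Riley highpass = a squared 2nd-order Butterworth highpass
    s = 1j * 2 * math.pi * f
    w0 = 2 * math.pi * f0
    h2 = s * s / (s * s + math.sqrt(2) * w0 * s + w0 * w0)
    return h2 * h2

def group_delay_ms(f, f0, df=0.01):
    # group delay = -dphi/domega, here by central difference on the phase
    dphi = cmath.phase(lr4_highpass(f + df, f0)) - cmath.phase(lr4_highpass(f - df, f0))
    while dphi > math.pi:   # undo the +/-pi wraps of cmath.phase
        dphi -= 2 * math.pi
    while dphi < -math.pi:
        dphi += 2 * math.pi
    return -dphi / (2 * math.pi * 2 * df) * 1000.0

for f in (40, 80, 160, 320):
    print(f"{f:3d} Hz: {group_delay_ms(f, 80):.1f} ms")
```

At the 80 Hz crossover this comes out near 5.6 ms, and it falls off quickly above the crossover, which is why the same filter order is far more benign at higher crossover frequencies.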
ERB widens with increasing frequency.
So true.
ERBs (https://en.wikipedia.org/wiki/Equivalent_rectangular_bandwidth) seem to be at the heart of the best psychoacoustic science known to date re phase/group-delay audibility.
The amount of phase rotation from one end of an ERB to the other seems to be the determinant. (I recommend searching J.J. Johnston's posts on ASR.)
Since the ear's ERB bandwidth widens, in octave terms, with a decrease in frequency, phase audibility, aka group delay, becomes more likely at lower frequencies.
The fact that a xover's group delay increases with a decrease in freq doubles down on the odds of low-frequency audibility.
I've read that a number of pro-audio techs who have experimented with reducing phase rotation to as low a freq as possible report positive audible improvements.
Certainly more low-end research is needed, imho.
I think the high end, like 1000 Hz or above, is fairly well researched (much easier to research than the low end).
The ear's ERBs at higher frequencies seem to be sufficiently narrow to preclude phase audibility... like CharlieLaub was showing.
My understanding is the ERB widens log-wise, octave-wise, with decreasing frequency... and that's what counts. No?
ERB widens with increasing frequency.
Maybe you mean widens on a linear scale?
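The linear-vs-log question can be settled with a few lines. A sketch, assuming the commonly used Glasberg & Moore approximation ERB(f) = 24.7 * (4.37*f/1000 + 1) Hz:

```python
import math

def erb_hz(f):
    # Glasberg & Moore (1990) approximation: ERB(f) = 24.7 * (4.37*f/1000 + 1) Hz
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

for f in (100, 500, 1000, 4000):
    bw = erb_hz(f)
    # width of the same band expressed in octaves
    octaves = math.log2((f + bw / 2) / (f - bw / 2))
    print(f"{f:4d} Hz: ERB = {bw:6.1f} Hz = {octaves:.2f} octaves")
```

So the ERB widens in Hz as frequency rises, but expressed in octaves it widens toward the low end, which seems to be the sense that matters here.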
About a third of an octave from approx 500 Hz up.
https://www.researchgate.net/figure...ency-response-of-the-gammatone_fig2_258383269
Ok thx. Here's the ERB chart from that link. Linear frequency scale.
Here's from same site. Log scale.
Which is the scale that supposedly matters in audible phase detection. As in: how many degrees of phase rotation/tilt are there over the octave width of the ERB, comparing the lower-frequency end of the ERB to the higher-frequency end.
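That idea can be sketched numerically. Assumptions: the LR4 summed response is the textbook 2nd-order allpass with Q = 1/sqrt(2), and the Glasberg & Moore formula is used for the ERB width; crossover frequencies are illustrative.

```python
import math

def erb_hz(f):
    # Glasberg & Moore approximation (my assumption for the ERB width)
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def lr4_sum_phase_deg(f, f0):
    # Phase of the LR4 summed response: a 2nd-order allpass with Q = 1/sqrt(2).
    # atan2 keeps the curve continuous from 0 down to -360 degrees.
    u = f / f0
    return -2.0 * math.degrees(math.atan2(math.sqrt(2) * u, 1.0 - u * u))

for f0 in (100, 300, 1000, 3000):
    bw = erb_hz(f0)
    rot = lr4_sum_phase_deg(f0 - bw / 2, f0) - lr4_sum_phase_deg(f0 + bw / 2, f0)
    print(f"XO at {f0:4d} Hz: {rot:4.0f} deg of rotation across one ERB")
```

The trend supports the argument above: a low crossover packs noticeably more of the allpass rotation into a single ERB than a high one does.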
Which has even less to do with the kind of group-delay distortions caused by loudspeaker crossovers than most older studies. 😉 There is one recent paper, also from Finland, which deals with the group-delay distortion of multiway speakers. Maybe I can find it again.
That's why I provided a link to a review paper on the subject! 😛
Regards
Charles
Edit: found it: https://acris.aalto.fi/ws/portalfil...udspeaker_Group_Delay_Characteristics_AAM.pdf
Fig 11 is the most interesting one IMO
If you feel like reading, here is an old web article I wrote about group delay.
http://web.archive.org/web/20090809205449/http://www.geocities.com/kreskovs/GD1.html
I'm afraid the math is getting beyond me. Let me ask this question another way. I recently built a 3-way center with LR4 filters in between each driver. Please keep in mind the delta between when I learned this in college and when I built my own speaker was easily 20 years.
🙂
So, based on my knowledge, there's an inherent mismatch between a purely electrical filter, which has no acoustic offsets, and a multi-way speaker, which has them because of the XY locations on the baffle AND the depth of each driver assembly, and of course the choice of mic placement. So, again, going from a hazy recollection of a course, we could either alter our choice of filter orders or move the drivers so their acoustic offsets would align. I remember a drawing where the magnets of the tweeter, woofer and mid were all in a vertical line.
So, forward to my last project. Based on my classroom memories, to use these idealized LR4 filters (assume I mean the final electroacoustic sum) I would have to add delay to the midrange and tweeter; further (and here is where theory and practice diverged), since the driver-to-driver delays are additive, the tweeter should have the MOST delay.
That is, if this was a physical speaker I could slide drivers around on, the 1" dome tweeter should be pushed back the most, the 4" cone midrange the least in order to achieve the correct offsets for the filters to mesh correctly.
What I found in my case was that this was not optimal, and I had to empirically sneak up on the correct delays using an inverted mid, and that the tweeter had about half the delay of the mid.
Is this a function of anything in the LR4 group delay mechanisms?
Which has even less to do with the kind of group-delay distortions caused by loudspeaker crossovers than most older studies. 😉 There is one recent paper, also from Finland, which deals with the group-delay distortion of multiway speakers. Maybe I can find it again.
Regards
Charles
Edit: found it: https://acris.aalto.fi/ws/portalfil...udspeaker_Group_Delay_Characteristics_AAM.pdf
Fig 11 is the most interesting one IMO
Very interesting article. But like most group delay studies, the participants listened using headphones (Sennheiser HD-650). Others have claimed it is much harder to discern group delay when listening to a speaker in a room.
Delays, meaning constant time delays that equal distances between acoustic centers, are not additive driver to driver. (Maybe I'm not grasping what you mean.)
Based on my classroom memories, to use these idealized LR4 filters (assume I mean final electroacoustic sum) I would have to add delay to the midrange and tweeter, further (and here is where theory and practice failed) since the driver to driver delays are additive the tweeter should have the MOST delay.
They are set individually relative to the latest-arriving driver section. So for each driver section arriving earlier than the latest, delay it individually relative to the latest arrival.
(This all assumes of course, that a single mic location was used to measure all driver sections)
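The procedure above can be sketched in a few lines; the acoustic-center distances here are made-up illustrative numbers, not from any post in this thread:

```python
# Minimal sketch of "delay everything relative to the latest arrival".
C = 343.0  # m/s, speed of sound at roughly 20 C

# hypothetical measured acoustic-center distances from one mic position
distances_m = {"woofer": 2.038, "mid": 2.013, "tweeter": 2.000}
arrival_s = {name: d / C for name, d in distances_m.items()}
latest = max(arrival_s.values())  # the woofer arrives last in this example
delays_us = {name: (latest - t) * 1e6 for name, t in arrival_s.items()}

for name, d in delays_us.items():
    print(f"{name:7s}: delay by {d:5.1f} us")
```

The latest-arriving section (here the woofer) gets zero added delay; everything else is delayed just enough to land with it.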
Delays, meaning constant time delays that equal distances between acoustic centers, are not additive driver to driver. (maybe I'm not grasping what you mean)
No, that's what I'm asking. From my memory, with a 3-way on a flat baffle, the tweeter is the closest to the mic, then the mid, then the woofer.
If we made a physical adjustment, the tweeter would have to move back the most, then the midrange, no?
Not necessarily, we can't just rely on physical distances. We have to measure the apparent acoustic center of each driver (from one reference mic position).
I say apparent because it can be hard to measure/nail down precisely.
We can't just line up voice coils on top of each other, IOW, ...[although that might be a decent start i guess w/o any meas capability...can't say I know.]
IMHO, lining up the estimated (eyeballed) "center of cone/dome gravity" points for each driver is for sure one of the best starting points.
We can't just line up voice coils on top of each other, IOW, ...[although that might be a decent start i guess w/o any meas capability...can't say I know.]
And in doubt, always opt for a bit more tweeter setback (with a waveguide, preferably) than estimated, as it is much easier to add some constant group delay to the woofer than to the tweeter.
In the end, one will always have to manually match measured magnitudes -- and more importantly, phases -- to their targets, by fiddling with the XO parameters (that's the fun part, or not ;-)
Once you have matched phases (for Linkwitz-Riley target types), any wobbles in the sum can be EQ'd out globally and simply added in to all the final driver correction filters.
OTOH, when the tweeter is leading the woofer, there is no easy practical way to fully match phases. The usual workaround is to switch the tweeter polarity and then hope to find a partial phase match from this starting point. But it's not a true Linkwitz-Riley anymore. For steady-state sines, the inversion is equivalent to half a cycle of time offset, so the sum is again perfect. For transient signals, however, the tweeter is still too early, and wavelets (short shaped sine bursts) around the XO don't match the woofer's, making the summed time response there half a cycle longer.
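The half-cycle equivalence for steady sines is easy to check numerically; the 2 kHz crossover frequency here is just an example of mine, not from the thread:

```python
import math

f_xo = 2000.0  # Hz, an example crossover frequency
half_cycle_s = 0.5 / f_xo  # a polarity flip equals a half-cycle time shift, for a steady sine

t = 1.234e-3  # any instant
inverted = -math.sin(2 * math.pi * f_xo * t)                 # flipped tweeter
shifted = math.sin(2 * math.pi * f_xo * (t - half_cycle_s))  # same sine arriving half a cycle later
# 'inverted' and 'shifted' agree to floating-point precision

path_mm = half_cycle_s * 343.0 * 1000.0
print(f"half cycle at {f_xo:.0f} Hz = {half_cycle_s * 1e6:.0f} us, about {path_mm:.0f} mm of path")
```

That 250 us / ~86 mm at 2 kHz is the time-of-flight error that the polarity trick hides for sines but not for transients.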
BTW, I'm not asking to eyeball anything, and I understand the need to measure the actual acoustic distances. Personally I use interferometry, and I don't mean to imply anything else should be used.
My question was just that applying expectations (tweeters closest, woofers furthest away, mids in the middle) to digital delays didn't work out quite as expected.
So, interferometry says woofer is 1.5" away, mid 0.5" away and tweeter is 0", I'd expect to apply delays so that mid has 1" of delay, and tweeter has 1.5" of delay.
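Translating those inches into time gives the expected delays; a sketch using the numbers from the post (the 48 kHz sample rate is my assumption for illustration):

```python
INCH_M = 0.0254
C = 343.0     # m/s, speed of sound
FS = 48000.0  # Hz, assumed DSP sample rate

# extra acoustic distance of each driver relative to the tweeter, from the post
offsets_in = {"woofer": 1.5, "mid": 0.5, "tweeter": 0.0}
furthest = max(offsets_in.values())
delays_us = {}
for name, x in offsets_in.items():
    t = (furthest - x) * INCH_M / C  # delay needed to align with the furthest driver
    delays_us[name] = t * 1e6
    print(f"{name:7s}: {t * 1e6:5.1f} us = {t * FS:.2f} samples at 48 kHz")
```

That gives about 74 us of delay on the mid and about 111 us on the tweeter, matching the 1" and 1.5" expectation stated above.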
The discrepancy might be because of the phase shift of the drivers themselves. I don't think that's the issue, though; your measurement likely is. Or, to be more precise, the measurement distance: if it's short, the path lengths to the drivers differ much more because of the angles involved. Try a measurement distance of 3 m.
Aside from that, it could be a delay issue if you are using different DSPs or analog filters. That usually only happens when a sub has its own DSP with a different lag/delay. It's not likely, but I wanted to mention the possible error source anyway.