Removing Loudspeaker Group Delay using reverse-IIR filtering

Guys, sure...in the electrical domain...global vs individual makes no difference. We can all agree, Ok?

But what about the 3D acoustical measurement domain?
There's no inherent problem in the acoustical domain. If the resulting electrical transfer function for each band is the same in both cases—and it can be, only limited by numerical precision and/or maximum number of taps—there cannot be a difference in the acoustical domain.

There's also no good reason, in my opinion, to generate phase correction filters directly from acoustic measurements. Assuming the system is minimum phase (or very close to it, as loudspeakers typically are), all you really have to do is match the magnitude closely to a target using minimum phase filters, then generate the phase correction from the target response.
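
A minimal sketch of that magnitude-to-phase relationship, using the standard real-cepstrum construction (plain numpy; the function name and FFT-grid assumptions are just for illustration):

```python
import numpy as np

def min_phase_from_mag(mag):
    """Minimum-phase complex response implied by a magnitude response.
    mag: magnitude samples on a full FFT grid of length N (DC up to fs),
    with the usual symmetry mag[k] == mag[N-k]."""
    N = len(mag)
    cep = np.real(np.fft.ifft(np.log(np.maximum(mag, 1e-12))))  # real cepstrum
    w = np.zeros(N)
    w[0] = 1.0                      # keep the DC term
    w[1:(N + 1) // 2] = 2.0         # fold the anti-causal half onto the causal half
    if N % 2 == 0:
        w[N // 2] = 1.0             # Nyquist term for even N
    return np.exp(np.fft.fft(w * cep))  # magnitude is preserved; phase is now the
                                        # unique minimum-phase one for that magnitude
```

Feed it the magnitude of any minimum-phase filter and you get that filter's phase back: for a minimum-phase system, magnitude and phase are two views of the same thing.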
 
No matter how you do it, in the 3D acoustic domain you can only achieve a relatively perfect impulse at one point in space. And that applies even with a full-range driver. The reason is simple. The acoustic output of a driver is minimum phase. That is, there is a 1:1 relationship between amplitude and phase. However, the amplitude response is position dependent. Off axis is not the same as on axis. Thus the phase also changes off axis. That's the reality of it.

There's no inherent problem in the acoustical domain. If the resulting electrical transfer function for each band is the same in both cases—and it can be, only limited by numerical precision and/or maximum number of taps—there cannot be a difference in the acoustical domain.

There's also no good reason, in my opinion, to generate phase correction filters directly from acoustic measurements. Assuming the system is minimum phase (or very close to it, as loudspeakers typically are), all you really have to do is match the magnitude closely to a target using minimum phase filters, then generate the phase correction from the target response.

But the system is not minimum phase. Each band pass may be, but not a multi-way system. And as I already noted, unless the band passes have LR-type responses (or Duelund type), i.e. sum at the -6 dB point, linearizing the phase of each band pass won't sum correctly.

But you do have a point. If you model the system it would be possible to generate the correction filter based on the modeled response.
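
As a rough sketch of what "generate the correction filter from the modeled response" could look like: assume the crossover is modeled as a textbook IIR LR4, whose ideal in-phase sum collapses to a 2nd-order allpass; time-reversing that allpass's impulse response gives a linear-phase correction FIR. The sample rate, crossover frequency, and FIR length below are placeholders, not values from this thread:

```python
import numpy as np
from scipy.signal import butter, lfilter

fs, fc = 48000.0, 1000.0            # placeholder sample rate and crossover frequency
_, a = butter(2, fc, fs=fs)         # 2nd-order Butterworth denominator
b_ap = a[::-1]                      # reversed coefficients: the 2nd-order allpass that
                                    # an ideal (IIR) LR4 low+high sum collapses to

n = 4096                            # long enough for the allpass tail to die out
imp = np.zeros(n)
imp[0] = 1.0
h_ap = lfilter(b_ap, a, imp)        # modeled excess-phase impulse response
h_corr = h_ap[::-1]                 # time-reversed copy = linear-phase correction FIR

# Convolving the (modeled) crossover sum with h_corr flattens its phase;
# the price is roughly n samples of latency.
```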
 
That's the reality of it.
And, frankly, the optimizations that we're discussing don't really matter all that much. We have over a hundred years' worth of loudspeaker systems that are totally unacceptable in terms of magnitude and/or phase response, crossover design, diffraction, distortion, polar response, etc., yet they sound pretty good.

I'm not saying that we shouldn't improve the designs in the ways we're discussing. I'm just saying that we don't need to obsess over it.
 
Reality of an L/R 4-way with elliptic (LR2) xos, 4 measurements per speaker near the spot.

[Attached: MMM magnitude (300 ms), impulse, and phase measurements]
 
Maybe it's worth stopping to define what we mean when using the term 'global correction'.
I know my assessment of global correction's validity vs. individual correction depends on the global span of corrections applied,
and I sense my definition uses a wider span than others may have in mind.

So for a start...where global means applied across the input to any and all further filtering/xover stages.
And with the premise that we are interested in achieving flat/linear phase.

Seems to me there are three logical branches to global corrections:
1) frequency magnitude (mag) only
2) phase only
3) both mag and phase.

1) Mag only.
With the idea I will use a transfer function to identify and correct mag variations.

This is the easiest for me to consider. The long-time standard for correcting finished loudspeakers...aka good ole IIR EQ.
At one level of consideration, I understand drivers are predominantly minimum-phase devices, and when their mag variations are corrected, phase variations are automatically corrected too. The same understanding includes that xover summation regions between two drivers are not minimum-phase.

So I see global mag corrections as having full validity within drivers' passbands where they are the sole contributor to acoustic output, to correct both mag and phase. But only valid for mag in xover summation regions.

At another level of consideration, there's the issue that drivers are minimum phase, but with a unique response at each point in acoustic space (as per John K's #243 and some of my previous belaborings).
It comes down to which point in space, or spatial average, to correct to. No escaping this one.

My bottom line for mag only is that corrections are predominantly valid at the individual driver level, using IIR/min-phase.
Past that, adding global IIR is a jump ball...as likely to hurt phase as to help. Dunno.

2) Phase only.
With the idea I will use a transfer function to identify phase variations, and some form of reverse time filtering to linearize phase.

I see three sources of non-linear phase:
a) xovers,
b) drivers' natural mag roll-offs,
c) drivers' pass-band phase variations due to mag variations; these also extend past the pass-band as mag variation vs. the acoustic xover target.

a) xovers. Phase correction of electrical xovers seems to me to be a valid global phase correction. It directly addresses the phase rotations/GD of the xovers, and is not spatially dependent.

b) drivers' natural roll-offs. This one seems the most complicated to me. I've done enough FIR work using linear-phase xovers to know that pre-ring potential is minimized only when the two driver sections have fully complementary (mag and phase) summations.

Phase-correcting a driver's roll-off that is at a system end, like the bottom-end roll-off of a woofer, will give a massive dip in the step response before the impulse peak, indicating pre-ring potential (see the sketch after item c below).
But how about global phase correction of summed natural roll-offs (that may have been truncated earlier by xover cut-offs)?
Does the summation require less correction such that it doesn't matter so much? I dunno here, but it doesn't smell right...

There are also the issues that measured roll-offs, if used for corrections, are spatially dependent, and how would you even measure them without being under the influence of the xovers?

c) drivers' pass-band phase variations.
This one I feel confident about. Bogus. Nothing to offset the phase correction's pre-ring potential. Spatially dependent to boot.
Don't do it. Besides, such variations should have been corrected via min-phase/IIR on an individual-driver basis as the beginning stage of tuning.
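
A quick way to see the step-response dip mentioned under (b): model the woofer's bottom-end roll-off as a 2nd-order highpass at 30 Hz (an arbitrary stand-in, not from any measurement), force its phase to zero, and look at the resulting step response (numpy/scipy sketch):

```python
import numpy as np
from scipy.signal import butter, freqz

fs, n = 48000, 1 << 15
b, a = butter(2, 30.0, btype="high", fs=fs)       # stand-in for a woofer's bottom-end roll-off
_, H = freqz(b, a, worN=n, whole=True, fs=fs)     # complex response around the full circle

h_lin = np.fft.fftshift(np.fft.ifft(np.abs(H)).real)  # same magnitude, phase forced to zero
step = np.cumsum(h_lin)                               # step response of the "linearized" woofer
print("dip before the main edge:", step[: n // 2].min())   # well below zero -> pre-ring
```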

3) Both mag and phase
With the idea that a system impulse response will be used as the basis for some degree of applied impulse inversion, to flatten both mag and phase across the spectrum.
This is the span of global correction I'm thinking of when I say it's a bad idea. I guess that comes from the numerous attempts I've seen of folks making measurements of finished speakers and then applying global correction.

For me, my nay-saying comes from simply merging the two lines of reasoning above, for mag only and phase only.

I think global mag says no good for xover regions, and who knows what it means for phase.
Global phase says to me, no good for anything other than specifically isolating and linearizing electrical xover phase.
Bottom line, no good. Linearize xovers only or nothing.


Hey all, sorry for the length. It's been kind of helpful for me to try to lay out a logical presentation/explanation.
Raincoat on, have at it! And thx.
 
And, frankly, the optimizations that we're discussing don't really matter all that much. We have over a hundred years' worth of loudspeaker systems that are totally unacceptable in terms of magnitude and/or phase response, crossover design, diffraction, distortion, polar response, etc., yet they sound pretty good.

I'm not saying that we shouldn't improve the designs in the ways we're discussing. I'm just saying that we don't need to obsess over it.

Lol...from the man who brought up inter-sample delays earlier in the thread! Just teasing....🙂
Let's face it, we each have our areas that we love to delve into deeply. Or perhaps obsess over 🙂
I think we are lucky to have forums to share our audio or audio related, passions.

Mine? DIY multiway speakers!
I enjoy the acoustic design process, the woodwork, the measurements, and particularly the processing that optimizes them.

I've come to believe that once a decent acoustic design is achieved, processing rules.
As far as corrections...I have a few mottos:
Isolate as best as possible what requires correction; avoid lump-sum corrections that don't identify it.
And once an issue is isolated/identified, neither overcorrect nor undercorrect. Both leave SQ on the table.


As far as speakers sounding good over the years...sure...I remember our family's great-sounding stereo back when I was a kid in the '50s/'60s.
Just like our B&W TV looked awesome too...(until we got color)...

Since then, I've enjoyed every stage of both audio and video improvements.
Looking back....
There's probably as much improvement between the old family stereo and my current synergy/MEH rig
as there is between our first color TV and today's large-screen OLEDs.

The TV industry has obviously had enormous technological innovation/improvement.
Can the same be said for the audio industry? Sure, drivers have improved, materials have improved. But by how much?

Isn't most of the improvement in audio due to ongoing optimization, through simulations, measurements, and processing?

PS Have you taken a good look at the tech behind the new Sphere in Las Vegas? Utterly mind-blowing optimizations involved.
Over 160,000 drivers, all individually amped and processed.
 
But the system is not minimum phase. Each band pass may be, but not a multi-way system.
Sorry, I wrote that in a terribly confusing/ambiguous way. By "system" I meant a single band, comprised of the driver, cabinet, etc.—not the complete loudspeaker system.

And as I already noted, unless the band passes have LR-type responses (or Duelund type), i.e. sum at the -6 dB point, linearizing the phase of each band pass won't sum correctly.
Yes, one should base the correction on the summed phase response rather than that of the individual filters.

a) xovers. Phase correction of electrical xovers seems to me to be a valid global phase correction. It directly addresses the phase rotations/GD of the xovers, and is not spatially dependent.
Seems we may have been arguing about the wrong thing then. Correcting the excess phase resulting from the (idealized) crossover(s) is the only phase correction that should be done, in my opinion.
 
a) xovers. Phase correction of electrical xovers seems to me to be a valid global phase correction. It directly addresses the phase rotations/GD of the xovers, and is not spatially dependent.
Correcting the excess phase resulting from the (idealized) crossover(s) is the only phase correction that should be done, in my opinion.

If you are talking about the electrical crossover, what do you gain?
 
FWIW, playing with RePhase I find that to get an almost perfect phase-unwrapper for a 1 kHz LR4 @ 96 kHz you need a bit more than 128 samples, like 192 samples. Which means 96 samples (1 ms) of intrinsic delay. The additional block-processing delay of a standard convolver can be reduced by the overlap-and-add method, which of course comes with a penalty: more computations.

@CharlieLaub, I think the main benefit of RIIR, the reduced computational effort, will be at lower frequencies, like a correction for an LR4 @ 80 Hz subwoofer-to-main XO, where the phase unwrapping is so much more important than at 1 kHz and above.
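
A back-of-the-envelope check on those numbers, assuming the LR4 excess phase is the usual 2nd-order allpass and (arbitrarily) calling its tail "decayed" at -80 dB; the required correction/lookahead length scales with how slowly the poles decay:

```python
import numpy as np
from scipy.signal import butter

fs = 96000
for fc in (1000.0, 80.0):
    _, a = butter(2, fc, fs=fs)                 # poles of the LR4 summed (allpass) response
    r = np.max(np.abs(np.roots(a)))             # slowest-decaying pole radius
    n = int(np.ceil(np.log(1e-4) / np.log(r)))  # samples until the tail has fallen ~80 dB
    print(f"{fc:6.0f} Hz LR4: ~{n} samples of correction / lookahead")
```

With this (arbitrary) threshold the 1 kHz case lands around 200 samples, in the same ballpark as the ~192 found above, while the 80 Hz case comes out more than ten times longer — exactly where an FIR gets expensive and RIIR's fixed per-sample cost starts to pay off.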

If the woofer above the subwoofer crosses over at a reasonable frequency, say 2 kHz or so, then one could downsample from 96 kHz to 12 kHz (or even 6 kHz for the daring), which saves computation on two fronts:
  • the FIRs will be shorter since you need proportionately fewer taps for the same filtering operation
  • the FIRs are run at the lower sampling rate
Both of those reduce the number of operations by a factor of downsample squared. In other words, running a FIR at half the sample rate needs a quarter of the computation, a quarter the sample rate takes one sixteenth of the operations, etc. This means a bit of cleverness can make usable otherwise prohibitively expensive filters.
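
A minimal sketch of that multirate plumbing with scipy (the placeholder FIR and tap counts are illustrative only, not a real correction filter):

```python
import numpy as np
from scipy.signal import resample_poly, firwin, fftconvolve

fs, decim = 96000, 8                        # 96 kHz -> 12 kHz
taps_full = 16384                           # taps a correction might need at 96 kHz (made up)
taps_lo = taps_full // decim                # the same time span needs 1/8 the taps at 12 kHz
h_lo = firwin(taps_lo, 0.45)                # placeholder FIR designed at the low rate

x = np.random.randn(fs)                     # one second of audio at 96 kHz
x_lo = resample_poly(x, 1, decim)           # SRC down to 12 kHz
y_lo = fftconvolve(x_lo, h_lo, mode="same") # the long FIR runs at the low rate
y = resample_poly(y_lo, decim, 1)           # SRC back up to 96 kHz

# MACs per second ~ taps * rate, so the ratio between the two approaches is decim**2 = 64:
print(taps_full * fs, taps_lo * (fs // decim))
```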

The implications are obvious for linearising woofer or subwoofer phase due to their highpass transfer function. However, this is for roll-your-own DSP hackers because I haven't seen any packages which include sample rate converters (SRC). I'd love to be proved wrong.

Of course the SRCs will provide delay of their own and require some cycles, but those are relatively trivial compared with the savings one can get when running big ol' subwoofer FIRs.
 
The implications are obvious for linearising woofer or subwoofer phase due to their highpass transfer function. However, this is for roll-your-own DSP hackers because I haven't seen any packages which include sample rate converters (SRC). I'd love to be proved wrong.
One problem is that there are only a few, if any, modern audio DAC chips which officially support sample rates below 44.1 kHz.
Also, simultaneous multirate support is something most operating systems or hardware platforms have problems with, even dedicated audio DSP chips.
Much easier to throw more computing power at the problem ;-)
 
The best resamplers are totally transparent, whether or not you think the process is "imperfect". This is the same sort of argument as saying that any amount of, say, jitter would be bad.

Just like many other potentially deleterious aspects of sound reproduction there is a floor below which there is no further improvement via reduction of that particular metric.

There are so many balls up in the air with reproduction via loudspeakers you have to pick your battles and give up some things, or you will never be satisfied.
 
If the woofer above the subwoofer crosses over at a reasonable frequency, say 2 kHz or so, then one could downsample from 96 kHz to 12 kHz (or even 6 kHz for the daring), which saves computation on two fronts:
  • the FIRs will be shorter since you need proportionately fewer taps for the same filtering operation
  • the FIRs are run at the lower sampling rate
Both of those reduce the number of operations by a factor of downsample squared. In other words, running a FIR at half the sample rate needs a quarter of the computation, a quarter the sample rate takes one sixteenth of the operations, etc. This means a bit of cleverness can make usable otherwise prohibitively expensive filters.

The implications are obvious for linearising woofer or subwoofer phase due to their highpass transfer function. However, this is for roll-your-own DSP hackers because I haven't seen any packages which include sample rate converters (SRC). I'd love to be proved wrong.

Of course the SRCs will provide delay of their own and require some cycles, but those are relatively trivial compared with the savings one can get when running big ol' subwoofer FIRs.

Sure, you can jump through those hoops so that you can continue to use your precious FIR processing.

Or you could just use RIIR processing, which does not require designing an FIR kernel or implementing it, and is lighter weight computationally speaking (at low frequencies).

RIIR is so straightforward, one could program a Pi Zero 2 W (user provides the ADC/DAC) to have a simple interface for choosing allpass parameters, and this could be used with ANY system, active or passive, for group delay EQ. It would probably sell like hotcakes to all the tweakers and audiophools who would LOVE to know that their system has been transformed via delay EQ! If people buy cable risers and 4 gauge mains cables with silver plated wire, then why not this? These people do not want to get their hands dirty with FIR filter design but would be content to plug a few parameters into a GUI.

That is what RIIR is actually good for. It doesn't provide any new capability that could not be done before now (e.g. via FIR filtering) but does it in a simpler manner that lends itself to certain user-friendly applications.
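
For reference, the reverse-IIR idea itself fits in a few lines. An offline sketch for an LR4 crossover, assuming its summed response is the usual 2nd-order allpass (scipy; the function name and parameters are illustrative, not anyone's product):

```python
import numpy as np
from scipy.signal import butter, lfilter

def riir_lr4_phase_eq(x, fc, fs):
    """Offline reverse-IIR group-delay EQ for an LR4 crossover at fc (Hz).
    An ideal LR4 low+high sum behaves as a 2nd-order allpass; running that
    same allpass over the time-reversed signal applies its conjugate phase,
    cancelling the crossover's phase rotation instead of doubling it."""
    _, a = butter(2, fc, fs=fs)             # allpass denominator (Butterworth, Q = 0.707)
    b_ap = a[::-1]                          # reversed coefficients -> unit-magnitude allpass
    return lfilter(b_ap, a, x[::-1])[::-1]  # filter the reversed signal, then flip back

# e.g. for an 80 Hz subwoofer-to-main crossover at 96 kHz:
#   y = riir_lr4_phase_eq(x, fc=80.0, fs=96000)
# A streaming version does the same thing block by block, with enough
# lookahead for the allpass tail to decay -- that lookahead is the latency.
```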