Equalisation/Crossover - Best way to fit a curve?

Also check that ... you don't have numerical overflow issues inside the DSP pipeline if fixed point.
I am interested in the mathematics to start with, only after that are implementation issues a concern.
But that's one to keep in mind, thanks.
Similarly I realise we may not want to equalise a non minimum phase null, but that's not quite the problem.
The Spectre simulator can output pole-zero patterns...the choice between...QR and Arnoldi. No idea if those are suitable for measured data.
Nor do I yet. Thanks for the hint on Arnoldi. I had never heard of it but found a description, I have no idea what QR means in this context.
More fundamentally, can a loudspeaker still be treated as a lumped system?
As in my reply to Kipman, not the problem here, yet.
Basically it is an inversion....
Thanks, that's detailed and impressive.
A problem I have is that a lot of it is in the impulse response/ time domain.
Of course it's possible to transform between the two but I can't really "see" what it does, whereas I can look at a frequency response and have a pretty reasonable idea.
But how would such a routine-based process tell diffraction apart from a driver minimum-phase frequency response defect? IOW I have the same concerns as Kipman725.
Not an issue at this point
Dave Zan, may I ask what idea leads you to ask for this? Or is it just curiosity?
I may implement it but firstly I want to understand.
I understand how to fit lines and polynomial curves but this is not quite the same, and that interests me.
Part of the problem is that technically we have to fit a complex-valued function rather than a real one, hence the poles and zeros.
Even if we assume minimum phase and just try to fit the amplitude response I still don't know the procedure.
And I want a procedure so that the result can be truly optimised, not just "that looks OK".
Whenever I try to develop a formal solution it tests whether I really understand, and I learn a lot more than just the specific problem.
More responses as soon as I have time (I have to fit some data for my job!)

Best wishes
David
 
A question for any truly advanced level mavens.
When we decide to fit a dataset with a line there are well-defined algorithms, like the least-squares fit in statistics.
All clear cut and reproducible.
When I want to, say, equalise a speaker then I have to best-fit a curve with poles and zeros.
In parametric equaliser nomenclature that's centre frequencies and Qs, low-pass and high-pass, shelves and so on.
Similarly for a crossover, there may be a need to tailor the electrical response so the acoustic response behaves correctly.
Rather than just trial and error "there's a peak there so let's drop a parametric notch on that", has anyone seen any work on how to optimise the process?
The statistics literature has many ways to fit splines and the like but not quite suitable for this AFAIK.
It seems there should be some work on this but perhaps it's too nerdy for DIY?

David
This question is about multi-parameter optimization. There are algorithms that can optimize functions in more than one variable in a general way, such as Nelder-Mead Simplex, or Levenberg–Marquardt. The variables in this case are any set of values that parameterize your "curve" (in this case your equalization filters). This can be written in terms of poles and zeroes, or it can be the more familiar parameters stated in various Q and F values. It doesn't matter.

What you need is called an "objective function". In essence this codifies what you are trying to achieve by "fitting". The objective function must be written in such a way that the optimization algorithm can differentiate between "good" and "bad" sets of input values: it should produce values trending towards one extreme (e.g. closer to zero) for good fits and towards the other extreme (e.g. far from zero) for bad fits.

It is completely up to you how to design the function and there are many ways to go about it, but often a "least squares" or "squared error" approach works well. The error would be the distance, at each point (each frequency, or selected frequencies), between the current value and some desired value, e.g. how far the calculated EQ curve plus the un-EQ'd response lies from the target response (flat, or whatever you are trying to achieve). You square all of those errors and then add them up to get the sum of squared errors.

But you can extend this to weight certain frequency ranges more, or you can include terms that describe what you want to happen to the input values themselves. For instance, you could start with N PEQ sections and tell the algorithm to optimize. During the optimization some of the F terms may drift towards each other and you would end up with more than one PEQ on top of another. So another component of the error function could be proportional to 1/(Fn-Fm) for all pairs of frequencies Fn and Fm; this makes the error increase when any two PEQ centre frequencies get too close to each other. Other desired behaviours can be included in the error function as needed.
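The recipe above can be sketched in Python. This is a minimal, hypothetical example: two analog-prototype peaking sections fitted with Nelder-Mead to flatten a synthetic +6 dB resonance. The data, section count, penalty weight, and starting point are all made up for illustration; real use would want bounds or a log-parametrisation so Q and fc can't wander into nonsense values.

```python
import numpy as np
from scipy.optimize import minimize

def peq_db(f, fc, Q, gain_db):
    # Analog-prototype parametric (peaking) EQ: magnitude in dB at frequencies f
    A = 10 ** (gain_db / 40)
    w = f / fc                                   # frequency normalised to the centre
    num = (1 - w**2) ** 2 + (A * w / Q) ** 2
    den = (1 - w**2) ** 2 + (w / (A * Q)) ** 2
    return 10 * np.log10(num / den)

freqs = np.logspace(np.log10(20), np.log10(20e3), 200)
measured = peq_db(freqs, 1000, 4, 6.0)           # stand-in for real data: +6 dB resonance
target = np.zeros_like(freqs)                    # flat target
N = 2                                            # number of PEQ sections to fit

def objective(params):
    # Sum of squared dB errors between the EQ'd response and the target ...
    eq = sum(peq_db(freqs, *params[3*i:3*i+3]) for i in range(N))
    err = np.sum((measured + eq - target) ** 2)
    # ... plus a penalty ~ 1/(Fn - Fm) that keeps centre frequencies apart
    fcs = params[0::3]
    for i in range(N):
        for j in range(i + 1, N):
            err += 100.0 / (abs(fcs[i] - fcs[j]) + 1e-6)
    return err

x0 = np.array([500.0, 2.0, -3.0, 2000.0, 2.0, -3.0])   # [fc, Q, gain_dB] per section
res = minimize(objective, x0, method='Nelder-Mead',
               options={'maxiter': 5000, 'fatol': 1e-9})
print(res.fun, '<', objective(x0))               # the fit error has been reduced
```

The same skeleton works for any parameterisation (poles/zeros, shelves, crossover slopes); only `peq_db` and the penalty terms change.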
 
I found two articles on Internet that might be relevant:

R. Kumaresan and C. S. Burrus, "Fitting a pole-zero filter model to arbitrary frequency response samples", ICASSP-88, 1988 or May 1991? File name: Pole-ZeroFilterModeltoArbitrary.pdf

J. N. Brittingham, E. K. Miller and J. L. Willows, "Pole extraction from real-frequency information", Proceedings of the IEEE, vol. 68, no. 2, pp. 263-273, February 1980. File name: 45-PoleExtractionfromReal-FrequencyInformation.pdf
 
Thanks, that's detailed and impressive.
A problem I have is that a lot of it is in the impulse response/ time domain.
Of course it's possible to transform between the two but I can't really "see" what it does, whereas I can look at a frequency response and have a pretty reasonable idea.
Dave, you might find this page on REW's EQ helpful; it has pole-zero plots at the bottom too.

https://www.roomeqwizard.com/help/help_en-GB/html/eqwindow.html

If you have unlimited PEQ filters then the REW style of fitting works pretty well when you set the constraints properly, and you can see the result on screen straight away. On a practical level I tend to find that using a shelving filter or two can often reduce the number of PEQ bands needed quite significantly, but none of the auto-optimization techniques seem to play nice with shelving filters.

I made a comparison a while ago of a manual PEQ of a few bands made by eyeballing it in REW vs a pure inversion or DRC derived correction
https://www.diyaudio.com/community/...-eq-based-on-asr-klippel-measurements.354981/

 
.... during the optimization some of the F terms drift towards each other and you end up with more than one PEQ on top of each other..... This makes the error increase when any two frequencies of the PEQs get too close to each other...
When that happens, you need to combine several of these "closely spaced" narrow (high-Q) filters into a single wide (low-Q) filter, freeing up all but one of these biquads, to allow you to proceed to the next layer of iteration.

I found two articles on Internet that might be relevant...
An issue with such methods is that the DUT response might sometimes contain an RHP zero; cancelling it would require an RHP pole in the EQ, which in turn would lead to instability.

Dave Zan:
You can only come up with your own method (by getting clever) after learning from existing methods and software packages (including freeware), as no developer would be willing to share their algorithm with you for free over an internet forum. I hope you already understand that.

You may also have a look at the Python Open Room Correction:
https://www.diyaudio.com/community/threads/python-open-room-correction-porc.215529/
 
An issue with such methods is that the DUT response might sometimes contain an RHP zero; cancelling it would require an RHP pole in the EQ, which in turn would lead to instability.

Yes, like Dave wrote in post #21. Of course you can always mirror the pole in the imaginary axis and accept all-pass behaviour.
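A quick numerical check of that mirroring idea: cancelling the mirrored (LHP) image of an RHP zero leaves the first-order all-pass factor (s - a)/(s + a), whose magnitude is exactly 1 at every frequency. The zero location `a` below is an arbitrary made-up value:

```python
import numpy as np

a = 2 * np.pi * 500.0        # hypothetical RHP zero location, rad/s (value is arbitrary)
w = np.logspace(1, 5, 50)    # angular frequencies to check
s = 1j * w
H = (s - a) / (s + a)        # all-pass factor left after mirroring the zero into the LHP
print(np.abs(H).min(), np.abs(H).max())   # both 1.0: magnitude is flat, only phase varies
```

The magnitude is flat because |jw - a| = sqrt(w^2 + a^2) = |jw + a|; all the mirroring costs you is the extra phase rotation (and group delay) of the all-pass.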

I wonder how to deal with distributed behaviour, though. A distributed system can't be described with a finite number of poles and zeros, so how will a pole-zero fitting algorithm deal with that?
 
This question is about multi-parameter optimization....such as Nelder-Mead Simplex, or.....
Nice reply, made me realise that I had some implicit assumptions in mind.
I compared the problem to least squares fit of a line in statistics, or polynomial splines because what I had in mind was an explicit, closed form solution.
Neither of these problems needs an iterative solution search like Nelder-Mead Simplex or similar solvers; there's a nice analytic answer, already solved, for the whole problem.
I expected there would be a similar solution for the Pole/Zero fit, they are simple rational polynomials, so I kind of assumed the problem was both explicitly soluble, and had, in fact, been solved and published by someone.
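For contrast, the closed-form line fit mentioned above really is one linear-algebra step (the normal equations); a minimal numpy sketch with made-up data:

```python
import numpy as np

# Closed-form least-squares line fit: beta = (X^T X)^-1 X^T y
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])         # exactly y = 2x + 1
X = np.column_stack([np.ones_like(x), x])  # design matrix [1, x]
beta = np.linalg.solve(X.T @ X, X.T @ y)   # no iteration needed
print(beta)                                # [1. 2.] -> intercept 1, slope 2
```

This works because the model is linear in its parameters; a rational transfer function is not, which is why the pole/zero fit loses this one-shot property.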
Dave Zan:
You can only come up with your own method...as no developer would...share...with you for free, over an internet forum. I hope you already understand that.
As I said above, I expected there would be a classical, standard solution, as in much of maths and stats.
But I could be mistaken, thanks for the comments, I will think a bit more and write a more detailed response to the issue of PEQs at the same frequency that Charlie raised and your response.

Best wishes
David
 
If you have unlimited PEQ filters then the REW style of fitting works pretty well when you set the constraints properly and you can see it on the screen straight away. On a practical level I tend to find that using a shelving filter or two can often reduce the number of PEQ bands needed quite significantly but none of the auto optimization techniques seem to play nice with shelving filters.
Implementation of shelving filters differs between DSP platforms, so they're not very safe unless the software is specific to the DSP you're using: https://www.bennettprescott.com/downloads/DSP_Differences.pdf
 
Implementation of shelving filters differs between DSP platforms, so they're not very safe unless the software is specific to the DSP you're using: https://www.bennettprescott.com/downloads/DSP_Differences.pdf
Implementation of every filter varies between platforms; very few can agree on the definition of Q they are going to use with shelving or peaking filters. Any software auto function has to know the specifics of the filters or the result won't be valid. Dave seems to want a mathematical solution, which is very different to what is done with the currently available options. My point was that shelving filters are useful but they don't always seem amenable to optimization without manual intervention.
 
Of course you can always mirror the pole in the imaginary axis and accept all-pass behaviour.
Yes, that would flatten the response, but at the expense of increased phase shift (and group delay).

I wonder how to deal with distributed behaviour, though. A distributed system can't be described with a finite number of poles and zeros, so how will a pole-zero fitting algorithm deal with that?
No idea, Sir !!

Implementation of every filter varies between platforms, very few can agree on the definition of Q they are going to use with shelving or peaking filters...
Q-factor is a fairly standard term, defined as the resonant frequency divided by the -3 dB (skirt) bandwidth. Some people, like RBJ, have defined their own gain and Q-factor, which may be converted to the standard "EE's Q-factor" with explanations as given at the following link.

https://www.andyc.diy-audio-engineering.org/parametric-eq-parameters/index.html

Please note that the MATLAB code in post #20 is missing the gain correction (only), as it was accidentally left out while being copied from a much larger piece of code. Anyone planning to use it, please add the following line just before the %PARAEQ comment (thanks to fluid):

Code:
A=sqrt(A);

The definition of Q-factor for a shelf filter is also sometimes problematic, as some people are used to the quantity called "Slope", which equals unity when Q-factor = 0.7071 for a shelf gain of 1.
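Assuming the RBJ Audio-EQ-Cookbook definitions, the shelf slope-to-Q conversion can be sketched as below; note that S = 1 gives Q ≈ 0.7071 at unity gain, matching the statement above:

```python
import math

def shelf_slope_to_q(S, gain_db):
    # RBJ Audio-EQ-Cookbook shelf relation: 1/Q = sqrt((A + 1/A)*(1/S - 1) + 2)
    # where A = 10^(gain_dB/40)
    A = 10 ** (gain_db / 40)
    return 1.0 / math.sqrt((A + 1.0 / A) * (1.0 / S - 1.0) + 2.0)

print(shelf_slope_to_q(1.0, 0.0))   # S = 1, unity gain -> 0.7071...
```

For S = 1 the gain-dependent term vanishes entirely, so Q = 1/sqrt(2) regardless of the shelf gain; for other slopes the conversion does depend on the gain.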
 
Nice reply, made me realise that I had some implicit assumptions in mind.
I compared the problem to least squares fit of a line in statistics, or polynomial splines because what I had in mind was an explicit, closed form solution.
Neither of these problems needs an iterative solution search like Nelder-Mead Simplex or similar solvers; there's a nice analytic answer, already solved, for the whole problem.
I expected there would be a similar solution for the Pole/Zero fit, they are simple rational polynomials, so I kind of assumed the problem was both explicitly soluble, and had, in fact, been solved and published by someone.

As I said above, I expected there would be a classical, standard solution, as in much of maths and stats.
But I could be mistaken, thanks for the comments, I will think a bit more and write a more detailed response to the issue of PEQs at the same frequency that Charlie raised and your response.

Best wishes
David
That's correct - you need an iterative, general-purpose fitting approach. Keep in mind that filters are in general rational polynomials (they also have a polynomial in the denominator), and this type of function cannot be fit using the linear-algebra-based approaches behind fitting a simple polynomial. Problems of this type are not always easy to solve, and success may depend on how close to the solution you start. For a general EQ curve it is difficult to know a priori how many EQ sections/stages will be needed to fit it well, which means that the problem itself is a bit ill-defined. I would imagine that quite a lot of heuristics are needed to reach a method that works more often than not.

There is a much simpler way to get a PERFECT match to a desired EQ curve the first time, every time: use a single FIR filter instead of multiple IIRs!
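A sketch of that FIR approach with scipy, using a hypothetical target curve (a +6 dB low shelf); the linear-phase design here buys the exact magnitude match at the cost of (N-1)/2 samples of delay, which connects to the latency discussion later in the thread:

```python
import numpy as np
from scipy.signal import firwin2, freqz

fs = 48000
# Hypothetical target EQ curve: +6 dB below 200 Hz, flat (0 dB) above 1 kHz
freq = [0, 200, 1000, fs / 2]
gain = 10 ** (np.array([6.0, 6.0, 0.0, 0.0]) / 20)   # dB -> linear

# Odd tap count -> type I linear-phase FIR; delay is (N - 1) / 2 = 511 samples
taps = firwin2(1023, freq, gain, fs=fs)

w, h = freqz(taps, worN=8192, fs=fs)                 # check the realised response
db = 20 * np.log10(np.abs(h))
print(db[np.argmin(np.abs(w - 100))])                # close to +6 dB
print(db[np.argmin(np.abs(w - 10000))])              # close to 0 dB
```

A minimum-phase version of the same magnitude curve would cut the delay substantially, but then the phase is no longer a free choice.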
 
Q-factor is a fairly standard term defined as resonant frequency by -3dB (skirt) bandwidth.
If only that were true; the Q can be proportional, constant, gain-bandwidth dependent, or anything the designer wants it to be. Most can be matched with some effort, but it is often tedious, and it depends on how similar the values need to be for the application. One of the things that makes FIR filters much more appealing.
 
For real-world application, there is usually no need for an automated process: tweak until it 'fits' your target curve.

First of all, I honestly don't know how to measure the speakers accurately enough for an automated process. I tried different measuring methods, including averaging multiple microphone positions, but it is impossible for me to consistently get a convincing result in my room.

So I do everything manually. I casually measure the speakers at the listening position, then apply a low shelf, a high shelf, plus just a few peak or dip filters where the difference is obviously audible with and without correction. That's it...
 
Plasnu,
I think it depends on the goal: I see a difference between working on one way (linearising it - making it flat in the passband and a bit beyond at both extremes - then applying the crossover to it) and voicing the whole loudspeaker (I include BSC and the mid/high attenuation in that).

For one way, well yes, it could be automated (given you are able to differentiate what needs to be addressed by EQ and what needs another kind of answer). But it'll need a very well-defined set of 'rules' in my view.

For voicing, I think we agree.
 
First of all, I honestly don't know how to measure the speakers accurately enough for an automated process. I tried different measuring methods, including averaging multiple microphone positions, but it is impossible for me to consistently get a convincing result in my room.
I think you've hit the nail on the head.
Making good measurements, or rather generating/choosing the measurement you use for correction, is the name of the game imo.

I say 'generating/choosing' because I mean: are we talking outdoors or indoors... near field or far field... on axis, or listening-window average, or a wider polar average... perhaps a moving mic... etc.

The measurement chosen for auto-correction matters at least 10x more than which piece of software we use for the auto-correction, imho.

I think the same chosen-measurement thing applies to manual corrections too... it's just that manual corrections can be slower to screw up, because we usually don't make manual corrections which don't make sense (obvious examples like really high Q, high gain, etc).
Depending on the auto software, it may or may not have such overcorrection protection built in...
 
A Raspi covers the hardware.

There is no delay in implementing minimum phase (IIR replication) via FIR.
Really? I'm quite skeptical of this "no delay" claim. Please tell me how to implement an FIR filter of say 10k taps without any delay penalty... I would really like to know the secret. For example, partitioned convolution ALMOST has no delay, but will still have some on the order of a few tens of samples IIUC.