Digital Signal Processing - How it affects phase and the time domain

Then Neville Thiele and Richard Small figured out how to describe woofer response mathematically. Woofer design almost instantly became not only easier, but better.
Very true. But sadly, the insight into systematic design that T and S provided (alignments etc.) is not used much. There has been a long regression back to trial and error, now in the form of box simulators. When folks ask on forums for advice on how to use a specific driver, the typical response is "simulate it", despite the existence of analytical insight.
 
You can't use FIR without understanding the math behind it.
No, the other way around. FIR is easier to use and benefited from the democratisation of DSP (in my youth it required very expensive special hardware and several PhDs). IIR is now rarely used, and this is probably due to lack of insight into the math behind it. Also, there are not many publicly available tools to synthesize IIR filters.
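For concreteness, textbook IIR prototypes at least are only a few lines in freely available code; it's fitting an IIR to an arbitrary measured response that lacks public tooling. A minimal scipy sketch (order, corner frequency, and sample rate are arbitrary picks for illustration):

```python
import numpy as np
from scipy import signal

fs = 48000  # sample rate, Hz (arbitrary choice)

# Synthesize a 4th-order Butterworth low-pass at 2 kHz as
# second-order sections (the numerically robust form).
sos = signal.butter(4, 2000, btype='lowpass', fs=fs, output='sos')

y = signal.sosfilt(sos, np.random.randn(fs))   # filter 1 s of noise

w, h = signal.sosfreqz(sos, worN=2048, fs=fs)  # inspect the response
print(f"at 2 kHz: {20 * np.log10(abs(h[np.argmin(abs(w - 2000))])):.1f} dB")
```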
 
People have come to assume, as a convenient rule, that loudspeaker drivers in a system are mostly linear phase. This is, in my experience, a mistake. Do you really check whether your raw driver, measured in a cab, is linear phase in its intended passband, and can you really trust your measurements?
We were the first to measure speaker phase on a regular basis and also check for MINIMUM PHASE. (NOT Linear Phase)

In some two decades of doing this, I've only come across two drive units which were not Minimum Phase. (This experience does suggest some modern units are also not Minimum Phase, but IMHO these are not good drive units.)

Care to tell us what your experience is and which drive units you have checked for Minimum Phase?

On the other hand, multi-way speakers are usually non Minimum Phase because of their crossovers.
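For anyone who wants to run the minimum-phase check on their own measurements, here is a minimal sketch (the measurement array is a placeholder, and a real check must first remove the pure time-of-flight delay): reconstruct the minimum-phase counterpart of an impulse response from its magnitude alone via the real cepstrum, then compare.

```python
import numpy as np

def min_phase_from_magnitude(h, nfft=None):
    """Minimum-phase impulse response sharing the magnitude
    spectrum of measurement h (real-cepstrum / homomorphic method)."""
    nfft = nfft or 8 * len(h)                     # generous zero padding
    mag = np.abs(np.fft.fft(h, nfft))
    cep = np.fft.ifft(np.log(np.maximum(mag, 1e-12))).real
    fold = np.zeros(nfft)                         # fold the cepstrum:
    fold[0] = cep[0]                              # keep n = 0,
    fold[1:nfft // 2] = 2 * cep[1:nfft // 2]      # double the causal part,
    fold[nfft // 2] = cep[nfft // 2]              # keep the Nyquist bin
    return np.fft.ifft(np.exp(np.fft.fft(fold))).real[:len(h)]

# h_meas = ...  # your measured impulse response (placeholder)
# h_min = min_phase_from_magnitude(h_meas)
# After removing time-of-flight delay from h_meas, any difference
# between h_meas and h_min is excess (non-minimum) phase.
```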

Is Linear Phase Worthwhile? discusses this.

Simple Arbitrary IIRs discusses the effect of noise in measurements.

airvoid, do you have a link to your "automated IIR/FIR algorithm that spits out anything with noise"?
 
Also, there are not many publicly available tools to synthesize IIR filters.
Isn't this what we refer to as a "PEQ"?

And if so, the tools are plentiful.

But perhaps you mean freer use of the IIR coefficients rather than standard filters. If so, CamillaDSP, as I understand it, offers this by letting the user set a1, a2, b0, b1, b2 freely, in addition to all the standard filters.
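To make the raw-coefficient idea concrete, here is a minimal Python sketch that computes one such coefficient set, an arbitrary peaking-EQ example in the RBJ cookbook form (nothing CamillaDSP-specific), and applies it:

```python
import numpy as np
from scipy import signal

fs = 48000.0
f0, Q, gain_db = 1000.0, 2.0, -6.0   # arbitrary peaking-EQ example

# RBJ "Audio EQ Cookbook" peaking filter, reduced to the raw
# b0, b1, b2, a1, a2 one would type into a free-biquad slot.
A = 10 ** (gain_db / 40)
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)
b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
b, a = b / a[0], a / a[0]            # normalize so a0 == 1
print("b0 b1 b2:", b, "  a1 a2:", a[1:])

y = signal.lfilter(b, a, np.random.randn(int(fs)))   # apply to 1 s of noise
```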

//
 
Care to tell us what your experience is and which drive units you have checked for Minimum Phase?

airvoid, do you have a link to your "automated IIR/FIR algorithm that spits out anything with noise"?
https://www.researchgate.net/public...eaker_management_system_with_FIRIIR_filtering

This more general paper is all I found in a quick search on Rainer Thaden at FourAudio, who developed the algorithm we are using; it's simply part of the software package when implementing their DSP.

He is currently on vacation but I will ask him if he published a paper specifically on this.

I am also away, deep in the Swedish countryside, but I'll be happy to look at the data when I get back to my computer.
I have good data on some BMS drivers and Seas drivers.

Best

-a
 
re' "Also, there are not many publicly available tools to synthesis IIR filters."

Maybe so, but I'll bet nearly everyone posting here has one: REW.
It has limits, but it's a good starting point, although as Mark said, not nearly as convenient as FIR auto-EQ.

To be honest, that is really the only reason why I'm always looking for a better FIR platform and software. It may be true that FIR is a crutch and can be misused, but it's also true that time is a limited commodity.
 
For almost 100 years since Kellogg and Rice, we have had speakers with all sorts of maladies: thermal compression, suspension non-linearities, motor non-linearities, enclosure 'issues', and crossovers that might be adequate with a few volts. Once some power gets applied, the whole thing falls apart: woofer fs changes, inductance changes with stroke and current, crossover coils can saturate, etc.

There is zero point in taking a speaker that suffers from thermal, suspension, or motor issues, or from poor design, and trying to 'linearize' it with FIR; I get that.

From what I have gathered from the inferences so far:

Hypex amps are inadequate, antiquated junk.
FIR is for people that wear pocket protectors and speak in algebra.
You can't use FIR without understanding the math behind it.


Did I miss something?


Hypex is definitely not junk!
The software is on the worse side, and there has been a lot of junk code in it (meaning some stuff was not properly tested before it was deployed). I bet Hypex underestimated how huge a task it is to make something from scratch.
A completely common situation.

1500 taps per channel is not much, but maybe this is a decent trade-off against latency. If you want linear phase down to 20 Hz, you will get very long latency. That doesn't matter at all for audio alone; it very much does for lip sync in video.
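To put rough numbers on that (a quick sketch; 48 kHz is an assumed sample rate): a linear-phase FIR delays by (N-1)/2 samples, and its frequency resolution is only about fs/N, which is why control down to 20 Hz forces long filters:

```python
fs = 48000                                  # assumed sample rate, Hz
for taps in (1500, 65536):
    delay_ms = (taps - 1) / 2 / fs * 1000   # linear-phase group delay
    res_hz = fs / taps                      # rough frequency resolution
    print(f"{taps:6d} taps: {delay_ms:6.1f} ms, ~{res_hz:.2f} Hz resolution")
# 1500 taps:  15.6 ms, ~32 Hz resolution (too coarse to shape 20 Hz)
# 65536 taps: 682.7 ms, ~0.73 Hz resolution
```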

Anyway, who wants linear phase at 20 Hz?

Thinking of scenarios where FIR could be used:

Side-mounted woofer: either rotate phase to get linear phase or, the opposite, steer directivity. The Beolab 90 or Kii Audio Three come to mind.

Get linear phase through the crossover. I.e., use a common LR crossover but make it linear phase, so the drivers behave like a perfect full-range. Forget about, say, a 200 Hz crossover with only 1500 taps in the bank. I think you have two options: make the driver filters linear phase, or use IIR filters and then correct the phase afterwards via a global EQ. The last option could be tested on any PC with Rephase and a free trial of JRiver with its convolver. Any PC from after 2005 or so can run 65000 taps, no problem. With that many taps you have room to play a lot.
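As a concrete sketch of the first option (crossover frequency, tap count, and sample rate are arbitrary picks), a complementary linear-phase FIR pair whose outputs sum back to a pure delay:

```python
import numpy as np
from scipy import signal

fs, fc, taps = 48000, 200, 65536   # arbitrary picks for illustration

# Linear-phase low-pass prototype (odd length gives an integer delay);
# the complementary high-pass is a centered unit impulse minus the
# low-pass, so LP + HP sums to a pure delay of taps/2 samples.
lp = signal.firwin(taps + 1, fc, fs=fs)
hp = -lp.copy()
hp[taps // 2] += 1.0

w, H = signal.freqz(lp + hp, worN=8, fs=fs)
print(np.abs(H))                   # ~[1 1 1 ...]: the pair sums flat
```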
 
No buffering would be the ideal situation but some software coders prefer to use buffers for convenience.
Not so much for convenience, but for computational efficiency. Processing one sample at a time, one must set the parameters for one sample, call the subroutine, set the next parameters for one sample, call the next subroutine, etc. Processing a block of samples at a time (often called a "chunk"), one must set the parameters for many samples, call the subroutine, set the next parameters for many samples, call the next subroutine, etc.

In addition, processing many samples per chunk in a tight loop can often make very good use of the pipeline in modern processors, while processing one sample at a time often cannot.
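A toy illustration of the difference, with a pure Python per-sample loop against one vectorized "chunk" call standing in for the block-based subroutine (the exact ratio depends on machine and runtime):

```python
import time
import numpy as np

x = np.random.randn(1_000_000)
g = 0.5                                  # trivial "DSP": a bare gain

t0 = time.perf_counter()                 # one sample per call:
y1 = np.empty_like(x)                    # call/loop overhead dominates
for n in range(len(x)):
    y1[n] = g * x[n]
t_sample = time.perf_counter() - t0

t0 = time.perf_counter()                 # one chunk per call: the inner
y2 = g * x                               # loop runs in optimized code
t_chunk = time.perf_counter() - t0

print(f"per-sample {t_sample:.3f} s, per-chunk {t_chunk:.5f} s")
```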
 
But sadly, the insight into systematic design that T and S provided (alignments etc.) is not used much. There has been a long regression back to trial and error.
That's not necessarily a bad thing -- people have finally discovered that EQ can be used to convert whatever alignment one gets into whatever alignment one wants (within reason). For the 2nd-order sealed-box case, the Linkwitz Transform has been around since 1978, but has only recently found general acceptance. For the vented-box case, coincidentally I just wrote an (unpublished) paper on how to extend the Linkwitz Transform to 4th-order systems.

It uses a lot of (gasp!) math.
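For the 2nd-order sealed-box case, here is a minimal sketch of the Linkwitz Transform as a digital biquad (box and target parameters are made-up examples; at these frequencies the bilinear transform's frequency warping is negligible):

```python
import numpy as np
from scipy import signal

fs = 48000
f0, Q0 = 80.0, 1.1     # measured sealed-box resonance (made-up example)
fp, Qp = 40.0, 0.707   # desired target alignment (made-up example)
w0, wp = 2 * np.pi * f0, 2 * np.pi * fp

# H(s) = (s^2 + (w0/Q0) s + w0^2) / (s^2 + (wp/Qp) s + wp^2):
# the numerator cancels the box's pole pair, the denominator
# substitutes the target pair. Discretize via bilinear transform.
b, a = signal.bilinear([1, w0 / Q0, w0 ** 2], [1, wp / Qp, wp ** 2], fs=fs)
print("biquad b:", b, " a:", a)
```

As always with this transform, the boost below the box resonance costs excursion and amplifier power.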
 
Not so much for convenience, but for computational efficiency. Processing one sample at a time, one must set the parameters for one sample, call the subroutine, set the next parameters for one sample, call the next subroutine, etc. Processing a block of samples at a time (often called a "chunk"), one must set the parameters for many samples, call the subroutine, set the next parameters for many samples, call the next subroutine, etc.

I guess my riposte to that would be you've already chosen to use subroutines as a matter of convenience, certainly not for computational efficiency.
 
I guess my riposte to that would be you've already chosen to use subroutines as a matter of convenience, certainly not for computational efficiency.
There is always a compromise to be made between execution time, program memory, data memory, reconfigurability, code maintainability, code re-usability, development time, and probably a few other things that don't immediately come to mind.

EDIT: Latency! I forgot I/O latency.
And numerical precision.
 
I don't know how else to answer this. It has become an argument between empirical design and analytical design. Empirical design can result in good solutions; analytical design generally makes solutions easier and more complete.

Thanks for the continued follow-up. Yep, I think it has distilled down to empirical vs. analytic... and I'm not sure that's even really worth delving into much.
Both are clearly necessary.
The way I see it, without prior underlying analytics we would not have anything to work with empirically, to begin with.

And please don't think I've got anything against math... just the opposite, really. I adore math; it was my best subject in grade school.
I took a math minor in undergrad. I wanted a math major but was hell-bent on getting into the best business grad school I could, so majors in econ and finance took precedence.
After b-school, my profession was ultra math-intensive.
Then in retirement, for 7 years I coached bright middle school kids for the national MathCounts competition. We had huge success... it was amazing how much the little kiddos could learn to do in their heads.
Problems like "How many zeros does 300 factorial end with?" were routine questions, expected to be answered without any form of calculation aid in under 45 seconds. Very fun working with the kids... everyone loved math, including their coach! 🙂
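(For anyone curious: that one is Legendre's formula, counting the factors of five since twos are plentiful. In code:)

```python
# Trailing zeros of n! = number of factors of 5 in n! (Legendre's formula).
def trailing_zeros(n):
    z, p = 0, 5
    while p <= n:
        z += n // p   # multiples of 5, 25, 125, ...
        p *= 5
    return z

print(trailing_zeros(300))   # 74 = 60 + 12 + 2
```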

My pushback re: math in this thread has been against the idea that FIR's or IIR's underlying math has to be fully understood to achieve the best real-world results.
I think that idea is horse-hockey.
 
My pushback re: math in this thread has been against the idea that FIR's or IIR's underlying math has to be fully understood to achieve the best real-world results.
No, it's about more than that. The math (in this case it's actually arithmetic) is straightforward. But what is needed is an understanding of the interplay between the frequency domain and the time domain: bandwidth and duration, causality and delay, spectral shape and time-domain oscillation, etc. Those, plus the standard implementation considerations of numerical precision and accuracy, execution speed, and so on.
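A small numerical illustration of the bandwidth-and-duration point (all values arbitrary): the narrower the spectral feature a filter must realize, the longer its impulse response rings.

```python
import numpy as np
from scipy import signal

fs = 48000
for bw in (1000.0, 100.0):               # correction bandwidths to compare
    h = signal.firwin(4095, [1000 - bw / 2, 1000 + bw / 2],
                      pass_zero=False, fs=fs)   # band-pass around 1 kHz
    env = np.abs(h) / np.abs(h).max()
    dur_ms = np.count_nonzero(env > 0.01) / fs * 1000
    print(f"{bw:6.0f} Hz wide -> rings above 1% for ~{dur_ms:.0f} ms")
```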
 