ARTA

mwbrennwa,
I have no data for the example shown, but I remember that I used an analytical solution for the impulse and frequency response of the filter.
The quality of a minimum-phase calculation with the Hilbert transform depends on how you simulate the response above and below the passband. That choice is arbitrary.
If you have a more systematic approach, I would like to hear it.
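A minimal sketch of the minimum-phase reconstruction discussed here, using the cepstral form of the Hilbert-transform relation. How the magnitude is extended above and below the passband is left to the caller, since, as noted, that choice is arbitrary:

```python
import numpy as np

def minimum_phase_from_magnitude(mag):
    """Reconstruct a minimum-phase complex response from magnitude
    samples on a full FFT grid of even length N (cepstral method)."""
    N = len(mag)
    log_mag = np.log(np.maximum(mag, 1e-12))   # guard against log(0)
    cep = np.real(np.fft.ifft(log_mag))        # real cepstrum
    # fold the cepstrum to make it causal (the minimum-phase condition)
    fold = np.zeros(N)
    fold[0] = cep[0]
    fold[1:N // 2] = 2.0 * cep[1:N // 2]
    fold[N // 2] = cep[N // 2]
    return np.exp(np.fft.fft(fold))            # |result| ~= mag
```

For a response that is already minimum phase (e.g. the FFT of h = [1, 0.5, 0, ...]) this recovers both magnitude and phase to numerical precision.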

Best,
Ivo
 
Propagation delay setting

Hi,

I'm struggling with the propagation delay setting in Arta->Setup->Measurement.

1.) After restarting Arta (1.9.1) the setting is gone. Is this the intended behavior?

2.) After setting the propagation delay, the delay is correctly "added" to every dual-channel measurement (FR2). But when using Arta->Record->"Spatial impulse response group record" with Test/Setup->"Dual channel measurement mode" activated, the propagation delay is not "added" to the spatial measurements (resulting in an incorrect "sample 300" starting point).
Again, is this intended, or did I miss something?

regards,
Armin
 

ad 1) Yes, it is intended. I recommend making a cross-correlation-based estimation of the delay in each measurement session.
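A cross-correlation delay estimate of the kind recommended here can be sketched as follows (a generic numpy sketch, not ARTA's internal code):

```python
import numpy as np

def estimate_delay_s(reference, measured, fs):
    """Estimate how much `measured` lags `reference`, in seconds,
    from the peak of their full cross-correlation."""
    corr = np.correlate(measured, reference, mode="full")
    lag_samples = int(np.argmax(np.abs(corr))) - (len(reference) - 1)
    return lag_samples / fs
```

For example, if `measured` is `reference` padded with 30 leading zeros at fs = 48 kHz, this returns 30/48000 s.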

ad 2)
- Measurement in Fr2 mode is a real-time mode that allows us to remove the delay, if we want to.
- Measurement in Imp mode does not remove the sound propagation delay, as it is important data in room-acoustics analysis, but we can remove the delay shown in the frequency response as phase/group delay by entering a value for "PreDelay" in the toolbar edit box. This delay is always referenced from the cursor position, and must be re-estimated for each change of cursor position.
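Removing a known delay from the displayed phase, as described above, amounts to adding back the linear-phase term of a pure delay (a generic sketch, not ARTA's implementation):

```python
import numpy as np

def remove_predelay(freqs_hz, phase_rad, predelay_s):
    """A pure delay tau contributes -2*pi*f*tau of phase; adding that
    term back removes the delay from the displayed phase response."""
    return np.asarray(phase_rad) + 2.0 * np.pi * np.asarray(freqs_hz) * predelay_s
```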

Ivo
 
If you use a quarter-inch mic with a minimized body size (to reduce reflections), then you have a mic that does not need FR calibration.
If you have a mic with a slightly worse characteristic, then the phase response will still be unimportant for group delay and resonance detection...

The VituixCAD thread mentioned so...
Quite a few measurement mics have a "slightly worse characteristic", sometimes from as low as 2 kHz but especially in the top octave. I added a minimum-phase calculation for the calibration file to VituixCAD to make the phase response more realistic within 2...15 kHz, assuming that the mic is small, as a measurement mic should be. It is not perfect because, as you already wrote, minimum-phase extraction is based on assumptions (slope estimation), and there is no way to know the absolute truth about the magnitude below and above the available range.

Due to this limit I programmed the minimum-phase estimation so that the automatic slope detection has some limits above the highest known frequency, to avoid aggressive changes in the calculated phase. The result looks more believable (imo) than nothing, though an absolutely correct phase is not very significant information when all measurements are processed with the same settings. The user can disable the calculated phase if not happy with it.
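The slope-limited extrapolation described above might look roughly like this. This is a hypothetical sketch; the last-octave fit and the 6 dB/oct limit are assumptions, not VituixCAD's actual code:

```python
import numpy as np

def extrapolate_mag_db(freqs_hz, mag_db, f_target_hz, slope_limit_db_oct=6.0):
    """Extend a calibration magnitude curve above its highest known
    frequency, clamping the slope detected over the last octave.
    Hypothetical sketch; the 6 dB/oct limit is an assumption."""
    f_hi = freqs_hz[-1]
    mask = freqs_hz >= f_hi / 2.0                     # last known octave
    octaves = np.log2(freqs_hz[mask] / f_hi)
    slope = np.polyfit(octaves, mag_db[mask], 1)[0]   # dB per octave
    slope = np.clip(slope, -slope_limit_db_oct, slope_limit_db_oct)
    f_target_hz = np.asarray(f_target_hz, dtype=float)
    return mag_db[-1] + slope * np.log2(f_target_hz / f_hi)
```

Clamping the detected slope keeps the extrapolated tail, and hence the calculated phase, from changing aggressively when the top of the measured range is steep.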
 
Hi Kimmo,

I did not say that the calculation of minimum phase is a waste of time, but the user has to be aware of its possible benefits, and when to apply it.

Minimum phase is a concept of system behavior for systems that have a straight path from input to output (for a loudspeaker or microphone it means recording without wave reflections - possibly in an anechoic chamber - and with object dimensions much smaller than the wavelength).

The minimum-phase concept is very useful in filter design, when we have a target function given by the magnitude of the frequency response. It can be used in crossover design - when we have only the magnitude of the frequency response, not the phase - and, as you said, it will give some error.

Ivo
 

I know and agree.

Another option to create the compensation file is an IIR equalizer, which would also manipulate the phase response. I suppose that method would be politically more correct than minimum-phase extraction, and wouldn't raise so much discussion about how much of a minimum-phase device a mic is. A natural consequence of that discussion could be a new question: should we compensate the baffle step of a speaker with an IIR shelf or a linear-phase FIR shelf, if a diffractive/reflective radiator is not perfectly minimum phase? I think there is consensus that IIR is better (though not perfect) despite the delayed diffractions.

Anyway, I have compared compensation files created with both methods. The difference in phase was very close to zero degrees with my Clio MIC-01 if the automatic slope detection for minimum-phase extraction uses the whole last octave (or the slope is set manually to 0 dB/oct).
 


I see that you are on the road to getting the impulse response of the total system, including the crossover response. That would be a new approach that gives more insight into the time-domain response.

In my work I have found that over-equalized high-order systems have a lot of trouble retaining a natural sound. I always prefer low-order IIR equalization, which introduces a small number of resonances (in peaks and dips).
Why?
Let me note that ripples in a loudspeaker response come from two sources: resonances and sound-wave reflections. It is not a good idea to equalize ripple (notches) that is reflection-based with filters that use pole-zero resonance compensation. In those cases the filters just become sources of new resonances.

Ivo
 
I hope the new version will be ready in a couple of months.

Ivo

Hi Ivo,
If you are taking requests, perhaps you might consider adding automated measurements with a manual turntable, similar to the tool that Kimmo wrote for ARTA some time back? It is very useful and his tool was wonderful, but it is no longer supported as he has moved on to focus on VituixCAD.
Thanks for the consideration...
 

That feature is already implemented in the ARTA Spatial recorder.
Just set "Stepping mode" to "Manual" and set "Pause time" to a value that allows you to manually change the loudspeaker angle before the next recording.
Ivo
 

Hi Ivo,
thanks for the quick reply.



It works with the "PreDelay" setting in the toolbar edit box for the exported files - thanks for the hint.


Nevertheless, in the spatially measured dual-channel .pir files the reference position ("sample 300") is still not correct, because the "PropagationDelay" setting in the "Frequency Response Measurement Setup" dialog box is ignored, so they are not "comparable" with normal FR2 measurements.


It would be very helpful if a new checkbox "Use FR2 Propagation Delay" could be added to the "Record Spatial IR Group" dialog box, enabling use of the "propagation delay" setting in dual-channel spatial record measurements.


regards,
Armin
 
Hi Ivo,

In the first version of ARTA I did not implement response compensation with mic calibration data. After a lot of requests I implemented it later. In ARTA only magnitude data are used, as manufacturer calibration data usually do not contain phase information. Why do they ignore phase information? A possible answer is that no method exists for a correct measurement of a mic's phase response. Calibration is usually done by comparison with some reference microphone, but the phase obtained that way is not the true phase, as the reference microphone does not have an ideal response.
I must say that the true mic phase response cannot be calculated by a minimum-phase calculation, as we do not have the true mic response either before or after calibration.
Do you have any reason to believe that a small-capsule microphone could be non-minimum phase, or any measurements to show that this is the case?

For the type of condenser microphones used for measuring speakers, I can't think of any mechanism, mechanical or electrical, that should make them in any way non-minimum phase, so I think it is perfectly reasonable to assume that they are minimum phase and thus to attempt to calculate a minimum-phase response for their calibration profile if phase data is not provided. (Perhaps with a checkbox to disable this calculation, to provide the option of the current behaviour.)
If both of these conditions are met then we do not need to use the compensated response in crossover design; on the contrary, it is better not to apply response (magnitude + phase) compensation.

The compensation can be used on the summed total response - to give us insight into the achievable tonal balance.

All this said, I recommend:

1) Find a decent microphone that has a flat response from 100 Hz to 10 kHz.

2) When designing the crossover, do not use frequency-response compensation on the measurement results; rather, use FR compensation on the crossover's total summed response.

3) If you measure the total multi-way response, use FR compensation.

Ivo
I see two flaws in your reasoning here. The first is the implication that a crossover is only there to provide the crossover function, e.g. high-pass/low-pass between drivers etc.

But a better name for it would be "speaker network", of which the crossover function is only one purpose. It is also there to provide equalisation of driver- and cabinet-related response errors, both near to and far away from the crossover points.

To do this properly the individual responses of each way should have microphone compensation applied to them, not to the final summed response.

The second problem is that your suggestion would be OK if the summed target response were the only response considered when optimising the crossover (either manually or with a numeric optimiser). However, the way I work, I set a target acoustic response for each driver/way, such as L/R 4th order at 3 kHz, and then optimise the network for that driver to reach that target: both the response at the crossover point, and the equalisation of driver response errors away from the crossover point.

This can only be done if the measurement for that individual driver includes microphone compensation. So as far as I'm concerned microphone compensation must be baked into the individual driver measurements.

As you mention in your follow-up post, if the microphone compensation does not include phase compensation but is applied to all driver measurements, the relative phase between drivers will still be correct, and so will the summed amplitude response of the overall speaker. However, it does still produce an anomalous-looking absolute phase response for the individual drivers and the speaker as a whole, so it would still be preferable to see the phase corrected.

If we use drivers' responses measured with different microphones, then the preceding reasoning does not hold and system design becomes complicated.
If we use a minimum-phase calculation it is not quite correct, as the assumption of a minimum-phase system does not hold over the driver's whole frequency range.
I'm not quite sure what you're trying to say here. Yes, we should not mix and match different microphones or microphone compensation profiles within a single multi-way speaker project. To get any sort of accurate phase tracking all driver measurements should be taken from the same point in space with the same microphone and all processed with the same mic calibration profile, which I do.

However, you then talk about the assumption of minimum phase not applying over the driver's (?) whole frequency range?

Whether a driver being measured is minimum phase or not doesn't seem to have any relevance to applying minimum-phase estimation to the phase response of the measurement microphone's calibration profile?

There are two problems when we consider minimum phase:

First….

Minimum phase is a characteristic of linear systems that can be calculated from the magnitude of the frequency response, but only for systems in which there are no wave reflections (such systems can be described by a finite number of poles and zeros in the complex-frequency plane). In practice this mathematical definition means that the responses of loudspeakers and microphones do not follow minimum phase in the frequency range where the dimensions of the object are larger than a quarter of the wavelength.
I don't believe this is correct. There is another condition that must be met for a non-minimum-phase response to become possible: the amplitude of a delayed signal, such as a reflection or diffraction, must be greater than that of the original signal. If all delayed versions of the signal are lower in amplitude, the response must still be minimum phase.

I can give two examples of the relevance of this in speaker and room measurements.

In other threads on diyaudio there have often been claims that speaker baffle edge diffraction is non-minimum phase in nature; however, not a single person has ever backed up that claim with measurements. It seems to be something that is seen as "intuitively correct" by many, but it is only assumed and never tested. I've even debated this point with Earl Geddes on more than one occasion, who seems to believe that baffle diffraction "must" be non-minimum phase but doesn't offer any measurements to show that this is the case.

I've made countless measurements of many speaker baffles using a single driver (obviously multi-way systems are usually non-minimum phase, so for this test a single driver that is minimum phase must be used), and I have never seen the slightest evidence that baffle diffraction ever leads to a non-minimum-phase response.

And this makes sense, because the only situation where this could occur would be if the diffracted signal were greater in amplitude than the direct signal, which would not happen unless something were physically in front of the driver obscuring it (such as a pillow) while not obscuring the baffle edges from the listener. That is not a normal situation in any traditional speaker.

The way you calculate excess phase and excess group delay responses in ARTA seems exceptionally precise: when measuring a signal that is known to be minimum phase but has a very lumpy frequency/phase response, the excess group delay response is absolutely flat, without the slightest wiggle. That is not the case in some other software, where the actual phase response bleeds through significantly into the result.

It's possible to see perfectly the excess phase and excess group delay response of a multi-way speaker with a very non-flat frequency response with ARTA without any influence from the frequency response.

Despite confirming how accurate excess phase and excess group delay measurements are in ARTA, I can't find the slightest trace of non-minimum phase behaviour from baffle diffraction.

On the other hand, a situation where the response can become non-minimum phase is the bass response in rooms, due to boundary cancellation.

For example, at a frequency where a deep notch occurs at the listening point due to boundary cancellation, you have the direct signal and one reflection that is delayed by exactly half a cycle at nearly the same amplitude, so the direct signal is nearly notched out; then another reflection at a different phase angle can produce a non-minimum-phase response and a sudden jump in phase.

So here it becomes non-minimum phase because the first delayed reflection neatly notches out the direct signal, and the second reflection, delayed even more, becomes a delayed signal of greater amplitude than the now notched-out original.

It's also possible to have a room bass response that is completely minimum phase, if no points of deep cancellation occur, and moving the location of the bass drivers to achieve a minimum-phase bass response at the listening position should be done before attempting to apply bass EQ.

I guess my point is that systems are minimum phase a lot more often than we assume, as it takes very specific conditions for them to become non-minimum phase: in particular, a delayed signal of greater amplitude than the original signal.
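The single-echo condition above can be checked numerically: reconstruct the minimum phase from the magnitude (cepstral method) and compare it with the true phase. For an echo quieter than the direct sound the excess phase is essentially zero; for a louder echo it is not. A self-contained sketch:

```python
import numpy as np

def _min_phase(mag):
    # cepstral reconstruction of a minimum-phase response from magnitude
    N = len(mag)
    cep = np.real(np.fft.ifft(np.log(mag)))
    fold = np.zeros(N)
    fold[0], fold[N // 2] = cep[0], cep[N // 2]
    fold[1:N // 2] = 2.0 * cep[1:N // 2]
    return np.exp(np.fft.fft(fold))

def excess_phase_rms(echo_gain, delay=8, N=512):
    """RMS excess phase (rad) of a direct sound plus one delayed echo."""
    h = np.zeros(N)
    h[0], h[delay] = 1.0, echo_gain      # direct sound + single echo
    H = np.fft.fft(h)
    excess = np.unwrap(np.angle(H)) - np.unwrap(np.angle(_min_phase(np.abs(H))))
    return float(np.sqrt(np.mean(excess ** 2)))
```

With echo_gain = 0.5 the excess phase is numerically zero (minimum phase); with echo_gain = 2.0 it is large (non-minimum phase), matching the amplitude condition described above.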
 
Hi Simon,

I agree with you on one thing: my two responses on minimum-phase treatment in crossover design are confusing as to whether to use minimum-phase estimation for the microphone response and the driver response.
I also agree with you that it is more insightful to apply mic compensation to the individual drivers before crossover response optimization.

I just wanted to show that the phase of the microphone compensation is not important in crossover design, as long as we analyze the magnitude response and use a target magnitude response in crossover optimization (as far as I know, all programs for optimizing crossover element values use only the magnitude response to define the error function).

Now, a few words about minimum-phase concepts. Here I can't agree with you, as every system that has more than one path of energy flow is non-minimum phase, no matter how low that sideband energy flow is.
Examples of non-minimum-phase systems are simple passive electrical filters such as the bypassed T four-pole network or LC all-pass filters, and all systems with wave phenomena (reflections and diffraction) - these show significant deviation from minimum phase when the characteristic radiating dimensions are larger than 1/12 to 1/6 of the wavelength. Even a ¼-inch microphone can't satisfy this requirement over the whole audio band. All loudspeakers are minimum phase only in a smaller part of their passband.
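As a quick arithmetic check of the ¼-inch remark: taking c ≈ 343 m/s and the 1/6-wavelength criterion, a 6.35 mm capsule reaches that limit near 9 kHz, well inside the audio band:

```python
# frequency at which a capsule of diameter d spans 1/6 of a wavelength:
# d = lambda / 6  =>  f = c / (6 * d)
c = 343.0       # speed of sound in air, m/s (approx., room temperature)
d = 0.00635     # 1/4 inch in metres
f_limit = c / (6.0 * d)
print(round(f_limit))   # prints 9003
```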

Your reasoning is good if we look at the practical side of design: a small phase error in the crossover region will introduce a small, acceptable error.
But everybody must decide which error level is acceptable, as there is no theory to support that decision.

Ivo