Question for someone who is intimately familiar with how auto calibration software works

I have always wondered why all the auto calibration software I have played with (Audyssey, MCACC, YPAO) seems to run only a single pass per channel. Yes, I know some of them will let you adjust the position of the mic and run multiple passes, but that is not what I am referring to.

As a software developer myself, I would expect the calibration software to run multiple passes per channel: the first with no calibration, to analyze the raw signal of all the components in the chain. It would then apply the correction it decided was best and run the analysis a second time to see how well it did. If needed, it would apply additional correction and analyze again, repeating until it couldn't get any better or "n" times, whichever came first. I am familiar with the law of diminishing returns, but I have also seen people pay thousands of dollars for power cords.
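Something like this minimal Python sketch of the loop, against a toy simulated room (every name, the response shape, and the thresholds here are invented for illustration; no real calibration product is being described):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the speaker + room: a fixed magnitude response
# over a grid of frequency bins.
n_bins = 64
room = 1.0 + 0.5 * np.sin(np.linspace(0, 3 * np.pi, n_bins))
target = np.ones(n_bins)  # flat target curve

def measure(correction):
    """Simulated sweep: the room response with the current
    correction applied, plus a little measurement noise."""
    return room * correction + rng.normal(0.0, 0.01, n_bins)

correction = np.ones(n_bins)  # pass 0: no calibration at all
max_passes = 5
for i in range(max_passes):
    observed = measure(correction)
    error = np.max(np.abs(observed - target))
    print(f"pass {i}: max deviation {error:.4f}")
    if error < 0.03:  # "couldn't get any better", within the noise
        break
    # Refine the previous pass's correction using this measurement.
    correction *= target / observed
```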

I just can't imagine that every combination of components would respond in a similar manner to the corrections applied. If that assertion is true, having the software run successive passes, using the previous results as the base for the current pass, seems as though it would provide better results.
 
It would then apply the correction it decided was best and run the analysis a second time to see how well it did.
It doesn't need to do this, because the system is considered to be free from significant nonlinear distortion. Linear systems theory then applies, and it tells us that once you have the raw transfer function and apply changes to it (magnitude and phase), the real response does not differ from an emulated one obtained by convolution.
That means that no matter how the internal algorithms work (incremental, even genetic), there is no need to re-measure the result, because it is identical to a convolution.
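In code, that claim is just the associativity of convolution. A toy numerical check (random FIR sequences standing in for the measured raw response and the derived correction; all names invented):

```python
import numpy as np

rng = np.random.default_rng(1)

h = rng.normal(size=200)   # toy stand-in for the measured raw impulse response
c = rng.normal(size=50)    # toy stand-in for a correction filter derived from h
x = rng.normal(size=1000)  # any test stimulus

# "Re-measuring" the corrected chain: stimulus -> correction -> system.
remeasured = np.convolve(np.convolve(x, c), h)

# Emulating instead: fold the correction into the raw measurement once,
# then predict the output without touching the hardware again.
emulated = np.convolve(x, np.convolve(c, h))

print(np.allclose(remeasured, emulated))  # True for any LTI system
```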
 
Being an empirical kind of person rather than a theoretical one, I would need to see the results from a number of post-correction tests to really believe this.

My thought process is that there are many different variables, from the individual drivers, the speaker enclosure, and the crossover to the length and gauge of the wire, the amplifier, and the room acoustics, all of which contribute to the sound you hear. It would seem to me that each one could react in a less than predictable manner, necessitating multiple passes.

I am sure someone has performed the empirical tests. The questions in my mind are:

1) Are there no differences?
2) Were the differences so small as not to warrant the expense and complexity of resolving them?
3) Did running a second pass and attempting to re-correct end up making things worse?

I suppose if one were so inclined and had multiple receivers/sound processors with auto-calibrating DSPs, one could daisy-chain them: run the first one's auto calibration with the second one disabled, then run the second one's auto calibration and see if it made any additional changes. If the first one were perfect, the second one would make no changes. Then do an A/B listening test to see whether the second DSP made it sound better or worse.
 
AX tech editor
The way I read KSTR's reply above is like this.

You have a basket with apples, and you want there to be 5 apples in the basket.
You run an autocal sw that counts the number of apples, and adds or takes out the required number to get to 5.

Example. There are 3 apples in the basket and the target number is 5. The autocal runs, counts three, subtracts it from 5, and adds 2 apples into the basket.

What would happen if you ran the autocal again?

Jan
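As a minimal sketch of the analogy (toy code, names invented), a deterministic one-shot correction leaves a second run with nothing to do:

```python
def autocal(apples, target=5):
    """One pass: count the apples and add or remove exactly enough."""
    error = target - len(apples)
    if error > 0:
        apples.extend(["apple"] * error)
    elif error < 0:
        del apples[target:]
    return error

basket = ["apple"] * 3
print(autocal(basket))  # first run: +2, the basket now holds 5
print(autocal(basket))  # second run: 0, nothing left to correct
```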
 

Not to be argumentative, but,

What happens if sometimes the basket has oranges mixed in with the apples? What happens if the bin the device is pulling from has tomatoes as well as apples? If you never take a second look, all you know is that there are 5 spherical objects in the basket.

I think the metaphor is too simplistic for this use case.
 
AX tech editor

Ahhh! But the condition stated was that the system is linear, time-invariant, which it is. So, no holes in the basket. No ghosts mucking up the plan.

I know it is simplistic but on the system level it's kosher. It might be useful to carefully read KSTR's post one more time.

Jan
 
What makes me question the validity of the argument is the sheer number of variables.

What I am to believe is that the algorithm will optimize a pair of B&W 801d speakers attached to a pair of McIntosh MC2KW monoblocks using 2-gauge whiz-bang cables, as well as a pair of white-van speakers attached to a Pyle Pro amp with 18-gauge cables? :yikes:

I realize they would not sound the same, but am I to believe that in neither case above would a second pass of the algorithm (or a second algorithm) yield any better results? I find this difficult to believe.

Where I see the flaw in KSTR's logic is that I do not believe all systems will behave in a predictable manner. I do not believe that just because there is a dip of 6 dB at 300 Hz in two wildly different systems, the software can accurately predict the required adjustment to correct both cases as far as they can be corrected. One could be caused by a null in the room at the listening position, the other by a driver weakness or a crossover anomaly. I would think each might require a different correction.

Not to mention the correction itself may have unintended consequences which may need to be addressed.
 
AX tech editor
I do not believe that just because there is a dip of 6 dB at 300 Hz in two wildly different systems, the software can accurately predict the required adjustment to correct both cases as far as they can be corrected. One could be caused by a null in the room at the listening position, the other by a driver weakness or a crossover anomaly. I would think each might require a different correction.

I don't see that as an issue. You look end-to-end, see a dip of 2 dB (and you don't know, nor do you need to know, that it is the result of -6 dB in one part and +4 dB in another part), and you correct that 2 dB. Done.
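A toy numerical illustration of that point (shapes and dB figures invented, loosely following the numbers above): two different decompositions of the same net response call for exactly the same correction.

```python
import numpy as np

f = np.linspace(0.0, 1.0, 256)            # normalized frequency axis
bump = np.exp(-((f - 0.3) ** 2) / 0.001)  # shape of the dip, invented

def from_db(g_db):
    return 10.0 ** (g_db / 20.0)

# System A: a -6 dB driver notch plus +4 dB of room gain at the same
# frequency. System B: a single -2 dB dip. The net response is identical.
h_a = from_db(-6.0 * bump) * from_db(+4.0 * bump)
h_b = from_db(-2.0 * bump)
print(np.allclose(h_a, h_b))               # True: same end-to-end curve

# The correction is computed from the measured net response alone;
# it flattens both systems without knowing how the dip decomposes.
correction = 1.0 / h_a
print(np.allclose(h_a * correction, 1.0))  # True
print(np.allclose(h_b * correction, 1.0))  # True
```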

But I should bow out and yield to KSTR, I'm sure he knows this much better than I.

Jan

PS Your title question is perhaps not the right one to ask. Perhaps the right question is: can someone explain the intricacies of transfer-curve correction and convolution?
 