Matt's Gedlee Summa Abbey Kit Build

Kirchner describes it as a measure of the non-linear transfer function of a unit. It measures the "sound quality" and "maximum undistorted power". Looking more carefully at the description in the manual, it's referred to as an IMD measurement as described by Klippel, and as an RB, or Rub and Buzz, measurement. My understanding from Klippel is that it was originally intended as a manufacturing QC device to ensure there were no mechanical problems in the assembly of a driver. However, he told me that it is the same 3D measurement system developed by Klippel in the 3D distortion analyzing device. As for how it works:

It contains three primary signals: a low RB, a mid RB, and a high RB. Each consists of two tones, which are intermodulated by the non-linear transfer function of the system. The ratio of the two tones is 1:1.3.
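A minimal sketch of what such a two-tone stimulus does (the actual tone frequencies are my own assumption; the manual only gives the 1:1.3 ratio): passing two tones through a non-linear transfer function produces intermodulation products at sum and difference frequencies where nothing was excited, which a QC system can then measure.

```python
import numpy as np

fs = 48000                  # sample rate (Hz)
t = np.arange(fs) / fs      # 1 s of signal, so FFT bins fall on 1 Hz spacing
f1 = 1000.0                 # lower tone (assumed frequency, not from the manual)
f2 = 1.3 * f1               # upper tone, keeping the 1:1.3 ratio

x = 0.5 * (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t))

# Stand-in for a driver with a non-linear transfer function
# (2nd- and 3rd-order terms, e.g. from an offset coil or a rub).
y = x + 0.2 * x**2 + 0.1 * x**3

# With a 1 s window the FFT bin index equals the frequency in Hz.
mag = np.abs(np.fft.rfft(y)) / len(y)

def level(f):
    return mag[int(round(f))]

# Intermodulation products appear at f2 - f1 (2nd order)
# and 2*f1 - f2 (3rd order), neither of which was in the stimulus.
print(level(f2 - f1), level(2 * f1 - f2))
```

The nonlinearity and its coefficients are arbitrary illustrations, but the difference-frequency products they produce are exactly the kind of components an IMD/RB detector looks for.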

The more I read the work, the more I think I'm answering my own question. The Q-index, which he calls a total of all the noise and distortion in the speaker, seems like a Kirchner thing, having nothing to do with Klippel's work. It looks like it was developed for QC use of the system, and Kirchner is just trying to point out its further potential uses.

Klippel's tweeter paper found that standard models of large-signal performance apply to tweeters, with some caveats. He found large amounts of 2nd-order distortion are possible due to certain assembly issues, mechanical problems, etc.

He describes the particular test signal in the paper like this:
"Increasing the number of excitation tones while keeping a sparse spectrum (some frequencies are not excited) leads to the multi-tone distortion measurement [15]. This stimulus excites all kinds of harmonic and intermodulation components (like in a real audio signal) which are summarized to an integral distortion measure. This technique provides meaningful “fingerprints” of the large signal behavior in a short time (Klippel, date unknown)."

Reference [15] is the Czerwinski article on "Multi-tone testing of loudspeaker components."

It kind of seems like what I have is actually a two-tone signal, as in the old IEC standard two-tone test. I'm not sure; it is addressed in that paper as a way of discovering some of the non-linearities.

It also very clearly states that the room falls into the category of a linear system, whereas this is a test for the non-linear system. Again, it seems like I answered my own question: assuming I'm interpreting that right, this would not indicate much about a room's setup.
 
As with most "measures" it seems like "audibility" has been left out. There are numerous well-defined techniques for measuring nonlinearity, but this is not audibility. It's a big jump from one to the other.

I see no reason why one could not use a measure like coherence in a room at LFs. I really think that coherence is overlooked, but admittedly there are things that cause a low coherence in a real room that are not audibility or linearity issues.
 
Coherence is a function in signal processing which compares the input to the output and plots the percentage of the output that's linearly related to the input. Thus nonlinearity appears as a lower coherence number, perfect being 1.0. But, unfortunately, delayed reflections also lower the coherence, so they confound the measurement in a real room.
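A minimal sketch of that definition (my own illustration, with an arbitrary filter and distortion term): estimate the magnitude-squared coherence between a broadband input and two outputs, one through a purely linear filter and one with a non-linear term added. The linear path stays near 1.0; the non-linearity pulls it down.

```python
import numpy as np
from scipy.signal import coherence, lfilter

rng = np.random.default_rng(0)
x = rng.standard_normal(1 << 16)              # broadband test signal

y_lin = lfilter([1.0, 0.5, 0.25], [1.0], x)   # purely linear system
y_nl = x + 0.3 * x**2                         # same signal plus distortion

# Magnitude-squared coherence, Welch-averaged over many segments.
_, C_lin = coherence(x, y_lin, nperseg=256)
_, C_nl = coherence(x, y_nl, nperseg=256)

# Averages over the band (excluding DC and Nyquist bins):
print(C_lin[1:-1].mean(), C_nl[1:-1].mean())
```

For Gaussian noise the squared term behaves like added uncorrelated noise, so the second coherence settles well below 1.0 while the linear case stays essentially at 1.0.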
 
Ok, I had a feeling that's what you meant. I assume that was referring then to the choice of a Q-index. What is a Q-index? Is there a standard for the number? In the case of this measurement, it's a number in the 1000s, with 1100 being the claimed threshold of audibility.

We call coherence concordance in my field. It can be an especially difficult task to calculate when you need to take time-sequence into account. I'm writing syntax, as we speak, to calculate the concordance of two data sets based only on the concordance of concern, i.e. what we are actually analyzing. I think this would be analogous to calculating a percentage or ratio of coherence which varies only when the difference matters, such as being audible or not. It would be nice if such a tool could be developed in audio. It would mean you don't need to develop a threshold number to argue over, as is the case with distortion or ratios; instead you would say that if it varies from one, there is an audible difference.
 
This was the goal of the AES Working Group, but we basically gave up. Such a thing was done for perceptual coders and works quite well, but when we tried to apply it to loudspeakers it failed. Clearly it is possible, but complicated, and no one has invested the time to do it.

That said, JBL has done something like that, except that JBL believes that only linear distortions matter, so anything that takes into account nonlinear aspects of perception has been ignored. Right or wrong, they claim that their measurement technique has about a 95% confidence in its rating correlating with the subjective impression. Whatever the correlation, I tend to agree that what they have done accounts for the vast majority of our perception. It seems that we should first adapt what they have done and then look to improve it with nonlinear stuff. But since acceptance of what they have done is minimal, what likelihood could we expect for the acceptance of a nonlinear metric?
 
gedlee said:
Coherence is a function in signal processing which compares the input to the output and plots the percentage of the output that's linearly related to the input. Thus nonlinearity appears as a lower coherence number, perfect being 1.0. But, unfortunately, delayed reflections also lower the coherence, so they confound the measurement in a real room.

I am confused that a delayed reflection would decrease the value of the coherence.

My memory, and perhaps that is the culprit, is that coherence is unaffected by changes in amplitude or phase (linear operations). I conceive of a reflection as a linear phase shift across a range of frequencies (a shift in time rather than a simple shift in phase). Why does this reduce coherence?
 
WithTarragon said:


I am confused that a delayed reflection would decrease the value of the coherence.

My memory, and perhaps that is the culprit, is that coherence is unaffected by changes in amplitude or phase (linear operations). I conceive of a reflection as a linear phase shift across a range of frequencies (a shift in time rather than a simple shift in phase). Why does this reduce coherence?

A very perceptive question because, honestly, I don't know. And furthermore, I have asked this same question myself! It turns out that the first reflections are more coherent, and as time goes on the coherence goes down. It's all linear, and fundamentally what you say is also what I thought. But it doesn't happen that way.

It also turns out that a purely linear filter, known as a decorrelation filter, will make the input and output have zero coherence. Odd but true. Someday I'll work through the math (yeah, right!!) and see what is happening.

Great question though.
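For what it's worth, a sketch of one known contributor (my illustration, not a full answer to the question above): practical coherence estimates are averaged over finite analysis windows, and a delay that is a large fraction of the window length pushes the delayed energy outside the window it is compared against. The estimate is then biased downward even though a pure delay is a perfectly linear operation.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
x = rng.standard_normal(1 << 16)

def delayed(sig, d):
    # A pure delay of d samples: a perfectly linear operation.
    return np.concatenate([np.zeros(d), sig[:-d]])

# Welch coherence with 256-sample analysis windows.
_, C_short = coherence(x, delayed(x, 8), nperseg=256)    # delay << window
_, C_long = coherence(x, delayed(x, 200), nperseg=256)   # delay ~ window

print(C_short[1:-1].mean(), C_long[1:-1].mean())
```

The short delay leaves the estimate high; the long delay collapses it, even though both systems are linear. A room's late reflections behave like the second case, which may be part of why the reverberant field measures as incoherent.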
 
"I am confused that a delayed reflection would decrease the value of the coherence. "

Does the math say that the reflection alone is coherent, which makes sense, or that the reflection + original is coherent?

It seems intuitively apparent that the latter ought not to be coherent (more so when considering that the spectral balance is changed); witness the sonic mess you get in the farfield in a highly reflective room.
 
noah katz said:
or that the reflection + original is coherent?

It seems intuitively apparent that the latter ought not to be coherent


I guess it's not intuitive to me. Coherence is said to be that portion of the output that is linearly related to the input - where is the system nonlinear? Time delay is linear. It's just not obvious to me. I know that it is true that the reverberant field is incoherent, but why?
 
noah katz said:
"I am confused that a delayed reflection would decrease the value of the coherence. "

Does the math say that the reflection alone is coherent, which makes sense, or that the reflection + original is coherent?

Think of the simple case: a tone.

A tone plus its reflection is still a tone with a "new" amplitude and a "new" phase. No additional frequencies have been added. The operations are quite linear.
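That claim is easy to check numerically (the frequency, gain, and delay here are arbitrary): a tone plus a scaled, delayed copy of itself is exactly one tone with a new amplitude and phase, given by the phasor sum 1 + a*exp(-j*2*pi*f*d).

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
f = 500.0                 # tone frequency (arbitrary)
a, d = 0.6, 0.0013        # reflection gain and delay (arbitrary)

# Tone plus its delayed, attenuated reflection.
y = np.sin(2 * np.pi * f * t) + a * np.sin(2 * np.pi * f * (t - d))

# Phasor sum: the combined signal is one tone with new amplitude and phase.
z = 1 + a * np.exp(-2j * np.pi * f * d)
y_tone = np.abs(z) * np.sin(2 * np.pi * f * t + np.angle(z))

print(np.max(np.abs(y - y_tone)))   # agreement to machine precision
```

No new frequencies appear; only the amplitude and phase of the original tone change, exactly as stated above.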
 
In this context, isn't coherency the degree to which the waves of the output match the waves of the input?
With light waves (which can be polarized), two waves are coherent if the crests of one wave are aligned with the crests of the other and the troughs of one wave are aligned with the troughs of the other. Otherwise, these light waves are considered incoherent.
Light produced by lasers is coherent light. Light from light bulbs or the sun, however, is incoherent light.
Shouldn't sound (longitudinal waves) require phase matching to be considered coherent?
 
Sorry, but the analogy isn't accurate. Because light is a vector wave (sound is a scalar wave), in addition to its phase in the time sense, it can also have a phase in the spatial sense. Coherent light has a constant polarization, which means that its time phase and its spatial phase are all matched and constant. Light from a hot source (a bulb or the sun) is random in both time phase and spatial phase; it is circularly polarized with random polarization. It's a complex topic, but suffice it to say that coherent light has no analog in a sound wave.
 
This gets complicated since I don't have a blackboard to use ....

That said, the problem is that coherence has a technical meaning and also a semantic or "general" meaning.

We are, or should be, using the technical definition. This is what the equation provides (which I cannot write out here without a good deal of additional explanation). It is also what is used or approximated in hardware such as a two-channel spectrum analyzer.

So let me put forward a simple definition (in words) that captures the mathematical definition.

Coherence is a measure of the amount of output power that can be attributed to the input, the remainder of the output then being attributable to noise and distortion (non-linearities).

That is the best that I can do without a blackboard and without bringing up a number of technical concepts.
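That word definition has a simple quantitative consequence worth sketching (my illustration, with an assumed signal-to-noise ratio): if the part of the output not attributable to the input is uncorrelated noise, the coherence settles at SNR/(1+SNR), and 1 minus the coherence is the fraction of output power due to noise and distortion.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
n = 1 << 17
x = rng.standard_normal(n)                # input, power 1.0
noise = 0.5 * rng.standard_normal(n)      # uncorrelated "noise and distortion", power 0.25
y = x + noise                             # output

# SNR = 1.0 / 0.25 = 4, so predicted coherence = SNR / (1 + SNR) = 0.8,
# i.e. 80% of the output power is attributable to the input.
_, Cxy = coherence(x, y, nperseg=256)
print(Cxy[1:-1].mean())   # close to 0.8
```

This is why the number needs no separate calibration: the coherence itself is the fraction of output power the input explains.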
 
I was not trying to apply "light to sound" as a functional analogy.
But in the context just mentioned:
the problem is that coherence has a technical meaning and also a semantic or "general" meaning.
What is (are) the meaning and determination of "coherency" (at least in this context)?
A delay I can understand as coherent, but wouldn't any phase shift constitute "non-linear" behavior or a lack of coherence?
 
pjpoes said:


I'm also finding that the waveguide isn't taking paint like the MDF does. In fact, given its more perfect surface, I expected it to be smoother, but it's far rougher. It might be the conical shape causing paint to lay up less evenly. It is sticking, though; it just needs time to set. At least 24 hours before touching it.



Curious about this subject. I haven't finished assembly yet, but I am getting my supplies ready for finishing.

I bought the BIN primer in spray cans, and will probably be using spray paint (black lacquer) on them. I have never used a spray gun, so maybe it's not as tricky as I feel the spray cans are to use. I have only used spray paint once before, and never a spray gun. As I understand it, though, to work correctly you need to keep the cans almost vertical, correct? So I assume I should be spraying with the waveguide lying down, face up, so that the angle of the spray hits the conical shape and gravity helps drop the coating onto the surface, rather than standing the enclosure up and using the force of the spray to hit the inside of the waveguide? (Sorry, that was awkwardly worded; I'm just not sure how to reword it for better clarity.)

-Tony