Klippel Near Field Scanner on a Shoestring

NTK's code uses Python for the whole thing, and it is already done; there's no need for anyone else to reinvent the wheel unless they want to.
https://www.audiosciencereview.com/...hematics-and-everything-else.9970/post-352067
If a person had the measured data, would that code produce a usable result? I ask because I don't remember whether NTK said the code was just what was used for the simulations or whether it could also be used to process real data.
Remember that we are taking data in the near field, and as such one cannot just calculate the impulse response and let it go from there. You have to move the results into the far field, which is why the full-blown analysis of the radiation modes is required at all frequencies and not just below some "matching" point.
If a person was only interested in the horizontal polar response of a speaker, couldn't they do a near field scan and full analysis of radiation modes up to 1kHz, then move the microphone into the far field and collect "traditional" horizontal polars above 1kHz and then knit the two data sets together? I realize that's not as desirable to some as being able to do a full 3D balloon plot, but a DIY anechoic polar plot from indoor measurement data isn't something we can do at all right now. If we can do full range modal analysis, that'd be great, but we're still ahead overall with a simpler system.
 
Also, on page 14 there is an acknowledgement that, in theory, the directivity can be described by very few points, which is perhaps what you are saying.
It is a fact that the number of data points need not be any greater than the number of modes being fitted. (It can actually be less and a least-squares fit performed, but the answers will then not be unique.) For symmetrical horizontal work this is about 12-15 as I said, but for full spherical work the number goes up rapidly. I suppose that Klippel sets the two spherical mode numbers (usually called m and n) to be equal, which is not required, and then finds where the cutoffs are for these modes. If the n and m indices are set equal then the number of points required would go something like the equation that you showed. But it could be far less than that if some simplifying assumptions about which modes to use are made. In an automated system oversampling is not an issue (it just takes a little longer), but if done manually it certainly is. My system was manual and hence I did everything that I could to simplify it.
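To make the growth concrete, here is the simple counting argument in a few lines of Python (my own illustration; the exact mode sets used by Klippel or NTK may differ):

```python
# Quick arithmetic on spherical modes vs. required measurement points.

def num_modes_full_sphere(N):
    """All Y_nm terms with n = 0..N and m = -n..+n."""
    return (N + 1) ** 2

def num_modes_axisymmetric(N):
    """Only the m = 0 terms survive for a rotationally symmetric source."""
    return N + 1

for N in (3, 7, 14):
    print(f"N = {N:2d}: full sphere {num_modes_full_sphere(N):3d} points, "
          f"axisymmetric {num_modes_axisymmetric(N):2d} points")
# N = 14: 225 points for the full sphere, but only 15 with symmetry --
# consistent with the 12-15 figure quoted for symmetric horizontal work.
```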
 
If a person was only interested in the horizontal polar response of a speaker, couldn't they do a near field scan and full analysis of radiation modes up to 1kHz, then move the microphone into the far field and collect "traditional" horizontal polars above 1kHz and then knit the two data sets together?
They could, but what a waste! Why not just take all the data near field and calculate to the far field? That's what I did.
but a DIY anechoic polar plot from indoor measurement data isn't something we can do at all right now.
I have been doing it for more than a decade.
 
@mbrennwa: I'm trying MATAA on Windows in Octave. I don't hear any sound, and MATAA complains about the API being UNKNOWN:

Testing sound input/output...
The audio device that will be used for audio output is: ASIO4ALL v2
The audio host API is (output): UNKNOWN
Number of channels of output device: 2
Minimum sampling rate (sound output): 8000 Hz
Maximum sampling rate (sound output): 192000 Hz
The audio device that will be used for audio input is: ASIO4ALL v2
The audio host API is (input): UNKNOWN
Number of channels of input device: 2
Minimum sampling rate (sound input): 8000 Hz
Maximum sampling rate (sound input): 192000 Hz
The audio I/O test will be done using a sine-wave signal with:
frequency = 1000 Hz
duration = 0.100000 s
sampling rate = 44100 Hz
Starting sound I/O...

I've tried reading the manual and installed the mentioned 'redistributable', but no success so far. Do you have any advice?
 
They could, but what a waste! Why not just take all the data near field and calculate to the far field?
For this reason:
For symmetrical horizontal work this is about 12-15 as I said, but for full spherical work the number goes up rapidly.
In my opinion, there are two primary reasons to pursue a DIY near field scanner:
  1. full range, high resolution, anechoic data from echoic rooms.
  2. high resolution polar data.
Since the number one reason to do this is full anechoic data, full spherical work is necessary, and since the number of data points goes up rapidly for that, some might be interested in collecting a lower number of points. However, if the hardware is automated, and the data points can be optimized, then there isn't a reason to do a near field/far field merge.
but a DIY anechoic polar plot from indoor measurement data isn't something we can do at all right now.

I have been doing it for more than a decade.
Maybe I should have clarified that: full range, high resolution, anechoic polar plots from indoor measurement data aren't something we can do at all right now.

If I remember correctly, you have said that you can use a 6 ms IR window in the space you have for measurements. I used to have a room where I could do a 6 ms window; the best I can get in the rooms I have to work with now is 3.5 ms. Here's a comparison of those IR windows to a beamformed measurement with a 500 ms window:
[Attached image: IR Windows.jpg, comparing the 6 ms and 3.5 ms windows to the 500 ms beamformed measurement]

There's a resonance below 300 Hz that completely disappears once a short IR window is applied. Is this resonance audible? I don't know, but my data wouldn't show that it was there with a short IR window. Yes, I know that there are multiple ways to find an offending resonance, and there is every reason to use them. However, many of those methods are used because the frequency resolution of the primary dataset is too low to reveal those resonances. If you have a design process that identifies lower frequency resonances by a means other than frequency response, that's great; use whatever toolset works for you (and share tips and tricks!), but for me, I want a primary data set with high enough resolution to see things like that in addition to those other analysis tools.
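To put numbers on that, the frequency resolution of a rectangular IR window is roughly 1/T, so the arithmetic alone shows why a sub-300 Hz feature vanishes (a back-of-envelope sketch, not derived from the plot itself):

```python
# Approximate frequency resolution of a rectangular IR window: ~1/T.
for T_ms in (6.0, 3.5, 500.0):
    print(f"{T_ms:6.1f} ms window -> ~{1000.0 / T_ms:6.1f} Hz resolution")
# 6.0 ms -> ~167 Hz, 3.5 ms -> ~286 Hz, 500 ms -> ~2 Hz.
# With a 3.5 ms window, anything narrower than ~286 Hz is smeared away,
# which is why the sub-300 Hz resonance disappears from the windowed data.
```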
 
@Tom Kamphuys I am very much clueless about Windows audio. You could try running the TestTone program in a shell and see what happens. Maybe try to get hold of Shu Sang, who might be able to help.

However, I'd recommend going the PlayRec route with Linux. PlayRec allows much faster data acquisition than the TestTone stuff. Also, reading about the variable latency issues on Windows (see a few posts back), your life might be easier if you'd avoid Windows for this project.
 
But it could be far less than that if some simplifying assumptions about which modes to use are made.
I would be happy to have a system that worked like yours, particularly in dual planes. I would prefer simple over complicated robotic automation if it was good enough. I have tried for a really long time to encourage anyone who seems to me to have the mathematical ability to pull it off. Ultimately nothing came of it. Just like nothing came from the effort to use your old code before.

Are you willing or able to explain privately or publicly exactly how to replicate your process? NTK was able to replicate Klippel's approach from their documents, but the information from earlier on your approach was not enough to do the same thing. With your direct help maybe he can do it.

If a person had the measured data, would that code produce a usable result? I ask because I don't remember whether NTK said the code was just what was used for the simulations or whether it could also be used to process real data.
I believe it is just a matter of having a compatible data format and adapting the naming scheme of the measurements so the code knows which x,y,z coordinate they represent. NTK has been willing to help me with using ARTA pir files as input and using a cylindrical set of coordinates. It is my understanding that the code is flexible enough to accommodate different options. I'm almost certain that if a dataset was available he would help to make the code process it.
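As a toy illustration of what "adapting the naming scheme" could look like (the file pattern below is hypothetical, not NTK's or ARTA's actual convention):

```python
import re
from pathlib import Path

# Hypothetical naming scheme: one ARTA .pir file per scan point, with
# coordinates in millimetres encoded in the name,
# e.g. "scan_x0100_y-0250_z0500.pir".
PATTERN = re.compile(r"scan_x(-?\d+)_y(-?\d+)_z(-?\d+)\.pir$")

def coords_from_filename(path):
    """Return the (x, y, z) position in metres encoded in a file name."""
    m = PATTERN.search(Path(path).name)
    if m is None:
        raise ValueError(f"unrecognized file name: {path}")
    return tuple(int(g) / 1000.0 for g in m.groups())

print(coords_from_filename("scan_x0100_y-0250_z0500.pir"))  # (0.1, -0.25, 0.5)
```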
 
Sure. But the phase is a property of the Fourier coefficients of a specific signal. It's not a property of the microphone. I therefore don't see how microphones can be "phase matched" or "phase tracking".


Ok, that makes more sense. The latency of the sound processing is indeed important (but that's something else than phase). I have never worried about Windows, but I am sure there are ways to get a grip at the audio latency. With Linux and macOS I never had any issues with reproducible timing of the recorded data. Repeated measurements are usually consistent to within the time resolution determined by the sampling rate.

Also, so-called "USB microphones" need not apply here.


I didn't think of the transfer function of the microphones. I simply tend to think that the data recorded from the microphone is already compensated for the specific transfer function(s) of the microphone(s). MATAA does that on the fly, so I usually don't spend a lot of time thinking about this. The MATAA method is to determine the phase-frequency response from the amplitude-frequency response, then compensate for both in the frequency / Fourier domain.
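For anyone curious, here is a minimal numpy sketch of one standard way to derive the minimum phase from a magnitude response (the real-cepstrum method); I don't know whether MATAA implements it exactly this way:

```python
import numpy as np

def minimum_phase_response(mag):
    """Minimum-phase transfer function from a magnitude response.

    mag: |H| sampled on a full, even-length FFT grid (bins 0..N-1).
    Returns the complex minimum-phase H on the same grid.
    """
    N = len(mag)
    # Real cepstrum of the log magnitude.
    cep = np.fft.ifft(np.log(np.maximum(mag, 1e-12))).real
    # Fold the cepstrum onto the causal side (this is the discrete
    # Hilbert-transform relationship between log magnitude and phase).
    fold = np.zeros(N)
    fold[0] = 1.0
    fold[N // 2] = 1.0
    fold[1:N // 2] = 2.0
    return np.exp(np.fft.fft(fold * cep))
```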
Time is phase and phase is time.

Give it a name, but what counts is how they sum up eventually.
Which by definition will change by just delaying a signal.

I have done a lot of data acquisition in the past and worked with companies who are big on this.
All of them think the same about phase and delay for this reason.
Having a proper reference is absolute key.

Most mic calibration files are only freq resp compensated, not phase compensated.
In some cases, audio interfaces or system solutions have an ADC and a DAC that aren't on the same clock, which can also give artifacts.

Whether you like it or not, the vast majority of people use Windows. So that's gonna be the reference here as well.

Speaking of, something like this probably wouldn't be that hard to do in LabVIEW (or that free open-source alternative, I forget the name).
In that case you need to do a lot less mathematical solving and can just let basic control theory blocks do the bulk of the work.

Been ages since I used that.
 
Time is phase and phase is time.
I know that's what is baked into the brains of many. But that's like saying that time is frequency and frequency is time, which is certainly not correct. Just as time and frequency are not the same thing, time and phase are very different things in mathematical and physical terms. Otherwise there would not be two different terms.

Give it a name, but what counts is how they sum up eventually.
Which by definition will change by just delaying a signal.
That's true for a given sine/cosine Fourier component. But not for an arbitrary signal in the time domain, like some test signal recorded by a microphone.
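For what it's worth, the relationship both of us are circling is the Fourier shift theorem: delaying x(t) by tau gives

x(t - tau) <-> X(f) e^(-i 2 pi f tau)

so a delay adds a phase of -2 pi f tau that grows linearly with frequency. A single sine component sees one fixed phase shift; a broadband signal sees a different shift at every frequency, which is why "delay" and "phase" are related but not interchangeable.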

Most mic calibration files are only freq resp compensated, not phase compensated.
Most? I have never seen a "microphone calibration" that specifies phase vs. frequency, simply because it's not necessary. Microphones are considered minimum-phase systems, so the phase-vs-frequency function is simply determined from the amplitude-vs-frequency function.

Whether you like it or not, the vast majority of people use Windows. So that's gonna be the reference here as well.
Whatever. I am not going to lead this project, and everyone is free to use whatever he/she wants, even if it's the wrong tool for the job.

Speaking of, something like this probably wouldn't be that hard to do in LabVIEW (or that free open-source alternative, I forget the name).
In that case you need to do a lot less mathematical solving and can just let basic control theory blocks do the bulk of the work.
The maths behind the Weinreich / Klippel stuff is always the same, no matter which software tool is used.

In the end it boils down to what the product of the project should look like. Do you guys want to build a full "Klippel clone machine" with hardware and software that works as an integrated system and is meant for copying by the DIY masses? Or do you simply want to build a one-off thing for use by one person and to demonstrate the DIY feasibility? Or something else?

A clone-for-the-masses thing would surely benefit from a little computer control like an RPi that controls the microphone arms, does the audio I/O, and processes the data. This could run any software that fits the job, no matter what people already have on their home/office computers.
 
That's true for a given sine/cosine Fourier component. But not for an arbitrary signal in the time domain, like some test signal recorded by a microphone.
Don't know exactly what you're trying to say, but phase or time delay is extremely important for summing ANY signals, whatever the signal is or however complex it is. That is totally irrelevant.

Btw, most freq resp measurements are done with pure sine waves to improve SNR.
All other signal types will give worse SNR.
There are basically very few reasons not to use a swept sine wave.
 
Don't know exactly what you're trying to say
I don't know exactly what you don't understand. I give up.

Btw, most freq resp measurements are done with pure sine waves to improve SNR.
All other signal types will give worse SNR.
There are basically very few reasons not to use a swept sine wave.
A "sweeped sine", i,e, a chirp, is not a sine wave. Consider this (where t is time):
  • What is the Fourier transform of y1(t) = sin(2 pi f0 t), where f0 is the frequency of the sine wave?
  • What is the Fourier transform of y2(t) = sin(2 pi f0 (k^(t/T)-1)/ln(k)), where f0, T and k are the parameters of the chirp?
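You don't even need to work the integrals; a few lines of numpy show how differently the two behave (sweep parameters below are arbitrary, and I use the standard exponential-chirp form with the duration T in the phase):

```python
import numpy as np

fs, T, f0, f1 = 48000, 1.0, 1000.0, 4000.0
t = np.arange(int(fs * T)) / fs
k = f1 / f0  # frequency ratio of the log sweep

sine = np.sin(2 * np.pi * f0 * t)
chirp = np.sin(2 * np.pi * f0 * T * (k ** (t / T) - 1) / np.log(k))

for name, y in (("sine", sine), ("chirp", chirp)):
    mag = np.abs(np.fft.rfft(y))
    mag /= mag.max()
    print(f"{name}: {np.count_nonzero(mag > 0.1)} bins above -20 dB of peak")
# The sine concentrates in essentially one bin; the chirp spreads its
# energy across the whole f0..f1 band -- very different objects in the
# Fourier domain.
```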
 
I'm almost certain that if a dataset was available he would help to make the code process it.

I'm trying to break things up into small pieces. I can move the stepper motor with the Tic controller software, ticcmd and arta_tic. I can simultaneously play and record sound in Python (and I'm working on doing this in Octave). Python can store .mat files that Octave can read. MATAA can do stuff. Although NTK's code doesn't currently run on my computer, he's willing to help.
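As a sanity check that the pieces can talk to each other, here is a rough single-axis scan loop (an untested sketch; the positions, sweep parameters, settle time, and file names are placeholders):

```python
import subprocess
import time

import numpy as np
import sounddevice as sd
from scipy.io import savemat

fs = 48000
f0, f1, T = 100.0, 10000.0, 0.5          # log sweep, arbitrary parameters
k = f1 / f0
t = np.arange(int(T * fs)) / fs
sweep = 0.5 * np.sin(2 * np.pi * f0 * T * (k ** (t / T) - 1) / np.log(k))

for i, pos in enumerate(range(0, 2000, 200)):      # placeholder scan grid
    # Move the Tic-driven stepper to an absolute position (in microsteps).
    subprocess.run(["ticcmd", "--exit-safe-start", "--position", str(pos)],
                   check=True)
    time.sleep(2.0)  # crude wait; polling "ticcmd --status" would be better

    # Play the sweep and record the microphone at the same time.
    rec = sd.playrec(sweep, samplerate=fs, channels=1)
    sd.wait()

    # Store as a .mat file that Octave/MATAA can load directly.
    savemat(f"point_{i:03d}.mat", {"fs": fs, "sweep": sweep, "rec": rec})
```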

The biggest gap is the hardware. I have some ideas. I've ordered some stuff. But I guess that will take some time and experimentation.

Is it possible to cheat on the dataset? What if we measure multiple times* at one point in space and then again at another point a bit further away? Would that work? The reconstructed radiation pattern would of course not match reality, but we might use it to exercise NTK's program (see the sketch after the footnote). I guess it would reconstruct a spherical radiation pattern.


* We might even get away with multiple copies of 2 measurements, but we might need independent noise in all copies. I don't know.
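Rather than duplicating real measurements, one could also synthesize the dataset outright. A sketch (my own suggestion; the grid geometry is arbitrary) using an ideal monopole, whose known omnidirectional pattern makes the reconstruction easy to judge:

```python
import numpy as np

c = 343.0                      # speed of sound, m/s
f = 1000.0                     # test frequency, Hz
kwave = 2 * np.pi * f / c      # wavenumber

def monopole(x, y, z):
    """Complex pressure of an ideal point source at the origin."""
    r = np.sqrt(x**2 + y**2 + z**2)
    return np.exp(1j * kwave * r) / r

rng = np.random.default_rng(0)
points, pressures = [], []
for z in np.linspace(-0.5, 0.5, 11):               # placeholder cylinder grid
    for phi in np.linspace(0, 2 * np.pi, 24, endpoint=False):
        x, y = 0.7 * np.cos(phi), 0.7 * np.sin(phi)
        points.append((x, y, z))
        noise = 1e-3 * (rng.standard_normal() + 1j * rng.standard_normal())
        pressures.append(monopole(x, y, z) + noise)  # independent noise per point
```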
 
I would be happy to have a system that worked like yours, particularly in dual planes. I would prefer simple over complicated robotic automation if it was good enough.
I can understand your position because it has always been mine as well. But what seems to have happened here is typical "feature bloat", where people want everything all at once - fully automated 3D high resolution in a DIY package - good luck with that.

First I would do the software in one plane, exactly like I did, with manual data taking (trivial hardware, some serious software tasks, but maybe NTK's software is good enough as it stands, I don't know). Get that to work and then add pseudo 3D, i.e. a simple addition with a few more points that allows for some vertical polar resolution. I worked on this enough to know that it's doable. With a one-plane system one can always turn the speaker on its side and get a vertical polar.

Once that is done then perhaps add in a two sphere data set that eliminates room reflections. I have never actually done this, but it's all in the Weinreich paper. This is a huge step that would take a lot of math and software to complete and IMO does not add much value since it only adds info for LFs that can be obtained in other ways.
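For concreteness, here is my reading of the core step in the Weinreich field-separation idea (a sketch under my own sign conventions, not code from Dr. Geddes' system): each spherical-harmonic mode measured on two radii gives a 2x2 linear system whose solution splits the field into outgoing (speaker) and incoming (room) parts.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def h1(n, x):
    """Spherical Hankel function of the first kind (outgoing for exp(-iwt))."""
    return spherical_jn(n, x) + 1j * spherical_yn(n, x)

def h2(n, x):
    """Spherical Hankel function of the second kind (incoming)."""
    return spherical_jn(n, x) - 1j * spherical_yn(n, x)

def separate_mode(n, k, r1, r2, p1, p2):
    """Split one spherical-harmonic mode into outgoing and incoming parts.

    p1, p2: the mode's complex pressure coefficients measured on spheres
    of radius r1 and r2; k: wavenumber.
    """
    A = np.array([[h1(n, k * r1), h2(n, k * r1)],
                  [h1(n, k * r2), h2(n, k * r2)]])
    a_out, a_in = np.linalg.solve(A, np.array([p1, p2]))
    return a_out, a_in   # keep a_out; a_in is the room's reflected field
```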

Then automation makes sense, but not until one has a strong grasp of the math and software calculations. As to the hardware implementations I can be of no help.

Many years(?) ago I posted here a full description of the mathematics that I used in my system. I don't know how much more I can add to the discussion than this. I am fully willing to describe my system in any detail that one wants, preferably in public. I have not been secretive about any of it, but I cannot do the coding myself and we have already learned that the code that I have is not readily viable to integrate into a new UI (as I expected.)
 
I simply don't get what everyone means by "phase" in this context. The term "phase" relates to the angular argument of a sine (or cosine) function.

However, as I see it, the microphone(s) will record the test signals from the speaker and its room echoes. The signals can be Dirac pulses, white or pink noise, MLS, sweeps, etc. They will not be sinusoidal.
Phase is where a signal from a signal source (a loudspeaker in this case) is, in time and angle, relative to the point of measurement. The math that you quote is a way to quantify this.
 
First toys have arrived!

[Attachment 1273936: photo of the newly arrived parts]

That's already more commitment than with arta_tic, which I developed without the tic board and stepper motor.
Nice! What are you going to use to communicate between the stepper drives and the computer?

I'll go manual until a viable CNC system makes its way into the threads. Right now I'm busy helping my oldest son make a kitchen cabinet set. See what happens when you were a cabinetmaker? :)
 
But it could be far less than that if some simplifying assumptions about which modes to use are made.
As I would expect, there is strong agreement between your statement and Klippel's process:
[Attached image: 1708195710019.png, excerpt of the Klippel material posted earlier]

Based on the information Fluid posted, device symmetry is one of the big keys. So there would be benefit in allowing the user to inform the system how complex the DUT is.