Klippel Near Field Scanner on a Shoestring

I thought I'd share an update to this project:

With thanks to a suggestion by user NTK over at Audio Science Review, I tried an experiment tonight (the technique is detailed here and here, and is conceptually very similar to this). The method is to sum multiple impulse responses taken along an axis radiating from the speaker under test; the idea is that the randomly differing reflections will be smoothed out while the desired response from the loudspeaker remains. However, previous examples of this technique have been performed in rooms, so a minimum of six reflective surfaces plus the modal behavior of the room had to be overcome. This experiment was to see how the technique would fare outside, with only a reflection from the ground to contend with.

It was a quick and dirty experiment in my back yard, so road construction, neighborhood, and bird noises are in there. Plus, I didn't want to run the measurement sweeps too loud since it was after 8:00 PM... and I wasn't super careful with mic placement. So this ain't laboratory grade. Nevertheless, I think the results are still significant. The process was to sum 40 individually taken IR's in Audacity and export that sum as a .WAV file for import into REW. The measurements themselves were taken on roughly the tweeter axis of a Behringer B2030P, starting at 37" away from the speaker and ending at 71" away. The speaker and microphone were about 7' above the ground. Below are the results with no smoothing or IR windowing. The black line is the processed data; the gray line is one of the individual measurements used in the process, for comparison; the red line is a Behringer B2030P measured by Amirm at Audio Science Review on his Klippel NFS; the green line is another of my 2030P's measured several years ago in my living room, just to emphasize the difference. Again, no IR window or smoothing has been applied to any of this data.
View attachment 1060738
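For anyone who'd rather script the summing step than do it by hand in Audacity, here's roughly what that processing looks like in Python. This is a sketch, not exactly what I did; the file names and the 40-IR count are placeholders for whatever you export from REW.

```python
# Rough scripted equivalent of the Audacity sum-and-export workflow described above.
# File names are placeholders; REW can export each measurement's IR as a WAV,
# and all exports are assumed to share one sample rate.
import numpy as np
from scipy.io import wavfile

files = [f"ir_{i:02d}.wav" for i in range(40)]   # hypothetical export names

aligned = []
length = None
for name in files:
    fs, ir = wavfile.read(name)
    ir = ir.astype(np.float64)
    if ir.ndim > 1:                    # keep only the first channel if stereo
        ir = ir[:, 0]
    peak = int(np.argmax(np.abs(ir)))  # align every IR on its direct-sound peak
    ir = ir[peak:]                     # crude alignment: drop everything before the peak
    aligned.append(ir)
    length = len(ir) if length is None else min(length, len(ir))

summed = np.sum([ir[:length] for ir in aligned], axis=0) / len(aligned)
summed /= np.max(np.abs(summed))       # normalize before export
wavfile.write("summed_ir.wav", fs, summed.astype(np.float32))
```

Since the direct sound is aligned in every measurement but the reflections arrive at a different delay each time, the reflections average down while the direct sound adds coherently; that's the entire trick.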

As another comparison, the linked paper by Daniel Krol, which inspired this experiment, used 100 measurement points and algorithmic processing to get the following comparison between an anechoic measurement and this quasi-anechoic in-room measurement method:
View attachment 1060742

More investigation is needed to refine the process, and to see just how few data points are required to get useful results, but it is a promising technique for DIY'ers to get a better look at what a speaker is doing below 1kHz... even if it's not a Near Field Scanner on a shoestring.
You linked me this post on ASR over a year ago, and now I have a new speaker project coming up. I'm thinking of getting a few cheap microphones that are pretty linear between 100 and 5000Hz and summing them through my ADA8200 and CamillaDSP. If that works well, I'll add my calibrated MM1 to the measurement apparatus and have CamillaDSP 'splice' in the gated HF part with a linear-phase XO.
Do you have any advice for me? I have 8 inputs on the ADA8200, so I could use up to seven of those cheap mics. Have you tried summing fewer than your 40 IRs to see how many are required for good results?


Preliminary data for https://www.thomann.de/intl/the_t.bone_bodymike_tan_akg.htm
1737556572271.png
 
Sorry about the late reply, I came down with COVID this week. So please bear with me if this response is a bit loony.

That's an interesting idea, using a multichannel mixer and Camilla to do the summing in real time. One thing to keep in mind with this method is that all the mic inputs need to be time-aligned, so you would need to apply appropriate delays to each microphone such that the resulting output is as though the sound from the speaker reached each one at the same instant. That's why doing the processing in REW is nice, since all the IR's have their peaks aligned. It's likely doable live with DSP, but it might be... fiddly.
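To give a feel for what those delays look like, here's a quick sketch. The distances are made-up placeholders; measure your own. Each closer mic has to be delayed so its direct sound lines up with the arrival at the farthest mic, and the resulting values would go into a per-channel delay filter (CamillaDSP has one) before mixing down to a single channel.

```python
# Per-microphone delays for live summing. Distances are placeholder values.
SPEED_OF_SOUND = 343.0  # m/s at ~20 degC; adjust for your temperature

mic_distances_m = [0.94, 1.20, 1.46, 1.80]   # hypothetical speaker-to-mic distances

farthest = max(mic_distances_m)
for i, d in enumerate(mic_distances_m):
    delay_ms = (farthest - d) / SPEED_OF_SOUND * 1000.0
    print(f"mic {i}: delay {delay_ms:.3f} ms")
```

In practice I'd verify the alignment with an acoustic measurement and tweak by samples rather than trusting a tape measure, since the effective acoustic centers won't sit exactly where the grilles do.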

I have had good success with fewer than 40 IRs; here are 15 and 4 compared to ASR's Klippel measurement of a B2030P:
Klipple 15 4.jpg

Basically, the more IR's are summed, the smoother the result. But you don't need crazy high numbers. Another thing that I have observed is that the signal-to-noise ratio at the far mic positions is, of course, worse than at the positions closer to the speaker. So I think doing this method "vertically" helps you out. In other words, keep the mic on your desired axis and keep its distance to the baffle the same, but raise the speaker and microphone in increments and sum those results. This can also create a situation where the ground reflection from one IR helps smooth out the reflection from another. Here's a DIY speaker showing that with just two measurements summed: see how one is high at 100Hz, 300Hz, 500Hz, and 700Hz, and the other is low?
Low and High.jpg
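For anyone wondering where those alternating peaks and dips come from: the ground bounce travels farther than the direct sound, and that extra path length sets the spacing of the comb filter. A small sketch, using made-up heights and distance rather than the actual geometry above:

```python
# The ground reflection travels an extra path length relative to the direct sound;
# the interference pattern repeats every c/extra Hz. Heights and distance below
# are illustration values only, not the actual setup.
import math

C = 343.0   # speed of sound, m/s
d = 1.0     # horizontal speaker-to-mic distance, m (assumed)

def comb(h):
    """Extra path length and comb spacing with speaker and mic both at height h."""
    extra = math.sqrt(d**2 + (2 * h)**2) - d   # image-source geometry
    return extra, C / extra                     # (extra path in m, period in Hz)

for h in (1.0, 1.8):                            # two assumed measurement heights
    extra, spacing = comb(h)
    print(f"h = {h} m: extra path {extra:.2f} m, comb repeats every {spacing:.0f} Hz")
```

Choosing heights so the two path differences differ noticeably shifts the peaks of one measurement relative to the other, so much of the ripple averages out in the sum.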

Here's what happens when those two measurements are summed and compared to a ground plane measurement of the same speaker:

Results.jpg


The effect below about 200Hz is particularly interesting to me because the "horizontal" Delayed-Sum Beamforming technique tends to have issues below 200Hz in my experience (as can be seen in the B2030P measurements).

One final thought: directional microphones can be awesome. Here is a cardioid mic pointed vertically, an omni in the same position, and a ground plane measurement.
Cardioid v Omni.jpg

I would probably splice in an IR-gated omni measurement above 1kHz or so, but below that, I think it's an excellent option for getting reflection-free data. A supercardioid might work even better, since the center of its null would point closer to the angle of the ground reflection.
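The splice itself doesn't need to be fancy. Here's the general idea as a sketch; the 1 kHz splice point, the 5 ms gate, and the file names are placeholders, and mono WAVs at the same sample rate are assumed. The gated measurement is only trusted above roughly 1/(gate length), the cardioid (or other LF) measurement below, and the two are level-matched in an overlap band.

```python
# Sketch of a simple magnitude splice: gated omni for HF, cardioid sum for LF.
# Splice frequency, gate length, and file names are placeholders; mono WAVs assumed.
import numpy as np
from scipy.io import wavfile

def magnitude(ir, fs, gate_ms=None):
    """FFT magnitude of an impulse response, optionally gated (crudely) after the peak."""
    if gate_ms is not None:
        peak = int(np.argmax(np.abs(ir)))
        ir = ir[peak:peak + int(fs * gate_ms / 1000)]   # crude rectangular gate
    n = 1 << 16
    return np.fft.rfftfreq(n, 1.0 / fs), np.abs(np.fft.rfft(ir, n=n))

fs_hf, ir_hf = wavfile.read("omni_gated.wav")      # hypothetical HF measurement
fs_lf, ir_lf = wavfile.read("cardioid_lf.wav")     # hypothetical LF measurement
assert fs_hf == fs_lf                              # keep it simple: one sample rate

f, hf = magnitude(ir_hf.astype(float), fs_hf, gate_ms=5.0)   # valid above ~200 Hz
_, lf = magnitude(ir_lf.astype(float), fs_lf)

splice = 1000.0                                    # Hz, assumed splice frequency
band = (f > 0.8 * splice) & (f < 1.25 * splice)    # overlap band for level matching
lf *= np.mean(hf[band]) / np.mean(lf[band])        # match LF level to HF in that band

combined = np.where(f < splice, lf, hf)            # hard splice at 1 kHz
```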

Oh, and keep in mind that all of these experiments have been outside. Only having one reflection to deal with makes things waaaaaay easier.
 
How much did you move the box and mic upwards?
I think in the example I showed, one measurement was at 41" and the other was at 72". I was trying to make one reflection path length half of the other. That being said, I was experimenting with a little desktop speaker. If I were using "vertical" Delayed-Sum Beamforming with a large speaker, I probably wouldn't be too ambitious about how high I raised the speaker, just that I collected some vertical measurement points. Maybe a crank-up stand? But then again, I don't want to be responsible for breaking speakers that you worked very hard on. 😳
 
Like I said, two double espressos (the real ones) and it dawned on me that I had on-axis measurements of my PTT4.0M08-NAC04 midrange, and that in Acourate I can time-align the impulses. Thank you, espressos and @aslepekis, for waking me up ;-)

The measurements at 46, 70 and 100cm, plus the sum:
Beamforming-46-70-100cm.png


A significant improvement!
Now with the level of the sum adjusted by 9.5dB (summing three coherent measurements raises the level by 20·log10(3) ≈ 9.5dB), and with the main reflections indicated:
Beamforming-46-70-100cm-leveled-details.png

The floor reflection is well damped with a woolen carpet (~5cm thick) and a sound-absorbing panel.
Here's a picture of one of the measurement sessions:
20240712_105157.jpg

So yes, this helps in getting a much cleaner result without large numbers of measurements, albeit on one axis only.

Note that even the high-frequency "noise" stemming from the aluminium cone breakup etc. is somewhat damped as well.

Next will be measurements at the final positions in the room etc. ;-)
 
Thanks a lot for your answer. I hope you're doing better already. There was some research suggesting that post-COVID mental drowsiness is less prevalent in people who used their brains for complex tasks (or even some crosswords, if I remember correctly) during the illness. So you didn't only do me a big favor with your answer. 😀

Summing the mic responses is going to be fiddly, especially when I do my measurements outdoors and have to account for temperature and humidity. I would like to try that indoors, also since it's the middle of winter here. But the results with even four mics look very promising. Near-field or in-box measurements spliced to the multiple mics for MF, spliced to one mic for HF, should be pretty much perfect then. I'm really looking forward to that increased resolution in the mids. That was always a problem for me, as can be seen with the Swissonic A-Series.

Also, looking at the measurement setup in the post above: a rectangular box around the microphones, lined with mineral wool, should make them pretty cardioid.
 
Thanks a lot for your answer. I hope you're doing better already. There was some research suggesting that post-COVID mental drowsiness is less prevalent in people who used their brains for complex tasks (or even some crosswords, if I remember correctly) during the illness.
Yup. It iz what it iz. 3rd time having COVID. We will survive.

Some thoughtful posts all around.
 
Thanks a lot for your answer. I hope you're doing better already. There was some research suggesting that post-COVID mental drowsiness is less prevalent in people who used their brains for complex tasks (or even some crosswords, if I remember correctly) during the illness. So you didn't only do me a big favor with your answer. 😀
Happy to help! And thanks for helping back. 😀
Summing the mic responses is going to be fiddly, especially when I do my measurements outdoors and have to account for temperature and humidity. I would like to try that indoors, also since it's the middle of winter here. But the results with even four mics look very promising. Near-field or in-box measurements spliced to the multiple mics for MF, spliced to one mic for HF, should be pretty much perfect then. I'm really looking forward to that increased resolution in the mids. That was always a problem for me, as can be seen with the Swissonic A-Series.
Yeah, it's winter here too, so not a whole lot of outdoor experiments happening for me either. At least we got above freezing today!

Agreed on the increased resolution for the midrange. That was one of the big driving forces behind all this for me. Your plan sounds like it has promise.

One more comment on delayed-sum beamforming indoors: since the important thing is the geometry between the microphone and the drivers, each of your measurement points could be taken at a different height and even in a different room, which would probably help the summing process, since the more dissimilar the reflections, the better.
Also, looking at the measurement setup in the post above: a rectangular box around the microphones, lined with mineral wool, should make them pretty cardioid.
Perhaps, but I'd just use a cardioid mic, personally. 🙂
 
I haven't been working on the table top NFS for the last few months. To be honest, I just didn't feel like it. I kinda proved it could be done. It's one of my pitfalls to lose interest at the point where others seem able to go on forever on the same topic.

I've started to pick things up again very slowly. We'll see how it goes...

I was trying to do some cable management back in October. The easiest cable management is having fewer cables. So I bought an Arduino-form-factor ESP32 board that supports WiFi or Bluetooth. Running FluidNC on it should give a GRBL interface over WiFi (websockets), and it has a nice CNC-oriented GUI running out of the box.
Let's see how it goes. It's at least something new to tinker with.
IMG20250201191515.jpg
USB cable is only for power.
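For anyone curious what driving it over the network looks like from the measurement software's side, here is a rough sketch, not the project's actual code. It assumes FluidNC's Telnet-style GRBL port (23 by default, as far as I know; the WebSocket channel the web GUI uses speaks the same GRBL protocol), and the IP address and moves are placeholders.

```python
# Rough sketch of talking GRBL to a FluidNC board over WiFi. Not the project's
# actual code; IP address, port, and the jog command are placeholder values.
import socket

HOST = "192.168.1.50"   # placeholder: the ESP32's IP address
PORT = 23               # FluidNC's default Telnet-style GRBL port (assumption)

def send(sock, line):
    """Send one GRBL command and return the raw reply."""
    sock.sendall((line + "\n").encode())
    return sock.recv(1024).decode(errors="replace")

with socket.create_connection((HOST, PORT), timeout=5) as s:
    print(send(s, "?"))                        # GRBL status report
    print(send(s, "$J=G91 G21 X10 F500"))      # jog 10 mm in X at 500 mm/min
```

A second board simply gets its own IP address and a second connection.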

Another part that I found interesting was the AI addition to PyCharm (trial). This is what it says about the code that generates the measurement points. Quite impressive.

The selected code defines a Python class called SphericalMeasurementPointsSorted. This class is designed for generating points arranged on a sphere (and an inner shell) and subsequently sorting them based on cylindrical coordinates. Here's a breakdown of what the code does:

### Key Features and Functionality

1. Initialization (__init__ method):
- Initializes the class with parameters for the sphere and its surroundings:
- nr_of_points: Number of sampling points to generate on the sphere.
- wall_spacing: Distance between the outer and inner spherical shells.
- radius: The radius of the outer sphere.
- speaker_height, speaker_width, speaker_depth: Dimensions of the speaker or an obstacle that the sampling points must avoid.
- Computes spherical grid points using the formula for spherical coordinates (theta for vertical angles and phi for horizontal angles):
- m_theta quantifies how many latitude circles pass through the sphere.
- A nested loop determines theta and phi values for all points distributed evenly over the sphere.
- Converts spherical points (theta, phi) into Cartesian coordinates (x, y, z) for the outer and inner radius of the defined sphere. Inner points are offset by the wall_spacing.

2. Collision/Obstacle Check:
- Determines whether any generated points fall within the bounding volume of the speaker or obstacle.
- Bounding conditions (bound_x, bound_y, bound_z) filter points to ensure none overlap with the obstacle. If a point does overlap, an exception is raised.

3. Cylindrical Sorting:
- Transforms Cartesian coordinates (x, y, z) into cylindrical coordinates (r, theta, and z):
- r: Radial distance from the z-axis.
- theta: Angular position in degrees around the z-axis.
- z: Height along the z-axis.
- Points are sorted primarily by their theta value, and secondarily by their z value to ensure a structured order.

4. Iteration Over Points:
- Provides a method next to iterate sequentially through the sorted points. Each point is given in cylindrical coordinates (r, theta, z) encapsulated in the CylindricalPosition data type.
- Keeps track of the current index being accessed.
- A ready method signals completion of traversal.

5. Reset Mechanism:
- reset rewinds the internal pointer back to the beginning of the list of points, allowing for re-execution of the traversal.

6. Registration Hook:
- Includes a register method to integrate with a factory pattern system by registering this class with the identifier "SphericalMeasurementPointsSorted".

---

### Practical Use
The SphericalMeasurementPointsSorted class is useful in applications requiring structured point sampling on and inside a sphere while avoiding obstacles. This might include:
  • 3D sound measurements or simulations (e.g., for speaker testing).
  • Sampling for spherical harmonic analysis.
  • Collision-aware point distribution in robotics or computer graphics.

---

### Output of the Code
The class ultimately generates points distributed over a spherical shell, avoids obstacles, and provides them in cylindrical coordinates for usage in algorithms or analysis steps.
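If you'd rather read code than that description, here's a compact sketch of what such a class could look like, reconstructed from the summary above. It is not the actual code from the project: the CylindricalPosition type, the equal-area latitude-band distribution, and the bounding-box convention are my guesses.

```python
# Compact sketch reconstructed from the AI summary above; NOT the repository's
# actual code. Point distribution uses the common equal-area latitude-band scheme.
import math
from collections import namedtuple

CylindricalPosition = namedtuple("CylindricalPosition", "r theta z")  # assumed type

class SphericalMeasurementPointsSorted:
    def __init__(self, nr_of_points, wall_spacing, radius,
                 speaker_height, speaker_width, speaker_depth):
        self._points = []   # Cartesian (x, y, z), outer and inner shell
        self._index = 0

        # Distribute roughly nr_of_points points evenly over the outer sphere.
        d = math.sqrt(4 * math.pi / nr_of_points)       # target point spacing
        m_theta = max(1, round(math.pi / d))            # number of latitude bands
        for i in range(m_theta):
            theta = math.pi * (i + 0.5) / m_theta
            m_phi = max(1, round(2 * math.pi * math.sin(theta) / d))
            for j in range(m_phi):
                phi = 2 * math.pi * j / m_phi
                for r in (radius, radius - wall_spacing):   # outer then inner shell
                    x = r * math.sin(theta) * math.cos(phi)
                    y = r * math.sin(theta) * math.sin(phi)
                    z = r * math.cos(theta)
                    self._check_collision(x, y, z, speaker_width,
                                          speaker_depth, speaker_height)
                    self._points.append((x, y, z))

        # Sort by cylindrical angle first, then by height along the z-axis.
        self._points.sort(
            key=lambda p: (math.degrees(math.atan2(p[1], p[0])), p[2]))

    @staticmethod
    def _check_collision(x, y, z, width, depth, height):
        # Reject points inside the speaker's bounding box (assumed centered on the
        # origin in x/y and extending from z = 0 up to the speaker height).
        if abs(x) <= width / 2 and abs(y) <= depth / 2 and 0 <= z <= height:
            raise ValueError("Measurement point collides with the speaker")

    def next(self):
        """Return the next point as a CylindricalPosition and advance the index."""
        x, y, z = self._points[self._index]
        self._index += 1
        return CylindricalPosition(r=math.hypot(x, y),
                                   theta=math.degrees(math.atan2(y, x)),
                                   z=z)

    def ready(self):
        """True once all points have been handed out."""
        return self._index >= len(self._points)

    def reset(self):
        """Rewind so the traversal can be repeated."""
        self._index = 0
```

The factory-registration hook is left out here; the real implementation lives in the project's repository and will differ in the details.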
 
I did quite a bit of software refactoring. While trying to integrate the ESP32Duino neatly, I had a period where everything would crash. I needed to make a class diagram to see what was used where. That showed some weird relations.

This is the situation after the refactoring:
1741364005391.png

I think the structure should be clear even for non-SW developers. The lines with a little square indicate a has-a relation. The open triangles indicate an is-a relation. Only one of the classes below it is present at a time; this acts as an option switch. The 'mocks' do nothing; they help in testing and allow me to test only parts of the complete object tree.

I ordered some more ESP32Duino's + CNC shields. I hope I can also replace the TIC and control the rotating table wirelessly (the ESP32Duino is way cheaper too). The software is ready for that. The IRotator interface has two functional implementations: TicRotator and GrblRotator. GrblRotator can use either an Arduino or an ESP32Duino.
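To illustrate the is-a / option-switch idea with the IRotator example, here is a tiny sketch. The method names are my own placeholders, not the project's actual API.

```python
# Illustration of the is-a / mock pattern described above. Method names are
# placeholders, not the project's actual API.
from abc import ABC, abstractmethod

class IRotator(ABC):
    """Anything that can rotate the turntable to an angle in degrees."""
    @abstractmethod
    def rotate_to(self, angle_deg: float) -> None: ...

class GrblRotator(IRotator):
    def __init__(self, grbl_connection):
        self._grbl = grbl_connection           # has-a: the GRBL link is injected

    def rotate_to(self, angle_deg: float) -> None:
        self._grbl.send(f"G0 A{angle_deg}")    # placeholder G-code mapping

class MockRotator(IRotator):
    """Does nothing; lets the rest of the object tree be tested in isolation."""
    def rotate_to(self, angle_deg: float) -> None:
        pass

def run_scan(rotator: IRotator, angles):
    # The scanner only knows the interface, so swapping in a mock (or a
    # TicRotator) is just a matter of which object gets passed in.
    for a in angles:
        rotator.rotate_to(a)

run_scan(MockRotator(), range(0, 360, 10))     # usage: dry run without hardware
```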
 
The extra boards arrived today. I made the changes indicated in this video:


Both ESP32Duino boards (hidden below the red CNC shields) have a different IP address, and that's enough to control them separately.

I started the software and everything started rotating 🙂

IMG20250308204459.jpg

(USB is just for power)

I plan to do some more tests and then slowly build up the hardware again.