Klippel Near Field Scanner on a Shoestring

a test case to reproduce

Code:
using System;
using System.Runtime.InteropServices;
using System.IO;
using System.Numerics;


namespace Smooth
{
    class Program
    {
        static double[] parse(String input)
        {
            var lines = input.Split('|');
            var n = lines.Length;
            var vals = new double[n];
            for (var i = 0; i < n; i++)
            {
                vals[i] = Double.Parse(lines[i]);
            }
            return vals;
        }
        
        static void Main(string[] args)
        {
            double[] dataIn = parse("3.72526296e-05|5.09016873e-05|6.63042274e-05|8.34622579e-05|1.02298897e-04|1.22588259e-04|1.43873918e-04|1.65384019e-04|1.85999733e-04|2.04478233e-04|2.20575230e-04|2.38870409e-04|2.74201035e-04|3.42342845e-04|4.41093709e-04|5.41365150e-04|5.90392959e-04|5.86344944e-04|5.85859271e-04|6.14657646e-04|6.47684903e-04|7.15555761e-04|7.82675127e-04|7.16797402e-04|7.56132818e-04|6.93179605e-04|6.65578126e-04|5.70543066e-04|6.78094958e-04|5.18569639e-04|5.20194636e-04|5.68871826e-04|6.99442756e-04|8.01558009e-04|9.02082084e-04|7.05317036e-04|6.13709079e-04|7.04141267e-04|7.79495030e-04|6.34446944e-04|6.02785733e-04|4.88566812e-04|3.41827165e-04|4.42934164e-04|4.74154089e-04|3.92026914e-04|5.35163214e-04|4.68319409e-04|5.28945306e-04|2.33644852e-04");
            double[] freqs = parse("20.|23.02790799|26.51422731|30.52835934|35.1502125|40.47179295|46.59903621|53.65391591|61.77687193|71.12960612|81.89830125|94.29732727|108.57350879|125.01103851|143.9371346|165.72855457|190.81909527|219.7082284|252.97104337|291.2696955|335.36658736|386.13954578|444.59929651|511.90958454|589.41034051|678.64435438|781.38798741|899.68653379|1035.89493585|1192.72466332|1373.29769001|1581.20864218|1820.59635598|2096.22626831|2413.58528128|2778.99098875|3199.71743921|3684.13993865|4241.90177584|4884.1061891|5623.53739595|6474.91508564|7455.18744063|8583.86852026|9883.42672265|11379.73205804|13102.57113719|15086.24012671|17370.22747503|20000.");
            int numpts = freqs.Length;
            for (var i = 0; i < 6; i++)
            {
                Console.WriteLine(">> Smooth " + i + " - " + numpts);
                double[] copy = (double[])dataIn.Clone();
                MeasCalcs.Smooth(copy, freqs, numpts, i);
                int nans = 0;
                foreach (double j in copy)
                {
                    if (Double.IsNaN(j))
                    {
                        nans++;
                    }
                }
                Console.WriteLine("Smooth " + i + " NaNs: " + nans);
            }
            System.Threading.Thread.Sleep(20000);
        }
    }

    class MeasCalcs
    {
        [DllImport("MeasCalcs.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern void Smooth(double[] dataIn, double[] freqs, int numpts, int atype);
    }
}

I realised I was using freqs.Length-1 in my code but this function already accounts for that in the definition (i.e. the arrays are declared as 0:NumPts-1, so I don't need to subtract 1 this time). This reduces the number of NaNs but doesn't completely eliminate them; CB Moore smoothing seems particularly troubled.

I had a chance to look into this. I could not trace the exact error since it seemed to work most of the time, but it appears that there are not enough points for the algorithm to work properly. I have never tested it below 200 points. When the points get too few, the critical bands are not even a single point wide and the algorithm fails. I could probably make this work at some point, but for now try running it with 200 points and see if it still fails. It also seems to always return a single NaN at the last position, and I'm not sure why that is either, but for now could you just ignore that point?
 
I will give it a try with some longer datasets.

No problem ignoring the last value, it just means the displayed data ends slightly early but I don't think that matters so much really.
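For reference, ignoring it only needs something small like this before the data is displayed (a sketch in the style of the test harness above; the helper name is mine):

Code:
// Returns a copy of the smoothed data with any trailing NaNs removed, so the
// invalid last point is simply never displayed.
static double[] TrimTrailingNaNs(double[] data)
{
    int len = data.Length;
    while (len > 0 && Double.IsNaN(data[len - 1]))
    {
        len--;
    }
    var trimmed = new double[len];
    Array.Copy(data, trimmed, len);
    return trimmed;
}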

I have spent some time tidying up various bits and pieces to make it more robust. I probably now need to package it into an exe.

Before I do that though, @gedlee is it ok to package the DLL in the exe and make that publicly available?
 
Sure, that's a great idea to get some more input on the features. You can distribute the DLL, there is no issue in that.

The remaining two DLL calls are to calculate two things: first is the power response and the second is the velocity distribution at the surface of our fictitious sphere.

With the known power response we can also find the DI (for any theta) as the ratio of the square of the pressure response (at the desired theta) to the power response. To me, DI is crucial to a speaker design, and its calculation was a major motivation for me to do the entire polar expansion project. It must be shown on the plots somewhere. But here is the problem: since we have not done the calculations absolutely (only relatively, at 0 dB at 1 kHz), the DI will not be scaled properly. But we know what its values MUST be at the lowest frequencies, so this then sets the absolute level. This works almost all of the time, but sometimes it does not give good scaling, with negative DI (an impossibility) over some limited range.
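
To make that scaling step concrete, it amounts to something along these lines (a sketch only, in the C# of the test harness earlier in the thread; the helper name, the per-frequency input arrays, and the knownLowFreqDiDb anchor are illustrative, not the DLL's API):

Code:
// Relative DI from the pressure response at one theta and the power response,
// then shifted so the DI at the lowest frequencies takes its known value.
static double[] DirectivityIndex(double[] pressureMag, double[] power, double knownLowFreqDiDb)
{
    var di = new double[pressureMag.Length];
    for (var i = 0; i < di.Length; i++)
    {
        // Ratio of squared pressure to power response, in dB (relative scale only).
        di[i] = 10.0 * Math.Log10(pressureMag[i] * pressureMag[i] / power[i]);
    }
    // The data is only relative (0 dB at 1 kHz), so anchor the whole curve to the
    // value the DI must take at the lowest frequency.
    double offset = knownLowFreqDiDb - di[0];
    for (var i = 0; i < di.Length; i++)
    {
        di[i] += offset;
    }
    return di;
}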

The velocity reconstruction has been useful for finding the causes of particularly interesting (usually bad) regions in the response. The axial hole found on waveguides, for example, can be seen to be a reflection off the edge of the waveguide.

The code is:
Code:
Declare Function CalPower Lib "MeasCalcs.dll" (ByRef DataIn As Complex, _
                                                   ByVal NumCoef As Integer, _
                                                   ByVal freq As Double, _
                                                   ByVal farnum As Double) As Double

Complex(8) function CalPolar( DataIn, NumCoefs, Theta, freq, FarRadius, SourceRadius)

IMPLICIT NONE
! Specify that CalPolar is exported to a DLL
!DEC$ ATTRIBUTES DLLEXPORT :: CalPolar
!DEC$ ATTRIBUTES REFERENCE :: DataIn 
!DEC$ ATTRIBUTES Value :: NumCoefs 
!DEC$ ATTRIBUTES Value :: Theta 
!DEC$ ATTRIBUTES Value :: freq 
!DEC$ ATTRIBUTES Value :: FarRadius 
!DEC$ ATTRIBUTES Value :: SourceRadius 

integer, intent(in):: NumCoefs

complex(8), intent(in):: DataIn(0:NumCoefs-1)

Integer icoeff, j, ia, ij
real(8)  Theta,  freq

Declare Function CalVelocity Lib "MeasCalcs.dll" (ByRef DataIn As Complex, _
                                                      ByVal NumCoef As Integer, _
                                                      ByVal angl As Double, _
                                                      ByVal freq As Double, _
                                                      ByVal MeasRad As Double) As Complex


Complex(8) function CalVelocity( DataIn, NumCoefs, Theta, Freq, MeasRadius)

IMPLICIT NONE
! Specify that CalVelocity is exported to a DLL
!DEC$ ATTRIBUTES DLLEXPORT :: CalVelocity
!DEC$ ATTRIBUTES REFERENCE :: DataIn 
!DEC$ ATTRIBUTES Value :: NumCoefs 
!DEC$ ATTRIBUTES Value :: Theta 
!DEC$ ATTRIBUTES Value :: freq 
!DEC$ ATTRIBUTES Value :: MeasRadius 

integer, intent(in):: NumCoefs

complex(8), intent(in):: DataIn(0:NumCoefs-1)
real(8), intent(in) :: Theta,freq, MeasRadius

From what you say, all FORTRAN function calls will cause a memory leak. How does one get around that?
 
Sure, that's a great idea to get some more input on the features. You can distribute the DLL, there is no issue in that.
Thanks. I'll see if I can get that working.

<code snippet>
The code starts with CalPower but then shows the definition of CalPolar; was the wrong code pasted in?

From what you say, all FORTRAN function calls will cause a memory leak. How does one get around that?
Not all calls; it's only CalPolar that has this problem, because it returns a value it has created and hence must have allocated memory for. The other calls manipulate an array that the caller has allocated and passed in. This means the fix for CalPolar would just be to add another parameter that the function writes into instead of returning a value.

Having said that, it occurs to me that the result might be allocated on the stack and hence wouldn't be a leak (it probably depends on the Fortran memory model and how the function is written). I should probably write a quick test to verify this.
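
On the calling side, the reworked version would presumably be imported as something like this (hypothetical: it assumes CalPolar becomes a routine that fills a caller-supplied result, which is not what the current DLL exports, and that complex(8) marshals as System.Numerics.Complex):

Code:
using System.Numerics;
using System.Runtime.InteropServices;

class MeasCalcsLeakFree
{
    // Hypothetical variant: the Fortran side writes into a result the caller owns,
    // so nothing is allocated (and potentially leaked) inside the DLL.
    [DllImport("MeasCalcs.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern void CalPolar(Complex[] dataIn, int numCoefs, double theta,
                                       double freq, double farRadius, double sourceRadius,
                                       out Complex result);
}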
 
The code starts with CalPower but then shows the definition of CalPolar; was the wrong code pasted in?

Yep!
Real(8) function CalPower( DataIn, NumCoefs, freq, SourceRadius)
use deffs

IMPLICIT NONE
! Specify that CalPower is exported to a DLL
!DEC$ ATTRIBUTES DLLEXPORT :: CalPower
!DEC$ ATTRIBUTES REFERENCE :: DataIn
!DEC$ ATTRIBUTES Value :: NumCoefs
!DEC$ ATTRIBUTES Value :: freq
!DEC$ ATTRIBUTES Value :: SourceRadius

integer, intent(in):: NumCoefs

complex(8), intent(in):: DataIn(0:NumCoefs-1)
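
For reference, the matching C# import should look roughly like this (mirroring the Smooth declaration in the test program above; treating complex(8) as System.Numerics.Complex and using Cdecl are assumptions that would need checking against the DLL):

Code:
using System.Numerics;
using System.Runtime.InteropServices;

class MeasCalcsPower
{
    // Power response from the complex expansion coefficients at one frequency.
    // dataIn corresponds to the Fortran DataIn(0:NumCoefs-1) array.
    [DllImport("MeasCalcs.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern double CalPower(Complex[] dataIn, int numCoefs,
                                         double freq, double sourceRadius);
}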

Mark - it would probably be simple enough to let the user insert a reference for 0 dB if one is known. This may not solve the DI issue however.
 
Mark - it would probably be simple enough to let the user insert a reference for 0 dB if one is known. This may not solve the DI issue however.

Pretty much what I was thinking. But integrating the known reference into the directivity index is where the added value would be most useful. Perhaps a place to enter a known 1 kHz SPL that is absolute and would be global throughout the application. But I read that the .EXE is near; maybe too late for the little brainstorm.
 
I know this is all for the convenience of measuring indoors... is there a serious problem with measuring outdoors - except the weather?


THx-RNMarsh

How about the 4 to 5 months up here called winter?

Makes decent anechoic measurements just a bit on the hard side when the daily high is minus 15 Celsius. And of course there is snow and freezing rain on the warmer days. Then when you get tired of that you get a week or two of minus 25 Celsius days between January and February to mix it up a bit.

:cool:
 
I know this is all for the convenience of measuring indoors...
The previous two comments are pretty decent reasons, but in any case it seems you miss the point.
Most of the work so far is to do accurate directivity; it doesn't make much difference indoors or out except for the time window.
Once we have all that, it's not much extra work to do indoor pseudo-anechoic measurements.
That sort of accurate measurement is almost a Holy Grail for audio so the cost benefit is excellent.
A bit of effort required on the theory, but only once, and it's for fun anyway.

Best wishes
David
 
Ok, just checking about the way it has been done vs. indoors. Nothing wrong with either.
Yes, it would be pretty cool to do it indoors and be just as accurate as outdoors. Check. Of course, you are going to have to check your indoor results against anechoic chamber results for accuracy. After that, it should be well accepted.

I did very close-mic measurements indoors of mids/tweeters about 40 years ago to determine their power response. I had access through the ME department at the University of California at Davis to verify my indoor work. It agreed very closely. But it is harder to do at low frequencies indoors.


THx-RNMarsh
 
This is not quite correct. The phi angles can be Fourier transformed as they ARE simply sine and cosine, but the theta angles are not. They are Legendre polynomials and not pure sines and cosines.

I think you complicate this more than it needs to be.
The Legendre polynomials are polynomials in cos(theta) (written cos(θ) below for compactness) and can all be reduced with standard trig identities.
For l = 1 they are just sin(θ) and cos(θ).
For l = 2 they are
  • sin(θ)sin(θ) = 1/2 - 1/2 cos(2θ)
  • sin(θ)cos(θ) = 1/2 sin(2θ)
  • 3 cos(θ)cos(θ) - 1 = 1/2 + 3/2 cos(2θ)
And so on, they all reduce to simple sin and cos and harmonics.
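A quick numerical check of those reductions (a standalone sketch in the same C# as the test harness earlier in the thread):
Code:
using System;

class LegendreTrigCheck
{
    static void Main()
    {
        // Verify the l = 2 reductions over a sweep of angles:
        //   sin(θ)sin(θ)       = 1/2 - 1/2 cos(2θ)
        //   sin(θ)cos(θ)       = 1/2 sin(2θ)
        //   3 cos(θ)cos(θ) - 1 = 1/2 + 3/2 cos(2θ)
        double maxErr = 0.0;
        for (var t = 0.0; t <= Math.PI; t += Math.PI / 180.0)
        {
            double s = Math.Sin(t), c = Math.Cos(t);
            maxErr = Math.Max(maxErr, Math.Abs(s * s - (0.5 - 0.5 * Math.Cos(2 * t))));
            maxErr = Math.Max(maxErr, Math.Abs(s * c - 0.5 * Math.Sin(2 * t)));
            maxErr = Math.Max(maxErr, Math.Abs(3 * c * c - 1 - (0.5 + 1.5 * Math.Cos(2 * t))));
        }
        Console.WriteLine("max error: " + maxErr); // rounding error only, ~1e-15
    }
}
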
Similar to the way we have apparently complicated fractional order Bessel functions, but they turn out to be just sin and cos with radial drop-off factor.
I expect it's possible to derive the radial solutions directly rather than as special cases of Bessel functions.
Probably possible to similarly derive the θ functions as the usual trig functions rather than as Legendre polynomials in cos(θ).
Maybe that would make it clearer but it seems like too much work to me.
If anyone else has done it, however, I would be keen to see it.

Best wishes
David
 
Since any set of functions can always be expanded into any other set of functions (given some mild constraints), what you are saying is not really new to me. It seemed to me that you were saying that the theta variation was just simple sines and cosines, like the phi variation, which it is not. That they can be expanded into a set of sines and cosines is not the point.

The nature of all of these functions comes from the way in which they are derived from the original differential equations, the same way that sines and cosines are defined. I have never seen anyone solve these DiffEqs directly in sines and cosines, but expanding the derived functions in terms of sines and cosines is common. Numerically, however, things can get tricky. When I was doing the calculations of the Oblate Spheroidal wave functions I did so by noting that all wave functions are asymptotic to sines and cosines at large kr arguments. By numerically integrating backwards to zero I could get the form of the solutions, except that as kr -> 0.0 the solution failed to converge for orders greater than 2. When I found code for a numerical solution to these functions it did not use sines and cosines at all, probably for this exact same reason. These wave functions are always best solved directly in the forms that come from the DiffEq and not as sines and cosines.

Another example is the angular wave solution when the angle cannot go -pi -> pi. Legendre polynomials are still the solution but with different parameters. These functions cannot be expanded neatly into sines and cosines. (This is shown in the appendix to my book on my website.)

PS. For the interested reader I recommend Morse and Feshbach, "Methods of Theoretical Physics", chapter 6 where all separable wave solutions are derived from a fundamental look at general classes of DiffEQs. I found that my son had this exact same approach in his college DiffEq course this summer, so it's still the method used today, even though "Methods" is almost 80 years old.
 
I had a chance to look into this. I could not trace the exact error since it seemed to work most of the time, but it appears that there are not enough points for the algorithm to work properly. I have never tested it below 200 points. When the points get too few, the critical bands are not even a single point wide and the algorithm fails. I could probably make this work at some point, but for now try running it with 200 points and see if it still fails. It also seems to always return a single NaN at the last position, and I'm not sure why that is either, but for now could you just ignore that point?

The error is too few points in every case. I could flag this in the code, but doing smoothing on too few points, where the bandwidth of the filter is narrower than a single point, is probably not going to be valid. Better is to use more points so that smoothing can be done over a range of data values.

I'm guessing that 200 points is likely to be the lower bound on the number of log points. I initially used this low value owing to computer speed, but these days I would likely use 300-400 points and 2048 FFT data points (4096 samples).
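
As a rough guide to resolution, a denser log-spaced grid is trivial to generate, and it is easy to see how many points per octave each choice gives (a sketch in the C# of the test harness; the helper and the points-per-octave figure are just for illustration):

Code:
// Build a log-spaced frequency grid and report how many points per octave it
// gives, so the smoothing bandwidth can be checked against the grid spacing.
static double[] LogGrid(double fLow, double fHigh, int numPts)
{
    var freqs = new double[numPts];
    double step = Math.Log(fHigh / fLow) / (numPts - 1);
    for (var i = 0; i < numPts; i++)
    {
        freqs[i] = fLow * Math.Exp(i * step);
    }
    double octaves = Math.Log(fHigh / fLow) / Math.Log(2.0);
    Console.WriteLine("points per octave: " + (numPts - 1) / octaves);
    return freqs;
}

// LogGrid(20.0, 20000.0, 300) gives about 30 points per octave; the 50-point
// grid in the test case above gives only about 5.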
 
Pretty much what I was thinking. But integrating the known reference into the directivity index is where the added value would be most useful. Perhaps a place to enter a known 1 kHz SPL that is absolute and would be global throughout the application. But I read that the .EXE is near; maybe too late for the little brainstorm.
I can easily add a dB offset to the graphs.

What's the benefit, though, given this is designed to show data from an intimately related set of measurements (i.e. how they relate to each other is the important bit)?
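
A global offset from a user-entered 1 kHz SPL would amount to something like this (a sketch in the C# of the test harness; the helper name and the nearest-point lookup are just illustrative):

Code:
// Shift a relative dB curve (normalised to 0 dB at 1 kHz) onto an absolute
// scale using a known SPL at 1 kHz entered by the user.
static double[] ApplyAbsoluteReference(double[] levelsDb, double[] freqs, double knownSplAt1kDb)
{
    // Find the grid point closest to 1 kHz.
    int idx = 0;
    for (var i = 1; i < freqs.Length; i++)
    {
        if (Math.Abs(freqs[i] - 1000.0) < Math.Abs(freqs[idx] - 1000.0))
        {
            idx = i;
        }
    }
    double offset = knownSplAt1kDb - levelsDb[idx];
    var shifted = new double[levelsDb.Length];
    for (var i = 0; i < levelsDb.Length; i++)
    {
        shifted[i] = levelsDb[i] + offset;
    }
    return shifted;
}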
 
I would agree, but some want to see "real" numbers, like 96 dB for 2.83 volts at 1 meter at 1 kHz. The thing is that it all scales with level - there is always a volume control. It is important to know absolute levels when matching drivers, but I just do this by taking all the data in one setting. If you always use the same level and the same distance, then even if the data is taken separately it still works. But there are so many places where there is gain in the path that it is hard to be sure that they are all what they should be.