Klippel Near Field Scanner on a Shoestring

Hi Scott,

I'm not sure if you misunderstood me: CuPy is for using the GPU for multi-dimensional array math.

CuPy

I will go back into lurking mode. :)

Mogens

Modern FORTRAN compilers (Intel's, for example) have built-in auto-parallelization that will parallel-process any steps that are applicable. One must write the code in a certain way to take advantage of this feature, and I understand that it is massively effective at reducing run time. My code does not use this capability because it is very fast as it stands and doesn't need it. But it could be modified to take advantage of multiple processors, like the Intel i7, if that became necessary.
 
GPU-based gaming graphics are so far beyond this that there is no comparison: real-time rendering at 4K UHD. It's a different world.

Yes, but that is highly specialized for gaming graphics. The kind that we do is trivial by comparison. Kind of like hitting a fly with a sledgehammer. It's the number crunching behind the scenes that is time consuming and even that is virtually instantaneous with today's computers.

As Klippel's example shows, it is the software math complexity that is where the real work is done, not the graphics or the hardware.

PS. I thought about it, and a full 3D mega-point model would NOT be required to do the dual-scan room-reflection elimination. It could be done with just a few points at LFs, which is the only place it's needed.
 
Below is an HTML version of the MathCad file.

Many thanks for that; the code is actually quite readable. I will study it thoroughly and, no doubt about it, come back with questions.

I think it's pretty exciting that you have come to the conclusion that not many points are needed for low-frequency soundfield separation.
If I understand Klippel's documents correctly, the more points you have the lower the error, where -20 dB error is taken as "good enough". (This is of course based on the anechoic chamber spec of a minimum absorption value of 0.99 to be called anechoic, where the measurement error is < 1 dB.)

Kees
 
Does that sound reasonable? If so I will post the VB.Net interface code and you can work from that.
How do you want to share the DLL? Will you publish it generally, or do you want to deal with that offline? For a basic proof of concept, I think it just needs the DLL and some way to verify that it is basically behaving itself, i.e. at least one interface call with sample input and expected output. This seems like it would be sufficient to demonstrate that a Python wrapper should work.
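
Something like the ctypes skeleton below is probably all the Python side needs. To be clear, the DLL file name, the exported subroutine name and the argument list here are pure placeholders until I see the actual interface; Fortran subroutines pass everything by reference, which is the only real assumption baked in.

import ctypes
import numpy as np

# Placeholder DLL name and export; the real names come from the interface details.
lib = ctypes.CDLL("./spatial.dll")

n = 1024
data = np.zeros(n, dtype=np.float64)       # REAL(8) array on the Fortran side
n_arg = ctypes.c_int32(n)

# Fortran subroutines take all arguments by reference.
lib.SomeSub(data.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
            ctypes.byref(n_arg))
print(data[:4])                            # crude check that the call did something sane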
 
Email me separately for the DLL; attached below are the details of the interface, which should be enough to do something simple.

I would suggest creating an array of reals to pass into FFT and then passing the output of FFT to LogToLin, which takes the n linear FFT points and creates an equivalent log spacing while maintaining the complex nature of the data. Note that, as with most FFT subs, the 0 point and the N+1 point are returned together in the first complex number, with real(0) as the real part and real(N+1) as the imaginary part. The imaginary part of both of these points must be zero (guaranteed).
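
In numpy terms, just to illustrate that data layout for whoever writes the wrapper (the DLL's own FFT and LogToLin remain the actual interface; the sample rate and record length below are assumed):

import numpy as np

fs = 48000.0                    # assumed sample rate
n = 1024                        # even-length real record (assumed)
x = np.random.randn(n)          # stand-in for a measured impulse response

spec = np.fft.rfft(x)           # n/2 + 1 bins from DC to Nyquist; DC and Nyquist are purely real

# Pack DC and Nyquist into the first complex slot, as described above:
# real part = point 0, imaginary part = point N+1 (their own imaginary parts are zero).
packed = np.concatenate(([spec[0].real + 1j * spec[-1].real], spec[1:-1]))

# Crude log re-spacing of the linear bins, in the spirit of LogToLin: for each
# log-spaced frequency take the first linear bin at or above it, keeping the
# data complex throughout (done here on the un-packed spectrum for clarity).
f_lin = np.fft.rfftfreq(n, d=1.0 / fs)
f_log = np.geomspace(f_lin[1], f_lin[-1], num=128)
idx = np.minimum(np.searchsorted(f_lin, f_log), len(f_lin) - 1)
spec_log = spec[idx]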
 

Attachments

  • Declare Sub CalSpatial Lib.pdf
I think it's pretty exciting that you have come to the conclusion that not many points are needed for low-frequency soundfield separation.
If I understand Klippel's documents correctly, the more points you have the lower the error, where -20 dB error is taken as "good enough". (This is of course based on the anechoic chamber spec of a minimum absorption value of 0.99 to be called anechoic, where the measurement error is < 1 dB.)

Kees

Of course more points means better accuracy, but how much is enough? One really has to do real-world data analysis to determine how many points are required to meet expectations. That is how I determined that 13 angular points are sufficient. You start with as few as you think you can get away with and keep adding points until you reach stability at the highest frequency of interest.

Trying to determine this analytically would be far too difficult, and pointless anyway once you have real data to work with.
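
In pseudo-Python, the procedure is roughly the sketch below; reconstruct() is a stand-in for whatever processing turns a given number of measurement points into a predicted response at the highest frequency of interest, and the 1 dB stability threshold is only an example value.

import numpy as np

def find_stable_point_count(reconstruct, start=5, step=2, tol_db=1.0, max_points=99):
    """Keep adding measurement points until successive reconstructions agree within tol_db."""
    prev = np.asarray(reconstruct(start))
    n = start
    while n < max_points:
        n += step
        cur = np.asarray(reconstruct(n))
        change_db = np.max(np.abs(20 * np.log10(np.abs(cur) / np.abs(prev))))
        if change_db < tol_db:
            return n          # stable: adding points no longer changes the answer
        prev = cur
    return n                  # never stabilized within max_points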

My feeling about Klippel's approach is that he wanted to do a massive project to inhibit others from following with competitive products. If we can show that we can do it "on a shoestring", then that will turn the community on its head. I have been doing this far longer than Klippel, and I had the background for it; he didn't (he's an EE). I have been writing books and papers on sound radiation for almost 40 years. This area is my specialty.

Also, we will need some beta testers who can take data to prove out the process. I can take the data on my system, but I have to clear my living room, set it up, and then remake the room; it's a real PITA, so some help here would be appreciated.
 
It's the number crunching behind the scenes that is time consuming and even that is virtually instantaneous with today's computers.

Just kidding. I assumed that with the numbers you were talking about, a modern processor would be virtually instantaneous. Many large companies and scientific organizations run all their computers as a "farm" these days, parallelizing large jobs over the network. There are still circuit simulations that take hours or days to run.

I'm glad you found someone, good luck.
 
Many large companies and scientific organizations run all their computers as a "farm" these days, parallelizing large jobs over the network. There are still circuit simulations that take hours or days to run.

These days FEA projects of a million nodes and more are common - on a PC! I wrote an FEA program for my PhD thesis, and a run with a few hundred nodes took overnight to get back from the "system". It probably ran faster than that, but we were on a "time-share" system on campus.
 
Also, we will need some beta testers who can take data to prove out the process. I can take the data on my system, but I have to clear my living room, set it up, and then remake the room; it's a real PITA, so some help here would be appreciated.


If it would be helpful, I could make a BEM model of a room with a loudspeaker in it. I could then make complex response balloons (as frequency arrays) at any desired resolution, at different distances, and with and without the room.

That way we would have a verification dataset which is "perfect", in that we know exactly what the simulated anechoic response should be and thus can check the error vs. input points easily.
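
One plausible way to score that is sketched below; whether this matches Klippel's exact error definition is a guess on my part, and the -20 dB target is the figure from the earlier discussion.

import numpy as np

def error_db(h_processed, h_reference):
    """Relative error, in dB, between two complex frequency responses."""
    return 20 * np.log10(np.abs(h_processed - h_reference) / np.abs(h_reference))

# e.g. np.all(error_db(h_est, h_bem_anechoic) < -20) as a per-frequency pass/fail check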

I could make, for example, a cardioid system with a horn so that the rear radiation is a bit more complex.

Kees
 
Kees

Thanks for the offer, but I am not sure how one would use such a model. The basis of comparison for this measurement technique is the old way speakers were measured. It's not as if those methods were inaccurate, just inconvenient. How we would get this data on an identical speaker for comparison is not clear to me either, though.

How would you use the model? We will only be gating initially, so room reflections won't be part of the problem.
 
We could compare the BEM model data with the room included to the BEM model data without the room included (simulated anechoic).

I understand that the first steps are with gating (of < 3 ms because of the floor reflection?), but I guess that for the next steps, comparison of the anechoic (BEM model) data with the processed data of the BEM model with the room included could be useful, especially because there won't be any small placement/temperature etc. real-world errors in the data, so it will show the effectiveness of the algorithms.

I think that even for the first steps it might be useful to have data where, for example, 10 degrees off axis is exactly that and not 10.5 degrees.
I personally find it very difficult to keep the angles and point of rotation exact during real measurements. I know that normally this isn't such a big deal, but maybe for verifying the algorithms it would be better not to have these errors.

I would make a true 3D model, so we could also simulate asymmetrical enclosures.

I can also make some real acoustical measurements, but I have a really small (typical Amsterdam) room, so the reflection-free window will be rather small.

Kees
 
I know that normally this isn't such a big deal, but maybe for verifying the algorithms it would be better not to have these errors.

Actually, it is the exact opposite. How robust the algorithms are to real-world errors is the core of success. Without going into too much detail, the inverse problem (finding the source velocity distribution from the pressure) is singular as ka -> 0. The noise (errors) in the measurements gets magnified by this effect, and this runaway situation has to be handled somehow. Without real-world errors in the data, one might never even see the real-world problems.
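
As a generic illustration only (not my actual wave-expansion equations), here is how inverting a nearly singular system turns a tiny measurement error into a large error in the recovered quantities; the amplification is roughly the condition number of the system.

import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-6]])          # nearly singular "transfer" matrix
x_true = np.array([1.0, 2.0])
b = A @ x_true                             # noiseless "measurements"

noise = 1e-6 * rng.standard_normal(2)      # tiny measurement error
x_noisy = np.linalg.solve(A, b + noise)

print(np.linalg.cond(A))                   # roughly 4e6
print(np.abs(x_noisy - x_true))            # error blown up by roughly that factor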
 
...The noise (errors) in the measurements gets magnified by this effect, and this runaway situation has to be handled somehow.

Yes, I replied to Kees on this way back in post #78, so you may have missed it.
This may not be so easy; one could say that as we "zoom in", any errors in the data are expanded too.
Or, as mathematicians would say, "the inverse problem is ill conditioned".
I don't know if this applies exactly to your idea, but my intuition is that there will be serious problems with data sensitivity to noise, measurement inaccuracy, etc.

Do we need source velocity reconstruction?
Not to simulate anechoic chamber response measurements AFAIK.

...Do you think any of the maximum entropy iterative techniques have any use here?

But the maths sounds like fun;)
Do you have a specific reference that you recommend?

Best wishes
David

This does raise the question of how accurate our measurements need to be.
An inexpensive stepper-motor gearhead typically has backlash within one hundredth of a radian.
I expect that's adequate; does anyone have experience?
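
As a quick sanity check (the 1 m microphone distance below is just an assumed figure):

import numpy as np

backlash = 0.01                  # rad, the typical gearhead backlash quoted above
print(np.degrees(backlash))      # ~0.57 degrees of angular uncertainty
r = 1.0                          # assumed microphone distance in metres
print(backlash * r * 1000)       # ~10 mm of arc error at that distance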
 
But the maths sounds like fun;)
Do you have a specific reference that you recommend?

One of the classics: http://bayes.wustl.edu/sfg/why.pdf but there are many. These guys would tax Cray X-MPs in the day, and yes, they used FORTRAN. :D
I was just curious, since optical and acoustic problems use some of the same mathematical tools. Twenty years ago I wrote an FFT in assembly for the 68040's FPU as part of an evening grad-school project, and a day-job customer (GE Medical) asked if they could use it in a CAT scanner.

Here's a totally different approach, with pictures, of sucking a useful image out of noisy, degraded data: ftp://www.adass.org/adass/proceedings/adass98/puetterrc/puetterrc.html
This is possibly just a distraction, but it was fun stuff when the embarrassment of the first Hubble images was obvious.
 
Scott and David

I don't think that either of you has exactly the right idea. This issue is not what is discussed in #78, and from what I could tell from reading Scott's links, it's not the same as image reconstruction.

Theoretically this issue is not a problem, but when it comes time to implement the concepts in code, they will blow up. It took me almost ten years to discover the whys behind this and to develop a workaround. Hence, at this point, this is one area that I am not willing to discuss in detail. It will have to remain proprietary for now.

My experience with these techniques is that once you get past the singularity issue above, they are extremely robust to errors and noise; but you do have to get past the singularity issue. Errors and noise in the measurements just don't fit the model, so they get ignored. The singularity issue is unlike image processing, but the rest is quite a bit like image processing.

David - we do not have to do source reconstruction, but the singularity issue is present even if you don't.
 
One of the classics...

Thanks, that whole site is fun even if it's not exactly what we need.

Theoretically this issue is not a problem, but when it comes time to implement the concepts in code, they will blow up. It took me almost ten years to discover the whys behind this and to develop a workaround...

I had concerns about data sensitivity and a singularity at r = 0, but they were not well defined, hence my question and "AFAIK".
I will think about it some more; I wonder how it plays out for the Klippel NFS.

Best wishes
David
 
It's not the r -> 0 problem as much as it is the k -> 0 problem; r never goes to zero. If you look at my equations 3 and 6, you will see a ratio of Hankel functions. These grow very large as k -> 0, at ever steeper rates as m grows. If this is not handled in the code, the calcs will eventually overflow.

I suspect that Klippel uses a matrix approach and singular value decomposition, which deals with issues like this quite readily. My approach, as shown in my paper, is not matrix based and uses integrals to find the coefficients. Those integrals blow up for m > 0 as ka -> 0, at an ever-increasing rate with m.
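
As a purely numerical illustration of the generic behavior (not my actual equations 3 and 6): the spherical Hankel function magnitudes below grow without bound as ka -> 0, and faster for higher order m, so any un-normalized ratio of such terms will eventually overflow in floating point.

import numpy as np
from scipy.special import spherical_jn, spherical_yn

def spherical_hankel2(m, z):
    # h_m^(2)(z) = j_m(z) - i*y_m(z)
    return spherical_jn(m, z) - 1j * spherical_yn(m, z)

for ka in (1.0, 0.1, 0.01, 0.001):
    mags = [abs(spherical_hankel2(m, ka)) for m in range(6)]
    print("ka = %7.3f:" % ka, ["%.2e" % v for v in mags])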
 