Measurement technology

REW does complex modal calcs for its room sim. Not sure how it could be done any other way with a frequency domain approach, otherwise where would the damping come in?

To my knowledge REW does it the same way that everyone else does, but that is not strictly correct. They add damping as a complex value to the Green's function, but the modes themselves are not complex, they are real. They cannot plot out power flow, for example. The difference between complex modes and real modes is widely misunderstood. Just because there is damping does not mean that the modes are complex.

This is well discussed in Morse and Ingard, except that they don't actually solve the complex modes problem, they just point out the differences.
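
To make the distinction concrete, the usual real-mode formulation looks roughly like this (a sketch in the spirit of that treatment; the notation and the exact form of the loss term are mine):

```latex
% Real-mode expansion: the mode shapes \psi_n stay real; damping enters
% only as an ad hoc loss term \delta_n in each modal denominator.
G(\mathbf{r},\mathbf{r}_0,\omega) \;\approx\;
\sum_n \frac{\psi_n(\mathbf{r})\,\psi_n(\mathbf{r}_0)}
            {\Lambda_n\!\left(k_n^{2} + j\,k\,\delta_n/c \;-\; k^{2}\right)},
\qquad \psi_n \in \mathbb{R}
```

In a true complex-mode solution the wall impedance enters the eigenproblem itself, the mode shapes become complex, and their phase gradients are what carry net power flow toward the absorbing surfaces. With real mode shapes, the time-averaged intensity of each standing-wave mode is identically zero, which is why the real-mode model cannot show power flow.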
 
To my knowledge REW does it the same way that everyone else does, but that is not strictly correct. They add damping as a complex value to the Green's function, but the modes themselves are not complex, they are real.
Ah, understood. That is what REW does (there is no 'they', by the way 🙂).

This is well discussed in Morse and Ingard, except that they don't actually solve the complex modes problem, they just point out the differences.
In Theoretical Acoustics? I wonder how significant the differences are, given the already substantial simplification of treating the space as a cuboid and using lumped values for the absorption of the surfaces. Very off topic for this thread though, apologies for the diversion.
 
I don't know "he" from "they", so I just used the generic term ("he" could have been "her" for all I know!)

Yes, Theoretical Acoustics. The reason that I brought it up is that the differences are very large, making the simplified model somewhat questionable. Given that even in a cuboid the differences are large, the differences between a simplified model and reality must be enormous. (It's not completely off topic, just a little.)

The complex model requires knowledge of the damping factors on each wall, floor and ceiling - kind of a big-time guess, especially at these LFs. And calculation times are excessive.
 
My main intent is the database, not my modal technique. I have no problem with using a lesser-resolution technique, but that opens the door to the "apples to oranges" problem. I would also like to point out that only with my technique can I give you the DI and power response, which I personally find critical. To get those I must use impulse responses and not just frequency responses.
To avoid “apples to oranges” the database would need to contain only polar maps created with the same technique. Perhaps you could consider two databases. All data you receive (impulse or just frequency response) could be processed with a lower-resolution technique and put in database A. If data is impulse and determined to be of high quality and properly time-synced, it could also be processed with your modal technique and added to database O. So you would have database A with lots of speakers all processed in the same way (apples to apples), and a subset of those in database O, all processed in the same way but at higher resolution, including DI and power response (oranges to oranges).
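
Just to make the routing concrete, a minimal sketch of the idea (every name here is a hypothetical placeholder, not anyone's actual code):

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    name: str
    kind: str          # "impulse" or "frequency"
    time_synced: bool  # True only for verified, properly synced impulse sets

# Plain lists stand in for the two proposed databases.
database_A: list = []   # everything, lower resolution: apples to apples
database_O: list = []   # high-quality impulse subset: oranges to oranges

def process_low_resolution(m: Measurement) -> str:
    return f"{m.name}: low-res polar map"             # placeholder processing

def process_modal(m: Measurement) -> str:
    return f"{m.name}: modal polar map + DI + power"  # placeholder processing

def ingest(m: Measurement) -> None:
    # Every submission is processed the same lower-resolution way for A.
    database_A.append(process_low_resolution(m))
    # Only verified, time-synced impulse sets also go into O,
    # processed with the higher-resolution modal technique.
    if m.kind == "impulse" and m.time_synced:
        database_O.append(process_modal(m))
```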

I thought ARTA would not export an array of impulse responses? That is kind of a minimum requirement for usefulness. I am not going to deal with 14 separate files and names on my end. But maybe someone else would write some code that could combine the separate files into a single file of the correct format.
That seems reasonable considering you are the one doing all the processing.
The HOLM format you use is simple enough, just a column delimited format which is easy to convert to.
I have an Excel VBA based file combiner that works on ARTA *.pir files.
If people use the ARTA standard naming convention for the files, I will volunteer to convert the files to your HOLM format.
Naming convention is: <name-prefix>_deg<num>.pir
For example…
Ls5_deg0.pir
Ls5_deg10.pir
Ls5_deg20.pir
Ls5_deg30.pir
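
For anyone who would rather script the combining step than use the VBA combiner, a minimal Python sketch of the sort-and-merge logic might look like this (the .pir reader is a hypothetical placeholder, since the binary layout is ARTA's; it also assumes all files share the same length and sample rate):

```python
import glob
import re

def read_pir_impulse(path):
    """Hypothetical placeholder: return the impulse response samples
    from an ARTA *.pir file as a list of floats. A real implementation
    must parse ARTA's binary header, documented by ARTA."""
    raise NotImplementedError

def combine(prefix):
    # Collect files matching the convention <prefix>_deg<num>.pir and
    # sort them numerically by angle (so deg100 sorts after deg20).
    paths = glob.glob(f"{prefix}_deg*.pir")
    angled = sorted(
        (int(re.search(r"_deg(\d+)\.pir$", p).group(1)), p) for p in paths
    )
    responses = [read_pir_impulse(p) for _, p in angled]

    # Write one column-delimited text file, one column per angle,
    # in the spirit of the HOLM export format (assumed layout).
    with open(f"{prefix}_combined.txt", "w") as out:
        out.write("\t".join(f"deg{a}" for a, _ in angled) + "\n")
        for row in zip(*responses):
            out.write("\t".join(f"{x:.8e}" for x in row) + "\n")
```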

It wouldn’t be difficult to add IFFT functionality to create the HOLM impulse set from groups of *.frd files as well.
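A bare-bones sketch of that IFFT step, assuming plain three-column *.frd files (frequency in Hz, magnitude in dB, phase in degrees, no header lines) and glossing over windowing and out-of-band extrapolation:

```python
import numpy as np

def frd_to_impulse(path, fs=48000, n=8192):
    """Rough sketch: turn a *.frd file into an impulse response via an
    inverse real FFT. fs and n are arbitrary choices here. Out-of-band
    values are extrapolated as zero magnitude, which is crude."""
    f, mag_db, phase_deg = np.loadtxt(path, unpack=True)

    # Complex response on the measured (typically log-spaced) grid.
    h = 10.0 ** (mag_db / 20.0) * np.exp(1j * np.radians(phase_deg))

    # Interpolate real and imaginary parts onto a uniform grid from
    # DC to Nyquist, as the inverse real FFT requires.
    f_lin = np.linspace(0.0, fs / 2.0, n // 2 + 1)
    h_lin = (np.interp(f_lin, f, h.real, left=0.0, right=0.0)
             + 1j * np.interp(f_lin, f, h.imag, left=0.0, right=0.0))

    return np.fft.irfft(h_lin, n=n)  # length-n impulse response
```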
I am still waiting to hear back about the REW *.mdat file format which can contain multiple measurements.
 
You need one measurement for each driver, on each axis you want to model, plus at least one "multiple driver" measurement. This can be pairs of drivers, or three or more drivers, as long as there is enough frequency overlap. The interference measurement is very sensitive, so you can just automatically fit the N-1 relative delays using a fitting algorithm.
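
As I read it, that fit could be set up something like this, given the individual and combined responses as complex frequency responses on a common grid (a sketch of the idea, not anyone's actual code):

```python
import numpy as np
from scipy.optimize import minimize

def fit_relative_delays(H_drivers, H_combined, f):
    """Fit the N-1 relative delays (driver 0 is the reference) that
    best explain the measured multiple-driver interference response.

    H_drivers : list of complex arrays, one per driver, common freq grid
    H_combined: complex array, the combined-driver measurement
    f         : frequency grid in Hz
    """
    w = 2.0 * np.pi * f

    def error(taus):
        total = H_drivers[0].copy()
        for H, tau in zip(H_drivers[1:], taus):
            total = total + H * np.exp(-1j * w * tau)  # apply trial delay
        # Compare magnitudes: the interference nulls and peaks are very
        # sensitive to the relative delays, which is what drives the fit.
        return np.sum((np.abs(total) - np.abs(H_combined)) ** 2)

    res = minimize(error, x0=np.zeros(len(H_drivers) - 1),
                   method="Nelder-Mead")
    return res.x  # relative delays in seconds
```

Because the interference measurement is so sensitive, in practice the optimizer would likely need to be seeded from a coarse scan of candidate delays to avoid local minima.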
This technique would only provide proper synchronization between the drivers for a given angle. It does not provide the proper time relationship between the measurement sets taken at different angles that the modal analysis requires. For example, the attached plot shows the impulse responses for measurements at 60, 80, and 100 deg from one of the HOLM time-synchronized data sets posted earlier in the thread. Note the delay in arrival time as you rotate further around the cabinet.
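
The geometry makes that arrival shift easy to quantify. Assuming the cabinet rotates about an axis a distance d from the effective acoustic source, with the microphone at distance R from the rotation axis and θ = 0 the source facing the mic:

```latex
r(\theta) = \sqrt{R^{2} + d^{2} - 2 R d \cos\theta},
\qquad
\Delta t(\theta) = \frac{r(\theta) - r(0)}{c}
```

That Δt(θ) is exactly the timing information the modal analysis needs preserved; per-angle driver synchronization alone throws it away.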
 

Attachment: Impulse_sync.png
Marcel

I have often thought about just what you are saying. But what does one do if the listening axis is not the normal axis? Then that makes the situation much more complicated.
I can't see why the presentation of a polar map should depend on the listening axis. It just describes how the source radiates, regardless of listening position.

I use polarmaps to find angles for which the DI is near flat and smooth and the direct sound is likewise.
How exactly does this work? You do this just by looking at the polar map? To look at DI you need to know the total radiated power first. I just can't see why this should all be limited to (or be easier with) a linear-scale map, where the radiated power presented can be strongly misleading (e.g. "hot spots" near the zero axis don't mean a lot of energy at that frequency, etc.).

Or maybe we just use polar maps in totally different ways...


The other problem is that a polar map always describes only a single plane of radiation, and if the radiation pattern is not axially symmetric, it is not representative any more. As I understand it, your input data (i.e. the method encouraged here) are taken in the horizontal plane only. So you assume symmetry of radiation and calculate total sound power/DI from that? Shouldn't at least the same set of data for a vertical plane be incorporated? It may work well for your speakers, but how will this method describe e.g. speakers with rectangular horns, or more generally with highly non-symmetric patterns? Or is it simply better than nothing? 🙂
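
For reference, under the axial-symmetry assumption the power and DI computation from a single-plane polar reduces to a sine-weighted average; a minimal sketch of the standard definition (my own formulation, not anyone's actual code):

```python
import numpy as np

def directivity_index(theta_deg, p_mag, axis_index=0):
    """Estimate DI from a single-plane polar, assuming the response is
    axially symmetric about 0 deg.

    theta_deg : angles from 0 to 180 in degrees
    p_mag     : linear pressure magnitudes at each angle
    axis_index: which angle to treat as the listening axis
    """
    theta = np.radians(theta_deg)
    # Mean-square pressure averaged over the sphere: under axial
    # symmetry, <p^2> = (1/2) * integral of p^2(theta) sin(theta) dtheta.
    mean_sq = 0.5 * np.trapz(p_mag**2 * np.sin(theta), theta)
    return 10.0 * np.log10(p_mag[axis_index]**2 / mean_sq)
```

The sin θ weighting is also why on-axis "hot spots" contribute little to the total power, and why a genuinely non-axisymmetric source would need vertical-plane data as well.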
 
Bolserst

Lots of things are possible, but given the low level of interest thus far I am not going to put much further work into this. If down the road we have a growing database then I would consider two databases and IFFT, etc. etc.

Mabat

I explained all the limitations of the technique in my write-up; apparently you did not read it. I think that we do use polar maps differently. Yes, you do need the power response, and that is shown. If the listening axis is not the normal axis then the look of the entire plot would change with a nonlinear scale.
 
Yes, you do need the power response, and that is shown.
Yes, and it's shown in a separate Bode plot to be readable to the eye, as for any particular polar or DI. That's what I was talking about. But for what you are talking about you would not need a polar map at all.

If the listening axis is not the normal axis then the look of the entire plot would change with a nonlinear scale
But why? Since you assume axial symmetry about the normal axis (i.e. 0 deg in the polar map) in all the calculations, the weights don't change - why should they? I still don't understand. So be it.

I explained all the limitations of the technique in my write-up; apparently you did not read it.
I read it the first day the link appeared. I didn't remember that paragraph (wasn't it added later?). Thanks.


BTW, in a few weeks we're going to measure this - http://www.diyaudio.com/forums/multi-way/261419-os-waveguide-profile-2.html#post4047332
So I'll do it also the way you described and send you the data.
 
To avoid “apples to oranges” the database would need to contain only polar maps created with the same technique. Perhaps you could consider two databases.

Thinking about this, consider:

If there are two databases then we all know what the characteristics of each of them are. In what way is this any different than just having one database and a notation about what these characteristics are? I just see no point in maintaining two different databases if there is a single presentation app.
 
I agree that one or two databases does not make a lot of difference, as long as we clearly know how the data was processed.

What would be nice is if you would also make the raw data available for download (in whatever format you received it, no processing). This would allow other users to compare your plots in their own tools, and would also be "future proof" when new tools or calculation methods become available...
 
Hi _wim_

No I don't think that is reasonable since the raw data is very big. It would tax my website too much. The files that are produced after the analysis are quite small and I have no objection to making those available, but raw data is just too much volume to handle.

I have the color option working - I will post some examples shortly - but Microsoft is still wrestling with my VS problem. This is costing them a whole lot of money as I am now on my third debugging engineer. The problem seems to be deep in the API - way deeper than I would ever have been able to sort out. We are at two weeks and counting!!
 
Sorry for the drift, but I recommend a fresh load of the OS.
The time saved trying to chase down problems can be immense.
If it works, boom you are done. If not, you know it's not an OS problem.
Of course I don't know your computer history or any other facts of the situation, so you could have loaded the OS fresh already.
Fixing computers and SW is what I do. MSCP#2342, 25 years retail repair shop.
 
Don - no, not quite. They have not tried to recreate it, but they have confirmed that it is not a mistake on my part. The error has been traced to the APIs in a recent release of an add-in to Visual Studio (Azure tools). So we know that it is not in the OS. The computer is actually quite new and runs the latest OS, so the possibility of it being contaminated is pretty low anyway.

Years ago the first thing that I would have done was reload the OS and I would do that regularly every couple of years just to be safe. But these days that is less practical and less effective than it used to be.
 