Nearfield/Farfield curve splicing

I still believe that those peaks are out of band. Could you plot both on the same plot using something more comparable, like a sphere of the same volume as the enclosure, with a driver size more like a real driver? I am willing to bet that below 500 Hz the two will not differ by more than one dB. In my work I fit between 200 and 300 Hz, sometimes as high as 400 Hz, never 500 Hz.
Here is a comparison with a cube of the same volume as the sphere. Specifically, a cube of side L=48cm. This is a good check of the very-low-frequency behaviour of any diffraction algorithm. Anyhow, we approach a 1dB error at about 250Hz. So, your claim about the 1dB deviation at 500Hz would certainly hold for a cube of side L=48/2=24cm.
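
The scaling argument in that last sentence is easy to check numerically. A rough sketch (the 250 Hz figure is simply read off the attached plot, and the driver details are ignored):

```python
import numpy as np

# Olson's sphere: 24" diameter -> radius 0.305 m
r = 0.305
V = 4.0 / 3.0 * np.pi * r**3          # enclosure volume, ~0.119 m^3
L = V ** (1.0 / 3.0)                  # equal-volume cube side, ~0.49 m (the ~48 cm cube)

# Diffraction depends only on frequency*size (ka), so shrinking every
# dimension by a factor of 2 pushes a given deviation up by a factor of 2:
f_1dB_48cm = 250.0                    # Hz, read off the plotted comparison
f_1dB_24cm = 2.0 * f_1dB_48cm         # ~500 Hz for the 24 cm cube
print(f"L = {L:.2f} m, 1 dB point scales from {f_1dB_48cm} Hz to {f_1dB_24cm} Hz")
```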
 

Attachment: shapes.png
That would be my concern - the primary baffle peak above the step frequency. Since we all tend to use rectilinear boxes, we will almost always have to deal with some significant baffle diffraction peaking at the top of the 4Pi - 2Pi transition. This peak contributes to, and extends, the step in amplitude, and its presence is part of what I use to align the near-field and far-field response data.

I recognize the potential errors in my model at lower frequencies, but there is a point where the near-field data is fully blended in, and it is that data which controls the level, not the diffraction model. What I need is reasonable accuracy in the 150 - 1200 Hz range to establish the correct alignment for blending the data.
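
A minimal sketch of this kind of splice, assuming both curves are already on a common log-frequency grid in dB; the band edges, weighting, and names are illustrative assumptions, not the actual implementation:

```python
import numpy as np

def splice_nf_ff(freq, nf_db, ff_db, f_lo=300.0, f_hi=1000.0):
    """Crossfade a diffraction-corrected nearfield curve (nf_db) into a
    gated farfield curve (ff_db) between f_lo and f_hi (levels in dB)."""
    # raised-cosine weight: 0 below f_lo (all nearfield), 1 above f_hi (all farfield)
    t = np.clip((np.log(freq) - np.log(f_lo)) / (np.log(f_hi) - np.log(f_lo)), 0.0, 1.0)
    w = 0.5 - 0.5 * np.cos(np.pi * t)
    return (1.0 - w) * nf_db + w * ff_db

# toy usage: two flat curves 2 dB apart blend smoothly across 300 - 1000 Hz
f = np.logspace(np.log10(20.0), np.log10(20000.0), 400)
spliced = splice_nf_ff(f, np.full_like(f, 90.0), np.full_like(f, 88.0))
```

Crossfading magnitudes in dB is a simplification; the same weighting can be applied to complex responses if phase matters.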

Jeff
Hi Jeff,

I agree that your blending will be perfect if it's done over the range where the diffraction is accurate. However, if, for example, the modeled diffraction makes an error of X dB at 80Hz, then I think your anechoic reconstruction will also have the same X dB error at 80Hz.
 
I still believe that those peaks are out of band. Could you plot both on the same plot using something more comparable, like a sphere of the same volume as the enclosure, with a driver size more like a real driver? I am willing to bet that below 500 Hz the two will not differ by more than one dB. In my work I fit between 200 and 300 Hz, sometimes as high as 400 Hz, never 500 Hz.

Ah, Jeff beat me to it... but here is another more crude plot, attached. I superimposed Olson's figure for the 24" wide sphere on top of a simulation of the response from a 2" diameter driver centered on a 24"H, 24"W cube that I calculated using a popular baffle diffraction simulator from the DIY community. The message is the same regarding the lowest frequency peak in the oscillations, here around 500Hz.

For my measurements, I blend them together starting at 200-300Hz and going up to 800-1000Hz. This is right where the big hump is located. The errors in the DIY model below 200Hz are inconsequential because they are below the blending zone, so the data there is taken from the nearfield measurement only.

It's interesting to compare the result here with that generated using the eigenfunction expansion... although we are using different sized objects. Thanks for posting that, Jeff. Maybe we can overlay the two models for the same size object?


.
 

Attachment: 24W_sphere_vs_24W24H_cube.JPG
Here is a comparison with a cube of the same volume as the sphere. Specifically, a cube of side L=48cm. This is a good check of the very-low-frequency behaviour of any diffraction algorithm. Anyhow, we approach a 1dB error at about 250Hz. So, your claim about the 1dB deviation at 500Hz would certainly hold for a cube of side L=48/2=24cm.

Thanks John that's a big help.

As I said, I seldom fit much above 250 Hz, and 1 dB is certainly acceptable to me. But isn't the real point that I have a spherical model which, as you point out, is quite readily available and has been for decades? A cube takes a capability that neither I nor any other DIYer (or most professionals, for that matter) actually has. So while I would love to have that extra 1 dB of accuracy and the higher frequency capability, it's just not practical.

And remember, I have been doing this for a long time, well before your paper on the subject. I am quite satisfied with it given my limitations.

And just so people understand: the diffraction model that you calculate is different for every box. There is no closed-form, calculable solution for it. That makes it inaccessible to the rest of us.
 
The errors in the DIY model below 200Hz are inconsequential because they are below the blending zone, so the data there is taken from the nearfield measurement only.
Hi Charlie,

I responded to Jeff B. along these lines already. I would say errors in the DIY diffraction model are of no consequence above the blending region. Below the blending region, however, you are still using the diffraction gain curve to correct the nearfield. Typically there is still gain at 100Hz. In other words, if the diffraction gain curve goes totally crazy at 100Hz, then your reconstructed anechoic response will also go crazy at 100Hz. The 3rd plot in post 47 illustrates the effect.
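
To put the same point in arithmetic, with hypothetical numbers and array names: below the blend band the reconstruction is just the corrected nearfield, so a model error passes straight through.

```python
import numpy as np

f = np.array([50.0, 80.0, 100.0, 150.0])              # Hz, below the blend band
nearfield_db = np.array([88.0, 88.5, 89.0, 89.5])     # measured NF level (hypothetical)
diffraction_gain_db = np.array([0.3, 0.8, 1.2, 2.0])  # modeled 4pi->2pi gain (hypothetical)

# below the blend band the "anechoic" curve is nearfield + modeled gain, so an
# X dB error in the gain at 100 Hz shows up as the same X dB error in the result
anechoic_db = nearfield_db + diffraction_gain_db
```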
 
Thanks John that's a big help.
LOL! I went to the optometrist the other day and the young assistant commented on my last name. I mentioned that in the 90s I used to get called John (rather than Jeff) quite often, after John Candy. She had no idea who he was!

And just so people understand: the diffraction model that you calculate is different for every box. There is no closed-form, calculable solution for it. That makes it inaccessible to the rest of us.
Right. It's a fairly complex algorithm, although certainly more elegant than BEM or FEM. The convergence properties are subtle and it's limited to intermediate frequencies. Note that to construct the full diffraction curve, I splice the MFS method to a second method which is described in the paper. So, to add insult to injury, there are two separate theories, not just one.
 
Hi Charlie,

I responded to Jeff B. along these lines already. I would say errors in the DIY diffraction model are of no consequence above the blending region. Below the blending region, however, you are still using the diffraction gain curve to correct the nearfield. Typically there is still gain at 100Hz. In other words, if the diffraction gain curve goes totally crazy at 100Hz, then your reconstructed anechoic response will also go crazy at 100Hz. The 3rd plot in post 47 illustrates the effect.

Ah, I see what you are saying now. Sorry for the thick skull over here :)

...if there was only a way to get accurate info in the 75-300Hz band without having to splice different measurements together. Now that would be convenient to have in the ol' toolbox.

.
 
Ah, I see what you are saying now. Sorry for the thick skull over here :)

...if there was only a way to get accurate info in the 75-300Hz band without having to splice different measurements together. Now that would be convenient to have in the ol' toolbox.

.

I leave to have dinner and the thread goes nuts :rolleyes:. Yes, Jeff C. is correct: since we add the diffraction model to the measured near-field data, we extend any error in the diffraction model into that data too. So, if we are off 2 dB at 80 Hz in the model, we will be off 2 dB in the final simulation as well.

Fortunately, one of the rules of crossover design is that we should typically use the level at 150 - 200 Hz as our reference level, and hopefully we will be in pretty good shape in this area.
 
I guess you guys have access to AES papers. If I remember right, it was in one of the 2012 or 2013 Journals: an article by some Danish guys about using a spherical model as the basis of speaker simulation software. Good results. Sorry I can't find it now, but I will post a link later on.
 
Jeff C

Would you agree that if the cabinet had large edge radii, the spherical model would become a better fit, and at some point it would be even closer to reality than a model with a square edge?

The model I wrote for our spreadsheet does include driver directivity and how that weights the "illumination" of the edge, as well as an approximation of the effect of the selected edge radius.

Jeff B.
 
Jeff C

Would you agree that if the cabinet had large edge radii, the spherical model would become a better fit, and at some point it would be even closer to reality than a model with a square edge?
For most DIYers, 3/4" is probably a typical edge radius, and we all know that is not going to have any significant effect on the first few extrema of the diffraction gain. And even for edge radii as large as 2" (on, say, a 12" wide baffle that is 24-36" tall), I suspect the diffraction will retain most of the features of the rectangular box in the 500-1kHz range.

Can you show an example of a box you consider a good approximation to a sphere?
 
The problem is that I am not talking about 500 Hz - 1 kHz. I would not use data beyond 300 Hz. Are you not able to simulate a radiused edge? Mine are typically 1.5" and some have been 2". In the limit, as the radius becomes larger, the diffraction has to approach that of the sphere. The unanswered question is how fast.
 
The problem is that I am not talking about 500 Hz - 1 kHz. I would not use data beyond 300 Hz. Are you not able to simulate a radiused edge? Mine are typically 1.5" and some have been 2". In the limit, as the radius becomes larger, the diffraction has to approach that of the sphere. The unanswered question is how fast.

I agree that the directivity will be inconsequential in the region we are discussing. However, the only point I would still quibble over is that I believe the slope in the 200 - 500 Hz transition region will be too shallow using a spherical model. The reality of a rectilinear enclosure, even with a healthy radius, will still have a steeper slope as it moves toward the first baffle peak. Depending on the technique used to merge the data, this may or may not make a difference.
 

I REALLY dislike the fact that the AES makes even members (I am a member) pay for the publications (remember, lots of the research was publicly funded), so I am glad that lots of authors publish on their websites too. Here is the Ronald Aarts publication:
http://www.extra.research.philips.com/hera/people/aarts/RMA_papers/aar11pu4.pdf
 
I agree that the directivity will be inconsequential in the region we are discussing. However, the only point I would still quibble over is that I believe the slope in the 200 - 500 Hz transition region will be too shallow using a spherical model. The reality of a rectilinear enclosure, even with a healthy radius, will still have a steeper slope as it moves toward the first baffle peak. Depending on the technique used to merge the data, this may or may not make a difference.

I agree that it would make a difference, but to me the difference is inconsequential. One should not use near-field data any higher in frequency than absolutely necessary, because strange things happen in the near field that do not propagate to the far field. At the lower frequencies, all shapes with the same total volume act the same.

And what I don't understand is that the diffraction curve for a box is different for every box shape and size. If the user does not have Jeff's program then what do they do? Using a single diffraction curve would be no more accurate than my using a spherical model - probably less. At least in the spherical model I can adjust the volume and the size of the radiator. This would be better than using a single diffraction curve for all speakers.

Granted the ideal is to use Jeff's stuff to model the exact enclosure being tested. Is that going to happen?

Or is there something else that I am not seeing?
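
For reference, the spherical model has a textbook series solution - radiation from a vibrating cap on a rigid sphere - which is what lets both the volume (via the radius) and the radiator size (via the cap angle) be adjusted. A minimal sketch of that series, normalized to its low-frequency level; the parameter names and defaults are illustrative assumptions, not anyone's actual code:

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

def sphere_diffraction_gain_db(freqs, radius=0.305, cap_half_angle_deg=10.0,
                               theta_deg=0.0, n_terms=60, c=343.0):
    """Far-field level of a rigid cap vibrating on a rigid sphere, relative to
    its low-frequency value, i.e. the 4pi->2pi 'baffle step' of a sphere.
    radius sets the enclosure volume, cap_half_angle_deg the radiator size."""
    ka = 2.0 * np.pi * np.asarray(freqs, dtype=float) * radius / c
    x0 = np.cos(np.radians(cap_half_angle_deg))    # cap edge in cos(theta)
    mu = np.cos(np.radians(theta_deg))              # observation angle
    p = np.zeros_like(ka, dtype=complex)
    for n in range(n_terms):
        # Legendre expansion coefficient of the cap's velocity distribution
        if n == 0:
            u_n = 0.5 * (1.0 - x0)
        else:
            u_n = 0.5 * (eval_legendre(n - 1, x0) - eval_legendre(n + 1, x0))
        # derivative of the outgoing spherical Hankel function h_n^(2)(ka)
        dh = (spherical_jn(n, ka, derivative=True)
              - 1j * spherical_yn(n, ka, derivative=True))
        # far-field radial factor h_n^(2)(kr) ~ i^(n+1) exp(-ikr)/(kr);
        # the common factors drop out in the normalization below
        p += u_n * (1j ** (n + 1)) / dh * eval_legendre(n, mu)
    return 20.0 * np.log10(np.abs(p) / np.abs(p[0]))

f = np.logspace(np.log10(20.0), np.log10(2000.0), 200)
gain = sphere_diffraction_gain_db(f)   # rises by roughly 6 dB through the transition
```

Increasing radius moves the transition down in frequency; a larger cap half-angle stands in for a larger radiator.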
 