Acoustic Horn Design – The Easy Way (Ath4)

Something like this would probably help to break the last tiny bit of diffraction around 9 kHz on-axis.
(See https://www.diyaudio.com/community/...-design-the-easy-way-ath4.338806/post-7228070)

[Attachments: 1682144303997.png, 1682144476761.png]


I'm still just not sure about the looks...
 
I accidentally generated the WG as octagonal (Mesh.AngularSegments = 8), so I let it solve.
It's without the rear cover (not implemented in 3D yet), so it's a bit different overall, but otherwise it's very smooth. I've always wanted to try the effect of discrete angular segments, and it seems to be a perfectly fine approach...

[Attachments: 1682145704639.png, 1682145979593.png]
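Just to illustrate what the discrete angular segments mean geometrically - a rough sketch, not Ath's actual mesher (the profile and dimensions below are made up): each cross-section circle is replaced by a regular N-gon, so with Mesh.AngularSegments = 8 the round horn becomes a faceted, octagonal one.

```python
import numpy as np

def faceted_sections(profile_x, profile_r, n_segments=8):
    """Approximate an axisymmetric horn by flat angular facets: every
    cross-section circle of radius r(x) is replaced by a regular N-gon
    whose vertices lie on that circle."""
    azimuths = np.arange(n_segments) * 2 * np.pi / n_segments
    rings = []
    for x, r in zip(profile_x, profile_r):
        rings.append(np.column_stack((np.full(n_segments, x),
                                      r * np.cos(azimuths),
                                      r * np.sin(azimuths))))
    return np.array(rings)          # shape (n_sections, n_segments, 3)

# made-up placeholder profile: ~25 mm throat flaring to a ~300 mm mouth
x = np.linspace(0.0, 150.0, 30)                     # axial positions [mm]
r = 12.7 + (150.0 - 12.7) * (x / x[-1]) ** 1.4      # radius [mm]
rings = faceted_sections(x, r, n_segments=8)        # the octagonal version
print(rings.shape)                                  # (30, 8, 3) -> stitch into quads
```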
 
This is with the rear cover, in CircSym mode (no smoothing applied, either in the above graphs or in this one):

[Attachments: 1682235713538.png, 1682235722524.png]


Typically the CircSym mode gives smoother results than a full 3D mesh. I'm not sure that's the case here.

And solved with a higher resolution mesh:

[Attachments: 1682236533857.png, 1682236510701.png]


Seems a bit erratic, so that's probably mostly numerical noise - in reality this will probably also be quite smooth.
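Since no smoothing was applied, here's roughly the kind of fractional-octave smoothing I'd use to separate the numerical noise from the underlying trend - a minimal sketch (the 1/6-octave width and the test data are just placeholders):

```python
import numpy as np

def fractional_octave_smooth(freqs, mag_db, width_oct=1/6):
    """Smooth a magnitude response with a simple moving average whose
    window spans `width_oct` octaves around each frequency point."""
    smoothed = np.empty_like(mag_db)
    for i, f0 in enumerate(freqs):
        lo, hi = f0 * 2 ** (-width_oct / 2), f0 * 2 ** (width_oct / 2)
        mask = (freqs >= lo) & (freqs <= hi)
        smoothed[i] = mag_db[mask].mean()
    return smoothed

# example: a smooth trend plus a small ripple standing in for numerical noise
f = np.logspace(np.log10(200), np.log10(20000), 400)
response = -3 * np.log2(f / 1000) + 0.4 * np.sin(f / 150.0)
print(np.round(fractional_octave_smooth(f, response)[:5], 2))
```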
 
Maybe this could have some appeal after all, especially due to the relative ease of manufacturing.
This is for a 2" throat, just a quick R-OSSE sketch, approx. ø600 x 400 mm -

[Attachments: ATH-BIG-1.PNG, ATH-BIG-2.PNG, ATH-BIG-res.png]


The DI rises (~10° would probably be the best listening axis), but due to the large size it's not so terrible in absolute numbers - thanks to the overall narrow beamwidth, the power response doesn't fall as rapidly as I would have assumed intuitively.
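For reference, the DI curve is essentially the on-axis level minus the power average over all radiation angles - a rough sketch of that calculation for circularly symmetric polar data (the sin θ weighting and the 0-180° layout of the data are my assumptions, not necessarily how the tool computes it):

```python
import numpy as np

def directivity_index(theta_deg, spl_db):
    """Directivity index from circularly symmetric polar data.
    theta_deg: polar angles 0..180 deg (0 = axis), spl_db: SPL per angle at
    one frequency. DI = on-axis level minus the power-average level, with
    each angle weighted by sin(theta), i.e. the solid angle it covers."""
    theta = np.radians(np.asarray(theta_deg, dtype=float))
    spl_db = np.asarray(spl_db, dtype=float)
    p2 = 10.0 ** (spl_db / 10.0)                      # pressure squared
    mean_p2 = np.trapz(p2 * np.sin(theta), theta) / np.trapz(np.sin(theta), theta)
    return spl_db[0] - 10.0 * np.log10(mean_p2)

angles = np.arange(0, 181, 5)
spl = 90.0 - 0.003 * angles ** 2                      # made-up narrowing beam
print(round(directivity_index(angles, spl), 1), "dB")
```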
 
Now it would be interesting to generate a virtual in-room impulse response (i.e. including reflections), using the simulated polar data, to be convolved with a sound track and played via headphones (it would have to include an HRTF). It would certainly still be far from a real experience, but maybe some general trends could be observed (?) - comparing wide/narrow directivity, DI slopes, different room sizes and placements, etc.
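A minimal sketch of how a single arrival (the direct sound or one reflection) could be assembled from those pieces - the polar-data layout, the zero-phase FIR shortcut, the dummy HRIRs and all the numbers below are placeholders, just to show the flow:

```python
import numpy as np
from scipy.signal import fftconvolve

FS = 48000        # sample rate [Hz] - an assumption
C = 343.0         # speed of sound [m/s]

def fir_from_magnitude(freqs, mag_db, n_taps=512):
    """Turn an off-axis magnitude response (arbitrary frequency grid) into
    a linear-phase FIR - the phase of the simulated data is ignored here,
    which is a simplification."""
    f_grid = np.fft.rfftfreq(n_taps, 1.0 / FS)
    mag = 10.0 ** (np.interp(f_grid, freqs, mag_db) / 20.0)
    ir = np.fft.irfft(mag, n_taps)
    return np.roll(ir, n_taps // 2)     # center the peak -> causal filter

def add_arrival(brir, polar_freqs, polar_db, radiation_angle_deg,
                distance_m, hrir_left, hrir_right):
    """Add one arrival (direct sound or a reflection) into a binaural IR:
    speaker response at the radiation angle -> 1/r and propagation delay
    -> HRIR pair for the direction the sound reaches the listener from."""
    nearest = min(polar_db, key=lambda a: abs(a - radiation_angle_deg))
    src_ir = fir_from_magnitude(polar_freqs, polar_db[nearest]) / distance_m
    n0 = int(round(distance_m / C * FS))
    for ch, hrir in ((0, hrir_left), (1, hrir_right)):
        contrib = fftconvolve(src_ir, hrir)
        brir[ch, n0:n0 + len(contrib)] += contrib
    return brir

# made-up data, just to show the flow
freqs = np.linspace(20.0, 20000.0, 200)
polar = {a: -0.002 * a * (freqs / 1000.0) for a in range(0, 181, 10)}  # fake off-axis rolloff [dB]
hrir_l = hrir_r = np.r_[1.0, np.zeros(63)]                             # dummy HRIRs
brir = np.zeros((2, FS))                                               # 1 s binaural IR
brir = add_arrival(brir, freqs, polar, 0, 2.5, hrir_l, hrir_r)         # direct sound
brir = add_arrival(brir, freqs, polar, 55, 3.4, 0.7 * hrir_l, 0.7 * hrir_r)  # one damped reflection
```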
 
Yeah, that would be very interesting. I tried to figure it out with the Web Audio API last year, which is relatively simple to use, but I didn't have the skills to get actual loudspeaker directivity data in :) There is only very rough directivity support in the Web Audio API, like the simple "conical" directivity of a single point source. For the real thing we'd need full balloon data, multiple sources and an at least partially realistic room. I think architectural acoustics software like Odeon can do such sims, but it's so expensive that it's impossible for a hobbyist to play with.
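For reference, the "conical" directivity is just the PannerNode cone gain from the Web Audio spec: full gain inside the inner cone, coneOuterGain outside the outer cone, and a linear ramp in between - with no frequency dependence at all, which is why it can't stand in for a real loudspeaker balloon. A rough sketch of that rule (shown in Python; the angles and gains are arbitrary example values):

```python
def cone_gain(angle_deg, inner_angle=60.0, outer_angle=180.0, outer_gain=0.25):
    """Web Audio style cone directivity: angle_deg is the angle between the
    source's orientation and the direction to the listener. The gain is
    broadband - the same at 100 Hz and at 10 kHz."""
    half_in, half_out = inner_angle / 2, outer_angle / 2
    if angle_deg <= half_in:
        return 1.0
    if angle_deg >= half_out:
        return outer_gain
    x = (angle_deg - half_in) / (half_out - half_in)
    return 1.0 + x * (outer_gain - 1.0)

print([round(cone_gain(a), 2) for a in (0, 30, 60, 90, 120)])
```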
 
Balloon data is available from JBL, for example, in EASE format I believe: https://jblpro.com/products/m2#downloads
One could try to get that into some simple virtual environment.

It would be beautiful if one could export data from VituixCAD (or VACS, which I think can export balloon data), upload it to a piece of software or a webpage, set the width, height and depth of a simple rectangular room, set the stereo triangle and toe-in, and listen. Real-time adjustability, swapping speakers and repositioning things. The room could be quite simple - say, a single reflection/absorption coefficient per wall and some assumption for furniture damping - so one could roughly emulate the room they have. Including the reflections and diffraction of the furniture would be too complicated, I guess.
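A minimal sketch of the geometric part of that - first-order image sources for a shoebox room with one broadband absorption coefficient per wall (all positions and coefficients below are made up; the angles needed for the directivity lookup would come from the image-to-listener vectors in the same way):

```python
import numpy as np

def first_order_images(src, room, alpha):
    """First-order image sources for a shoebox room.
    src: speaker position (x, y, z); room: (W, D, H) with walls at 0 and at
    the room dimension; alpha: absorption coefficient per wall.
    Returns a list of (image_position, pressure_reflection_factor)."""
    images = []
    for axis, size in enumerate(room):
        for wall, coord in (("lo", 0.0), ("hi", size)):
            img = np.array(src, dtype=float)
            img[axis] = 2 * coord - img[axis]           # mirror across the wall
            key = f"{'xyz'[axis]}_{wall}"
            images.append((img, np.sqrt(1 - alpha[key])))
    return images

room = (4.0, 6.0, 2.5)                                  # W x D x H [m]
listener = np.array([2.0, 4.0, 1.2])
left_spk = np.array([1.1, 1.0, 1.0])                    # rough stereo triangle
alpha = {"x_lo": 0.1, "x_hi": 0.1, "y_lo": 0.2, "y_hi": 0.3,   # side / front / back walls
         "z_lo": 0.4, "z_hi": 0.15}                            # floor (rug) / ceiling

for img, refl in first_order_images(left_spk, room, alpha):
    dist = np.linalg.norm(listener - img)
    print(f"image at {np.round(img, 2)}  path {dist:.2f} m  "
          f"gain {refl / dist:.3f}  delay {dist / 343 * 1000:.1f} ms")
```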
 
Probably not, but that's the vision I dream about, in short :D Connecting graphs with perceived sound quality without spending years building stuff.

Data from VCAD would be excellent as it would (hopefully) allow including all the xo stuff as well.

I guess the main thing would be to hear directivity in play - to find out how listening distance relates to the DI at some particular frequency. I'd like to investigate the audibility of vertical early reflections, flutter echo and so on, the importance of positioning, diffraction, lobing, MTM directivity bandwidth. A million things. That's all :D A quite simple environment would do, as long as the speakers and the source data were accurately included and the listener and speakers could be freely positioned in the virtual room. If it worked with reasonable accuracy, I think it would debunk many myths and help to get to better sound in general.

I mean, we can assume that when speaker directivity is "optimal", whatever that is, then the sound is optimal, whatever that is. Is the sound optimal outside? Is it optimal with headphones? What is optimal? It's going to sound like my room anyway, because that's what I have - do I prefer low-DI or high-DI speakers, and from what frequency on? Is there a big difference between a smooth DI through the crossover and not? Is there an audible difference between windowed quasi-anechoic measurements, with their error margin, and the more accurate anechoic/Klippel data available online?

Even if the virtual environment weren't exactly like reality, it would still be possible to zero in on some preferences, I think - to get more insight into various things, if nothing else.
 
Here is one paper investigating game engines; it says they didn't test diffraction or directivity. It mentions Steam Audio occlusion having some directivity properties, but a screenshot shows it's a simple directivity system similar to what the Web Audio API provides.
https://www.researchgate.net/public...and_a_middleware_for_interactive_audio_design

Here is another paper, where source directivity is included: http://gamma-web.iacs.umd.edu/DIRECTIVITY/docs/paper.pdf

I hope some of you can crack it and get "accurate" speaker directivity included :)
 
It would need to combine the polar response of a source with an HRTF (which "varies significantly from person to person", but I believe that something would be better than nothing). I'd be happy with the first- and second-order reflections for a start, without any scattering. The room boundaries could probably be handled as a simple frequency-independent damping. All this wouldn't be a small task, I assume. The principle is simple, though, and "assembling" the total impulse response wouldn't have to be that difficult.

This could be helpful: http://recherche.ircam.fr/equipes/salles/listen/index.html
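A sketch of that "assembling" step - everything that isn't stated above is a placeholder here (the toy HRTF, the gains, the arrival list); in practice they'd come from the image-source geometry, the simulated polars and a measured HRTF set such as the LISTEN one linked above:

```python
import numpy as np
from scipy.signal import fftconvolve

FS, C = 48000, 343.0        # sample rate [Hz] and speed of sound [m/s] - assumptions

def assemble_brir(arrivals, hrtf, length_s=0.5):
    """Sum a list of arrivals into a binaural impulse response.
    arrivals: (distance_m, gain, azimuth_deg) per path - direct sound plus
    first/second-order image sources; gain already includes the
    frequency-independent wall damping. hrtf(azimuth) -> (hrir_L, hrir_R)."""
    brir = np.zeros((2, int(length_s * FS)))
    for distance, gain, azimuth in arrivals:
        n0 = int(round(distance / C * FS))              # propagation delay
        hl, hr = hrtf(azimuth)
        for ch, h in ((0, hl), (1, hr)):
            seg = gain / distance * h                   # 1/r spreading
            brir[ch, n0:n0 + len(seg)] += seg
    return brir

# stand-ins: a crude "HRTF" (pure interaural delay) and three arrivals
def toy_hrtf(azimuth_deg, itd_max=0.7e-3):
    itd = int(abs(np.sin(np.radians(azimuth_deg))) * itd_max * FS)
    left = np.r_[np.zeros(itd if azimuth_deg > 0 else 0), 1.0]
    right = np.r_[np.zeros(itd if azimuth_deg < 0 else 0), 1.0]
    return left, right

arrivals = [(2.5, 1.00, -30),    # direct sound from the left speaker
            (3.4, 0.85, -55),    # a first-order side-wall reflection
            (5.1, 0.70, -20)]    # a second-order path, more damping
brir = assemble_brir(arrivals, toy_hrtf)

track = np.random.randn(2 * FS)                    # mono stand-in for a sound track
binaural = np.stack([fftconvolve(track, brir[ch]) for ch in (0, 1)], axis=1)
# `binaural` could now be written to a file and auditioned on headphones
```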
 
Here seems to be something useful https://www.virtualacoustics.org/VA/documentation/configuration/

From a quick look it seems some directivity files are being loaded. Hopefully it's something that works and isn't too tough to set up :)
Yeah, that framework seems like it already does all that's needed. That and this one: https://www.ita-toolbox.org/

"Generally, both sound sources and receiver can have a directivity. For sound sources, VA expects a energetic directivities with a one-third octave band resolution. For sound receivers, VA expects an HRTF data set."

- What are energetic directivities? :)
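My guess (an assumption on my part, not something from the VA docs): "energetic" likely means magnitude-only data with the energy averaged within each one-third octave band, i.e. the phase is discarded. A rough sketch of reducing one direction of a complex balloon to that form:

```python
import numpy as np

def third_octave_energetic(freqs, h_complex, f_lo=50.0, f_hi=20000.0):
    """Reduce a complex frequency response (one direction of a balloon) to
    band-energy levels in one-third octave bands: average |H|^2 inside each
    band and discard the phase."""
    centers = []
    fc = f_lo
    while fc <= f_hi:
        centers.append(fc)
        fc *= 2 ** (1 / 3)
    levels = []
    for fc in centers:
        lo, hi = fc * 2 ** (-1 / 6), fc * 2 ** (1 / 6)
        band = (freqs >= lo) & (freqs < hi)
        energy = np.mean(np.abs(h_complex[band]) ** 2) if band.any() else np.nan
        levels.append(10 * np.log10(energy))
    return np.array(centers), np.array(levels)

# made-up response: a delay plus a gentle high-frequency rolloff
f = np.arange(20.0, 24001.0)
h = np.exp(-1j * 2 * np.pi * f * 1e-3) / (1 + (f / 8000.0) ** 2)
fc, db = third_octave_energetic(f, h)
print(np.round(fc[:5], 1), np.round(db[:5], 2))
```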
 