Yeah, I would say it's the right time to leave these crossover simulations, acquire real data and continue with that. In a way it's easier (if you have the space). And there was a time, not so long ago, when this was all done with passive filters, imagine that 🙂
Yeah, real measurements for sure. It's nice to experiment with sims, as long as the data is somewhat reliable, to scope how much playroom there is with the xo and the sizes and shapes of things. For example the vertical listening window, as touched on in a few posts recently. All this is unnecessary if one goes with the pre-existing stuff, but I'm customizing the printables already, so I'm checking whether it's worth it to also change the profile a bit.
Turns out 3D printing is an art in itself, and all kinds of tests and prototyping are needed even with ready-made models, so I've had to learn how to manipulate things and might as well make a custom device, if for no other reason than curiosity.
Quote: "Also remember that Vituix expects the point of rotation for the measurements to be at the baffle face, the same as CTA-2034. Rotating around the true acoustic centre and processing with Vituix may give varied results."

All in all, there is nothing new in my effort and exercise, except a good reminder to always double-check the data.
Hi fluid, yeah, that's what I was chasing: make the simulation data and the whole procedure match the VituixCAD recommendation/expectation for measurements and how they are used in the simulator. Initial attempts a few days ago yielded a large error, but that turned out to be bad data. There is a small difference in this particular example, mainly far off-axis, as demonstrated by the last three attachments in https://www.diyaudio.com/community/...-design-the-easy-way-ath4.338806/post-7831828 . It also makes a small difference in DI, because the far off-axis data is late and lower in SPL when the waveguide is rotated around the acoustic center, compared to rotating it around the mouth plane.
Do you see any mistakes/errors in trying to comply with the standard procedure? The images in the last post were done like this: the waveguide is rotated at the "baffle plane", as is the woofer box, both on their own on-axis and at the same distance. Only the Y coordinate is adjusted in the simulator to reflect reality: the waveguide atop the woofer box, mouth at the baffle plane.
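To see why rotating around the mouth/baffle plane versus the true acoustic center changes the far off-axis arrival time and level, here is a minimal geometry sketch in Python (my own illustration with made-up numbers, not VituixCAD's internal math): the mic sits at a fixed distance from the rotation axis and the effective source sits some depth behind that axis, so the mic-to-source distance, and with it the delay and 1/r level, varies with the rotation angle.
Code:
import numpy as np

# Assumed numbers: mic 1 m from the rotation axis, effective acoustic source
# 100 mm behind the rotation (mouth/baffle) plane, c = 343 m/s.
R = 1.0      # mic distance from the rotation axis [m]
d = 0.10     # source depth behind the rotation plane [m]
c = 343.0    # speed of sound [m/s]

theta = np.radians(np.arange(0, 181, 30))

# Rotating about the mouth/baffle plane: the source swings around the axis,
# so its distance to the mic changes with angle.
dist_mouth_rot = np.sqrt(R**2 + 2 * R * d * np.cos(theta) + d**2)

# Rotating about the acoustic center itself: the source stays at distance R.
dist_ac_rot = np.full_like(theta, R)

delay_diff_ms = (dist_mouth_rot - dist_ac_rot) / c * 1e3      # arrival-time difference
level_diff_db = 20 * np.log10(dist_ac_rot / dist_mouth_rot)   # 1/r level difference

for deg, dt, dl in zip(np.degrees(theta), delay_diff_ms, level_diff_db):
    print(f"{deg:5.0f} deg: {dt:+6.3f} ms, {dl:+5.2f} dB")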
What if there's no obvious baffle? Isn't this basically irrelevant? The crucial thing is to measure at a long enough distance to avoid near-field effects; the rest is marginal.
The baffle is just a handy common reference, which helps arrange the measurements in a standardized and organized way: the microphone 1 m (or whatever it takes to get far enough) away from the baffle, rotating about the baffle axis, spinoramas on-axis for each transducer, with the same distance from mic to rotation axis (the baffle). If a transducer has a huge offset from the baffle, then move the DUT or the mic to measure at the distance to the "stepped baffle" and compensate with the Z coordinate in the simulator.
A dual-channel measurement, with all impulses windowed with the same window, leaves the real acoustic offsets in the data, so the user doesn't have to guess them and manipulate Z accordingly. All of this is just to make the simulation reflect reality as closely as possible, with as reliable and low-error data as is feasible in home conditions with one mic.
This procedure could be done in multiple ways, with anything as the common reference; it's just the operator's responsibility to make sure the data is good when compiling it together in a simulator. By good I mean that it reflects reality, the complete speaker. For example, here, if the waveguide is rotated at the throat but used in the system with its mouth at the baffle plane, the waveguide data needs to be moved back with the Z coordinate to reflect this reality. The other option is to rotate around the mouth axis (the common reference axis with the woofer), and then no Z offset is needed because the data already reflects reality. The baffle plane here is just the common plane that all the transducers (their measurement data) on the DUT are referenced to.
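To make the Z move concrete, here is a minimal sketch (my own illustration, not VituixCAD's code) of what shifting a driver's data back by z amounts to: an extra propagation delay of z/c, i.e. a frequency-dependent phase rotation applied to the measured complex response. The function name, array names and sign convention (positive z = further from the mic) are my assumptions.
Code:
import numpy as np

C = 343.0  # speed of sound [m/s]

def apply_z_offset(freq_hz, H, z_m, c=C):
    """Move a measured complex response H(f) back by z_m metres:
    an extra delay of z_m/c, i.e. phase -2*pi*f*z_m/c; magnitude is unchanged."""
    return H * np.exp(-1j * 2 * np.pi * freq_hz * (z_m / c))

# Example with made-up data: a flat, zero-phase response moved back by 50 mm.
z = 0.050
freq = np.array([1000.0, 2000.0, 4000.0, 8000.0])
H = np.ones_like(freq, dtype=complex)
H_shifted = apply_z_offset(freq, H, z)

print(f"z = {z*1e3:.0f} mm -> extra delay {z / C * 1e6:.1f} us")
print("magnitudes unchanged:", np.allclose(np.abs(H_shifted), np.abs(H)))
for f in freq:
    print(f"{f:6.0f} Hz: {-360.0 * f * z / C:+8.1f} deg of added phase")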
ps. it's snowing outside just now, the bicycle season is over, cross-country skis out of the closet 😀
Quote: "Do you see any mistakes/errors in trying to comply with the standard procedure? The images in the last post were done like this: the waveguide is rotated at the "baffle plane", as is the woofer box, both on their own on-axis and at the same distance. Only the Y coordinate is adjusted in the simulator to reflect reality: the waveguide atop the woofer box, mouth at the baffle plane."

I'm afraid there were too many words and posts to know for sure, but the above sounds right. It matters more if you are going to virtually move the sources in Vituix, because that is where the data is altered most from what was actually measured or simulated elsewhere.
Quote: "What if there's no obvious baffle? Isn't this basically irrelevant? The crucial thing is to measure at a long enough distance to avoid near-field effects; the rest is marginal."

It can present a problem with some constructions, but it is rare that there is no practical baffle or baffle line to pick. Whether it makes much difference depends on how far out it is and whether Vituix is being used just to represent the measured data or to manipulate it as above.

It is important to try to follow the guidance and standards as closely as reasonably practical if the data is to be compared against other sources. There have been a lot of times where not following the measurement instructions has led to good-looking graphs, bad-sounding speakers and much head scratching to work out why. Some of the standards are as clear as mud, but kimmo does a good job of trying to make Vituix a useful design and comparison tool:
https://kimmosaunisto.net/Misc/speaker_review_feedback.pdf
Quote: "It can present a problem with some constructions but it is rare [...]"

What problem? I can imagine a free-standing waveguide with a basically naked woofer, and this would be a loudspeaker like any other. This is just an illustration that it doesn't really matter what exact point is chosen as a reference. You simply choose one, somewhere in the center of the construction most obviously, and then simply measure far enough from that point. If there are "rules" that are tied to a "baffle", they are superfluous and only obfuscate the real issues, IMO.
Is any temporary oscillation a transient?
A transient is just something that occurs and then goes away. In physics it is the part of the response signal that damps out, leaving only the steady state.
Without high frequencies no change will be abrupt but that doesn't stop it being transient.
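To put a picture on "the part that damps out": here is a minimal sketch (my own example, assuming a first-order low-pass driven by a sine switched on at t = 0) where the total response splits exactly into a steady-state sinusoid plus an exponentially decaying transient.
Code:
import numpy as np

# First-order low-pass with time constant tau, driven by sin(w*t) starting at t = 0.
# Total response = steady-state sinusoid + exponentially decaying transient.
tau = 5e-3                       # filter time constant [s]
f_drive = 200.0                  # drive frequency [Hz]
w = 2 * np.pi * f_drive
t = np.linspace(0.0, 50e-3, 1000)

A = 1.0 / np.sqrt(1.0 + (w * tau) ** 2)    # steady-state amplitude
phi = -np.arctan(w * tau)                  # steady-state phase lag

steady = A * np.sin(w * t + phi)                   # what is left once the transient dies
transient = -A * np.sin(phi) * np.exp(-t / tau)    # chosen so the total starts from zero
total = steady + transient

# After ten time constants the transient is negligible and only the steady state remains.
print("residual transient at t = 50 ms:", abs(total[-1] - steady[-1]))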
This is the waveguide definition for the asymmetrical freestanding device from the post cited below:
Code:
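; The sin(p)^2 / cos(p)^2 terms below make the R-OSSE parameters vary with the
; angle p around the axis, which is what produces the asymmetric (different
; horizontal vs. vertical) profile.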
R-OSSE = {
R = 160 - 33*sin(p)^2
r0 = 12.7
a0 = 10.85
a = 60 -18*sin(p)^2
k = 0.4 + 2.5*sin(p)^2
r = 0.18 - 0.07*cos(p)^2 + 0.05*sin(p)^2
b = 1.2 + 0.2*cos(p)^2 - 0.5*sin(p)^2
m = 0.8 + 0.1*sin(p)^2
q = 5 - 1*sin(p)^2 + 1*cos(p)^2
}
Mesh.LengthSegments = 10
Mesh.AngularSegments = 20
Mesh.ThroatResolution = 5
Mesh.MouthResolution = 30
Mesh.WallThickness = 8
Mesh.RearResolution = 20
Mesh.ZMapPoints = 0.5,0.1,0.76,0.733
Mesh.SubdomainSlices = 35
Mesh.InterfaceOffset = 25
Mesh.InterfaceDraw = 0
ABEC.MeshFrequency = 1000
ABEC.NumFrequencies = 100
ABEC.Abscissa = 1
ABEC.SimType = 2
ABEC.f1 = 500
ABEC.f2 = 20000
; ____ VCAS Output ____
ABEC.Polars:SPL = {
MapAngleRange = 0,180,72
Distance = 2.0
}
ABEC.Polars:SPL_H = {
MapAngleRange = -180,180,72
Distance = 2
NormAngle = 0
FRDExport = {
NamePrefix = hor_tweeter
PhaseComp = -2
}
}
ABEC.Polars:SPL_V = {
MapAngleRange = -180,180,72
Distance = 2
Inclination = 270
NormAngle = 0
FRDExport = {
NamePrefix = ver_tweeter
PhaseComp = -2
}
}
; ____ Ath Report ____
Report = {
Title = "65DG"
Width = 1200
Height = 800
NormAngle = 0
}
; ____ File Output ____
Output.ABECProject = 1
Output.STL = 0
Output.MSH = 0
I'm failing to find the time to design a more refined device with Ath, but I consider this approach the silver bullet. With more effort invested, maybe even developing the tools to make certain shapes attainable, many issues could be solved. In this example, however, as you can see, the resonances are pronounced. I don't have the patience to work on the complex form at the moment. I will share the code another day.
The free-standing waveguide option in Ath already allows combining two different profile depths, to some extent. I don't remember which parameter enables the bending-forward of the vertical section again; this helps extend pattern control at vertical angles.
Quote: "A transient is just something that occurs and then goes away."

Like when I play a music track in the middle of silence? I've never thought about it that way... 🙂

I'd better not ask what's not a transient. Even a whole human life, or the existence of a civilisation, is transient in this sense. But I knew I shouldn't have responded... 😀
Quote: "In physics it is the part of the response signal that damps out, leaving only the steady state."

@camplo - if you remove the part that damps out (which is the wide-band part), you're left with something that's very close to a steady state. It just has "smooth" ends.
No reason to despise clarity. I actually want to know the full answer too; I was hoping that you knew it... The question would be: in acoustics, where do we draw the line between transient and steady state? Or at least I think that's the question. It would seem that you would need a metric like cycle time to define it, since the perception of frequency isn't constant with duration in ms versus frequency.
@camplo, if you claim that low-frequency transients do exist, you should at least be able to illustrate what you mean with an example. To me, a signal low-passed at 100 Hz doesn't really contain transients as I understand them in terms of music. Sure, it still must have a beginning and an end. Does this make it transient? To me this is a useless concept.
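As a rough illustration of that point (my own sketch with made-up numbers, nothing from the measurements in this thread): low-pass a sharp click at 100 Hz and see how the abrupt onset is smeared out over tens of milliseconds.
Code:
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000                                   # sample rate [Hz]
t = np.arange(int(0.2 * fs)) / fs

# A sharp, wide-band click in the middle of silence.
click = np.zeros_like(t)
click[int(0.05 * fs)] = 1.0

# 4th-order Butterworth low-pass at 100 Hz.
sos = butter(4, 100, btype='low', fs=fs, output='sos')
lp_click = sosfilt(sos, click)

# The filtered "click" builds up and decays over tens of milliseconds instead of
# a single sample -- the abrupt, wide-band part of the transient is gone.
peak_idx = int(np.argmax(np.abs(lp_click)))
print("input click duration: 1 sample (about 0.02 ms)")
print(f"low-passed click peaks {1e3 * (peak_idx / fs - 0.05):.1f} ms after the input spike")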
Quote: "transient response <=> time domain, steady state response <=> frequency domain"

Transients are a perfectly frequency-domain thing and, vice versa, steady state has its perfect representation in time. It's just that transients are wide bandwidth (which is needed to build them up), whereas steady state can be anything.
Interestingly, a multitone signal can be highly "transient" in nature during its whole duration (when you look at its waveform), no matter how long, i.e. even as a steady state. That's because of the wide bandwidth. Limit it to a narrow band and it won't have that character anymore.
FFT relates the two, I thought that went without saying,

but for an imperfect human being, visualization helps understanding. Transients are best visualized in the time domain. You can calculate them as the convolution of the system's impulse response with the input signal. A mathematical impulse has infinite bandwidth, hence the assertion that transients are wide bandwidth.

One doesn't need a terribly high-bandwidth oscilloscope to view the transient response of a subwoofer, because the subwoofer acts as a low-pass filter.
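A minimal sketch of that idea (my own example; the "subwoofer" is just a modeled 30–100 Hz band-pass, not a measured driver): convolve its impulse response with a tone-burst input and look at the slow build-up and ring-down of the output.
Code:
import numpy as np
from scipy.signal import butter, sosfilt, unit_impulse

fs = 4000                            # a low sample rate is plenty for subwoofer bandwidth
t = np.arange(int(1.0 * fs)) / fs

# Stand-in "subwoofer": a Butterworth band-pass, roughly 30-100 Hz (made-up corners).
sos = butter(4, [30, 100], btype='bandpass', fs=fs, output='sos')
h = sosfilt(sos, unit_impulse(len(t)))           # impulse response of the model

# Input: a 50 Hz tone burst that switches on and off abruptly.
burst = np.sin(2 * np.pi * 50 * t) * ((t >= 0.2) & (t < 0.5))

# Output = convolution of the system's impulse response with the input signal.
y = np.convolve(burst, h)[:len(t)]

# The envelope ramps up after switch-on and rings down after switch-off instead of
# following the input edges instantly -- that slow edge is the transient behaviour.
print("peak just after switch-on (first cycle):", np.max(np.abs(y[int(0.20*fs):int(0.22*fs)])))
print("peak in the settled portion            :", np.max(np.abs(y[int(0.40*fs):int(0.45*fs)])))
print("peak after switch-off (ring-down)      :", np.max(np.abs(y[int(0.55*fs):int(0.60*fs)])))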
Quote: "One doesn't need a terribly high-bandwidth oscilloscope to view the transient response of a subwoofer, because the subwoofer acts as a low-pass filter."

I guess you meant the "time response" of a subwoofer 🙂 You will hardly see any transients, unless it distorts badly.
but if the time response that I view is the response to a transient, then I will see the transient response 🙂
If you feed it with a transient, then I guess it's the transient response, yes. And that response won't contain anything close to a transient 🙂 But of course anyone is free to call it an LF transient anyway.
The thing is that the start-up and shut-down transients of a steady-state frequency like the one above create harmonics, or as you call it, wide-band content. It's as if you are saying the wide-band content is the transient, whereas I would say that there is a start-up transient that creates the wide-band content. That's just a matter of semantics and perspective, though.

This wide-band content is above and below the fundamental, is it not? Or is something else going on within my RTA.

All frequencies have start-up transients, and group delay can be so high that it's perceivable at LF.

I think I am starting to gain some clarity.
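To check the "above and below the fundamental" part with numbers (a quick sketch with made-up settings, not your RTA): compare the spectrum of a 100 Hz sine that runs for the whole analysis window with one that is switched on and off abruptly inside it; the gated one spreads energy into a skirt on both sides of 100 Hz.
Code:
import numpy as np

fs = 8000
N = 8 * fs                                       # 8-second analysis window
t = np.arange(N) / fs
f0 = 100.0

continuous = np.sin(2 * np.pi * f0 * t)              # runs for the whole window
gated = continuous * ((t >= 3.0) & (t < 3.2))        # abrupt 200 ms burst

freqs = np.fft.rfftfreq(N, 1 / fs)

def level_db(x):
    spec = np.abs(np.fft.rfft(x)) / N
    return 20 * np.log10(spec + 1e-12)               # small floor to avoid log(0)

cont_db, gate_db = level_db(continuous), level_db(gated)

# Energy well away from the fundamental, both below and above it:
for lo, hi in ((40.0, 60.0), (190.0, 210.0)):
    band = (freqs >= lo) & (freqs <= hi)
    print(f"{lo:3.0f}-{hi:3.0f} Hz band: continuous max {cont_db[band].max():7.1f} dB, "
          f"gated max {gate_db[band].max():7.1f} dB")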