Violet DSP Evolution - an Open Baffle Project

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
I fail to see why you direct this comment toward SoundEasy.
A revisit of the phase measurement discussion earlier in this thread might be in order. To recap, the greater the amount of excess phase, the more difficult it is to locate the acoustic center and verify the system's linear phase, and hence the more likely it is the front and back waves will end up misaligned. However, what I said is that v16 (and, I guess, v17) makes removing excess phase quite tedious. That's quite different from saying they're incapable of it, though SoundEasy certainly emphasizes capabilities at the expense of ease of use.

But perhaps I've taken this article too much to heart. :p
 

I don't know. Perhaps what I see as trivial is tedious to others. If you place the mic X cm from the speaker baffle, then on the impulse screen in SE you adjust the position of the start of the time window with the cursor and/or arrow keys until the display shows Acoustic Distance = X cm. That removes all the excess phase from mic to speaker. What is left is the offset of the driver's AC from the baffle surface.
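To make the window-position arithmetic concrete, here is a minimal Python sketch of the relationship between the window start sample and the displayed acoustic distance (the helper names are mine, not SoundEasy's):

```python
# Sketch of the arithmetic behind the "Acoustic Distance" readout
# (helper names are mine; this is not SoundEasy's actual code).
SPEED_OF_SOUND = 343.0  # m/s, dry air at ~20 C

def acoustic_distance_cm(window_start_sample: int, sample_rate: float) -> float:
    """Distance implied by placing the window start at a given sample index."""
    time_of_flight = window_start_sample / sample_rate   # seconds
    return time_of_flight * SPEED_OF_SOUND * 100.0       # metres -> cm

def window_start_for_distance(distance_cm: float, sample_rate: float) -> int:
    """Nearest sample index for a mic placed distance_cm from the baffle."""
    time_of_flight = (distance_cm / 100.0) / SPEED_OF_SOUND
    return round(time_of_flight * sample_rate)

# Example: mic at 50 cm, 48 kS/s
print(window_start_for_distance(50.0, 48000.0))   # 70 samples
print(acoustic_distance_cm(70, 48000.0))          # ~50.02 cm
```

Note that at 48 kS/s one sample corresponds to roughly 7 mm of acoustic distance, which is where the one-sample granularity arguments in this thread come from.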


There is code that automatically places the cursor. However, none of the approaches I am aware of places the start of the time window correctly for a minimum-phase response.

But that raises the question: where would you place the start of the time window for a linear phase system?

Perhaps what you consider tedious I consider a sign of just being lazy. ;)

As for my article, there is no need to know where the ACs are to assure that they are aligned, any more than Linkwitz needs to know where they are to compensate for the misalignment using an all-pass delay.
 
when you look at the impulse screen in SE you adjust the position of the start of the time window with the cursor and/or arrow keys so the display shows Acoustic Distance = X cm that will remove all the excess phase from mic to speaker
Not in v16, at least; in addition to the uncertainty of where the mic and driver ACs are, I found SoundEasy was unstable at 96 and 192 kSamples/s. Being forced to run at 48 kS/s means the one-sample granularity in window position causes an average of about 90 degrees of phase error at the top of the tweeter band. This is linear with frequency, so it's not too much of an issue for woofers, but with tweeters I constantly hit problems where the minimum-phase window position lay within the start of the impulse response. As a result, both SPL and phase varied with window positioning, and one ended up jittering the window in an attempt to find the least bad compromise between the two errors. If you leave the window positioned where a bunch of phase wraps occur you'll avoid the SPL problems, but the phase error from the window misalignment will be embedded in the data and you'll never notice it because of the wraps.

Net result is that, since v17 doesn't improve on v16 in this regard, getting truly clean tweeter data to feed to UE still looks like a hassle. You can call me lazy if you want but, given the understanding of SoundEasy I'm arriving at from our conversation over the past few pages, I'm more inclined to say I expected too much of it and hence invested too much effort in trying to make it work well. It's not like the induced errors are large so I agree living with them is the easiest solution.
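The window-granularity error described above is easy to quantify: shifting the window start by a fraction of a sample adds excess phase that grows linearly with frequency. A small sketch (the helper is my own, assuming the 48 kS/s rate discussed here):

```python
# Excess phase added by misplacing the IR window start (my own helper,
# not part of SoundEasy): the error grows linearly with frequency.
def window_phase_error_deg(freq_hz: float, shift_samples: float, fs: float) -> float:
    """Phase error in degrees for a window start that is off by shift_samples."""
    return 360.0 * freq_hz * shift_samples / fs

FS = 48_000.0
for f in (200.0, 2_000.0, 20_000.0):
    # worst-case quantization error of half a sample
    print(f, window_phase_error_deg(f, 0.5, FS))
```

A worst-case half-sample quantization gives 75 degrees at 20 kHz, the same ballpark as the roughly 90 degree average figure quoted above, and under a degree in the woofer band.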
 
As John pointed out, knowing the absolute center is not a requirement. If you're trying to get a linear phase system response and are concerned with phase wraps and sample points, the focus is at the wrong point as I see it. The focus on absolute acoustic center is misleading. It's possible to use 48K sampling and get the same final system response as with a 192K sample rate.

Rather than trying to set a start-time marker and remove excess phase to get the minimum-phase response (there will always be error in this method), it would be far easier to create driver models with matching measured phase, generate the minimum-phase response from these models, and then empirically find the relative acoustic offset of these specific models. It's actually rather easy to find the relative offset of any two drivers to within fractions of a mm if desired. It requires a simple three-measurement scheme: both drivers measured individually, plus a summed response of the two connected raw (being careful about the level for tweeter protection). This scheme will even work for a measurement system that does not measure actual phase, such as LMS.

The sample rate issue can be made totally moot. The end result with a 48K sample rate would be good accuracy of relative phase up to the Nyquist cutoff. More than that is pretty much a non-issue since it's above any normal hearing range, but if you've got stability problems above 48K, just make the actual measurements at 96K. This again only requires three successive individual measurements, obviating the stability issue. If you can't at least make single measurements at 96K, I'd suggest that the issue is with the hardware involved, not the software.

Once you have models of the three measurements, each reduced to its minimum-phase response by applying the HBT, you need only adjust the relative offset until the combined response of the two raw measurements overlays the measured summed raw response. It's easy to get the relative acoustic offset to within +/- 0.1 mm. I believe it would be possible to better that, but the variability between measurements of even a single raw driver becomes an issue. That's really not of much importance anyway, since the phase response of any multi-way, non-coincident set of speakers will not be linear at almost any off-axis angle even if the system is linear phase on-axis.
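The offset-fitting step described above can be sketched as a simple scan: model the raw parallel sum with driver B delayed by a candidate offset, and keep the offset whose modeled sum best overlays the measured summed response. This is my own illustration on synthetic flat drivers, not dlr's actual procedure:

```python
import numpy as np

# Sketch of the three-measurement relative-offset fit, on synthetic data.
# (My illustration; real data would come from the HBT-derived models.)
SPEED_OF_SOUND = 343.0  # m/s

def summed_model(h_a, h_b, freqs, offset_m):
    """Raw parallel sum with driver B delayed by offset_m metres."""
    delay = offset_m / SPEED_OF_SOUND
    return h_a + h_b * np.exp(-2j * np.pi * freqs * delay)

def fit_offset(h_a, h_b, h_sum, freqs, search_mm=20.0, step_mm=0.1):
    """Scan candidate offsets; return the one whose modeled sum best
    overlays the measured summed response (least-squares error)."""
    offsets = np.arange(-search_mm, search_mm + step_mm, step_mm) * 1e-3
    errors = [np.sum(np.abs(summed_model(h_a, h_b, freqs, d) - h_sum) ** 2)
              for d in offsets]
    return offsets[int(np.argmin(errors))]

# Synthetic check: two flat drivers, with B offset 3.4 mm behind A.
freqs = np.linspace(200.0, 20_000.0, 500)
h_a = np.ones_like(freqs, dtype=complex)
h_b_mp = np.ones_like(freqs, dtype=complex)      # B's minimum-phase response
true_offset = 3.4e-3
h_sum = h_a + h_b_mp * np.exp(-2j * np.pi * freqs * true_offset / SPEED_OF_SOUND)
print(fit_offset(h_a, h_b_mp, h_sum, freqs))     # recovers ~0.0034 m
```

With real measurements, h_a and h_b would be the minimum-phase driver models and h_sum the measured raw sum of both connected; the 0.1 mm scan step matches the resolution claimed above.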

In essence, there is a way around the issue you have and it's not complicated. Now if you're trying to run SE in emulator mode by using the analog input and want to run SE at 96K or higher sample rate, then yes, stability would be an issue. But I'm not convinced at all the issue is with SE. The sound card and its drivers have to be considered as a possible source of any instability.

This goes back to v13 for my DF usage. I measure almost exclusively with LAUD, which is limited to 48K, but I have no doubt that I could easily get a linear phase system response with SE using that 48K sample rate. I actually intend to do that at some point; I just have not delved into it yet.

Dave
 
Yep. However, that doesn't help if you're trying to get UE not to unnecessarily warp phase.
I don't see why it should "warp" it at all if set up correctly. The warping (or lack thereof) is related to the setup of the model, not the actual output. If you create the model correctly, the inverse phase of the UE should be correct as I see it. The only issue would be the absolute phase above the Nyquist frequency, which one cannot know. Below the Nyquist frequency, it's not an issue. Even modestly off-axis it's not an issue, since even a system that is phase-perfect on-axis will not be off-axis.

With a sample rate of 48K I won't be concerned, I surely cannot hear anything above, oh, 22K. ;)

Dave
 
Seeing as SoundEasy crashed any time I tried to enter a driver I think I'll stick with parametric EQ in Allocator and PLParEQX3 and dispense with the hassle of making the driver model. :p
Having used SE through most of the releases: if you experienced crashes like that, look at the hardware/drivers. It's not an SE issue from the sound of it. The dedicated PC I used for the SE digital filter was up and running throughout the weekend with no crash (admittedly not running the UE). I actually can't recall when I started it; that was several days prior to the weekend. Of course, I'm likely using a different sound card, a Delta 410. It has been very stable so far in several machines.

Could SE have a more user-friendly interface? Yes. That's a separate issue from stability and actual system output.

Dave
 
Seeing as SoundEasy crashed any time I tried to enter a driver I think I'll stick with parametric EQ in Allocator and PLParEQX3 and dispense with the hassle of making the driver model. :p

You really argue in circles. If you use the Allocator you are still faced with what the system phase is. Using it doesn't address the issue of excess delay at all. You still have to obtain measurements of the system. Starting from the same measurements, you will have the same problems with excess phase with both.

It would seem that your reason for posting is simply to complain about your inability to master even the simplest functions in SE.
 
It's not an SE issue from the sound of it.
Thanks, but (as I've mentioned earlier in this thread) my laptop runs everything else just fine and has a very standard OS. Most of the SoundEasy crashes I've experienced are just the usual native code AVs raised by apps with bad memory management; entirely consistent with the observable quality of the program. Mostly I'm surprised it works as well as it does for other people.

what you conside tedious I consider a sign of just being lazy
It would seem that you reason for posting is simple to complain about your inability to master even the simplest functions in SE.
John, these responses are unjustified. Could we keep this civil, please?

If you use the Allocator you are still faced with what the system phase is.
Yep. dlr's proposal, however, was to work around the phase limitations by creating a driver model. Which, from the standpoint of implementing a digital crossover, is essentially inputting the inverse of the parametric equalization one would like and then using UE as a parametric equalizer. I would certainly say the thread's starting to traverse ground already covered, but I'm afraid you've lost me in asserting it's circular to express a preference for using an equalizer directly.
 
It's not a workaround, it's addressing the issue head on. If you insist on only using direct measurements with "imprecise" minimum phase (that's what it is; there is no measuring precise absolute acoustic centers), that is, with some amount of excess phase remaining due to the limitation of sample-point uncertainty, then you will have phase "warp", as you call it. If you want results as close to linear phase as possible, you have to create a model accurately. There is no substitute. You can certainly end up with "good enough", I suppose. It doesn't matter whether you use SE or the Allocator; both require accurate driver data for optimal results.

It's not inputting the inverse of the equalization, it's inputting the direct minimum-phase response of both drivers with a precisely determined amount of relative acoustic offset, then calculating the inverse phase required to linearize the response. Any driver input such as one that is measured using only sample-point resolution as you propose will always have some amount of excess-phase remaining. Then, of course, the higher the sampling rate the better, but it's not required. Phase correction will be most accurate only from a true minimum-phase response of each driver regardless of the sample rate. Nothing more, nothing less. How one gets to that point is the issue.

Dave
 
Well, that is the way you come off. Sorry if I offended you. The problems you're having appear to be linked to your laptop install. Instead of being surprised about how well it works for others, maybe you should consider that what it does for others is closer to the norm and your experience is the exception.

Anyway, back to the issue of excess phase. You are worried about +/- a fraction of a sample and the effect on phase at 20 kHz. But consider the reality of linearizing phase. At 48 kS/s, one sample is 150 degrees at 20 kHz, 15 degrees at 2 kHz, 1.5 degrees at 200 Hz and 0.15 degrees at 20 Hz. If I have a speaker with a 30 Hz B2 low frequency cutoff, then at 20 Hz there is roughly 120 degrees of phase rotation. Thus, to linearize phase to 20 Hz, assuming no additional rotation at high frequency, there must be a delay of 120/0.15 = 800 samples. So what is the big deal about one sample more or less at 20 kHz?
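The arithmetic above can be checked numerically: one sample at 48 kS/s spans 360·f/fs degrees at frequency f, and a 30 Hz B2 high-pass leaves about 120 degrees of phase lead at 20 Hz, so linearizing to 20 Hz needs a delay of several hundred samples. A quick sketch (the helper names are mine):

```python
import math

FS = 48_000.0  # sample rate assumed throughout this exchange

def phase_per_sample_deg(freq_hz: float, fs: float = FS) -> float:
    """Phase spanned by one sample period at a given frequency."""
    return 360.0 * freq_hz / fs

def b2_highpass_phase_deg(freq_hz: float, f0_hz: float) -> float:
    """Phase lead of a 2nd-order Butterworth (B2) high-pass at freq_hz."""
    w = freq_hz / f0_hz
    return 180.0 - math.degrees(math.atan2(math.sqrt(2.0) * w, 1.0 - w * w))

rotation = b2_highpass_phase_deg(20.0, 30.0)     # ~120 degrees at 20 Hz
samples = rotation / phase_per_sample_deg(20.0)  # delay needed, ~800 samples
print(rotation, samples)
```

Either way the exact per-sample figure comes out, the delay needed to linearize phase down to 20 Hz dwarfs a one-sample error at the top of the band.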

You still seem overly concerned with minimum phase. It simply isn't an issue; consistency is. Minimum phase is of academic interest and little more.
 