Great. I was realizing that you can't help but move the mic and/or speaker as you change fill amounts.

In that case, I'll redo the measurements tomorrow to see if I can fix the problem, and hopefully we can gain some interesting insights.
So I'd run each of the various fill cases with the timing method I offered... i.e., the first one with 0 timing offset, then the others with the delay found in the first used as the timing offset. Do all three that way, and it should be apples to apples despite mic-distance differences.
I'll go ahead and offer what I was thinking might be going on...
The way our FFT programs determine timing, each frequency contributes a piece of energy that sums into the energy-time curve (ETC). The peak, or near-peak, of that curve is what sets t=0 and hence the resulting phase trace... where it is flattish and where the wraps reside.
The thing is, each frequency gets an equal 'vote' towards energy contribution & timing.
So the first five octaves, call it 20 Hz to 640 Hz, get 620 votes, whereas the next five octaves, 640 Hz to 20,000 Hz, get 19,360 votes.
It's just the nature of linearly spaced data collection, and it's why an impulse peak is so dominated by HF/VHF content.
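To put numbers on those "votes": with linearly spaced FFT bins (assuming 1 Hz bin spacing, as the 620 / 19,360 counts above imply), a quick tally looks like this:

```python
import numpy as np

def bins_in_band(f_lo, f_hi, df=1.0):
    """Count linear-resolution FFT bins with f_lo < f <= f_hi."""
    freqs = np.arange(0.0, 20_000.0 + df, df)
    return int(np.sum((freqs > f_lo) & (freqs <= f_hi)))

low = bins_in_band(20, 640)        # first ~5 octaves -> 620 bins
high = bins_in_band(640, 20_000)   # next ~5 octaves -> 19360 bins
```

So the top half of the audible range, octave-wise, gets roughly 30x the weight of the bottom half in anything that sums energy across linear bins.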
Now, my speculation with your examples is that the polyfill is reducing the relative HF/VHF content and moving the ETC peak lower in frequency. I'd feel a little more certain about this if the HF/VHF magnitude (SPL) were coming down more as polyfill is added, which doesn't really appear much in the 3 starting graphs.
It looks like the major SPL change is smoother response below about 6kHz.
Our FFT software also establishes timing based on where SPL is flattest, as that too maximizes the ETC peak. In your examples, it may be that when polyfill smoothed out that sub-6 kHz range, the ETC peak simply became more pronounced lower in frequency.
Figuring out which case is more likely is why I wanted to see confident timing data before talking about this...
I wonder, if you're only going to use it for midrange, whether you could have the back of the box be just stuffing between two bits of mesh, like a semi-open baffle. I used a piece of foam on the back of some experimental full-range speakers (165 mm dual-cone car speakers) and they sounded quite good; without the foam, the resonance from the rear wave meeting an open space was dreadful.
The zig-zag is due to the scale of the y (phase) axis. The phase rotates beyond 360 degrees and therefore continues at the opposite side. The straight line connecting the traces does not exist; it's just a standard way of drawing these charts. The zig-zag has nothing to do with reflections.
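A small numpy sketch of that wrap-and-rejoin: a smoothly rising phase gets displayed folded into [-180, +180), and unwrapping recovers the continuous trace the zig-zag actually represents.

```python
import numpy as np

# A true phase rising smoothly from 0 to 720 degrees...
true_phase = np.linspace(0.0, 720.0, 9)

# ...is displayed folded into [-180, +180): this is the zig-zag on the chart
wrapped = (true_phase + 180.0) % 360.0 - 180.0

# np.unwrap stitches the jumps back together, recovering the true trace
unwrapped = np.degrees(np.unwrap(np.radians(wrapped)))
```

The vertical connecting lines a plotting program draws between +180 and -180 are just how it joins consecutive samples; no physical event sits there.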
The ideal is no big phase shifts (the third one). Take a look at the GD plot: between 1 kHz and 5 kHz, you don't want any negative dip larger than -0.5 ms or humps bigger than 1 ms. Nice, smooth slopes or flat is best.

I recently ran an experiment with a small box and a midrange driver. The goal was to test the effects of different stuffing materials on the response. I shot these sweeps in REW and kept all the variables the same besides the materials used for stuffing. I'm not really sure how I should be interpreting the phase response graph and what is ideal.
Test 1: No stuffing, empty box
View attachment 1057460
Both the frequency response and phase response are jagged. The pattern breaks up around 13 kHz; I'm assuming that is some kind of driver breakup causing a spike in the FR.
Test 2: lightly stuffed with polyfill
View attachment 1057461
In this test, the phase response seems to have flattened out significantly. My assumption is that this is a good thing, since it means the different frequencies are more or less closely aligned in time?
Test 3: Tightly stuffed with polyfill
View attachment 1057462
With the tightly stuffed polyfill, the phase response has gotten very smooth and linear. The FR has also smoothed out quite nicely.
So my question is, what is an ideal phase response? Subjectively speaking, what should a good phase response sound like?
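On the group-delay figures mentioned above: GD is just the negative slope of the unwrapped phase versus frequency. A minimal numpy sanity check, using a pure 1 ms delay (whose phase is exactly -360 * f * 0.001 degrees):

```python
import numpy as np

# Group delay from an unwrapped phase trace:
#   GD(f) = -(dphi/df) / 360 seconds, with phi in degrees and f in Hz.
f = np.linspace(100.0, 10_000.0, 1000)   # frequency axis, Hz
phase_deg = -360.0 * f * 1e-3            # phase of a pure 1 ms delay

gd_ms = -np.gradient(phase_deg, f) / 360.0 * 1e3   # group delay in ms
```

A flat phase slope means constant group delay; the humps and dips in a GD plot are frequency bands arriving later or earlier than their neighbours.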
Be sure to check out the decay graph, and also set your window to something short like 4 ms to view 1 kHz on up.
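That 4 ms figure is a resolution trade-off: a gate (time window) of length T seconds can only resolve features roughly 1/T Hz apart, so a short window is only trustworthy well above that frequency. A toy calculation (the helper name is just for illustration):

```python
# A gate of T milliseconds limits frequency resolution to roughly
# 1000 / T Hz; below that, the windowed data is not trustworthy.
def gate_resolution_hz(gate_ms):
    return 1000.0 / gate_ms

limit = gate_resolution_hz(4.0)   # 4 ms gate -> resolution of about 250 Hz
```

250 Hz is comfortably below 1 kHz, which is why a 4 ms window works for viewing "1 kHz on up" while also excluding most room reflections.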
I wonder what happens when the planar operates open back, with no box.

I definitely agree with you about the internal reflections superimposing on the driver's output. I probably should have mentioned that this is a planar driver (GRS PT6825-8), which I would assume makes it more sensitive to that kind of interference due to the lightweight nature of the driver.
In an application scenario, the dip at 1700 Hz is probably the only thing holding this back from being a really good midrange. A lot of speaker-building literature (mainly guides on YouTube) seems to assume that adding foam/stuffing is enough to eliminate standing waves, but maybe that isn't the case. Maybe a differently shaped box would be better for eliminating standing waves? Something trapezoidal or triangular.
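To put numbers on the standing-wave idea: the axial modes between two parallel walls spaced L apart fall at f_n = n * c / (2L). With a hypothetical 10 cm internal dimension (an assumed value for illustration, not a measurement from this build), the first mode lands right around that 1.7 kHz dip:

```python
# Axial standing-wave frequencies between parallel walls: f_n = n*c/(2L).
C_SOUND = 343.0  # speed of sound in air, m/s

def axial_modes_hz(length_m, n_max=3):
    return [n * C_SOUND / (2.0 * length_m) for n in range(1, n_max + 1)]

modes = axial_modes_hz(0.10)   # assumed 10 cm dimension -> first mode ~1715 Hz
```

Angled walls (trapezoid, triangle) don't remove the energy, but they break up the single strong parallel-wall mode into several weaker, spread-out ones, which stuffing can then damp more effectively.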
Here are some interesting findings. It seems like I definitely had the delay set wrong. Here are two measurements, one with the delay set automatically and the other artificially set to zero. Obviously, the frequency response stays the same, but there is a significant difference in the phase graphs. The graph measured at zero delay has a much flatter response.
Automatically set delay (box tightly stuffed)
View attachment 1058174
Manually set delay (0 ms) (box tightly stuffed)
View attachment 1058176
This does not match my previous graphs, though, since it seemed like the phase response smoothed out as I added stuffing to the box. So I also removed some stuffing from the box to see if it would cause the phase response to wrap around more.
No stuffing, delay artificially set to zero
View attachment 1058180
The phase response stayed about the same even though the stuffing changed.
This is a confusing result and definitely leads to more questions than answers.
Could it be that stuffing has no real effect on the phase response?
Since the results of the previous experiment could not be reproduced, it might have been an error on my part during the measurements?
On a side note, the distance implied by REW's delay estimate (I'm assuming via time of flight) is off from the real distance between the driver and microphone, possibly because I am running the signal over Bluetooth, which introduces more delay. I have the microphone positioned about 1 m away from the driver, but all of the distance estimates are somehow shorter?
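For reference, converting a delay estimate to a distance is just time of flight (the function name here is made up for illustration):

```python
# Time-of-flight distance: d = c * t.
C_SOUND = 343.0  # m/s, roughly at room temperature

def delay_to_distance_m(delay_ms):
    return C_SOUND * delay_ms / 1000.0

d = delay_to_distance_m(2.915)   # ~2.9 ms of delay corresponds to about 1 m
```

Any fixed latency elsewhere in the chain (interface buffers, Bluetooth codecs, the program's own timing normalization) adds to or subtracts from the acoustic flight time, so the implied distance can easily disagree with the tape measure.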
Are you using a usb mic?
From the vcad manual:
Note! Single channel measurement systems such as USB microphones with latency variations by default are not recommended for speaker engineering due to timing and phase variations and normalizations. REW should not be used with single channel connection or mode for far field measurements because timing is normalized by the program. Single channel connection and mode is acceptable for near field measurements only.
I'm using a Dayton Audio IMM6, which is analog.
Are you using the headphone out on the IMM6 as a loopback?
View attachment 1058339
Also, what is the mic connected to? A USB interface or laptop soundcard?
The soundcard should be calibrated in REW. This is done in the preferences window. For each measurement in REW, you should see the mic and soundcard calibration lines.
I started out with that same mic. A 2-channel mic interface on which you can set up a loopback is what's recommended for measuring phase. I got a used Scarlett 2i2 cheap on eBay.
Right now I have the mic directly connected to my MacBook Pro 2021 via the headphone jack. I'm not really sure how I would calibrate the sound card, but I do have the cal file for the mic configured. The calibration lines do show in the REW measurements; I've just got them hidden for cleanliness. How would I use the headphone out on the mic as a loopback?
If you loop back one of the two channels, the measuring software will know which characteristics belong to the AD/amp/DA chain and which to the mic (delay etc.). REW will use both channels simultaneously when measuring. So you would, e.g., run the right output from the headphone jack to the right mic input, connect the left mic input to the mic, and tell REW that in the configs. But with the combined HP/mic 3.5 mm jack I'm not sure it's really possible. You might want an external sound "card" like the 2i2 etc.
"Could it be that stuffing has no real effect on the phase response?"

Phase and magnitude (SPL) responses are inherently connected and will either both change or both stay the same.
I suppose you need to set time windows (gated measurement) to exclude room influence. This will make sense for both magnitude and phase graphs.
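As a sketch of that magnitude-phase link: for a minimum-phase system, the phase is fully determined by the magnitude, and one standard way to recover it is the cepstral (homomorphic) method. This helper is an illustration of the principle, not REW's implementation:

```python
import numpy as np

def minimum_phase_rad(mag):
    """Minimum phase implied by a magnitude spectrum (cepstral method).

    mag: magnitude of a full (two-sided) FFT spectrum, length N.
    Returns the minimum phase in radians at the same frequencies.
    """
    log_mag = np.log(np.maximum(mag, 1e-12))   # avoid log(0)
    cep = np.fft.ifft(log_mag).real            # real cepstrum of log-magnitude
    n = len(cep)
    # Fold the cepstrum to make it causal (keeps the minimum-phase part)
    fold = np.zeros(n)
    fold[0] = cep[0]
    fold[1:(n + 1) // 2] = 2.0 * cep[1:(n + 1) // 2]
    if n % 2 == 0:
        fold[n // 2] = cep[n // 2]
    return np.imag(np.fft.fft(fold))           # phase in radians
```

A perfectly flat magnitude yields zero phase, which is the intuition behind "both change or both stay the same": if the stuffing really changed the phase of the (minimum-phase) driver-plus-box system, the SPL curve would have to move with it.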
https://kimmosaunisto.net/Software/VituixCAD/VituixCAD_Measurement_REW.pdf
Be sure to check out this doc.
Edit: if you do give it a go, you need to first hook the mic to the line-in and do the calibration in REW. Then hook it up as usual and run the headphone out of the mic into the line-in on the Mac (if there is a line-in on a Mac).
See above. But that little mic is probably best for just getting a calibrated view of the SPLs; I never tried the loopback with mine.
Thanks guys, all this information is really helpful. I'll definitely revisit these experiments after looking into a better measurement setup. I'll probably read up on some more material on phase as well.
Nice work!
Sorry to be slow to get back, busy holiday weekend on the lake.
First ditto to the advice that's been given.
Strongly agree that if you think you will continue with measurements, get a two-channel (or greater) USB soundcard and an XLR mic.
Doesn't have to be an expensive mic or soundcard. I have better ones as well as cheaper ones. (The cheap ones are still better than my ability to measure LOL.)
Heck, I don't even bother with calibration files for mic or soundcard anymore. But it took me maybe 10,000 measurements to get to that level of comfort/realization haha.
Would also recommend continuing to use the manual delay-set method. When you first measure with 0 time offset and a hardwired loopback on the soundcard, you get a precise delay calculation straight off the measurement, without further REW time-shift estimations.
Using that measured delay as the new measurement timing offset has proven the most repeatable method, in my experience. And it gives the flattest phase traces that correspond with the driver's usable passband...
Your graphs make much more sense now, huh? 🙂
Again, nice follow-through. If you want to remove the phase-trace wraps like the ones at 400 and 600 Hz in the last graph, apply some FDW windowing. The default setting works well enough for that and is pretty mild.
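For a rough idea of what frequency-dependent windowing (FDW) does: instead of one fixed gate, it keeps a fixed number of cycles at each frequency, so the window shrinks as frequency rises. The 15-cycle figure below is just a plausible placeholder, not necessarily REW's default:

```python
# FDW idea: keep about `cycles` periods of the impulse response at each
# frequency, i.e. a window of cycles/f seconds. (cycles=15 is an assumed
# value for illustration; check your REW version for its actual default.)
def fdw_length_ms(freq_hz, cycles=15):
    return 1000.0 * cycles / freq_hz

lo = fdw_length_ms(400.0)    # 37.5 ms window at 400 Hz
hi = fdw_length_ms(6000.0)   # 2.5 ms window at 6 kHz
```

Long windows at low frequencies preserve resolution where wavelengths are long; short windows up high discard late arrivals, which is why a mild FDW cleans up wraps without visibly changing the magnitude trend.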
I just wanted to add that clicking "Estimate IR Delay" and then "Update" is exactly the same as manually entering the delay that you read from the text box. It's the same number.
When I work on integration in a certain bandwidth, I sometimes find it helpful to manually adjust the timing a bit to get a flat phase graph in that area. I find it easier to work with.
Hi skogs, my experience is that that's likely to be true as long as there is sufficient HF content.
But it's not so likely to be true when working at the driver level, when drivers have reduced HF content... like low-mids, woofers, or subs.
The "Estimate IR Delay" result can vary quite a bit from the manual procedure.
Yes, me too.
For me, the manual measurement-timing-offset method consistently needs the least manual adjustment with the "Offset t=0" slider (underneath the Estimate IR Delay button), compared to using Estimate IR Delay and then the slider.
"Possibly because I am running the signal over bluetooth which introduces more delay."

BT will add a variable delay; you'll get nonsense phase with it in your signal chain.
That makes a lot of sense; I wonder why I hadn't thought of that yet. A wired input is definitely optimal, especially when looking at phase, which is basically delay.
What does an ideal phase response for a driver look like?