Comb filtering and driver placement

Having searched around this forum a bit, it seems people give differing advice with regard to driver placement in MTM and similar systems, though all agree that comb filtering is a Bad Thing(tm) and should be avoided. Also, doesn't this happen in the crossover band in normal MT systems as well?

Could someone please shed some more light on the topic?

I would guess that placing the drivers so that the spacing between the acoustic centres (crossover delay taken into consideration) is an even multiple of a quarter wavelength should do the trick? That is, from half a wavelength and up.
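As a rough sketch of why the spacing matters: for two identical drivers radiating the same band, cancellation occurs wherever the path-length difference to the listener is an odd number of half wavelengths. A minimal Python sketch, assuming equal drive levels and a far-field listener; the 15 cm spacing and 30 degree angle are example figures, not from this thread:

```python
import numpy as np

def comb_nulls(spacing_m, angle_deg, c=343.0, fmax=20000.0):
    """Frequencies of the interference nulls (up to fmax) for two identical
    drivers separated by spacing_m, observed at angle_deg off the axis of
    symmetry (far field, equal drive, no crossover)."""
    # Extra path length travelled by the wave from the farther driver.
    path_diff = spacing_m * np.sin(np.radians(angle_deg))
    if path_diff == 0:
        return []  # on axis the two waves always add in phase
    # Cancellation when the path difference is an odd number of half
    # wavelengths: path_diff = (2k + 1) * (c / f) / 2
    nulls = []
    k = 0
    while True:
        f = (2 * k + 1) * c / (2 * path_diff)
        if f > fmax:
            break
        nulls.append(f)
        k += 1
    return nulls

# Example: 15 cm centre-to-centre spacing, listener 30 degrees off axis.
print(comb_nulls(0.15, 30.0))  # first null near 2287 Hz, then every 4573 Hz
```

The first null lands right in the usual crossover region, which is why the thread's advice converges on small spacing and a low crossover point.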
 
For a vertical array, comb effects take place on the vertical axis.

An MT array typically has three lobes.

An MTM array typically has five lobes.

(Theoretically.)

There is no "magic" spacing.

Keeping the driver spacing as low as possible and the crossover
frequency as low as possible are the only measures available.
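The lobe counts above can be reproduced numerically. The sketch below sums two (or three) equal, in-phase point sources on a vertical line and counts the local maxima of the far-field response over the vertical plane; the 15 cm spacing and 2.5 kHz crossover frequency are assumed example values, not from this thread:

```python
import numpy as np

def vertical_lobes(positions_m, freq_hz, c=343.0):
    """Count lobes of an in-phase vertical array at freq_hz by scanning the
    far-field response over +/-90 degrees and counting local maxima."""
    k = 2 * np.pi * freq_hz / c
    theta = np.radians(np.linspace(-90, 90, 20001))
    # Array factor: coherent sum of equal-amplitude point sources.
    resp = np.abs(sum(np.exp(1j * k * y * np.sin(theta)) for y in positions_m))
    peaks = (resp[1:-1] > resp[:-2]) & (resp[1:-1] > resp[2:])
    return int(np.count_nonzero(peaks))

d = 0.15  # centre-to-centre spacing, 15 cm (assumed)
print(vertical_lobes([0.0, d], 2500.0))       # MT:  3 lobes
print(vertical_lobes([-d, 0.0, d], 2500.0))   # MTM: 5 lobes
```

With this spacing and frequency the counts come out as three and five, matching the typical figures quoted above; other spacing/frequency combinations can give different counts, which is why there is no "magic" spacing.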

🙂 sreten.
 
Hi,
The subject has been discussed somewhat in this thread
http://www.diyaudio.com/forums/showthread.php?s=&threadid=30446&highlight=
I'm about to design a test box with an MTM config, testing the use of a wideband driver instead of a tweeter. The drivers are Seas L15, and I'll test with a tweeter (ss9300) crossed at 2 kHz, 4th order, and a wideband driver (FE87E) crossed at 500 Hz, 4th order, both time-aligned. The test will hopefully start sometime next week when the FE87E are delivered.
I'll get back with the result, hopefully within 2 weeks.

/Jesper
 
Thanks, sreten, that's what I was afraid of.

Jesper, yes, I've seen that thread, and posted in it.

I'm looking forward to seeing your results. I have a few suggestions:
* Make measurements at the listening position as well, to get a feel for how the reflections interact. I have a feeling the 10-12 dB dip is going to be smoothed somewhat by the time it gets to the listening position.
* Be concerned with group delay rather than phase. You indicated that you were worried about the relative number of cycles between drivers. That shouldn't interest us, IMHO. What should interest us is the constancy of the group delay, not its value.

I have listened to a set of Usher M700s and found that they do a decent job of imaging, given sensible in-room conditions. Those are MTM speakers designed by D'Appolito.
 
angel,

Measuring at the listening position gives limited information in this regard, at least if you use standard measurement methods.
In MLS methods (using FFT) or sweeps there is limited time information: you don't see when in time two waves have interfered with one another. For example, there will be no measurable difference between near and far reflections if they give the same comb-filtering effect in the frequency response, yet they will sound different. Usually this is dealt with by using a small FFT window for the higher frequencies and a large one for the lower, but this is still a large compromise.
One way to deal with this would be to use a wavelet transform (time information is preserved) and try to correlate the measured result with the perceived sound.
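The window-length compromise is easy to demonstrate: a reflection only shows up in the measured frequency response if the analysis window is long enough to contain it. A minimal sketch, with all figures (sample rate, 10 ms reflection, window lengths) assumed for illustration:

```python
import numpy as np

fs = 48000                       # sample rate, Hz
ir = np.zeros(2048)              # ~43 ms impulse response
ir[0] = 1.0                      # direct sound
ir[480] = 0.5                    # reflection 10 ms later, at half amplitude
# A long window contains the reflection: comb filtering appears,
# with nulls spaced 1 / 0.010 s = 100 Hz apart.
long_mag = np.abs(np.fft.rfft(ir))
# An 8 ms window cuts the reflection off entirely: the measured
# response looks flat, as if the reflection did not exist.
short_mag = np.abs(np.fft.rfft(ir[:384], n=2048))
print(long_mag.min(), long_mag.max())    # ~0.5 and ~1.5 (comb)
print(short_mag.min(), short_mag.max())  # both 1.0 (flat)
```

The same mechanism works in reverse for the large-window case: any reflection inside the window is folded into the frequency response with no indication of when it arrived.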

I'm not so much concerned about the relative number of cycles between drivers (they should be within a couple of cycles), but about the relative number of cycles between direct and reflected waves.

I read somewhere (can't comment on its validity) how human hearing works, and it seemed sound to me.
Within the tapered ear shell, the frequency content of a sound produces resonance at different locations, which in turn stimulates the nerves at those locations, and we hear that frequency. Now, the time it takes for a resonance to build up is at least one wavelength, so let's say we have a time certainty of a couple of wavelengths. This would mean that we have different time accuracy for high and low frequencies, which is no surprise.
For this reason the number of wavelengths (cycles) between direct and reflected sound is of interest, and not so much the group delay. A difference of a few wavelengths would indicate that the human brain cannot distinguish them as two different sounds, but hears one distorted sound instead.
A wavelet transform of a system's pulse response would give a similar result.

/Jesper
 
Within the tapered ear shell, the frequency content of a sound produces resonance at different locations, which in turn stimulates the nerves at those locations, and we hear that frequency.

Essentially, yes. Though the nerves are not directly stimulated; instead, the hair cells attached to them are. And the Q factors of the resonances vary.

Now, the time it takes for a resonance to build up is at least one wavelength, so let's say we have a time certainty of a couple of wavelengths.

It takes time for a resonance to build up fully, but a signal will be detected before the steady state is reached.

And the time difference between the ears is pinpointed down to a ridiculously precise figure. In fact, I think time-aligning the left/right speakers is going to do a heck of a lot more for the sound than time-aligning the drivers themselves.

This would mean that we have different time accuracy for high and low frequencies, which is no surprise.

From a mathematical point of view, you cannot determine both the frequency and the time of a signal with exactness. You can detect either perfectly by completely ignoring the other, or make a tradeoff that tells you one with reasonable certainty and the other with less certainty.

In wavelet transforms, you do get high time resolution for high frequencies, with poorly pinpointed frequencies, and low time resolution for low frequencies, with very exactly pinpointed frequencies.
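This tradeoff has a hard lower bound: the product of a signal's RMS duration and RMS bandwidth cannot fall below 1/(4*pi), and a Gaussian pulse achieves the bound exactly. A numerical check of that claim; the sample count and the 10 ms width are arbitrary choices:

```python
import numpy as np

def rms_spread(x, axis_vals):
    """RMS width of |x|^2 treated as a probability density over axis_vals."""
    p = np.abs(x) ** 2
    p = p / p.sum()
    mean = (axis_vals * p).sum()
    return np.sqrt((((axis_vals - mean) ** 2) * p).sum())

n, dt = 16384, 1e-4
t = (np.arange(n) - n / 2) * dt
g = np.exp(-t ** 2 / (2 * 0.01 ** 2))    # Gaussian pulse, sigma = 10 ms
f = np.fft.fftshift(np.fft.fftfreq(n, dt))
G = np.fft.fftshift(np.fft.fft(g))
# Duration-bandwidth product; the theoretical minimum is 1/(4*pi).
product = rms_spread(g, t) * rms_spread(G, f)
print(product, 1 / (4 * np.pi))          # both ~0.0796
```

Any window or wavelet other than a Gaussian sits above this bound, so "high time resolution" at one end of the spectrum is always paid for with frequency blur, exactly as described above.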

I do seem to recall a wavelet introduction that claimed that this is the distribution of information our brain uses.

For this reason the number of wavelengths (cycles) between direct and reflected sound is of interest, and not so much the group delay.

The constant part of the group delay is not interesting. (The distance to the speaker introduces a constant group delay, for example.) But the frequency-dependent part is. Group delay is the negative derivative of phase with respect to angular frequency, so a linearly varying phase gives a constant group delay, and a constant group delay implies a linearly varying phase.

Hence, it is most practical to talk about the frequency-dependent variations in group delay, since those correspond to a non-linear variation of phase with frequency.

And at least one study I have read indicates that abrupt variation in group delay versus frequency is the most audible time-related phenomenon.
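A quick illustration of the phase/group-delay relationship: a pure delay has a linear phase, and differentiating that phase gives a group delay that is constant across frequency. A minimal sketch; the sample rate and the 5 ms delay are arbitrary:

```python
import numpy as np

fs = 48000
n = 4096
h = np.zeros(n)
h[240] = 1.0                                # a pure 5 ms delay
H = np.fft.rfft(h)
phase = np.unwrap(np.angle(H))              # linear: -2*pi*f*0.005
freqs = np.fft.rfftfreq(n, 1 / fs)
# Group delay tau(f) = -(1/(2*pi)) * d(phase)/d(f)
tau = -np.gradient(phase, freqs) / (2 * np.pi)
print(tau[10], tau[1000])                   # both 0.005 s: constant
```

A crossover or a reflection would add a frequency-dependent term to tau, and it is the abrupt bumps in that curve, not the flat 5 ms offset, that the study above flags as audible.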

A difference of a few wavelengths would indicate that the human brain cannot distinguish them as two different sounds, but hears one distorted sound instead.

That is essentially correct. An early reflection is perceived as part of the original signal, while a late reflection is perceived as an echo, part of the ambiance.

Managing reflections, both when they arrive, and how many times the sound will bounce before dying out, is one of the most worthwhile improvements to a system, in my experience.
 