Could you post those polars? That's a strong claim without some support.

I don't think I can publish the ones I have, which show everything step by step. However, there are some that show everything at once (a bit messy) that are out in the open, and they seem to be correct at first sight. I've attached those.
You may have already mentioned it, but what is the issue with tractrix horns?
No real issue with them; I rather like them. But their beamwidth narrows with frequency.
Excuse me, my math skills are non-existent, but Wayne compared waveguides to optical systems. Can one think of waveguide/horn profiles like lenses with varying magnification? A narrow-beamwidth horn is like a lens that has stronger magnification in the middle, like a fish-eye lens, etc.
Certainly, an optical lens can be likened to an audio horn. Both are devices that modify the beam of radiated energy. Of course, there are a lot of dissimilar ways to do this, and electromagnetic waves are not exactly the same as sound waves: sound waves are longitudinal and electromagnetic waves are transverse. But still, there is an analogy in the fact that both are waves and the devices we are discussing are beamwidth-modifying devices.
The point I was making is that if you look out across other disciplines, there is an almost universal attraction to the elliptical coordinate systems, e.g. oblate spheroidal and elliptic cylinder. Where wavefront propagation is being discussed, devices based on these coordinate systems are often found to have optimal properties.
Where I see this kind of thing, a sort of unified theory, it always causes me to think "we're on to something."
I don't think I can publish the ones I have, which show everything step by step. However, there are some that show everything at once (a bit messy) that are out in the open, and they seem to be correct at first sight. I've attached those.
That is not strong evidence to support your claim. The polars are probably done with noise bands, as is common (a sort of averaging technique which makes them look smoother than they really are), and normalized. This makes them impossible to compare to a high resolution polar map. It also looks to be a diffraction device and that can't be seen in those types of plots.
Firstly, we most likely have different views of what are, overall, the most important aspects. I value something being uniform (at least to a certain degree) over a wide area highly. For me that's also more important than whether the Q is high or low. If we are going to be strict, though, there are really only two kinds of speaker designs that are truly uniform: the Bessel array and the CBT. All other designs collapse to different degrees. But I think the K-402 comes out very well, with a pretty constant polar down to 500 Hz. Moving the crossover below most of the vocal range is also a clear advantage the way I see it.
But sure, we already know that data is not easily comparable. We need independent measurements for that. That's something you don't provide either, and from what I understand you use some smoothing techniques.
It doesn't have to be independent; there is no truly independent data anywhere. It just has to be consistent and clearly identified as to how it was done. No one is going to "make up data", but virtually everyone "smooths away" some serious flaws.
Yes, I do "smooth", but I do it in a way that is clearly consistent with the way we actually hear and not picked for some convenient reason. Also, what I do is not possible without some serious programming as it is not at all easy to do. My data is smoothed in Equivalent Rectangular Bandwidths (ERB) according to Brian Moore (the expert on this stuff). These bandwidths vary with frequency. If there is a better way to do this then I don't know it. Do you?
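For reference, the ERB values Moore describes can be computed from the Glasberg and Moore (1990) approximation. The sketch below is mine, not code from anyone in this thread, and the function name is hypothetical:

```python
# ERB per Glasberg & Moore (1990): ERB(f) = 24.7 * (4.37 * f/1000 + 1) Hz.
def erb_bandwidth(f_hz: float) -> float:
    """Equivalent Rectangular Bandwidth at centre frequency f_hz, in Hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

# The bandwidth grows with frequency, so an ERB-based smoother is narrow
# at low frequencies and wide at high frequencies:
for fc in (100.0, 1000.0, 10000.0):
    print(fc, erb_bandwidth(fc))
```

At 1 kHz this gives roughly 133 Hz, about a fifth of an octave, widening to over 1 kHz of bandwidth at 10 kHz.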
Neither a Bessel array nor a CBT is "truly uniform" in the strictest sense of the word, which you seem to be using. We have had this argument before.
That is not strong evidence to support your claim. The polars are probably done with noise bands, as is common (a sort of averaging technique which makes them look smoother than they really are), and normalized. This makes them impossible to compare to a high resolution polar map. It also looks to be a diffraction device and that can't be seen in those types of plots.
You are correct that they are normalized; however, this is also a normal practice. Your other two guesses are incorrect.
Before you guys start proclaiming that they are "beaming", look at the bandwidth this is measured over.
Incidentally, this is a subset of the entire range of data. The original set of data was much more densely sampled.
Your other two guesses are incorrect.
And you know this how?
I have seen the data and I know the particulars on how they were measured.
Just to put this to rest: they were measured by a competent engineer, using professional equipment in an anechoic chamber. The measurements were not some "basement special".
It doesn't have to be independent; there is no truly independent data anywhere. It just has to be consistent and clearly identified as to how it was done. No one is going to "make up data", but virtually everyone "smooths away" some serious flaws.

The CBT polar I've presented before is independent. It's done by NWAA Labs:

NWAA Labs -- Complete High-speed Speaker and Material Testing

Yes, I do "smooth", but I do it in a way that is clearly consistent with the way we actually hear and not picked for some convenient reason. Also, what I do is not possible without some serious programming as it is not at all easy to do. My data is smoothed in Equivalent Rectangular Bandwidths (ERB) according to Brian Moore (the expert on this stuff). These bandwidths vary with frequency. If there is a better way to do this then I don't know it. Do you?

I see that as a clear advantage. Even if companies said they were using the same smoothing, I wouldn't trust the measurements 100% when they're not conducted by someone independent and also verified in the field.
A designer may ask someone to do a certain kind of measurement for them. They would already know the results.
Talking about speakers we haven't heard is one thing, but now we are talking about measurements that nobody gets to look at?
Exactly, how is this putting anything to rest?
Yes, I do "smooth", but I do it in a way that is clearly consistent with the way we actually hear and not picked for some convenient reason. Also, what I do is not possible without some serious programming as it is not at all easy to do. My data is smoothed in Equivalent Rectangular Bandwidths (ERB) according to Brian Moore (the expert on this stuff). These bandwidths vary with frequency.
So we could add resonances to a speaker response and it would sound exactly the same as a speaker without those resonances as long as their ERB smoothed graphs look the same?
Though for comparison reasons smoothing is often the only way to compare two devices, as Markus just said, it is very easy to hide a problem area this way. There are so many ways to cover up problems, with not only smoothing but fudging of measurements, that I just don't do it myself when testing; ask someone what happens when you slow down a strip-chart recorder. I just look at the raw data, even if it may look ragged; it is the only way to zero in on the fine details. For marketing and sales reasons you could never print the raw data; people would just think something was wrong when compared to other products that have had their response curves normalized and smoothed. That is the only reason to use these types of graphs: marketing and quick general comparison.
So we could add resonances to a speaker response and it would sound exactly the same as a speaker without those resonances as long as their ERB smoothed graphs look the same?
I would say that there is some truth to that. At some point a resonance cannot be heard.
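One way to see why, purely as a numerical illustration (my own toy example; the numbers are arbitrary and say nothing about actual audibility thresholds): a resonance much narrower than the ERB at its centre frequency largely vanishes from an ERB-smoothed curve.

```python
import numpy as np

def erb_bandwidth(f_hz):
    # Glasberg & Moore ERB approximation, in Hz
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

# Flat response plus a +6 dB Lorentzian resonance about 60 Hz wide at 3 kHz.
f = np.linspace(20.0, 20000.0, 4096)
resonance = 6.0 / (1.0 + ((f - 3000.0) / 30.0) ** 2)

def erb_average(mag_db, freqs, fc):
    """Mean level over one ERB centred on fc (rectangular window)."""
    half = erb_bandwidth(fc) / 2.0
    mask = np.abs(freqs - fc) <= half
    return mag_db[mask].mean()

# The raw peak is near 6 dB, but the ERB at 3 kHz is roughly 350 Hz wide,
# so the ERB-averaged value at 3 kHz is far smaller:
print(resonance.max())
print(erb_average(resonance, f, 3000.0))
```

The narrow peak survives in the raw data but is mostly averaged away over one ERB, which is the sense in which a sufficiently narrow resonance "disappears" from a hearing-based plot.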
Everyone uses smoothing, even Toole. The question is what is the right amount. I smooth even less than Toole at very high frequencies, but more than he does at low frequencies. Using a smoothing bandwidth that is the same across the frequency band is certainly not correct.
Read the work of Prof. Farina, who strongly supports the use of ERB smoothing and has done subjective studies in this regard. They are not exactly related to loudspeakers but to EQing cars; however, the results are applicable.
People used a constant smoothing bandwidth in the past because it was very difficult to vary it with frequency in hardware. In software it is not so difficult.
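As a hypothetical illustration of how cheap this is in software (names are mine): a frequency-dependent bandwidth, such as a 1/n-octave width, is a one-line function.

```python
# Width in Hz of a 1/n-octave band geometrically centred on f_hz.
# Unlike a fixed analogue analysis window, this trivially varies with frequency.
def fractional_octave_bandwidth(f_hz: float, n: int = 3) -> float:
    return f_hz * (2.0 ** (1.0 / (2 * n)) - 2.0 ** (-1.0 / (2 * n)))

# 1/3-octave bands widen in Hz as frequency rises, loosely as ERBs do:
for fc in (100.0, 1000.0, 10000.0):
    print(fc, fractional_octave_bandwidth(fc))
```

Any other bandwidth law, ERB included, drops in the same way: it is just a different function of the centre frequency.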
I would say that there is some truth to that. At some point a resonance cannot be heard.
I agree that there's some truth to it, but I have yet to see conclusive data, especially for stereo reproduction, where small interaural differences can have a significant effect not only on perceived timbre but also on spatial characteristics.
For assessing the quality of a speaker I'd start with unsmoothed free-field data because all other graphs can be derived from it.
Read the work of Prof. Farina, who strongly supports the use of ERB smoothing and has done subjective studies in this regard. They are not exactly related to loudspeakers but to EQing cars; however, the results are applicable.
Any specific paper you're referring to?
People used a constant smoothing bandwidth in the past because it was very difficult to vary it with frequency in hardware. In software it is not so difficult.
Could you please provide details about how you do the smoothing?
I agree that there's some truth to it, but I have yet to see conclusive data, especially for stereo reproduction, where small interaural differences can have a significant effect not only on perceived timbre but also on spatial characteristics.

As with ALL things in audio, the relationship is neither "perfect" nor "proven", but without question it is better than fixed-bandwidth smoothing.
For assessing the quality of a speaker I'd start with unsmoothed free-field data because all other graphs can be derived from it.

It is impractical and unnecessary to do a polar map with "unsmoothed" data of 2048 data points or more, and it is not a good idea to do it with linearly spaced data points. Hence some conversion from the high-density linearly spaced frequencies to lower-density log-spaced frequencies is required, and some form of averaging has to be done in reducing this data as well. It only makes sense to do this reduction in a manner that is consistent with our hearing.
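That kind of reduction can be sketched roughly like this (my own sketch under stated assumptions: 2048 linearly spaced magnitude points reduced to 300 log-spaced outputs, with random numbers standing in for real data):

```python
import numpy as np

# Reduce 2048 linearly spaced points to 300 log-spaced points by averaging
# the linear bins that fall inside each geometric band.
fs = 48000.0
f_lin = np.arange(2048) * (fs / 2.0) / 2048                # linear grid, Hz
mag = np.random.default_rng(0).normal(0.0, 1.0, 2048)      # stand-in data, dB

f_log = np.geomspace(20.0, 20000.0, 300)                   # log-spaced outputs
sel = (f_lin >= f_log[0]) & (f_lin <= f_log[-1])           # drop out-of-range bins
edges = np.sqrt(f_log[:-1] * f_log[1:])                    # geometric band edges
band = np.searchsorted(edges, f_lin[sel])                  # band index per bin

m = mag[sel]
reduced = np.array([m[band == k].mean() if np.any(band == k) else np.nan
                    for k in range(len(f_log))])           # NaN where the band is
print(reduced.shape)                                       # narrower than a bin
```

At the bottom of the range the log bands are narrower than the roughly 11.7 Hz bin spacing, so some bands come out empty (NaN here); that is exactly where a wider, hearing-based band such as an ERB would be used instead of a bare geometric band.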
Any specific paper you're referring to?

Yes, there is, but I don't recall it right at the moment. It was an AES preprint on EQing a vehicle and how he used different types of smoothing on the frequency response and found that ERB filters provided "the best" response according to a number of subjective blind trials.
Could you please provide details about how you do the smoothing?
Easy enough. At each of the desired frequencies (log-spaced; I use 300 from 20 Hz to 20 kHz) you define a bandpass filter that represents the ERB at that frequency according to Moore (I also have a Zwicker critical-band model, but the Moore is much narrower). You pass the 2048 data points through this filter and then move on to the next point. This is very inefficient, of course, but since the computer is doing it, who cares? It's fast enough not to be a problem. It may not be for a real-time display, perhaps, but I don't do that.
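A minimal sketch of that procedure as I understand it (my reconstruction, not the author's actual code; I use a plain rectangular band one ERB wide where a shaped bandpass filter may actually be used):

```python
import numpy as np

def erb_bandwidth(f_hz):
    # Glasberg & Moore ERB approximation, in Hz
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def erb_smooth(f_lin, mag_db, f_out):
    """For each output frequency, average the dense data over one ERB."""
    out = np.empty(len(f_out))
    for i, fc in enumerate(f_out):
        half = erb_bandwidth(fc) / 2.0
        mask = (f_lin >= fc - half) & (f_lin <= fc + half)
        # Fall back to the nearest bin if the ERB is narrower than the grid.
        out[i] = mag_db[mask].mean() if mask.any() else mag_db[np.abs(f_lin - fc).argmin()]
    return out

f_lin = np.linspace(0.0, 24000.0, 2048)           # 2048 linearly spaced points
mag = 20.0 * np.log10(1.0 + f_lin / 1000.0)       # arbitrary smooth test curve
f_out = np.geomspace(20.0, 20000.0, 300)          # 300 log-spaced frequencies
smoothed = erb_smooth(f_lin, mag, f_out)
print(smoothed.shape)
```

It is brute force, one pass per output frequency exactly as described, and the roughly 300 x 2048 operations are negligible for an offline plot.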
It's windowed of course. Who has an anechoic chamber? 🙄
It's anechoic above about 200 Hz, then fitted with near-field measurements below that.
Uniform Directivity - How important is it?