Dome tweeter: Aluminium vs Magnesium... marketing

@tmuikku I think we're basically in agreement. What I think is missing from the measurements is nailing down what happens to the impedance with 2-tone measurements. For instance, play a continuous 1kHz tone at a reasonable (but not devastating) volume, and see how it modulates the impedance at other frequencies. If one tweeter's impedance at 20kHz varies from 9 to 11 ohms, while another's only varies from 9.9 to 10.1 ohms (all other conditions being equal), then that would be really informative.

As a first step I would probably manually look at the voltage across a sense resistor, and look for amplitude modulation of the 20kHz carrier wave.
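That detection step can be sketched in a few lines. This is a synthetic illustration only: the sample rate, probe frequency, and 5% modulation depth are made-up values, and the "sense-resistor voltage" here is generated rather than captured. In a real measurement the captured signal would also contain the 1 kHz stressor tone itself, so you would bandpass around 20 kHz before taking the envelope.

```python
import numpy as np
from scipy.signal import hilbert

fs = 192_000                          # assumed capture sample rate
t = np.arange(int(fs * 0.1)) / fs     # 100 ms of signal

# Synthetic sense-resistor voltage: a 20 kHz probe tone whose amplitude
# is modulated 5% at the 1 kHz "stressor" rate (the 1 kHz tone itself
# is omitted here; in practice, filter it out first).
depth_true = 0.05
v = (1 + depth_true * np.cos(2 * np.pi * 1_000 * t)) \
    * np.cos(2 * np.pi * 20_000 * t)

# Recover the carrier envelope via the analytic signal.
env = np.abs(hilbert(v))
env = env[len(env) // 10 : -len(env) // 10]   # trim FFT edge transients

# Modulation depth from the envelope extremes.
depth = (env.max() - env.min()) / (env.max() + env.min())
print(f"recovered modulation depth: {depth:.3f}")
```

The recovered depth, expressed in ohms around the nominal impedance, would give exactly the 9–11 vs 9.9–10.1 ohm comparison described above.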

At a guess, ribbon drivers would probably crush this category, but they may also get demolished in some other rankings, like the ribbon tension being modulated by low-frequency output, evening out the overall score. A comparison against small FR drivers could also be interesting (to see if lower displacement makes a difference).
 
Hi,

you can search for Klippel data on the driver at hand. The audioXpress Test Bench is one source of Klippel data, from which distortion vs. excursion can be estimated. I don't think they measure that data for tweeters, though. But since you've already started thinking about the various tweeter types and how they might cause distortion, you can now think the stuff through and evaluate whether one driver could sound better than another in a given application.

For least distortion, the electrical (and mechanical) parameters should vary as little as possible over the displacement that is needed when the driver is applied in a system. Required displacement can be estimated from the radiating area, the crossover frequency (and slope) and the target SPL. SPL is the key thing: the target needs to be known, and the system needs to be sized accordingly.
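As a rough sketch of that sizing estimate, using free-field point-source math (a simplification; a real driver on a baffle radiates into roughly half space at high frequencies, and the 8 cm², 2 kHz, 100 dB numbers below are arbitrary illustration values):

```python
import numpy as np

def spl_peak_1m(f_hz, sd_m2, x_peak_m, rho0=1.2):
    """Free-field SPL at 1 m for a piston of area sd_m2 moving
    sinusoidally at f_hz with peak displacement x_peak_m."""
    a_peak = (2 * np.pi * f_hz) ** 2 * x_peak_m          # peak acceleration
    p_peak = rho0 * sd_m2 * a_peak / (4 * np.pi * 1.0)   # point source, r = 1 m
    return 20 * np.log10(p_peak / np.sqrt(2) / 20e-6)    # rms re 20 uPa

def x_peak_required(f_hz, sd_m2, target_spl_db, rho0=1.2):
    """Invert the above: peak displacement needed for target SPL at 1 m."""
    p_rms = 20e-6 * 10 ** (target_spl_db / 20)
    a_peak = p_rms * np.sqrt(2) * 4 * np.pi / (rho0 * sd_m2)
    return a_peak / (2 * np.pi * f_hz) ** 2

# Example: an 8 cm^2 dome asked for 100 dB at 1 m at a 2 kHz crossover.
x = x_peak_required(2_000, 8e-4, 100)
print(f"required peak displacement: {x * 1e3:.2f} mm")
```

The f² in the acceleration term is why the crossover point matters so much: halving the crossover frequency roughly quadruples the required displacement at the bottom of the tweeter's band.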

In addition to estimating which drivers have more or less distortion, one could perhaps estimate whether the distortion/difference is audible. I think it's mostly low-order distortion with drivers, and the amount basically pumps with the low-frequency content the driver outputs. Distortion in general could be perceived as a nice thing; it makes music sound loud. But perhaps not when the driver has that acoustic amplifier, the breakup, which imprints on the sound and highlights any distortion, and which could sound nasty. So a driver with more distortion but less breakup could sound better, and is at least worth testing if there is the possibility. Perhaps if not much displacement is required, the parameter variance with displacement matters less. Also, if only the tweeter distorts while the woofer sounds clean, or vice versa, it probably also sounds weird/nasty in a system context. Thinking it through, I believe a distorting tweeter sounds much worse to the ear than a distorting woofer. Also, ears distort at high SPL, linear distortion gets more audible with SPL (per Geddes), box sound gets audible, and so on.

So, for any driver with unknown data, one can roughly estimate performance with these:
  • a smaller breakup peak in the frequency response would most likely sound better
  • a high-pass filter higher up would most likely sound better, as it reduces displacement
  • limiting the bandwidth the driver is responsible for in a system would most likely sound better
  • using a high-tech driver would most likely sound better
  • if Klippel data is available, the driver with less parameter variance over the required x would likely sound better
  • all of the above are especially important for active speaker systems with minimal circuit impedance
  • if a passive crossover can be tailored so that it accounts for the impedance in series with the drivers, it would most likely sound better

Or should it rather be written as "has potential to sound better", because the audibility of this stuff could be quite small in importance. I expect low-level listening to sound fine with many systems, but crank the knob to party level and the problems come dancing before you. Well, use the info any way that feels suitable in your application 🙂
 
Not quite.. The proof of the measurements is in the hearing. Otherwise, you wouldn't have to eat the pudding, just measure the pH, salinity etc.
Also not quite, the proof is in double blind ABX testing with a big test group under the correct conditions.

Otherwise psycho-acoustics and other placebo effects are WAY too strong. 😉

Anyway, that was obviously not the point here, and no, this is a lot more than just testing pH. This is testing a certain theory. Why? Because if that doesn't even hold up, further listening tests are also useless.
 
I wish there was a STANDARDISED waterfall measurement supplied with all drivers. (measured anechoic or outdoor)
There is just so much information gained from 'waterfall plots'!
Let's do that with burst decay graphs instead.
I never find them useful except for acoustic related stuff. (standing waves inside cabinets etc)

For drivers all issues can be seen in frequency response, electrical impedance and distortion graphs.

This is even more true for tweeters.
 
"has potential to sound better"
You're the first person I've heard (read) this from who finally gets it! Bravo!

I would only change "better" to "has as few potential problems as possible".

For example, Erin's findings are perfectly in line with mine.

I have never heard a speaker with a good spinorama that sounded bad.
I have heard many speakers with a bad spinorama that sounded bad.
I have heard SOME speakers with a bad spinorama that sounded fine.

Important note: different, or a matter of taste, does not equal bad!

Plus I think results are always better by following a good structured method instead of just randomly trying stuff.

It's almost fascinating how many professional companies take that second option.
 
ABX gives me nocebo effects.
Haha, well, unfortunately it's the least-bad method for being as little biased as possible. All other methods still have way too many variables that trick our bloody brains into believing stuff.

Nothing wrong with believing stuff, but it just won't give us any data or evidence, unfortunately. So in that case you might as well skip it altogether, since garbage in = garbage out.
 
Let's do that with burst decay graphs instead.
I never find them useful except for acoustic related stuff. (standing waves inside cabinets etc)

For drivers all issues can be seen in frequency response, electrical impedance and distortion graphs.

This is even more true for tweeters.
I was talking about 'spectral time decay'. The problem with "tone burst decay" is that you're only seeing decay at one frequency.
Also, waterfall plots do show frequency response - at the top of the picture. This visual frequency response is also less 'smoothed out'
than a lot of dodgy F/R graphs.
I definitely agree that, along with the above, impedance and then distortion measurements are like the 'Holy Trinity' of measurements.
Like with Ohm's Law, if you know 2 factors, you can derive a third - but in this case, by knowing 3 factors you can start to derive a fourth -
being the likely 'sound' of a driver, plus deriving which attributes are electrical and which are mechanical/acoustic.
 
I was talking about 'spectral time decay'. The problem with "tone burst decay" is that you're only seeing decay at one frequency.
Also, waterfall plots do show frequency response - at the top of the picture. This visual frequency response is also less 'smoothed out'
than a lot of dodgy F/R graphs.
I definitely agree that, along with the above, impedance and then distortion measurements are like the 'Holy Trinity' of measurements.
Like with Ohm's Law, if you know 2 factors, you can derive a third - but in this case, by knowing 3 factors you can start to derive a fourth -
being the likely 'sound' of a driver, plus deriving which attributes are electrical and which are mechanical/acoustic.
I don't think we are talking about the same burst decay graph.
See the ARTA manual; it's the same as a waterfall diagram, but instead of time, periods (cycles) are used on the decay axis.

The problem with a default waterfall diagram is that you can't judge higher and lower frequencies at the same time: a fixed time window spans many periods at high frequencies but only a few at low frequencies.
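The periods scaling matters because a resonance of a given Q takes the same number of periods to decay regardless of its frequency, about 27.3/Q dB per period. A small sketch of that with a synthetic single resonance (the Q and frequency values are arbitrary illustration choices):

```python
import numpy as np
from scipy.signal import hilbert

fs = 96_000
f0, Q = 6_000, 10                      # arbitrary synthetic resonance
tau = Q / (np.pi * f0)                 # amplitude decay time constant
t = np.arange(int(fs * 0.02)) / fs
x = np.exp(-t / tau) * np.sin(2 * np.pi * f0 * t)

# Envelope via the analytic signal, sampled exactly one period apart,
# well away from the FFT edge transients.
env = np.abs(hilbert(x))
n = fs // f0                           # samples per period (16 here)
k = 5 * n                              # start 5 periods in
decay_db_per_period = 20 * np.log10(env[k] / env[k + n])
print(f"{decay_db_per_period:.2f} dB per period (theory: {27.29 / Q:.2f})")
```

On a fixed-time waterfall the same Q=10 resonance looks fast at 10 kHz and slow at 500 Hz; on a periods axis both decay at the same ~2.7 dB per period, which is the point of the burst decay presentation.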
 
.... OK....
The problem with a bunch of people in blind tests is that they have different tastes and different things they focus on when listening, different musical culture and listening habits, as well as different training - i.e. education of the ear and experience with the program material.

Blind tests can be useful for commercial purposes, to find the 'wow' approval of a setup for the selling go-ahead; it is less evident when the public consists of very well-trained ears or true musicians. Blind piano tuners are cool guys. And there is always the listening room: how do you blind test it? Do you re-run the same people, with the slightly different state of mind of the day, in different rooms? And if you change one of the cables and a 0.1 uF cap, do you perform another test, or have them listen - with SPL refreshing of their ears - to the same material during several hours of tests?

It is near impossible to perform well, and the purchasing director will tell you: no, we keep the basic Solen cap, because of the margin and the rising cost of copper from the Central Asian mines!

Here we can't perform ABX except with a few people, but we don't count the hours of development since it is not commercial... at least for the non-pro members like me. So each time I see someone involved in pro audio talking about ABX tests, I always get a nocebo effect.
 
And there is always the listening room: how do you blind test it? Do you re-run the same people, with the slightly different state of mind of the day, in different rooms? And if you change one of the cables and a 0.1 uF cap, do you perform another test, or have them listen - with SPL refreshing of their ears - to the same material during several hours of tests?
You are making matters way more complicated than necessary: let us say that the starting hypothesis is that e.g. hard-dome Brand X is supposed to sound better than / is generally preferred by listeners over soft-dome Brand Y. Then both tweeters are equalized for identical SPL transfer functions, in a similar box, in one and the same room, with the same midwoofer.
The participants do not know what they are listening to. If the outcome is not uniform, so preferences are divided, we then know the original hypothesis is debunked, i.e. there is no general listener preference for X over Y. Unsighted is king: the rest is expectation bias.

It should be noted, however, that it is far from trivial to attain exactly the same levels: unfortunately, level differences are where most of the perceived differences stem from, imo.
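The level matching itself is the easy part to automate. A minimal sketch, assuming you already have captures of both devices as sample arrays (synthetic noise stands in for them here, and the 0.7 scale factor is an arbitrary illustration value):

```python
import numpy as np

def gain_db_to_match(reference, candidate):
    """Gain in dB to apply to `candidate` so its RMS matches `reference`."""
    rms_ref = np.sqrt(np.mean(reference ** 2))
    rms_cand = np.sqrt(np.mean(candidate ** 2))
    return 20 * np.log10(rms_ref / rms_cand)

rng = np.random.default_rng(0)
a = rng.standard_normal(48_000) * 1.0    # stand-in for "tweeter X" capture
b = rng.standard_normal(48_000) * 0.7    # "tweeter Y", roughly 3 dB quieter

g = gain_db_to_match(a, b)
b_matched = b * 10 ** (g / 20)
print(f"applied {g:.2f} dB; residual mismatch "
      f"{gain_db_to_match(a, b_matched):+.3f} dB")
```

Broadband RMS matching like this is only the first step; since level differences well under 1 dB can bias preference, matching to within roughly 0.1 dB (and per band, not just broadband) is the usual recommendation.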
 
Imo basic blind testers do not know the difference between objectivity (how close the sound is to an accurate rendering of a real instrument) and personal aesthetic taste... it is biased, but fine for debunking, as you say, the gross evidence. You need a trained panel of people, or you fall into a mix of personal preferences, or biased 'wow' factors that focus on certain frequencies.

Of course I mean when testing the whole product. I call it blind tastes.
 
^ But that (accuracy to a real instrument) is not what is tested in a blind test: the question is, do you hear a difference?
I agree with Boden that you complicate things (which are already complicated, as you have to apply statistical treatment to the answers to get meaningful results).
 
You are making matters way more complicated than necessary: let us say that the starting hypothesis is that e.g. hard-dome Brand X is supposed to sound better than / is generally preferred by listeners over soft-dome Brand Y. Then both tweeters are equalized for identical SPL transfer functions, in a similar box, in one and the same room, with the same midwoofer.
The participants do not know what they are listening to. If the outcome is not uniform, so preferences are divided, we then know the original hypothesis is debunked, i.e. there is no general listener preference for X over Y. Unsighted is king: the rest is expectation bias.

It should be noted, however, that it is far from trivial to attain exactly the same levels: unfortunately, level differences are where most of the perceived differences stem from, imo.
As I said before, it's the least worst method! 😉

Meaning, no, it's not perfect, it has flaws, but it's a lot better than no double-blind test. @diyiggy
Btw, what you're describing doesn't have anything to do with the ABX test itself, but with the complexity of performing tests.

But the main message is all about awareness of the fact that we don't just listen and judge with our ears.
Or in much simpler words: when it looks pretty and expensive, it is probably better, therefore we think it's better.
Even when the difference isn't there (hooray for the frontal lobes in our brains).

With blind testing, those kinds of judgements are at least reduced.

Again not perfect, but at least better than none at all.

Anyway, double-blind experiments are fully respected, extremely well documented and standardized in science, so I am not going to debate their validity.

Btw, you can easily do a double-blind ABX test online or with a small sample group.
It doesn't make the test less valid, it only makes the conclusion (sometimes) less valid.
There are obviously always exceptions, but that is the nature of science.
It's by definition a matter of statistics, since we can't measure every possible situation (because that would take an infinite amount of time).
Meaning that there is always a chance that the outcome will be different.

But knowing that a certain percentage falls within the conclusion of an experiment is a much better starting point than not knowing anything at all!
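That statistics point is very concrete for ABX: under the null hypothesis the listener is guessing with p = 0.5 per trial, so the score is evaluated with a simple one-sided binomial test (the 13-of-16 result below is just an example number):

```python
from scipy.stats import binomtest

# Example ABX session: 13 correct identifications out of 16 trials.
n_trials, n_correct = 16, 13

# One-sided test against chance (p = 0.5 per trial under guessing).
res = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"{n_correct}/{n_trials} correct: p = {res.pvalue:.4f}")
```

A p-value of about 0.011 here means a pure guesser scores this well or better only about 1 time in 94, so at the conventional 0.05 level the "no audible difference" hypothesis would be rejected for this listener and session.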
 
Well, I agree, it is like food: a part is devoted to the eyes. I was talking about the results between two blind tests. Basically, if the two loudspeakers already look the same between two tests meant to differentiate them, and the tuning involves the sound only, then the veil in front of them is not useful anymore imho. OK, conditions of tests matter, and it is not the ultimate answer the marketing dept might believe it is (and maybe for the truth of it, as it comes down to selling the listening part of the product between two or several iterations... aka the 'wow' listening factor, not only what is best for long hours of listening).

Well, that was an off-topic of mine about the words 'blind test', not part of what I am after, as indeed I am looking for subjective taste testimonies to judge a tweeter within my budget and road map - mostly asked in order to limit the budget. I think the SB26CDC will be tested, and the cat378. Of course the budget for the two reference models could also drop to a single pair of tweeters, but I do not see clear evidence for one model yet. People agree the problem is more that the crossover and cabinet skills matter.
Well it was an oof topic of mine about the blind test words, not a part of what I think as indeed I am looking for subjective tastes testimonies to judge a tweeter within my budget and road map. Mostly asked to limit the budget. I think the sb26 cdc will be tested and the cat378. Of course the price of the 2 ref coulld also drop for one pair of tweeter only, but I do not see clear evidence for a model yet. People agree the problem is more the crossover and cabinet skills that matter.