Do measurements of drivers really matter for sound?

You do not need Matlab for that: programs such as Ultimate Equalizer, REW, EqAPO, and VituixCAD can all do it, although their job (REW/VCad) is sometimes restricted to just generating filter coefficients.
You are absolutely correct.

I have not used the tools you mentioned, and I trust you that they are good enough.

I did not say you need Matlab. I said you need math skills. Then you can use tools like Matlab/Octave/SciLab/C/C++/Fortran/Python/Java/whatever else. Of course, you can use specialized user-friendly tools written & tested by somebody else, as long as they provide a result you are happy with. In that case, you do not need any math skills.

All I wanted to say was that achieving linear excellence with a passive design and a simple amp was, and still is, very tricky, but if you add DSP it becomes much simpler. If you believe that non-linear distortions & Barkhausen noise are not an issue at all, that's all you need.
 
But if one is designing a speaker, the most important and first questions should be what SPL and low-end limit are needed. Then one must determine the directivity aimed for. Then one should know how many drivers/ways will be needed, and the approximate size and Xmax of the drivers and the size of the box needed to achieve the goal.
BINGO, Juhazi !

First, let's decide what size pond we want, what shape, and then how deep....

Then, like most of this thread, it's a matter of discussing how much the pond rises when we pee in it! :D
 
  • Like
Reactions: 1 user
One thing to be aware of (as I am sure you are, @mikets42) is that most DSP hardware does not implement the filtering in an exactly mathematically ideal way. For example, if we program (select?) a 2nd-order shelf of +4 dB at 1.2 kHz and Q=0.65, different hardware will implement it in different ways. High-Q PEQs are usually close to exact, and standard low-pass and high-pass filters are usually close to exact. But shelf filters and low-Q PEQs can be different enough from ideal to matter.
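As an illustration only (nobody's actual firmware), here is a minimal Python/SciPy sketch of the textbook-ideal version of that example shelf, assuming the RBJ "Audio EQ Cookbook" low-shelf formulas and an arbitrary 48 kHz sample rate; a hardware unit's measured curve can be overlaid on it to see how far the implementation strays:

```python
# Ideal reference for a +4 dB low shelf at 1.2 kHz, Q = 0.65 (RBJ cookbook formulas).
import numpy as np
from scipy.signal import freqz

fs, f0, gain_db, Q = 48000.0, 1200.0, 4.0, 0.65   # fs is an arbitrary assumption

A = 10.0 ** (gain_db / 40.0)
w0 = 2.0 * np.pi * f0 / fs
alpha = np.sin(w0) / (2.0 * Q)
c = np.cos(w0)

# RBJ low-shelf biquad coefficients
b = A * np.array([(A + 1) - (A - 1) * c + 2 * np.sqrt(A) * alpha,
                  2 * ((A - 1) - (A + 1) * c),
                  (A + 1) - (A - 1) * c - 2 * np.sqrt(A) * alpha])
a = np.array([(A + 1) + (A - 1) * c + 2 * np.sqrt(A) * alpha,
              -2 * ((A - 1) + (A + 1) * c),
              (A + 1) + (A - 1) * c - 2 * np.sqrt(A) * alpha])

f = np.geomspace(20.0, 20000.0, 500)
_, h = freqz(b, a, worN=f, fs=fs)
ideal_db = 20 * np.log10(np.abs(h))
# Overlay the measured transfer function of the DSP box on ideal_db to quantify
# how far its shelf deviates from the mathematical ideal.
```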

j.
 
  • Like
Reactions: 1 user
One thing to be aware of (as I am sure you are, @mikets42) is that most DSP hardware does not implement the filtering in an exactly mathematically ideal way. For example, if we program (select?) a 2nd-order shelf of +4 dB at 1.2 kHz and Q=0.65, different hardware will implement it in different ways. High-Q PEQs are usually close to exact, and standard low-pass and high-pass filters are usually close to exact. But shelf filters and low-Q PEQs can be different enough from ideal to matter.

j.
I've been doing speaker equalization (dirac-ization) since the mid-'90s, in all ways imaginable: on axis, over the listening area, accounting for nearby surfaces (early reflections) and not. That was part of my work. At the time, no user-friendly software existed, nor did I need it.

If you are using a 16-bit fixed-point DSP, then you need FIR and a windowed IR. 32-bit IIR may also work, but is generally not worth the effort of debugging and testing. If you are using a floating-point DSP, you need state-space SOS biquads. In either case, it does not matter which DSP you are using - AVX, TI, or ADI. The implementations must be bit-exact and tested through and through.
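As a rough illustration of that workflow (not mikets42's actual code), here is a minimal Python sketch of a floating-point biquad (SOS) cascade regression-tested against SciPy's reference; the 4th-order Butterworth high-pass and the 48 kHz sample rate are arbitrary assumptions:

```python
# Float SOS biquad cascade, checked against scipy.signal.sosfilt as the trusted reference.
import numpy as np
from scipy.signal import butter, sosfilt

def biquad_cascade(sos, x):
    """Direct-form II transposed biquads; scipy's SOS rows have a0 normalized to 1."""
    y = np.asarray(x, dtype=np.float64).copy()
    for b0, b1, b2, _a0, a1, a2 in sos:
        z1 = z2 = 0.0
        for n in range(len(y)):
            xn = y[n]
            yn = b0 * xn + z1
            z1 = b1 * xn - a1 * yn + z2
            z2 = b2 * xn - a2 * yn
            y[n] = yn
    return y

fs = 48000.0
sos = butter(4, 2500.0, btype='highpass', fs=fs, output='sos')   # two biquad sections

x = np.random.default_rng(0).standard_normal(4800)               # 0.1 s of test noise
y_ref = sosfilt(sos, x)                                          # trusted reference
y_dut = biquad_cascade(sos, x)                                   # "our port" under test

print("max deviation vs. reference:", np.max(np.abs(y_ref - y_dut)))
# A faithful float64 port should come out bit-exact (0.0) or within a few ULPs.
```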

If you run into defective shelf / low-Q problems - fire your DSP engineers.
 
Speakers are Focal Spectral 908.1, ca. 1994. Someone on this forum had exactly these midranges for sale - but in 2008. Unfortunately, Focal does not make anything suitable as a replacement anymore; I already asked a local dealer. I tried to measure some candidates in his shop (car stereo) ... no success. So let's look at other vendors, right? There are so many measurements published, right? But then you read the comments about how badly people disliked the drivers that measured so well. I got some drivers from solen.ca. Again, no luck. There are the acclaimed Purifi drivers, but they are of much lower sensitivity. Little by little, I became very interested in the topic of measurements vs reality, and here we are :)
lucky boyz
 

Attachments

  • Screenshot 2024-03-25 at 16-59-33 Focal 5NV4211.png
Regarding tweeters: let's consider the ATM tweeter from ADAM. This is how it measures on a tone, producing 80 dB SPL:
View attachment 1284543
You can expect lots of distortion if you cross it at about 2500 Hz, right? On a Mozart piano concerto it looks like this:
View attachment 1284548
... quite a lot. But if you cross it at 5000Hz, the residual becomes practically inaudible:
View attachment 1284549

You can not expect such a difference from a sine sweep test, can you?

Sure you can.

The tweeter shows some 5% (about -27 dB) of distortion products at about 2150 Hz. Above 3 kHz, the tweeter displays distortion at about -45 dB ... -50 dB, which is well below 1%, rather a fair 0.5% and even a bit below that. This is what the sine sweep shows.

As you neither specify the filter type nor show the log sweep of the mid, let's assume a standard 4th-order Linkwitz-Riley filter and an idealized midrange with 0% distortion.

Filtering at 2500 Hz (red/green) is a misconception for this tweeter. The filter provides a delta of 5.5 dB between the mid and the tweeter at 2150 Hz, so you get a distortion of -27 dB + -5.5 dB = -32.5 dB at this frequency. Really not brilliant, at around 2.5% distortion. And you waste your tweeter's potential of 0.5% distortion by doing so.

Filtering at 5 kHz (brown/blue) instead provides a delta of -29 dB between the mid and the tweeter at 2150 Hz. So you get -27 dB + -29 dB = -56 dB of distortion at this frequency, which is about 0.15% distortion. So this is overkill.

The target distortion for this tweeter might be 0.3%, to be sure not to waste anything. So you might rather aim at -50 dB @ 2150 Hz. This would be provided by a crossover at about 4 kHz (grey/black), which attenuates some -22 dB at 2150 Hz: -22 dB + -27 dB = -49 dB. This would be the best compromise in terms of optimizing distortion vs. frequency. This compromise will result in a 0.3% ... 0.5% distortion figure for frequencies above 2150 Hz.
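For anyone who wants to check the arithmetic, a small Python sketch of the same estimate, assuming ideal LR4 magnitude responses and the tweeter's roughly -27 dB distortion floor at 2150 Hz read off the sweep:

```python
# LR4 mid/tweeter delta at 2150 Hz for three crossover points, and the resulting
# distortion estimate given a ~-27 dB tweeter distortion floor at that frequency.
import numpy as np

def lr4_lp_db(f, fc):               # ideal LR4 low-pass magnitude in dB
    r = (f / fc) ** 4
    return 20 * np.log10(1.0 / (1.0 + r))

def lr4_hp_db(f, fc):               # ideal LR4 high-pass magnitude in dB
    r = (f / fc) ** 4
    return 20 * np.log10(r / (1.0 + r))

f_dist = 2150.0                     # frequency of the worst distortion products
tweeter_floor_db = -27.0            # from the sine sweep

for fc in (2500.0, 4000.0, 5000.0):
    delta = lr4_hp_db(f_dist, fc) - lr4_lp_db(f_dist, fc)
    total = tweeter_floor_db + delta
    print(f"xover {fc/1000:.1f} kHz: delta {delta:6.1f} dB -> "
          f"{total:6.1f} dB ({10 ** (total / 20) * 100:.2f} %)")
# Deltas of roughly -5, -22 and -29 dB, i.e. about 2.4 %, 0.4 % and 0.15 % distortion.
```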

See? No need for fancy graphics for this kind of work. They might be useful for different goals. And sine sweeps, well applied, are still very useful.


Magnitude.png
 
I did not download the PDF, because I dislike the principle of having to become a "member" of any and every possible and impossible internet structure. I want to keep my data signature as slim as possible. So could you please attach this PDF directly to one of the appropriate diyaudio threads? I could imagine that this might also be helpful for others interested in your (really attractive-seeming!) method, because the info here is incredibly dispersed, and having to proceed through a maze/jungle of structures will act rather dissuasively on many readers, for many individual reasons.

And then ... could you respond in detail to / explicitly object to my post?
 
Thank you to one of my audio fellows for providing me the PDF by email. Page 14 of this PDF refers to a brown noise excitation signal. Wideband signals such as brown noise (and also Mozart) will inherently show a different distortion pattern than single-frequency signals, due to multiple frequencies being randomly excited at more or less the same time. This leads to intermodulation products, adding to the "harmonic" distortion products and the energy-storage artefacts. Mozart does the same. Be aware that with multitone signals there is no way to discriminate between 2nd, 3rd and nth harmonic distortion products.

To assess the whole mix of harmonic and intermodulation distortion, I rather use multisine signals (as opposed to noise or music), which are constant over time. They show the harmonic distortion products and the IM distortion in a reproducible and very distinct way. As with wideband signals, selective analysis of energy storage and, once again, discrimination between the individual harmonic distortion products is not possible. Beware: when generating a multisine, generate the sine peaks in such a way that they do not come in numerical spacings like 1:2, 1:3, 1:4 ... or so. Because by erroneously doing so, the 2nd, 3rd and nth harmonic distortion products will coincide with the corresponding harmonically related excitation frequencies and will therefore be masked.
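A minimal sketch of generating such a multisine; the band, the number of tones and the random phases are arbitrary choices here, the only point being the rejection of integer-ratio spacings:

```python
# Multisine whose components avoid simple integer ratios (1:2, 1:3, ...), so that
# harmonic distortion products do not land on top of other excitation tones.
import numpy as np

fs, dur = 48000, 4.0
n = int(fs * dur)
df = fs / n                                    # FFT bin width

# quasi-log-spaced candidate tones, snapped to exact FFT bins
cands = np.unique(np.round(np.geomspace(100, 10000, 40) / df).astype(int))

tones = []
for k in cands:
    # reject any tone that is an integer multiple (or submultiple) of an accepted one
    if all(k % m != 0 and m % k != 0 for m in tones):
        tones.append(k)

rng = np.random.default_rng(1)
t = np.arange(n) / fs
x = sum(np.sin(2 * np.pi * k * df * t + rng.uniform(0, 2 * np.pi)) for k in tones)
x /= np.max(np.abs(x))                         # normalize to full scale
```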

A swept sine has the merit of exciting the device under test (DUT) at only one frequency at a time. So you get more precise information about the nature of the harmonic distortion products. As the swept sine will not induce any IM distortion, the products are of course lower in intensity and density. Strictly speaking, that is. In practice, it will not exactly be so, because every DUT will show more or less energy storage decaying over time. So the faster you sweep, the more these energy-storage artefacts will blur the result for the next higher frequencies. Therefore it is advisable to sweep rather "slowly" for results that rule out these energy-storage superpositions.
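A minimal sketch of such a "slow" sweep, assuming a plain logarithmic sweep; duration and band are arbitrary:

```python
# Long ("slow") logarithmic sweep, so stored-energy decay at one frequency has largely
# died away before the sweep reaches the next higher frequencies.
import numpy as np
from scipy.signal import chirp

fs, dur = 48000, 30.0                                  # 30 s sweep, 20 Hz .. 20 kHz
t = np.arange(int(fs * dur)) / fs
sweep = chirp(t, f0=20.0, f1=20000.0, t1=dur, method='logarithmic')

# gentle fades at both ends to avoid switching clicks
fade = int(0.05 * fs)
env = np.ones_like(sweep)
env[:fade] = np.sin(np.linspace(0, np.pi / 2, fade)) ** 2
env[-fade:] = env[:fade][::-1]
sweep *= env
```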

To assess energy storage, you will best go with Linkwitz and with shaped tone bursts: https://www.linkwitzlab.com/frontiers_2.htm#M. You may also have a look at the so-called waterfall plots for another representation of energy storage.
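A minimal sketch of such a shaped burst, assuming a Hann (raised-cosine) envelope over a few cycles; frequency and cycle count are arbitrary:

```python
# Shaped tone burst in the spirit of the Linkwitz page above: a sine at the test
# frequency under a Hann envelope a few cycles long, followed by silence so the
# driver's decay can be observed.
import numpy as np

fs = 48000
f0 = 1000.0                        # burst frequency
cycles = 6                         # envelope length in cycles of f0

n = int(round(cycles * fs / f0))
t = np.arange(n) / fs
burst = np.sin(2 * np.pi * f0 * t) * np.hanning(n)
burst = np.concatenate([burst, np.zeros(int(0.1 * fs))])   # 100 ms of trailing silence
```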

So finally, it's a complementary set of standard test methods that will do fine for assessing a driver's low-distortion merits and overall system usability:
1. The sine sweep. Choose the range of low distortion between two higher-distortion border regions, as exemplified in my former post. And listen to the sine: you will basically hear whether a driver sounds clean or coarse within a certain range. Then use the range which is clean in both the graphs and your ears.
2. The multisine method, testing several items. Pick the driver with the lower rubbish carpet.
3. The shaped tone burst method. This test may further brutally restrict the useful range, e.g. for a midwoofer down to lowish regions ...
4. The waterfall plot. It will show more or less the same as the shaped tone burst method, as an overview.

For an example of swept sine vs. multisine you may have a look at
https://www.diyaudio.com/community/...bac25-4-what-to-do.273211/page-2#post-6000414.
Yes, the multisine looks different from the swept sine. Same test conditions for all graphs, but beware: not the same scale.

Personally, I think that by resorting to these four methods, you will get some robust analytical guidelines to set up an acoustically well-behaved system.
 
Last edited:
  • Like
Reactions: 1 user
There are people who DIY loudspeakers for sine wave testing and those who DIY to listen to undistorted music.

Those who care about music (like the topic starter) naturally wonder what those tonal measurements have to do with sound fidelity. For them, sinewave testing is relevant if and only if it provides results fully consistent with subjective perception... which it doesn't. Those obsessed with sinewaves don't give a **** about musical fidelity. For them, sinewave test numbers are the alpha and omega; nothing else matters.

They are radically different people without a shared purpose or language. It's useful to acknowledge the differences and avoid unnecessary arguments, conflicts, etc.
 
Yes, I am serious: I think that only those measurements that are fully consistent with the subjective perception of reproduced [well-recorded uncompressed unaltered acoustic] music are of any relevance.

I proposed a new technique for measuring non-LTI distortions, which I found to be strongly correlated with my and my friends' subjective perception. The reception of this new technique was on the cold side. Moreover, a few people wrote that it is of no interest to them whatsoever because it does not fit into their paradigm. What can I say ... ok, I've heard you.

I have no interest in spending even a second arguing about beliefs.
 
Administrator
Joined 2004
Paid Member
Hi mikets42,
On this I will very strongly disagree with you.

I started my career doing service and designing speaker systems for resale. I used the best tools I could get at the time and going forward.

What I think skews your thinking is the old "we can hear what you can't measure" dogma. Truly, badly outdated thinking to any knowledgeable people who know what they are doing. I'm sure there are a lot of folks who measure, get it wrong, and claim they produced something perfect. You can do your measurements incorrectly, apply the results incorrectly, or even use the resulting product incorrectly. That doesn't mean the measurements are wrong at all.

Top speakers today are not designed solely "by ear". In fact, there is a long history of products, electronic and acoustic, designed by ear alone that were later proved to be seriously flawed. The other issue is that we have self-important people who can't afford the instrumentation and are not properly trained, yet who pretty much need to be seen as experts. Sorry, but if you can't pay these days, you can't play. You must invest the time to work in the field properly to have any voice applicable to reality. What that means is that you can't just sit there reading reviews and "white papers", with no direct experience, and offer a valuable opinion on anything. Heck, most groups can't even manage to set up a comparative test eliminating variables enough to make the test valid. Then they go ahead and write an authoritative report. Others lap it all up.
 
  • Like
Reactions: 1 users
Hi Mikets42.

Any progress is measured against what already exists: anything new will need to show its specific benefits to earn its place in the long row of what already exists. And time will be the boolean arbiter.

What I have done here so far is briefly highlight the benefits of some old methods I personally use with success to lead me to pleasant DIY results. So now ... what is the offense towards you then? I honestly still try to understand the real benefit of your new method, claimed by you to potentially supersede all others, and while doing so, I hope for a dialectic understanding, sharing my input here with ... you.

E.g. I find it interesting to be able to listen to the distortion products isolated from the input signal. Nice. But then? How will this help me further improve my designs? Perform some SPL measurements on these THD residuals? Your beautiful 3D graphs hold so much information within them, so what is the "déjà-vu" part, discernible by traditional sinewave testing, and what is the true benefit, pinpointing all these nonlinearities you describe in your paper? Please explain.

Your method is too new, so you might have to explain several times, from different viewpoints. And these viewpoints will be offered to you. Here. By misunderstandings, by critics, by questions, by disbelief. We are neither dummies here, nor snipers. Many of us are just educated sceptics. So of course you must be able to stand misunderstandings and also show some patience.

May I point to your own words from within your 2022 paper: "Digital cameras are much more complicated than any audio reproductions chain components including loudspeakers. Digital cameras progress has been astonishing, with new stars rising and false pretenders being rounded up in a course of few years. In a large part, this progress was stipulated by timely competent objective review sites with proper measurements, such as dpreview and dxo. The vendors’ marketing statements and supplied data became irrelevant. Nobody read them – everyone read the trusted dpreview. If a camera was not reviewed by them – it did not exist. Such reviews served as a market feedback error sensor, and this feedback control loop brought digital cameras from mid-90s infancy into maturity in less than 20 years. Nothing like dpreview have ever existed in the audio field."

Wrong. A dpreview for audio has existed. And if you take these, your own, words seriously, then it seems quite contradictory to implicitly blame all these brave and competent audio engineering people and sites that have keenly measured and made their measurements public, and have also developed speaker systems and single drivers, but unluckily had to resort to what you discard as non-perfect methods. Audiosciencereview, AudioXpress (=TestBench), Hificompass, Zaph and many others, to name a few. All these non-existent dilettantes, like us DIYers building for sine wave testing. We all strive to be dpreview. With our humble and maybe outdated, insufficient tools.
 
Last edited:
  • Like
Reactions: 1 user
I proposed a new technique for measuring non-LTI distortions which I found to be strongly correlated with my and my friends' subjective perception.
I have found your observations on non-LTI distortions informative and much appreciated, and I even got myself a pair of Audio-Technica headphones to hear for myself what -80 dB distortion sounds like. I have also downloaded your Matlab code and would like to try it out someday, but this is where I might suggest something: see if a stand-alone application can be written that will perform or analyze the measurement, or try to talk the writers of any of the often-used measurement suites into incorporating this method into future releases. I realize that a dedicated person can learn new things if they apply themselves, but when faced with learning to code in software they have zero background in, or keeping on doing things the way they have been doing them, the vast majority of people will choose the latter. I mean, I'm interested in this and have been dragging my feet at even buying a personal license of Matlab since reading your thread from 2022.

If traditional methods of measuring distortion have been underreporting the degree of non-linear distortion, I'd certainly like to have a toolset that correctly reports on the situation, and even to perform some controlled listening tests of my own.
 
Hi Anatech,

On this, I shall strongly disagree with you on several points:

1. We can not disagree, even mildly, about things that only one of us has ever observed. We can not argue about the height of the Toronto TV tower if I have never seen it (which I have:). We can not argue about Mt Baker's elevation if I see it from my window and have summited it a few dozen times, but you have never seen nor climbed it. As soon as you get a few hours of listening to and analyzing the LTI residual by yourself, on drivers you know very well, then we can agree or disagree. Not sooner.

2. The "we can hear what you can't measure" dogma is actually true. Here "measure" shall read "measure by sinewave", right? You can measure it by other, more advanced methods. Moreover, everyone can hear the Barkhousen noise when it is separated from the main. Moreover, you can not unlearn hearing it. You'll be paying attention to it for the rest of your life:) and it will annoy you a lot. Barkhausen noise is somewhat correlated to the non-linear distortions but you can find ferrite- and neodymium-based drivers with approximately the same level of non-linear distortions - and neodymium will sound much "sweeter".

3. Alas, there are lots of expensive speakers designed by ear.
"Why don't all pros measure loudspeakers? It's a question all readers of professional loudspeaker and audio/video equipment reviews should be asking themselves. Purely subjective views are all well and good, and obviously listening will always be the final judge of performance, but without measurements how are manufacturers kept honest?" Whether these are top or not...

Hi Daihetz,

1. I decisively avoid any detailed interpretation of my graphs due to the danger of introducing a personal bias.
2. Zaph is good, true. audiosciencereview is repulsive snake oil.

Hi Aslepekis,

My typical test bench's UI is 2 HD screens full of constantly changing pixel-wide graphics. I do not know how to make a user-friendly application. Making such apps is an art that is completely out of the area of my competence. I publish my algorithms as GPL. Please, anyone who is capable of it, feel free to port them and make them user-friendly!
 
The following would actually need a thread of its own, but even so: suppose we set aside discussions about the validity of Mikets42's test and declare the test insightful, what are the exact causes of these distortions, and how should they be dealt with in moving-coil loudspeaker motor design?
Are copper rings, extended pole pieces, etc. sufficient, or is a more drastic approach necessary? It seems Purifi is doing fine with a basically classic, but more refined, "multi trick" approach.
 
@mikets42 - Have you worked out a method to quantify your measurements of non-LTI distortion? In other words, reducing it to a number or a matrix of numbers that could be used to compare two similar drivers?

I do not fully grasp the mathematics of how you extract the distortion from the musical signal, but I recognize the validity of this extracted signal.
 
Hi Aslepekis,

My typical test bench's UI is 2 HD screens full of constantly changing pixel-wide graphics. I do not know how to make a user-friendly application. Making such apps is an art that is completely out of the area of my competence. I publish my algorithms as GPL. Please, anyone who is capable of it, feel free to port them and make them user-friendly!
Thank you for the reply, and for publishing your algorithms as GPL too. I appreciate the hurdles of user-friendly software design; there is a reason good programmers and graphic designers cost what they do.

The graphics that you have posted here do provide some encouragement that it could be possible to have useful data presented in a fairly simple way.