Dynamics in Loudspeakers

"This is the point isn't it? If you can't quantify something that you think that you hear then you are just chasing your tail. The ear is simply not a reliable instrument. "

The hearing process is the final arbitrator. It does not matter a hoot what the specs tell us if the final result isn't pleasing. That is, as close to the original sound as we wish to pursue.

Your research into the issues above is interesting to all who have responded, and many more who have just read the posts.

OK, so I (and many others) don't like dome tweeters. My solution is to use something else. But if you found out why, and can use your research to remedy the problems, then your research is worth the effort. And our ears happier.

Geoff.

Edit: Why white noise? More stress in the upper region?
 
Geoff H said:


The hearing process is the final arbitrator. It does not matter a hoot what the specs tell us if the final result isn't pleasing. That is, as close to the original sound as we wish to pursue.


I accept your last sentence, but don't you see that it makes your first sentence irrelevant? If you define "sound quality" as "as close to the original sound as we wish to pursue" then the "hearing process" is outside of this definition - it is irrelevant. The fact that evaluation by "the hearing process" is notoriously unstable, biased, non-repeatable and widely divergent only makes it less attractive as an arbitrator.

You see, again, this is exactly my point. I am as much a music lover (maybe more) as anyone else, and I listen just as much (maybe more) as anyone else, but because I define "sound quality" independent of the hearing process there is never (or seldom) any need to resort to the subjective in my work or discussions. It's not that I don't listen carefully, I do, it's just that I don't feel that this is the ultimate "arbitrator" of the design of an audio system. It is so fraught with pitfalls that as an engineer I find it an unreliable tool.

The use of white noise was probably a mistake - pink noise would more than likely have been a better choice since it is closer to a real signal in spectral distribution.

When I return to Bangkok, I will rerun the tests with Pink noise.
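For anyone who wants to play with the difference at home, here is a rough Python sketch (my own example, not the actual test signal I used) that makes pink noise by shaping the spectrum of white noise to a 1/f power slope:

```python
# Rough sketch: pink (1/f) noise made by shaping the spectrum of white noise.
# This only illustrates the spectral slope; it is not a calibrated test signal.
import numpy as np

def pink_noise(n_samples, fs=48000, seed=0):
    """White noise shaped to a 1/f power spectrum (about -3 dB/octave)."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    freqs[0] = freqs[1]                  # avoid divide-by-zero at DC
    spectrum *= 1.0 / np.sqrt(freqs)     # 1/sqrt(f) amplitude -> 1/f power
    pink = np.fft.irfft(spectrum, n_samples)
    return pink / np.max(np.abs(pink))   # normalise to full scale

if __name__ == "__main__":
    x = pink_noise(10 * 48000)
    print("RMS relative to full scale:", np.sqrt(np.mean(x ** 2)))
```

The practical difference is large: over a 20 Hz - 20 kHz band, white noise puts roughly half its power into the top octave, while pink noise puts only about a tenth of it there, which is why white noise is so much harder on a tweeter.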
 
Geoff H said:
OK, so I (and many others) don't like dome tweeters. My solution is to use something else. But if you found out why, and can use your research to remedy the problems, then your research is worth the effort. And our ears happier.

I think that the "why" is clear - they are not constant directivity, not very directional in their passband, and have poor thermal capability. They can have great on-axis frequency response, but IMO this is not a very important criterion. It's the polar response (and maybe thermal compression) that matters and domes just don't do it. No piston-based source can.

And, I think that my research has remedied the problem. But those discussions are more appropriate in another topic. They are well discussed in "Geddes on Waveguides" for instance.
 
gedlee said:


This is the point isn't it? If you can't quantify something that you think that you hear then you are just chasing your tail. The ear is simply not a reliable instrument.

While I do decry casual listening "tests", I must admit to going into our listening room virtually every day and listening. If there is something that I don't like after a very long time (weeks to months), I would certainly look into it. I have never found something that I could reliably hear that I could not find in the measurements.

If you don't have a nice tight closed loop correlating the subjective to objective tests then, for the most part, things will go astray. Most people start with the simplified subjective tests, but never close the loop. This is no way to find optimal designs of loudspeakers. I start with the objective test, show that it is reliable, repeatable and shows differences in systems and then I go on to scale the subjective effect using the objective test as the metric (ruler). Now I know the level of the effect subjectively and I can quantify a design or change on this subjective scale. NOW I can design the system to optimize it for the best subjective impression. Anything else is guesswork.



...
Apparently we have something in common.
😉
 
Geoff H said:
"This is the point isn't it? If you can't quantify something that you think that you hear then you are just chasing your tail. The ear is simply not a reliable instrument. "

The hearing process is the final arbitrator. It does not matter a hoot what the specs tell us if the final result isn't pleasing. That is, as close to the original sound as we wish to pursue.

Your research into the issues above is interesting to all who have responded, and many more who have just read the posts.

OK, so I (and many others) don't like dome tweeters. My solution is to use something else. But if you found out why, and can use your research to remedy the problems, then your research is worth the effort. And our ears happier.

Geoff.

Edit: Why white noise? More stress in the upper region?
I am interested in this thread because dynamics is a very important factor for realistic reproduction of music. However, there are so many aspects that could affect the sound quality that it is necessary to rely on engineering methods to sort things out. One thing I know is that most audio systems do not have a polarity-reversing option, and recording engineers do not always know the polarity of their recording setup. If you go and ask a recording engineer about the polarity of their recording system, most probably don't know anything about it. To make things worse, to an electrical engineer, whether or not an amplifier inverts the polarity of the signal is probably a minor consideration. So the ear may be used to judge realism, but laying out criteria that let a good system be designed more reliably is critical.

In addition to what gedlee mentioned about dome tweeters, domes themselves have resonance modes such that the energy in the diaphragm is hard to dissipate. If the dome is soft, the transients are not as good; if the dome is stiff, then the resonance is strong. In my experience, resonances above 20 kHz still have a significant effect on music reproduction. I'm not going to argue with people on this subject because different people obviously have different ideas.
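Just to illustrate the tradeoff with a toy model (my own numbers, not measurements of any particular dome): treat one breakup mode as a single second-order resonance and look at how its Q sets the ring-down time.

```python
# Toy model: one dome breakup mode treated as a single damped resonance.
# High Q (stiff, lightly damped) stores energy and rings for a long time;
# low Q dies quickly but needs lossy material to get there.
import numpy as np

def ringdown_cycles(f0, Q, decay_db=60.0):
    """Cycles for a resonance at f0 with quality factor Q to decay by decay_db.
    The envelope decays as exp(-pi * f0 * t / Q), i.e. 8.686 dB per neper."""
    tau = Q / (np.pi * f0)              # envelope time constant
    t_decay = tau * decay_db / 8.686
    return f0 * t_decay

f0 = 27e3  # a made-up but plausible metal-dome breakup frequency
for Q in (2, 10, 50):
    print(f"Q = {Q:2d}: about {ringdown_cycles(f0, Q):.0f} cycles to ring down 60 dB")
```

That works out to roughly 2.2 cycles per unit of Q, so a sharp breakup keeps radiating long after the signal that excited it has stopped.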
 
I don't see how you can rule out our hearing/ears/brain as a final judgement ... that's actually what we use in the end when listening to music😀 Something our ears can do and no measurement can is to tell us whether the sound is coherent or not, but measurement may be used to explain WHY it sounds so good, or maybe it really doesn't ... we all know that good sound is a matter of a lot of different conditions and not least the relationship between them ... and you can only measure them individually🙂
 
tinitus said:
I don't see how you can rule out our hearing/ears/brain ... that's actually what we use in the end when listening to music😀


I don't think that I ever "ruled it out" I simply said that it is unreliable and not an effective tool for evaluating audio designs.

If our mental capabilities are so powerful that a patient can report a headache cured by a placebo (a water pill), how can we rely on what we "think" that we hear? We hear what our brains tell us we hear and that is not always reality. To design an audio system we need to use tools and metrics that are more reliable. One has to "close the loop" on those tools to be sure that they truly represent what we hear, but that's a different (and much more difficult) issue than relying on "our hearing/ears/brain" for evaluation. If you don't think that your "hearing/ears/brain" can be tricked, then you haven't been around audio very long.

The point isn't that our hearing is useless, that's not the case; the point is that our brains, our emotions and most importantly our personal biases can often override the reality of what we hear. When a large majority of people report hearing differences in amplifiers when no differences exist (à la Stereophile) then it is clear that the bias to hear things when nothing exists is very strong. How can you rely on this kind of instrument?
 
tinitus said:
btw ..... in many cases I think the XO is the cause of compression, not the drivers


Having studied the problem I can tell you that the XO contributes, but it is only a small factor compared to the drivers. The drivers are where most of the energy is dumped and hence they receive most of the heat. In a compression driver, where we might pad it out by 3 dB or more, there is a greater effect, but it's still less than the driver.
 
Hi Earl

I have been out of town, otherwise I would have replied earlier.
At the same AES meeting that Doug Button presented his paper on power compression, I had presented on the “elimination of power compression” on the Servodrive speakers (a system which used forced air cooling powered by the amplifier signal).
At the time, the effect of heating on low frequency drivers was my concern but the effect is present in all drivers.
As I recall, it seemed that once one exceeds about 1/10 to 1/8 of rated power, one also starts to see a change in driver parameters and sensitivity.
With “modern” glue technology there has been a large increase in power handling capacity for a given voice coil size, but that has been accompanied by a correspondingly large increase in the change in Rdc with temperature.

Nowadays, Pat Brown from ETC (and Synaudcon) uses a tapered pink noise signal to evaluate a loudspeaker’s power handling. Like your test, he judges that once any point in the response curve has departed by 50% (-3 dB) from ideal, one has reached the practical upper “usable” limit (while the speakers are well able to go much louder before failing).

I would suggest pink noise tapered off at each end like Pat uses too; it is “broad” as opposed to white noise, in effect weighted toward the HF end but not as brutal mechanically as “flat” pink noise.
With woofers, one sees two main effects: the T&S parameters change due to increasing Rdc, which changes the system's frequency response and tuning, and that increase in Rdc also lowers the sensitivity. That increase happens within seconds for a woofer; on smaller motor coils it would of course be proportionally faster as the thermal mass is so much less.
Also, “how bad” the effect is depends on what kind of speaker alignment one is using; alignments which have deep impedance dips (like a vented box) show the largest effect, whereas a horn or sealed box shows the least effect (as the average impedance is higher compared to Rdc).
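To put some rough numbers on the Rdc effect, here is a first-order sketch of my own (not from Tom's or Earl's work) using the temperature coefficient of copper. It assumes constant-voltage drive and ignores the reactive part of the impedance and any series padding, so it is closest to the deep-impedance-dip worst case described above.

```python
# Back-of-envelope estimate of thermal power compression from voice-coil heating.
# Assumes a copper coil and constant-voltage drive; ignores the frequency-dependent
# part of the impedance and any crossover padding (i.e. a worst case).
import math

ALPHA_CU = 0.0039   # copper resistance tempco, per deg C (approximate)

def compression_db(delta_t):
    """Sensitivity loss in dB for a voice-coil temperature rise of delta_t.
    With voltage drive, current (and SPL in the mass-controlled band) scales
    as 1/Re, so the loss is roughly 20*log10(Re_hot / Re_cold)."""
    return 20.0 * math.log10(1.0 + ALPHA_CU * delta_t)

for dt in (25, 50, 100, 150):
    print(f"coil {dt:3d} C above ambient -> about {compression_db(dt):.1f} dB of compression")
```

A 100 C rise works out to about 3 dB, which lines up with the commonly quoted power compression figures for drivers run near rated power.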

While I have looked much less into the upper range issues, I have seen something worth mentioning. Damping goo, at least some types, has a loss which has a pronounced dependence on temperature AND recent mechanical history.
It seems likely to me that this is an effect present in some drivers which is separate from changes in Rdc but sort of related by level to power compression.
I think the dynamic performance of loudspeakers/drivers is an area VERY ripe for research, and if anyone can model/measure its effects, it's probably you.
Keep me posted on your work if you pursue this (off line if you wish).

Earl, you also mentioned in an earlier thread that you write most of your measurement routines; I was wondering if you use Matlab for that or what you use.
I got Mathcad years ago and have some things written in it, but it is clearly not as easy to use (as part of a measurement system) as what I have seen Charlie Hughes make in Matlab. With math being one of the harder things for me, I don’t want to go through the learning curve on something new unless it will be an advantage. I worked on various Hypersignal DSP setups hoping to get to test routines that did what I wanted but never got any of the “important” ones running properly.
Not sure if that was always problems with bugs in the software or something I was doing wrong; they worked properly up to some level of complexity and then lost sync.
Anyway, got to run,
Best,

Tom Danley
 
You cannot fool a highly trained ear/brain ... sure there are ways to prove that you can, but listening to music ... no way ... and it has NOTHING to do with placebo effect or deliberate efforts to trick the mind, which of course IS possible

btw ... XOs are really vital to the dynamics in ANY speaker, no doubt ... and why is that ... because they connect ALL the drivers
 
tinitus said:
You cannot fool a highly trained ear/brain ... sure there are ways to prove that you can, but listening to music ... no way ... and it has NOTHING to do with placebo effect or deliberate efforts to trick the mind, which of course IS possible ... I think you are drawing the wrong conclusions

That's just vanity and ego talking. We're all susceptible to it whether we wish to admit it or not.

se
 
tinitus said:
You cannot fool a highly trained ear/brain ... sure there are ways to prove that you can, but listening to music ... no way ... and it has NOTHING to do with placebo effect or deliberate efforts to trick the mind, which of course IS possible

btw ... XOs are really vital to the dynamics in ANY speaker, no doubt ... and why is that ... because they connect ALL the drivers


Steve Eddy said:


That's just vanity and ego talking. We're all susceptible to it whether we wish to admit it or not.

se


Thanks Steve - I was going to say that, and I'll just reinforce your comments as they stand. People really don't get this important point - they think that they are above the human flaws that plague all of us.
 
Tom Danley said:

Nowadays, Pat Brown from ETC (and Synaudcon) uses a tapered pink noise signal to evaluate a loudspeaker’s power handling. Like your test, he judges that once any point in the response curve has departed by 50% (-3 dB) from ideal, one has reached the practical upper “usable” limit (while the speakers are well able to go much louder before failing).

I would suggest pink noise tapered off at each end like Pat uses too; it is “broad” as opposed to white noise, in effect weighted toward the HF end but not as brutal mechanically as “flat” pink noise.


White noise was clearly a mistake, but I would prefer to stick with noise types that are standard, like pink noise.

Earl, you also mentioned in an earlier thread that you write most of your measurement routines; I was wondering if you use Matlab for that or what you use.
I got Mathcad years ago and have some things written in it, but it is clearly not as easy to use (as part of a measurement system) as what I have seen Charlie Hughes make in Matlab. With math being one of the harder things for me, I don’t want to go through the learning curve on something new unless it will be an advantage.


I use MathCad. MatLab and MathCad are fundamentally different animals. MatLab is very cryptic - kind of "C"-like - and works only on vectors and matrices. Not a big problem with "audio" data, as it is usually a vector of time samples, but I much prefer the more generic capabilities of MathCad because I can do so much more than just signal processing with it.

If you look at a MathCad file it is self-documenting, as it is written exactly like one would write out the math. MatLab has statements, and if you don't know those statements it's unintelligible.

I started with MathCad because my first copy cost me $49.95 when MatLab was thousands. I have used it since version 2. Because of its documentation features it is becoming a standard with larger corporations (Ford and GM, for example) for design analysis. MatLab is faster, but since none of what I do is real-time this is not such a big deal.

It used to be that MathCad had a superb graphics add-on in Axum, but that's gone now (versions 12 and 13) and the built-in graphics are about the same as MatLab's. I am now forced to export the data into a package like Origin. When one publishes books or marketing brochures you need very high-res graphics for printing, and MathCad's are not good enough. Neither are MatLab's.

You can do just about any signal processing with either of them, but I would contend that MathCad is the more flexible. MatLab has no algebraic capabilities (only numerical) and MathCad has a version of Maple's algebra processor - I use that a lot because I would never trust hand algebra anymore. Doing Matrix algebra in MathCad is quite impressive. There are lots of other examples.

It used to be that MathCad would choke on very large data sets, but I haven't seen it do that in years. I have recently seen serious memory leaks where the memory usage climbs until the system runs out of memory and you have to restart MathCad, and sometimes even the operating system. I still prefer version 11 to the later versions 12 and 13. I have complained to MathSoft about this on several occasions.

My vote would clearly be with MathCad.
 
Hi,

There seems to be some agreement surrounding the importance of a realistic (music like) test signal. There is also the need to be able to use this signal and quantify system response, with high temporal resolution.

To meet the second criterion, we need a spectrally dense signal across frequency, so that noise doesn't overwhelm the response calculation at any point in time, or at any particular frequency bin in the FFT. I agree with using a frequency response view, as it’s the most intuitive metric available. If we can’t interpret it, even the best measurement is pointless.

In telecom, we used simulated speech generators as test signals, where noise was modulated by a temporally accurate waveform. i.e. the modulation waveform had dynamic statistics which matched average speech.

A similar tack can be taken for music. Many AES papers exist studying the temporal variations in music, where the duration of time that “music” statistically resides at a given level (relative to 100%) is known. Different “types” or pieces of music are analyzed, and some conclusions drawn. Note that these studies are based on different integration times for the “peak” detector, which must be factored into the analysis.

Many AES studies have also been conducted where the spectral density of “music” has been estimated, and an IEC spec derived.

A representative test signal could be constructed in the following manner. Start with white noise, and then shape it to reflect the short term music signal dynamics as defined in the AES/standards work. Here’s a link to one of the pages in a good AES paper by Chapman:
http://www3.sympatico.ca/dalfarra/PhiTone/Chapman.jpg

Next, modulate the noise waveform with your chosen temporal envelope, to make it dynamically vary. I think Chapman’s paper may be of further guidance here. To be completely rigorous, recursively alter the noise feed spectrum until the final results match the target (changing the envelope also changes the spectral statistics).
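Something like this rough Python sketch would get the first two steps going (the filter corners and envelope bandwidth here are placeholders of mine, not the actual IEC/Chapman numbers):

```python
# Sketch of the recipe above: shape white noise toward a music-like spectrum,
# then modulate it with a slowly varying envelope so the short-term level
# statistics look more like programme material than steady noise.
import numpy as np
from scipy.signal import butter, lfilter

fs = 48000
n = fs * 10
rng = np.random.default_rng(1)

# 1. Spectral shaping: a simple band-pass stands in for the published
#    programme-material spectrum.
white = rng.standard_normal(n)
b, a = butter(2, [40 / (fs / 2), 8000 / (fs / 2)], btype="band")
shaped = lfilter(b, a, white)

# 2. Temporal envelope: low-pass filtered noise, rectified, as a stand-in
#    for the level statistics of real music.
be, ae = butter(2, 4 / (fs / 2))              # ~4 Hz envelope bandwidth
envelope = np.abs(lfilter(be, ae, rng.standard_normal(n)))
envelope /= envelope.max()

signal = shaped * (0.2 + 0.8 * envelope)      # keep a floor so it never fully gates
signal /= np.max(np.abs(signal))

# 3. In a rigorous version you would re-measure the long-term spectrum and
#    level statistics here and iterate until they match the targets.
print("crest factor (dB):",
      20 * np.log10(np.max(np.abs(signal)) / np.sqrt(np.mean(signal ** 2))))
```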

Finally, perform the analysis. One really promising way to do this may be to use an LMS (least mean squares) or RLS (recursive least squares) algorithm to model the transfer function, as an impulse response, of the speaker. The reference signal is the input waveform; the o/p is the signal from the mic pre-amp.

Capture two synchronized waveforms: the input and the mic feed. The LMS algorithm can run offline (after appropriate delay synchronization of the two files) and calculate a new impulse response on a sample-by-sample basis. At every new sample, the FFT can be taken of the impulse response.

This can be handily displayed as a new frequency response every sample. Synch it with the input and what you get is a powerful graphic, where the frequency response is shown “modulating” with the music, in real time. Allow an option where the playback speed can be reduced, and add the second graphic of the time varying input, and you can watch the response change in slow-mo, with changes in the input.
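As a concrete (toy) version of the adaptive step, assuming nothing beyond numpy and my own arbitrary parameter choices:

```python
# Toy NLMS tracker: model the speaker + mic path as an FIR filter, update it
# sample by sample from the synchronized drive and mic recordings, and take an
# FFT of the coefficient vector whenever a "current" frequency response is wanted.
import numpy as np

def nlms_track(x, d, n_taps=256, mu=0.5, eps=1e-6, snapshot_every=4800):
    """x: drive signal, d: mic signal (time-aligned).
    Returns a list of (sample index, complex frequency response) snapshots."""
    w = np.zeros(n_taps)
    snapshots = []
    for n in range(n_taps - 1, len(x)):
        xvec = x[n - n_taps + 1:n + 1][::-1]   # most recent sample first
        y = np.dot(w, xvec)                    # model output
        e = d[n] - y                           # error against the measured mic signal
        w += (mu / (eps + np.dot(xvec, xvec))) * e * xvec
        if n % snapshot_every == 0:
            snapshots.append((n, np.fft.rfft(w)))
    return snapshots

# Usage sketch: x and d would be the captured drive and mic waveforms;
# each snapshot is an estimate of the response at that point in the programme.
```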

The difficulty with any temporal analysis is that you need to capture enough time data to get a good FFT, that is, low enough frequency and high enough resolution.

If the FFT is limited to 200 Hz, you see the response above that frequency with good time resolution (i.e. only 5 ms of memory in the graph, since the frequency resolution is roughly the reciprocal of the window length).

If it is felt that tweeter response issues dominate, shorten the FFT time window, which allows less time to be averaged into the calculation and makes it possible to better capture fast heating effects.

Low frequency effects can be studied separately, and thereby not obscure tweeter compression effects.

Note that I’ve oversimplified the LMS/RLS side of things, as they need memory in order to assign taps in the time domain. However, the same tradeoffs can be made there to increase time resolution.

I’ve only just thought about this after reading these posts today, but it’s where I would start.


Cheers,
Dave

PS A second way this could be approached, and one I’ve had some good practical success with, is to bury a tone marker into real music, then just look at changes in the tone marker as the music plays, with its natural dynamics. This will also capture added distortion (harmonic and intermod) effects superimposed on the marker, which isn’t exactly what is being asked for. Still a darn good test for capturing gross temporal distortion + compression effects. Note that the tone marker can be moved in frequency, chirped within the signal etc, to try and arrive at some broader spectrum view. I used this for a different context, and still think the first method would be more appropriate, but threw it out for consideration.
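A rough sketch of the marker idea (frequencies, window lengths and names are mine, not from the system I actually used):

```python
# Bury a low-level probe tone in the music, record the speaker, then track the
# level of that one frequency in the mic signal with a single-bin DFT per block.
import numpy as np

def track_marker(mic, fs, f_marker, block=4096, hop=1024):
    """Return (times, level_dB) of the marker frequency in successive blocks."""
    win = np.hanning(block)
    k = np.arange(block)
    probe = np.exp(-2j * np.pi * f_marker * k / fs)   # single-bin DFT kernel
    times, levels = [], []
    for start in range(0, len(mic) - block, hop):
        seg = mic[start:start + block] * win
        mag = np.abs(np.dot(seg, probe))
        times.append(start / fs)
        levels.append(20 * np.log10(mag + 1e-12))
    return np.array(times), np.array(levels)

# Usage sketch: mix a tone 30 dB or so below the music at, say, 3 kHz, play it,
# and watch the tracked level; any dip as the music gets loud is compression
# (plus whatever distortion products happen to land in that bin).
```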
 
Dave

I like a lot of what you say.

I am not sure that the adaptive algorithm is required though. Clearly if we were looking for time scales of a single sample it would be necessary, but I think that the time scales are more like tens of ms. The problem that I was having was that, using noise, I needed to do some averaging, which meant that the real time length was many times 5 ms. I can only go down to 200 Hz because I have to do gating. I tend to believe that the ear would have trouble detecting these slow modulations at low frequencies, but would find it easy to hear them above about 1000 Hz (gut feeling here).

The adaptive technique could be very useful at minimizing the averaging required, especially since it would be "trained" by the early data and would only need to track the much smaller changes in that data (much like ANC). The number of degrees of freedom in the LMS model could be quite low. One could find and plot the maximum deviation from the low-power (linear) spectrum over the signal duration. This would yield a worst-case measure for that loudspeaker.
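A rough sketch of that worst-case plot, assuming the test signal is noise so that short-time mic spectra can be compared directly (block sizes and names are mine):

```python
# Compare short-time spectra of a high-level run against an averaged low-level
# reference and keep the largest shortfall seen per frequency bin.
import numpy as np

def compression_map(ref, test, fs, drive_gain_db, block=8192, hop=4096):
    """ref: mic capture of the noise at low drive; test: same noise at high drive;
    drive_gain_db: how much hotter the test run was driven. Whatever level is
    missing after removing that gain is compression."""
    win = np.hanning(block)
    ref_blocks = [np.abs(np.fft.rfft(ref[s:s + block] * win))
                  for s in range(0, len(ref) - block, hop)]
    ref_spec = np.mean(ref_blocks, axis=0) + 1e-12
    worst = np.zeros(block // 2 + 1)
    for start in range(0, len(test) - block, hop):
        spec = np.abs(np.fft.rfft(test[start:start + block] * win)) + 1e-12
        shortfall = drive_gain_db - 20 * np.log10(spec / ref_spec)
        worst = np.maximum(worst, shortfall)
    # In practice the test blocks should be lightly averaged as well, or the
    # random variance of the noise will swamp small compression effects.
    return np.fft.rfftfreq(block, 1 / fs), worst
```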

What I did was much simpler however.

To me we need to quantify what we are looking for, then we can identify how best to do it.

Could you supply me with a wav file per what you suggest?
 
My next woofer will be a 21" PRO driver ... I reckon it will have low compression at loud levels ... but my fear is that it will have poor resolution at low levels ... will there be mechanical compression due to too much stiffness?
 