WTF!? Wavelet TransForm for audio measurements - What-is? and How-to?

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
One can say wavelets come to the rescue when all the other methods have failed to show the results one is looking for :D

:up:

Your multi-resolution plots IMO seem to be the best approach ever!

At a certain point the question arises anyway whether we can speak of a "frequency content" at all – if the time scale on which the analysis says that "frequency" occurs is veeeeery short....
But that's another thread...


Michael
 
Now my point would be that if the wavelet has a "carrier" frequency of 10 kHz, it is because you are interested in the behavior of the system around 10 kHz and in whether or not there are resonances near that frequency. But the time scale of a resonance with a peak frequency of fp is on the order of 1/fp. To put it another way, you don't look at transients by applying signals of slowly varying amplitude, which is what you have if Fs = 100 and Fc = 10,000. The resulting envelope of the output is more indicative of the steady-state amplitude response of the system than of the transient.
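To put rough numbers on that envelope argument, here is a sketch (Python/NumPy; the sample rate, carrier, and modulation rate are all illustrative choices, not anyone's actual measurement settings) comparing the duration of a Gaussian envelope varying at 100 Hz against the ~1/fp time scale of a 10 kHz resonance:

```python
import numpy as np

fs = 192_000      # sample rate, Hz (illustrative)
fc = 10_000       # carrier ("centre") frequency, Hz
f_env = 100       # envelope modulation rate, Hz (the "Fs = 100" above)

t = np.arange(-0.02, 0.02, 1 / fs)
sigma = 1 / (2 * np.pi * f_env)           # Gaussian width for a 100 Hz envelope
burst = np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * fc * t)

env_duration = 2 * sigma                  # effective envelope duration, ~3.2 ms
resonance_scale = 1 / fc                  # time scale of a resonance at fc, 0.1 ms
# the burst envelope is roughly 30x longer than the resonance it is probing,
# so the output envelope tracks the steady-state response, not the transient
```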

You are describing the limitations of the CSD method.

Transients are no problem for a wavelet. The wavelet's strength is its ability to alter the time-frequency resolution to suit your needs.

Here is a different type of wavelet showing 'transient' phenomena clearly separated in the time domain.



- Elias
 
I bought Sound Easy not too long ago, as I was specifically interested in that analysis.
To my dismay, after getting it and checking it out - no demo version available! - I found no way to import impulse response files to process them any further...
:(

Michael

But you have the ability to measure the impulse with SE, so why the need to import it? Anyway, if all you want to do is convolve a system impulse with another function of time, you can do that in something like Excel.

But I agree with Earl. All these things, CSD, burst response..... are all just post processing of the impulse. Burst response, frequency response, CSD, wavelets..... they are all just different ways of looking at the impulse response. The different formats may show different things more or less clearly, but there is no new information in them.
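That point is easy to demonstrate: given the impulse response, a burst response is nothing more than a convolution. A sketch with a toy one-resonance impulse response (all values are illustrative stand-ins for a measured IR):

```python
import numpy as np

fs = 48_000
t = np.arange(0, 0.01, 1 / fs)

# toy impulse response: a single damped resonance (stand-in for a measurement)
f0, decay = 2_000, 500.0
ir = np.exp(-decay * t) * np.sin(2 * np.pi * f0 * t)

# Hann-shaped tone burst at the resonance frequency, 4 carrier cycles long
n = np.arange(4 * fs // f0)
burst = np.hanning(len(n)) * np.sin(2 * np.pi * f0 * n / fs)

# burst response = plain convolution of the burst with the impulse response;
# there is no information here that was not already in `ir`
burst_response = np.convolve(burst, ir)
```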
 
But you have the ability to measure the impulse with SE, so why the need to import it?

Sure – but do I have to repeat all my previous measurements? And what about all the IR files we can give and get elsewhere – look at the horn honk thread for a good example...

Anyway, if all you want to do is convolve a system impulse with another function of time, you can do that in something like Excel.

For now I'd be happy to do true wavelet analysis in Octave – I haven't gotten around to it yet.
Jean-Michel's "quasi wavelet" code works like a charm for now...

Maybe you could drop a few lines of Octave code?
(Faster running and more elegant coding than in Excel – once you know what to calculate (LOL)! – and far better visualisation of the results – free software, too.)
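Not Octave, but here is a minimal "true wavelet" (Morlet-style, constant-Q) scalogram sketch in Python/NumPy that translates to Octave almost line for line. The function name and every parameter are illustrative choices of mine, not any established tool:

```python
import numpy as np

def morlet_cwt(x, fs, freqs, n_cycles=6):
    """Unnormalised magnitude scalogram of x via Morlet-style wavelets.

    freqs: analysis frequencies (Hz); n_cycles sets the constant-Q
    time/frequency trade-off (more cycles = finer frequency resolution).
    """
    n = len(x)
    X = np.fft.fft(x)
    f = np.fft.fftfreq(n, 1 / fs)
    out = np.empty((len(freqs), n))
    for i, fc in enumerate(freqs):
        sigma_f = fc / n_cycles                       # spectral width at fc
        W = np.exp(-0.5 * ((f - fc) / sigma_f) ** 2)  # Gaussian passband (analytic)
        out[i] = np.abs(np.fft.ifft(X * W))           # envelope in this band
    return out

# usage: scalogram of a Hann-windowed 1 kHz tone (values illustrative)
fs = 48_000
t = np.arange(0, 0.05, 1 / fs)
x = np.sin(2 * np.pi * 1_000 * t) * np.hanning(len(t))
S = morlet_cwt(x, fs, freqs=np.array([500.0, 1_000.0, 2_000.0]))
```

Because the bandwidth scales with fc, low frequencies get long windows and high frequencies short ones – exactly the multi-resolution behaviour discussed above.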

But I agree with Earl. All these things, CSD, burst response..... are all just post processing of the impulse. Burst response, frequency response, CSD, wavelets..... they are all just different ways of looking at the impulse response. The different formats may show different things more or less clearly, but there is no new information in them.

Sure – but have you ever come across an "audio data mining tool" that is focused on the topic of short-time frequency response variations due to recursive reflections?

And what do *you* think is the reason that we do not have such specific tools for quantification of quarter-wave horn honk available? – lack of general interest? – lack of knowledge regarding this specific sonic pattern? – lack of usability? - .....

Michael
 
But I agree with Earl.

Oh oh - you're in trouble now. Agreeing with me in these threads is certain death.

A typical FFT, one which is windowed, is precisely a wavelet with the center frequency at each of the FFT bins and the envelope equal to the window length. Wavelets have the advantage of being able to vary the window length at different rates for different frequencies and to choose the center frequency. This is an advantage in that it is much closer to the way the ear works than the fixed window length of the FFT. But it's not much different from simply doing 1/N-octave averaging. In signal-processing parlance this difference is called parametric versus non-parametric spectral estimation. Of even more importance for impulse responses, IMO, would be Prony estimation (http://en.wikipedia.org/wiki/Prony's_method), but further discussion of that would be hijacking the thread.
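For the curious: Prony's method fits the data with a sum of damped complex exponentials – linear prediction gives a polynomial whose roots are the poles, and a second least-squares solve gives the amplitudes. A textbook sketch (my own minimal implementation, not from any toolbox):

```python
import numpy as np

def prony(x, p):
    """Classic Prony fit: x[n] ~ sum_k A_k * z_k**n with p complex exponentials."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # 1) linear prediction: x[n] + a1*x[n-1] + ... + ap*x[n-p] = 0
    M = np.column_stack([x[p - 1 - k : N - 1 - k] for k in range(p)])
    a = np.linalg.lstsq(M, -x[p:], rcond=None)[0]
    # 2) the poles z_k are the roots of the prediction-error polynomial
    z = np.roots(np.concatenate(([1.0], a)))
    # 3) complex amplitudes by least squares against the data
    V = z[None, :] ** np.arange(N)[:, None]
    A = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
    return z, A

# usage: recover a damped cosine 0.9**n * cos(0.3 n);
# its true poles are 0.9 * exp(+/- 0.3j)
n = np.arange(64)
x = 0.9**n * np.cos(0.3 * n)
z, A = prony(x, 2)
```

The payoff for impulse responses is that each pole directly gives a resonance frequency and decay rate, instead of smearing them across FFT bins.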
 
Hello Jean-Michel,


Now the question is what we can learn from the phase plot


One observation can be made:

Here's J321 horn phase plot:



If we take a slice of the phase at t = 0 ms, we get the conventional phase response for free! One can also use this when aligning crossovers, for example.
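In code, that t = 0 slice is just the phase of the FFT of the impulse response. A sketch with a toy impulse response (the IR parameters are illustrative):

```python
import numpy as np

fs = 48_000
t = np.arange(0, 0.02, 1 / fs)
ir = np.exp(-800 * t) * np.sin(2 * np.pi * 1_500 * t)   # toy impulse response

H = np.fft.rfft(ir)                       # complex frequency response
freqs = np.fft.rfftfreq(len(ir), 1 / fs)  # frequency axis, Hz
phase = np.unwrap(np.angle(H))            # conventional phase response, radians
```

This is the same curve the wavelet phase plot shows along its t = 0 column, which is what makes it usable for crossover alignment.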


:D


Even to study simple 2-way crossovers I find it interesting, as in the example I gave:

http://www.diyaudio.com/forums/atta...audio-measurements-what-how-wavelet_phase.gif

Yes, this is interesting!


- Elias
 
The Impulse Response contains complete information, with the exception of distortion.
What wavelet analysis allows is looking at certain restricted areas.
The first to use wavelets in high-end audio, without even knowing it (I think that name came up later),
was Siegfried Linkwitz with his Cosine Burst Generator. Nowadays he is using a Blackman window.
I found that way very revealing because it shows local problems in stark contrast graphically.
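A shaped burst like Linkwitz's is only a few lines to generate. Here is a sketch of a Blackman-windowed cosine burst (the frequency and length are arbitrary choices of mine, not Linkwitz's settings):

```python
import numpy as np

fs = 48_000          # sample rate, Hz
fc = 1_000           # burst centre frequency, Hz
cycles = 8           # number of carrier cycles in the burst

n = np.arange(int(cycles * fs / fc))
carrier = np.cos(2 * np.pi * fc * n / fs)
burst = np.blackman(len(n)) * carrier   # Blackman-windowed cosine burst
```

Feed this to the system and look at the output envelope: local resonances show up as ringing that outlasts the burst.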
 
That's the point! Until now there has been no good tool for visualising the time domain effects.

- Elias

I do agree with this, but phase is not the issue, not until the phase has shifted by several periods. And then it is group delay that is the more relevant parameter. A phase shift of less than several periods is of no concern whatever. If the wavelet expansions were such that the window length approximated the critical bands, then there would be some reason to believe that the results might show something that a non-parametric approach (i.e. the FFT) would not really be able to show. But I don't believe that what we have been looking at has this characteristic.
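Group delay, for reference, is the negative derivative of phase with respect to angular frequency. A numeric sketch from an impulse response (a pure 0.5 ms delay here, chosen so the expected result is a flat line):

```python
import numpy as np

fs = 48_000
# pure delay of 24 samples (0.5 ms): group delay should be flat at 0.5 ms
ir = np.zeros(256)
ir[24] = 1.0

H = np.fft.rfft(ir)
w = 2 * np.pi * np.fft.rfftfreq(len(ir), 1 / fs)   # angular frequency, rad/s
phase = np.unwrap(np.angle(H))
group_delay = -np.gradient(phase, w)               # seconds
```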

Then it would be extremely interesting to simulate the effect of how masking differs with sound level, as I believe that this is the key to the perception of "harshness".
 
I know about that too.
Together with Bill Waslo I developed a way to separate distortion, called "Distortion Isolation in the Time Domain".
You know about that, of course.
For beginners new to this topic I did not want to make it too complicated.
Let me say that distortion isolation is possible with a music signal, but we found that distortion in a well-designed loudspeaker (typically under 1% harmonic distortion) is masked by the music, so I am fully with you.
When I design a new driver, for example with SEAS, I use multitone (AKA spectral contamination) and the Klippel analyser by default.
My current designs center on raising sensitivity without losing tonal balance (i.e. introducing colouration), not to mention radiation pattern and cabinet colouration.
 
Hello,

I personally think the value of wavelets lies in evaluating performance using music signals. The process would be to evaluate the difference between the wavelet transform of the input and that of the output. The closer the match, the better the fidelity.

Actually, I think the wavelet is the closest analytic signal to a music signal. But it also has a clear mathematical definition, unlike music.

See here an excerpt from a random song, a bass drum waveform (unfiltered):


And here's an example wavelet:


See anything similar? :D


Yes, of course one can compare the input to the output, as is usually done, and plot the residue or error signal.
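That residue computation is a one-liner once the input and output are time-aligned. A sketch with a toy soft-clipping "loudspeaker" (the distortion model and the gain-matching step are my own illustrative choices):

```python
import numpy as np

fs = 48_000
t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * 200 * t) * np.hanning(len(t))   # test signal: windowed tone

# toy loudspeaker: mild soft clipping on top of an otherwise ideal response
y = np.tanh(1.5 * x) / 1.5

# least-squares gain match, then the residue is everything the system added
g = np.dot(y, x) / np.dot(x, x)
residue = y - g * x
error_db = 20 * np.log10(np.linalg.norm(residue) / np.linalg.norm(x))
```

The same subtraction can be done band by band on the wavelet transforms instead of on the raw waveforms.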

- Elias
 
Understood - but it's more than simply nonlinearities; it is all processing errors, noise, and anything that is time variant. In the best case these latter aspects are negligible, in the worst case they dominate. It might be hard to sort out what is what, especially for a loudspeaker, which is anything but time-invariant.
 
I found a novel way to measure noise in speakers too.
I put in a high-level multitone signal and suppress the tones mathematically.
Then I do an inverse FFT back into the time domain, transfer it into a .wav file, and can listen to it.
I call it the Gerhard test because it needs to be called something.
I hope I can find an example of a very low rub-and-buzz speaker a friend designed for me and post it here.
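As I read that description, the recipe is: drive the speaker with tones that sit exactly on FFT bins, zero those bins in the measured output, and inverse-transform what is left. A sketch (the toy distortion model, bin choices, and every parameter are my guesses, not Gerhard's actual settings):

```python
import numpy as np

fs, N = 48_000, 1 << 15
bins = np.array([31, 62, 125, 250, 500, 1000, 2000])  # tone positions as FFT bins
t = np.arange(N) / fs

# multitone stimulus: each tone sits exactly on one FFT bin (integer cycles in N)
rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, len(bins))
x = sum(np.sin(2 * np.pi * (b * fs / N) * t + p) for b, p in zip(bins, phases))

# simulated loudspeaker output with a little rub-and-buzz style distortion
y = x + 0.01 * np.sign(x) * x**2

Y = np.fft.rfft(y)
Y[bins] = 0.0                      # suppress the stimulus tones mathematically
residual = np.fft.irfft(Y, N)      # back to the time domain
# (write `residual` out with the stdlib `wave` module to listen to it)
```

Everything that survives the notching is noise and distortion, which is exactly what you want to audition.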
 