Room correction or speaker correction? What can we do with the DSP power now available?

Hello,

I see there is high interest in this subject here. I would like to offer the expertise I have collected over 14 years of serving professionals.
Let's start with a discussion with Dr. Floyd Toole about his paper "The Measurement and Calibration of Sound Reproducing Systems":
https://secure.aes.org/forum/pubs/journal/?ID=524

The full version of the discussion is available here:
http://aplaudio.com/downloads/Reading_Dr_Toole.pdf

And on what can be corrected, the loudspeakers or the room:
http://aplaudio.com/downloads/Equalizing_loudspeakers.pdf

BR,

Raimonds

p.s. #1 2016.01.08

Problem to solve:

How to get uncolored performance from a loudspeaker to meet the high accuracy requirements of the recording industry. This is a completely different accuracy level compared to sound reproduction for enjoyment.

Main points:

The idea that pre-distortions introduced into a loudspeaker's performance will be eliminated (compensated, neutralized) by room distortions is utopian and incorrect.
We need an uncolored loudspeaker to be able to create an uncolored acoustic image.

The loudspeaker must be treated as a distributed-parameter system, not a lumped one.

The power domain is best suited to describe the performance of a loudspeaker as a distributed-parameter system.

Loudspeaker placement issues must be treated as speaker issues, not room issues.
Placing a loudspeaker in a corner adds 7 more virtual loudspeakers that work as one system and must be measured as one system. You can model this with Jeff Bagby's spreadsheet (a rough illustration of the idea is sketched below), but the best way is simply to measure it in the particular place, i.e., the installation.
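
As a rough illustration of those 7 virtual sources (an illustrative sketch only, with made-up positions and reflection coefficient; this is not Jeff Bagby's spreadsheet or the CONEQ method), one can sum the speaker and its seven corner images as ideal point sources and look at the combined low-frequency response:

Code:
import numpy as np

# Image-source sketch: a speaker near a trihedral corner (floor plus two walls)
# has 2^3 - 1 = 7 mirror images. Summing all 8 ideal point sources at the
# listening position shows the coloration of the combined "system".
# Positions and the reflection coefficient below are illustrative only.

c = 343.0                               # speed of sound, m/s
src = np.array([0.5, 0.6, 0.9])         # speaker position relative to the corner, m
lis = np.array([3.0, 2.5, 1.2])         # listening position, m
refl = 0.9                              # assumed reflection coefficient per boundary

freqs = np.linspace(20.0, 500.0, 2000)  # LF range where the effect dominates
total = np.zeros_like(freqs, dtype=complex)

for sx in (1, -1):                      # mirror across each of the three planes
    for sy in (1, -1):
        for sz in (1, -1):
            img = src * np.array([sx, sy, sz])
            n_refl = (sx, sy, sz).count(-1)        # boundaries used by this image
            r = np.linalg.norm(lis - img)          # path length, m
            total += (refl ** n_refl) / r * np.exp(-2j * np.pi * freqs * r / c)

rel_db = 20 * np.log10(np.abs(total) / np.abs(total).max())
for f, db in zip(freqs[::250], rel_db[::250]):
    print(f"{f:6.1f} Hz   {db:6.1f} dB")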


Background:

36 years in the audio industry as a recording and live sound engineer.
14 years of serving professionals (studios, live PA) with detailed EQ that became possible with Sound Power Frequency Response measurements and now with high-timing-resolution TDA measurements. FIR (convolution) EQ has been used from the very start in 2002.
Published as a patent in 2005. Known under the trademark CONEQ.
Customers: Community Professional Loudspeakers, Panasonic, Hitachi, Kenwood, Acer ...
 
Raimonds,

Welcome! And thanks for posting this interesting subject. I have only begun to read parts of your writing, as these are lengthy papers.

It would be helpful if you could summarize the main points, the background, and the problem you propose to solve within the first post rather than requiring readers to download the papers and read them in detail.

As thread starter you have the ability to edit post 1 forever. I suggest doing that rather than starting a new post.

Thanks,
X
 
Raimonds, if I may ask,
The phase linearization that you use: do you linearize phase as if the speaker had output down to DC?
Or does the phase curve follow the minimum phase of the speaker's FR plot?

Having tried both (*), I seem to have a preference for phase following the minimum-phase trace of the FR. Just curious what worked in your experience.

(*) flat phase over the entire bandwidth, compared to phase that follows the FR's minimum-phase plot, as if the speaker acted as one big minimum-phase full-range driver.
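
For anyone wanting to see what "following the FR minimum-phase plot" means numerically, here is a small sketch (my own Python illustration, not APL's or DRC-FIR's code) that derives the minimum-phase spectrum implied by a magnitude curve via the real cepstrum; the example magnitude, a 2nd-order high-pass at 40 Hz, is made up:

Code:
import numpy as np

def minimum_phase_from_magnitude(mag):
    """Return the minimum-phase spectrum that has the given magnitude.

    mag: magnitude sampled on a full FFT grid (0 .. fs). Uses the
    real-cepstrum folding method: the minimum-phase log-spectrum is the
    causal counterpart of ln|H|.
    """
    n_fft = len(mag)
    log_mag = np.log(np.maximum(mag, 1e-12))   # avoid log(0)
    cep = np.fft.ifft(log_mag).real            # real cepstrum
    w = np.zeros(n_fft)                        # homomorphic folding window
    w[0] = 1.0
    w[1:n_fft // 2] = 2.0
    if n_fft % 2 == 0:
        w[n_fft // 2] = 1.0
    return np.exp(np.fft.fft(w * cep))

# Made-up example: magnitude of a 2nd-order high-pass at 40 Hz, fs = 48 kHz.
fs, n_fft = 48000, 8192
f = np.fft.fftfreq(n_fft, 1.0 / fs)
s = 1j * np.abs(f) / 40.0
mag = np.abs(s**2 / (s**2 + s / 0.707 + 1.0))
h_min = minimum_phase_from_magnitude(mag)
k = np.argmin(np.abs(f[:n_fft // 2] - 40.0))
print("phase near 40 Hz:", np.degrees(np.angle(h_min[k])), "deg")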
 
Very nice question, and you have answered it yourself : )
Yes, the minimum-phase AFR correction does two tasks at once: it corrects both amplitude and phase.
After that, only the non-minimum-phase part remains, caused by the crossover (if it is not linear phase) and, in the worst case, by a crossover that is inaccurately tuned in terms of time alignment. An enclosure with a phase inverter (reflex port) also causes additional group delay.

Then you may consider additional time/phase correction, as in this example of tuning Quested monitors:
delay estimation and measurement

The minimum-phase correction shows its beauty especially at LF, where the main resonance of the speaker is corrected in both amplitude and phase by the negative group delay introduced by the correction.
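
To put a number on that last point, here is an illustrative sketch only (assumed parameters, not the CONEQ algorithm): the speaker's main LF resonance is modelled as a 2nd-order high-pass at 50 Hz, Q = 1.2, the target as a 2nd-order high-pass at 25 Hz, Q = 0.707, and the minimum-phase correction H_target / H_speaker indeed shows negative group delay around the original resonance:

Code:
import numpy as np

# Minimum-phase correction of an LF resonance (made-up example values).
# The correction's group delay is negative near the original resonance,
# which is what also fixes the speaker's phase there.

def highpass2(f, fc, q):
    s = 1j * f / fc
    return s**2 / (s**2 + s / q + 1.0)

f = np.linspace(10.0, 200.0, 4000)
h_corr = highpass2(f, 25.0, 0.707) / highpass2(f, 50.0, 1.2)

phase = np.unwrap(np.angle(h_corr))
gd_ms = -np.gradient(phase, 2.0 * np.pi * f) * 1000.0   # group delay in ms

for fx in (20, 40, 50, 60, 100):
    i = np.argmin(np.abs(f - fx))
    print(f"{fx:4d} Hz   gain {20*np.log10(abs(h_corr[i])):6.1f} dB   "
          f"group delay {gd_ms[i]:6.2f} ms")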
 
Personally I don't have a crossover; I use REW and DRC-FIR with a combination of settings that gives me as close to minimum-phase correction as possible at the listening position, using a frequency-dependent view both to judge the result and to get there in processing.
Your tools are out of my league, but the points you make in your paper make sense to me and have quite a bit of overlap with what I've been doing. I've experimented quite a bit to find out for myself what works and what doesn't.
Going in, one of my goals was time coherency. I was actually surprised to find my preference was different from what I thought would be best. I figured I needed flat phase all through the bandwidth, as if my output could reach DC. I have no ports; it's a sealed line array that depends on boost for the low (and high) output, and the use of EQ is mandatory for the design.
A REW wavelet at the listening position (left = right channel) looks like this:
[Attached image: spectrum.jpg]

First 28 ms view.

Changing a few parameters, I could linearize phase down to DC or follow the FR curve's minimum phase. That last one has won me over: it's more foot-tapping and enjoyable, more natural sounding I'd say.
 
I would like to stress that the statements in my previous post hold only if the AFR correction is clearly a loudspeaker correction. In other words, the measured curve must represent only the loudspeaker, with no artifacts of the room (or of the particular measurement point) included.
Therefore, to make this possible, you must have the loudspeaker's own AFR or SPFR.
My work is focused on this.
You should try the TDA demo to see the timing picture in a bit higher resolution than in the mentioned wavelet : )
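
As a very rough illustration of the general idea behind a sound-power-style measurement (a generic sketch with synthetic data only; this is not the APL/CONEQ algorithm): power-averaging the magnitude responses from many microphone positions suppresses the position-dependent room artifacts and leaves mostly the loudspeaker's own trend:

Code:
import numpy as np

def power_average_db(responses):
    """responses: array of shape (n_positions, n_freqs), linear magnitude."""
    mean_power = np.mean(np.abs(responses) ** 2, axis=0)   # average energy per bin
    return 10 * np.log10(np.maximum(mean_power, 1e-24))

# Synthetic example: a common speaker trend plus position-dependent comb filtering.
rng = np.random.default_rng(0)
freqs = np.linspace(20, 20000, 512)
speaker = 1.0 / np.sqrt(1 + (50.0 / freqs) ** 4)           # made-up LF roll-off
positions = []
for _ in range(64):                                        # 64 mic positions
    delay = rng.uniform(0.5e-3, 5e-3)                      # random reflection delay
    room = np.abs(1 + 0.6 * np.exp(-2j * np.pi * freqs * delay))
    positions.append(speaker * room)

avg_db = power_average_db(np.array(positions))
print(avg_db[::64])                                        # sparse printout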
 
Interesting thread, thanks.

Raimonds, I often find it hard to guess all the abbreviations that are new to me, for example AFR, SPFR, DFR, etc. So as not to get them wrong, and for best understanding, is there a paper somewhere, or could you post some info on what they refer to?
 
So I decided to take the plunge. I was a bit scared at first: what if this software proved my DSP processing wrong? Anyway, I decided I could learn a thing or two just by trying it.

I started as usual, lining up my microphone in REW and making sure the mic was in the sweet spot.
The resulting Wavelet graph in REW:
[Attached image: REW vs APL.jpg]


Next I moved on to the demo version of APL, installed the required software, including the MATLAB compiler, and ran a few tests.
At first I got odd results at 96000, so I decided to try 48000. My difficulty was getting APL to run through JRiver's processing. Not the fault of the APL software; I just needed to find out how to get it to work. If you look at the uncorrected graph, I was seeing a lot of that until I found the right key to get it to run through JRiver's correction.

Once I got there, and after confirming the audio route visually in JRiver, I got these screen grabs:
[Attached image: APL_Demo_wesayso.jpg]


And in 2D view:
[Attached image: APL_Demo_Wesayso2D.jpg]


Mr. Raimonds Skuruls, would you say this is a reasonable result, now viewed in higher resolution? :)

I already said I recognised many of my own priorities in your paper, a way of looking at this audio problem, actually. I enjoyed reading the discussion you had with Toole. I recognised a lot of it in my own rationale where I differed from Toole's opinion, at least in what he made public in his paper.
I just used different software to get this, the stuff I had available to get me there. To me it does look like it works either way. As it should!

Even the listening impressions you quoted from a client(*) did sound very familiar ;). So I figured I should be alright running the Demo. Now you can see the minimum phase behaviour I spoke of at the low end. More pleasing than linear phase down to DC.

To show what my result is without my FIR correction:
[Attached image: APL_Demo_wesayso no cor.jpg]

I keep claiming on this forum that this stuff really works; I hope more people try it.

(*) = on the Prosound website

Fun demo! Thanks for the opportunity. Basically it seems we agree on a lot of points, or at least I like to think so. It took me the better part of this year to get here, using a lot of my time to find out what works and what doesn't, sifting through all the graphs in REW to find my answers and using DRC-FIR outside of its intended scope. The better the measurements became, the more pleasing the sound I got, with all genres I listened to.
No brute-force correction that only works at a single position; I'm using short frequency-dependent windows (as short as I could get away with) to do the correction (a rough sketch of the idea follows below).

Disclaimer: all measurements were made in a "live" living room at the listening position. What I show here is a stereo measurement of my corrected line arrays in my room, corrected with REW, DRC-FIR, JRiver, and a lot of work!
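
For anyone curious what I mean by frequency-dependent windows, here is a rough sketch of the general idea (my own Python illustration, not DRC-FIR's actual code; the impulse response and the window length in cycles are made up):

Code:
import numpy as np

def freq_dependent_response_db(ir, fs, freqs, cycles=5.0):
    """Magnitude response evaluated with a frequency-dependent window.

    For each frequency only the first `cycles`/f seconds of the impulse
    response are used: low frequencies get a long window (the room is
    unavoidable there), high frequencies get a short one that largely
    excludes reflections and so mostly "sees" the loudspeaker.
    """
    t = np.arange(len(ir)) / fs
    out = np.zeros(len(freqs))
    for i, f in enumerate(freqs):
        n = min(len(ir), max(8, int(round(cycles / f * fs))))
        win = np.hanning(2 * n)[n:]                     # fade-out half of a Hann
        seg = ir[:n] * win
        out[i] = np.abs(np.sum(seg * np.exp(-2j * np.pi * f * t[:n])))
    return 20 * np.log10(np.maximum(out, 1e-12))

# Synthetic example: direct sound plus one reflection 12 ms later (made up).
fs = 48000
ir = np.zeros(fs)
ir[0] = 1.0
ir[int(0.012 * fs)] = 0.5
freqs = np.array([50, 100, 200, 500, 1000, 2000, 5000, 10000], dtype=float)
print(freq_dependent_response_db(ir, fs, freqs))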
 
I knew I shouldn't have put away the measurement gear so quickly.
I'll set it up again. I bet I know what you're looking for; I already saw some of that in the AFR graph.

These are full-range line arrays, and I did see that graph but didn't make a screen grab of that tab.

Let me see if I can do another measurement, I have to pick up my kid from school soon.

The AFR graph, new measurement:
[Attached image: APL_AFR_wesayso.jpg]


And the 3D TDA to go with that:
[Attached image: APL_TDA3D_wesayso.jpg]


If you need any other view, let me know quickly: I need to close down in less than 15 minutes, but I have the setup on standby right now.
 

Those results look really good, Wesayso. I stand by my earlier claim that the Two Towers are perhaps the best-measuring loudspeakers in all of hi-fi for "regular-sized room use in the far field when it comes to phase coherency, time accuracy, and low energy smear". I need all those disclaimers, as Scottmoose will quickly remind us that a broad claim of "best" across all areas is not possible.

Well done!

:worship: