#1 |
diyAudio Member
Join Date: Dec 2015
Hello,
I see high interest in this subject here, so I would like to offer the expertise I have collected over 14 years of serving professionals. Let's start with a discussion with Dr. Floyd Toole about his paper "The Measurement and Calibration of Sound Reproducing Systems": https://secure.aes.org/forum/pubs/journal/?ID=524
The full version of the discussion is available here: http://aplaudio.com/downloads/Reading_Dr_Toole.pdf
And on what can be corrected – loudspeakers or room: http://aplaudio.com/downloads/Equali...udspeakers.pdf
BR, Raimonds

p.s. #1 2016.01.08

Problem to solve: how to get uncolored performance from a loudspeaker that meets the high accuracy requirements of the recording industry. This is a completely different accuracy level compared to sound reproduction for enjoyment.

Main points:
- The idea that pre-distortions introduced into a loudspeaker's performance will be eliminated (compensated, neutralized) by room distortions is utopia and incorrect. We need an uncolored loudspeaker to be able to create an uncolored acoustic image.
- The loudspeaker must be treated as a distributed parameter system, not a lumped one. The power domain is best for describing the performance of a loudspeaker as a distributed parameter system.
- Loudspeaker placement issues must be treated as the speaker's issues, not the room's. Placing a loudspeaker into a corner adds 7 more virtual loudspeakers that work as one system and must be measured as one system. You can model this with Jeff Bagby's spreadsheet, but the best way is simply to measure it in the particular place – the installation.

Background: 36 years in the audio industry as a recording and live sound engineer; 14 years of serving professionals (studios, live PA) with detailed EQ, made possible first by Sound Power Frequency Response measurements and now by high timing resolution TDA measurements. FIR (convolution) EQ has been used from the very start in 2002, published as a patent in 2005, and known under the trademark CONEQ.
Customers: Community Professional Loudspeakers, Panasonic, Hitachi, Kenwood, Acer ...
Last edited by Raimonds; 8th January 2016 at 08:17 AM. Reason: p.s.
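The "7 virtual loudspeakers" from corner placement can be pictured with a simple image-source sketch. This is my own stdlib-only illustration, not CONEQ or Jeff Bagby's spreadsheet: mirroring the source across the three boundary planes of a corner yields 2^3 - 1 = 7 images, and the pressure at a point is the coherent sum over all eight sources.

```python
import cmath
import itertools
import math

# Illustrative image-source model of a loudspeaker near a corner formed by
# three rigid planes at x=0, y=0, z=0 (an idealization, not a measurement tool).
def image_sources(src):
    """Return the 7 mirror-image positions of a source near a three-plane corner."""
    x, y, z = src
    return [(sx * x, sy * y, sz * z)
            for sx, sy, sz in itertools.product((1, -1), repeat=3)
            if (sx, sy, sz) != (1, 1, 1)]  # (1, 1, 1) is the real source itself

def corner_pressure(point, src, freq, c=343.0):
    """Complex pressure at `point` from the real source plus its 7 images (unit strength)."""
    k = 2 * math.pi * freq / c  # wavenumber
    total = 0j
    for s in [src] + image_sources(src):
        r = math.dist(point, s)
        total += cmath.exp(-1j * k * r) / r  # free-field point-source term
    return total
```

At very low frequencies the eight path lengths are nearly equal, so the images add almost in phase (the familiar corner boost); higher up they interfere, producing the combined response that, as argued above, should be measured as one system.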
#2 |
diyAudio Member
Join Date: Jul 2007
Interesting paper. I hope this post stays here; I'll quote it in the convolution thread for reference. Chances are this subject isn't seen as a Full Range topic.
Cross reference here: A convolution based alternative to electrical loudspeaker correction networks
Last edited by wesayso; 7th January 2016 at 10:35 AM.
#3 |
Got Foam?
diyAudio Member
Raimonds,
Welcome! And thanks for posting on this interesting subject. I have only begun to read parts of your writing, as these are lengthy papers. It would be helpful if you could summarize the main points, the background, and the problem you propose to solve in the text of the first post, rather than requiring readers to download the papers and read them in detail. As thread starter you have the ability to edit post 1 forever; I suggest doing that rather than starting a new post.
Thanks,
X
#4 |
diyAudio Member
Join Date: Dec 2015
Thanks for the guidance!
Please find a p.s. added in post #1.
#5 |
diyAudio Member
Join Date: Jul 2007
Raimonds, if I may ask:
The phase linearization that you use, do you linearize phase as if the speaker had output down to DC, or does the phase curve follow the minimum phase of the speaker's FR plot? Having tried both (*) I seem to have a preference for phase following the minimum phase trace of the FR response. Just curious what worked in your experience.

(*) Flat phase over the entire bandwidth, compared to phase which follows the FR minimum phase plot, as if the speaker acts as one big minimum phase full range driver.
#6 |
diyAudio Member
Join Date: Dec 2015
Very nice question, and you have answered it yourself : )
Yes, minimum phase AFR correction does two tasks together: amplitude correction and phase correction. After that you are left with only the non-minimum-phase part caused by the crossover (if it is not linear phase) and, in the worst case, by an inaccurately tuned crossover in terms of time alignment. An enclosure with a port (phase inverter) causes additional group delay as well. Then you may consider additional time/phase correction, as in this example of tuning Quested monitors: delay estimation and measurement
Minimum phase correction shows its beauty especially at LF, where the main resonance of the speaker is corrected in amplitude and in phase as well, by the negative group delay introduced by the correction.
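As a small illustration of the minimum-phase idea discussed here (my own stdlib-only sketch, not CONEQ): given only a magnitude response, the matching minimum-phase spectrum can be recovered with the real-cepstrum folding trick, which is why a minimum-phase correction gets its phase fix "for free" from the amplitude target alone.

```python
import cmath
import math

# Naive O(n^2) DFT/IDFT so the sketch needs only the standard library.
def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def minimum_phase_spectrum(mag):
    """Minimum-phase spectrum whose magnitude equals `mag` (even-length, full FFT grid)."""
    n = len(mag)
    log_mag = [math.log(max(m, 1e-12)) for m in mag]
    cep = [c.real for c in idft(log_mag)]   # real cepstrum of the log magnitude
    folded = [0.0] * n                      # fold to a causal (minimum-phase) cepstrum
    folded[0] = cep[0]
    for i in range(1, n // 2):
        folded[i] = 2.0 * cep[i]
    folded[n // 2] = cep[n // 2]
    return [cmath.exp(v) for v in dft(folded)]
```

Feeding it the reciprocal of a measured loudspeaker magnitude would give a correction whose phase is exactly the minimum phase belonging to that amplitude fix, including the negative group delay around the LF resonance mentioned above.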
#7 |
diyAudio Member
Join Date: Jul 2007
Personally I don't have a crossover; I use REW and DRC-FIR with a combination of settings that gives me as close to minimum phase correction as possible at the listening position, using a frequency dependent view both to judge that and to get there in processing.
Your tools are out of my league, but the points you make in your paper make sense to me and have quite a bit of overlap with what I've been doing. I've experimented quite a bit to find out what works and what doesn't for myself. Going in, one of my goals was time coherency. I was actually surprised to find my preference was different from what I thought was going to be best. I figured I needed flat phase all through the bandwidth, as if my output could reach DC. I have no ports; it's a sealed line array, depending on boost for the low (and high) output, and the use of EQ is mandatory to the design. A REW wavelet at the listening position (left = right channel):
(wavelet plot, first 28 ms view)
Changing a few parameters I could linearize phase down to DC, or follow the FR curve's minimum phase. That last one has won me over. It's more foot tapping and enjoyable, natural sounding I'd say.
Last edited by wesayso; 9th January 2016 at 01:43 PM.
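The "frequency dependent view" can be sketched roughly as follows (my own toy code, not REW or DRC-FIR): each frequency is evaluated from a slice of the impulse response a fixed number of cycles long, so the low end sees the room's contribution while the top end sees mostly the direct sound.

```python
import cmath
import math

# Toy frequency-dependent window: evaluate each frequency from a window of
# `cycles` periods of the impulse response, rather than the whole capture.
def fdw_magnitude(ir, fs, freqs, cycles=6):
    """Magnitudes of `ir` (samples at rate `fs`) at `freqs`, frequency-dependent window."""
    out = []
    for f in freqs:
        n = min(len(ir), int(cycles * fs / f))  # window length in samples
        acc = sum(ir[t] * cmath.exp(-2j * math.pi * f * t / fs) for t in range(n))
        out.append(abs(acc))
    return out
```

With this kind of view, a late reflection shows up in the bass region but not in the treble, which is one way to judge how "minimum phase" the in-room response looks at the listening position.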
#8 |
diyAudio Member
Join Date: Dec 2015
I would like to stress that my previous post's statements are true only if the AFR correction is clearly a loudspeaker correction. In other words, the measured curve represents only the loudspeaker, with no room (or particular measurement point) artifacts included.
Therefore, to make this possible, you must have a clean loudspeaker AFR or SPFR. My work is focused on this. You should try the TDA demo to see the timing picture in a little higher resolution than in the wavelet mentioned above : )
#9 |
diyAudio Member
Join Date: Jul 2007
Maybe I should
#10 |
diyAudio Member
Join Date: Dec 2015
|
It shows graphs but does not save results. And it does not show Non Linear Distortion Analysis curves.