A how-to for a PC XO.

ewildgoose said:
Doing multiple sweeps at different points in a room is also an interesting idea, but it seems extremely difficult (to me) to figure out the maths of averaging the results. Presumably one could time-align them based on group delay, but what would it mean to do a simple average? You would need some equivalent of a complex average, i.e. taking into account both phase and freq response. I haven't found many research papers on how you might do this, but I have done a bit of searching.

Personally I think an interesting idea would be to look at the variation around the average and reduce the IR towards zero in places where there is lots of variation, and less so where you have strong agreement. The idea would be to leave an IR which shows the main reverb and reflections that are strongly present in several locations, but not show (and hence not correct for) the reverb which appears out of phase in different places. Just a thought, but I haven't tried to implement it yet...


Ed W

Your thoughts are quite similar to mine. I haven't done any real research on this front yet, but my brainstorming led me down a similar path. I vaguely remember a course I had as an undergrad in Response Surface Methodology (a stats course). Anyway, the idea I had was to examine the surfaces created by looking at plots in different directions, to see where the problems really exist and disregard more of the variation due to combing or minor movements. I'd imagine you'd find more room problems and ringing through this "filtering" of your response. I had hoped that you could derive a smarter filter as a result.
 
Yes, my idea was to take the IRs and align the group delay approximately using the largest peak (say). Then simply look at the correlation of each point in each IR. Points with high degrees of similarity would tend to move towards the average; those which differed wildly would have to move towards some kind of interpolation.

I don't think it would work too well in practice though because you would need so many IRs to get any kind of statistical significance... I guess that you could use a rolling window along the IR in order to increase the amount of data...

Another idea might be to stick to simple freq response for the multiposition averaging, and only use phase information for the primary listening position.

All the ideas simply boil down to reducing the strength of the correction at certain freqs/times, so I guess another way of approaching the whole thing is to build a strong filter, then reduce its effect at certain freqs after the event...
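Something like this is roughly what I have in mind - an untested numpy sketch only, and the weighting rule is just an illustrative guess rather than an established algorithm. `irs` is assumed to be a list of equal-sample-rate impulse responses measured at different positions:

```python
import numpy as np

def align_by_peak(irs, pre=64, post=4096):
    """Roughly time-align each IR on its largest |peak| and crop a common
    window (pre samples before the peak, post samples after)."""
    aligned = []
    for ir in irs:
        k = int(np.argmax(np.abs(ir)))
        seg = ir[max(k - pre, 0):k + post]
        seg = np.pad(seg, (0, pre + post - len(seg)))   # zero-pad short tails
        aligned.append(seg)
    return np.vstack(aligned)

def variance_weighted_average(irs, strength=1.0):
    """Average the aligned IRs sample by sample, then shrink the average
    towards zero where the positions disagree (large spread relative to the
    mean level) and leave it mostly alone where they agree."""
    a = align_by_peak(irs)
    mean = a.mean(axis=0)
    spread = a.std(axis=0)
    weight = np.abs(mean) / (np.abs(mean) + strength * spread + 1e-12)  # ~1 = agreement
    return mean * weight

# hypothetical usage: target_ir = variance_weighted_average(list_of_measured_irs)
```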

It's tricky anyway...

Good luck
 
Originally posted by ewildgoose
Another idea might be to stick to simple freq response for the multiposition averaging, and only use phase information for the primary listening position.

This was my idea: examining FR at multiple positions to gain a better understanding of what you really want to correct for and what you don't, and phase for one position only - although I imagine the same analysis could be done for phase too. Kinda like partial derivatives: hold some variables constant and then analyse one dimension at a time. Phase would be trickier to analyse, so it would be revision 0.26 or so.
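In code the idea might look roughly like this (untested sketch; `irs` and `primary_index` are placeholder names): average only the magnitudes over the positions, and keep the phase of the main seat:

```python
import numpy as np

def averaged_magnitude_target(irs, primary_index=0, n_fft=65536):
    """Average the magnitude responses (in dB) over all measurement positions,
    but keep the phase measured at the primary listening position only."""
    specs = np.array([np.fft.rfft(ir, n_fft) for ir in irs])
    mag_db = 20 * np.log10(np.abs(specs) + 1e-12)
    avg_mag = 10 ** (mag_db.mean(axis=0) / 20)      # dB (geometric) average of |H|
    phase = np.angle(specs[primary_index])          # phase from the main seat
    return avg_mag * np.exp(1j * phase)

# hypothetical usage:
# H = averaged_magnitude_target(list_of_measured_irs)
# combined_ir = np.fft.irfft(H)   # what you would then invert/correct against
```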
 
Intuitively I don't really see how you can explicitly average the phase information.

Consider an intuitive example for a moment. Imagine the speaker and listening position on the center line of a "box". The direct sound is exactly that, and after a delay you get the reverb from a side wall. Now measure at the point where the sound reflects off the side wall - the direct sound will arrive in half the time, and the reflected sound will arrive at essentially the same instant.

How do you reconcile those two IRs? The frequency response is indirectly affected as a result of cancellation. The min phase is of course going to be broadly the same in both locations (although on- and off-axis are not identical), but the excess phase is totally different, and of course uncorrectable in general.
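For anyone who wants to play with this, here is a rough, untested numpy sketch of the standard real-cepstrum trick for splitting a measured IR into its minimum-phase part and the excess-phase (all-pass) remainder - the excess part is the bit that differs wildly between positions:

```python
import numpy as np

def min_excess_split(ir, n_fft=65536):
    """Split an impulse response into minimum-phase and excess-phase (all-pass)
    parts using the real-cepstrum / folding method. n_fft must be even and
    comfortably longer than the IR."""
    H = np.fft.fft(ir, n_fft)
    cep = np.fft.ifft(np.log(np.abs(H) + 1e-12)).real   # real cepstrum of |H|
    fold = np.zeros(n_fft)
    fold[0] = 1.0
    fold[1:n_fft // 2] = 2.0
    fold[n_fft // 2] = 1.0
    H_min = np.exp(np.fft.fft(cep * fold))               # minimum-phase spectrum
    h_min = np.fft.ifft(H_min).real                      # minimum-phase IR
    H_exc = H / H_min                                    # all-pass remainder, |H_exc| ~ 1
    h_exc = np.fft.ifft(H_exc).real                      # excess-phase IR
    return h_min, h_exc
```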

It's a tricky bugger, isn't it?

Ed W
 
I know it sounds like trolling here, but I don't get room correction (any more) :), and the posts about measurement just reinforce my views.
Having been a room correction advocate for a long time, I finally gave up and revised my views.

IMHO, there is no correct way to "correct" a room; there is no such thing as room modes in music reproduction; you can't just take a room response and convolve it with a more or less inverse filter, etc. What you get is different, subjectively maybe better, but not closer to optimal.

What you hear when you try to reproduce a musical event in a room is a superposition of the original signal reproduced by the speakers and the ambient response of the listening room. The ambient response is a definite pattern of early reflections and later reverb, with an RT60 time of up to a second in regular listening rooms.

The real problem with it is not that it will "color" the sound, but that it will tell your brain everything about your listening environment - how big your room is, what shape it is, how live it is, how far away the speakers are, etc. (No wonder you will never confuse the sound of the hi-fi with the original - except in some rare cases when the acoustical signature of your room accidentally closely matches the original recording venue.) This info comes not just from the amplitude decay of the original signal in the reverberant environment, but also from the directional/phase/amplitude/freq. response of the individual early reflections, and the direction/amplitude/freq. response distribution of the reverb.

When you do a single omnidirectional mic measurement in the room you are already doing a very crude averaging, keeping only the amplitude/phase decay info with MLS-type measurements, and even worse, just the long-term average amplitude with the sine sweep. Room modes themselves are no more than long-term amplitude averages of stationary signals in an echoic environment. This measurement is in no way representative of the added sound (ambience!) of your room, so inverting it and using it for room correction sounds useless. You will try to launch the complete averaged room ambient response from a single point together with the original signal, and hope that it will correct the full-sphere ambient response. This is one of the reasons behind the view that you should use room correction in the LF, where the amplitude averaging shows huge FR anomalies and the ear is not too sensitive to directional info.

The only "working" room correction I can imagine nowadays would be a kind of ambisonic room response measurement/inversion for eight or more speakers, with recursive algorithms to correct the secondary effect of the correction signals, but ambisonics has its own problems in the HF. In the end it just isn't worth it. Why not just use a heavily damped room and ambience synthesis?

PS: speaker correction is a completely different, and very useful, topic.
 
Could someone please clearly define the difference between "speaker correction" and "room correction"? Are these two things actually separate when you are correcting speakers that have been placed inside a room? What good does correcting the amplitude/phase response of speakers in an anechoic environment do if you're just going to place them in a room and muck everything up again?
 
m0tion said:
Could someone please clearly define the difference between "speaker correction" and "room correction"? Are these two things actually separate when you are correcting speakers that have been placed inside a room? What good does correcting the amplitude/phase response of speakers in an anechoic environment do if you're just going to place them in a room and muck everything up again?

Ambiguous at best.

Your guess is as good as mine, and I expect everyone has a different definition.

I think of them as:

Speaker correction is XO treatment to flatten the anechoic response in phase, amplitude, etc.

Room correction is physical treatments or using EQ to alleviate room problems.
 
Speaker correction is an attempt to make the speaker as linear as possible in phase and FR in an ideal (anechoic) environment.

The room acoustics themselves are a linear (although not minimum phase) phenomenon, so if your speaker has a bump at a frequency, your total room response will also reflect it, even though an averaged measurement could show a dip at that frequency.
 
fcserei said:
I know it sounds like trolling here, but I don't get room correction (any more) :), and the posts about measurement just reinforce my views.
Having been a room correction advocate for a long time, I finally gave up and revised my views.

IMHO, there is no correct way to "correct" a room; there is no such thing as room modes in music reproduction; you can't just take a room response and convolve it with a more or less inverse filter, etc. What you get is different, subjectively maybe better, but not closer to optimal.


Thanks for a counterpoint to all the gushing that's passed so far.

Like everything, it's completely personal. Accurate it may be, but I agree that the potential is there to rob musicality if overdone.

That's one of the reasons why I correct only for amplitude rather than trying to blanket everything. I think phase compensation can bring benefits, but if you tailor the speaker to the room and get competency into the design it isn't so much of a problem. Physical treatments along with minimal EQ seem to give me the best results for my room and speakers. Others may prefer nothing, or even every form of EQ under the sun to try to defeat the room.

Do it too much and you can hear all the details but everything sounds flat, boring and dull.
 
Try DRC and see if you think it's unnecessary...

Room correction is trying to remove the impact of the room you are listening in, and hence you end up hearing more of the original reverb. It is kind of a superset of speaker correction, which just limits itself to trying to correct a speaker to be as "flat" as possible (or whatever shape you want).

The problem with only correcting the speaker is that what you hear is room + speaker, hence you need to correct the end result which arrives at your ears rather than just the sound leaving the speaker (although at least having that decent is a good start).

You can't make a crap speaker into a perfect one using digital techniques. Nor can you perfectly fix a room. If you want REALLY high-end audio then you need to start with some REALLY good speakers and have an expertly designed, acoustically correct listening room. However, even then DRC will help; but for the rest of the time we don't tend to have access to the above, and so DRC is a cheap way to get us perhaps 40-60% of the way there while still living in a room that doesn't have egg boxes stuck to the wall...


Room Correction is Impossible (a reply from Denis Sbragion)
http://www.duffroomcorrection.com/wiki/Room_correction_limits

Some actual results of DRC in a real room (showing massive improvement in room response)
http://drc-fir.sourceforge.net/doc/drc.html#htoc197

What is room correction?
http://en.wikipedia.org/wiki/Digital_room_correction
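For anyone wondering what the convolver stage in all of this actually does: it's just FIR filtering of the music with the correction filter DRC spits out. A rough offline sketch in Python (untested; the file names are made up, and it assumes a 16-bit stereo WAV plus a raw 32-bit-float filter export):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

fs, audio = wavfile.read("music.wav")                  # assumed 16-bit stereo WAV
audio = audio.astype(np.float64) / 32768.0
fir_l = np.fromfile("drc_left.pcm", dtype="<f4")       # DRC filter, raw float32 (assumed)
fir_r = np.fromfile("drc_right.pcm", dtype="<f4")

out = np.column_stack([
    fftconvolve(audio[:, 0], fir_l),                   # correct the left channel
    fftconvolve(audio[:, 1], fir_r),                   # correct the right channel
])
out /= np.max(np.abs(out)) + 1e-12                     # normalise to avoid clipping
wavfile.write("music_corrected.wav", fs, (out * 32767.0).astype(np.int16))
```

In practice the Foobar convolver, Console or BruteFIR does exactly this in real time.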
 
ShinOBIWAN said:
That's one of the reasons why I correct only for amplitude rather than trying to blanket everything. I think phase compensation can bring benefits, but if you tailor the speaker to the room and get competency into the design it isn't so much of a problem. Physical treatments along with minimal EQ seem to give me the best results for my room and speakers. Others may prefer nothing, or even every form of EQ under the sun to try to defeat the room.

Do it too much and you can hear all the details but everything sounds flat, boring and dull.


If you ONLY correct amplitude then your correction is actually CREATING phase distortion (this follows because you are doing a minimum-phase correction).

So the more amplitude-only correction you do, the more you "damage" the sound. That said, for reasonable amounts of correction the improvement sounds better than the phase distortion which results. Trying to do large amounts of freq-only EQ will tend to damage the sound though (whilst improving it in other ways at the same time).
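To put a rough number on it, here's an untested scipy sketch using a standard RBJ peaking biquad (the frequency, gain and Q are arbitrary example values): an amplitude-only cut like this is minimum phase by construction, and you can read off the phase shift and group delay it drags along with it.

```python
import numpy as np
from scipy.signal import freqz, group_delay

def peaking_eq(f0, gain_db, q, fs):
    """RBJ audio-EQ-cookbook peaking biquad (minimum phase by construction)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
b, a = peaking_eq(f0=60.0, gain_db=-8.0, q=4.0, fs=fs)   # e.g. cut a bass bump by 8 dB
w, h = freqz(b, a, worN=8192, fs=fs)
_, gd = group_delay((b, a), w=8192, fs=fs)               # group delay in samples

print("peak |phase shift|: %.1f deg" % np.degrees(np.abs(np.angle(h))).max())
print("peak |group delay|: %.1f ms" % (1000 * np.abs(gd).max() / fs))
```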

More likely, though, the reason some people hear EQ damaging the sound is poor implementation. The ear is not all that sensitive to the phase distortion of a decent EQ.

Good EQ sounds amazing. But GOOD EQ is VERY hard to do right and can easily do more damage than it fixes. That doesn't make the technique useless, but it does mean that you shouldn't assume there is no benefit until you have heard EQ done well.

DRC (the software package) tries to correct the sound using adjustments which correct for the freq distortion. It's not actually totally possible, but you can go a long way towards it.

Read the DRC readme above for some totally fascinating results from what is the state-of-the-art system right now. DRC is getting quite close to being able to generate filters which approach the limits of what can actually be corrected digitally (understand that digital CANNOT fix all system deficiencies, only some fraction of them).

Good luck

P.S. Like I said in my previous post, I totally agree that *if* you can apply traditional room treatments to your room then these should be done first. Only then use DRC techniques to correct the rest. (In my case I can't implement much in the way of traditional damping though...)
 
Foobar ASIO not working with Console

I have been using a Lynx Two B soundcard and Foobar for playing music the last 6 months. I use the convolver and the xover plug-in and ASIO output with Foobar. The convolver file is made with DRC.

Now I have tried Console together with Waves LinEq and CurveEq. It works fine, but I do not manage to use the Foobar ASIO plug-in together with Console. I get an error message in Foobar. I manage to use DirectSound, waveOut and kernel streaming, but only on channels 1 and 2. Strange! Has anyone gotten this to work?

Using Console together with Foobar works fine with three stereo channels, but when I use TheaterTek with ffdshow the computer struggles (Pentium IV 3.2 GHz CPU).

I have so far been pleased with the Foobar plug-in solution, and I am not totally convinced that the Console solution is the way to go, even if it is more flexible. The main reason is the amount of CPU power this solution demands.

I have not tried the Linux solution yet because I know from previous experience that it can be a hassle to get everything to work the way I want on Linux.
 
Re: Foobar ASIO not working with Console

harruharru said:
I have not tried the Linux solution yet because I know from previous experience that it can be a hassle to get everything to work the way I want on Linux.

I won't disagree, but if you re-read your message above it doesn't sound like your Windows machine is that easy to configure either!

Given that Dell in the UK will sell you a decent PC for £200 these days (£50 extra gets you a flat screen), it's actually quite economical to simply get a spare machine to put Linux on rather than struggle trying to dual boot.

It does look scary at first, but BruteFIR is so easily reconfigured once you have got the hang of it that it's hard to imagine using anything else.

Good luck all!

Ed W
 
IMPORTANT for users of Waves LineEQ:

Take a look at this:

[attached image: XO1.JPG]


It seems the lowpass behaviour doesn't work like the highpass at all. It's more like a steep shelving function.

If you look at the picture, it actually rolls off and then maintains amplitude after that point.

This completely explains the muddiness I was describing. It's not a case of anything wrong with the cabinet design or drivers, but rather the XO execution itself.

I've noticed that this steep shelving behaviour actually changes with frequency: move the XO point up and the shelf becomes less pronounced, and moving it down causes the opposite. There is a specific high-accuracy LF filter in LineEQ, but it's limited to a filter shape that doesn't match the rest of the filters.
I believe this behaviour I've noted here is actually engineered into LineEQ rather than being any sort of bug.

What this means is that Waves LineEQ ISN'T suitable for use as an XO.
 
I recommend that anyone using Waves be aware of this!

I've never been 100% happy with my ATC-to-bass-driver cross and it's very easy to see why now. I've been through 2 drivers in an attempt to fix this problem before discovering the LineEQ 'bug'.

Therefore I suggest you disregard any info in this 'how to' relating to Waves LineEQ.
 
Hmm, well you don't necessarily expect the stop-band rejection to be -150 dB, but although I'm not quite sure what I am looking at, that looks like it only does around a 35 dB stopband, which is probably not enough. (I'm not an expert, but I would have preferred something more like 80 dB, I guess.)

I notice on that screen that you are using linear-phase crossovers - how are you finding these? I might expect them to work well at low freqs, but to cause pre-echo effects at mid freqs and upwards - look at the impulse response and you can see why. However, I have seen a few people suggest that the pre-echo is NOT audible, so I am keen to try for myself.

There is a free filter designer with a GUI on the net. I think it's called "filter designer", but I forget where from now. It lets you fiddle around with stopband depth, phase, etc. - a good tool for helping understand the various compromises and how the windowing affects the shape of the filter.

Otherwise I guess you are back to Matlab and fiddling around...
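Or, for a rough feel of the numbers, an untested scipy sketch (the crossover frequency, tap counts and windows are just example values, nothing to do with the Waves plug-in): design a linear-phase low-pass with the window method and see how deep the stopband really gets, and what latency you pay for it:

```python
import numpy as np
from scipy.signal import firwin, freqz

fs = 44100
fc = 300.0                                             # example crossover frequency

for numtaps, window in [(1024, "hamming"), (4096, "blackmanharris")]:
    taps = firwin(numtaps, fc, window=window, fs=fs)   # linear-phase low-pass
    w, h = freqz(taps, worN=1 << 16, fs=fs)
    stop = np.abs(h)[w > 2 * fc]                       # stopband from an octave above fc
    print("%s, %d taps: worst stopband %.1f dB, latency %.1f ms"
          % (window, numtaps,
             20 * np.log10(stop.max() + 1e-12),
             1000 * (numtaps - 1) / 2 / fs))
```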

Why not splash out on a DCX2496 for the crossover and keep the PC for EQ? It's not a big purchase to discover whether it's really the XO filters causing the problem...

Good luck
 
diyAudio Member
Joined 2004
I've noticed the same behaviour with the Waves C4 IIR also.

It's nothing to do with the soundcard drivers, since I'm analysing the audio stream directly rather than via the outputs.

Hmm, strange. I think I'm going to look at this more closely. There appears to be a workaround, but it involves the use of steep filters only and changes to the accuracy of the FIR filters. We'll see.

BTW: I had a DCX2496 before this. If the Waves doesn't work out there's a raft of others to use.
 
Hi guys,

Quick question on the software setup.

I'm playing around with Console and some plugins. In inserting the blocks for input and output, I'm scratching my head trying to see how wave playback on a system would be routed into or through the Console setup. Any suggestions? This is a friend's system, where he currently is using Foobar and a prototype Master Two pre-amp from Acoustic-Reality. Last time I didn't have a chance to inject something into the inputs of the card as I was looking to use the material on his hard drive.

Thanks in advance for any help.
 