What do you think of passive crossovers?

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
This is really not true at all.

Yes, DSP systems introduce latency, typically on the order of 1-2 ms. However, any decent stereo / multi-channel DSP system will have identical, deterministic propagation delay on each channel, down to a single sample, regardless of equalization adjustments. The designers of DSP processors are not morons. ;)

That is why I said that for one channel, DSP is perfect.

True, if you are mixing and matching channels on separate pieces of DSP equipment of different brands/models (which will probably have different latencies), or have some speakers driven directly or via analog EQ and others via DSP, you'll run into problems with differences in latency, something that will catch out the unwary.

I agree these are recipes for disappointment, but it is not the situation I have tested. Two separate but identical DSPs will have minute and varying latency differences. Call it jitter if you like.

Most decent DSP systems will also allow you to add extra adjustable latency to a very fine degree, in case you do need to use two different pieces of DSP equipment on different channels in the same system. (I would still only ever use a stereo DSP for the left and right channels, though.)

Yes, in one integrated DSP that services both L+R, H+L, latency issues can be overcome. But I want to have my xover next to the amplifiers, which are next to the loudspeakers (different reasons for that), and that implies that for what I am doing, two separately running DSPs are required (or analog xovers).

Double-blind tests? ;) Did you also measure the phase shift and time delay of the two channels on a stereo DSP to see if there was a discrepancy, or are you only going by a sighted listening test?

Sighted listening test, and I know all the deficiencies, but if the differences are obvious, it is good enough. Also, it might be possible to measure interchannel phase aberrations using music signals, but I certainly can't do it with the stuff in my workshop.

Besides, it's important to realise that inter-aural phase differences are only used by the brain up to about 800 Hz to determine direction; above this frequency it's amplitude differences and the HRTF together that provide left-right localization.

The ear transmits phase information to the auditory nerve up to around 3,500 Hz. The way this works is that the neurons connected to the inner hair cells always fire at zero crossings in the narrow frequency band they are monitoring. They don't fire at every zero crossing - the more intense the signal, the more frequently they fire - but when they fire, it is at a zero crossing.

Because of this, the ear is much more sensitive to phase and phase shifts than the initial arithmetic would seem to indicate. After all, taking your 800 Hz, that is more than 1 ms per cycle, so what harm could a couple of µs do? Well, a lot, if you look at the way the ear works. When you detect zero crossings, it is the resolution with which you can detect those zero crossings, and not the time base of the fundamental, that is relevant.

vac
 
That is why I said that for one channel, DSP is perfect.

vac

When using my SigmaDSP-based active crossover with ASIO, in combination with my MIDI keyboard and Synthogy's Ivory piano synthesizer, the latency is low enough that I don't notice any delay between pressing a note and actually hearing it. I also don't have any sync issues between sound and picture when watching videos.

If you had two DSP chips doing exactly the same thing, one in the left loudspeaker and one in the right, then provided they are both fed from the same digital source, surely any differences between the left and right channels would boil down to perhaps a delay of one sample period, which is pretty much negligible.
 
That is why I said that for one channel, DSP is perfect.
But that doesn't relate in any way to my comment that a single-unit, two-channel DSP has sample-perfect synchronisation and latency between both channels - there is an unavoidable delay from input to output that must be taken into account in a mixed environment, but there is no difference in delay between left and right in the same DSP.
I agree these are recipes for disappointment, but it is not the situation I have tested. Two separate but identical DSPs will have minute and varying latency differences. Call it jitter if you like.

Yes, in one integrated DSP that services both L+R, H+L, latency issues can be overcome. But I want to have my xover next to the amplifiers, which are next to the loudspeakers (different reasons for that), and that implies that for what I am doing, two separately running DSPs are required (or analog xovers).
What you describe is a somewhat unusual situation, though: by choosing not to use a two-channel DSP, you do potentially introduce some jitter between channels, even if both DSPs are identical.

Whether that's audible at 96 kHz / 24-bit is highly debatable (at 44.1 kHz / 16-bit it may be), but there is an easy solution - use DSP systems that support external clock output and input. Set one channel's DSP as the clock generator and send its clock to the other DSP, which is configured to use the external clock.

Now the two will be locked together, cycle-accurate with no jitter, and if they're the same model chips running the same algorithms, the delay will be identical too. Problem solved.
Sighted listening test, and I know all the deficiencies, but if the differences are obvious, it is good enough. Also, it might be possible to measure interchannel phase aberrations using music signals, but I certainly can't do it with the stuff in my workshop.
Something like ARTA in dual-channel mode can measure the jitter between two channels if you have a good sound card that will genuinely do 96 kHz. (For real - some cards claim it in software but are actually resampling.)

The problem with sighted listening tests is that it's easy to convince yourself there is an obvious difference between two configurations when in reality there is no discernible difference at all once a double-blind test is done. Been there, done that.

The expectation effect is far stronger than most people give it credit for, even for those of us who are aware of it; it's easy to tell yourself "I won't be tricked by it" and yet be fooled all the same.

The amount of jitter we're talking about between left and right is not an obvious deficiency, and I strongly doubt you could pick it up in an actual double-blind test.
The ear transmits phase information to the auditory nerve up to around 3,500 Hz. The way this works is that the neurons connected to the inner hair cells always fire at zero crossings in the narrow frequency band they are monitoring. They don't fire at every zero crossing - the more intense the signal, the more frequently they fire - but when they fire, it is at a zero crossing.

Because of this, the ear is much more sensitive to phase and phase shifts than the initial arithmetic would seem to indicate. After all, taking your 800 Hz, that is more than 1 ms per cycle, so what harm could a couple of µs do? Well, a lot, if you look at the way the ear works. When you detect zero crossings, it is the resolution with which you can detect those zero crossings, and not the time base of the fundamental, that is relevant.
The nerves in the ear don't respond to individual cycles of the waveform as high as 3500 Hz - the firing rate is limited to somewhere around 500-800 Hz - nor do they respond to relative phase at such high frequencies, so I'm not sure where you got that figure from. Reference?

It's easy enough to test experimentally - listen to a 3500 Hz tone on earphones and vary the relative phase between the channels, and you'll quickly find you can't discern any difference even as the tone is rolled through a full 360 degrees of relative phase. The ability to detect relative phase between the ears stops above about 800 Hz.

The ear can detect relative time delay (not phase) at high frequencies, but the absolute minimum discernible delay is about 10 µs. At a 96 kHz sampling rate an error of one full sample period is about 10.4 µs, which is right at the edge of theoretical detection if it were a constant error rather than jitter.

You might find the following interesting on our ability to detect (or not) inter-aural phase:

Introduction to Psychoacoustics - Module 08A

Introduction to Psychoacoustics - Module 08B

I don't disagree that jitter is important - if you're going to use two completely separate DSPs in your left and right channels, you really do want them to share the same clock signal, although at 96 kHz the jitter from not clocking them together is probably below the threshold of detection.
 
I once saw a 21 component passive 3 way XO that begs to differ. :)

I like the ease and versatility of an active unit; the expensive part is having an amp on hand for each pair of drivers. If you have that, then...

Coulda been mine.... 21 parts, plus input LCR conjugate. Schematic not currently released on-line.

Pano remembers them... ;)

Later,
Wolf

IMG_6870.jpg

Attachments: AttitudesBOS-InDIYana2011.jpg, IMG_0729.jpg, IMG_0600.jpg
hey Wolf, impressive and tricky looking xo

had to count my xo components
only 16, and about half of them 'just' resistors

well, there are a couple of free-hanging 'non-active' resistors, from experiments
but I hesitate to completely remove them, yet
because I know from many other experiences that something changes when they are removed
even if they are only connected at one end
sounds silly, eh
 

Attachments: IMG_5350.JPG
The problem with sighted listening tests is that it's easy to convince yourself there is an obvious difference between two configurations when in reality there is no discernible difference at all once a double-blind test is done. Been there, done that.

The expectation effect is far stronger than most people give it credit for, even for those of us who are aware of it; it's easy to tell yourself "I won't be tricked by it" and yet be fooled all the same.

The amount of jitter we're talking about between left and right is not an obvious deficiency, and I strongly doubt you could pick it up in an actual double-blind test.

Simon, I really enjoy this exchange of ideas and hope others will chip in with their specific knowledge. I also agree with your notion that sighted tests are fraught with difficulties, but after a while your ears get trained to become analytical instruments as well. Don't trust them blindly, but sometimes a difference can be quite in your face.

The nerves in the ear don't respond to individual cycles of the waveform as high as 3500 Hz - the firing rate is limited to somewhere around 500-800 Hz - nor do they respond to relative phase at such high frequencies, so I'm not sure where you got that figure from. Reference?

I learned all this stuff from books and suffer from source amnesia, so you had me browsing through Pickles before I thought I had found it on page 42 of Brian Moore's "Psychology of Hearing", but the 3.5 kHz was not there. So back to Pickles' "Physiology of Hearing", page 82: "At high frequencies, above 5 kHz, the nerve fibres fire with equal probability in every part of the cycle. At lower frequencies, however, it is apparent that the spike discharges are locked to one phase of the stimulating wave-form." The 3500 Hz must be somewhere on my bookshelf. It is also loudness-dependent, to complicate it further. Anyway, there is phase locking way above 800 Hz.

It's easy enough to test experimentally - listen to a 3500 Hz tone on earphones and vary the relative phase between the channels, and you'll quickly find you can't discern any difference even as the tone is rolled through a full 360 degrees of relative phase. The ability to detect relative phase between the ears stops above about 800 Hz.

I can't produce such a signal easily, but it would be interesting if you could post a .wav.
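For anyone who wants to try this, such a test signal is easy to generate in software. Here is a minimal Python sketch (the filename, duration, and sweep rate are my own arbitrary choices, not from this thread) that writes a stereo .wav containing a 3500 Hz tone whose right-channel phase is swept through a full 360 degrees over the length of the file:

```python
# Generate the interaural-phase test signal described above: a 3500 Hz
# tone, identical in both channels except that the right channel's
# phase is swept from 0 to 360 degrees over the duration of the file.
import math
import struct
import wave

RATE = 44100        # sample rate, Hz
FREQ = 3500.0       # test tone frequency, Hz
SECONDS = 10        # total duration; phase sweeps 0..360 deg over this time
AMP = 0.5 * 32767   # about -6 dBFS, to leave some headroom

n = RATE * SECONDS
frames = bytearray()
for i in range(n):
    t = i / RATE
    phase_shift = 2.0 * math.pi * (i / n)   # 0 .. 2*pi across the file
    left = int(AMP * math.sin(2.0 * math.pi * FREQ * t))
    right = int(AMP * math.sin(2.0 * math.pi * FREQ * t + phase_shift))
    frames += struct.pack("<hh", left, right)   # 16-bit little-endian L/R

with wave.open("phase_sweep_3500hz.wav", "wb") as w:
    w.setnchannels(2)    # stereo
    w.setsampwidth(2)    # 16-bit
    w.setframerate(RATE)
    w.writeframes(bytes(frames))
```

If the claim above is right, nothing should seem to change on headphones at 3500 Hz; regenerating the file with FREQ set below roughly 800 Hz should make the image audibly move as the phase sweeps.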

The ear can detect relative time delay (not phase) at high frequencies, but the absolute minimum discernible delay is about 10 µs. At a 96 kHz sampling rate an error of one full sample period is about 10.4 µs, which is right at the edge of theoretical detection if it were a constant error rather than jitter.

You might find the following interesting on our ability to detect (or not) inter-aural phase:

Introduction to Psychoacoustics - Module 08A

Introduction to Psychoacoustics - Module 08B

I don't disagree that jitter is important - if you're going to use two completely separate DSPs in your left and right channels, you really do want them to share the same clock signal, although at 96 kHz the jitter from not clocking them together is probably below the threshold of detection.

These are interesting links which I would recommend to anybody. Two points:

1) The detection limit for interaural delays is about 10 µs (as measured in the brutal test setup employed; in all likelihood it is even less). You have to separate this from the fact that the "carrier frequency" where this effect is strongest for spatial localization is < 700 Hz. That is dictated by the anatomy of the human head.

2) So, two separate DSPs filtering L and R have to maintain time coherence to < 10 µs. That corresponds to a frequency of 100,000 Hz! Even a 96 kHz sampling rate will lead to problems. And it is not just the sampling: the rest of the chain, in which a lot of processing is going on, adds to the timing errors.

While I am having a go at this, let me also address the point made by Eva. Of course the room may be a big issue, and having parametric EQs with Qs to the moon and back, if so desired, can be great. But there is a place for such a device, and that is before the xovers. At least, that is my philosophy; perhaps for another time.

vac.
 
...

I can't produce such a signal easily, but it would be interesting if you could post a .wav.

LTspice can output .wav files and is free - behavioural sources can use equations with the "time" keyword, which gives the simulation time, for use with trig, exponentials, etc.

These are interesting links which I would recommend to anybody. Two points:

1) The detection limit for interaural delays is about 10 µs (as measured in the brutal test setup employed; in all likelihood it is even less). You have to separate this from the fact that the "carrier frequency" where this effect is strongest for spatial localization is < 700 Hz. That is dictated by the anatomy of the human head.



so far, not too wrong

but the next part is way off:

2) So, two separate DSPs filtering L and R have to maintain time coherence to < 10 µs. That corresponds to a frequency of 100,000 Hz! Even a 96 kHz sampling rate will lead to problems. And it is not just the sampling: the rest of the chain, in which a lot of processing is going on, adds to the timing errors.

even Red Book can give you sub-µs phase resolution of the tone bursts used in these tests
which use ~3 kHz tone-burst "clicks" in headphones to get to the "alarming" sub-10 µs JND values - in a room with loudspeakers the number is 30-50 µs at best, with the < 1500 Hz tones that give better resolution in that setting

no 100 kHz frequency components appear anywhere in the signals used - the relative phase resolution is determined by amplitude resolution in addition to sample rate - it is a multi-cycle correlation process applied to strictly Nyquist-limited audio-frequency signals

for a simplified view just multiply it out: 2*pi*1 kHz*2^15/44100 ≈ 4669 "steps" of phase resolution per sample at the zero crossing of a full-scale 1 kHz sine ~= 5 ns phase resolution with 16/44

digital audio has no problem representing/controlling relative phase to sub-µs between channels - if there is a 1/2-sample latency (not in any chipset sold in the past 20 yrs) you can make a time-interpolation filter in DSP - I have, for industrial instrumentation with a 6-channel multiplexed ADC - it works

analog reconstruction filter component tolerances will give bigger group-delay errors than the digital audio representation limits

as amplitude goes down, "phase resolution" decreases - but I don't think there is any problem with applying dither in these tests - easily giving 20 dB better resolution near/below 1 kHz
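The "multiply it out" figure above is easy to verify numerically. Here is a small Python check (my own arithmetic, using the same 16/44 assumptions) of the slope of a full-scale 1 kHz sine at its zero crossing:

```python
# Check the back-of-envelope estimate: near its zero crossing, a
# full-scale 16-bit 1 kHz sine changes by about 2*pi*f*2^15/fs LSB
# steps per sample, so one LSB step corresponds to a time of
# 1/(2*pi*f*2^15) seconds.
import math

f = 1000.0            # tone frequency, Hz
fs = 44100.0          # Red Book sample rate, Hz
full_scale = 2 ** 15  # 16-bit signed full-scale amplitude, in LSBs

# slope of the sine at its zero crossing, in LSB steps per sample:
steps_per_sample = 2.0 * math.pi * f * full_scale / fs

# time corresponding to a single LSB step at that slope:
time_per_step = 1.0 / (2.0 * math.pi * f * full_scale)

print(f"LSB steps per sample at the zero crossing: {steps_per_sample:.0f}")
print(f"time per LSB step: {time_per_step * 1e9:.2f} ns")
```

This lands at roughly 4700 LSB steps per sample and about 4.9 ns per step, consistent with the ~5 ns figure quoted above.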
 
Here are some documents with handy reference material (formulas, transfer function graphs, etc.) that might be useful:

It's mostly written about constant-directivity (waveguide) speakers, but the electronics formulas are applicable to anything.

Thanks Wayne. VERY useful :D Edit: after a quick read of the section on resonance in the first one, and a play in Speaker Workshop, I've decreased the 2 kHz dip by about 0.7 dB (with no ill effect on the impedance) and increased the falling response between 13 and 20 kHz by about 0.6 to 1 dB :)

Tony.
 
3) Latency. DSP works by remembering part of the signal, analyzing it, and calculating the desired transformation. That works great for one channel. However, the latency in doing all this is not identical for two DSPs; they will have ever so slightly different latencies, typically increasing with the dissimilarity of the two channels. Since interaural phase differences are a main cue in the perception of the stereo image, this is not what you want. Analog does not suffer from this. This is not just theory, but also something I verified in listening tests.

vac

Makes me wonder how I and thousands of others manage to record and mix 16 or more tracks through a digital console (I know of people who have used hundreds; in that case many tracks are used multiple times but with different DSP), when all the tracks have different EQ on them and use a variety of different DSP plug-ins (compressors, reverbs, delays, etc.), yet the outcome is one coherent song.
 
even Red Book can give you sub-µs phase resolution of the tone bursts used in these tests
which use ~3 kHz tone-burst "clicks" in headphones to get to the "alarming" sub-10 µs JND values - in a room with loudspeakers the number is 30-50 µs at best, with the < 1500 Hz tones that give better resolution in that setting

no 100 kHz frequency components appear anywhere in the signals used - the relative phase resolution is determined by amplitude resolution in addition to sample rate - it is a multi-cycle correlation process applied to strictly Nyquist-limited audio-frequency signals

for a simplified view just multiply it out: 2*pi*1 kHz*2^15/44100 ≈ 4669 "steps" of phase resolution per sample at the zero crossing of a full-scale 1 kHz sine ~= 5 ns phase resolution with 16/44

digital audio has no problem representing/controlling relative phase to sub-µs between channels - if there is a 1/2-sample latency (not in any chipset sold in the past 20 yrs) you can make a time-interpolation filter in DSP - I have, for industrial instrumentation with a 6-channel multiplexed ADC - it works

analog reconstruction filter component tolerances will give bigger group-delay errors than the digital audio representation limits

as amplitude goes down, "phase resolution" decreases - but I don't think there is any problem with applying dither in these tests - easily giving 20 dB better resolution near/below 1 kHz
You seem to know what you're talking about here, so can you answer this:

In the scenario of two separate (but identical) DSPs, each consisting of an ADC, DSP EQ processing, then a DAC at, say, 96 kHz / 24-bit, does the fact that they are not sharing the same phase-locked sampling clock (vs. the case of a single dual-channel DSP with one clock) actually matter, in terms of jitter, drift, or other artefacts?

Is equal processing latency to within one sample all that matters on otherwise identical but separately clocked systems, even for the highly critical left and right channel pair? (As opposed to front vs. rear channels on a separate DSP, where a significant deliberate delay, on the order of milliseconds, is usually added to the rear channels anyway.)

Is there any reason to believe that two identical DSPs would have anything other than exactly the same input-to-output delay, to the same number of samples?
 
Head In a Vice?

Talk about head in a vice. Given: 10 µs is the limit of detectable interaural delay. But we are not talking about interaural delay from a single source and how that affects localization; we are talking about the effect of a 10 µs delay between the left and right speakers driven by different DSP engines. 10 µs is equivalent to a path-length difference of about 0.1". So I would ask: when you listen, is your listening position such that the distance to each speaker is identical to within 0.1"? Suppose that is the case. Then, with the speakers positioned in an equilateral triangle 12 feet on a side and the listener at the apex, if the listener moves his head left or right by about 1", or rotates it by less than 1 degree, the path-length delay between the speakers will change by more than 10 µs. So a stable 10 µs delay, or even several times that, shouldn't be any more detectable in stereo playback than moving your head slightly.
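A quick Python sketch of that geometry (the 343 m/s speed of sound is my assumption; the 12-foot equilateral triangle and 1-inch head shift are from the post above):

```python
# Check the geometry argument: listener at the apex of an equilateral
# triangle, 12 ft on a side, speakers at the other two corners, head
# moved 1 inch sideways off centre.
import math

FT = 0.3048   # metres per foot
IN = 0.0254   # metres per inch
C = 343.0     # assumed speed of sound, m/s

side = 12 * FT
half = side / 2.0
apex_y = side * math.sqrt(3) / 2.0   # height of the equilateral triangle

def path_difference(x):
    """Left-minus-right path length for a listener at (x, apex_y)."""
    d_left = math.hypot(x + half, apex_y)
    d_right = math.hypot(x - half, apex_y)
    return d_left - d_right

# how much path-length difference corresponds to 10 us of delay:
print(f"10 us of delay = {1e-5 * C / IN:.2f} inches of path")

# how much the inter-speaker delay changes for a 1 inch head shift:
delta = path_difference(1 * IN) - path_difference(0.0)
print(f"1 inch head shift changes the delay by {delta / C * 1e6:.0f} us")
```

This gives about 0.13" of path per 10 µs, and a 1-inch head shift changes the inter-speaker delay by roughly 74 µs, several times the 10 µs threshold, which supports the argument above.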
 
John, I quite agree. In the context of speakers in a room, a constant 10 µs delay differential is infinitesimal, amounting, as you say, to a fraction of an inch of error in speaker distance, or an azimuth error of about 1 degree off centre, and this is the reason why I suggested a few posts ago that it would be impossible to detect this small an error in any kind of double-blind testing.

I'm not so sure that the same amount of inter-channel jitter due to separate unsynchronised clocks is as inaudible as a fixed time delay, but even then I think it is very likely to still be inaudible. (I think the audibility of jitter in general is overrated - it's easy to measure, not so easy to prove that it can be heard unless it is severe, particularly at 96 kHz.)

Even if we were to assume that somehow any of this were audible, the solution is simple - clock all the DSPs for each channel together from a single clock, or use a single multi-channel DSP. All possibility of "audible" jitter or inter-channel delay is then removed.

My point always was that, as long as you take the latency of a DSP into consideration in the context of the overall system, there is nothing inherently "inferior" about digital DSP compared to an active analog filter. (Quite the opposite, in my opinion.) It's all about getting the implementation right.
 
for an ADC-DSP-DAC chain with a single clock, the clock stability / jitter performance has little effect at jitter and phase-noise frequencies far below the system latency

of course that isn't necessarily much help if you are doing low crossovers / room bass correction with tens of ms of delay

but crystal-oscillator jitter/phase noise in digital audio is in general way over-hyped compared to the JND thresholds in listening tests: tens to hundreds of ns, which many cheap packaged crystal oscillators easily beat - another numbers race unsupported by controlled, blinded testing
 
:confused: I'm still trying to figure out WTF "interchannel jitter" is. :)
Perhaps not the right terminology.

What I mean is: if you sample the left and right channels with two separate DSPs whose two separate clocks are only nominally the same frequency, in practice there must be some small amount of clock drift between the two, which means that samples are being taken at slightly different points in time in each channel, with the relative time offset between channels varying by up to half a sample as the two clocks drift relative to each other.

It's not the same situation as one channel being delayed by half a sample period, as in some early-'80s DACs, because although the outputs of the two channels will at some points in time be offset by up to half a sample period from each other, the input samples are offset by the same amount.

In theory it shouldn't matter if the input is properly Nyquist-filtered, but no anti-aliasing filter is perfect. Does it matter, and would it be audible? I don't know. Probably not.
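To put rough numbers on the drift scenario, here is a toy Python model of two free-running 96 kHz sample clocks; the 50 ppm mismatch is an illustrative assumption, not a measured figure:

```python
# Model two free-running 96 kHz sample clocks that differ by a small
# ppm error. The instantaneous timing offset between "left" and
# "right" samples drifts linearly and wraps within one sample period.
FS = 96_000.0
PPM = 50.0                      # assumed frequency error between the crystals
period_a = 1.0 / FS
period_b = 1.0 / (FS * (1.0 + PPM * 1e-6))

n = 96_000                      # simulate one second of samples
offsets_us = []
for i in range(n):
    # time of sample i on each clock; fold the offset into +/- half a period
    off = (i * period_a - i * period_b) % period_a
    if off > period_a / 2:
        off -= period_a
    offsets_us.append(off * 1e6)

print(f"sample period: {period_a * 1e6:.3f} us")
print(f"max inter-channel offset over 1 s: {max(abs(o) for o in offsets_us):.3f} us")
```

With any realistic ppm error the inter-channel offset just sweeps slowly back and forth within ±half a sample period (about ±5.2 µs at 96 kHz) rather than growing without bound, which matches the "up to half a sample" description above.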
 