What do you think of passive crossovers?

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Thanks Wayne. VERY useful :D Edit: after a quick read of the section on resonance in the first one and a play in speaker-workshop, I've decreased the 2 kHz dip by about 0.7 dB (with no ill effect on the impedance) and increased the falling response between 13 and 20 kHz by about 0.6 to 1 dB :)

Thanks, Tony.

That "Crossover Electronics 101" document is a handout from a seminar I've been doing at various audio trade shows for the past ten years or so. Both it and the "Speaker Crossover" document include Spice models, which are helpful when looking at the LC peaking/damping of a circuit. I've included the models, as well as an executable distribution of AIM Spice at the link below:

 
One more thing real quick. At the "Crossover Electronics 101" seminar, we show a schematic and the response curve it produces, and then we listen to music playing through the actual circuit. That sort of lets everyone associate the "sound" of the circuits they are looking at. We switch between crossover circuits with varying damping levels to hear what different transfer functions sound like. You see the graph and hear how it sounds.

I find this very interesting for people new to the hobby. There are always a lot of "light bulb" moments for the people in the audience, you can see it in their faces. So I encourage everyone to build the circuits in the worksheet and listen to them. At the very least, build a second-order low-pass crossover for a woofer, and run it both with and without a Zobel. Without the Zobel it usually sounds pretty nasal.

I think the sound many people associate with "horn honk" is actually caused by an impedance peak from the crossover's interaction with the driver impedance. Horns generally have an impedance spike near the lower cutoff, and if this isn't properly damped in the crossover, it shows up as a response peak. This is shown in the document, and it is one of the sounds we hear at the seminar. We listen to a properly damped crossover, then switch to a more common "generic" crossover without a correctly configured R1/R2/C1 network. The contrast is striking.
 
At the very least, build a second-order low-pass crossover for a woofer, and run it both with and without a Zobel. Without the Zobel it usually sounds pretty nasal.

I think the sound many people associate with "horn honk" is actually caused by an impedance peak from the crossover's interaction with the driver impedance.
This just falls into the category of poor implementation. It shouldn't occur for anyone reasonably knowledgeable in using design software.

Dave
 
Disabled Account
Joined 2008
John, I quite agree. In the context of speakers in a room, a constant 10 µs delay differential is infinitesimal, as you say amounting to a fraction of an inch of error in speaker distance, or an azimuth error of about 1 degree off centre. This is why I suggested a few posts ago that it would be impossible to detect this small an error in any kind of double-blind testing.

I did a listening test on this just now. I have a PC-based XO where standard VST EQ plugins are used for EQ and XO. I also use the Voxengo Sampledelay plugin for time correction. With this I can delay one or more channels by a given number of samples, and also make direct A-B comparisons. One sample at a 44100 Hz rate equals approx. 22.7 µs. I inserted this plugin into one of the audio channels, and tested a little to find how much (or little) delay was actually audible. I used mono source material to make it easier for myself.

1 sample of delay (22.7 µs) was actually easy to detect when I was switching A-B between delay and no delay (and keeping my head in the same position...).

The audible effect is that the mono phantom image moves over to one side. The more delay, the more image shift.
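To put the one-sample figure in perspective, the delay per sample and its equivalent path-length error in air can be computed directly (the 343 m/s speed of sound and the helper names below are my own illustrative choices, not from the post):

```python
# Delay per sample at a given rate, plus the equivalent path-length
# difference in air (speed of sound ~343 m/s at room temperature).
SPEED_OF_SOUND = 343.0  # m/s

def sample_delay_us(fs_hz: float, n_samples: int = 1) -> float:
    """Delay in microseconds for n samples at sample rate fs_hz."""
    return n_samples / fs_hz * 1e6

def equivalent_distance_mm(delay_us: float) -> float:
    """Path-length difference in millimetres corresponding to a delay."""
    return delay_us * 1e-6 * SPEED_OF_SOUND * 1000

d = sample_delay_us(44100)  # one sample at 44.1 kHz
print(f"1 sample @ 44.1 kHz = {d:.1f} us = {equivalent_distance_mm(d):.1f} mm")
# -> about 22.7 us, i.e. roughly 7.8 mm of speaker-distance error
```

That 7.8 mm is indeed the "fraction of an inch" of speaker-distance error mentioned earlier in the thread.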
 
I did a listening test on this just now. I have a PC-based XO where standard VST EQ plugins are used for EQ and XO. I also use the Voxengo Sampledelay plugin for time correction. With this I can delay one or more channels by a given number of samples, and also make direct A-B comparisons. One sample at a 44100 Hz rate equals approx. 22.7 µs. I inserted this plugin into one of the audio channels, and tested a little to find how much (or little) delay was actually audible. I used mono source material to make it easier for myself.

1 sample of delay (22.7 µs) was actually easy to detect when I was switching A-B between delay and no delay (and keeping my head in the same position...).

The audible effect is that the mono phantom image moves over to one side. The more delay, the more image shift.
Earphones are John's "head in a vice" scenario though. Can you detect a difference on speakers?

If you switch it while you stay perfectly still, perhaps; but if you switch it while you're away from the sweet spot and then come back to where you think it is, I'll bet you won't.

Can you try delays smaller than 22 µs?
 
Disabled Account
Joined 2008
I tested with speakers, and kept my head still while switching the delay. Using mono source made it much easier to detect.

With a real stereo signal it's far less obvious, but the center of the stereo image still shifts a little to one side.

A much larger delay, say 200 µs, is very audible, as the center of the stereo image shifts a lot to one side, or even starts to sound like there is no center at all.
 
In a passive crossover without a Zobel, the corner frequency is shifted by impedance variations.

On the other hand, is it desirable to use a Zobel when active?

No need to use a Zobel if your amplifier is a voltage source. If it has a >0 output impedance, it all gets complicated.
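For reference, a common rule of thumb for sizing a Zobel (conjugate RC network across the driver) is R ≈ 1.25 × Re and C = Le / R². The driver figures below are hypothetical, not from any post in this thread; a minimal sketch:

```python
def zobel(re_ohm: float, le_mh: float) -> tuple[float, float]:
    """Rule-of-thumb Zobel values for a driver with DC resistance Re (ohms)
    and voice-coil inductance Le (mH): R ~= 1.25 * Re, C = Le / R^2.
    Returns (R in ohms, C in microfarads)."""
    r = 1.25 * re_ohm
    c_farads = (le_mh * 1e-3) / r**2
    return r, c_farads * 1e6

# Hypothetical 8-ohm woofer: Re = 6.2 ohm, Le = 0.65 mH
r, c_uf = zobel(6.2, 0.65)
print(f"Zobel: R = {r:.2f} ohm, C = {c_uf:.1f} uF")
# -> R = 7.75 ohm, C = 10.8 uF
```

The network flattens the rising inductive impedance, so the crossover's corner frequency stays where the formulas put it.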

I haven't read all 23 pages of this thread, but I don't think this has been mentioned: it talks of the passive crossover from the point of view of the drivers themselves.

Active Vs. Passive Crossovers

I cross actively to my subwoofers because a £70 amplifier is cheaper than the components needed for a 4th-order 80 Hz XO. Even if I did want to use just one amplifier in my system, I'd expect the resultant DC resistance of the inductors to mess with the bass performance somewhat.

Chris
 
I tested with speakers, and kept my head still while switching the delay. Using mono source made it much easier to detect.

With a real stereo signal it's far less obvious, but the center of the stereo image still shifts a little to one side.

A much larger delay, say 200 µs, is very audible, as the center of the stereo image shifts a lot to one side, or even starts to sound like there is no center at all.

Erik, thanks for doing this test. Now imagine that this happens all the time, because two separate but otherwise identical DSPs churn out their bits with a constantly shifting offset of a couple of frames at the end DACs. This is what I called, for convenience, "interchannel jitter". You can imagine that this would smear the stereo image, but the damage goes beyond that. Much of what we call ambience is also very much phase-related. So you just lose definition.

@JCX: you are describing a different situation. DBMandrake put the question right. But thank you for the tip that LTSpice will produce .WAVs; I've only used the programme for simulations, so I will try to figure out how to do it.

Let me put the same point again, but hopefully a bit better. I did not intend to imply that there are 100 kHz components in a typical musical signal; it was just a way of making more tangible what 10 µs actually is: the period of a 100 kHz wave. For the situation of the two separate DSPs, it is more useful to talk in terms of probability. How large is the probability that both DSPs are at exactly the same stage of the ADC-DSP-DAC cycle and output the right frame at exactly the same time? Even a one-frame difference would lead to a >10 µs timing delay at a 96 kHz sampling rate, and to 23 µs at 44.1 kHz. If this were a constant error, I don't think it would do much harm. However, based on my understanding of the technology (but others might know more, and I am willing to learn) it will not be a constant error. We are right in the audibility ballpark with these kinds of delays.
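In numbers, the one-frame offsets mentioned here are just one sample period, 1/fs, at each rate:

```python
# Timing error from a one-frame offset at common sample rates: simply 1/fs.
for fs in (44100, 48000, 96000):
    print(f"1 frame @ {fs / 1000:g} kHz = {1e6 / fs:.1f} us")
# 44.1 kHz gives ~22.7 us and 96 kHz ~10.4 us, matching the ">10 us"
# and "23 us" figures quoted above.
```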

vac
 
No need to use a Zobel if your amplifier is a voltage source. If it has a >0 output impedance, it all gets complicated.

I haven't read all 23 pages of this thread, but I don't think this has been mentioned: it talks of the passive crossover from the point of view of the drivers themselves.

Active Vs. Passive Crossovers

Chris

This is what I referred to pages ago when I indicated that passive crossovers are often misrepresented. While it is true that a passive crossover network will change the impedance seen by the driver, and hence will affect the damping, that is only part of the story. First, electromotive damping only plays a significant role in controlling the driver's motion around the driver's resonance. Second, if we assume that the driver's motion relates directly to the radiated SPL (which is true, particularly at low frequency), then we can look at what makes the driver move and what its motion is. The point I am trying to make is that if the driver and crossover make the driver move in accordance with a specific transfer function, then it makes no difference whether the driver is connected directly to the amp with an active crossover before the amp, or connected with a passive crossover between the driver and amp. The characteristics of the system (amp, crossover and driver) are the same. That is, if the driver moves with a 2nd-order bandpass response, then its response to any input is that of a 2nd-order bandpass system.

Another point is that direct connection to the amplifier maximizes the effect of the back EMF, since it maximizes the reverse current. Again, that is always touted as a benefit by those who subscribe to the active crossover school. However, maximizing the reverse current also maximizes the distortion associated with the motor/generator nonlinearity. Hawksford has written about this and how it can be countered by current drive. Current drive, of course, requires a high output impedance, which basically reduces the electromotive damping to zero everywhere.

So the point is: yes, direct connection to an amplifier with low output impedance maximizes the reverse current and thus the electromotive damping. But that is not necessarily a good or bad thing in itself. Active proponents are quick to say it's bad because that is what is in their interest. In reality things are not so black and white.
 
The amount of error would shift. If their clocks were perfectly in sync, there'd be no difference. If one ran slightly faster than the other, the difference between the two would gradually increase as one ran away from the other, then they would meet up again but the other way around.

The main issue I see here is where the nature of the problem really lies. Data isn't clocked in and out by the LR clock. In fact the data is clocked in and out by the bit clock, where the data and clock lines have most likely been created and derived from a master clock somewhere whizzing away at something greater than 10 MHz.

The period of a 10 MHz clock is 100 ns. If it's the master clock that's then being divided to create the other clocks, then you'd only get an error as large as what the master clock is capable of producing. We are not talking about standard jitter here either, simply the variable time delay between one DSP and another that would result from a slight difference in their clock speeds. As far as I can see it should be a non-issue.
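A quick sanity check on both scenarios: clocks divided down from one master can only be misaligned on the order of the master period, while two independent crystals drift apart continuously. The 50 ppm offset below is an illustrative assumption, not a figure from the thread:

```python
# Scenario 1: clocks divided from a shared 10 MHz master -> misalignment
# is bounded by the master period.
master_hz = 10e6
print(f"master period = {1e9 / master_hz:.0f} ns")  # 100 ns

# Scenario 2: two independent crystals differing by ppm_offset parts per
# million slip one full sample period in 1 / (fs * ppm * 1e-6) seconds.
fs, ppm = 44100, 50  # illustrative: 44.1 kHz, 50 ppm crystal mismatch
t_slip = 1.0 / (fs * ppm * 1e-6)
print(f"time to slip one sample: {t_slip:.2f} s")  # ~0.45 s
```

So with free-running clocks the interchannel offset would wander through a whole sample on sub-second timescales, whereas a shared master keeps it fixed at nanosecond scale.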
 
Matt, I agree with you that clock misalignment errors are most likely not a major issue. I may have sown some confusion by coining the phrase "interchannel jitter".

What I conceive to be the main source of error is the DSP processing step. I have never programmed a DSP, but otherwise I have done programming from machine level up. A DSP, AFAIK, is just a dedicated CPU running dedicated software. From other programming experience, I can tell you that programs (certainly with a lot of nesting) become probabilistic in the processing time required for a batch, certainly if the information processed is identical in size but different in composition. This would be the case for two stereo channels processed by two different but identical DSPs.

There is one hard fact I know for sure in this context: latency in the brand of DSP I am using is related to the settings applied. More filtering = more calculations = more latency. What I sketched above is an educated guess that similar latency shifts might show up in two DSPs with identical settings but dissimilar complexity in the signals being processed.

vac
 
Hmm it's interesting and it's one thing I don't have a lot of experience in myself (that being the hard coding of how things actually run). I can say that the outputs of a sigmaDSP chip (a single chip here) are all in sync with one another unless the filters demand otherwise (they are still clocked together of course, it's just that the DSP can delay the output by X sample periods).

The sigmaDSP chips work on a sample-to-sample basis, where the core is sent an initiation pulse at the start of every sample (a sample being a full 24-bit word). I do not know quite how this ties in to the processing required, but the way I understand it, the chip itself can do the required calculations in the time it takes for a sample to go by, so that the data is ready to be clocked out of the processor on the next sample. The number of calculations the chip can pull off per sample period ties in directly to how fast the core runs; you cannot exceed it, for obvious reasons. This is inherently predictable: one can accurately say exactly how many calculations (and thus how much time) a certain combination of filters will require. The minimum delay/latency between input and output should be one sample period, unless the filters or on-board ASRCs dictate otherwise.
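The fixed per-sample budget described here falls straight out of core speed divided by sample rate. A small sketch with illustrative figures (the 50 MIPS number is my assumption for an ADAU1701-class part, not taken from the post):

```python
def instructions_per_sample(core_mips: float, fs_hz: float) -> int:
    """Fixed per-sample instruction budget for a sample-synchronous
    DSP core: core instruction rate divided by the sample rate."""
    return int(core_mips * 1e6 // fs_hz)

# Illustrative: a ~50 MIPS core at a 48 kHz sample rate.
print(instructions_per_sample(50, 48000))
# The budget is a hardware constant; it never varies with signal content,
# which is why the processing time per sample is deterministic.
```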

If you had one chip in the left loudspeaker and one chip in the right that were performing the same calculations, the amount of time necessary for each to do its job should be identical down to the last bit of data processed. The only variation between them would be when the start pulse is sent and then how fast the cores are running.

The chips can be clocked from an outside source though, so I'd think that would be the most logical option if you were really concerned about this and wanted to implement the system in such a specific way. Personally, if I wanted the amplifiers close to the loudspeakers, I'd keep all the digital DSP stuff in one box and then use balanced lines out to each amplifier.
 
So the point is: yes, direct connection to an amplifier with low output impedance maximizes the reverse current and thus the electromotive damping. But that is not necessarily a good or bad thing in itself. Active proponents are quick to say it's bad because that is what is in their interest. In reality things are not so black and white.

As an active proponent myself I have a question. How is a damping factor which varies non-linearly with frequency (what you get with a passive XO, say series inductor, due to proximity effect losses) in any way a good thing?
 
What I mean is: if you sample the left and right channels with two separate DSPs with two separate clocks which are only nominally the same frequency, in practice there must be some small amount of clock drift between the two. That means samples are being taken at slightly different points in time in each channel, with the relative time offset between channels varying by up to half a sample as the two clocks drift relative to each other.

The issue is touched on in Bruno Putzeys' AES presentation here, about a third of the way down, under 'TOA Cue fallacy' and following:

http://www.hypex.nl/docs/papers/AES123BP.pdf

In theory it shouldn't matter if the input is properly Nyquist filtered but no anti-aliasing filter is perfect.

This is indeed a relevant point.
 
Certainly if the information processed is identical in size but different in composition. This would be the case for two stereo channels processed by two different but identical DSPs.

There is one hard fact I know for sure in this context: latency in the brand of DSP I am using is related to the settings applied. More filtering = more calculations = more latency. What I sketched above is an educated guess that similar latency shifts might show up in two DSPs with identical settings but dissimilar complexity in the signals being processed.

The latency of an FIR filter does not in any way depend on the content of the signal being processed. It depends only on the nature of the impulse response, which is encoded in the coefficients being used. Changing those coefficients can change the latency, but changing the signal cannot.

More coefficients (i.e. more calculation, a longer filter) does not necessarily lead to more latency; that would only be the case if the filters being compared were the same type, e.g. both linear phase. Linear-phase filters have a defined latency of half their length. A minimum-phase filter with the same number of coefficients can have substantially lower latency than a linear-phase filter with an otherwise similar magnitude response.
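The linear-phase rule quoted here is easy to verify numerically: a symmetric FIR of N taps delays everything by (N − 1) / 2 samples, regardless of the signal. A quick sketch (the toy moving-average filter is my own example):

```python
import numpy as np

def linear_phase_latency_ms(n_taps: int, fs_hz: float) -> float:
    """Latency of a symmetric (linear-phase) FIR: (N - 1) / 2 samples."""
    return (n_taps - 1) / 2 / fs_hz * 1000

print(f"{linear_phase_latency_ms(1024, 48000):.2f} ms")  # ~10.66 ms

# The latency is a property of the coefficients, not the signal: the
# centroid of the filtered impulse sits exactly (N - 1) / 2 samples in.
taps = np.ones(101) / 101            # toy 101-tap linear-phase FIR
impulse = np.zeros(256)
impulse[0] = 1.0
out = np.convolve(impulse, taps)
peak_centroid = np.sum(np.arange(out.size) * out) / np.sum(out)
print(peak_centroid)                 # 50.0 == (101 - 1) / 2 samples
```

Changing the input signal leaves that delay untouched; only swapping in different coefficients (e.g. a minimum-phase redesign) can change it.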
 
As an active proponent myself I have a question. How is a damping factor which varies non-linearly with frequency (what you get with a passive XO, say series inductor, due to proximity effect losses) in any way a good thing?

Let me answer by posing a different question: how is it a bad thing? You see, it's in the question. Your assertion is that it is bad, but no proof is offered. I'm not asking you to provide it, just making the point that such an observation is typical. What I call the "Look at this. It can't be good!" argument.

Electromotive damping is only significant around the driver's resonance. Below resonance the driver's motion is controlled by the suspension compliance; above resonance, by the moving mass. Additionally, electromotive damping is always a function of frequency, even if the VC impedance were purely resistive. The damping force is the result of the reverse current flowing through the VC due to the driver's motion. Typically in discussions it is expressed as Ib = Vb / Re, where Vb is the back EMF and Re is the VC DC resistance. In reality it should be expressed as Ib = Vb / Zeff, where Zeff is a function of the driver's (Re + jωLe) and any other circuits between the driver and the amp, including the amp's output Z. If the driver is directly connected and the amp's output Z is zero, then Zeff = Re + jωLe. Le is generally not constant, but a function of frequency and even excursion. Going a step further, look at the back EMF itself. It is given by the product of the driver's velocity, U, and Bl, where l is the length of the wire in the gap: Vb = Bl × U. Then Ib = (Bl × U) / (Re + jωLe), and ultimately the electromotive damping force is F = Bl × Ib = (Bl)² / (Re + jωLe) × U. So we have Bl, which is a nonlinear function of excursion, and Le, which is generally a nonlinear function of both frequency and excursion; thus the electromotive damping is anything but linear to start with. We usually see this simplified as F = (Bl)² / Re × U because, as I said earlier, the effect of damping is only significant around the driver's resonance, where ωLe is small compared to Re. [Note: even if you want to make the argument that the damping force is present at high frequency too, you can see that the associated nonlinearity in that force can only contribute to nonlinear distortion. Thus, with a 2nd-order passive LP crossover, it could even be argued that the rising Zeff as the crossover frequency is approached actually reduces motor-generated nonlinear distortion, due to the reduced (nonlinear) electromotive damping force compared to an active crossover.]
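The damping-force expression above, F = (Bl)² / (Re + jωLe) × U, can be evaluated per unit velocity to watch the damping fade as ωLe overtakes Re. The driver figures below are hypothetical:

```python
import math

def damping_force_per_velocity(bl: float, re_ohm: float,
                               le_h: float, f_hz: float) -> float:
    """|F / U| = (Bl)^2 / |Re + j*w*Le|: magnitude of the electromotive
    damping force per unit cone velocity, per the derivation above."""
    w = 2 * math.pi * f_hz
    return bl**2 / abs(complex(re_ohm, w * le_h))

# Hypothetical woofer: Bl = 7 T*m, Re = 6 ohm, Le = 0.5 mH
for f in (30, 300, 3000):
    print(f"{f} Hz: {damping_force_per_velocity(7, 6, 0.5e-3, f):.2f} N per m/s")
# The values fall with frequency, showing the damping losing its grip
# well above resonance as w*Le grows relative to Re.
```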

So suppose you build a 2-way with a passive 2nd-order crossover as depicted on Rod Elliott's page. At the crossover frequency the Zeff seen by the driver is infinite and there is no electromotive damping. Good or bad? Damping doesn't play a significant role in controlling the driver motion at that frequency anyway, and its presence can only introduce nonlinear distortion. Down at resonance, Zeff reduces to the DC resistance of the series inductor. This will increase Qes around resonance somewhat. All this means is that, as most here know, you have to design the enclosure based on the driver's Qtc with the crossover in place.

There are other issues between active and passive, but damping, or lack thereof, isn't the devil it is made out to be.
 
Let me answer by posing a different question. How is it a bad thing?

I'm not saying it's a bad thing. I'm just avoiding it in my own designs because I've no reason to believe it helps.

You see, it's in the question.

Nope, I don't see. Where is it in the question please?

Your assertion is that it is bad but no proof is offered.

Just cut and paste this 'assertion' that you're asserting I've made. I predict you will fail to do so because I can see none. So this is a red herring, a distraction.

I'm not asking you to provide it, just making the point that such an observation is typical.

Who's making the observation here? Certainly not you - you're inferring something that's in reality absent. I'm observing you doing so :D

What I call the "Look at this. It can't be good!" argument.

The first part of your straw man is indeed correct. I am drawing attention to an effect I prefer to avoid myself, by going active. Presumably you have good reason not to avoid it, and even to advocate that others don't avoid it themselves. It's this reason that I'm curious about. If, once I've digested your arguments, it turns out to be good (by which I mean helpful, an improvement), then I can introduce it into my designs without any difficulty.

<snip>

There are other issues between active and passive, but damping, or lack thereof, isn't the devil it is made out to be.

So then why are you making it out to be a 'devil', even though it's a devil that you choose not to avoid?
 