sound quality vs sound quantity.

Is phase more important than frequency?


2) The fact that the distance from the magnet gap to the listener stays the same is irrelevant, because there are three different propagation speeds along the total path: the speed of the impulse from the voice coil to the voice coil former/cone junction, the speed through the cone itself, and finally the speed through air - and they are all very different.


Bingo! Nothing like designing a speaker with what's effectively a moving baffle...
 
I'm not so sure.

Doppler Distortion in loudspeakers

The instantaneous velocity of the cone is fairly slow for low notes, even at considerable excursion (let's say 4 mm p/p for a wide-range driver). Sure, the very high frequency stuff will be shifted a decent amount, but our hearing isn't particularly sensitive there.
What concerns me is the amount of distortion you get from driver non-linearities etc. when a small driver attempts to move that far.
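
For a sense of scale, here's a quick back-of-the-envelope sketch in Python (the 50 Hz bass note is my own illustrative choice to go with the 4 mm p/p figure above, not any particular driver):

Code:

import math

# Sinusoidal excursion: x(t) = X*sin(2*pi*f*t) -> peak velocity 2*pi*f*X
f_bass = 50.0            # bass frequency, Hz (illustrative)
x_pp = 4e-3              # peak-to-peak excursion, m
c = 343.0                # speed of sound in air, m/s

v_peak = 2 * math.pi * f_bass * (x_pp / 2)   # peak cone velocity, m/s
shift = v_peak / c                           # fractional Doppler shift
cents = 1200 * math.log2(1 + shift)          # shift in musical cents

print(f"peak cone velocity: {v_peak:.3f} m/s")
print(f"max shift of any HF tone: {shift*100:.2f} % (~{cents:.1f} cents)")
# -> ~0.63 m/s and ~0.18 %, i.e. about 3 cents worst case.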

Following this, I use fairly aggressive filters in the bass region to cross over to some subwoofers that are far more capable in the low end. Integrating a mid and a bass driver at ~100 Hz makes it easier to get "close enough" than with a conventional two-way.


As a side note, the voice coil of the driver could be considered an inductor. Inductors have phase shifts too! - I don't think we'll get rid of this until we stop moving bits of paper around with interacting magnetic fields.

Chris
 
I once calculated that Doppler distortion is audible from a 3" diaphragm at more than 1 mm excursion.
The diameter of the diaphragm has no effect on Doppler distortion; it's simply the amount of low-frequency excursion and the frequency of the higher tone(s) that matter.

The greater the low-frequency excursion and/or the higher in frequency the high tones are, the more Doppler distortion there is.

Of course a larger diaphragm will give less Doppler distortion for a given low-frequency SPL, since the excursion is less for the same SPL... but the distortion would be the same as the smaller diaphragm's if the excursion were the same. (In that case, though, the low-frequency SPL would be greater.)
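
To put numbers on that, here's a small Python sketch with two hypothetical drivers, a 5 kHz high tone, and 1 mm of excursion on the small one (all values illustrative). The peak phase deviation of the high tone depends only on the excursion, while matching SPL scales the required excursion inversely with cone area:

Code:

import math

c = 343.0
f_high = 5000.0                      # the high tone being modulated, Hz

def peak_phase_dev(x_peak):
    """Peak phase deviation (rad) of the high tone when its source
    moves back and forth by x_peak."""
    return 2 * math.pi * f_high * x_peak / c

d_small, d_large = 0.076, 0.30       # ~3" and ~12" diaphragms, m
area_ratio = (d_large / d_small) ** 2

x_small = 1e-3                       # 1 mm peak excursion on the 3"
x_large = x_small / area_ratio       # excursion for the same LF SPL

print(f"same excursion, either size: {peak_phase_dev(x_small):.4f} rad")
print(f"same SPL on the 12\":        {peak_phase_dev(x_large):.4f} rad")
# Same excursion -> identical Doppler/phase modulation regardless of
# diameter; the big cone only wins because it needs less excursion.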
 
Interesting article, but it seems a bit like Rod was just slowly and painfully rediscovering what was already known. I thought it was well known that the root cause of so-called "Doppler distortion" in speakers is phase modulation due to the time zero (or acoustic centre) of the driver being moved forwards and back from the rest position.

Certainly the connection between phase and frequency modulation is well known and understood in the RF design world. I'm also puzzled by his reluctance to call it a distortion:

"Furthermore, we should clarify the term 'distortion', since the word is normally applied to a non-linear function. Should the effect be demonstrated to exist in a loudspeaker, then it is obvious that it will appear in an ideal (or theoretically perfect) driver as well, so there is no non linearity. Based on this, the effect probably should not be called 'distortion' (although it must be said that anything that adds frequencies that did not exist in the original recording actually is distortion, but that is probably a philosophical debate rather than one to be considered by the engineering fraternity)."

Sorry, but if you apply two tones to a system and get out additional frequencies that weren't in the original but are related to each other, that is non-linear intermodulation distortion, even if an otherwise "linear" device is doing it. There is no philosophical debate to be had about it... ;)
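
And anyone who doubts it can watch the sidebands appear. Here's a rough numpy sketch (the 50 Hz / 5 kHz tone pair and 2 mm peak excursion are arbitrary numbers of my choosing): a perfectly linear source whose position moves with the bass still sprouts components at f_high +/- n*f_low.

Code:

import numpy as np

fs = 96000
t = np.arange(fs) / fs               # 1 second -> 1 Hz FFT bins
c = 343.0
f_low, f_high = 50.0, 5000.0
x_peak = 2e-3                        # peak bass excursion, m (illustrative)

x = x_peak * np.sin(2 * np.pi * f_low * t)    # source position
y = np.sin(2 * np.pi * f_high * (t - x / c))  # retarded-time HF tone

spec = 20 * np.log10(np.abs(np.fft.rfft(y * np.hanning(fs))) + 1e-12)
spec -= spec.max()                   # 0 dB = the 5 kHz carrier
for n in (1, 2):
    for f in (f_high - n * f_low, f_high + n * f_low):
        print(f"{f:4.0f} Hz sideband: {spec[int(f)]:6.1f} dB re carrier")
# New frequencies appear even though nothing here clips or compresses.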

The instantaneous velocity of the cone is fairly slow for low notes, even at considerable excursion (let's say 4 mm p/p for a wide-range driver). Sure, the very high frequency stuff will be shifted a decent amount, but our hearing isn't particularly sensitive there.
What concerns me is the amount of distortion you get from driver non-linearities etc. when a small driver attempts to move that far.
In general I would agree - whilst defending the existence of Doppler distortion / phase-modulation distortion / whatever you want to call it, I will admit that it is probably a very minor factor in a speaker, that it is vastly swamped by other non-linearities including BL(x) variations, and that amplitude modulation of the high notes by the low notes is a much more severe problem. Minimising excursion is beneficial for a whole host of reasons.

As a side note, the voice coil of the driver could be considered an inductor. Inductors have phase shifts too! - I don't think we'll get rid of this until we stop moving bits of paper around with interacting magnetic fields.
Yes, inductors add phase shift, but a single inductor like the Le of a driver does not, on its own, introduce any time delay. This idea has been debunked in another thread recently.

Phase shift and pure time delay are not the same thing.

Just because a series inductor slows the rise of an impulse does not mean the slow rise to the impulse peak represents a time delay - it is just a reduction in high frequency content. The impulse still begins rising at the same time, and that is what counts, not when the impulse reaches its maximum.

As counter intuitive as it might seem, the voice coil inductance does not add any time delay whatsoever. (It simply rolls off the high frequencies more than if it didn't exist)
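
A trivial numeric illustration of that last point, using a one-pole low-pass as a crude stand-in for Le working against the coil resistance (the corner frequency is arbitrary):

Code:

import numpy as np

fs = 48000
fc = 2000.0                          # corner frequency, Hz (illustrative)
a = np.exp(-2 * np.pi * fc / fs)     # one-pole recursion coefficient

x = np.ones(64)                      # unit step applied at n = 0
y = np.zeros_like(x)
state = 0.0
for n in range(len(x)):
    state = (1 - a) * x[n] + a * state   # y[n] = (1-a)*x[n] + a*y[n-1]
    y[n] = state

print(f"output at n=0: {y[0]:.3f}")  # already nonzero: no dead time
print(f"output at n=4: {y[4]:.3f}")  # still rising toward 1.0
# The rise to the final value is slow (HF rolloff / phase shift), but a
# pure time delay would have left y[0] at exactly zero.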
 
I dunno, this back and forth between the time and frequency domains is confusing. If you have two instruments playing different notes you clearly have inter-modulation, but it is not distortion; it is music. What reaches the ear is not two sine waves with the associated inter-modulation products but rather a single complex pressure wave. Of course I have no idea how the brain processes it, but when we record it we get a single time-varying voltage which is analogous to the pressure variations.

During the amplification and reproduction process there are a lot of distortions added, of course, but the loudspeaker cone also does not produce multiple different sine waves (unless the notes are reproduced by two different drivers) but rather a single complex time-varying pressure wave. Of course a crossover does split up portions of the source signal, but into completely different pieces than the original components that were acoustically combined in the original performance. This might lend some credence to the idea that crossovers ought to be minimized and carefully placed in the frequency spectrum.

To make a long story short (too late), I have no real answers, but I always have this nagging suspicion that by analyzing things in the frequency domain we are being led astray by our mathematical constructs, which are after all just an abstraction of the real physical system.

I think it would be helpful if someone much smarter than myself could come up with a credible way of analyzing things in the time domain. In other words how does the reproduced pressure wave variation differ from the original using different types and arrangements of drivers and crossovers (or not).

Shrug.
 
I dunno, this back and forth between the time and frequency domains is confusing. If you have two instruments playing different notes you clearly have inter-modulation, but it is not distortion; it is music.
No, that's not the case. Intermodulation, as the name suggests, is where one frequency modulates (modifies) the other in some way, whether by altering its amplitude or its frequency.

In the case of a woofer you have both effects at once - a small amount of frequency (more precisely phase) modulation, as well as amplitude modulation, the latter typically due to BL(x) and Le(x), the variations of BL and Le with excursion. (Cone breakup also introduces non-linearity at high frequencies, depending on the linearity of the material when it bends.)

If you just have two separate instruments playing in a room their waveforms sum at the recording microphone, but just because they have summed at that point in space does not mean they have inter-modulated. The presence of one does not in any way alter the frequency spectrum of the other, so if you were able to subtract one instrument from the summed result you would get back exactly the original signal of the other one.

Summing two waveforms together in a (nearly) distortion-free medium such as air, and mixing two signals together in a distortion-prone medium such as a woofer, are two entirely different things.
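
The difference is easy to show numerically. A minimal sketch (the tone frequencies and the toy second-order non-linearity are arbitrary choices of mine):

Code:

import numpy as np

fs = 48000
t = np.arange(fs) / fs
low = np.sin(2 * np.pi * 50 * t)
high = 0.3 * np.sin(2 * np.pi * 5000 * t)

linear = low + high                            # two sources summing in air
bent = (low + high) - 0.1 * (low + high) ** 2  # toy non-linear "woofer"

print(np.max(np.abs(linear - low - high)))     # -> 0.0, nothing added
print(np.max(np.abs(bent - low - high)))       # -> residue: the squared
# term contains 4950 and 5050 Hz cross-products that neither source had.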

What reaches the ear is not two sine waves with the associated inter-modulation products but rather a single complex pressure wave. Of course I have no idea how the brain processes it but when we record it we get a single time varying voltage which is analogous to the pressure variations.
I think it's pretty well established, both from biology and from experimental data, that the brain doesn't process a sound pressure waveform the way a recording microphone does - it breaks the sound down into constituent frequency bands in the inner ear and sends impulses to the brain that represent the spectral distribution.

Imagine a whole gang of parallel tuned circuits tuned to 1/3rd octave bands which are then followed by diode rectifiers, the output of each rectifier being slightly filtered with a small value capacitor then fed to the brain as a separate impulse per 1/3rd octave channel.

A gross simplification, as not all the bands are the same width, and at low frequencies (below about 800 Hz) the individual waveform pulses are registered as well, allowing timing information and phase differences between the ears to be determined...
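
For the curious, here's a toy version of that filter-bank picture in Python; the band widths, filter orders and smoothing constant are all illustrative, nothing physiological:

Code:

import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)

centres = 1000 * 2.0 ** (np.arange(-10, 8) / 3.0)  # 1/3-octave centres
k = 2 ** (1 / 6)                                   # half-band edge ratio
smoother = butter(1, 50, fs=fs, output='sos')      # the "small capacitor"

for fc in centres:
    band = sosfilt(butter(2, [fc / k, fc * k], btype='band',
                          fs=fs, output='sos'), sig)
    env = sosfilt(smoother, np.abs(band))          # rectifier + smoothing
    print(f"{fc:7.0f} Hz channel: {env[-fs // 10:].mean():.4f}")
# Two channels light up (around 200 Hz and 2 kHz); waveform shape is gone.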

But by and large the ear/brain analyses the spectral content, and in some frequency ranges the relative timing between left and right, but it is not directly sensitive to waveform shape. Certainly phase shift at frequencies above about 1-2 kHz is undetectable; only the amplitude is perceived. At 2 kHz and above you can feed a tone into each ear via headphones and constantly vary the relative phase between the two tones from 0 through 360 degrees, and you will not hear one iota of difference.

Gradual phase rotation from low to high frequencies is more or less undetectable as long as it is equal in both channels/ears. In other words the extra crossover phase rotation of a 3 way speaker compared to a 2 way goes unnoticed, even though the sound pressure waveform as seen on a scope will look very different between the two.

During the amplification and reproduction process there are a lot of distortions added, of course, but the loudspeaker cone also does not produce multiple different sine waves (unless the notes are reproduced by two different drivers) but rather a single complex time-varying pressure wave.
And if the speaker was ideally linear and also didn't produce phase/frequency modulation, that complex time varying pressure wave would only be the sum of the multiple signals without any intermodulation distortion - as is the case with multiple physically separate sound sources.
Of course a crossover does split up portions of the source signal, but into completely different pieces than the original components that were acoustically combined in the original performance. This might lend some credence to the idea that crossovers ought to be minimized and carefully placed in the frequency spectrum.
On the contrary, a crossover helps to minimize intermodulation distortion by ensuring that the driver that experiences significant excursions is not also responsible for reproducing high frequencies. To be most beneficial the crossover frequency needs to be just above the frequency where excursion drops to minimal levels - which typically means between bass and midrange.
To make a long story short (too late), I have no real answers, but I always have this nagging suspicion that by analyzing things in the frequency domain we are being led astray by our mathematical constructs, which are after all just an abstraction of the real physical system.
The frequency domain is the closest match for the way our ear/brain system perceives sound though, so it's no surprise that frequency response flatness is the number one correlating (but not the only) factor for sound quality. The other major ones (IMHO) are the dynamic performance of the speaker (how much parameters like frequency response change with increasing SPL, etc.) and the directivity/diffraction characteristics.
I think it would be helpful if someone much smarter than myself could come up with a credible way of analyzing things in the time domain. In other words how does the reproduced pressure wave variation differ from the original using different types and arrangements of drivers and crossovers (or not).
If by time domain you mean something like a waveform display on an oscilloscope representing the sound pressure, it wouldn't be that useful, because it shows things that we can't hear (phase shift with frequency - for example a mutilated square wave, distorted only by phase shift, that sounds more or less identical to a normal square wave) but does not show things that we can hear. (A scope display has a very poor dynamic range, and isn't very good for examining frequency response, nor anything less than gross levels of distortion.)
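
One way to see this: run a square wave through a first-order all-pass filter. A quick sketch (the 200 Hz fundamental and the all-pass coefficient are arbitrary):

Code:

import numpy as np
from scipy.signal import lfilter

fs = 48000
t = np.arange(fs) / fs
square = np.sign(np.sin(2 * np.pi * 200 * t))     # 200 Hz square wave

g = 0.7                                           # all-pass coefficient
mangled = lfilter([-g, 1.0], [1.0, -g], square)   # |H(f)| = 1 at all f

spec_diff = np.max(np.abs(np.abs(np.fft.rfft(square)) -
                          np.abs(np.fft.rfft(mangled))))
wave_diff = np.max(np.abs(square - mangled))
print(f"magnitude-spectrum difference: {spec_diff:.2f} (start-up transient)")
print(f"waveform difference:           {wave_diff:.2f}")
# The scope trace is mutilated; the spectrum - the part we mostly hear -
# is essentially untouched.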
 
The way I'm thinking about it is imagining a mid-range driver or tweeter bolted to the wall vs. the same driver being whizzed backwards and forwards in an arbitrary pattern. The way I see it, the full range cone is behaving in just that manner as soon as the bass kicks in. It doesn't duplicate what a microphone would record, because the microphone diaphragm hardly moves at all, and I don't think the effect is constant regardless of scale.

It had better duplicate what the mic diaphragm records, and it had better scale up accurately with amplitude. That's the objective. What you're worried about as intermodulation distortion is really just the way multiple sines are supposed to combine. Think about it some more. Think about the complex motion of that mic diaphragm moving with the bass and treble; that's what you want to duplicate, but with more amplitude.
 
Simon;
of course parasitic resonances of cones ride on them; that is exactly what you measure as a Doppler shift. But the portion of the reproduced sound controlled by the motor does not. Let's separate the flies from the beefsteaks they sit on.
Not sure what part of my posts you're responding to since you haven't quoted anything.

Whether or not "parasitic resonances" ride on the cone is not really the issue.

The issue simply boils down to "does the acoustic centre at high frequencies move with cone displacement at low frequencies", to which the answer is yes. And if the answer is yes, Doppler / phase-modulation distortion must exist.

Whether the acoustic centre moves with cone displacement can be experimentally verified to a high degree of accuracy by measuring excess group delay at high frequencies with different DC offsets applied to displace the cone.

I don't know why you even continue to debate that the phenomenon exists when Rod's article has provided actual measurements that verify and clarify it. If the acoustic centre wasn't moving with excursion there wouldn't be any phase modulation of high frequencies.

My guess is that your belief stems from the often stated but incorrect rule of thumb that the acoustic centre of a driver is roughly at the voice coil. This is not and never has been true; the acoustic centre is very close to the junction where the voice coil former joins the cone, although it can sit slightly further forward or back depending on the taper of the cone, since the radiation from the edge of the cone can lead or lag that from the centre, giving a slight variation in the integrated impulse's time delay.

(Ideally the taper should be such that the wavefront from the entire cone arrives together in time at the listening point)

The reason the belief that the acoustic centre is at the voice coil is wrong is that the acoustic centre is a virtual location, calculated back from the listening position using the speed of sound in air. The propagation speed from the voice coil through the driver, before the signal reaches the air, is dramatically faster, and this brings the apparent acoustic centre forward to the centre of the cone and also makes it move with cone displacement. (With infinite propagation speed from voice coil to cone centre, the acoustic centre would sit exactly at the launch point at the centre of the cone and would track its displacement exactly.)
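
For what it's worth, the size of the effect that verification should see is easy to predict. A sketch assuming the acoustic centre tracks the cone one-for-one (the offsets and test frequency are illustrative):

Code:

import math

c = 343.0
f_test = 10000.0                     # HF test tone, Hz

for offset_mm in (-2.0, -1.0, 0.0, 1.0, 2.0):
    x = offset_mm * 1e-3             # DC cone offset, + = toward listener
    delay = -x / c                   # shorter path -> arrives earlier
    phase = 360.0 * f_test * delay
    print(f"offset {offset_mm:+4.1f} mm: excess delay {delay*1e6:+6.2f} us,"
          f" phase {phase:+6.1f} deg at 10 kHz")
# ~5.8 us and ~21 deg per 2 mm: comfortably measurable as excess group
# delay, and exactly the phase modulation a bass note would impose.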
 
Whatever you're complaining about at the speaker also occurs at the mic in reverse. It's only a problem if it occurs differently. But I don't see that properly controlling a driver to properly reproduce the complex mic diaphragm motion introduces distortion because of the difference in scale. Illuminate me. I remain relatively unconvinced, but open.

If you're sensing air velocity instead of pressure, doesn't a bass guitar or kick drum "modulate" the higher frequencies a little bit in real sound in the air? Compared to the speed of sound, and within reasonable volume limits, I'd expect it to be negligible and inaudible. I'd expect IM in horns to increase a lot with volume, but mostly because the displacement becomes significant compared to the horn expansion. But I don't see the problem with a piston. Isn't the infinitesimal 'Doppler shift' of the highs just what happens in air, and in the mic diaphragm? So what's the physical problem when you scale up the amplitude?
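
Back-of-envelope, the air's own motion is tiny. Plane-wave numbers (u = p/(rho*c); the SPLs are picked arbitrarily):

Code:

import math

rho_c = 415.0                        # characteristic impedance of air
c = 343.0

for spl in (94, 114, 134):
    p = 20e-6 * 10 ** (spl / 20)     # RMS pressure, Pa
    u = p / rho_c                    # plane-wave particle velocity, m/s
    print(f"{spl} dB SPL: u = {u*1000:6.2f} mm/s, shift {u/c*100:.4f} %")

v_cone = 2 * math.pi * 50 * 2e-3     # 50 Hz at 2 mm peak excursion
print(f"woofer cone: v = {v_cone*1000:5.0f} mm/s, shift {v_cone/c*100:.3f} %")
# At everyday levels the air moves far less than the cone; only at
# extreme SPL do the two approach the same order of magnitude.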
 
Does the acoustic centre at high frequencies move with air displacement at low frequencies? Yes.

Does the acoustic centre at high frequencies move with mic diaphragm displacement at low frequencies? Yes.

So the distortion problem must involve the increase in displacement? But the highs and lows are increased proportionally, and the result in the air is just as the original.
 
Oh my, an array of in-phase full-range drivers compared to a non-arrayed 2- or 3-way.

Look, there is no "perfect" sound reproduction.
Audio is very subjective.

I have heard, installed, programmed and repaired world-class full range boxes and arrays.
In my experience: arrays have ALWAYS sounded worse.

There is no array that sounds as good; the fewer acoustic centers the better. (This is why Co-Entrant sounds better than Co-Axial.) The cancellation between co-axial drivers is awful.

Audio is a balancing act.
We balance the right solution for the right situation.
There ARE situations where an array is called for.
However we should ONLY array as a LAST resort.

My two cents. Actual retail value based on imaginary hypothesis.
 
Whatever you're complaining about at the speaker also occurs at the mic in reverse. It's only a problem if it occurs differently.
But it does occur differently. A woofer producing a low frequency note and a high frequency note at once has significant excursion - the bass excursion is a significant fraction of a physical wavelength of the high frequency.

A microphone measuring a low frequency note and high frequency note at once has negligible excursion, hence there is no intermodulation effect. Microphones and speakers are not reciprocal in this regard.
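
Some rough numbers on that (plane-wave air displacement, with my own illustrative levels):

Code:

import math

c, rho = 343.0, 1.2                  # speed of sound, air density
f_low, spl = 50.0, 94.0              # a healthy recording level
p = 20e-6 * 10 ** (spl / 20)         # RMS pressure, Pa

xi = p / (2 * math.pi * f_low * rho * c)   # plane-wave displacement
print(f"air displacement at {spl:.0f} dB, {f_low:.0f} Hz: {xi*1e6:.1f} um")

x_woofer = 2e-3                      # 2 mm peak woofer excursion
print(f"woofer excursion vs 5 kHz wavelength: "
      f"{x_woofer / (c / 5000) * 100:.0f} %")
# ~8 um at the mic diaphragm vs. millimetres at the cone: the HF source
# point is being dragged around roughly 250x further at the speaker end.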
 
(Ideally the taper should be such that the wavefront from the entire cone arrives together in time at the listening point)

Yes, I've imagined that even slight changes alter the wavefront shape and radiation pattern. I favor clamshelled isobarics, where the drivers face each other and I'm listening mostly to the outer driver from the basket-frame and magnet side. I'd imagine that if I ran it at higher frequencies the radiation pattern or even the distortion could get pretty strange, as the cone taper is exactly opposite your ideal. At low frequencies, within bandwidth limits, both cones move pretty much together compared to the speed of sound.

But I'm still struggling to get my mind around the claimed 'Doppler' problem... Is this a phenomenon of every increase in volume?
 
Oh my, an array of in-phase full-range drivers compared to a non-arrayed 2- or 3-way.

Look, there is no "perfect" sound reproduction.
Audio is very subjective.

I have heard, installed, programmed and repaired world-class full range boxes and arrays.
In my experience: arrays have ALWAYS sounded worse.

There is no array that sounds as good; the fewer acoustic centers the better. (This is why Co-Entrant sounds better than Co-Axial.) The cancellation between co-axial drivers is awful.
In general I'd agree that the fewer acoustic centres the better, although at low frequencies you'll then have problems with boundary cancellation if your ideal speaker is a single coaxial point source high off the floor...

I think the problem is that line arrays are often confused with actual line sources, and the benefits of a line source get ascribed to line arrays.

A line array of closely spaced dome tweeters is not the same as an actual contiguous line source - for example a continuous ribbon, which really is a line source. The array has severe lobing and interference effects above a certain frequency; an actual line source does not.
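
To put a number on the lobing, here's a crude far-field sum over a hypothetical stack of eight tweeters at 100 mm spacing, heard 30 degrees off-axis (geometry entirely illustrative):

Code:

import numpy as np

c = 343.0
n_elem, spacing = 8, 0.10            # eight tweeters, 100 mm apart
theta = np.radians(30)               # listening angle off-axis

ys = (np.arange(n_elem) - (n_elem - 1) / 2) * spacing
for f in (500, 1000, 2000, 4000, 8000):
    k = 2 * np.pi * f / c
    af = np.abs(np.sum(np.exp(1j * k * ys * np.sin(theta)))) / n_elem
    print(f"{f:5d} Hz: {20 * np.log10(af + 1e-9):6.1f} dB re on-axis")
# Deep off-axis notches appear once the spacing approaches a wavelength.
# A continuous line (many elements, vanishing spacing, same length)
# trades these for a smooth, predictable rolloff.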
 
My guess is that your belief stems from the often stated but incorrect rule of thumb that the acoustic centre of a driver is roughly at the voice coil.

No, in this particular case my belief system is not based on any Rules of Dumb, nor on any articles that you may highly respect. It is based on physics and math - particularly on Maxwell's equations and geometry. Whether the coil is moving or not, it is excited with respect to the magnetic field in the gap. There is no Doppler effect if the cone does not resonate mechanically. What rides on the cone producing a Doppler effect is its own surface resonances, not the signals excited by the motor.
 
I have heard, installed, programmed and repaired world-class full range boxes and arrays.
In my experience: arrays have ALWAYS sounded worse.

There is no array that sounds as good, the fewer acoustic centers the better...There ARE situations where an array is called for.
However we should ONLY array as a LAST resort.

My friend Tom Danley and I have been on opposite sides of this discussion since we were kids in high school together. In my own living room, my 12-foot floor-to-ceiling planar dynamic "ribbons" are continuous narrow line sources (within their bandwidth capability). As arrays approach becoming continuous, infinite, seamless line sources, they approach my ideal, if the frequency response and bandwidth are reasonable. On the other hand, nothing is mic'd that way LOL (perhaps a few ribbon mics??). Tom makes point sources, and when they keep a consistent coverage pattern with frequency they approach his ideal. Either can sound great. Both sound very different in a real space. The problem with arrays is that there are so many crappy arrays that really are multiple point sources and do not approach my ideal of a single seamless infinite line source.

What I actually favor from a line source is the resultant cylindrical wavefront because of its horizontal-only dispersion and the way it doesn't decrease with the square of the distance.

Real arrays don't make perfect line sources. But trying to make a cylindrical wavefront from a point source is not easy either. IMHO that's really what good PA gear approaches in controlling the vertical pattern and horizontal pattern differently.
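
The distance-law difference is easy to tabulate; the near-field limit in the last comment is the usual rule-of-thumb estimate, not a measured figure:

Code:

import math

# Ideal point source: -6 dB per doubling of distance (intensity ~ 1/r^2).
# Ideal infinite line: -3 dB per doubling (cylindrical wave, ~ 1/r).
for r in (1, 2, 4, 8, 16):
    print(f"{r:2d} m: point {-20 * math.log10(r):6.1f} dB,"
          f" line {-10 * math.log10(r):6.1f} dB")
# A finite array of length L only behaves like a line out to roughly
# L*L*f/(2*c) (its near field); beyond that it reverts to point-source
# behaviour.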
 
I have heard, installed, programmed and repaired world-class full range boxes and arrays.
In my experience: arrays have ALWAYS sounded worse.

There is no array that sounds as good; the fewer acoustic centers the better. (This is why Co-Entrant sounds better than Co-Axial.) The cancellation between co-axial drivers is awful.

Be my guest. I live in Pleasant Hill, California, use line arrays at home. :)

Edit: I have to admit, it is a PITA to measure line arrays properly. But it is nirvana to listen. ;)

The problem with arrays is that there are so many crappy arrays that really are multiple point sources and do not approach my ideal of a single seamless infinite line source.

Exactly! :)
 
No, in this particular case my belief system is not based on any Rules of Dumb, nor on any articles that you may highly respect. It is based on physics and math - particularly on Maxwell's equations and geometry. Whether the coil is moving or not, it is excited with respect to the magnetic field in the gap. There is no Doppler effect if the cone does not resonate mechanically. What rides on the cone producing a Doppler effect is its own surface resonances, not the signals excited by the motor.
Sorry, but that reads like a bunch of nonsensical pseudo-science.

I've given you a good explanation of why it occurs and why the acoustic centre is not at the voice coil but moves with the cone, but you can believe what you will.

I don't "highly respect" Rod's article, in fact its mention in this thread is the first time it has come to my attention, and there is a lot of hand-waving in the early part of the article, but he does eventually come to the right conclusion by the end, and most importantly provides actual measurements to back up his position, something which you have not.

Feel free to actually take measurements to prove your point of view, however don't be disappointed when they fail to live up to your expectation. ;)
 