Western Electric 1928 - How far have we come in the last 100 years?

Simon, did you see janneman's posts a few months ago, where he had six randomized files from Hawksford, 3 of which simulated the allpass delay of an LR filter? The idea was to do blind testing via sorting. Two of us who bothered to try this (several of the usual suspects refused to participate since they "know" that DBTs are invalid) were able to successfully sort the files.
No I didn't see that. If I had, I would have participated in it. Was it music or test signals?
This certainly sharpened MY awareness of what I thought were inaudible phase shifts!😀
Is it the phase shift per se that you're hearing, or some other factor which is influenced by phase?

The audibility of phase is an interesting subject and still seems to be debated a lot, but I think saying either that we can hear phase shift or that we can't is wrong, because both answers are incomplete. Phase is both extremely important and not important at all depending on the context... 🙂

So what phase shifts can't we hear? As we get further from a speaker the total amount of phase rotation from 20Hz to 20kHz increases rapidly, but we don't hear a change. Obviously we can't hear linear phase shift, because that is simply another way of saying a time delay of the entire signal...
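The "linear phase is just a time delay" point is easy to demonstrate numerically. A minimal numpy sketch (the signal and delay values are arbitrary, purely illustrative): applying the linear phase term e^(-j2πfτ) to a signal's spectrum shifts every frequency by the same amount of time, leaving the waveform itself untouched:

```python
import numpy as np

fs = 1000                          # sample rate (Hz), arbitrary
n = 256
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

delay_samples = 10
tau = delay_samples / fs
f = np.fft.fftfreq(n, d=1 / fs)
H = np.exp(-2j * np.pi * f * tau)  # linear phase: phi(f) = -2*pi*f*tau
y = np.fft.ifft(np.fft.fft(x) * H).real

# Every frequency is delayed by the same 10 samples: the waveform is
# unchanged, just shifted (circularly, since this is an FFT sketch).
print(np.allclose(y, np.roll(x, delay_samples)))   # True
```

The phase rotation this applies at 20kHz is enormous compared to 20Hz, yet nothing audible changes, which is exactly the point being made above.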

Can we hear relative phase shift between two constant and unrelated tones like 498Hz and 1421Hz? No, and their relative phase is constantly shifting anyway if they're not harmonically locked, so no matter what additional frequency-dependent phase shift you add, they're still constantly varying in phase.

Relative phase between fundamentals and harmonics? Maybe, but doubtful. Some claim this is the case but I don't think it's been proven.

Phase between left and right ear with the same tone? Yes, but only below a certain frequency. (I forget what exactly, something around 1.2kHz.)

In the treble region we can't detect relative phase between ears at all. If you try varying the phase of a treble tone in one speaker you'll hear a change, but you're hearing changes in the summed amplitude due to channel crosstalk in the room. Try it with earphones, which have no crosstalk, and you'll hear no change as you vary the phase through a full 360 degrees at 3kHz 🙂

If we're unable to detect phase shift in the same treble tone between both ears, it also seems likely that we're unable to detect phase shift in one ear between low frequencies (below the phase detection threshold frequency) and higher frequencies even if they're harmonically related, since we simply can't detect phase at those higher frequencies.

There also seems to be a general consensus that smooth gradual phase shift is inaudible but sudden sharp phase shift can be audible, even if the total amount of phase rotation is the same for both.

So if we're not very good at hearing phase, what are we hearing with phase shift in different circumstances?

I think it's pretty simple: in the vast majority of cases we're hearing either differences in amplitude summing, or the time delay between the arrival of different frequencies due to group delay.

In the first case you have things like inter-driver phasing in a multiway system: move a single driver back and forth with only that one driver playing and you hear no change. Do the same with both drivers playing and the change in relative phase vastly alters the amplitude response, and that amplitude change is easily audible.
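For anyone who wants the numbers behind that, here's a quick sketch (equal-level coincident drivers assumed, purely illustrative): two sources summing with a relative phase offset φ produce a pressure of |1 + e^(jφ)| = 2|cos(φ/2)| times one driver alone, so a phase change sweeps the summed level from +6 dB all the way down to a null:

```python
import numpy as np

# Summed level of two equal drivers vs their relative phase offset.
phi_deg = np.array([0.0, 90.0, 120.0, 180.0])
phi = np.deg2rad(phi_deg)
summed = np.abs(1 + np.exp(1j * phi))       # = 2*|cos(phi/2)|

for d, s in zip(phi_deg, summed):
    print(f"{d:5.0f} deg -> {s:.3f}x one driver")
# 0 deg -> 2.000x (+6 dB), 90 deg -> 1.414x (+3 dB),
# 120 deg -> 1.000x (0 dB), 180 deg -> 0.000x (a full null)
```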

In the case of group delay, what's interesting is that although we can't determine the phase of a periodic signal at all at treble frequencies, we can detect very small differences in the arrival time of an impulsive sound in that frequency range.

In your summed L/R samples the amplitude response is flat, leaving only a difference in phase and group delay, but I would suggest that it's actually the group delay that's audible, not the phase shift per se: you are able to detect a difference in arrival times of related impulsive sounds spread across a wide frequency range. (Perhaps what some people call timing in a speaker.) This might be particularly noticeable on percussion, where you have impulsive sounds with a sharp beginning that cover a wide frequency range.

If the allpass function is above the threshold frequency where we can no longer detect absolute phase shift at all, then it stands to reason that we can't detect the phase shift on steady-state signals such as tones and pink noise at these high frequencies, so the only thing we could detect, if the amplitude response is also flat, would be group delay on percussive/transient sounds.
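A sketch of why "flat amplitude" and "no time smearing" are different things, using a first-order analog allpass as a stand-in for a crossover's allpass function (the corner frequency is arbitrary): the magnitude is exactly 1 at every frequency, so an amplitude sweep shows nothing, yet the group delay is nonzero and frequency dependent:

```python
import numpy as np

f0 = 1000.0                                 # allpass corner frequency (Hz)
w0 = 2 * np.pi * f0
f = np.linspace(20, 20000, 20000)
w = 2 * np.pi * f

H = (1 - 1j * w / w0) / (1 + 1j * w / w0)   # first-order allpass
mag = np.abs(H)                             # exactly 1 at every frequency
phase = np.unwrap(np.angle(H))
gd = -np.gradient(phase, w)                 # group delay, seconds

print(round(mag.min(), 9), round(mag.max(), 9))  # 1.0 1.0 -- nothing on an RTA
print(round(gd[0] * 1e3, 3), "ms at 20 Hz")      # ~0.318 ms, falling with frequency
```

So a steady tone through this network measures and (per the argument above) sounds identical, while the low frequencies of a transient arrive a fraction of a millisecond behind the highs.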

(I suppose we could also be detecting ringing of high order filters, but that's not us detecting phase shift either)

So I think looking at the phase shift directly doesn't paint an accurate picture of what we can and can't hear. Looking at the group delay is better, but group delay also duplicates information from the amplitude response, since a non-flat minimum-phase frequency response will also cause group delay, and of the two, the amplitude non-flatness is likely to be far more audible.

Looking at the excess group delay might be the best way of assessing the non-amplitude-related audibility of phase shift IMHO, and can probably explain any phase shift audibility which is not already explained by amplitude variations.

Certainly, excess group delay is the best measurement to make on an actual speaker when you're trying to see the time delay between drivers and the effects of crossover group delay, as it neatly subtracts the part of the group delay caused by the lumpy, bumpy, non-flat response of real-world drivers, which otherwise overshadows and obscures the group delay of the driver alignment and crossover.
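For anyone who wants to compute excess group delay from a measurement: the usual trick is to derive the minimum-phase response implied by the magnitude (via the real cepstrum) and subtract its phase from the measured phase. A self-checking numpy sketch follows; the example system is made up (a known minimum-phase FIR times a known first-order allpass) precisely so the recovered excess group delay can be verified against the allpass alone:

```python
import numpy as np

N = 4096
w = 2 * np.pi * np.arange(N) / N             # digital frequency grid (rad/sample)
z1 = np.exp(-1j * w)                         # z^-1 evaluated on the unit circle

a = 0.5
H_min = 1 + 0.5 * z1                         # minimum-phase part (a "lumpy" response)
H_ap = (a + z1) / (1 + a * z1)               # first-order allpass, |H_ap| = 1
H = H_min * H_ap                             # the "measured" response

# Minimum phase implied by the magnitude alone, via real-cepstrum folding:
cep = np.fft.ifft(np.log(np.abs(H))).real
fold = np.zeros(N)
fold[0] = cep[0]
fold[1:N // 2] = 2 * cep[1:N // 2]
fold[N // 2] = cep[N // 2]
H_min_rec = np.exp(np.fft.fft(fold))

excess_phase = np.unwrap(np.angle(H)) - np.unwrap(np.angle(H_min_rec))
excess_gd = -np.gradient(excess_phase, w)    # in samples

# The recovered excess group delay should match the allpass group delay,
# tau(w) = (1 - a^2) / (1 + 2a*cos(w) + a^2), and ignore the FIR entirely:
gd_ap = (1 - a**2) / (1 + 2 * a * np.cos(w) + a**2)
print(np.max(np.abs(excess_gd[10:N // 2] - gd_ap[10:N // 2])))  # tiny
```

On a real driver measurement you'd window the impulse response first, but the cepstral step is what strips out the minimum-phase group delay of the driver's ragged response so only the alignment/crossover delay remains.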
 
Hi DBMandrake
I think we are on the same track but the words may not be universal.
In my example I said “freely radiating” source, and by being acoustically small it is automatically omnidirectional.
But… if one takes that acoustically small omni source and puts it on a large flat boundary AND has the source << 1/4 wavelength from the baffle, then it radiates into “half space”, and since the total radiated power now goes into half the volume, the SPL is raised accordingly within that radiation space. With that close spacing, the impulse is not altered / time is intact.

If the baffle is infinitely large (acoustically), like a subwoofer sitting on the ground outdoors, then we have half space, but if the baffle is finite, then we have what can be thought of as a 180-degree horn flare.
Don Keele's pattern-loss rule of thumb for horn pattern control can be applied to find the frequency where the baffle width and angle (180 degrees) begin to lose pattern control: the pattern-loss frequency. Google his paper “What's So Sacred About Exponential Horns?” for a good starting point for this way of looking at horns.
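For reference, the rule of thumb is usually quoted in the form f ≈ K / (coverage angle × mouth width). A quick sketch (the constant K here is from memory and approximate; check it against Keele's paper before relying on it):

```python
K = 2.5e4  # deg * Hz * m, approximate value of Keele's constant

def pattern_loss_freq(theta_deg: float, width_m: float) -> float:
    """Frequency below which a horn/baffle of this mouth width loses
    control of its nominal coverage angle."""
    return K / (theta_deg * width_m)

# A flat baffle is in effect a 180-degree "horn". For a 0.3 m wide baffle:
print(round(pattern_loss_freq(180, 0.3)), "Hz")   # ~463 Hz
# Same mouth, half the coverage angle: pattern loss moves up an octave.
print(round(pattern_loss_freq(90, 0.3)), "Hz")    # ~926 Hz
```

The octave shift for a halved angle is the same relationship mentioned further down for conical horns.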

In hifi, this effect is called the baffle step or other names that suggest the change in radiation angle causing a change in level.
In hifi speakers this dimension is pretty small, and so a significant amount of energy up high can be re-radiated from that large acoustic discontinuity. In a large horn, in spite of folklore to the contrary, very little HF energy is radiated at the mouth termination of a proper horn.
The radiation angle is set well inside the horn; in fact Don's rule of thumb can be used to examine the dimensions vs frequency where this takes place within the horn.

You can also put such a sound source in a room corner or at the junction between a wall and floor; as long as the source is << 1/4 wavelength from the apex (ideally 1/8 wavelength spacing or less), it will radiate a fraction of a sphere whose limits are set by the physical boundary.
Conical horns are a way to further reduce the fraction of space being radiated into, BUT for a given size mouth or baffle, if you halve the radiation angle, you raise the pattern-loss frequency an octave.
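The level gain from each boundary can be sketched with the usual image-source idealization (acoustically small source at constant volume velocity, rigid boundaries, source well within 1/4 wavelength of each surface): every halving of the radiation space mirrors the source once more, doubling the pressure:

```python
import math

# Each rigid boundary adds one mirror image of the source; n in-phase
# sources at (effectively) the same point give 20*log10(n) dB more pressure.
spaces = [("full space (4*pi sr)", 1),
          ("half space (floor)", 2),
          ("quarter space (wall + floor)", 4),
          ("eighth space (corner)", 8)]
for name, n in spaces:
    print(f"{name}: +{20 * math.log10(n):.1f} dB")
# full +0.0, half +6.0, quarter +12.0, eighth +18.1 dB
```

Real rooms and finite baffles fall short of these ideal figures, and the gain only holds below the frequency where the source-to-boundary spacing approaches a quarter wavelength.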

That leads to the unexpected situation that a large wide horn has more pattern control (less total energy radiated outside the intended pattern) over a wider frequency span than a narrower-angle horn of the same mouth size. Also, the details of the mouth termination only affect frequencies below the pattern-loss point; for a real CD horn, you do not see the effect of the mouth shape above pattern control.

I am glad you have heard what I was talking about with the small full range drivers in the vocal range. That, and working on a pair of ESL-63s for my boss in the 90s, were what convinced me that the radiation shape was far more important and audible than anyone spoke of, but also far more difficult to do anything about. I hope more people try that and see what the fuss is, the WE being more of a giant version with directivity.

Part of my job back then in the 90s was building sound sources used for acoustic levitation, and this required >150dB at 22kHz in a coherent beam. We were building a processing system that could levitate samples of glass or ceramic at a very high temperature without a container.
That was a tough transducer engineering job, but we went from a water-cooled source that required 2kW from a monster CML tube amp to one that would levitate a Styrofoam ball with 5W and a steel ball with 30W.

Anyway, that took a lot of fooling around and experimenting to get a feel for how things worked and find a new approach so when I explain things, it is only from my perspective.
The engineering comes after the invention on the way to something new working.

I think your “ideal” crossover ranges are consistent with what I hear; it seems like your ear's acuity is roughly the ear's response curve upside down.

Also, when harmonics fall in that range (produced by a driver playing fundamentals at various fractions of that frequency range), the harmonics are more audible, and when the fundamental is in that range (with the harmonics falling higher up), they are less audible. In both cases, the farther away the harmonic is from the fundamental, the more audible it is.

So far as modeling group delay or phase issues goes, what one hears in a set of headphones simulating condition X may or may not reflect what a speaker system sounds like with your 3D hearing, because in the real world you have sources located in different locations, radiating a 3D pattern.
Best,
Tom
 
Hi Simon,

I believe that the most important factor for good imaging is the similarity of both speakers (or multiple speakers in a multichannel system). It doesn't matter much if there are large magnitude or time errors as long as they are the same in both speakers.
Unfortunately we're only halfway there by making both speakers behave exactly the same. Room reflections can be perceived as part of the direct sound, so if the distortion introduced by the room isn't the same for both speakers, imaging will suffer again.
Tackle both issues and pinpoint imaging is your reward.

You're talking strictly about stereo imaging here, which is indeed heavily affected by left right symmetry, including both speakers having identical phase curves...(due to room crosstalk)

I was mainly referring to the coherency of fullrange drivers though, as was Tom Danley, which is not really the same thing as imaging, as coherency can also apply to a mono speaker setup. I'm also not quite sure how I would define coherency; it's something that you recognise when you hear it. A conventional multi-way speaker can have great stereo imaging but still lack the coherency of a good fullrange driver.

Whether it's because a single driver radiates all frequencies pretty much perfectly in time (almost flat excess group delay, regardless of being on or off axis), or because it's a point source in 3D space without interference patterns, I don't know.

What I do know is that with the right design techniques a multiway system can approach the coherency of a single driver, but there are a lot of things that can go wrong to prevent it, and I don't think many multiway systems truly approach the coherency of a single driver, hence the popularity of fullrange drivers among some, despite their many limitations.
 
It has always bothered me to see three way horn designs with high crossover points as you won't have time alignment and the crossover at 5 or 7 kHz (as is typical) will always be messy. This is true of your avatar (the Stereophile measurements clearly showed it) and would be true of the WE systems using a 7k crossover.

Hello Speaker Dave

You certainly do have a point. I actually have the 435Be in my HT mains and let them just naturally roll off. They do sound very good that way, I must say.

On the new Everest the crossover is at 20kHz, so it's more market driven (true ultrasonic capability) than an issue with HF extension on the 476Be. In any case this has been a topic of conversation for years over on Lansing Heritage. I am more in your camp.

I know when Greg Timbers had his personal system with Array prototype horns with Be drivers he would run with the 045Be's disconnected as he felt they sounded better that way. Some liked them the other way with them on.

When I first went to build my Array clones I was a bit concerned, but they actually sound quite good even with the messy-looking measurements; you do get more extension and better directivity control, but it's a trade-off. Fortunately for me I don't seem to be as sensitive to the comb filtering as others are.

Rob🙂
 
"Coherence": the effect of not having the sound frequencies split up and sent to your ears at different time intervals. Clarity.
The opposite would be a poorly designed multi-way, which gives the effect of three people talking at once. Very annoying.

"Imaging": where the sound seems independent of, or larger than, the actual speaker location.
 
"Coherence": the effect of not having the sound frequencies split up and sent to your ears at different time intervals. Clarity.
The opposite would be a poorly designed multi-way, which gives the effect of three people talking at once. Very annoying.

"Imaging": where the sound seems independent of, or larger than, the actual speaker location.

I believe Simon is talking about something different. Otherwise the effect could be simulated by manipulating the signal going to the speaker.
 
What causes this sensation of "coherence"?
That's the $50,000 question, isn't it?

As I said, I don't know for certain what it is, or whether it's one factor or several. But it is something that a single full range driver acting as a point source automatically has (provided that the frequency response is decent) and that multiway speakers typically do not.

My suspicion is that driver time alignment (no large steps in excess group delay) and very minimal diffraction at high frequencies (drivers both acting close to ideal point sources with no secondary re-radiation, especially through the crossover overlap region) are possibly the main factors.

Accurate phase tracking between drivers is also essential for the illusion to be complete, but does not in and of itself provide coherence; otherwise all competently designed multiway speakers with good phase tracking would have the same coherence as a full range driver, but they don't.

There is something more than just driver phase tracking and flat on axis response to it.
 
No, it would not "surely degrade sound" (this is exactly what group delay listening tests have shown) nor is it complex to do.


Yes it would: not only would you be adding DSP, but multiple crossovers as well, which, as was explained earlier, is not desirable in a full range setup.

In fact, name a DSP that can divide a signal into preset bandwidths and reassemble them with advances and delays to create one singular, in-phase sound. I don't know of such a device; if created, it may be a better alternative to traditional multiway design.
 
Yes it would: not only would you be adding DSP, but multiple crossovers as well, which, as was explained earlier, is not desirable in a full range setup.

In fact, name a DSP that can divide a signal into preset bandwidths and reassemble them with advances and delays to create one singular, in-phase sound. I don't know of such a device; if created, it may be a better alternative to traditional multiway design.

It would be trivial to obtain such behavior in DSP. My guess is that high end touring concert setups already do it. That's just a guess though. For home consumer use it's probably a solution in search of a problem.
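For what it's worth, the "divide, delay, and reassemble in phase" behaviour asked about is exactly what a complementary linear-phase FIR crossover does, and it is a few lines in any DSP environment. A generic numpy sketch (sample rate, crossover frequency, and tap count are arbitrary; this is not any particular product's algorithm):

```python
import numpy as np

fs = 48000
fc = 2000.0                 # crossover frequency (Hz)
taps = 255                  # odd length -> integer group delay of (taps-1)/2

# Linear-phase windowed-sinc lowpass; the highpass is its exact complement
# (a delayed impulse minus the lowpass), so both bands share one group delay.
n = np.arange(taps) - (taps - 1) / 2
lp = (2 * fc / fs) * np.sinc(2 * fc / fs * n) * np.hamming(taps)
lp /= lp.sum()              # unity gain at DC
hp = -lp
hp[(taps - 1) // 2] += 1.0

x = np.random.default_rng(0).standard_normal(4096)
low = np.convolve(x, lp)    # band 1
high = np.convolve(x, hp)   # band 2
y = low + high              # "reassembled" signal

delay = (taps - 1) // 2
err = np.max(np.abs(y[delay:delay + len(x)] - x))
print(err)                  # numerical noise: the bands sum to a pure delay
```

The catches are latency ((taps-1)/2 samples) and the fact that perfect electrical summation says nothing about how two physically separated drivers actually combine in the air, which is the harder problem discussed earlier in the thread.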
 
Yes it would: not only would you be adding DSP, but multiple crossovers as well, which, as was explained earlier, is not desirable in a full range setup.

In fact, name a DSP that can divide a signal into preset bandwidths and reassemble them with advances and delays to create one singular, in-phase sound. I don't know of such a device; if created, it may be a better alternative to traditional multiway design.

XTA Electronics

Electro-Voice Dx46 Two-in/six-out FIR-Drive sound system processor

LM Series Overview - Lake Products

Klark Teknik | DN9848E
 
My fault for not specifying, but clearly those are not consumer products, and they are part of much larger and more elaborate systems.

Also to be disputed is the degradation to sound quality from using DSP, which could be a whole thread by itself.
Guitar amp guys, and possibly studio guys, know what I am talking about.

You have an uphill battle if you think this crowd would accept those products in their typical systems. ($$$)