Beyond the Ariel

ra7

Member
Joined 2009
Paid Member
Again, there is no difference between phase alignment and time alignment, at least in my mind. Phase just shows how different frequencies arrive in time relative to each other.

And I still don't believe that you can recognize the frequency after hearing less than a complete cycle of a given frequency. Kindhornman, care to supply a more detailed explanation or a link or something?
 
Perhaps it would be better if we just called it the relative distance to the virtual point source at a specified frequency. There seems to be little awareness that this point varies for all the different frequency components of a passage of music, due to the phase rotations associated with all the intrinsic electronic and acoustic EQ (not just the crossover), with the exception of horn loading. I guess this is what is meant by minimum phase, but I am no expert. Wish I understood more.

Perhaps it is too obvious to say that our hearing evolved empirically (as hunters and hunted) rather than by computation. Anyway, the relevance is that the perceived distance of a sound has much to do with familiarity with the ambient cues, rather than just phase linearity - and I guess this is the reason we can hear a sound coming from behind our 'speakers.

martin
 
Again, there is no difference between phase alignment and time alignment, at least in my mind. Phase just shows how different frequencies arrive in time relative to each other.

And I still don't believe that you can recognize the frequency after hearing less than a complete cycle of a given frequency. Kindhornman, care to supply a more detailed explanation or a link or something?


Hello,

IMHO phase alignment and time alignment are different animals.

As an example:

Take two perfect loudspeakers (bass and mid+treble) and a Linkwitz-Riley crossover.

If the geometry is correctly set up, the waves radiated by the two loudspeakers will be in phase at all frequencies. We can say the phase alignment is perfect.

But the impulse response demonstrates that the high-frequency waves arrive earlier than the low-frequency waves (because the group delay of a two-way loudspeaker system using a Linkwitz-Riley crossover is not constant, and the delay is higher at the lower frequencies...).
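A minimal numeric sketch of this point (my own model, not Jean-Michel's figures: an ideal 4th-order Linkwitz-Riley at an assumed 1 kHz, with perfect coincident drivers, modeled in scipy):

[code]
# Sketch: ideal LR4 crossover at 1 kHz, perfect drivers.
# The acoustic sum measures flat, yet the HF branch's energy leaves first.
import numpy as np
from scipy import signal

fs, fc = 48000, 1000
# LR4 = two cascaded 2nd-order Butterworth sections per branch
b_lp, a_lp = signal.butter(2, fc, 'low', fs=fs)
b_hp, a_hp = signal.butter(2, fc, 'high', fs=fs)

x = np.zeros(fs // 10)
x[0] = 1.0                                   # unit impulse in
lo = signal.lfilter(b_lp, a_lp, signal.lfilter(b_lp, a_lp, x))
hi = signal.lfilter(b_hp, a_hp, signal.lfilter(b_hp, a_hp, x))

print("HF branch peaks at %.3f ms" % (1000 * np.argmax(np.abs(hi)) / fs))
print("LF branch peaks at %.3f ms" % (1000 * np.argmax(np.abs(lo)) / fs))
# the low-pass branch's peak lands later: the lows lag the highs in time
[/code]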

Best regards from Paris, France

Jean-Michel Le Cléac'h
 
I think that there is some real misunderstanding going on here. You do not need to hear a completed wave for it to excite your ear. To say that you have to wait for the entire wave to pass is just not so. Your ears will pick up the rising pressure wave, and the rate at which it is passing, long before the entire wave has passed. Our ear-brain feedback loop is better than that.

I don't believe that this is true. My understanding is that it actually takes several full waves for the ear to recognize the pitch. No instrument, not even a digital calculation, can do what you suggest. How is it that the ear can?
 
Administrator
Joined 2004
Paid Member
Harmonics? At least for low notes and a lot of voice, the fundamental is swamped by the harmonics. So although we may be calling it 220 Hz, there is a lot of harmonic content that is higher, so complete cycles can be heard sooner.

There should be some research on this around the web.
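Back-of-envelope arithmetic on that point (assuming an A at 220 Hz):

[code]
# One full cycle of each harmonic of a 220 Hz note completes well before
# one full cycle of the fundamental does.
for n in range(1, 5):
    f = 220 * n
    print("harmonic %d: %4d Hz, one full cycle = %.2f ms" % (n, f, 1000.0 / f))
# harmonic 1:  220 Hz, one full cycle = 4.55 ms
# harmonic 4:  880 Hz, one full cycle = 1.14 ms
[/code]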
 

ra7

Member
Joined 2009
Paid Member
Hello,

IMHO phase alignment and time alignment are different animals.

Jean-Michel Le Cléac'h

Jean-Michel, but if I say that the loudspeaker has flat phase through its entire range, wouldn't that be the same as saying all frequencies arrive at the same time? What exactly are we saying when we say it is time aligned? Even full-range speakers radiate sound from different parts of the cone.

When Lynn talks about moving it by 1/4 inch to 'time-align' the horn with the woofer, all he is saying is that this gives the best phase overlap through the crossover region.

Harmonics? At least for low notes and a lot of voice, the fundamental is swamped by the harmonics. So although we may be calling it 220 Hz, there is a lot of harmonic content that is higher, so complete cycles can be heard sooner.

There should be some research on this around the web.

Pano, but you do need to at least hear one complete cycle of the harmonic frequencies. The question is whether you can recognize a wave without hearing one complete cycle of it. You can do it using the missing-fundamental effect, where the 2nd harmonic needs to be -6 dB, the third -12 dB, and so on. I may be off on the numbers. But it surely needs to be sustained for a few cycles before your brain can decode the information. There is just not enough information in a fraction of a cycle to describe the frequency of the wave.
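For anyone who wants to hear it, here is a small sketch of that missing-fundamental test. The -6 dB-per-harmonic slope is just the rough figure from the post (which, as noted, may be off), and the output filename is my own; it writes a wav you can listen to:

[code]
# Missing-fundamental sketch: harmonics of 100 Hz only, no 100 Hz at all,
# with each harmonic 6 dB below the previous (the rough slope quoted above).
import numpy as np
from scipy.io import wavfile

fs, f0, dur = 48000, 100, 2.0
t = np.arange(int(fs * dur)) / fs
sig = np.zeros_like(t)
for n in range(2, 8):                       # start at the 2nd harmonic
    level = 10 ** (-6.0 * (n - 1) / 20)     # -6 dB per harmonic number
    sig += level * np.sin(2 * np.pi * n * f0 * t)
sig /= np.max(np.abs(sig))

wavfile.write("missing_fundamental.wav", fs, (sig * 32767).astype(np.int16))
# nothing below 200 Hz is in the file, yet the ear reports a 100 Hz pitch
[/code]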
 
Assume two perfect loudspeakers, one being used for the low notes and one for the high notes.
Assume they are aligned in time perfectly.
Apply 300 Hz to one and 3000 Hz to the other.
The sound of the two will arrive in perfect unison.

Now add a crossover circuit.
The two sounds will no longer be in perfect unison, because the crossover causes a phase difference.

Now align the two speakers so that their phase is in alignment.

This will cause a misalignment of the two speakers such that if we apply 300 Hz to one and 3000 Hz to the other at the same instant, the sounds will arrive at different times.

So even though the speakers are aligned in phase, they will not be aligned in time.
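Putting rough numbers on that example (my own, assuming c = 343 m/s and, for illustration, a quarter-wave correction at 3 kHz):

[code]
# Numbers for the example above: fixing a 90-degree phase error at 3 kHz
# by physically moving one speaker introduces a real arrival-time offset.
c = 343.0                       # speed of sound, m/s (assumed)
f_lo, f_hi = 300.0, 3000.0

setback = (c / f_hi) / 4        # quarter wavelength at 3 kHz, ~28.6 mm
delay = setback / c             # the pure time delay that move adds

print("setback = %.1f mm" % (setback * 1e3))
print("delay   = %.3f ms" % (delay * 1e3))
print("phase shift at %4.0f Hz: %4.1f deg" % (f_hi, 360 * f_hi * delay))
print("phase shift at %4.0f Hz: %4.1f deg" % (f_lo, 360 * f_lo * delay))
# 90 deg at 3 kHz but only 9 deg at 300 Hz: the pair is now "phase
# aligned" at the crossover while the arrival times no longer match
[/code]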
 
Administrator
Joined 2004
Paid Member
Pano, but you do need to at least hear one complete cycle of the harmonic frequencies. The question is whether you can recognize a wave without hearing one complete cycle of it.
I certainly do not know the answer to that one. Geddes just said above that we need more than one cycle. If there are some studies that show this, or hints as to where to find them, please let us know. It's an interesting subject.

But in any case, the harmonics would take at most half as long for a full cycle. Maybe 1/3 to 1/4 as long.
 
Pano,
The subject you just brought up about the harmonics is a very important part of the picture in my eyes. If phase is not important, then it would seem that we could randomly add the harmonics of a signal at any interval or time during the signal and not notice the difference. I have a hard time with that concept; it would seem that harmonic timing would be very important to identifying the acoustic signature of any instrument.

I'm not talking about pure sine waves; they are not music and hold little sway in how I look at the complex waveforms we are trying to recreate here. I think that phase, as in the timing of the signal, is much more important than the credit it is being given. Can I prove that with sine waves? I doubt that very much, and I don't think that a steady-state situation is analogous to music in the least. I think we really have to look at complex signals to see what is really important here: its relationship to the phase shift over the frequency band, and the relationship of the first impulse response to the harmonic content.
 
Hi Pano, ra7
There is an important connection between time and frequency.
It is often the "chart of fundamental frequencies" from various instruments which "proves" the extension needed to produce that fundamental.

On the other hand, if one is interested in music then one has to account for the fact that it changes amplitude (well it used to more often).

If one took a "pure," distortion-free sine wave and examined it with an FFT, you would see one frequency standing high over the noise floor.
Take the same sine wave and FFT, but amplitude modulate the sine wave so that it goes from zero amplitude to max to zero in 5 cycles, following a Gaussian envelope (the most simple shape), and now your pure tone occupies about 1/3 of an octave of bandwidth instead of a single line. The shorter the envelope and the more complex the modulation shape, the wider the bandwidth that event requires / occupies.
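A small sketch of that tone-burst experiment (my own numbers: a 1 kHz sine, steady versus gated by a 5-cycle Gaussian envelope):

[code]
# FFT of a steady 1 kHz sine vs. the same sine under a 5-cycle Gaussian
# envelope: the gated tone smears into a band instead of a single line.
import numpy as np

fs, f0 = 48000, 1000.0
t = np.arange(fs) / fs                       # 1 second
steady = np.sin(2 * np.pi * f0 * t)

sigma = 5 / f0 / 2                           # ~5 cycles under the envelope
burst = np.exp(-0.5 * ((t - 0.5) / sigma) ** 2) * steady

def width_3db(x):
    """Width in Hz of the spectrum within 3 dB of its peak."""
    spec = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(len(x), 1 / fs)
    band = f[spec > spec.max() * 10 ** (-3 / 20)]
    return band.max() - band.min()

print("steady tone   -3 dB width: %6.1f Hz" % width_3db(steady))
print("5-cycle burst -3 dB width: %6.1f Hz" % width_3db(burst))
# the burst occupies a band on the order of 100 Hz around 1 kHz
[/code]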

One can take a percussive instrument like a kick drum and examine it with an FFT, and while it has a resonance, the signal that comes out has its greatest energy far lower and is broadband.

In the extreme case (in a perfect world) one could generate a perfect 20 Hz to 20 kHz impulse, feed that impulse to the loudspeaker and, ideally, it would produce all frequencies from 20 Hz to 20 kHz such that they arrive at the same instant.

In other words, what one has in music can be anywhere from someone sleeping on a keyboard to very dynamic signals, and each has very different requirements.

Truetone posted a link to "minimum phase" in post 8507.
My condensation would be that the phase response is always tied to the amplitude by the Hilbert transform; any change in amplitude has a corresponding change in phase, etc.
This is why you can EQ a minimum-phase problem and the EQ fixes both amplitude and phase (and why you can't fix things that are not minimum phase: you fix one thing but screw up the other).
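A sketch of that consequence (my own construction, using a standard RBJ-cookbook peaking biquad, which is minimum phase, as both the "problem" and its correction):

[code]
# A minimum-phase "problem" (a -6 dB peaking dip at 1 kHz) corrected by the
# inverse minimum-phase boost: the cascade is flat in amplitude AND phase.
import numpy as np
from scipy import signal

fs = 48000

def peaking(f0, gain_db, q):
    """RBJ cookbook peaking biquad (a minimum-phase EQ section)."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

b1, a1 = peaking(1000, -6.0, 1.0)    # the minimum-phase problem: a dip
b2, a2 = peaking(1000, +6.0, 1.0)    # the EQ correction: the inverse boost

w, h1 = signal.freqz(b1, a1, fs=fs)
_, h2 = signal.freqz(b2, a2, fs=fs)
h = h1 * h2                          # problem cascaded with correction

print("worst magnitude error: %.1e dB" % np.abs(20 * np.log10(np.abs(h))).max())
print("worst phase error:     %.1e deg" % np.abs(np.degrees(np.angle(h))).max())
# both come out at numerical zero: fixing the amplitude fixed the phase too
[/code]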

However, you're talking about crossovers and preserving a signal too. Note that the "all pass" filter linked at the bottom of that page is an exception, something which is not "minimum phase".
This configuration has a phase rotation within a band where the amplitude response is not changing, and in effect the lower half is delayed in time relative to the top half of the band.
This is also what all normal and proper configurations of the named crossovers do as well.
For example, a fourth-order L-R or Butterworth etc. will sum "flat" in amplitude but exhibits 4 x 90 degrees of phase rotation from well above to well below the crossover point. As with the all pass, the lower region is delayed relative to the upper portion because of that phase shift (model one and look at the group delay, for instance; see the sketch below). These configurations are proper and ubiquitous but do not preserve the input wave shape, as the harmonics are re-arranged relative to the input signal.
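Modeling exactly that (my assumption: an LR4 acoustic sum at 1 kHz, built from squared 2nd-order Butterworth sections):

[code]
# LR4 acoustic sum at 1 kHz: flat magnitude, yet ~360 degrees (4 x 90) of
# phase rotation from well below to well above the crossover.
import numpy as np
from scipy import signal

fs, fc = 48000, 1000
b_lp, a_lp = signal.butter(2, fc, 'low', fs=fs)
b_hp, a_hp = signal.butter(2, fc, 'high', fs=fs)

w, h_lp = signal.freqz(b_lp, a_lp, worN=8192, fs=fs)
_, h_hp = signal.freqz(b_hp, a_hp, worN=8192, fs=fs)
h = h_lp ** 2 + h_hp ** 2            # squared Butterworths, summed = LR4

mag = 20 * np.log10(np.abs(h))
phase = np.degrees(np.unwrap(np.angle(h)))
print("magnitude ripple:     %.2f dB" % (mag.max() - mag.min()))
print("total phase rotation: %.0f deg" % (phase[0] - phase[-1]))
[/code]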
It is argued that one can't hear an aberration this short in time except for some kinds of signals, and so preserving / reproducing the input signal's wave shape in acoustic pressure is not required. I suspect that is the case up high but less so lower down; here, to my ears, when a wide bandwidth occupies one small slice of time, the dynamic impression is greater.

To have an upper and lower range sum without that phase shift (as if they were actually one wide-band driver), the driver and crossover phase shifts must be accounted for and offset. To a first approximation, one moves the LF driver forward of the HF source by an amount equal to the group delay it adds when mounted normally (rough numbers below).
What one finds is that there are ways to eliminate the crossover phase shift with passive crossovers, even with multiple drivers within a single horn, but the filters are totally adapted to / driven by the individual circumstances (you are adding the magnitudes and phase responses of the drivers and the filters).
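Rough numbers for that first approximation (same assumed 1 kHz LR4 model as above, c = 343 m/s):

[code]
# Group delay the LR4 sum adds well below crossover, converted into the
# physical forward offset for the LF driver.
import numpy as np
from scipy import signal

fs, fc, c = 48000, 1000, 343.0
b_lp, a_lp = signal.butter(2, fc, 'low', fs=fs)
b_hp, a_hp = signal.butter(2, fc, 'high', fs=fs)
w, h_lp = signal.freqz(b_lp, a_lp, worN=1 << 15, fs=fs)
_, h_hp = signal.freqz(b_hp, a_hp, worN=1 << 15, fs=fs)

phase = np.unwrap(np.angle(h_lp ** 2 + h_hp ** 2))
gd = -np.gradient(phase, 2 * np.pi * w)      # group delay in seconds

lf_delay = gd[w < 100].mean()                # delay well below crossover
print("LF group delay: %.2f ms" % (lf_delay * 1e3))
print("forward offset: %.1f cm" % (lf_delay * c * 100))
# roughly 0.45 ms, i.e. about 15 cm at this crossover frequency
[/code]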
Best,
Tom Danley

Hi Pano, I think I am going to be out that way again next month (going to run the projector for Doug at another AES thing).
 
Pano,
The subject you just brought up about the harmonics is a very important part of the picture in my eyes. If phase is not important, then it would seem that we could randomly add the harmonics of a signal at any interval or time during the signal and not notice the difference.

It is very easy to prove that you can change the phase of the harmonics in a signal and not change the perception at all.

Just be careful that the playback system is perfectly linear because changing the phase of the harmonics will change the way the composite signal gets distorted and THAT change is audible.
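A sketch of that test (my own construction: eight harmonics of 220 Hz with 1/n amplitudes, phases either aligned or randomized; the filenames are mine):

[code]
# Same harmonic magnitudes, different harmonic phases: the waveforms look
# nothing alike on a scope, but on a clean linear chain they sound the same.
import numpy as np
from scipy.io import wavfile

fs, f0, dur = 48000, 220, 2.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)

def tone(phases):
    """Sum of 8 harmonics of f0 with 1/n amplitudes and the given phases."""
    s = sum(np.sin(2 * np.pi * n * f0 * t + p) / n
            for n, p in zip(range(1, 9), phases))
    return (s / np.max(np.abs(s)) * 32767).astype(np.int16)

wavfile.write("phases_aligned.wav", fs, tone(np.zeros(8)))
wavfile.write("phases_scrambled.wav", fs, tone(rng.uniform(0, 2 * np.pi, 8)))
# per Earl's caveat: the two files have different crest factors, so any
# nonlinearity downstream will distort them differently - level-match and
# use a clean system before concluding you can hear "phase"
[/code]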
 
I did not find the referenced paper, but I was quite impressed with the three-part series "Phase Coherence as a Measure of Acoustic Quality". This has me rethinking the whole idea of phase detection. And it explains something that had eluded me for a long time, and that is: why is our hearing most acute to group delay at about 2 kHz? Why not lower? If Griesinger is right, then this would explain a lot of what I understand to be true based either on reading or experience. The papers and his theory are untested, but nonetheless it is well thought out and supported.