Beyond the Ariel

Lynn,

Since you've already supplied details on the bass cabinet and the drivers used, when do you think you'll be able to give us a crossover schematic as well (if that is ultimately your intention)? I only ask because I'm encouraged by your very positive comments on their sound. It seems you have a good handle on their voicing lately, and I was wondering whether the design is nearing completion.
 
ra7,
As long as you are in agreement that phase change over the frequency band is not important, then I agree with you that phase alignment through the crossover region is the most important factor. While I agree with that in principle, and think that it covers the majority of the problem with time alignment, I am not really convinced that phase shift over the frequency band is not also an issue to be addressed. Why do I say that? My reasoning is that the impulse response, or initial leading edge, of a sound such as a drum or guitar (or many other instruments) gives us many clues that let us tell whether what we hear is real or a reproduced image. If the timing of that first impulse is shifted relative to the harmonics of the sound, the brain can detect it. I could be wrong about that, but to me the phase coherency of the entire frequency band is an important part of reproducing the critical impulse response of everything we listen to.
 
myhrrhleine,
When checking the acoustic output of a speaker in the free field, how are you making a distinction between phase and time alignment? Both are measures of time; phase is just given as an angle, but if the phase lines up, the time alignment must as well. We are talking about the acoustical phase angle here, not just the electrical angle.

All you have to do is take an FFT, then the log of its magnitude, and then transform that again. In other words, transform twice, with a log step in between.
This will give you something called the cepstrum, in which echoes and inter-driver delays show up as distinct peaks.
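As a rough illustration of the cepstrum idea, here is a sketch in Python with NumPy (my own example, not anyone's measurement code; the sample rate and echo parameters are made up):

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log magnitude spectrum.

    The log step is what separates a delayed copy (an echo) into a
    distinct peak on the 'quefrency' axis.
    """
    spectrum = np.fft.fft(x)
    return np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)).real

# A signal plus a delayed copy produces a cepstral peak at the echo
# delay, which is how an inter-driver delay can be spotted.
n = 4096
rng = np.random.default_rng(0)
x = rng.standard_normal(n)          # stand-in for a test signal
delay = 48                          # echo delay in samples (1 ms at 48 kHz)
echoed = x.copy()
echoed[delay:] += 0.5 * x[:-delay]  # add the echo at half amplitude

c = real_cepstrum(echoed)
peak = np.argmax(np.abs(c[10:n // 2])) + 10   # skip near-zero quefrencies
print(peak)                         # 48, the echo delay in samples
```

A plain FFT-of-an-FFT without the log step would not separate the echo this cleanly; the log is what turns the multiplicative echo ripple in the spectrum into an additive, easily read peak.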

The problem with aligning phase is that it occurs at only one frequency.
You could have two drivers separated by a significant distance yet reasonably aligned at the crossover. Usually within 90 degrees is considered enough.

Do a sweep and everything will look good, but test with brief individual tones, as in music, and you will quickly see the difference.
 
buzzforb,
A difference of 1/4 in in the acoustic centers is going to matter much more at high frequency than at low frequency. So a small offset between, say, a midrange and a bass speaker will not cause the same time misalignment as the same offset between the mid and a high-frequency device.

For frequency, that is true. However, time alignment depends on the velocity of propagation of sound.
Therefore, 1/4 in of delay at 10 kHz is the same delay as 1/4 in at 100 Hz.
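The two positions can be reconciled with a little arithmetic: a fixed path-length offset is the same *time* delay at every frequency, but a very different *phase* angle at 100 Hz than at 10 kHz. A quick sketch (assuming ~1126 ft/s for the speed of sound; the numbers are mine, not from the thread):

```python
# A 1/4 in acoustic-center offset: constant time delay, frequency-
# dependent phase angle. Speed of sound is an assumed room-temperature
# value, ~1126 ft/s.
SPEED_OF_SOUND_IN_PER_S = 1126.0 * 12.0   # ~13,512 in/s

def delay_seconds(path_inches):
    """Time delay caused by a path-length difference in inches."""
    return path_inches / SPEED_OF_SOUND_IN_PER_S

def phase_degrees(path_inches, freq_hz):
    """Phase angle that the same delay represents at a given frequency."""
    return 360.0 * freq_hz * delay_seconds(path_inches)

print(f"delay: {delay_seconds(0.25) * 1e6:.1f} us")              # 18.5 us at any frequency
print(f"phase at 100 Hz: {phase_degrees(0.25, 100):.2f} deg")    # 0.67 deg
print(f"phase at 10 kHz: {phase_degrees(0.25, 10_000):.1f} deg") # 66.6 deg
```

So both statements hold: the delay itself is frequency-independent, while its audible significance (as a fraction of a cycle) grows with frequency.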
 
myhrrhleine,
I understand your arguments above. If what you are saying is true on its face, then the only real solution seems to be actual physical alignment of the acoustic centers: a coaxial design with identical acoustic centers for both devices, not the staggered design most drivers actually have. Both voice coils would need to be in the same plane to do that. With multiple devices on a baffle, even if you could set the acoustic centers the same distance back, it would still fail the test because of the vertical distances. This has always been the crux of the problem with every time-alignment scheme I am aware of: they are only true at one fixed distance on one vertical axis and nowhere else.
 

ra7

ra7,
As long as you are in agreement that phase change over the frequency band is not important then I agree with you that phase alignment through the crossover region is the most important factor. [...]

Right. Phase overlap through the crossover is the most important thing.

I see what you are saying about phase over the entire range. But research has shown again and again that it doesn't matter. You can try the RePhase software written by one of the members here. Do a search. It allows you to fix phase changes due to box modifications, crossovers, and so on. I tried it, and initially I thought I was hearing dramatic changes. But it turned out that my mind was playing games. When I didn't know whether the phase correction was enabled or not, I couldn't tell a difference.
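For anyone curious what a phase-only correction actually does, here is a minimal sketch of the principle behind tools like RePhase (this is not RePhase's code or API, just an illustration): the spectrum's phase is rotated while its magnitude, and therefore the tonal balance, is left untouched.

```python
import numpy as np

# Phase-only EQ sketch: rotate the phase of a signal's spectrum without
# touching its magnitude. The correction curve below is arbitrary,
# purely to illustrate the point.
rng = np.random.default_rng(1)
x = rng.standard_normal(1024)               # stand-in for a measured impulse

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(x.size, d=1 / 48000)
correction = np.exp(-1j * 0.0005 * freqs)   # pure phase rotation, |H| = 1
correction[0] = correction[-1] = 1.0        # DC and Nyquist bins must stay real
y = np.fft.irfft(X * correction, n=x.size)

# The magnitude response is untouched; only the phase has changed.
print(np.allclose(np.abs(np.fft.rfft(y)), np.abs(X)))   # True
```

This is why a blind test is the right check: a phase-only filter by construction changes nothing about the frequency response, so any audible difference has to come from phase alone.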


However, for time alignment it is dependent on the velocity of propagation of sound.
Therefore, 1/4 in of delay at 10 kHz is the same as 1/4 in at 100 Hz.

Your first statement above is in conflict with the second.
 
myhrrhleine,
I understand your arguments above. If what you are saying is true on its face then the only real solution seems to be actual physical alignment of acoustic centers. [...]

Yep, the best thing is a single full range driver.
Everything else is a compromise.
But, 1/4 in can make a difference in the sound.
 

ra7

It's simple math, dude. You don't have to argue with me. Do it yourself and find out.

We are discussing reproduction of sound. The reproduction should ideally be linear to perfectly reproduce how it was played. Research has shown that phase does not matter outside the crossover region. Of course, this has been debated here and elsewhere and in theory it looks like it should matter. But experimental evidence points otherwise.
 
It's simple math, dude. You don't have to argue with me. Do it yourself and find out. [...]

I'm not going to argue with you. Above you said, "Sound propagates as waves. A 100 Hz wave is much longer than a 10 kHz wave. To get information at 100 Hz will take longer."

This statement doesn't have anything to do with phase, and it's wrong.

If you play the lowest note on the piano (sub-C) simultaneously with middle C, you will hear them at the same time (simultaneously)! If the drivers on a speaker are 'time aligned', the same holds true.

BTW, the wavelength of a 10 kHz wave is 1.356 in and the wavelength of a 100 Hz wave is 135.6 in. The start time of the waveform is the same; so is the end time, if both are sounded simultaneously for the same duration. I'm done.
 

ra7

Ok, so now you have found that the wavelengths of a 100 Hz and a 10 kHz wave are different. To recognize that it is a 100 Hz wave, you need at least one complete cycle. One complete cycle of a 100 Hz wave is 135 inches, or about 11.3 feet, long. The speed of sound is 1126 ft/s, so it takes 10 ms for that wave to pass through a plane (the plane of your ears, say). A 10 kHz wave takes 0.1 ms to pass through the same plane. If you start them both at the same time from the same point, the 10 kHz wave will have conveyed all its information well before the 100 Hz wave.

If you have ever done measurements, you will recognize this issue in gating. To get information down to 100 Hz, the gated time needs to be much longer. Indoors, long gate times lead to reflections creeping in. Therefore, you need to do a nearfield measurement when measuring low frequencies.
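The gating point can be put into numbers: a gated measurement resolves frequencies down to roughly the reciprocal of the gate time, while the first room reflection caps how long the gate can be. A sketch (the 4 ft extra reflection path is an arbitrary example, not from any specific room):

```python
# Gated (windowed) measurements: the gate length sets the lowest
# resolvable frequency, roughly f_min = 1 / gate_time, while the first
# reflection's extra path length sets the longest usable gate.

def min_gate_ms(f_min_hz):
    """Gate time needed to capture one full cycle of f_min."""
    return 1000.0 / f_min_hz

def reflection_free_gate_ms(extra_path_ft, speed_ft_s=1126.0):
    """Usable gate before a reflection with this extra path arrives."""
    return 1000.0 * extra_path_ft / speed_ft_s

print(min_gate_ms(100))               # 10 ms to see one 100 Hz cycle
print(min_gate_ms(10_000))            # 0.1 ms suffices for 10 kHz
print(reflection_free_gate_ms(4.0))   # ~3.6 ms: too short for 100 Hz indoors
```

With only ~3.6 ms of reflection-free time, the gate cannot contain a full 100 Hz cycle, which is exactly why low-frequency measurements are done nearfield.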
 
ok, so now you found that the wavelengths of a 100 Hz and 10 kHz wave are different. [...]

Couldn't resist responding to this one!

I wonder what percentage of all music ever played suffered the indignity of having certain notes cut short so that only a fractional wavelength reached the listeners?

And since we are striving for perfection here at a sub-sanity granular level, shouldn't we ban all recordings made on wax cylinders? After all, Caruso was (just about) the first to be recorded on these!
 

ra7

By the time notes are stopped, several cycles have already been launched, so you know what frequency it is. If you stop it before even one cycle is completed (something that I bet never happens in music), it does not carry enough information to reveal what it is. At least, that is my understanding.

This is beside the point anyway. The point was about 1/4 inch making a difference in getting the horn to mesh with the woofer. My answer was that it would make a small difference in the phase overlap between the woofer and horn, and that might be perceptible due to the ripple in frequency response in the crossover region.
 
I think that there is some real misunderstanding going on here. You do not need to hear a completed wave for it to excite your ear. To say that you have to wait for the entire wave to pass is just not so. Your ears will pick up the rising pressure wave as it passes, long before the entire wave has gone by. Our ear-brain feedback loop is better than that.

As has been stated, all sound at a given temperature and pressure travels at a fixed speed; the length of the wave has nothing to do with the speed of sound in air. You hear the different frequencies together as a complex waveform, and that is how we hear music; otherwise you could only listen to one frequency at a time, which I think we all know is not realistic. The same goes for time misalignment: it isn't slower for longer waves, and the launch time is still the same if the phase of each signal starts from the same 0 degree reference point.

The real difference is in the decay time: a higher-frequency wave will have a much shorter decay time than a longer wave. I think it follows an inverse-square rule with distance, but I could be wrong on the math there.
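The fixed-propagation-speed point is easy to check numerically: tones of any frequency launched together from the same point arrive together; only their wavelengths and periods differ. A quick sketch (assuming 1126 ft/s; my numbers, not from the thread):

```python
# Sound speed in air is, to excellent approximation, the same at every
# audio frequency, so a low note and a high note launched together
# arrive together; only their wavelengths differ.
SPEED_FT_S = 1126.0           # assumed speed of sound at room temperature

def arrival_time_ms(distance_ft):
    """Travel time over a distance: independent of frequency."""
    return 1000.0 * distance_ft / SPEED_FT_S

def wavelength_in(freq_hz):
    """Wavelength in inches: this is what depends on frequency."""
    return 12.0 * SPEED_FT_S / freq_hz

for f in (100, 10_000):
    print(f, arrival_time_ms(10.0), wavelength_in(f))
# Both tones take ~8.9 ms to cover 10 ft; wavelengths are ~135 in vs ~1.35 in.
```

The arrival time is the same for both tones; only the wavelength column changes, which is the distinction the thread keeps circling around.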
 
I think that there is some real misunderstanding going on here. [...]

I do apologize, but I was joking about the 'indignity' thing.

This is why I jumped to the 'music model' instead of the 'test tone' model that has been discussed, in order to bring about some reality.

While I don't have an O-scope, I can put some test tones through the old pipes with the audio generator and verify the frequency with the counter.
 
The point was about 1/4 inch making a difference in getting the horn to mesh with the woofer. My answer was that it would make a small difference in the phase overlap between the woofer and horn, and that might be perceptible due to the ripple in frequency response in the crossover region.

This has not always been the point in this discussion; that is why I jumped in about the speed of sound, etc. I don't think anyone has disputed this point, except to confuse phase alignment with time alignment.

There is a 'setup procedure' on the Web somewhere on just how to do this, I believe with an old A7 VOT system. If I can find it, I'll post it here.