Question about phase and chair placement

Status
Not open for further replies.
And of course your question remains unanswered.

Bill, you seem to think we are not answering phishead's question...

He seems quite happy with the answers he is getting, so perhaps you could use your superior brainpower and experience to tell us what question he is really asking, or maybe even post something constructive in this thread for a change and give us your well-thought-out answer.
 
If you put simple sinewaves into your system and listen as you move your head a couple inches, you will be amazed at how the volume of any particular frequency bounces up and down depending on where you put your head. With very high frequency tones, moving your head as little as an inch will allow you to locate peaks and nulls that are a function of multipath (reflections from room boundaries and the direct signal from the driver) and loudspeaker output phase.

Is this audible in music, which almost always consists of much more complex waves than simple sines? Sure it is; this is the sort of thing that makes up the in-room frequency response of a speaker. Does it matter?
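A quick way to see the peak/null behaviour described above is to sum a direct path and a single reflected path and watch the resultant level as the head moves. A minimal sketch (the distances, the equal-strength reflection, and c = 343 m/s are illustrative assumptions):

```python
import math

C = 343.0  # speed of sound in air, m/s (assumed, ~20 degrees C)

def summed_amplitude(freq_hz, direct_m, reflected_m):
    """Amplitude of a direct ray plus one equal-strength wall reflection.

    Each path contributes a sinusoid delayed by its path length; the
    resultant amplitude depends only on the path-length difference.
    """
    k = 2 * math.pi * freq_hz / C          # wavenumber
    dphi = k * (reflected_m - direct_m)    # phase lag of the reflection
    # |1 + e^{-j*dphi}| = 2*|cos(dphi/2)|
    return 2 * abs(math.cos(dphi / 2))

# Move the head in 1 cm steps at 10 kHz: the summed level swings wildly
for x_cm in range(0, 5):
    direct = 2.00 + x_cm / 100       # distance to the speaker grows...
    reflected = 5.00 - x_cm / 100    # ...while the wall path shrinks
    print(f"{x_cm} cm: amplitude = {summed_amplitude(10000, direct, reflected):.2f}")
```

At 10 kHz the wavelength is about 34 mm, so each 1 cm head movement here changes the path difference by more than half a wavelength, swinging the sum between near-doubling and near-cancellation.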

MR
 
pinkmouse:

Didn't you read the new rules? You're supposed to be nice and make an effort not to be sarcastic.

Here is the original question:
<blockquote>
If you move your listening chair a mere 0.8 cm forward, you've completely changed the phase of the 20 kHz frequencies compared to the 20 Hz. I know this is being picky, but this makes me question our ability to hear phase differences at all. If such small motions of our head can make large differences in the phase of the sound we are hearing, when does it start to matter? I've never noticed a difference between leaning forward or leaning backward while listening to music, a difference of about 20 cm. This 20 cm changes the phase of 850 Hz by 180 degrees, while changing 1700 Hz by 360 degrees. You'd imagine that such a change would be noticeable.
</blockquote>
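For what it's worth, the arithmetic in the quoted question checks out. A minimal sketch (assuming c = 340 m/s, which the question's figures imply):

```python
def phase_shift_deg(freq_hz, move_m, c=340.0):
    """Phase change, in degrees, at a single frequency when the listener
    moves move_m metres along the line to the source."""
    wavelength = c / freq_hz
    return 360.0 * move_m / wavelength

print(phase_shift_deg(850, 0.20))     # 180 degrees: half a wavelength
print(phase_shift_deg(1700, 0.20))    # 360 degrees: a full wavelength
print(phase_shift_deg(20000, 0.008))  # ~169 degrees for 0.8 cm at 20 kHz
```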
This discussion has gotten off track.

Can anyone imagine a 100 ms block of music in the form of acoustic energy coming toward you? Can anyone picture the waveform? Does anyone know what is happening at a given point in space at any given instant in time? Hasn't anyone looked at music on an oscilloscope?

These forums are not just about answering questions. Those who come to an understanding because they have participated in a thinking process are much better off than those who are simply told.

The phrasing of the original question notwithstanding, the answer does not involve the terms frequency or phase.

If necessary, I will draw and post a series of diagrams. I don't think that will be needed though. Someone will come forward. Who knows, they may be lurking in the background reading this thread but not posting.

On the other hand, I may have finally gone off the deep end. These forums do take their toll.
 
OK Phis, Bill,

How about this.

Consider a sound source in an open field, and you're sitting on your couch listening to this sound source.

First it doesn't matter how simple or complex the sound being generated is. It will create pressure variations in the air that will travel (in all directions, but) toward you, the listener.

Second, as the sound is being generated, it will leave the sound source at the speed of sound. This speed of propagation will not change in mid-journey on its way to your ear. And certainly, different frequencies will not travel at different speeds. All sound travels at the same speed regardless of frequency or phase.

So the sequence in which the "sound waves" leave the sound source is the same sequence in which they reach your ear. There is no change of sequence (one pressure wave to the next) no matter how close or how far you move from the source.

You would have to be moving your head pretty fast to get Doppler distortion, so that's out of the argument. If you talk about a reverberant sound field, like a room, all bets are off on what you'll hear. If you solve that one, write a paper about it.

That leaves us with this: there is no perceptible difference when moving your head a small distance in relation to the sound source because, in fact, there is no difference (except for the very small change in SPL).
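The "same sequence" point can be sketched numerically: delaying the whole composite wave by a fixed time is mathematically identical to shifting each component's phase by an amount proportional to its frequency, so the waveshape is untouched. A minimal sketch (the component amplitudes, frequencies, and phases are made-up example values):

```python
import math

C = 343.0
TAU = 0.20 / C  # extra delay from sitting 20 cm farther back

# (amplitude, frequency in Hz, phase in radians) of a composite signal
comps = [(1.0, 100.0, 0.0), (0.5, 1000.0, 0.3), (0.2, 5000.0, 1.1)]

def whole_wave_delayed(t):
    # Delay the composite wave as one unit
    return sum(a * math.sin(2*math.pi*f*(t - TAU) + p) for a, f, p in comps)

def per_component_phase_shift(t):
    # Shift each frequency's phase by its own amount, 2*pi*f*TAU
    return sum(a * math.sin(2*math.pi*f*t + p - 2*math.pi*f*TAU) for a, f, p in comps)

# The two are identical: a frequency-proportional phase shift IS a plain
# delay, so the waveshape (the "sequence") is untouched.
for i in range(50):
    t = i / 48000
    assert abs(whole_wave_delayed(t) - per_component_phase_shift(t)) < 1e-9
```

Moving the chair therefore re-phases every frequency, but always by exactly the amounts that reassemble into the same waveform, just arriving later.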

Well Bill, how'd I do?😛

Rodd Yamas***a
 
You should get a couple of points, save for...
<blockquote>
So the sequence at which the "sound waves" leaves the sound source is the same sequence when it reaches your ear.
</blockquote>
There are no sound waves (plural). This is at the heart of the problem. There is only ONE. I keep getting the idea that people think there is a multiplicity of waves, each independent of the others. Not so. The sound issued by the speaker is a "composite". It is a SINGLE wave whose only relevant (to this discussion) attribute is that it varies in pressure over time.

The relationship of the frequencies (and their relative phases) which make up this composite is already SET the instant the sound leaves the speaker. At that point the entire concept of phase no longer applies - it disappears from the equation, leaving only amplitude and time. Phase is an intermediate player who often gets benched.

By moving forward or backwards you are only changing WHEN you hear the pressure change at your ear - you are not changing any relationships between the constituent parts of the wave because they cannot be changed. They cannot be changed because they no longer exist. But, sadly, they can be revived, and phase will rear its ugly head again. So remember, whenever you meet phase on the street you need to say, "get outta my phase."

Now, of course, someone is going to say, "But, wait, my tweeter is tweeting and my woofer is woofing. Surely they are issuing separate waves."

So they are.
 
Bill Fitzpatrick wrote:
<blockquote>
By moving forward or backwards you are only changing WHEN you hear the pressure change at your ear - you are not changing any relationships between the constituent parts of the wave because they cannot be changed.
</blockquote>

Isn't this the same as what I said in my last post (the thing about starting the CD player a little later or earlier)?

😕

I also said repeatedly that phase alone doesn't matter at all. But phase DISTORTION will cause signal distortion (i.e. linear distortion). Maybe it was a mistake not to mention that this form of distortion is introduced by electronics (to the least extent) and by loudspeakers (the main culprit), but never by changing distance to a sound source - because that just changes a frequency-independent delay, which leaves waveshapes as they are.
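The distinction can be illustrated directly: a frequency-independent delay shifts a waveform without touching its shape, while a frequency-dependent phase shift (phase distortion) changes the shape. A minimal sketch (the two-tone signal and the 90-degree shift on the second component are arbitrary example choices):

```python
import math

def wave(t, extra_phase=0.0):
    """Two-tone test signal; extra_phase is phase distortion applied
    to the 300 Hz component only."""
    return (math.sin(2*math.pi*100*t)
            + 0.5*math.sin(2*math.pi*300*t + extra_phase))

ts = [i / 48000 for i in range(2000)]  # ~42 ms of signal at 48 kHz

clean = [wave(t) for t in ts]
delayed = [wave(t - 0.001) for t in ts]                   # pure 1 ms delay
distorted = [wave(t, extra_phase=math.pi/2) for t in ts]  # phase distortion

def peak(x):
    return max(abs(v) for v in x)

# The pure delay leaves the waveform's peak value (its shape) intact;
# the phase distortion visibly reshapes the wave:
print(round(peak(clean), 3), round(peak(delayed), 3), round(peak(distorted), 3))
```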

Regards

Charles
 
Good stuff Bill.

Of course when you have real speakers with multiple drivers in a real room with diffractions, reflections, standing waves, etc. all bets are off.

I think the confusion may come from a misunderstanding of Fourier theory. Lots of people seem to have got the idea that all sound waves are made from sinusoidal signals added together. What they miss is that this is just a mathematical tool to help understand and analyse what is going on. It's not the real world.

As Bill pointed out, an impulse from a loudspeaker travels round the room as an impulse. In a real room it will hit floors, walls and objects, each of which will absorb part of the energy and bounce the rest back again. In this real world, each object will have a non-uniform frequency response and will therefore change that impulse a little bit before sending it back into the room. So if you have a room, for example, with walls that perfectly reflect 1 kHz but have a good coefficient of absorption for all other frequencies, then every time you play an impulse (or any other complex signal) from your loudspeaker, you will get really irritating standing waves at 1 kHz, provided the original signal contained a 1 kHz component.

"But DocP, you just said that splitting a signal into its component sinusoids was just a mathematical trick ?" Confusing the real world, isn't it !!!

In practice exactly this sort of thing happens quite often with low frequencies where a room will "boom" at a given bass frequency. Often the solution can be found by changing the location of the speakers and the listening position (easier than changing where the walls are located). This doesn't stop standing waves existing, but it can alter the phase and amplitude just enough to make all the difference. Interestingly, the human hearing system is remarkably good at ignoring what the room is doing to a given signal, within limits. But I've gone off on a rant about room acoustics....

Going back to the phase bit, this is the reason for time-aligning the drivers, and for the popularity of electrostatic drivers, Kef's Uni-Q, and full-bandwidth drivers. They do all the time alignment for you, and you don't end up with the low-frequency bit arriving slightly after the high-frequency bit, which confuses the hell out of the hearing system. Similarly, diffraction from sharp edges of the loudspeaker box can add another bit of the impulse to the overall sound, resulting in smearing of the image.
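The low-frequency "boom" described above lives at the room's standing-wave (axial mode) frequencies, which follow from fitting whole numbers of half-wavelengths between parallel walls. A minimal sketch (the 5 m dimension is just an example figure, with c = 343 m/s assumed):

```python
def axial_modes(length_m, c=343.0, n_max=5):
    """First few axial standing-wave frequencies between two parallel walls.

    A standing wave fits when the wall spacing is a whole number of
    half-wavelengths: f_n = n * c / (2 * L).
    """
    return [n * c / (2 * length_m) for n in range(1, n_max + 1)]

# Hypothetical 5 m room dimension: modes at roughly 34, 69, 103, 137, 172 Hz
print([round(f, 1) for f in axial_modes(5.0)])
```

This is also why moving the speakers or the chair helps: it shifts where in the room you sit relative to the peaks and nulls of these modes, not the mode frequencies themselves.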
 
Sorry Bill, I just thought you were getting a bit gro....my fault

<blockquote>
this is the reason for time aligning the drivers and the popularity of electrostatic drivers, Kef's Uni-Q or full bandwidth drivers.
</blockquote>

Yes, totally correct, but to agree with Bill, all you are hearing is still one modulated compression wave once the sound leaves the speakers, and distance from that source is irrelevant.


<blockquote>
Second, as the sound is being generated, it will leave the sound source at the speed of sound. This speed of propagation will not change in mid-journey on its way to your ear. And certainly, different frequencies will not travel at different speeds. All sound travels at the same speed regardless of frequency or phase.
</blockquote>

Yes, also true in a perfect world, but in the real world there are many complications. To give one example: on a sunny morning at an outdoor festival, the relative humidity close to the ground is high, due to evaporation of the night-time dew. This difference in humidity causes sound to curve down towards the ground, as the speed of propagation of a sound wave is slowed by the denser air (just think of light, a prism, and the resulting rainbow).
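The bending itself comes from the speed of sound varying with local air conditions, so one part of the wavefront outruns another. A sketch using the standard dry-air temperature approximation (the temperatures are arbitrary example values; humidity gradients produce the same kind of bending via a much smaller speed change):

```python
import math

def speed_of_sound(temp_c):
    """Dry-air ideal-gas approximation: c = 331.3 * sqrt(1 + T/273.15) m/s."""
    return 331.3 * math.sqrt(1 + temp_c / 273.15)

# A gradient of a few degrees between ground level and head height bends
# the wavefront, because one edge of the front travels faster than the other:
print(round(speed_of_sound(10), 1))  # ~337.3 m/s in the cooler air
print(round(speed_of_sound(20), 1))  # ~343.2 m/s in the warmer air
```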
 
Hi Pinkmouse,

You were doing fine up to this point:
<blockquote>
(just think of light, a prism, and the resulting rainbow)
</blockquote>
This statement implies that different frequencies are affected to different degrees by temperature or humidity changes. This is not true. As the wave propagates through the changing conditions, its speed changes, creating a dispersion or focusing effect. It is frequency-independent.

Rodd Yamas***a
 
After reading your posts I will sit and think a little bit about it.

I am not entirely convinced that splitting the signal into its constituent frequencies is a bad idea. Fourier isn't just a mathematical tool; it is also how our ears perceive sound. Our cochleae are filled with hair cells of different lengths, each tuned to a specific frequency, so each picks up roughly one frequency at a time.
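Whether or not it is "just a trick", the decomposition itself is easy to demonstrate: a plain DFT recovers the individual tones buried in a composite wave, much as the ear is described as doing. A stdlib-only sketch (the 100 Hz and 300 Hz test tones are arbitrary example choices):

```python
import math

FS = 48000
N = 480  # exactly one period of 100 Hz, so bins land on multiples of 100 Hz
signal = [math.sin(2*math.pi*100*n/FS) + 0.5*math.sin(2*math.pi*300*n/FS)
          for n in range(N)]

def dft_magnitude(x, k):
    """Amplitude of the k-th DFT bin, i.e. the component at k * FS/N hertz."""
    re = sum(x[n] * math.cos(2*math.pi*k*n/len(x)) for n in range(len(x)))
    im = sum(x[n] * math.sin(2*math.pi*k*n/len(x)) for n in range(len(x)))
    return 2 * math.hypot(re, im) / len(x)

# Bin k corresponds to k * 100 Hz here:
print(round(dft_magnitude(signal, 1), 3))  # 1.0 -- the 100 Hz tone
print(round(dft_magnitude(signal, 3), 3))  # 0.5 -- the 300 Hz tone
print(round(dft_magnitude(signal, 2), 3))  # 0.0 -- nothing at 200 Hz
```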

Thanks for the dialogue, folks. I find that this is the best way to sift through the heaps of crap that I've managed to accrue.

-Dan
 
Unfortunately, I found this thread too late to impress Bill with my grasp of this topic. Richard Vandersteen designed his speaker line around this very issue. According to subjective tests on a "standard listener", he discovered that phase misalignment is indeed audible and results from recreating a sound from speaker drivers of different frequencies on the same plane. In nature, a sound wave carrying different frequencies and amplitudes is generated at the source point(s) and transmitted omnidirectionally. Our hearing has evolved to locate the source(s) of the sound by processing the time delays of the reflected wave(s) to identify its location within a space.

Vandersteen's solution was to stagger the speaker drivers so as to time their arrival more accurately within a listening space. Since low frequency waves travel more slowly, the lower-frequency drivers are placed closer to the listening position than the higher-frequency drivers. Unfortunately, his speakers are not able to create the enormous soundstages that his competitors can achieve. Whether this is the fault of his speaker or of the way we record sound is still up for debate. Vandersteen addressed this issue in the cabinet design; some have addressed it in the crossover. The so-called "time-aligned crossovers" that you have heard about try to time/phase align the different drivers electronically.
 
nania:

The speed of sound is NOT related to frequency. Midrange drivers and then tweeters are set back in an attempt to put the <font color="#ff0000">acoustic centers</font> (read voice coils here but only as a discussional reference) of all the drivers on the same plane. This is what time alignment is about. This works to <i>some</i> degree but as Richard Heyser pointed out decades ago in one of his great <u>Audio Engineering Society</u> papers, the acoustic center of a given driver varies with frequency.
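The arithmetic behind setting a driver back is just distance over the speed of sound. A minimal sketch (the 3 cm setback and 3 kHz crossover are hypothetical example figures, with c = 343 m/s assumed):

```python
C = 343.0  # speed of sound, m/s (assumed)

def setback_delay_us(setback_m, c=C):
    """Time offset, in microseconds, from recessing a driver by setback_m."""
    return setback_m / c * 1e6

def setback_phase_deg(freq_hz, setback_m, c=C):
    """Phase offset, in degrees, that the same setback represents at freq_hz."""
    return 360.0 * freq_hz * setback_m / c

# Hypothetical 3 cm acoustic-centre offset between tweeter and midrange:
print(round(setback_delay_us(0.03), 1))         # ~87.5 microseconds
print(round(setback_phase_deg(3000, 0.03), 1))  # ~94.5 degrees at a 3 kHz crossover
```

Note that this only aligns the drivers for one assumed acoustic-centre position; as Heyser's point above implies, that position itself drifts with frequency.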

Anyway, your post, however important it might be, has nothing to do with the original question of this thread, which, I believe, has been answered.

I don't know if everyone is happy with the answer but it's correct nonetheless. If anyone thinks my answer is in error, let me know - I'll be happy to continue the discussion.
 