Wide Directivity 2-way Compact Speaker: T34A Waveguide and Purifi 6.5 Aluminium

Hi,
yeah, I can relate to what you write. My system also sounds about the same on the sofa, in the kitchen, and all the way out to the garden, but when I get a bit closer than where my sofa is, the sound changes greatly. That happens at about 2.2 m listening distance, ear to speaker. The sofa is ~3 m away for practical reasons, and too far for good sound. I don't know at what distance the perception changes in your place or in Toole's; I would assume it's quite close to the speakers, closer than the typical listening distance in a typical domestic room.

I cannot hear a difference from slight phase/time changes when listening from the other side of the room, where my DSP is located. I tried adjusting it at close proximity but not at the sofa; I need to try the sofa as well.

From the other side of the room, the sound to me is quite simply typical home hi-fi living-room sound, a perception heavily influenced by early reflections. However, the perception turns into a much more focused and involving sound when the listening triangle is shrunk enough that the brain starts to pay involuntary attention to the direct sound. Griesinger's texts basically attribute this difference to preservation of the sound's original phase, which helps it pop above the amplitude of all the noise that is always around us and enables the brain to focus on the sound.
The distance where this happens can be evaluated by playing mono noise through your stereo. When the phantom center image is quite small and very well localized, a focused sound, the brain pays great attention to it. This happens when the listening distance is short enough that the direct sound arrives with enough time and intensity advantage before the reflections come in. If it's a mushy, big, unfocused ball of sound, then early reflections dominate the perception and there is just no clarity to it; the brain doesn't even seem to want to focus on it. All these adjectives are hard to pin down, so it's easiest just to experiment with it.
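To put rough numbers on "enough time and intensity before the reflections come in", here is a minimal sketch (my own illustration, not from this thread; the 1.2 m sidewall distance is an assumption) using a mirror-image model of a single sidewall reflection, comparing the 2.2 m and 3 m listening distances mentioned above:

```python
import math

C = 343.0  # speed of sound in air, m/s (room temperature, approximate)

def first_reflection_gap(listen_dist_m, wall_dist_m):
    """Delay and level of one sidewall reflection vs the direct sound.

    Mirror-image model: the reflected path is the straight line from a
    speaker mirrored behind the wall to the listener. Geometry is
    simplified so speaker and listener sit on a line parallel to the
    wall, both wall_dist_m away from it.
    """
    direct = listen_dist_m
    reflected = math.hypot(listen_dist_m, 2.0 * wall_dist_m)
    gap_ms = (reflected - direct) / C * 1000.0
    # inverse-distance (1/r) level advantage of the direct sound
    level_db = 20.0 * math.log10(reflected / direct)
    return gap_ms, level_db

# Hypothetical room: sidewall 1.2 m from the speaker/listener line
for d in (2.2, 3.0):  # the "good" distance and the sofa distance above
    gap, lvl = first_reflection_gap(d, 1.2)
    print(f"{d} m listening distance: reflection arrives {gap:.1f} ms late, "
          f"{lvl:.1f} dB below the direct sound")
```

In this toy geometry, moving from 3 m to 2.2 m both lengthens the reflection-free window (roughly 2.5 ms to 3.1 ms) and increases the direct sound's level advantage, consistent with the focus snapping in at closer range.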

Example: when I'm positioned "too far" from the speakers, the mono-noise phantom center covers roughly the whole area between them. As I move closer, staying equidistant from both speakers, the phantom center comes into focus. Keeping my eyes closed helps in detecting this. When the phantom center is in sharp focus, the speakers themselves seem mute and all the sound appears concentrated in the middle, a true phantom center. It's a very calm sound in a way; the surrounding noise in the room kind of vanishes, and the focus is on the phantom center so strongly that I cannot tell whether my speakers are on or off even when I look at them. Only if I move closer to one speaker than the other does the localization of the noise shift to the nearer speaker. And if I move a bit further out, the noise widens again. It's a very different sound perceptually.

I hope I described it well enough that anybody can relate. Listening to music while moving around the room, from the sofa to a slightly closer listening position on the footrest, makes a huge difference in perception, to me at least. It could be a very small, even unnoticed, difference for some people, I guess.

I'd bet that beyond this distance, where the brain seems to lose focus on the sound and the perception changes, what we listen to is mostly the speaker's power response, or the in-room response if you will. People can likely still detect differences between speakers there, but probably not differences in phase, because the phase information is already ruined, "proven by perception".

All this stuff is very much about how the auditory system works: how many cycles of direct sound are needed before the reflections come in for the brain to register the direct sound? I don't know. At very high frequencies the wavelength is very short, and the brain likely has enough time to analyze the direct sound before reflections arrive. At 200 Hz, by contrast, the wavelength is already about 1.7 m, and perhaps only half a cycle arrives before the reflections pile up, depending on many things such as listening distance. This is why you can easily localize sharp attack sounds, but bass notes are almost impossible to localize.
There is likely some critical bandwidth somewhere in between, in the midrange, where the auditory system must register enough direct sound before the reflections in order to produce the focused perception of a sound, a human voice for example.
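As a back-of-the-envelope check on that half-cycle claim, this sketch (my own, taking the ~3 ms reflection-free window from the geometry example above as an assumption) counts how many cycles of each frequency complete before the first reflection arrives:

```python
C = 343.0     # speed of sound, m/s
GAP_MS = 3.0  # assumed reflection-free window, from the geometry sketch above

for f in (100, 200, 500, 1000, 2000, 5000):
    wavelength = C / f            # e.g. 343 / 200 = ~1.7 m at 200 Hz
    cycles = f * GAP_MS / 1000.0  # cycles completed before the reflection
    print(f"{f:5d} Hz: wavelength {wavelength:5.2f} m, "
          f"{cycles:5.1f} cycles of clean direct sound")
```

At 200 Hz barely half a cycle (0.6) completes before the reflection arrives, while a 2 kHz tone gets six clean cycles, which lines up with sharp attacks being easy to localize and bass notes being nearly impossible.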

David Griesinger explains this a bit differently though, through the periodicity of sounds: on every fundamental cycle of a sound, all the harmonics line up and make a large amplitude peak, which pops above all the noise (noise being sound with lost phase information, in this sense), and the auditory system focuses on that. Conversely, one could think the auditory system does not pay attention to all the noise around us, because that would drive us crazy, but instead picks up the important stuff by some mechanism and makes it perceptually more prominent while suppressing the unimportant, "the noise". If you take this as good information and start to utilize it, then "mushy sound" means the phase information is lost, because the brain is not able to focus on the sound, and thus speaker phase at the crossover doesn't matter either. Or, when the focus on the sound is great, the original harmonics are preserved well enough that the brain is able to lock in, and phase should then be audible, if it is audible at all beyond just this focus/unfocus effect!
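To illustrate the periodicity argument, here is a small sketch (my own illustration, not Griesinger's) that builds a ten-harmonic tone twice, once with aligned phases and once with randomized phases. Both versions have identical magnitude spectra; only the phase differs, yet the aligned one has the tall periodic peaks described above:

```python
import math
import random

FS = 48_000   # sample rate, Hz
F0 = 200.0    # fundamental frequency, Hz
N_HARM = 10   # number of equal-amplitude harmonics
N = FS // 10  # 100 ms of signal

def harmonic_tone(phases):
    """Sum of equal-amplitude harmonics of F0 with the given phases."""
    return [
        sum(math.sin(2 * math.pi * F0 * k * n / FS + phases[k - 1])
            for k in range(1, N_HARM + 1))
        for n in range(N)
    ]

def crest_db(x):
    """Crest factor: peak level relative to RMS, in dB."""
    peak = max(abs(v) for v in x)
    rms = math.sqrt(sum(v * v for v in x) / len(x))
    return 20.0 * math.log10(peak / rms)

aligned = harmonic_tone([0.0] * N_HARM)  # all harmonics line up each cycle
random.seed(1)
scrambled = harmonic_tone([random.uniform(0.0, 2.0 * math.pi)
                           for _ in range(N_HARM)])  # phase "lost"

print(f"aligned phases:   crest factor {crest_db(aligned):.1f} dB")
print(f"scrambled phases: crest factor {crest_db(scrambled):.1f} dB")
```

With the spectrum unchanged, the aligned version shows a clearly higher crest factor: the periodic amplitude peak that supposedly lets the auditory system lock onto a sound is purely a phase effect.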

So, in this sense, it is very important to learn to listen to your own auditory system, to learn to detect whether your brain is paying attention or not 🙂 It opens up the possibility of applying logic to listening tests: you could devise a listening test to determine whether phase is audible or not. Just listen with the brain in focus and compare that to the brain out of focus. This is possible because the focus seems to be an on/off kind of thing, and it's detectable when it happens. I can literally A/B test an early reflection by moving myself a bit so that my own auditory system either mutes it or not 😉

Getting off topic, but I think this is such an important concept that it should inform all these discussions in some way. Some things are more or less important depending on which state your brain is in, which depends on everything: which state you'd like it to be in, which sound you prefer on which recording, what your room acoustics are like, and so on. This is core to speaker design, in my opinion.
 
@markbakk , that is the question in essence: what one hears vs. what is measured. And I agree with @tmuikku that when getting closer than the critical distance, the room's influence is minimised; my wife also noticed the effect that the sounds stay more coherent at odd listening places. But of course my wife and I are familiar with the room acoustics.
Next week I have a visitor who will bring his own speakers as well, a first visit to our living room, so it will be interesting to hear his observations.
 
Sure, but is that not the same with, for instance, a voice saying a word?
Well, for higher frequencies, the alteration of the sound envelope perhaps? A topic Mr. Griesinger has touched upon.
I started this by stating that the phase behavior of the individual drivers in the crossover region might not be of much interest, and that the (phase of the) acoustic sum (on the listening axis) is what really matters. We didn't have to get into long discussions about the bigger phase picture for my sake. I'm not that well versed in Griesinger, but up until now the discussions about envelopes and phase relationships between different frequency bands seem a long shot to me. One doesn't want sudden big phase changes in a loudspeaker. Period. IMHO.
 
For completeness and academic interest, this is the raw waveguide response without any EQ; the impedance peak is at 760 Hz, which is very close to the value hificompass measured in his review.

[Attached image: T34AWG-RAW-Six-pack.png]