If I know the FR, do I know the transient response?

diyAudio Moderator
Is the brain supposed to guess what the original sound is like? That's a logical fallacy of "reconstruction" that occurs whenever you assume the original source signal is faithful in phase to the glockenspiel in the first place and all you need to do is get your XO perfect for phase.
These are different issues. Lining up phase around a crossover doesn't guarantee there will not be any delay in that region (if I understand you).
 
I believe mark100's reasoning is faulty. And there is a motif in recent posts that can be called "reconstruction" or "reverse engineering" of phases by the brain or maybe "wishful thinking". For example, the discussion of echoes.

...Is the brain supposed to guess what the original sound is like?

Perception is relative. Audible elements of the perception (of a glockenspiel or any other instrument and their early reflections) are formed from the relationship between one spectral component and another - and also from the relationship between the same spectral components at the two ears. It is the phase changes between these spectral components that are relevant in this thread - that is, whether they are audible or not - and if they are, the circumstances in which they are audible. The model is entirely linear, but the relationships will not be evident in a conventional (second-order) frequency response. The notion of the brain as a spectral analyser underestimates the processing in our brains by at least one order - a bispectral model would take away much of the need for "guessing"...
 
I believe mark100's reasoning is faulty. And there is a motif in recent posts that can be called "reconstruction" or "reverse engineering" of phases by the brain or maybe "wishful thinking". For example, the discussion of echoes.

The phases of the separate frequencies of the glockenspiel strike spin through the air to the mic and then later to your ear and then along the length of your cochlear organ. The rotation may be orderly in a math sense, granting mark100 that. But what does it "look" like to the ear? Is the brain supposed to guess what the original sound is like? That's a logical fallacy of "reconstruction" that occurs whenever you assume the original source signal is faithful in phase to the glockenspiel in the first place and all you need to do is get your XO perfect for phase.

No doubt mark100 has cultivated hearing and taken great care in developing his audio environment. But the process of making his phases do his bidding certainly also entrained all kinds of other audible tweaks along the way unrelated to phase per se.

Ok, take #2.

I think unrelated issues are being combined.

No matter what signal a mic picks up, I want my speaker, my reproduction of what the mic picked up, to replicate that signal as accurately as possible.
It's an all win, nothing to lose, goal.

Similarly, no matter how my speaker interacts with the environment, it is all win with nothing to lose, if the speaker accurately replicates the signal being fed to it.

And finally, no matter how our hearing works, the all win, nothing to lose chain continues..... why wouldn't we want signal propagation that is as accurate as possible?

I mean, Hi-Fi....high fidelity...true to source....
If we aren't after this, why bother with anything other than simply pleasing our ears and saying screw hi-fi (which I'll admit is often tempting :rolleyes: )

The thread started with the question, "If I know the FR, do I know the transient response?"
The simple true answer is yes, when you include phase in the definition of FR, such that FR has both a mag and phase component.
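As a minimal numerical sketch of that equivalence (Python/numpy, with a made-up flat-magnitude response carrying a pure 1 ms delay, nothing measured): the complex FR, mag and phase together, converts directly to the impulse response by an inverse FFT, and the transient response to any input then follows by convolution with that impulse response.

```python
import numpy as np

fs = 48000                      # sample rate (Hz), arbitrary for the illustration
n = 4096                        # FFT length

# Hypothetical complex FR: flat magnitude plus a pure 1 ms delay's worth of phase.
f = np.fft.rfftfreq(n, d=1 / fs)
H = np.ones_like(f) * np.exp(-2j * np.pi * f * 1e-3)

# Magnitude and phase together fully determine the impulse (transient) response.
h = np.fft.irfft(H, n=n)
print("impulse peaks at", 1e3 * np.argmax(np.abs(h)) / fs, "ms")   # ~1.0 ms
```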

The thread in various ways has gone on to question if there is a benefit to paying attention to the phase component, like we all know there is to the mag component.
The simple true answer is yes. The only question is to what extent.

Imho, most everyone makes phase much too difficult. Seems kinda simple when broken down.

Drivers are minimum phase within their passbands ...which means flatten mag, and phase automatically flattens too. All win, no lose....how hard is that?
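A sketch of what minimum phase buys you, using a hypothetical driver irregularity (a +6 dB peaking biquad from the RBJ cookbook standing in for a response bump): for a minimum-phase system the phase is recoverable from the magnitude alone via the real cepstrum, which is why an EQ that flattens the mag of such a driver flattens its phase along with it.

```python
import numpy as np
from scipy.signal import freqz

fs, n = 48000, 8192                            # sample rate, even FFT grid length

# Hypothetical driver irregularity: +6 dB peaking biquad at 1 kHz, Q = 2
# (RBJ cookbook coefficients); minimum phase, like a driver in its passband.
A, w0, Q = 10 ** (6 / 40), 2 * np.pi * 1000 / fs, 2.0
alpha = np.sin(w0) / (2 * Q)
b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])

freqs = np.arange(n) * fs / n                  # full two-sided 0..fs grid
H = freqz(b, a, worN=freqs, fs=fs)[1]          # the "measured" complex response

# Recover the phase from the magnitude alone via the real cepstrum
# (for a minimum-phase system, phase = -Hilbert transform of log|H|).
cep = np.fft.ifft(np.log(np.abs(H))).real
fold = np.zeros(n)
fold[0], fold[n // 2] = cep[0], cep[n // 2]
fold[1:n // 2] = 2 * cep[1:n // 2]
H_min = np.exp(np.fft.fft(fold))

# The phase rebuilt from |H| alone matches the measured phase, so flattening
# the magnitude with a minimum-phase EQ flattens the phase as well.
err = np.max(np.abs(np.angle(H_min[:n // 2]) - np.angle(H[:n // 2])))
print(f"max phase reconstruction error: {err:.2e} rad")
```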

Crossovers, no matter what type we use.... passive IIR, active IIR, IIR via FIR, linear phase via FIR, mixed phase....no matter what...
all have the same goal ...get the acoustic output phase traces of adjoining drivers to lay on top of each other.
This is indisputable imo/ime...if for no more than the sake of smooth frequency response. All win, no lose.
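A sketch of that goal with a textbook case (a hypothetical 4th-order Linkwitz-Riley pair at 2 kHz, ideal filters only, no real drivers): the two branches share one phase trace, and their sum is flat through the crossover.

```python
import numpy as np
from scipy.signal import butter, freqz

fs, fc = 48000, 2000                               # sample rate, crossover frequency (Hz)

# LR4 = squared 2nd-order Butterworth sections on each branch.
b_lp, a_lp = butter(2, fc, btype='lowpass', fs=fs)
b_hp, a_hp = butter(2, fc, btype='highpass', fs=fs)

f = np.linspace(20, 20000, 4000)
H_lp = freqz(b_lp, a_lp, worN=f, fs=fs)[1] ** 2    # low-pass branch response
H_hp = freqz(b_hp, a_hp, worN=f, fs=fs)[1] ** 2    # high-pass branch response

# The two branches' phase traces lie on top of each other at every frequency...
phase_diff = np.angle(H_hp / H_lp)
print("max inter-branch phase difference:", np.max(np.abs(phase_diff)), "rad")

# ...and because they do, the summed magnitude is flat through the crossover.
summed = np.abs(H_lp + H_hp)
print("summed response ripple:", 20 * np.log10(summed.max() / summed.min()), "dB")
```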

When you apply both of those steps, driver correction and proper crossover construction (which includes delay compensation/polarity/levels),
and make boatloads of measurements,
you begin to see and hear a pattern......
damn! ...good looking measurements sound good too.:D

So for me, I say what's to lose by striving for perfect impulse, perfect mag and phase, perfect transients? So far, a little latency is all I've found... and heard lots of benefits.
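For a sense of scale on that latency (the filter length below is hypothetical, not anyone's actual preset): a symmetric, linear-phase FIR delays everything by half its length.

```python
from scipy.signal import firwin

fs = 48000
taps = 4095                        # hypothetical FIR correction length, not anyone's preset

# A symmetric (linear-phase) FIR delays the whole signal by (taps - 1) / 2 samples.
h = firwin(taps, 1000, fs=fs)      # stand-in linear-phase filter
latency_ms = (taps - 1) / 2 / fs * 1e3
print(f"{taps}-tap linear-phase FIR at {fs} Hz: about {latency_ms:.1f} ms of latency")
```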

If folks want to think things are complicated and say why bother, bless them.
Me, I like to simplify things, try stuff, and observe without necessarily having positive expectations. Pls bless me too, ok? thx:)
 
...The thread in various ways has gone on to question if there is a benefit to paying attention to the phase component, like we all know there is to the mag component.
The simple true answer is yes. The only question is to what extent.
...
Crossovers, no matter what type we use.... passive IIR, active IIR, IIR via FIR, linear phase via FIR, mixed phase....no matter what...
all have the same goal ...get the acoustic output phase traces of adjoining drivers to lay on top of each other.
This is indisputable imo/ime...if for no more than the sake of smooth frequency response. All win, no lose....

Ever fool with DSP time alignment?

If you fool with time alignment or follow the threads in this sub-forum, you know it hardly makes a difference, even for subs and even with rather large offsets (AKA many degrees). But if you read long threads about making the perfect XO, it makes all the difference in the world in those threads. And despite asking many times, I can recall no instance of somebody posting hearing evidence on the question.
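For anyone who does want to fool with it, here is a toy sketch of what a time-alignment exercise measures (a hypothetical ideal LR4 pair at 2 kHz, free field, no real drivers): sweep a delay error on one branch and note the dip it puts in the summed response. Whether dips like these are audible is exactly the question being argued here.

```python
import numpy as np
from scipy.signal import butter, freqz

fs, fc = 48000, 2000                              # sample rate, crossover frequency (Hz)
b_lp, a_lp = butter(2, fc, btype='lowpass', fs=fs)
b_hp, a_hp = butter(2, fc, btype='highpass', fs=fs)

f = np.linspace(200, 20000, 2000)
H_lp = freqz(b_lp, a_lp, worN=f, fs=fs)[1] ** 2   # LR4 low-pass branch
H_hp = freqz(b_hp, a_hp, worN=f, fs=fs)[1] ** 2   # LR4 high-pass branch

# Sweep a hypothetical tweeter delay error and report the worst-case dip in
# the summed response across the band.
for delay_us in (0, 50, 125, 250):
    delay = np.exp(-2j * np.pi * f * delay_us * 1e-6)   # pure delay on the HP branch
    dip_db = 20 * np.log10(np.min(np.abs(H_lp + H_hp * delay)))
    print(f"{delay_us:3d} us misalignment -> worst-case dip in the sum: {dip_db:7.2f} dB")
```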

Whatever the breast-beating about phase, there is no way for humans or machines to reconstruct a sound object (real or from a speaker) using phases as any help. The absolute and relative phases are random at your ear.

Your brain seems to have adequate data to identify and locate sound objects based on amplitude and time-of-arrival information (and, as some have noted, better after "learning" the room sound). My guess is that if the partials of the glockenspiel strike arrive within, say, a 10 ms interval (AKA up to a whopping 50 cycles of the 5 kHz partial), it will sound like great transient reproduction.

Forget about phase.

Doesn't anybody have a method for quantifying transient response that's better than just eyeballing an oscilloscope?*

B.
* speaking of 'scopes, my speakers make lousy square waves while a buddy has good square waves with his Quad ESLs. Both systems sound the same. That's the nature of watching square waves on a 'scope: the phases can be out of whack and the image on the screen can be screwy, but it has little effect on the sound... because your ear doesn't care about phase.
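That footnote is easy to reproduce numerically (synthetic signals only, nothing to do with either speaker): build a square wave from its harmonics, randomise the harmonic phases, and the magnitude spectrum is untouched while the 'scope trace changes completely.

```python
import numpy as np

fs, f0, dur = 48000, 100, 1.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)
harmonics = np.arange(1, 200, 2)                 # odd harmonics up to ~20 kHz

# A square-ish wave built from its Fourier series (1/k amplitudes)...
square = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in harmonics)

# ...and the same harmonics, same amplitudes, random phases.
scrambled = sum(np.sin(2 * np.pi * k * f0 * t + rng.uniform(0, 2 * np.pi)) / k
                for k in harmonics)

# Identical magnitude spectra, completely different 'scope traces.
same_mag = np.allclose(np.abs(np.fft.rfft(square)),
                       np.abs(np.fft.rfft(scrambled)), atol=1e-6)
print("magnitude spectra identical:", same_mag)
print("peak of square-ish wave:     ", round(np.max(np.abs(square)), 3))
print("peak of phase-scrambled wave:", round(np.max(np.abs(scrambled)), 3))
```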
 
...there is no way for humans or machines to reconstruct a sound object (real or from a speaker) using phases as any help. The absolute and relative phases are random at your ear.

Not true on both counts (at least if you are not listening to pure tones or experimental synthesizer bands - which I would not recommend).

Doesn't anybody have a method for quantifying transient response that's better than just eyeballing an oscilloscope?*

I have been trying to explain how you might do just that...
 
...I have been trying to explain how you might do just that...

I am sure the shortcoming is mine.

But could you please explain your test method again in terms simple enough for me, perhaps starting with words like, "Put your mic......" and ending with words like, "... and so a score of X would be considered a pretty good transient behaviour"? Or like that.

B.
 
Originally Posted by bentoronto:
...there is no way for humans or machines to reconstruct a sound object (real or from a speaker) using phases as any help. The absolute and relative phases are random at your ear.


Not true on both counts (at least if you are not listening to pure tones or experimental synthesizer bands - which I would not recommend)....

Your pronouncement would be true only if humans had telepathic powers allowing them to keep track of absolute phase back to the speaker, or even back to the glockenspiel.

You may find it worthwhile to acquaint yourself with the concept of "stimulus equivalence" as used by perception psychologists. In other words, the same array of frequencies with the same array of relative phases can arise from more than one sound object. Therefore, unlike loudness and time of arrival, phase* isn't a helpful tool in human life.

B.
* in ordinary life, phases of the frequencies reaching our ears (or even just one ear) are neither absolute nor relative. It is essentially random. Even the human vocal tract generates tones in different locations and different relations - not like the textbook at all.
 
I am sure the shortcoming is mine.
But could you please explain your test method again in terms simple enough for me, perhaps starting with words like, "Put your mic......" and ending with words like, "... and so a score of X would be considered a pretty good transient behaviour"? Or like that.

This is not about what you measure but how you display it. From earlier, you know that the impulse response has all the information you require, provided (system) non-linearities do not affect matters (which they are very likely to over the low frequency region we have mainly been talking about, but we will ignore that).

So then it is about how you display that information. A magnitude response has no time information whatsoever. There is a body of work concerning group delay distortion from which a phase response will yield an audibility criterion, but that is not the same as what we are talking about here, since transient events are, by definition, short-lived.

So instead we might look to the time-frequency plane to see how the spectral components are dispersed in time - and how their inter-relations might characterise our perception. The trouble is that with such methods we encounter the effects of the window/filter, which may well obscure what we are looking for.

We might instead need to relate the elements we spot deviating from an ideal response to every other component we measure, in order to form some idea of how they are perceived - if indeed they are perceivable. This is not trivial, even to display, but I think finding a way to display the evolution of the bispectral content in time might be 'where it's at'.

That is as close as I can get to telling you how to do what you want. I have no idea if anyone has ever achieved this - or even attempted it. There are a series of JAES papers from the research lab at Philips (I think) concerning the Wigner distribution and that might be a good place to start.
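As a very first, conventional step in that direction, here is a sketch (a made-up 4th-order 40 Hz high-pass standing in for a loudspeaker impulse response): a short-time Fourier view shows how spectral energy disperses in time, and it also makes the window trade-off mentioned above explicit. The Wigner or bispectral displays would be refinements of this picture.

```python
import numpy as np
from scipy.signal import butter, lfilter, spectrogram

fs = 48000
x = np.zeros(fs // 2)
x[0] = 1.0

# Made-up "loudspeaker": a 4th-order 40 Hz high-pass (vented-box-like roll-off),
# which smears the lowest frequencies out in time.
b, a = butter(4, 40, btype='highpass', fs=fs)
h = lfilter(b, a, x)

# Two short-time Fourier views of the same impulse response; the window length
# is the trade-off mentioned above (time resolution vs frequency resolution).
for nperseg in (512, 4096):
    f, t, S = spectrogram(h, fs=fs, nperseg=nperseg,
                          noverlap=nperseg // 2, mode='magnitude')
    print(f"{1e3 * nperseg / fs:5.1f} ms window -> "
          f"frequency bins every {f[1]:5.1f} Hz, "
          f"time frames every {1e3 * (t[1] - t[0]):5.1f} ms")
# Plotting 20*log10(S) against t and f gives the 'dispersal' picture; the Wigner
# or bispectral displays discussed above would refine this view.
```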

I am sure the shortcoming is mine.

An attribute we obviously share :)
 
diyAudio Moderator
It is essentially random.
That could be "proven", but what if our human brain can outsmart us and discern the origin of something we hear? Here's another thought: what if some phase variation around a crossover changes the modal distribution of sound within the room, altering the tonal balance? What we hear would then have nothing to do with phase, and yet it would be entirely caused by it.


Drivers are minimum phase within their passbands ...which means flatten mag, and phase automatically flattens too.
all have the same goal ...get the acoustic output phase traces of adjoining drivers to lay on top of each other.
...good looking measurements sound good too.
If this was on topic we'd have a few things to talk about ;)
 
diyAudio Moderator
@ Mark, so much to say on the topic too. I've brought some of it up with you on other threads. Anything on your mind?

No it cannot.
Hence the quotes ;) A caution not to take what appears to be at face value.
That is impossible too.
Let me put it this way: vary the inter-driver phase, then measure the room power, and you will get a different result.
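A toy free-field sketch of that last point (two idealised point sources 0.25 m apart near a hypothetical 300 Hz crossover, no room, no real drivers): averaging |p|^2 over a spread of listening positions, the result changes when nothing but the relative phase between the two sources changes. A real room's modes would of course change the numbers.

```python
import numpy as np

c = 343.0                                     # speed of sound, m/s
f = 300.0                                     # hypothetical crossover frequency, Hz
k = 2 * np.pi * f / c                         # wavenumber

# Two coherent point sources 0.25 m apart (say woofer and mid), free field.
s1, s2 = np.array([0.0, 0.0]), np.array([0.0, 0.25])

# A spread of listening positions 2-4 m out.
rng = np.random.default_rng(1)
pts = rng.uniform([2.0, -2.0], [4.0, 2.0], size=(2000, 2))
r1 = np.linalg.norm(pts - s1, axis=1)
r2 = np.linalg.norm(pts - s2, axis=1)

def mean_power(offset_rad):
    """Spatially averaged |p|^2 when source 2 carries an extra phase offset."""
    p = np.exp(-1j * k * r1) / r1 + np.exp(-1j * k * r2 + 1j * offset_rad) / r2
    return np.mean(np.abs(p) ** 2)

ref = mean_power(0.0)
for deg in (0, 45, 90, 180):
    change_db = 10 * np.log10(mean_power(np.deg2rad(deg)) / ref)
    print(f"inter-driver phase offset {deg:3d} deg -> "
          f"average power {change_db:+6.2f} dB re in-phase")
```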
 
Clarification: I suspect preserving phase alignment doesn't matter much for the sound of a glockenspiel; but at low enough frequencies, phase and inter-aural time difference blend into one another, and nerves do have the ability to sync to that timing. So in the sense of spatial localization below 800 Hz, phase (AKA timing) matters, and it comes naturally tied to the other cues for lateral-plane localization.

Also, my guess about the blending period of partials should probably be closer to what is called "temporal resolution", which is maybe up to 5 ms with test stimuli and maybe a lot more with music and music rooms.

B.
 
Clarification: I suspect preserving phase alignment doesn't matter much for the sound of a glockenspiel; but at low enough frequencies, phase and inter-aural time difference blend into one another, and nerves do have the ability to sync to that timing. So in the sense of spatial localization below 800 Hz, phase (AKA timing) matters, and it comes naturally tied to the other cues for lateral-plane localization.

There is no basis for that I know of. There seems to be a confusion between inter-aural phase and the relative phase of various spectral components presented to the ears. They are different and one does not preclude the other. As I have alluded to before, the processing in the brain is likely different too.
 