If I know the FR, do I know the transient response?

I am not sure that is possible - it would certainly be interesting to see how it was done if so. And I am not sure the delays we are talking of would be appropriate in any live monitoring application.



All discussion I know of has concerned only linear phase crossover implementations - which, for good reasons, are not always as good as their minimum phase counterparts. Those reasons do not apply to the low frequency roll-off.



There are also applications at much higher frequencies, such as in stereo shuffling, where linear phase filters are clearly preferable. I would even suggest that a lack of such filtering is why what should be a fixture of stereo has been absent for so many decades.

Yep, I wish Meyer still had their tutorials online about crossovers.
Practical step-by-step tuning via dual-channel FFT, that started on-axis and then showed how to continue to optimize off-axis.
Had charts for all major types of crossovers and orders, showing polarity, phase shift, summations, etc.
Miles ahead of the usual material.
Oh, and like I said, analog, no latency to speak of..
Lordy, I'm so happy all that complexity can be traded in for a few ms of latency :D

Sorry, but I don't follow how you say all the discussion has been about linear phase crossovers....
I'd say it's been about the thread title ??? And nicely so :)

The answer to which is a nice big Yes!, whether minimum or linear phase ..
(of course given a definition of frequency response that includes phase)
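For anyone who wants to see that numerically, here is a minimal numpy/scipy sketch (the filter choice is purely illustrative): take the complex frequency response - magnitude and phase - and an inverse FFT hands back the impulse response, i.e. the transient behaviour.

Code:
import numpy as np
from scipy import signal

fs = 48000
# Illustrative minimum-phase example: 2nd-order Butterworth high-pass at 40 Hz
b, a = signal.butter(2, 40, btype='highpass', fs=fs)

# Complex frequency response (magnitude AND phase) on a dense grid
n = 2**16
w, H = signal.freqz(b, a, worN=n, whole=True, fs=fs)

# Impulse response recovered from the complex FR via inverse DFT...
h_from_fr = np.real(np.fft.ifft(H))

# ...matches the impulse response computed directly in the time domain
impulse = np.zeros(n); impulse[0] = 1.0
h_direct = signal.lfilter(b, a, impulse)
print(np.max(np.abs(h_from_fr - h_direct)))   # ~0: same transient response

Throw away the phase and keep only the magnitude, and the time response is no longer uniquely determined.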
 
...analog, no latency to speak of..

That is the part that confuses me as it appears contrary to the established law of causality.

Sorry, but I don't follow how you say all the discussion has been about linear phase crossovers....
I'd say it's been about the thread title ??? And nicely so :)

I was meaning the research of which I am aware, rather than this little thread. I do not know of any study into the effects of compensating for the phase of a loudspeaker's fundamental low frequency roll-off - or similarly that of the microphone at the recording end.
 
What if the linear phase case was derived from something that wasn't minimum-phase to begin with?

I love that soundbloke posted after you...I get to quote you now :D

Not sure what you mean by 'wasn't minimum phase to begin with'...

If you mean a source, a musical instrument or the human voice for example..
I don't see how the concept of what kind of phase it has applies...
I mean, it is what it is....

It seems to me the task of my speaker is to reproduce exactly that: it is what it is.
Honestly, I struggle to see how it's even debatable that linear phase, along with a flat frequency response, is the theoretically best solution for that task. Perfect impulse, perfect transients, perfect fundamentals and harmonics.

But I easily see it's certainly debatable how audible that 'perfect' solution is, vs a gently sloping, smoothly transitioning, minimum phase curve (along with the same flat frequency response).

All that said, it strikes me as kind of intuitively obvious that if I have to get to a 'gently sloping, smoothly transitioning phase curve' NOT to hear a difference....well...then linear phase is probably the right destination...;)

If you meant something else by 'wasn't minimum phase to begin with', ...please explain
and sorry for the soapbox above either way
 
That is the part that confuses me as it appears contrary to the established law of causality.



I was meaning the research of which I am aware, rather than this little thread. I do not know of any study into the effects of compensating for the phase of a loudspeaker's fundamental low frequency roll-off - or similarly that of the microphone at the recording end.

Maybe the confusion comes from us not having defined what we mean by a smoothly sloping minimum phase curve. I mean one that allows for the normal low end group delay we typically see, but does so with as little phase rotation as possible, as smoothly as possible.

Gotcha re the research. The only research I'm aware of, if it can be called that, consists of reports from a few respected prosound speaker designers and audio testing consultants who have run phase linearization experiments as far down in frequency as possible. The reports tend to echo the opinion that the most discernible difference was in the low end.

I've done similar tests outdoors, but only as far as 8096 taps at 48kHz allows. I do think the bass changes. It almost sounds like there is less bass, but then a sense comes that it's just cleaner....so I turn it up a little...and everything gets tighter and better, I think.
No double blind yet, so who knows. (Actually I don't much use double blind anymore other than for troubleshooting. Too much retained short-term bias. I prefer to live with a system for a while, and count the smiles vs 'I need to dial something' :rolleyes:)
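For reference, the back-of-envelope arithmetic on what that FIR length buys (just a sketch; nothing here beyond the numbers in the post above):

Code:
fs = 48000        # sample rate from the post above
n_taps = 8096     # FIR length from the post above

span = n_taps / fs                   # ~169 ms: the time window the FIR can "see"
resolution = fs / n_taps             # ~5.9 Hz: roughly the finest low-frequency detail it can control
latency = (n_taps - 1) / (2 * fs)    # ~84 ms: delay added by a linear-phase correction
print(f"{span*1e3:.0f} ms span, {resolution:.1f} Hz resolution, {latency*1e3:.0f} ms latency")

So the correction window is roughly 169 ms and a linear-phase correction adds about 84 ms of latency - hence 'only as far as 8096 taps allows'.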
 
Maybe the confusion comes from us not having defined what we mean by a smoothly sloping minimum phase curve. I mean one that allows for the normal low end group delay we typically see, but does so with as little phase rotation as possible, as smoothly as possible.

In a linear phase response, the slope of the straight line is proportional to the time delay - which is the same for all frequencies. The transition I am talking about needs to be at about (IMHO) 5Hz. There are other factors at play here, as I have alluded to previously, such as the proximity effect due to the stereo difference component.

But phase compensation essentially delays all frequencies by the right amount so that they have the same overall delay - therefore higher frequencies get delayed by more wavelengths. But there always has to be an overall delay - and that is where my confusion arises.

While cascaded Bessel filters might provide for an analogue delay, it would still be a delay.
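To put some numbers on that (a quick scipy sketch; the filter choices are purely illustrative): a minimum-phase high-pass has a group delay that varies with frequency and is largest around the roll-off, while a linear-phase FIR of comparable magnitude response delays every frequency by the same amount - the overall delay in question.

Code:
import numpy as np
from scipy import signal

fs = 48000
freqs = np.linspace(40, 200, 9)                      # probe frequencies in Hz

# Minimum-phase example: 4th-order Butterworth high-pass at 40 Hz
b, a = signal.butter(4, 40, btype='highpass', fs=fs)
w, gd_min = signal.group_delay((b, a), w=freqs, fs=fs)

# Linear-phase example: symmetric FIR high-pass at 40 Hz
h_lin = signal.firwin(8191, 40, pass_zero=False, fs=fs)
w, gd_lin = signal.group_delay((h_lin, [1.0]), w=freqs, fs=fs)

print(np.round(gd_min / fs * 1e3, 1))   # varies with frequency, largest near 40 Hz
print(np.round(gd_lin / fs * 1e3, 1))   # flat at (8191-1)/2 samples, about 85 ms everywhere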
 
What if the linear phase case was derived from something that wasn't minimum-phase to begin with?

You could certainly compensate 'past' a linear phase response to generate a maximum phase version, just not sure why you would want to? Although a lot of drum samples effectively do just that and are renowned for their 'attack'. This was something of what I was pondering on previously as to how we integrate the different frequencies when perceiving transients. I have no explanation...
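For anyone who wants to play with it, a minimal scipy sketch (the filter values are purely illustrative): time-reversing a minimum-phase impulse response leaves the magnitude response untouched but moves all the energy from the start to the end - a maximum-phase version of the same frequency response.

Code:
import numpy as np
from scipy import signal

fs = 48000
# A (near) minimum-phase starting point: impulse response of a Butterworth low-pass,
# truncated to an FIR - values chosen only for illustration
b, a = signal.butter(4, 2000, fs=fs)
impulse = np.zeros(1024); impulse[0] = 1.0
h_min = signal.lfilter(b, a, impulse)      # energy packed at the start

h_max = h_min[::-1]                        # time reversal: same magnitude, maximum phase

w, H_min = signal.freqz(h_min, worN=512, fs=fs)
w, H_max = signal.freqz(h_max, worN=512, fs=fs)
print(np.max(np.abs(np.abs(H_min) - np.abs(H_max))))        # ~0: identical magnitude responses
print(np.argmax(np.abs(h_min)), np.argmax(np.abs(h_max)))   # energy peaks near the start vs near the end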
 
To be a bit clearer (?), for phase differences to be audible (normally referred to as group delay differences instead of phase differences), we are looking for the audibility of one spectral component being delayed with respect to another. This is why I thought the bispectrum might be a place to search...
 
Not sure what you mean by 'wasn't minimum phase to begin with'...
What if a band has a component near the start of the impulse, and another away from the start? E.g. if the origin of this behaviour is acoustic. Would you perhaps have a post-FIR result that still isn't minimum phase?
I struggle to see how it's even debatable that linear phase, along with a flat frequency response, is the theoretically best solution for that task.
Well, it can't hurt - unless its use is somehow hiding other issues.
 
It may be helpful to close the textbook and its math truths and look at the music room.

You have a recording of a hard mallet hitting a glockenspiel. There are components at, say, 200, 400, 600, 1050 Hz, etc. Even with a smooth phase slope (or whatever slight imaginary compromise with reality you want to propose), each of those components arrives at your ear at essentially unrelated - effectively random - phases. If you reverse the polarity of a high-frequency tweeter, no music will sound the least bit different (try it and you'll never argue about the importance of phase again).*

And that's the case even if you spend all night tuning up your filters for whatever response you believe is mathematically ideal. In short, I suspect all the terrific brain effort devoted to making the XO - one little part of the long audio chain - coherent is wasted.

B.
* those components do arrive, along with their various echoes (from the recording studio and your music room.... assuming your ears have learned "the sound" of your music room previously), in a satisfactory time alignment that lets your ear hear a real great glockenspiel chime, esp. if you have ESL speakers, phase be damned
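If anyone wants to check that the starting phases really don't change the spectrum, a tiny numpy sketch (partials as above, equal amplitudes, phases drawn at random): the magnitude spectrum is identical whichever phases you use; only the waveform shape differs.

Code:
import numpy as np

fs = 48000
t = np.arange(int(0.5 * fs)) / fs
partials = (200, 400, 600, 1050)                   # the glockenspiel components above
rng = np.random.default_rng(0)

x_zero   = sum(np.sin(2*np.pi*f*t) for f in partials)
x_random = sum(np.sin(2*np.pi*f*t + rng.uniform(0, 2*np.pi)) for f in partials)

X0, X1 = np.abs(np.fft.rfft(x_zero)), np.abs(np.fft.rfft(x_random))
print(np.max(np.abs(X0 - X1)))          # ~0: same magnitude spectrum
print(x_zero.max(), x_random.max())     # different waveforms (different crest factors)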
 
Even with a smooth phase slope (or whatever slight imaginary compromise with reality you want to propose), each of those components arrives at your ear at essentially unrelated
The direct sound shouldn't be random, and it is the direct sound field that spawns room interaction. If it were a single reflection we were talking about, it would likely be easy to discern (direction, strength, same response as the direct sound, being related to and yet heard as distinct from the direct sound, etc).

You are using open baffle speakers. The goal of a wide directivity speaker (in my overgeneralised opinion) should be to offer variation in reflections so as not to call attention to anything specific, but rather to offer room ambience.

Such a different result.
 
The direct sound shouldn't be random....

Even simplified to a dual-mono signal, 200 Hz reaches your left ear from the left speaker say 6,322 degrees after leaving the L speaker, the 400 Hz component of the glockenspiel reaches your left ear and so on maybe 12,681 degrees later - a bad match. And then you have L speaker to R ear, R speaker to L ear, and R speaker to R ear, and then the echoes, all of which accumulate yet different numbers of degrees.

But it might sound great whatever the theory of the first post might say.

Or am I not understanding something?

B.
 
And yet we can still discern a group delay of a millisecond or so. What you describe is a natural delay process even for an actual glockenspiel.

I wish I could leave it at that but there is one more issue and that's where it is being considered. Eg if we are talking about the creation of a direct sound wavefront around a crossover, the relative phase of the individual drivers is important on many levels. Otherwise the response can be off, the response to power ratio can be off, the reflections' tonal balance can be incongruent with the direct sound due to off-axis variations.
 
... 200 Hz reaches your left ear from the left speaker say 6,322 degrees after leaving the L speaker, the 400 Hz component of the glockenspiel reaches your left ear and so on maybe 12,681 degrees later

That describes a linear phase response - which, for the case where the magnitude response is unchanged, is a pure time delay. In essence, higher frequencies need to be delayed by more wavelengths than lower ones because they have a shorter period. So a pure delay requires a phase shift that increases linearly with frequency - hence the term "linear phase".
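To make that concrete (a tiny sketch, assuming a 3 m path purely for illustration): for a pure acoustic delay, the phase each partial accumulates is 360·f·d/c degrees, i.e. exactly proportional to frequency - which is all "linear phase" means.

Code:
c = 343.0                                  # speed of sound, m/s
d = 3.0                                    # assumed speaker-to-ear distance (illustrative)
for f in (200, 400, 600, 1050):            # the glockenspiel partials from the example
    print(f, round(360.0 * f * d / c, 1))  # 629.7, 1259.5, 1889.2, 3306.1 degrees

Every partial is shifted by the same ~8.7 ms, so the waveform - and hence the transient - is unchanged.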

And then you have L speaker to R ear, R speaker to L ear, and R speaker to R ear

For stereo (which is perfectly accurate at low frequencies), that is exactly what you require. I have posted about the common misconceptions in stereo elsewhere on this forum.
 
I wish I could leave it at that but there is one more issue and that's where it is being considered. Eg if we are talking about the creation of a direct sound wavefront around a crossover, the relative phase of the individual drivers is important on many levels. Otherwise the response can be off, the response to power ratio can be off, the reflections' tonal balance can be incongruent with the direct sound due to off-axis variations.

This is much of what I have been trying to say. Phase linearisation is not a cure for all ills - and in certain cases can make matters audibly worse than they were beforehand. Crossovers are one such area - and not just where there are significant time delays to compensate. But for the low frequency roll-off, there is no such problem.

The audibility of relative phases and delays of spectral components is exactly what I referred to several posts up this thread. We need to relate measures in one frequency band to those in another. Hence we might end up in the bispectral world if we hope to make some measure of the audibility of transient phenomena.
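For the curious, a crude direct estimator is only a few lines (purely a sketch; segment length and normalisation are arbitrary here). The bispectrum averages X(f1)·X(f2)·conj(X(f1+f2)) over segments, so it only stays non-zero when the phases of two components and their sum frequency are consistently related - exactly the sort of cross-band phase relationship in question.

Code:
import numpy as np

def bispectrum(x, nfft=1024):
    """Crude direct bispectrum estimate over non-overlapping segments."""
    segs = [x[i:i + nfft] for i in range(0, len(x) - nfft + 1, nfft)]
    k = nfft // 4                          # keep only the low-frequency quadrant
    idx = np.arange(k)
    B = np.zeros((k, k), dtype=complex)
    for s in segs:
        X = np.fft.fft(s * np.hanning(nfft))
        # B[i, j] accumulates X(f_i) * X(f_j) * conj(X(f_i + f_j))
        B += np.outer(X[idx], X[idx]) * np.conj(X[idx[:, None] + idx[None, :]])
    return B / len(segs)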
 
....What you describe is a natural delay process even for an actual glockenspiel.

I wish I could leave it at that but there is one more issue and that's where it is being considered. Eg if we are talking about the creation of a direct sound wavefront around a crossover, the relative phase of the individual drivers is important on many levels. Otherwise the response can be off, the response to power ratio can be off, the reflections' tonal balance can be incongruent with the direct sound due to off-axis variations.


Yes, what I want to describe really is the real world. Is that erroneous?

Maybe our hearing doesn't care much about phase and math truths but assigns an identity to freq components arriving within the same time slot and seemingly (by whatever perceptual mechanism) coming from the same object.

And yet we can still discern a group delay of a millisecond or so...

Ignoring the wiggle-room "or so....", there are hearing phenomena that can be observed in lab tests and/or with instantaneous A-B comparison that don't matter in real life (examples are polarity reversal and lateral localization of a 100 Hz tone). What is the test set-up that reveals that "millisecond" sensitivity?

B.
 