Explanation of Time Aligned and Linear-Phase?

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Benjamin: offset is literally *the* main criterion for how well a linear-phase crossover works, and in all three dimensions.

For example, if any number of drivers were infinitely small and coincident, you could apply a crossover of unlimited steepness and get a perfect transient at every angle.

Since that isn't physically possible, keep the offset as small as possible.
 
The decision process is really pretty simple. Nobody would be cobbling together a system today without DSP in the middle, unless they couldn't justify the modest extra cost*. Once you have a DSP unit, you can adjust things all you want and electrically move any of your drivers back a cm or two or some meters.
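For anyone curious what "electrically moving a driver back" amounts to numerically, here is a minimal sketch. It assumes a speed of sound of about 343 m/s and a hypothetical 48 kHz sample rate; neither figure comes from the post above.

```python
# Sketch: how much DSP delay "moves" a driver back acoustically.
# Assumes c = 343 m/s (approx., room temperature); sample rate is illustrative.

C = 343.0  # speed of sound in air, m/s

def delay_for_offset(offset_m: float, fs: int = 48000) -> tuple[float, int]:
    """Return (delay in ms, delay in whole samples) needed to 'move'
    a driver back by offset_m metres."""
    delay_s = offset_m / C
    return delay_s * 1e3, round(delay_s * fs)

# Moving a tweeter back by 2 cm:
ms, samples = delay_for_offset(0.02)
# Moving a subwoofer back by 3 m:
ms_sub, samples_sub = delay_for_offset(3.0)
```

At 48 kHz a centimetre-scale offset is only a few samples of delay, while metres of offset run into hundreds of samples, which is why a DSP unit handles both cases trivially.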

This thread encompasses alignment ranging from a subwoofer that's 10 feet from the main speakers to the tiny phase adjustments of L and R tweeters relative to one another and to their midrange drivers. It should be pretty self-evident (and even true) to say that the big misalignments matter and the small ones don't.

More specifically, and contrary to the anecdotes about "the day I first heard phase-precise speaker X...": there is no way to make phase correct all the way from the band in the studio to your ears, and it isn't even achievable just from the recording engineer's ears to your ears. Phase issues look compelling in physics theory but not in perceptual experience.

B.
*Unless you have very precise lab gear, there is no way to construct a passive crossover with the theoretical precision some have talked about here. To put that into perspective: when testing and listening to a new set-up with dipoles or open baffles at home, there are times when I'm hard pressed to choose which polarity I prefer, let alone tweak the phase.
 
Well, if that particular holy big day ever comes, you'll notice... nothing.
Any full-range speaker, headphone, or even modestly filtered speaker (like a two-way shelf speaker with a second-order midrange crossover) can already provide that "holy experience".

You should first take your headphones and mix such phase distortions into the signal, to get a feeling for what you're actually dealing with. I did that. I also ran simulations of what happens if the pre-ring cancellation doesn't work out in reality.

Since that test, I approach building my speakers completely differently!

Did you ever try listening to a mono source with one of the two channels wired out of phase?
Did you hear absolute silence?
No? Doesn't that make you think about linear-phase crossovers? :)
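One way to run the headphone experiment described above is to smear only the phase of a signal with a chain of allpass filters, which leaves the magnitude response untouched. Here is a minimal pure-Python sketch; the coefficient and stage count are arbitrary illustrative choices, not anyone's actual test settings.

```python
# Sketch: smear phase (not magnitude) with a cascade of first-order allpass
# sections, the kind of distortion to mix into a headphone listening test.

def allpass_chain(x, a=0.6, stages=8):
    """Apply y[n] = a*x[n] + x[n-1] - a*y[n-1] (a first-order allpass) 'stages' times."""
    y = list(x)
    for _ in range(stages):
        x_prev = 0.0
        y_prev = 0.0
        out = []
        for s in y:
            o = a * s + x_prev - a * y_prev
            out.append(o)
            x_prev, y_prev = s, o
        y = out
    return y

# A unit impulse comes out time-smeared, but the total energy is preserved,
# confirming the filter changed only phase, not magnitude:
impulse = [1.0] + [0.0] * 63
smeared = allpass_chain(impulse)
energy_in = sum(v * v for v in impulse)
energy_out = sum(v * v for v in smeared)
```

Feeding real audio through such a chain and A/B-ing against the original is a cheap way to hear how much (or how little) pure phase smearing matters.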
 
Racing around the corner here...
Did the headphones make you feel the music, as well as hear it?

As long as our ears are spaced apart and you're listening in a room, why would you expect the wave fronts to cancel each other out? That only happens under anechoic conditions, at a single spot equidistant from both sources. That's not where my ears are, nor do I live in an anechoic space.

Of course we don't get to control every part of the recording chain. However, there isn't much wrong with aiming for more precision in time, as long as we keep in mind that the room plays a large part in the perception of it all. We can't stop at the speakers alone, and we can't simply DSP out room effects by straightening phase either.

Phase does play a role in our perception, though not nearly as large a role as frequency response (not by a long shot). To get any benefit, you have to involve the listening space as well as the speakers. Is it worth it? Try it and decide for yourself.

Without the complete picture of what was done and how, it's a wild guess what each personal experience is actually based on. I'm glad I decided to test this for myself and would recommend it to anyone who is curious enough.
It wasn't all that easy, but it sure was a fun trip. It didn't do what I thought it would, and I learned a lot in the process. I won't dismiss its benefits either.

I ended up preferring phase that follows the band-pass behaviour of my speakers, with a bandwidth of roughly 20 Hz to 18 kHz. Most of the benefit is in sounds below ~2 kHz. The parts of music we feel help in this process; we listen with more than our ears alone.

Don't take my word for it, try it for yourself. Don't expect it to be an easy test.
 
Data point

However phase does play a role in our perception.... I'm glad I decided to test this for myself and would recommend that to anyone that is curious enough.

Here's a data point or maybe I should say an absolutely worst-case test from a few years ago.

A Klipschorn, a true folded corner horn, sits a few feet behind dipole ESL panels. Bad enough time misalignment? Not yet. Don't forget, you need to add another 15 feet or so for the path through the horn itself.*

With my Behringer DCX2496, I "moved" the ESL panels back to the corner horn in time. I played my favourite test music, such as Holst's band suites, which I've played a great many times at home, at audio shows, and everywhere I could.

Did it make a difference? You bet: I could sense greater tightness in the bass. Too bad it was only a tiny, barely perceptible difference, and that was on music I knew very well.

B.
*It's easy to figure the phase error from the wavelength of, say, a 120 Hz tone... if you really knew what the EXACT acoustic distance was.
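As a worked example of that footnote, here is a small sketch of the phase error at 120 Hz for a given uncertainty in acoustic path length (assuming c ≈ 343 m/s; the 30 cm figure is an illustrative guess, not a measured value):

```python
# Sketch: phase error at a given frequency caused by an uncertain path length.
# Assumes c = 343 m/s.

C = 343.0

def phase_error_deg(path_diff_m: float, freq_hz: float = 120.0) -> float:
    """Phase error in degrees for a given path-length difference."""
    wavelength = C / freq_hz          # about 2.86 m at 120 Hz
    return 360.0 * path_diff_m / wavelength

# Being wrong by 30 cm on the "exact" acoustic distance:
err = phase_error_deg(0.30)
```

Even a 30 cm error in the assumed acoustic distance is already tens of degrees of phase at 120 Hz, which is the footnote's point about needing the exact distance.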
 
Yes, yes, yes. I've been working on phase excessively for years.
I don't want to argue about yes or no.
What I meant is, it's not a "whoa, resolution" experience.
Whoever thinks it sounds more natural, gives a wider stage, or whatever, didn't make a proper A/B comparison.
You can't hear it in strings, voices, or synths, and especially not in classical music.
It's a pure impulse-smearing effect that appears in, for example, guitar, drums, and percussive noises.
You need to be very attentive, and you need a reference. The smearing seems to slightly extend the reverb, creates multiple impulses, and makes a high and low two-tone separate. It's very subtle.
It gets worse not only with steep filters, but especially with many filters, and most of all with low crossover frequencies.
Subwoofers are the worst in this respect.

But what I was trying to point at is the offset. Listening to mono with left and right out of phase, of course there is no silence. And one has to be aware that linear phase comes with pre-ringing that is supposed to be cancelled out by the two drivers ringing out of phase as they cross.
There is no free lunch.
There is no non-minimum-phase behavior without pre-ringing.
And since human hearing masks mainly what follows a sound (decay masking), the pre-ringing is crucial in a room, and so is the positional offset relative to the wavelength, which defines how complete the cancellation is.
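The pre-ringing under discussion is easy to demonstrate: any linear-phase (symmetric) FIR lowpass rings before its main peak, and by symmetry the pre-ring exactly mirrors the post-ring. A small sketch, with arbitrary filter length and cutoff chosen for illustration:

```python
# Sketch: a linear-phase FIR lowpass necessarily rings *before* its main peak.
import math

def linphase_lowpass(num_taps: int, cutoff: float):
    """Hann-windowed sinc lowpass; cutoff as a fraction of the sample rate."""
    mid = (num_taps - 1) / 2
    taps = []
    for n in range(num_taps):
        t = n - mid
        sinc = 2 * cutoff if t == 0 else math.sin(2 * math.pi * cutoff * t) / (math.pi * t)
        window = 0.5 - 0.5 * math.cos(2 * math.pi * n / (num_taps - 1))
        taps.append(sinc * window)
    return taps

h = linphase_lowpass(101, 0.05)          # 101 taps, cutoff at 0.05 * fs
peak = max(range(len(h)), key=lambda i: abs(h[i]))
pre_energy = sum(v * v for v in h[:peak])     # energy arriving BEFORE the peak
post_energy = sum(v * v for v in h[peak + 1:])
# Symmetric (linear-phase) taps: pre-ring energy equals post-ring energy.
```

Making the filter longer or steeper only stretches that pre-ring out further in time, which matches the observation above that steep, low-frequency linear-phase crossovers are the worst offenders.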
 
Drums are crisper, but subtly. That's it. The rest is the same.
The Klipschorn's bass bin is about 6 ms behind the tweeter, and about 4.4 ms behind the midrange driver. See the settings here done by Greg Oshiro.

If you were referring to the Khorns being physically separated from the dipole panels by 15 feet relative to the listening position, then my apologies for misreading what you said.

In terms of how the difference should sound, assuming you're talking about correcting the bass-bin delay relative to the midrange and tweeter: the timbre shift should be pretty dramatic (more neutral), with transients a little crisper, though not as pronounced as the timbre shift. The midrange imaging should be much more coalesced in the upper midrange and lower treble.

If you didn't get a big timbre shift, I recommend very-near-field absorption around the midrange horn mouth (about 1/2 to 1 metre radially), across the top of the top hat, and on the floor in front of the Khorn (about 1-2 metres). That should give a big improvement in centre phantom-image performance.

Chris
 
Currently, my demastering total is about 11,000 music tracks. The reason I mentioned the demastering method above is that it works, and works quite well.

Chris
Interesting. The frequency curve on that page showing the result after de-mastering (or re-EQ, whichever term applies) is very similar to the type of equalization I've liked to apply ever since I was a teen: steadily boosting and then flattening the bass below 70 Hz, and cutting the treble with a shelving response from 4 kHz upwards. It's always system dependent, but my current system otherwise measures pretty flat from 25 Hz to 15 kHz, and the EQ still improves listening to pop, rock, etc.
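For readers who want to try that style of shelving EQ, here is a sketch using the widely published RBJ cookbook low-shelf biquad. The 70 Hz corner matches the description above, while the +4 dB gain and 48 kHz sample rate are illustrative guesses, not the poster's settings.

```python
# Sketch: RBJ-cookbook low-shelf biquad, the usual building block for the
# "boost bass below ~70 Hz" style of EQ described above.
import math

def low_shelf(fs, f0, gain_db, S=1.0):
    """Return normalized biquad coefficients (b0, b1, b2, a1, a2) for an RBJ low shelf."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / 2 * math.sqrt((A + 1 / A) * (1 / S - 1) + 2)
    cosw = math.cos(w0)
    b0 = A * ((A + 1) - (A - 1) * cosw + 2 * math.sqrt(A) * alpha)
    b1 = 2 * A * ((A - 1) - (A + 1) * cosw)
    b2 = A * ((A + 1) - (A - 1) * cosw - 2 * math.sqrt(A) * alpha)
    a0 = (A + 1) + (A - 1) * cosw + 2 * math.sqrt(A) * alpha
    a1 = -2 * ((A - 1) + (A + 1) * cosw)
    a2 = (A + 1) + (A - 1) * cosw - 2 * math.sqrt(A) * alpha
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

b0, b1, b2, a1, a2 = low_shelf(fs=48000, f0=70, gain_db=4.0)
dc_gain = (b0 + b1 + b2) / (1 + a1 + a2)   # linear gain at DC, should be 10**(4/20)
```

A matching high-shelf cut at 4 kHz uses the companion high-shelf formulas from the same cookbook; together the two biquads reproduce the "more bass, less treble" tilt described in the post.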

It seems the issue is with the way music is mastered compared to film. Natalie Merchant albums, for instance, required no bass or treble EQ on my system. I'd like to hear back from you regarding the "Ophelia" album if you have it on hand. Thanks.
 
Yes, it's one of my most played albums after demastering EQ and fixing clipped peaks (removing induced odd harmonics). It's pretty easy to see where the music track issues are when you look at individual cumulative frequency responses, spectrograms and amplitude/time traces (looking for clipping).
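The clipping inspection mentioned above can be approximated very simply: scan the amplitude/time trace for runs of consecutive samples pinned near full scale, which is the usual signature of hard clipping. A sketch, with an illustrative threshold and minimum run length:

```python
# Sketch: flag runs of consecutive samples stuck near full scale, the typical
# amplitude/time signature of clipping. Threshold and run length are examples.
import math

def find_clipped_runs(samples, threshold=0.999, min_run=3):
    """Return (start, length) for each run of >= min_run samples with |x| >= threshold."""
    runs = []
    start = None
    for i, s in enumerate(samples):
        if abs(s) >= threshold:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_run:
                runs.append((start, i - start))
            start = None
    if start is not None and len(samples) - start >= min_run:
        runs.append((start, len(samples) - start))
    return runs

# A sine that was pushed past full scale and hard-limited:
clipped = [max(-1.0, min(1.0, 1.5 * math.sin(2 * math.pi * i / 64))) for i in range(128)]
runs = find_clipped_runs(clipped)   # four flat-topped regions, one per half-cycle
```

Real declipping tools then reconstruct the flattened peaks; this sketch only locates them, which is the "looking for clipping" step described in the post.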

And you're right about having a flat-FR system. Most of the issues I've run into when others listen to demastered tracks I've done are that they apparently have non-flat systems with too much bass (especially midbass) and "house curves" (on-axis attenuation of treble), apparently trying to use a "one size fits all" approach to demastering. Visualization of time/amplitude and time/frequency of each track is key to getting this right, in addition to careful listening during iterations, I've found.

If you want to hear a demastered version of Ophelia, send me a PM with your Google+, gmail, etc. address.

Chris
 
...There is more to mastering than applying EQ, and demastering would have to alter more than just EQ in virtually all cases to literally constitute demastering.

Also, the website link for more information on the process appears to misattribute the motivation for some things that mastering engineers are hired to do, such as increasing apparent loudness, as though those things are done with disregard for the original purity of the natural sound. The fact is, most mastering engineers don't like the loudness wars any more than you do. These days, it's artists and record companies that insist on loud masters. If one engineer can't or won't do it, they will find one who will.

In addition, often artists and record labels don't necessarily value the purity of natural sound over what they consider to be good, commercial sound that can sell lots of records. Of course, if they think natural sound will sell the most records, then fine, natural is good.
There are multiple operations that I usually wind up performing on each track during my demastering work to undo as much as possible of what was done to the recordings during mastering. (Generally, the more recent the recording, the more operations have to be applied.) Undoing mastering EQ is just one of those operations, but it is the operation with the most positive benefit sonically. My ears tell me that the effort expended to do this, even partially, far outweighs the perceived residual mastering, mixing, and recording issues that remain. The idea that "you shouldn't try to undo something undesirable because you can't completely undo it" is actually a fool's choice.

Note that nothing can really undo mixing or recording mistakes--which is why the concept of mastering first was proposed in the early 1970s: to undo recording and mixing errors, then to prepare the tracks for transfer to the extremely limiting medium called "phonograph". That concept--mastering for increased "commercialization"--actually doesn't work for me, but I'm into high fidelity sound reproduction, not creativity after the music has been recorded. YMMV.

As for the rest of your comments: in reality, I don't care what the market forces are. Nowadays it's trivial, and free of extra charge, to place the original mixdown tracks on a web site alongside the "commercialized" tracks for sale; then it's no longer a one-size-fits-all proposition. It's entrenched corporate culture blocking that option that's apparently the problem nowadays.

As far as "making the music sound better by adding distortion and noise", adding "euphonic" distortion, I believe this article--recently resurfaced by PMA--helps to coalesce the thinking on that subject: Euphonic Distortion: Naughty but Nice? | Stereophile.com

Chris
 
As far as "making the music sound better by adding distortion and noise", adding "euphonic" distortion, I believe this article--recently resurfaced by PMA--helps to coalesce the thinking on that subject: Euphonic Distortion: Naughty but Nice? | Stereophile.com

Chris

I've never heard of anyone using a program like that to add distortion to music recordings in order to try to make them sound better. It's not surprising that the author of the article didn't like it.

Most of the distortion in pop music recordings comes from the selective use of vintage equipment, mic preamps, tube compressors, EQs, tape recorders, etc., all of which have some audible distortion and noise compared to the cleanest and most pristine equipment available today. Because the distortion mechanisms in such equipment are much more complex than a static polynomial transfer function model used by the distortion program in the article, the program is likely of little use for understanding why distortion is added to commercial music to make it sound better to many or most people. Understood, not everybody likes even a little distortion. Particularly, many people including many mastering engineers don't like the loudness wars.
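For reference, the "static polynomial transfer function" model mentioned above is just a memoryless waveshaper. This sketch applies a cubic nonlinearity to a sine and reads the harmonic levels off a DFT, showing the 3rd harmonic such a term introduces; the coefficient k is an arbitrary example value, not taken from the article.

```python
# Sketch: a static polynomial waveshaper y = x + k*x**3 applied to a sine,
# with harmonic levels read from a DFT. An odd polynomial adds only odd harmonics.
import cmath, math

N = 1024
k = 0.1
x = [math.sin(2 * math.pi * 8 * n / N) for n in range(N)]   # 8 cycles in the window
y = [s + k * s ** 3 for s in x]

def bin_mag(signal, b):
    """Single-sided, amplitude-normalized magnitude of DFT bin b."""
    acc = sum(signal[n] * cmath.exp(-2j * math.pi * b * n / N) for n in range(N))
    return 2 * abs(acc) / N

h1 = bin_mag(y, 8)    # fundamental (boosted slightly, since sin^3 contains sin)
h2 = bin_mag(y, 16)   # 2nd harmonic: absent for an odd (symmetric) polynomial
h3 = bin_mag(y, 24)   # 3rd harmonic: introduced by the x**3 term
```

The point made in the post stands: real vintage gear has level-dependent, frequency-dependent, and memory effects that a fixed polynomial like this cannot capture, which is why such a program is a poor model of "euphonic" hardware distortion.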

Regarding the term demastering, it still sounds kind of pretentious to me, but I guess it's your right to call it that. Maybe you could even trademark it.

The processing you have described seems to me more like a type of re-mastering, altering the sound of recordings to suit your own tastes, and perhaps the tastes of other people as well. I don't know whether, if more people knew about it, there would be great demand for what you do. But in any case, no objection whatsoever to such processing here.
 