In my bizarre lava cave listening room phase is surprisingly good at the listening position. There must be room modes, but they don't seem to mess up phase in the bass like other rooms I've measured. Will post some example plots tomorrow. It's nice not having to fight the room.
No parallel walls, and wall surfaces are irregular and jagged like anechoic treatments?
Surely the distance between the modes is also a factor. What I do not understand is how a system with minimum-phase modes all of a sudden becomes non-minimum-phase. That just isn't how physics works; things don't happen all of a sudden (except in quantum mechanics).
The more I think about it, the more I suspect I was wrong. Room modes are not like multi-way crossovers, because they don't have lowpass and highpass character. In fact they have peaking and notching character.
Maybe the minimum-phase behaviour is not the problem, but the generation of the inverse filters is. When done manually, you can't see the filter function of the individual modes when they strongly overlap. It is hard to create inverse filters that completely equalize amplitude and time together. I have never seen anyone do this manually with IIR filters on overlapping modes: amplitude response was always good, but the time domain (spectral decay) never was.
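To make the "amplitude and time together" point concrete, here is a minimal sketch (my own illustration, not anyone's actual correction software) showing that the exact IIR inverse of a single minimum-phase mode collapses its ringing as well as its magnitude peak. The sample rate, pole radius, and 50 Hz mode frequency are arbitrary assumptions:

```python
import numpy as np
from scipy.signal import lfilter

fs = 1000                              # Hz; bass-only sample rate for the sketch
r, w0 = 0.995, 2 * np.pi * 50 / fs     # assumed pole radius and mode frequency

# Model one room mode as a minimum-phase resonator: H(z) = 1 / a(z)
a_mode = np.array([1.0, -2 * r * np.cos(w0), r * r])

impulse = np.zeros(2000)
impulse[0] = 1.0
h_mode = lfilter([1.0], a_mode, impulse)   # rings for hundreds of samples

# Exact inverse filter: swap numerator and denominator. Because the mode is
# minimum phase, the inverse is stable, and it corrects the decay (time
# domain) at the same time as the amplitude peak.
h_fixed = lfilter(a_mode, [1.0], h_mode)   # collapses back to a single impulse
```

With two strongly overlapping modes the exact inverse is still just the cascade of the two inverse sections; fitting that by eye from a summed magnitude plot is where the manual approach breaks down.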
But this proves nothing. Maybe no one did it right yet...
A main reason for a non-minimum-phase room response is a null (i.e. one caused by a strong reflection). There is a decent description in http://www.regonaudio.com/Digital Filters Part II.html
I would not say it is hard to equalise a single mode successfully, combating multiple modes across multiple positions is obviously much harder, if not impossible, to do perfectly.
The more I think about it the more I guess that I was wrong.
Wow, I am impressed, not because you admit this but because almost no one around here ever does. I mean seriously, there are people around here who I never believe because they are often wrong and never admit it. So thanks for that!!
So back to the phase audibility question. Thinking about this, I have a problem with it, and it is this: there are a lot of different opinions about what is audible, and many very nice pictures of what can be done with FIR and DSP etc. But the subjective assessments are all personal. I have seen no real scientific tests done.
But add to that the fact that the real experts in the field of perception (Toole, Olive, etc.) never even talk about phase, conduct no tests on it and claim to have better than a 95% correlation to data that completely excludes phase. How does one explain that?
Earl,
My take would be that phase response was not easy to measure when Toole, Olive, etc. were doing their tests on perception, while frequency response was. Equalization has been around for ages, affordable FIR filters for probably only a decade.
In general, correcting frequency response problems with "classic" filters also smooths phase response, so (old) studies showing a preference for smooth frequency response also would generally show a preference for smooth phase response.
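That claim is easy to check numerically: a "classic" minimum-phase peaking EQ and its inverse cancel in both magnitude and phase, so correcting the amplitude error corrects the phase error for free. A small sketch; the frequency, Q, and gain are arbitrary assumptions, and the biquad uses the common Audio EQ Cookbook form:

```python
import numpy as np
from scipy.signal import freqz

fs = 48000
f0, q, gain_db = 100.0, 2.0, 6.0      # assumed response error: +6 dB peak at 100 Hz
A = 10 ** (gain_db / 40)
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * q)

# Minimum-phase peaking biquad (models the response error)
b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])

# The corrective EQ is the inverse: numerator and denominator swapped
# (stable, because the peak is minimum phase)
w, h_err = freqz(b, a, worN=4096, fs=fs)
_, h_eq = freqz(a, b, worN=4096, fs=fs)
h_total = h_err * h_eq

# Magnitude AND phase come out flat together: |H| = 1, angle = 0
```

This is exactly why old preference studies that equalized magnitude with IIR filters were implicitly smoothing phase as well.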
I do not doubt that some people can reliably pick out flat phase from smooth "wrapped" phase after enough exposure to the difference. But from what I have read from those comparing flat-phase FIR against smooth-but-wrapping IIR filters, they were not able to tell much, if any, difference in their (initial) A/B tests.
I bought hardware capable of FIR almost a year ago, but have yet to learn the software to implement it. It is still on my "to do" list, but it just has not seemed too pressing: having heard an FIR-implemented flat phase/frequency response compared to a nearly flat phase response with a more ragged frequency response, it was the ragged frequency response that was far more apparent.
That said, what FIR filters can do for ordinary components was a real ear-opener to me, even though at the time I had no idea how the dead-flat phase response had been achieved.
Art
Phase is audible but only on certain material and different for everyone. Simple recordings of percussive music seem to benefit the most from linear phase. I personally find plucked stringed instruments and percussion the easiest to hear the difference. Acoustic music in general.
This Finnish study is kind of interesting.
https://www.google.com/url?sa=t&sou...ggyMAI&usg=AFQjCNHap8PBdbZKD_TV5IhrvBYu9ceF-g
Art,
Sean Olive has studied the use of room correction software and he concluded (from memory) that they were all different: some improved the sound but some made it worse. Also, the poorer the speaker to begin with, the greater the improvement (not surprisingly).
If you are saying that linear phase lies in the 5% area of potential improvement that JBL admits to not understanding, then I can see that being the case. I can also see that some speakers will be improved significantly by this, others not so much, or not at all. So it's generally a small factor, but could be significant in some cases.
This Finnish study is kind of interesting.
Interesting paper, very reputable authors. Sounds consistent with what I was saying, although not spot-on to the topic. Seems like a small effect that potentially could be significant. That "group delay" was the audible correlate does not surprise me since this is exactly what my own work has found.
Thanks.
I have no idea why it's been ignored for so long. I basically agree with Kees (Kessito) that it brings a tonal difference, especially noticeable in percussive sounds. Grossly exaggerated: a huge time gap would create a "ka-boom" sound where the original input was more like "boom".
FIR and DSP give us more power to correct, but it's not without its own pitfalls. Still, I believe there is something to gain. Many musical instruments in real life get their signature sounds from the harmonics the artist gets out of them. Isn't it our job to make sure those harmonics are kept intact as best we can?
I like to view it as a band that has been playing together for a long time. Catch them at their peak and you see them enjoying themselves, getting the timing cues from their fellow artists, and you see them move along with the music. They can't even help that part. But I've also seen performances that, while they didn't sound bad as such, the players were just going through the motions.
But I'm sure that does not help this discussion much. 🙂 But aside from the things that sound different, how does it make you feel? Or am I the only one stating that 😱. The rhythm and pulse of the music really sucks me into it (my goal).
So I better stick to agreeing with this part of Kessito's post for now, as it is clearer than I could put it myself 😉:
What I have noticed with linear phase in a studio environment is that it really helps in hearing early-reflection patterns from the live room (so, where to put your microphones), and in balancing mixes with percussive instruments (linear phase in the 80-500Hz range makes hand percussion sound "pok" instead of "pak"), but also with things like piano. It makes sense that the more percussive sounds benefit most from linear phase, of course.
What I have noticed with large-scale PA systems that are linear phase in the 250-5000Hz range (the PA world never does linear phase in the sub-lows because of the latency) is that, besides snare drums and percussion, vocal intelligibility really benefits from linear phase, even to the point that you can take speech level down by as much as 2-3dB without losing definition. This is a big deal in the PA world, where 3dB represents a lot of speakers.
What I have also noticed is that a system should ALWAYS be causal, by which I mean that the total wideband system phase response should follow the frequency response. I.e. if your subs drop off at 24dB/octave @ 30Hz, your phase should follow this behavior, so it must also wrap 360 degrees. If you make the HPF response of your system linear phase, and thus not causal anymore, it starts to sound really strange and "wrong". I also score 100% on this in ABX testing.
To learn what to listen for, I can recommend listening to music with hand percussion (congas, bongos, snare drums, etc.)
(I always use Paul Simon, Hearts and Bones, the last 20 seconds of the track)
with 8th-order allpass filters digitally inserted (if you don't have DSP you can process your audio files in a free audio editor such as Audacity). This creates the same phase response as a 16th-order lowpass filter, which is excessive.
You can try the filters first at 250Hz (this is around where the fundamentals of most percussion live); it should be easy to notice that the percussion goes "piew" instead of "pok" because there is severe time-smearing. Once you score 100% on this, you can try 4th-order and 2nd-order allpass filters and different frequencies. Try it on headphones first, and once you hear it, switch to speakers, since your listening environment will make it harder to hear.
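For anyone who wants to generate their own test files, a cascade of 2nd-order allpass sections is easy to build in software. This is my own sketch, not Kees's actual processing chain; the Q value and the Audio-EQ-Cookbook coefficient form are assumptions:

```python
import numpy as np
from scipy.signal import lfilter

def allpass_biquad(f0, fs, q=0.707):
    """2nd-order allpass coefficients (Audio EQ Cookbook form)."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 - alpha, -2 * np.cos(w0), 1 + alpha])
    a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
    return b / a[0], a / a[0]

def apply_allpass(x, f0, fs, order=8):
    """Cascade order//2 identical 2nd-order sections: the magnitude stays
    perfectly flat, but phase rotates heavily (time smearing) around f0."""
    b, a = allpass_biquad(f0, fs)
    for _ in range(order // 2):
        x = lfilter(b, a, x)
    return x
```

Process a track with something like `apply_allpass(samples, 250, 48000, order=8)` and ABX it against the original, then drop to `order=4` and `order=2` as training progresses.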
Disclaimer: sometimes it is better not to hear things, since it can make you enjoy music less...
my 2ct 🙂
Kees
P.S. I would like to know if Kees does feel/notice the improved "vibe" of the music. The rest of you, disregard this post (lol).
From my point of view the biggest advantage of FIR filters is the arbitrary shape of their slope. Subtraction filters (e.g. Horbach-Keele) or asymmetrical slopes are not possible with IIR filters. The possibilities for shaping directivity are much greater with FIR filters, because of their linear-phase character.
Besides that, I did my own tests with equalized phase response on a 3-way speaker (with 48 dB/oct slopes) years ago (using the software Phase Arbitrator). I could not hear any difference. Maybe I needed more training, but since then I don't believe anyone who claims that a corrected phase makes a "big" difference. 😉
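The arbitrary-slope advantage mentioned above is easy to demonstrate: a linear-phase FIR can be fit to any magnitude target, including asymmetric shapes that no standard IIR prototype offers. A sketch; the breakpoint frequencies and tap count are arbitrary assumptions:

```python
import numpy as np
from scipy.signal import firwin2, freqz

fs = 48000
# Arbitrary, asymmetric magnitude target: a gentle -3 dB step between
# 1 and 2 kHz, then a steep cutoff between 4 and 4.5 kHz
freqs = [0, 1000, 2000, 4000, 4500, fs / 2]
gains = [1.0, 1.0, 0.7, 0.7, 0.0, 0.0]
taps = firwin2(1023, freqs, gains, fs=fs)

# Odd tap count + even symmetry => exactly linear phase (constant group delay)
symmetric = np.allclose(taps, taps[::-1])
```

The same symmetry that guarantees linear phase is what lets complementary FIR pairs (e.g. subtraction crossovers) sum perfectly flat.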
Earl,
I have much respect for you and your work. But what I really don't understand is why you don't try this for yourself?
I understand that you want to base your work on scientific evidence, but isn't science also about keeping an open mindset to continue and improve on work that has been done in the past?
I understand that the experience of one person isn't reliable scientific data, but first you have to find out what you are looking for before you can decide what you would like to test on a group, right?
I would be happy to make some audio fragments for you with different phase shifts in it, which you can test with headphones or your own speakers, then you can find out for yourself if you think it is relevant or not. You could point me to your favorite audio test tracks and I could process them for you, including exact description of the processing.
I guess that you have ABX software yourself; if not, I could also supply this for you.
If you draw the conclusion that it is not relevant for you, it is perfectly fine by me. I am just having a hard time understanding that one of the "great minds" in audio (which you are in my opinion), doesn't want to find out if he himself hears it or not, but just wants to base his opinions on data measured on other people.
Kees
Kees, I appreciate your offer and I will take you up on it.
My mind is not closed on this issue and if you think it is then you haven't read what I have been saying. My point is that there are a lot of inconsistencies in the claims and that usually means that something is amiss. I have the utmost respect for people like Toole and they do not seem to be interested in this. It can't be that they are not aware of it.
The only work that I have seen done which is scientific on this issue (previously posted) does not seem to support this as a major effect. So yes, I would have to say that this data does not support your contention. I have seen some data that does, so again, my mind is not made up. But simple subjective assessments by a few people done without controls is never going to convince me of anything.
Why don't I try this? Because I have only ever done subjective work in a laboratory; I do not do this at home. My home system does not lend itself to testing, as it is all custom installed and not set up for experimentation - basically it is a PITA. If I became convinced that there was a solid hypothesis here, I might design an experiment and have Lidia test it at her lab.
My favorite song, which I always use to test systems, is "Blue Train" by Linda Ronstadt - or anything off that CD. I have done many subjective tests with this song in the past. So accurately define what you are going to do and send it to me. I will use headphones, and then, if it is significant, try it over speakers.
This is fantastic.
Earl,
<Removed for brevity>
I would be happy to make some audio fragments for you with different phase shifts in it, which you can test with headphones or your own speakers, then you can find out for yourself if you think it is relevant or not. You could point me to your favorite audio test tracks and I could process them for you, including exact description of the processing.
I guess that you have ABX software yourself, if not I could also supply this for you........
<Removed for brevity>
Kees
Nice. I love to see this type of collaboration.
But add to that the fact that the real experts in the field of perception (Toole, Olive, etc.) never even talk about phase, conduct no tests on it and claim to have better than a 95% correlation to data that completely excludes phase. How does one explain that?
Bigger windmills in view? If phase is audible, it's more subtle than amplitude and other problems. Pick your battles.
My point is that there are a lot of inconsistencies in the claims and that usually means that something is amiss.
How much more inconsistent is it than some other areas? And if the effect is subtle at best, how consistent should we expect it to be?
There have been a few recent posts that I do not find inconsistent with my experience. You know me well enough that if I didn't agree, I'd say so. And at least one person has done blind ABX tests and reported a high level of success.
I have done phase work on signals and tried to hear it on headphones. I failed. But correcting speaker phase has been audible, a little. I do not know why there is a difference.
From my point of view the biggest advantage of FIR filters is the arbitrary shape of their slope. Subtraction filters (e.g. Horbach-Keele) or asymmetrical slopes are not possible with IIR filters.
Just for the record: if you study the theory of spectral analysis (see Marple or Kay), IIR and FIR are two classes of filter functions known as Autoregressive (AR) and Moving Average (MA) (also called feedback and feed-forward, or all-pole and all-zero). In the text that I referenced it is proven that either form of filter can fit an arbitrary transfer function, but not necessarily with the same precision for a given order. Some filters better fit some functions, but either type can do any function.
The most concise are ARMA, with both poles and zeros. These are common and always fit with the least number of terms, which is why they are preferred for adaptive processes in real time. MA are the easiest to derive - just take an FFT and you basically have your coefficients.
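A quick numerical illustration of the AR/MA equivalence described above: a one-pole AR (IIR) filter is matched arbitrarily well by an MA (FIR) filter whose coefficients are simply the truncated impulse response, at the cost of many more terms. The pole value and tap count here are arbitrary assumptions:

```python
import numpy as np
from scipy.signal import lfilter

# AR (IIR) target: H(z) = 1 / (1 - 0.9 z^-1) -- one pole, two coefficients
b_ar, a_ar = [1.0], [1.0, -0.9]

# MA (FIR) equivalent: truncate the impulse response 0.9**n.
# 128 taps leaves a truncation error of only 0.9**128 (about 1.4e-6).
fir = 0.9 ** np.arange(128)

x = np.random.randn(1024)
y_ar = lfilter(b_ar, a_ar, x)
y_ma = np.convolve(x, fir)[:1024]
# Same transfer function to within truncation error: 2 terms vs 128
```

This is the "same function, different precision for a given order" trade-off: the AR form is far more compact here, while a sharp FIR-friendly notch would favor the MA form.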
How much more inconsistent is it than some other areas? And if the effect is subtle at best, how consistent should we expect it to be?
There have been a few recent posts that I do not find inconsistent with my experience. You know me well enough that if I didn't agree, I'd say so. And at least one person has done blind ABX tests and reported a high level of success.
But isn't subtlety the point - at least that's my point. If this is a small effect then it's no wonder that it hasn't garnered much attention and is so inconsistent. But listening to the claims, one would be led to believe that if you don't do this it's not worth listening to the system. Perhaps this is just the usual puffery and we should look past it, but to me, significance is the issue. How significant is it? I asked about the ABX test and didn't get any specifics on what was tested, so I don't know what to make of that.
As I said, if I can be convinced that this is a significant effect and not just a wild goose chase, then I will set up an experiment, since Lidia and I are looking for something to do (I'm bored). (Actually my son wants to do a "sound experiment". He did an experiment last year on a new way to measure the speed of sound, which was simple, accurate to 2% error, and won a big award for it. So he wants to do a sound experiment again. Dad helped a little. 🙄)
If it's an effect that takes hundreds of subjects to detect then I am not going to do it. Instead we will do a test for the effect of very early side reflections on image stability. I'm pretty sure that won't end up with a Null result.
Some modeled visuals for the phase talk : )
All three examples have a system bandwidth of 20-20kHz and BW2 stop-band slopes.
First is a two-way minimum-phase system with an LR4 XO at 1kHz.
Second is a minimum-phase full-range system.
Third is a linear-phase system.
Personally I enjoy a 2-way system with a 3" full-ranger (10F/8424) combined with a 10" woofer (SPH250KE), and run it linear phase all the way to that system's stop bands, as in the third example.
Also attached is a zip folder with a dxo file to open in the free XSim http://www.diyaudio.com/forums/multi-way/259865-xsim-free-crossover-designer.html (a bit of fun and action with all the windows open when one switches between the three drivers).