No? You don't want to see measurements?
No, not with a null result!
Very wrong reasoning! Any, and I mean ANY, crossover designer should and MUST include equalization in the crossover (passive or active) when designing the end product. Even the most coveted, expensive high-end midranges need some equalization (in the crossover), at least to implement the inevitable Baffle Step Correction.
What you haven't done is compare the differences among these drivers without any EQ. That is where the really valuable information lies, and where significant performance differences are likely to show up based on price and quality. Not in what you have done in a very narrow test.
Last edited:
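Since Baffle Step Correction comes up here: the transition frequency can be ballparked with a common rule of thumb. A minimal sketch, assuming the usual f3 ≈ 115/width approximation for a flat rectangular baffle (the constant and the example width are illustrative, not from this thread):

```python
def baffle_step_f3(baffle_width_m):
    """Rough -3 dB point of the baffle step for a flat rectangular
    baffle, using the common rule of thumb f3 ~= 115 / width (metres).
    The true transition also depends on edge radii and driver placement."""
    return 115.0 / baffle_width_m

def bsc_shelf_depth_db():
    """The full baffle step is a ~6 dB rise (half-space to full-space
    transition), so a correction shelf of up to -6 dB above f3 is typical."""
    return 6.0

# Hypothetical example: a 25 cm wide baffle
print(baffle_step_f3(0.25))  # -> 460.0 (Hz)
```

In practice the correction is rarely the full 6 dB, since room boundaries restore some of the lost low-frequency energy.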
No, not with a null result!
Fair enough. Many would, why do you care?
Many would, why do you care?
I just suspected for a long time what Jon seems to have confirmed, that's all!
The only reason to have measurements is to get a sense of how inept the human sense of hearing is. I'd guess it's inversely proportional to the level of denial 😀
...here's a relevant posting on another website by Floyd Toole.
Haha! The M2 they talk about in that thread is a speaker I am familiar with. Have a pair right here, along with the digitally EQ'ed Crown class D amplifiers that are part of the system. The speakers sound pretty good with the EQ engaged, but the digital amps sound AWFUL.
Therefore, I am now using Sound Lab electrostats driven by a Benchmark AHB2. Much better than the M2!
However, the DACs available here most of the time are a Benchmark DAC-3 and a Topping D90. The Sound Lab speakers sound much better with the D90 than they do with the DAC-3 (I haven't tried both with the M2).
Easy to hear the difference blind, too!
One more thing about the M2: experiments here suggest the M2 could sound *much* better with better-quality DACs and power amps, and possibly with higher-quality DSP.
Your conclusion is very misleading, however, for those of us who don't have the ability, the means, or the interest to achieve that very tight level of EQ. Most people here don't use DSP, which is essential to replicate your results.
What you haven't done is compare the differences among these drivers without any EQ. That is where the really valuable information lies, and where significant performance differences are likely to show up based on price and quality. Not in what you have done in a very narrow test.
I understand your point, classicfan. It's an interesting perspective, no doubt. You're being practical. I get it.
Well, let's bring back the sports car analogy.
You want to test different cars on a race track. You want to compare them, ultimately, to determine which one is better/faster.
Now, hypothetically, all sports cars are equipped with normal all-season tires.
Let's say the all-season tire is the non-EQ'd driver. It works; not perfectly, but it works in most conditions. It's also cheap, easy to drive on, quiet (no pun intended) and practical.
Let's say the extreme summer or semi-track tire is the EQ'd driver. It's much more appropriate for a track and will show the real capabilities of each car, but it costs more and it's not as easy to live with.
So basically, the semi-track tire is the better tool, a tool that exists and is available to sports car enthusiasts. It makes ANY sports car better and faster, and the numbers can prove it (lap times), the same way an EQ can prove it (frequency response).
So basically, you are asking me to test audio components, transducers, speakers... without using the proper, modern, available tools; the equivalent of refusing semi-track tires on a race track.
Also, and more importantly, the whole idea of this blind test was to see what the results would be ONCE EQ'd.
I see no reason to conduct an IDENTIFICATION blind test... with uncorrected drivers. And what about SPL matching? How could we even do that? On each driver's peak?
See? It doesn't make sense at all.
EQ correction was mandatory for that test.
A hypothetical non-EQ test would have been entirely pointless, UNLESS, of course, nobody had identified any of the drivers... and in that case, we should all quit this hobby and take up collecting butterflies.
To sum up this whole thread:
A guy just parked his Ferrari 488 Pista on the side of a race track.
For some reason, the car is on all-season tires.
All the other drivers notice it...
''Why don't you use better tires for the track?'' they ask.
- I have a Ferrari 488 Pista, he replies.
''Yes, but you won't be able to get the most out of it, not even close.''
- But I paid a fortune for my Ferrari!
''OK, but you need good tires, even on the road. You're wasting your car's potential with those tires.''
Replace semi-track tires with winter tires if you want to picture a colder, more dramatic scenario. Even if you have the best 4x4 SUV in the whole world, with worn-out summer tires and stuck in 50 cm of snow and ice, you won't go far. You need traction.
The same applies to all, and I mean ALL, non-EQ'd drivers... You simply don't get the FR you could really get. No exotic enclosure, no paint on the cone, not even the most expensive driver will have an FR that doesn't need correction. That doesn't exist.
Why do you refuse to show measurements?
😕
Scott, I don't have measurements. The test was made over 2 years ago, in an audio lab that has since been dismantled. Measurements were done at the time just to EQ the drivers, and it was only ''preliminary EQ'' (see? I don't use the term ''sloppy'' anymore, I learned!) 😛
I don't see how the measurements would be relevant for you to judge that test.
Measurements would have been relevant IF, and only IF, at least one participant had succeeded in identifying at least one pair of drivers.
Because then one could argue that the EQ, or the room, or anything else was set up in a way that made it easy to spot said drivers... not the other way around.
Basically, what I am saying is: any lack of EQ precision, or any acoustic flaw of the room, would have HELPED positive identification; on the contrary, a more perfect EQ would have made things worse... But how much worse? We already had ZERO identifications!
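For readers weighing the ''zero identifications'' claim: under pure guessing, the number of correct answers needed to demonstrate identification follows from the binomial distribution. A minimal sketch (the 16-trial, 50% chance example is hypothetical, not the actual test protocol):

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of getting k or more
    correct answers out of n trials by guessing alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def min_correct_for_significance(n, p=0.5, alpha=0.05):
    """Smallest number of correct answers that lets a participant reject
    the guessing hypothesis at significance level alpha."""
    for k in range(n + 1):
        if p_at_least(k, n, p) < alpha:
            return k
    return None

# Hypothetical example: 16 same/different trials at 50% guessing chance
print(min_correct_for_significance(16))  # -> 12
```

In other words, even 11 correct out of 16 would not be statistically distinguishable from guessing at the usual 5% level, which is why isolated lucky hits don't count as identification.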
Poor EQ accuracy could have made the best speaker sound as bad as the worst speaker, could it not? Why does a lack of EQ accuracy have to work only in the other direction, towards making speakers sound more different from each other?
Weird that you invested all that time and effort but didn't keep the measurements; sloppy, one might say.
Please explain in detail the methodology, the number of participants, and the detailed results.
Not an unreasonable request. I think measurements of the EQ'd speakers would be included somewhere in that "details/detailed", perhaps?
All the people who have passed blind tests listening to DACs/amps/speakers/single op-amps have been completely ignored.
Has their methodology been examined/documented and the same judgement applied as is asked for here?
I don't know if the number of people who have done that are less than 5% or even less than 1% of the population, but I do know for a fact that such things have happened and continue to happen.
I'm still going with .0001% or less if you are talking about the whole population.
Also, please note something important about that test, regarding the null hypothesis.
It's always related to a threshold.
In that test, thresholds were found: the power response (beaming/directivity) threshold was not found, but the SPL and bandwidth thresholds were.
As explained on earlier pages, the participants were sensitive to SPL differentials of 0.3 dB minimum, but more usually in the 0.5-1.0 dB range.
As for bandwidth, people had a very hard time distinguishing a 6600 Hz upper crossover point from a 7200 Hz one (about 1/8 octave ''removed''; at 48 dB/octave that's roughly a 6 dB drop), and only started to consistently spot a 1/2 octave difference (a 24 dB drop).
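The octave spacing and slope arithmetic in that comparison can be checked directly. A minimal sketch (note that log2(7200/6600) works out to about 1/8 octave, i.e. roughly 6 dB of attenuation at 48 dB/octave, while a full half octave gives 24 dB):

```python
from math import log2

def octaves_between(f1_hz, f2_hz):
    """Distance between two frequencies, measured in octaves."""
    return log2(f2_hz / f1_hz)

def slope_drop_db(f1_hz, f2_hz, slope_db_per_octave):
    """Attenuation accumulated over that frequency span by a filter with
    the given slope (e.g. a 48 dB/octave crossover)."""
    return octaves_between(f1_hz, f2_hz) * slope_db_per_octave

print(round(octaves_between(6600, 7200), 3))    # -> 0.126
print(round(slope_drop_db(6600, 7200, 48), 1))  # -> 6.0
print(slope_drop_db(5091, 7200, 48) > 20)       # ~1/2 octave below 7200
```

The half-octave figure is exact by construction: 0.5 octave times 48 dB/octave is 24 dB, matching the point where listeners became consistent.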
And of course, in these post-test trials... we chose only music excerpts with enough musical material in those frequencies.
So even though this was not that blind test's main goal, we did observe thresholds.
I'll leave the pleasure of making more thorough tests of those thresholds to others, but they exist, obviously.
The same as when we found the thresholds with digital music formats:
160 kbps for MP3 encoding
96 kbps for AAC encoding
What that meant, basically, is that from a 192 kbps MP3 encoding upward you're ''safe'': you wouldn't be able to distinguish it from the original uncompressed file.
Same for a 128 kbps AAC encode (Apple).
So, bottom line, thresholds are ''fun'' to find. They make these blind tests less ''nul'' (pardon my French), but they are often hard to reach because we tend to overestimate our hearing capacities. This has happened every single time I organized a blind test: digital music formats, then the DACs, then the mid drivers...
In the DAC test, however, we couldn't find any threshold, as we couldn't find a converter ''bad'' enough to produce audible differences.
Poor EQ accuracy could have made the best speaker sound as bad as the worst speaker, could it not? Why does a lack of EQ accuracy have to work only in the other direction, towards making speakers sound more different from each other?
Mark, you do understand it was an identification blind test, don't you?
There was no ''sounds bad/sounds good'' in the equation.
Weird that you invested all that time and effort but didn't keep the measurements; sloppy, one might say.
Not an unreasonable request. I think measurements of the EQ'd speakers would be included somewhere in that "details/detailed", perhaps?
Did you read my explanation?
In that particular blind test, the measurements of the drivers were irrelevant.
The most important measurements, the ones relevant to the test, were provided in post #1:
Noise floor: 29 dB C-weighted
RT60: 350 ms
The noise floor was low.
And one could argue the RT60 was a little high; but on the other hand, a very low RT60 means the power response has less impact (fewer wall/ceiling/floor reflections) and therefore gives fewer chances to spot differences, while a very high RT60 defeats the idea of a ''real life'' hi-fi environment, since sound systems are usually not installed in bathrooms.
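For context on the RT60 figure, Sabine's classic formula relates reverberation time to room volume and total absorption. A minimal sketch with made-up room values (none of these numbers describe the actual test room):

```python
def rt60_sabine(volume_m3, absorption_sabins):
    """Sabine's formula: RT60 ~= 0.161 * V / A, with V in cubic metres
    and A the total absorption area in metric sabins (m^2 equivalent).
    It is an approximation that works best in fairly diffuse rooms."""
    return 0.161 * volume_m3 / absorption_sabins

# Hypothetical 60 m^3 listening room with ~27.6 sabins of absorption
print(round(rt60_sabine(60, 27.6), 2))  # -> 0.35 (seconds)
```

This illustrates why a 350 ms RT60 is plausible for an ordinary treated listening room: adding absorption lowers RT60 proportionally, while a tiled bathroom (very little absorption) pushes it far higher.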
Has their methodology been examined/documented and the same judgement applied as is asked for here?
There should be a name for the "law" that the level of evidence demanded for a claim is inversely proportional to how well the claim comports with an internet discussion participant's existing opinion.
There should be a name for the "law" that the level of evidence demanded for a claim is inversely proportional to how well the claim comports with an internet discussion participant's existing opinion.
Thanks, this just made it on my "quotes to be quoted" list.
How about "Earl Grey's law of incredulity"? 😀
Poor EQ accuracy could have made the best speaker sound as bad as the worst speaker, could it not?
To answer that question more specifically:
You have to define ''sound as bad''...
Flat response = best, therefore bumpy curve = worst speaker, am I right?
So what you mean is that my ''poor EQ accuracy'' would actually have copied the worst driver's natural FR?
There should be a name for the "law" that the level of evidence demanded for a claim is inversely proportional to how well the claim comports with an internet discussion participant's existing opinion.
Nothing new. It's part of well-studied human biases: confirmation bias, etc. It hasn't been found to always follow a simplistic mathematical law, though.
BLINDTEST: Midrange 360-7200hz, NO audible difference whatsoever.