New thread for blind subjective tests of drivers
http://www.diyaudio.com/forums/full...ind-comparison-2in-4in-drivers-round-3-a.html
Round 3 just started. B80 as well as A7P and A7.3. Some cool 2in drivers too.
You have put a lot of effort into this. Congrats. 🙂
Thanks! Good luck on your testing.
A change of approach in the whole test.
It seems more and more obvious that our controlled BLIND test environment makes it very difficult for some participants to pass the identification step, so instead we will make GROUPS of drivers that (theoretically) have high contrast potential.
Each group = 4 drivers, selected for their differing technologies, sensitivity, Sd, etc...
So these are NOT the drivers we subjectively think sound the best, but rather a combination of drivers that provides contrasting sonic signatures.
1st group that will be tested:
Voxativ AC-1.6
ATC SM75-150
Airborne FR151 paper cone version
Alpair 7.3 eNc
The second round will use a 2nd group of drivers that will probably include one of the Scan-Speaks along with one driver from the first group, so we have a reference.
All will be SPL-matched and EQ'd for flat response, same as originally planned.
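For anyone curious what the SPL-matching step boils down to: you measure each driver's level with the same test signal and apply a make-up gain so all four play equally loud. A minimal sketch (the signals and sample rate here are hypothetical, not the test's actual measurement chain):

```python
import math

def rms(samples):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def match_gain_db(reference, candidate):
    """Gain in dB to apply to `candidate` so its RMS level
    matches `reference` -- the SPL-matching step."""
    return 20.0 * math.log10(rms(reference) / rms(candidate))

# Example: two 1 kHz test tones, the second one 6 dB quieter.
fs = 48000
ref = [1.0 * math.sin(2 * math.pi * 1000 * n / fs) for n in range(fs)]
cand = [0.5 * math.sin(2 * math.pi * 1000 * n / fs) for n in range(fs)]
print(round(match_gain_db(ref, cand), 2))  # ~6.02 dB of make-up gain
```

In practice you would measure at the listening position with a calibrated mic rather than on the raw signals, but the arithmetic is the same.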
So you're saying that people can't hear any difference? Is this with the drivers EQ'd or au naturel?
I believe he is saying people are having trouble telling drivers apart in a blind test. Blind tests are in general only able to statistically tell us that DUT are different. A null result is only applicable to that test.
dave
So you're saying that people can't hear any difference? Is this with the drivers EQ'd or au naturel?
Bottom line: I made an error.
I thought it would be a given, that it would be very easy to tell the drivers apart.
But it's not.
''Au naturel'' but SPL-matched, it's not THAT easy.
SPL-matched and EQ'd is difficult for many people.
You need to focus. You need to ''practice''.
That being said, not all drivers are equal. That explains the selection of drivers we made.
So now the test is oriented much more toward identification than appreciation.
To give you an idea: we could even run a round with 4 copies of the exact same driver, but EQ'd differently, just to put human perception of frequency peaks/dips and SPL into perspective (vs. a change of drivers)...
For example, all the same driver model but:
Driver 1 : EQd flat / 0.0dB
Driver 2 : EQd flat / +1.5dB
Driver 3 : peak 1khz-2khz (+3.0dB in the peak)
Driver 4 : dip 1khz-2khz (-3.0dB in the dip)
Just to be sure: you are trying the drivers without any bass or highs support?
So you listen to music with only the mids playing?
Can you please say (sorry if I make you repeat this) what the parameters of the test are, and what frequency range each mid has to play? (i.e. 300 Hz to 5 kHz)...
Thanks! This is getting very interesting IMO and correlates with what many seem to believe: that what makes a good speaker is definitely not using the ''best'' drivers, but the integration between the LF, mid and HF drivers. The crossover is the heart of a speaker, not the drivers. Of course driver quality is important, but it seems that once you use good-quality drivers in the right application and situation, the XO is definitely the heart.
I'm sure many know that, but still, this test is very interesting and helpful!!!!
what many seem to believe: that what makes a good speaker is definitely not using the ''best'' drivers, but the integration between the LF, mid and HF drivers. The crossover is the heart of a speaker, not the drivers.
It depends on what people think is the bottleneck in their situation. For example, for beginners it is usually very difficult to integrate drivers, so they tend to think that the integration is the key. But once integration is very easy for them, the bottleneck is then the quality of the drivers. For some, it may even be the enclosure.
and correlates with what many seem to believe: that what makes a good speaker is definitely not using the ''best'' drivers, but the integration between the LF, mid and HF drivers. The crossover is the heart of a speaker, not the drivers.
I wouldn't go that far.
In preparing this test, we ran several listening sessions with endless combinations of xovers, music excerpts (styles and lengths) and drivers...
When a participant knows which driver is playing, he has clear preferences almost all the time, and he is quite sure about them. No hesitation, or not much...
But when the very same test is switched to blind, for some reason the same participant is much less affirmative, sometimes even downright confused and unable to tell which is which...
We've had about a half-dozen participants so far, so it's statistically not conclusive at all. But we're already starting to see patterns... Participants can go from confident to not confident (regarding identification/preferences in blind).
It is now much more about the participants' auditory capabilities than the drivers' quality. Maybe that will help us find participants with ''silver and gold ears'' so we can shift the test back to the appreciative road... 🙂
It depends on what people think is the bottleneck in their situation. For example, for beginners it is usually very difficult to integrate drivers, so they tend to think that the integration is the key. But once integration is very easy for them, the bottleneck is then the quality of the drivers. For some, it may even be the enclosure.
My opinion on that is:
Driver selection is the most important thing, but it must go with good integration, and the best integration is possible with an active setup/electronic xover/DSP EQ.
Just to be sure: you are trying the drivers without any bass or highs support?
So you listen to music with only the mids playing?
Can you please say (sorry if I make you repeat this) what the parameters of the test are, and what frequency range each mid has to play? (i.e. 300 Hz to 5 kHz)...
We tried almost every XO combination, from a 200 Hz highpass with no lowpass at all, down to very limited bandwidths such as 1 kHz-3.5 kHz.
What works best with the drivers we are currently testing is 340 Hz-6.7 kHz with 48 dB/oct slopes: very listenable, without any kind of ''misleading'' mid-only sound.
We tried a few rounds with much more limited bandwidth and got the same perceptions/results as with the wider one. There was no case of ''wow, that driver sounds really different like that!''...
On the other hand, anything lower than 340 Hz gets a bit trickier for some drivers, even EQ'd.
And most drivers, once EQ'd, could easily go up to 8-10 kHz, but then again it sounds a bit funny when you don't compensate with lower frequencies... It feels too unbalanced. 340-6700 is excellent.
Anyway, for identification, it's a PLUS to have at least one driver a little bit out of its comfort zone... We might put the Radian back in just to see how a ''misused'' driver performs blind 😉
One VERY weird thing: when you listen to drivers with such limited bandwidth for hours, and then switch back to a ''real'' full-bandwidth system, for the first few seconds it sounds almost bad, or at the very least weird... 😱
As if you had gotten used to the unbalanced sound.
Fortunately, it comes back to normal after a few minutes.
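For reference, 48 dB/oct corresponds to 8th-order filter slopes. A small sketch of the ideal Butterworth magnitude response for a 340 Hz-6.7 kHz band limit (the Butterworth alignment is an assumption here, since the thread doesn't name the actual filter type used):

```python
import math

def butter_mag_db(f, fc, order, kind):
    """Ideal Butterworth magnitude in dB. kind: 'hp' or 'lp'."""
    ratio = (fc / f) if kind == "hp" else (f / fc)
    return -10 * math.log10(1 + ratio ** (2 * order))

def bandpass_db(f, lo=340.0, hi=6700.0, order=8):
    """8th-order (48 dB/oct) highpass + lowpass band limit."""
    return butter_mag_db(f, lo, order, "hp") + butter_mag_db(f, hi, order, "lp")

# -3 dB at each band edge, and roughly 48 dB lost per octave outside:
for f in (85, 170, 340, 1000, 6700, 13400):
    print(f, "Hz:", round(bandpass_db(f), 1), "dB")
```

One octave below the highpass corner (170 Hz), driver output is already down about 48 dB, which is why bass and treble ''support'' contribute essentially nothing inside the test band.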
It might be optimistic, but my educated guess is:
about 70% of the participants will be able to consistently identify each of the 4 drivers in a blind test.
That is after doing everything possible to help them, including practice time and a (non-blind) presentation of the drivers before each round.
That might leave 30% of people unable to identify correctly.
It's kind of sad, in a way.
We had about the same percentage of people in our previous test who were unable to (consistently) spot the 64 kbps MP3... You might think a 64 kbps MP3 is very easy to spot, but not for everybody.
I don't understand the point of a blind test preceded by sighted training on each driver's sound. What is the need to have people correctly identify drivers after training them? I thought the point was to have a blind test so that people can select an unbiased ''best'' driver based on sound quality.
Thanks for the info! You should edit it into your first post so people know the parameters of the test!
My opinion on that is:
Driver selection is the most important thing, but it must go with good integration, and the best integration is possible with an active setup/electronic xover/DSP EQ.
How can you reach that conclusion when your test seems to show that, in a blind-test situation, well-designed and well-behaved drivers sound more or less the same?
About DSP/electronic xovers: I wonder why the big companies (Revel, JBL, B&W, etc.) haven't jumped on the DSP train in their high-end speakers. If it were so superior, every big company would use DSP/electronic xovers.
I think it's very tempting to believe that DSP and electronic xovers are the best way to integrate drivers; I doubt it's the truth, though.
I don't understand the point of a blind test preceded by sighted training on each driver's sound. What is the need to have people correctly identify drivers after training them? I thought the point was to have a blind test so that people can select an unbiased ''best'' driver based on sound quality.
You start the test like this:
''OK, we'll start with the music from XYZ, ready? Here is Driver 1... Now here is Driver 2... Now here is Driver 3... Finally, here is Driver 4.''
''OK, now the test begins, ready?''
(the exact same music plays)
*random driver*
''Which one is it? 1, 2, 3 or 4?''
You repeat with another random driver. Then, if successful after X trials, you move to the next step: unknown music excerpts (more difficult).
If successful after X trials, you have a participant who is able to identify. If not, well, you can ''eliminate'' him or continue the presentation/practice.
The whole idea behind this is very simple: you need to be able to identify BEFORE giving your appreciation. It's simple logic.
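Whether a participant is genuinely identifying drivers (rather than lucky-guessing among the 4) can be checked with a one-sided binomial test. A sketch; the 10-trial count and 5% significance level are illustrative assumptions, not the test's actual protocol:

```python
import math

def p_value(n_trials, n_correct, p_chance=0.25):
    """One-sided binomial test: probability of getting at least
    n_correct answers right out of n_trials by pure guessing."""
    return sum(
        math.comb(n_trials, k) * p_chance**k * (1 - p_chance) ** (n_trials - k)
        for k in range(n_correct, n_trials + 1)
    )

# With 4 drivers, chance is 25%. Over 10 trials, 6 or more correct
# answers beats guessing at the 5% level, 5 does not:
print(round(p_value(10, 6), 3))  # ~0.020
print(round(p_value(10, 5), 3))  # ~0.078
```

This is why the number of trials X matters: with too few trials, even a real ''golden ear'' can't be separated from a lucky guesser.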
I thought the point was to have a blind test so that people can select an unbiased ''best'' driver based on sound quality.
From Day 1, I knew it would be difficult.
I had very little hope of getting any conclusive answer about what is best, for many reasons discussed all over this thread...
But anyway, from my point of view it's much more exciting to learn the identification test results.
I can't wait to see the percentage of people able/unable, and also whether there are any patterns among the drivers (i.e. two drivers easily mistaken for one another), etc...
Still, we can of course collect participants' comments about their preferences. But that is subjective, and one can argue about drivers being misused, etc...
How can you reach that conclusion when your test seems to show that, in a blind-test situation, well-designed and well-behaved drivers sound more or less the same?
Personal opinion. Nothing more...
The test results are one thing and my opinion is another. 😉
At this point, I'm far from having any ''conclusion''... Just casual talk here 😛
About DSP/electronic xovers: I wonder why the big companies (Revel, JBL, B&W, etc.) haven't jumped on the DSP train in their high-end speakers. If it were so superior, every big company would use DSP/electronic xovers.
I think it's very tempting to believe that DSP and electronic xovers are the best way to integrate drivers; I doubt it's the truth, though.
Big companies must deal with considerations that often have nothing to do with ''best sounding'' goals...
But if you go to hi-fi shows, you'll see more and more DSPs (and music servers). Something that was almost unthinkable only 10 years ago.
Another thing: you can go passive and STILL benefit from a DSP/electronic xover, simply by using it as a designer's tool.
Back in the day I was always struggling with passive components to make comparison tests, but nowadays you can use a DSP to easily compare xover slopes and points. Extremely efficient, to say the least. And we're not even talking about EQ yet, which is a game-changer...
World's best midrange Blind Testing - Need your help.