World's Best Midranges - Shocking Results & Conclusions.

I still believe, a few years later, that blind tests involving subwoofers/woofers or tweeters would have the same fate, given, of course, a common bandwidth, EQ'd and SPL-matched.

And I still believe that anyone motivated enough could clone, that is sonically mimic, pretty much any commercial loudspeaker, B&W, Avalon, Dynaudio, MBL and whatnot... with just a bunch of cheap, readily available parts, an EQ, good measurement tools and enough patience to organize the whole thing.
 
Hi JonBocani,


Why not start by drawing the ideal 20 Hz to 20 kHz listening curve first, since you talk about mimicking a speaker, whatever the room, with DSP tools? If you had to draw it, what would its shape look like, please?


For illustration, mine could look like a 12 dB/octave slope below an F3 at 40 Hz, an extra 6 dB bump between 70 Hz and 120 Hz, then a little recess between 1 kHz and 2 kHz, -6 dB between 2.5 kHz and 4.5 kHz, and something flat after that up to 15 kHz... beyond that I'm not sure I hear very much! I'm not talking about the difficult subjects of the listening room, phase and so on; if I had an ideal headphone it would look like this curve!
 
Jon, I largely agree with you, but... distortion and break-up profiles of drivers do give coloration and will be hard to replicate. Plus power handling: it costs money to make drivers that can handle serious power.



I have already thought about this hypothetical blind test, because I wanted to organize one.

To begin with, you need to choose the commercial speakers to be used as a benchmark (I bought a pair of B&W CM9s at the time), then take measurements to determine which DIY components will be needed to replicate the sound signature. My idea at the time was to use the absolute cheapest parts possible, obviously to make a point...


All tested, of course, at a predetermined sound level (say 90 dB at the listening position) to keep distortion below audible levels. (Later, another test could be run at 100 dB, or at the benchmark's maximum output, etc...)


In an ideal world the (4) speakers would be installed on rotating plates, similar to what I did for the midrange drivers, to avoid identifying them by their position in space (power response, etc.), but I also wanted to test a simpler method, which consisted of rotating the listener on a chair and more or less aligning him each time with the pair of speakers that was going to play that round.

The two pairs of loudspeakers would be interleaved and offset, 1 + 3 and 2 + 4, so as to keep the same distance to each pair.

And of course, as many people going through the test as possible, of different ages and backgrounds, with at least 20 rounds each so that a success (17/20 or better) has statistical value (see the quick calculation below), and with music excerpts they are comfortable with.

Voilà. That was my recipe.
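
To put that 17/20 criterion in perspective, here is a minimal sketch (in Python, my own illustration rather than part of the recipe above) of the exact binomial calculation for a two-alternative forced-choice run; it shows that 17 or more correct answers out of 20 rounds would happen by pure guessing only about 0.13% of the time.

```python
from math import comb

def p_value_guessing(correct: int, rounds: int) -> float:
    """Probability of getting at least `correct` answers out of `rounds`
    two-alternative trials by pure guessing (p = 0.5 per round)."""
    hits = sum(comb(rounds, k) for k in range(correct, rounds + 1))
    return hits / 2 ** rounds

# 17 correct out of 20 rounds: about 0.0013, i.e. ~0.13% by chance
print(f"{p_value_guessing(17, 20):.4f}")
```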
 
Sometimes a dead violinist is much more comfortable than a live one! 😉

Sorry, couldn't resist !!!!

I guess it is probably still possible to distinguish live from recorded through differences in room interaction between a speaker and the real thing.

Regards

Charles
 

haha! 😛

I hope so, Charles, I hope so.
I would be confident that almost everyone would spot the live musician, but since I started running blind tests, I go from surprise to surprise... 🙄


That being said, the "commercial speakers vs. DIY speakers" test would probably be much harder for our ears to resolve, even when using cheap components on purpose...
 
"3. No one ever does blind listening tests"

I have to react to this - primarily because it sends me down memory lane 😀

I did participate in a blind listening test back in my university days.

It was a research project (for a PhD, as I recall) about the ability of humans to differentiate between speakers.
We, the test subjects, were checked for "normal" hearing, and the first listening test was a screening to see whether we were sufficiently consistent in our ability to decide between speakers.

The tests used 4 sets of speakers placed behind a curtain; every test subject was repeatedly presented with a piece of music and could switch between two speakers at a time. The tests varied in which two speakers we could switch between.

The goal for the test subject in every sub-test was to decide which of the two speakers they preferred,
and we were only presented with "speaker A" and "speaker B".

After the tests were completed we were shown the speakers - I recall a Quad ESL 63; the rest were not remarkable.
BTW: I was never told what speaker I preferred.

As I recall, the PhD student ended up at Bang & Olufsen, and the research report must still exist somewhere - but it will probably be in Danish 🙄

Digging that out will be a feat - I think the research was conducted back in 1982.

Cheers, Martin
 
Why do people think merely EQ-ing a speaker can fix most audible problems and let it impersonate a more expensive speaker? It's just not the case, unless you don't have an educated ear (like most self-proclaimed audio-phools). EQ won't cover up non-linear distortion, cabinet ringing, phase shifts, inductor hysteresis distortion or driver break-up modes. If that were the case, Bose would be the greatest brand known to mankind... we all know better, unless you're some brain-dead investment banker who regularly visits a palm reader.

Also, if EQ is that great, why can't it fix a bad-sounding acoustic space? ...Because it can't. You simply cannot slap some slick EQ on cheap speakers and get them to sound like multi-thousand-dollar ones. An experienced ear can pick up on less than 0.1% odd-order harmonics in the midrange. No EQ is going to cover that up.
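
For scale, here is a quick back-of-the-envelope conversion (my own illustration, not the poster's) of that 0.1% figure into decibels relative to the fundamental; in other words, the claim is that distortion products sitting 60 dB below the fundamental are still audible to a trained ear.

```python
from math import log10

def distortion_percent_to_db(percent: float) -> float:
    """Convert a harmonic distortion level given as a percentage of the
    fundamental into dB relative to the fundamental."""
    return 20 * log10(percent / 100)

# 0.1% distortion sits 60 dB below the fundamental
print(distortion_percent_to_db(0.1))  # -60.0
```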

Short-duration blind listening tests aren't very objective, mainly because of basic human psychology. We humans are very emotionally impulsive in our immediate responses, and that gets in the way of the objective reasoning needed to make a fair assessment of sound quality. It's very easy to sway the results one way, if so desired, by controlling the listening environment. All of our other senses affect how we perceive what we hear. Even something as seemingly unrelated as what we had to eat that day can skew the results.
 
The biggest flaw with blind tests is the ignorance of the participants, i.e. a lack of reference. If a group of people were asked to judge a range of brick walls without knowing what makes a brick wall good or bad, how could the results be offered as a measurement?

The design of the tests is also important. Knowing the strengths and weaknesses of the elements under test, a test can be designed to expose those weaknesses. Any car can get from A to B.
 
Isn't blind A/B testing just meant to show whether there is a difference between the items under test, not whether either is the absolute best there is? I'm not very familiar with these tests, but surely it is the best method anyone has been able to devise to remove all the kinds of bias that ruin the evaluation. Actually, isn't that the whole point of the test, to keep the participant ignorant? Maybe there is a short training period before a test, like how to listen for some of the things they are being asked to evaluate?

If I remember correctly, Toole notes in his book that relatively inexperienced listeners were almost as good (i.e. gave consistent results) in double-blind tests as very experienced listeners. This suggests the test method is sound and that no reference is needed for the "ignorant" to be able to hear a difference between A and B, if there is one. There is a point in what you say, though: if you want the absolute best speaker for yourself, you have to do the test yourself, since you can't trust other people's ears 😉

You could devise a blind test for brick walls as well. Install the different walls for a set of subjects: some get the good stuff, some get the placebo, then see whether the good stuff is rated any better than the placebo. The subjects don't know which one was installed; they just evaluate the goodness of the wall. A double-blind test would mean the construction worker also doesn't know which one they are installing for each customer. If there were a big difference, surely both would notice right away; but if it is just a difference in how dense the bricks are, or something like that, they might not notice a difference, and the manufacturer learns that they can use the lower-cost brick.
 
The brick wall comparison is not relevant to ABX audio.
With the brick wall you are comparing solid objects; a solid object remains unchanged, you can touch it, it is always there, it is always the same.
Can you replace the bricks with something else and still do the same ABX comparison, e.g. comparing the quality of 2 cameras?
You can print the same photograph taken by the 2 cameras and compare the 2 prints without knowing which camera each was generated from.
Then you have 2 solid things in hand and can make the comparison on equal terms.
An ABX listening comparison is a very different matter:
our brain keeps the memory of a sensation for only a few seconds, which means that a comparison between what you are hearing now and what you must recall as a remembered sensation does not have the same value.
Of course, this applies to sound comparisons of small to very small differences.
 
Good point! Since audio is perceived in real time, there really is no other way to compare than listening to one after another. The important thing with a blind listening test is to remove influences on judgement other than the audio perception itself.

A double-blind test tries to remove the organizing party's influence on the results as well. Level matching, positioning, the room and all sorts of things can affect the result; it is the responsibility of the organizing party to make the environment and situation relevant for the results to be meaningful. Still, a blind listening test as a method seems as good as it gets; otherwise it is the brain biasing the results.
 
Still, a blind listening test as a method seems as good as it gets; otherwise it is the brain biasing the results.
And that is plain reality. As with testing wine (or various soda brands for the non-drinkers), food, or the quality of your TV set, you can have two objectives with ABX testing: finding out whether a difference is perceivable, and finding out whether a certain quality is preferred or disliked. The latter would also come out of plain AB testing, by the way. For perception alone, only the ABX test works.

Here we discuss both objectives (perceivability and taste), and they often get entangled with each other, sometimes with fiery discussions as a result. Part of the fun of this forum is probably the psychosocial experiment, I guess...
 
It would be shocking to realize after such a test that personal taste was an obstacle to the better alternative 😀 Some would deny it, I guess, and blame the test.

Anyway, it is unfortunate that the test is pretty complex to organize properly, so most people will never get to participate. I would love to. Is anybody doing ABX tests themselves at home, and is there any way to blind oneself as fairly as possible? 😀 For example when testing crossover variants. I do this all the time (not as a proper ABX), but the ear and brain are easily fooled, so it would be interesting to do it as well as possible at home. Any practical tips?
 
Ehh, connect them both to an amp, add suitable resistors to the outputs of the crossovers and hook up a speaker to both + leads of the crossovers? If you can hear anything, there is a difference 😉
Not that this approach really works, I'm afraid... but a four-pole switch to switch between the crossovers should do the trick, with an assistant to operate the switch and record the results.
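
If what you are comparing can be captured as recordings (for example, captures of each crossover variant taken at the listening position), a small script can do the blinding for you. Below is a minimal sketch, my own illustration rather than anything proposed above: it randomizes which of two audio files plays as "X" on each trial and scores your guesses. The file names are placeholders, and the playback command is an assumption (afplay is the macOS player; substitute whatever command-line player your system has).

```python
import random
import subprocess

# Placeholder file names -- substitute your own captures of each variant.
FILE_A = "crossover_v1.wav"
FILE_B = "crossover_v2.wav"

# Command-line audio player; "afplay" is the macOS player, substitute
# your own (e.g. ["aplay"] on Linux).
PLAYER = ["afplay"]

def play(path: str) -> None:
    """Play one file and block until playback finishes."""
    subprocess.run(PLAYER + [path], check=True)

def abx_session(trials: int = 20) -> None:
    """Run a self-administered ABX session and report the score."""
    correct = 0
    for i in range(1, trials + 1):
        label, x_file = random.choice([("A", FILE_A), ("B", FILE_B)])
        print(f"Trial {i}: playing A, then B, then X")
        play(FILE_A)
        play(FILE_B)
        play(x_file)  # the script knows which one X is, you don't
        guess = input("Was X the same as A or B? ").strip().upper()
        correct += (guess == label)
    print(f"{correct}/{trials} correct")

if __name__ == "__main__":
    abx_session()
```

With 20 trials, the same 17/20 threshold mentioned earlier in the thread would separate a genuinely audible difference from lucky guessing.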
 
The biggest flaw with blind tests is the ignorance of the participants, i.e. a lack of reference.

Years ago, the folks at Audio Note wrote an interesting paper titled 'Are You On The Road To Audio Hell?', which addresses the problem of subjective evaluation of audio components by comparison to some reference recording. In short, the paper argues this paradigm is backwards, since one does not know what the recorded reference SHOULD sound like, or whether the chosen reference recording just happens to favor the errors/colorations of one component over another. Their solution is both simple and counter-intuitive: one shouldn't compare components with the purpose of identifying which most closely matches some familiar reference, but rather to identify which components make unfamiliar recordings sound the most different from EACH OTHER. Anyway, for those who may not be familiar, below is a link to the paper on Audio Note's site.

https://d1b89e86-9572-4311-9f80-600...d/3e7c3b_5da8bdfea9024ff3be5ca2c536a6c0a5.pdf
 
The OP (JonBocani) had been trying, in several different threads, to prove that "everything sounds the same" once the frequency response is matched, but it ended up only proving that a blind test does not really work for this kind of comparison. I'm still interested in the reason why it failed.
 
I've tried many small midranges for my 3-way fronts: 15W, 15WU, SB15, Accuton T8, Dayton RS125 and a 5" aluminium driver from AE. They do NOT sound the same - no matter the amount of DSP - and I mostly EQ within +/-1.5 dB from around 500 Hz to 15,000 Hz.

All the harder domes clearly have a more detailed upper midrange compared to all the paper ones - no matter the price. It's pretty clear to me that basic physics tells the story here.
Hard domes break up more severely than softer, self-damping materials like paper, plastic and various fiber types - but the harder domes are also much cleaner and more even within their usable bandwidth - and that helps.
I have tried most types and brands of DSPs. My experience tells me that too much EQ kills the quality - something simply sounds off. Which brings me back to the fact that in real life you tend to get much better results by using sensible amounts of EQ on good drivers - good for the job, that is - rather than punishing a ragged or unsuitable driver with a huge amount of EQ. For example, I tried to do all I could with a dedicated 5" SB Satori papyrus midrange, but it will never sound as open and clear as the little Dayton RS125 that I'm experimenting with now. Just look at the trend toward Be, Textreme, titanium and the like for tweeters. Why would it not be exactly the same principle for midranges?

In a theoretically ideal experiment with super-clean drivers, no doubt the differences would be smaller and harder to hear. But in real life, all drivers have all kinds of quirks and "personality" that are not easily dealt with by simple EQ.

Yes, yes... I know that a lot of people can enjoy music out of any boom-box. But most people I know - no matter their background - tend to enjoy good-quality sound systems... they just back off when it comes to paying for them - oh no... too big, too difficult, too expensive. 😀 It all has to be so easy now 😉