The only "definitive" answer in this Subjective world is...

^ An automated turntable is Harman's solution, and I think it's the better one. Moving from room to room may mess with folks (especially if people can identify *which* room they're in), and you might lose certain temporal effects (noise introduction).

I think it's also worth saying that day-to-day we *do* listen sighted. So there's some value in buying/building things you find cool, since, biases and all, they're likely to sound better to you. On the R&D side, yes, deliberately trying to please a wide audience OR aiming for a specific sound that a subset prefers seems more valid.
 
...would be obtained by conducting a large survey in a blind test environment.
But every True Subjectivist knows you can't judge the sound of something while blindfolded! :D
On a more serious note:

The test is not only about having many people, but also about having many people listening to the very same thing (a controlled environment).

It makes a lot of difference.
And that really drives up the cost: to hear the same thing, listeners need to sit in the same seat (perhaps raised or lowered so their ears end up in the exact same place), so you can't have a group of people listening together; you can only have one at a time.

I have to wonder what percentage will prefer A over B when A is a speaker reproducing a single musical instrument (with no compression, EQ, or other processing) and B is a live performance on that same instrument.
 
And that really drives up the cost: to hear the same thing, listeners need to sit in the same seat (perhaps raised or lowered so their ears end up in the exact same place), so you can't have a group of people listening together; you can only have one at a time.

Yes, always one at a time.

It's not only a question of ear position; it's also a question of avoiding any influence, spoken or otherwise.

When I ran the MP3 vs. CD vs. HD 24/96 blind test, it was one listener at a time. Yes, it takes more time, but it's more reliable.

Also, the SPL must be the same (±0.3 dB max) when you test different systems or speakers.

Finally, auditory memory is very short, so being able to switch quickly from one room to another (when they're close together) is very useful.
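
On the level-matching point: a minimal sketch (assuming numpy; the signals and function names are just illustrative) of how one might compute the gain that brings two captures to the same RMS level and check the ±0.3 dB tolerance:

```python
# Hedged sketch: level-match two mic captures to the same RMS level
# and verify the residual stays within the +/-0.3 dB tolerance.
import numpy as np

def rms_db(signal):
    """RMS level of a signal in dB (relative to full scale)."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(signal))))

def matching_gain(reference, candidate):
    """Linear gain that brings `candidate` to the same RMS level as `reference`."""
    delta_db = rms_db(reference) - rms_db(candidate)
    return 10 ** (delta_db / 20)

# Synthetic noise stand-ins for real mic captures of the two systems.
rng = np.random.default_rng(0)
ref = 0.5 * rng.standard_normal(48000)
cand = 0.3 * rng.standard_normal(48000)

g = matching_gain(ref, cand)
residual = abs(rms_db(ref) - rms_db(g * cand))
assert residual <= 0.3, "levels not matched within the 0.3 dB tolerance"
```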
 
But does your post #1 scenario actually answer the question asked? I found it odd that there was no 'can't hear a difference' percentage accounted for. It made me think of the "forced choice" method (IIRC) in SY's testing article from LA, but with a corrupted meaning and method. I'm a greenhorn in this subject.
If the only choices are A or B, then 'no difference' must arbitrarily be assigned to one of them. Can't that skew the results?
 
But does your post #1 scenario actually answer the question asked? I found it odd that there was no 'can't hear a difference' percentage accounted for. It made me think of the "forced choice" method (IIRC) in SY's testing article from LA, but with a corrupted meaning and method. I'm a greenhorn in this subject.
If the only choices are A or B, then 'no difference' must arbitrarily be assigned to one of them. Can't that skew the results?
I'm not an expert, but I've read (and thought) a little bit about this. One might argue that the listener arbitrarily assigns "no difference," and perhaps there could be a tendency to pick the first choice when no difference is heard, making the statistics partly invalid (though this could be corrected by alternating A and B for each person).

On the other hand, with no way to weasel out of a choice and decide on first listen that there's no difference, the listener is forced to "listen harder" for a difference (to decide whether he/she likes A or B better, or for ABX, whether X is A or X is B), and thus the statistics become MORE valid.
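
To make the order-correction idea concrete, here's a hypothetical sketch (plain Python; the 'AB'/'BA' labels are just illustrative) of a balanced, shuffled presentation order, so a habit of picking the first item when unsure averages out:

```python
# Hedged sketch: counterbalance which system plays first in each trial.
import random

def make_trial_order(n_trials, seed=None):
    """Return a list like ['AB', 'BA', ...]: half of each order, shuffled."""
    rng = random.Random(seed)
    half = n_trials // 2
    order = ['AB'] * half + ['BA'] * (n_trials - half)
    rng.shuffle(order)
    return order

print(make_trial_order(8, seed=42))  # e.g. ['BA', 'AB', 'AB', 'BA', ...]
```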
 
sofarspud, you are right.

The variable 'can't hear any difference' must be controlled.

There are a few ways to do so: my preferred one is a pre-selection of participants who demonstrate their ability to 'hear the difference'.

That is exactly what I'll do for my upcoming blind test of midrange drivers.
 
I'm not an expert, but I've read (and thought) a little bit about this. One might argue that the listener arbitrarily assigns "no difference," and perhaps there could be a tendency to pick the first choice when no difference is heard, making the statistics partly invalid (though this could be corrected by alternating A and B for each person).

On the other hand, with no way to weasel out of a choice and decide on first listen that there's no difference, the listener is forced to "listen harder" for a difference (to decide whether he/she likes A or B better, or for ABX, whether X is A or X is B), and thus the statistics become MORE valid.

Also, I think we should consider the fact that the difference between A and B can be obvious.

The test can be designed so there is a minimum of contrast between A and B. Of course, there is still a chance for a small percentage of people to fall into 'can't hear a difference' or 'can't hear a difference but I'll take A anyway*'.

* Based on experience: people tend to "find" an answer, even if they really have no clue.
 
Also, I think we should consider the fact that the difference between A and B can be obvious.

The test can be designed so there is a minimum of contrast between A and B. Of course, there is still a chance for a small percentage of people to fall into 'can't hear a difference' or 'can't hear a difference but I'll take A anyway*'.

* Based on experience: people tend to "find" an answer, even if they really have no clue.
While we're designing this, each person can be given a suite of comparisons, with "what is A" and "what is B" randomized for each comparison. Those who get things right only about 50 percent of the time (whose responses are statistically indistinguishable from guessing) clearly can't hear a difference.
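
A sketch of that screening step (assuming scipy is available; the trial counts are invented): keep a listener only if their hit rate over a suite of trials is unlikely under pure guessing (p = 0.5):

```python
# Hedged sketch: screen out listeners whose results match coin-flipping.
from scipy.stats import binomtest

def can_hear_difference(hits, trials, alpha=0.05):
    """True if the hit rate is significantly better than chance."""
    result = binomtest(hits, trials, p=0.5, alternative='greater')
    return result.pvalue < alpha

print(can_hear_difference(9, 16))   # 9/16 correct: consistent with guessing
print(can_hear_difference(14, 16))  # 14/16 correct: admitted to the panel
```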
 
I didn't read the whole thread, but one has to keep in mind that the result of such a test will most probably show which speaker is subjectively preferred by most people, not which one is the best-sounding.

BTW: Thinking that something IS good just because most people think it is good is a common misconception (or logical fallacy). It is so widespread that there even exists a term for it: "groupthink".

Regards

Charles
 
While it is obvious that one has to use proper statistical analysis on blind results (random order of A and B, for example), wouldn't it be nice if the answers were correlated with measurements, thus giving the engineer a clear set of guidelines to shoot for instead of the typical "sounds good to me" criterion - which has no meaningful quantification.

Actually that has been done, for the most part, by Toole and Olive and some others. The problem is that many - several here in this thread - simply want to throw all that away so that they can continue to believe what they want to believe - themselves (and usually their own design, despite how flawed it is).
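
As a toy illustration of what "correlating answers to measurements" means in practice (all numbers invented; this is not Olive's actual published model or coefficients): fit a linear model from measurement-derived metrics to mean blind-test ratings and check how well it predicts:

```python
# Hedged sketch: regress mean preference ratings on measurement metrics.
import numpy as np

# Hypothetical per-speaker metrics: [on-axis flatness error,
# directivity smoothness error, low-frequency extension in Hz].
metrics = np.array([
    [0.8, 0.6, 40.0],
    [1.5, 1.2, 55.0],
    [0.5, 0.4, 32.0],
    [2.1, 1.8, 70.0],
    [1.0, 0.9, 45.0],
    [1.7, 1.4, 60.0],
])
preference = np.array([7.2, 5.1, 8.0, 3.9, 6.5, 4.6])  # invented ratings

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(len(metrics)), metrics])
coeffs, *_ = np.linalg.lstsq(X, preference, rcond=None)
predicted = X @ coeffs
print(np.corrcoef(predicted, preference)[0, 1])  # fit quality on toy data
```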
 
Nope. My ears are right, yours are wrong - that's the only statistic of importance to me. What anyone else thinks is irrelevant ;)

Nobody builds what I want except me myself and I

There's a lot of wisdom in those quotes, in that audio should be a subjective journey to satisfy your own ears and not what the other 9,999 people think.

In terms of correlating measurements with what we hear, why isn't it more common to use a binaural dummy-head microphone for evaluating performance in acoustic spaces? I would have thought it would capture more complex temporal and spatial cues than a dumb omnidirectional mono mic. Why don't researchers like Toole, Olive, etc., or high-end room correction solutions like Acourate, Audiolense, Dirac, Lyngdorf, etc., use binaural dummy heads for measurements? Or don't we understand enough to make use of such data in any meaningful way for these purposes? If price is the sticking point, would it make sense for the DIYA community to develop a cost-efficient solution together, a bit like how the SEOS waveguide came about on AVS :D
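
For what it's worth, here's one concrete thing a two-channel dummy-head capture gives you that a mono omni mic can't: an interaural time difference (ITD). A hedged sketch (assuming numpy; the click signal is synthetic) estimating it by cross-correlating the two ear channels:

```python
# Hedged sketch: estimate interaural time difference from a binaural capture.
import numpy as np

def estimate_itd(left, right, fs):
    """Lag (seconds) of `right` relative to `left` via cross-correlation."""
    corr = np.correlate(left, right, mode='full')
    lag_samples = np.argmax(corr) - (len(right) - 1)
    return lag_samples / fs

# Toy example: a click reaching the right ear 0.5 ms after the left.
fs = 48000
click = np.zeros(4800)
click[100] = 1.0
left = click
right = np.roll(click, 24)  # 24 samples = 0.5 ms at 48 kHz
print(estimate_itd(left, right, fs))  # ~ -0.0005 s: right lags left
```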
 
If someone can come up with the 3D model and post the STL file of the ear canal and external ear, we can 3D print it and stick it on the end of some calibrated mics mounted inside a mannequin head and bust. Can probably find old dummy heads somewhere - retail clothing store dumpsters?

It makes a huge difference to have that waveguide we call the ear in front of the mic.
 
Toole demonstrated that what most people prefer, even people with no experience, is the best technical speaker (i.e., best measurements: best on-axis, best off-axis, lowest distortion, etc.)

I didn't read the whole thread, but one has to keep in mind that the result of such a test will most probably show which speaker is subjectively preferred by most people, not which one is the best-sounding.

BTW: Thinking that something IS good just because most people think it is good is a common misconception (or logical fallacy). It is so widespread that there even exists a term for it: "groupthink".

Regards

Charles
 
While it is obvious that one has to use proper statistical analysis on blind results (random order of A and B, for example), wouldn't it be nice if the answers were correlated with measurements, thus giving the engineer a clear set of guidelines to shoot for instead of the typical "sounds good to me" criterion - which has no meaningful quantification.

Actually that has been done, for the most part, by Toole and Olive and some others. The problem is that many - several here in this thread - simply want to throw all that away so that they can continue to believe what they want to believe - themselves (and usually their own design, despite how flawed it is).
Exactly. Toole demonstrated all we need to know to run such tests. And one simply has no choice but to test blind if he is serious about his test, and also to put the different speakers (or drivers) under test at the exact same spot in the room and, in the case of a tweeter, most likely at the same height.

However, some of Toole's research on room treatment is quite seriously doubted and critiqued in the studio-building and pro-audio communities. I have actually tried it myself, and with many of my friends, in a blind setting, with and without first-reflection panels, and my findings show a dramatic increase in soundstage clarity and imaging with the first reflection points treated.
 