Matters between you and your lady stay in the house, not this forum.
Asking for a tip on how to add some room treatment without her noticing is hardly a ‘matter between me and my lady’ .....
If it was stealth enough it would never matter.
One that increases risk of heart attack.
Incoherent, inchoate anger.


Why? What happened to the room acoustics?
Room acoustics is to speaker design what grounding is to amplifier design. It's only a big issue to beginners (well, there are experienced ones who are not the learning type).
Three decades ago I built an amp. It took a few frustrating days to solve its hum issues, and I called it a day even though it was not up to normal standard. Today, I know what causes hum, and I know how to prevent the issue (instead of how to solve it). Room acoustics is similar.
For my own use, I can't imagine where room acoustics could be a problem. I don't have an 'empty' room where I need to put my speakers (the bathroom, maybe). I have a dedicated room with full treatment, but it is now used as a storage room for my speakers, amps, and other audio stuff. Small living rooms tend to be 'full': books and curtains to diffuse HF, a sofa to absorb LF. But the biggest fortune is that my living room is not rectangular 😉
Just as PCB routing is the first effort to prevent bigger grounding-related issues, a speaker designed for off-axis listening (good dispersion) and a smooth LF roll-off will be room friendly too.
most sensible people understand it's an observation & listening impression.
Yes, it seems that it's only on these diyaudio forums that a listening observation is taken to be a claim of something rather than just one's listening observation.
And while some, like Evenharmonics, like to think of ABX tests and the like as some kind of "proof" of something, I think they are just subjective listening observations relevant only to the context in which they were done.
Is there a thing or situation where some other test might work where a double blind test does not?

Nobody is doubting the value of such a test, but does it work universally in everything? I for one do not think so. There was a time when I might have, but not anymore.
I read posts about, for example, "long term listening" vs. ABX, and I wonder if there's some evidence for hearing something in long term listening that can't be heard in ABX.
You call it a claim as it suits your strawman argument - most sensible people understand it's an observation & listening impression.

Where and when did you take the poll?
Asking for a tip on how to add some room treatment without her noticing is hardly a 'matter between me and my lady' .....

No need to drag me into your family affair.
If it was stealth enough it would never matter.
Is there a thing or situation where some other test might work where a double blind test does not?
I read posts about, for example, "long term listening" vs. ABX, and I wonder if there's some evidence for hearing something in long term listening that can't be heard in ABX.
Where do I start? There are so many things, but what is clear is that ABX often simply doesn't work and can make things that do not sound the same have a 'sameness' in the end result.
Bottom line, it is not a natural test. You can even argue that the listener is being tested, maybe even more than the equipment.
What if I fear that I will not come up to scratch as a performing monkey? What if I don't perform well under pressure? A person should never be asked "what do you think?" and instead be asked "what piece of music would you like to hear next?" And then just relax, knowing that the dreaded question will not be asked. No pressure, and your chances of coming to grips with what you are hearing become much better.
But that does not mean there are no caveats. For example, why are so many audiophiles frustrated reviewers? Instant appraisal can be instant annoyance. A responsible reviewer never reviews on the fly. He wants to live with the equipment, take his time, examine the pros and the cons - and there will always be those and ABX can never take that into account... and so far I have only scratched the surface.
But having said all that, the double blind test plays a role in some very important areas, like pharmaceuticals, where it controls for the placebo effect. I don't think that any one of us here would deny that.
you can miss out on the accumulation of small differences that add up to make a big difference, psychoacoustically, i.e. you can miss out on the possibility of improving your replay system & hence its enjoyment factor

FUD.

Doesn't appear as if you are happy.

BAD = SAD1 + SAD2 + SAD3 + ................ + SADn
BAD ... big audible difference
SAD1 - SADn ... small audible differences
This is the so-called "mmerrill99 audibility postulate".
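The postulate can be sketched numerically. A minimal illustration, assuming a hypothetical 1 dB just-noticeable difference in level and ten equal 0.2 dB changes (both numbers are invented for the example, not taken from the thread):

```python
jnd_db = 1.0        # assumed just-noticeable difference in level (hypothetical)
sads = [0.2] * 10   # ten small differences, each below the assumed JND
bad = sum(sads)     # BAD = SAD1 + SAD2 + ... + SADn

print(all(s < jnd_db for s in sads))  # True: each SAD alone is sub-threshold
print(bad > jnd_db)                   # True: their sum is not
```

Whether sub-threshold differences really accumulate like this psychoacoustically is exactly the disputed point of the postulate; the code only restates the arithmetic.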
If one cares to have a quick look at the threads started by Merrill and Jakob (two of which they were asked to open because, iirc, they kept posting the same thing over and over again here), it will become evident why they are back here posting the same thing over and over again.
I read posts about, for example, "long term listening" vs. ABX, and I wonder if there's some evidence for hearing something in long term listening that can't be heard in ABX.

This is as ill-posed a question as it gets. The time-scale of an evaluation has nothing to do with the method.
You can compare short-term with long-term listening (blinded or not, both of course) and you can compare blinded listening vs. sighted (long-term or not, both of course).
You can change only one variable in the game and have to leave the other parameters constant, otherwise it's not a valid investigation of the variable.
For a trained listener, blinded listening experiments don't have any less resolution than sighted listening. The difference is that sighted listening is influenced by so many more, mostly uncontrolled, variables beyond the actual sound alone that it is hard, if not impossible, to draw any conclusions about the sound and the sound alone. That doesn't mean blind listening is totally free of uncontrolled variables and biases (it's not), but their number and effect are much smaller.

And I may repeat the first point: blind testing needs as much training as sighted critical listening does. For example, AmirM (founder of the AudioScienceReview forum) was involved with lossy audio codec development and wrote that it took six months of training to be able to identify the smallest artifacts.
but my deal with the devil was nothing stereo-related in the room.
It is encouraging that healthy sense of humour survives up there on the mountains together with the ability to deal with reality.
George
BAD = SAD1 + SAD2 + SAD3 + ................ + SADn
BAD ... big audible difference
SAD1 - SADn ... small audible differences
This is the so-called "mmerrill99 audibility postulate".
Don't forget, it's actually the sum as t goes to infinity, because you can't notice the differences except over a long period of time. 😉
The insistence on ABX DBTs on audio forums is the result of a bias towards null results - it's exactly what is required for a particular belief system
I think you really don't have a clue.
ABX or any form of DBT is essential for any audio product development. It basically boils down to asking live human beings what they like best (after having determined that they do hear a difference, that is). The same approach is taken for mayonnaise, soft drinks, dip sauce and female hygiene products, amongst others. People are asked what they like best, without knowing the provenance of the product they are testing.
Blind testing is exactly what is required to break through particular belief systems.
The occurrence of null results simply means that for audio, above a certain level, it is good enough: differences are no longer perceived, let alone that preferences can be developed.
To me, it is incredible that people can believe that differences would automagically disappear under blind testing. How could they? How could my female hygiene product suddenly become indistinguishable from the competitor's version, once brand names are removed?
Serious businesses, also in the audio industry, test blind. Some even publish the results of their findings. On the other hand, charlatans and snake oil merchants have something to hide, so they try to create a belief system in which blind testing doesn't work.
I just don't see any equivalence between reproducing live music with a high degree of fidelity versus the preferred taste between two tomato sauces - or ketchup.
It is only in very recent times that I have heard digital sound really great - all those years that I thought not, I am perfectly comfortable with my judgement on that score. Now where is my Fountain sauce for my meat pies, I was so sure I put it in the fridge. Oh dear, my bad memory strikes again, sigh... 😉
But when I heard a double-tracked acoustic guitar in the mix was actually triple-tracked, my subjective listening was highly objective, as in one-two-three. "All three present, Sir!"
I just don't see any equivalence between reproducing live music with a high degree of fidelity versus the preferred taste between two tomato sauces - or ketchup.
I do, the equivalence is almost perfect: sweet - neutral - sour > SE valve - bryston - class B zero bias current
Joe Rasmussen said:
I just don't see any equivalence between reproducing live music with a high degree of fidelity versus the preferred taste between two tomato sauces - or ketchup.

If you are asking people which sound/ketchup they prefer then you need to hide from them the identity of what they are experiencing. Otherwise, they will answer a different question (e.g. which packaging/brand do I prefer) however much they are sure they are answering your question. The same principle applies if you are simply asking whether two sounds/ketchups can be reliably distinguished.
If you were testing packaging preferences then you would instead 'hide' the contents by putting the same stuff in all packages.
A quite separate question is whether preferences might be different or distinguishability might be enhanced if a much longer test was carried out. This may be true, but it is difficult to carry out a long test with sufficient blindness to reduce bias. This provides a gap for the FUD merchants to squeeze through.
<snip>
To me, it is incredible that people can believe that differences would automagically disappear under blind testing. How could they? How could my female hygiene product suddenly become indistinguishable from the competitor's version, once brand names are removed?
I hope this explanation helps to resolve the mystery 🙂 :
This is a table excerpt from a sensory experiment (taste) which compares different test protocols:
[table not reproduced in this extract; it lists the proportions correct per protocol and replicate]
(Yu-Ting Huang, Harry Lawless, "Sensitivity of the ABX Discrimination Test", Journal of Sensory Studies 13 (1998) 229-239)
In the course of the experiment the same group of humans evaluated the same sensory difference under three different test conditions.
In the table is listed the proportion of correct answers (Proportions Correct) for each sensory difference under the various test protocols.
The first row of test results is denoted "paired comparison", which is otherwise known as an A/B test.
The second row lists the result for the triangle test protocol, which presents three stimuli, two of which are identical (so it has some similarities to the ABX protocol).
The third row lists the results for the ABX protocol trials.
It is a one-dimensional test (it is only about sweetness); as said above, the same sensory difference is tested.
Obviously the proportion of correct answers is quite different between the test protocols and there is also an indication that the consistency (results of the repeat runs) is different for the three protocols.
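One reason the proportions correct differ across protocols is that the protocols themselves differ, among other things, in their pure guessing rates. A quick sketch of a subject who perceives no difference at all, using the standard chance levels for these designs (these are textbook values, not figures from the post):

```python
import random

def chance_level(n_options, trials=200_000):
    """Fraction correct for a subject who cannot perceive the difference
    and therefore guesses uniformly among n_options possible answers."""
    return sum(random.randrange(n_options) == 0 for _ in range(trials)) / trials

# A paired comparison (A/B) and an ABX trial offer two possible answers;
# a triangle trial asks for the odd one out of three.
for name, n in [("paired comparison", 2), ("triangle", 3), ("ABX", 2)]:
    print(f"{name}: chance level ~ {chance_level(n):.2f}")
```

So even a pure guesser scores around 50% on A/B and ABX but only around 33% on the triangle test, which is one of the things a proper analysis has to account for.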
A "blind" test like this does _only_ tell you the number of correct responses; it __can't__ tell you what was going on in the mind of the participant, hence it _can't_ tell you whether the participants really perceived a difference in taste.
That is something the experimenter concludes after analyzing the results.
This analysis is often based _solely_ on the number of correct responses, as for example in the Foobar ABX comparator.
Assume, for example, a hypothetical 10-trial experiment under the three different test protocols and use the proportion-correct numbers from the table (1% vs 2%, Replicate 1) with decision criterion SL = 0.05; then you'd get:
A/B test: 0.92 x 10 ≈ 9 hits -> exact binomial test for probability of random guessing -> cumulative probability for guessing P(X>=9 | 10) = 0.0107 (~1.1%)
-> rejection of the null hypothesis -> conclusion: taste difference
Triangle test: 0.8 x 10 = 8 hits -> ... -> P(X>=8 | 10) = 0.055 (~5.5%) -> null hypothesis not rejected -> conclusion: no taste difference
ABX test: 0.6 x 10 = 6 hits -> ... -> P(X>=6 | 10) = 0.38 (~38%) -> null hypothesis not rejected -> conclusion: no taste difference
I hope it is now understandable in which way "differences" can "automagically disappear".
It depends obviously on the humans, on the test conditions (like the protocol, the question asked, one-dimensional or multi-dimensional, number of trials and so on), and further on the method used to analyze the results for drawing conclusions about "audibility" or "sensory difference" in general.
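The arithmetic above can be reproduced with an exact binomial tail. A small sketch (my own code, not from the post) that applies, as the post does, a 0.5 chance level to all three protocols; strictly, a triangle test's guessing rate is 1/3, which would lower its p-value further:

```python
from math import comb

def binom_tail(hits, trials, p_guess=0.5):
    """Exact one-sided tail P(X >= hits) under random guessing."""
    return sum(comb(trials, k) * p_guess**k * (1 - p_guess)**(trials - k)
               for k in range(hits, trials + 1))

# Hits per protocol from the proportions correct (1% vs 2%, Replicate 1).
for name, hits in [("A/B", 9), ("Triangle", 8), ("ABX", 6)]:
    p = binom_tail(hits, 10)
    verdict = "taste difference" if p < 0.05 else "no taste difference"
    print(f"{name}: {hits}/10 hits, p = {p:.4f} -> {verdict}")
# prints p = 0.0107, 0.0547 and 0.3770 respectively
```

Only the A/B result clears the 0.05 criterion, which is exactly the point being made: the same sensory difference passes or fails depending on the protocol.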
- John Curl's Blowtorch preamplifier part III