John Curl's Blowtorch preamplifier part II

AX tech editor
[snip] Eliminate witness subjectivity? You have a protocol that makes all witnesses have the same taste? Can it make them all want to marry the same woman, also? ;) [snip]

I'm sorry Rod; in posting, one always assumes some basic understanding, but sometimes I cut too many corners.
Of course you can't make someone's subjectivity disappear.
What I meant was that the test protocol will be designed to eliminate the influence of the listener's biases, prejudices, subjectivism etc. from the results.
Actually, you can take it one step further and design a controlled subjective test which can also give reliable and repeatable results. SY has written about that, I believe, in another thread.

jan didden
 
Jan, I am sure you could systematise the testing to eliminate bias and prejudice; HOWEVER, your results are still constrained by the ability of the subject to discern the difference between one piece of equipment and another. This ability will vary from subject to subject, not to mention with environmental and even emotional conditions.

Still, let's say you perform some tests and record the answers. Will the designer of the next BLOWTORCH use such data to aid his work? No, I don't believe it at all. First, the data is invalid for any circuit environment other than the original test [as a true son of Apollo, you must accept that]. What use are tests based on stale circuits?
Second, will an outstanding professional designer, like John Curl, trust the results of tests on subjects he does not know and trust? No, he will trust his own perceptions and combine them with engineering design talent and a lifetime of professional knowledge-gathering to produce an outstanding amplifier.

I am not casting doubt on your ability to produce tests, and interpret them. I am highlighting the fact that their practical usefulness in amplifier design, which is the central theme of our thread, is very small.
 
AX tech editor
Jan, I am sure you could systematise the testing to eliminate bias and prejudice; HOWEVER, your results are still constrained by the ability of the subject to discern the difference between one piece of equipment and another. This ability will vary from subject to subject, not to mention with environmental and even emotional conditions.

Still, let's say you perform some tests and record the answers. Will the designer of the next BLOWTORCH use such data to aid his work? No, I don't believe it at all. First, the data is invalid for any circuit environment other than the original test [as a true son of Apollo, you must accept that]. What use are tests based on stale circuits?
Second, will an outstanding professional designer, like John Curl, trust the results of tests on subjects he does not know and trust? No, he will trust his own perceptions and combine them with engineering design talent and a lifetime of professional knowledge-gathering to produce an outstanding amplifier.

I am not casting doubt on your ability to produce tests, and interpret them. I am highlighting the fact that their practical usefulness in amplifier design, which is the central theme of our thread, is very small.

Well, if you do a controlled test between, say, a BT and a SONY xyz, and the result is that the majority of listeners cannot hear a difference, I would think that's a pretty significant pointer for amp design.

Again, lots of people, when given the choice, would probably get a BT instead of the SONY. But with these test results in hand, we now know that the basis for that selection is not the audible performance. Isn't that interesting to know?

Again, anybody listens to what he likes, and selects and buys what he likes, etc. But if you say this amp sounds different from that one (which is of course the first requirement for a difference in preference), that can be tested. Simple as that.

jan didden
 
Well, if you do a controlled test between, say, a BT and a SONY xyz, and the result is that the majority of listeners cannot hear a difference, I would think that's a pretty significant pointer for amp design.

No, that's a pretty significant pointer for the 'controlled' test methodology, and it is valid only for that one specific set of test conditions, including spatial acoustics etc. The pointer is that the hypermarket buyer would not need to go for the BT.
 
Well, if you do a controlled test between, say, a BT and a SONY xyz, and the result is that the majority of listeners cannot hear a difference, I would think that's a pretty significant pointer for amp design.

Even if ONE listener could reliably distinguish between a BT and a Sony, that would be useful to know. So far, nope, no-one has been able to demonstrate anything even vaguely like that. Not one person.
 
And what exactly is your mission here? Consumer protection, warning us about the High End industry?

I find it rather presumptuous to ask for proof. If you don't believe it, let it be.
Buy a $50 CD player, build an LM3885 amp, and that's all you need.
It will be hard to distinguish this equipment in a DBT from a $10'000.- one.

But then, I'm asking, why post in a DIY forum?

Tino

I'm trying to discover how many ways audiophiles, including tinkerers, can invent to argue about how many angels can dance on the head of a pin. This particular one is not new; it's been going on for well over a decade, especially in regard to crossover networks. Nevertheless, so far I've tallied 137 of them. How about some originality for a change? :p
 
When communication equipment is tested, subjective listening tests are done as well as all the electrical tests. These employ such things as modified rhyme tests. To get some statistical evidence, in our case, all participants have a hearing test to determine their hearing ability and, more importantly, any frequency dropouts. Then there has to be quite a large number of participants; the more the better. There are other criteria to follow, but subjective testing can be done (certainly for communication gear) that gives results objective enough for even some of the most demanding customers. These are also supplemented by some expensive test gear that imitates a human torso and the way we pick up sounds (bone conduction etc.), which provides a further reference.
Applying these tests to audio gear would be interesting and challenging, though costly, and possibly not always in a manufacturer's (or audiophile theorist's) favour.
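
As a rough illustration of why "quite a large number of participants" matters statistically, here is a minimal sketch, assuming a simple two-alternative forced-choice protocol scored against the 50% chance level; the trial counts and scores below are made up for illustration, not taken from any test described in this thread:

```python
from math import comb

def binomial_p_value(correct, trials, chance=0.5):
    """One-sided probability of scoring at least `correct` by guessing alone."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

# With 10 trials, 8 correct can still happen by luck fairly often (p ~ 0.055);
# with 100 trials, 60 correct is already hard to explain by guessing (p ~ 0.028).
for correct, trials in [(8, 10), (60, 100)]:
    print(f"{correct}/{trials} correct: p = {binomial_p_value(correct, trials):.3f}")
```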
 
AX tech editor
No, that's a pretty significant pointer for the 'controlled' test methodology, and it is valid only for that one specific set of test conditions, including spatial acoustics etc. The pointer is that the hypermarket buyer would not need to go for the BT.

I agree. So you'd set up the test in an 'average listening room', with agreed speakers, etc. I'm not saying this is perfect, but it is so much more reliable than an unknown, isolated listener, in an unknown environment, who has a stake in his equipment and does a totally uncontrolled test of whether his cap sounds better than another cap. Such a test is really worthless for anyone except the one tester.

If I do a well-controlled test in an average listening room, with accepted good speakers, between a BT and a SONY xyz, and the result is that with 99% confidence you can say there is no audible difference, that's a significant result.
If it is indeed true that a BT is so much better, has such a bigger soundstage, has so much more detail than a SONY xyz, I would expect it to be easily differentiated in such a controlled test, say with 98% confidence that there IS an audible difference.
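
A back-of-the-envelope sketch (my own illustration, not any specific protocol proposed in this thread) of what such a confidence figure implies in practice, assuming a forced-choice test where a pure guesser is right half the time on each trial:

```python
# Keep adding forced-choice trials until a perfect score could no longer be
# explained by guessing at the 1% level (i.e. "99% confidence").
target = 0.01
trials, p_guess = 0, 1.0
while p_guess > target:
    trials += 1
    p_guess = 0.5 ** trials   # chance of guessing every trial correctly

print(f"{trials} consecutive correct answers needed (p = {p_guess:.4f})")
# -> 7 trials (0.5**7 ~ 0.0078): one or two casual comparisons prove nothing
#    either way, which is also why a null result needs many trials to mean much.
```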

jan didden
 
AX tech editor
When communication equipment is tested, subjective listening tests are done as well as all the electrical tests. These employ such things as modified rhyme tests. To get some statistical evidence, in our case, all participants have a hearing test to determine their hearing ability and, more importantly, any frequency dropouts. Then there has to be quite a large number of participants; the more the better. There are other criteria to follow, but subjective testing can be done (certainly for communication gear) that gives results objective enough for even some of the most demanding customers. These are also supplemented by some expensive test gear that imitates a human torso and the way we pick up sounds (bone conduction etc.), which provides a further reference.
Applying these tests to audio gear would be interesting and challenging, though costly, and possibly not always in a manufacturer's (or audiophile theorist's) favour.

Fully agree. You CAN do controlled, subjective tests that give useful, repeatable results, in your case whether a customer pays for the gear or not.

There are some documented examples of such tests done by the audio industry; Floyd Toole did some speaker tests when he was still with Harman. One test result I remember was that the preference rating for a set of 4 different speakers depended on whether the participants knew which speaker was playing (sighted test) or not (blind). Very generally speaking, the larger the speaker, the better it did sighted.

Interestingly, in these tests some participants were experienced listeners who declared beforehand that they knew all about prejudice and bias and were convinced they could be 'objective' and judge the speakers on sound alone. It turned out they could not.


jan didden
 
How do you or F. Toole ensure an adequate listening position for a large set of subjects during the speaker listening test? Usually you can do it for 2-3 people, no more, if we are speaking about a serious high-end listening test with all the spatial and localization information provided. Without that, with a group of 20 people sitting between the speakers, the test is just useless.
 
AX tech editor
I still have no idea why large numbers of people are needed for audibility testing. Could one of you guys (Jan or PMA) please enlighten me?

No, you are right. Even if you have only one listener who can reliably, with a high confidence level, hear a difference in a well-controlled test, that's a clear indication that there IS an audible difference. QED.

If he/she can't, that doesn't mean there is no difference, only that we don't know yet whether there is one.

jan didden
 
Two stars should read "B and S"

For some silly reason, I can't edit my previous post.
I pasted the document name from the very document, and the letters B and S together ended up as **. At least in my browser. And I can't edit that.... :(

Edit: Aaahhh, the letters B and S together mean something special in English.....
So they end up as two stars. Really silly :-(
 
I scanned that document 2quad, thanks: "METHODS FOR THE SUBJECTIVE ASSESSMENT OF SMALL IMPAIRMENTS IN AUDIO SYSTEMS INCLUDING MULTICHANNEL SOUND SYSTEMS". I was interested in the test method, as it was almost an exact description of what was carried out & described below.

It's interesting: when evidence of blind testing is put in front of the same people arguing for it on this thread, it is rejected as having been coached. There's no pleasing or satisfying some people :p

See here

I'll quote it
I'm not being thick; I genuinely am new to this, so I have no real idea of what a controlled subjective listening test is, let alone how to conduct one. In my naive way I thought that's what I'd done. The cables were provided to me by another person who gave me no information about them other than that one had a blue tag and one had a yellow tag. We have never met in person, he was at least 100 km away from me at all times, and no communication took place between us other than what is mentioned below. The listening tests were conducted by me alone in my own home with kit I am very familiar with.
Test 1 consisted of a listening session of 3 tracks, one orchestral, one female vocal and one electronic, with the existing setup to give me a current benchmark; test 2 involved replacing the coax cable linking HiFace and DAC with one of the trial cables and listening to the same three tracks; and test 3 involved replacing the first trial cable with the second one and listening to the same three tracks. My first impression placed my original cable between the two trial cables in terms of the SQ I was hearing, and the best cable was so obviously better to my ears, with my setup, that within 45 minutes of starting the test I registered with the guy who made the cable what my initial opinion was and why. Other than saying "That's interesting" he made no further comment. I continued to repeat tests 2 and 3 with different tracks for a couple of hours that evening and again at intervals the next day. The result was that I became even more convinced that my original opinion was validated; the difference was so obvious it was a 'no brainer', as they say. I reported my confirmed opinion to the friend who made the cables, and he confirmed that the cable I preferred, tagged blue, was the 10 dB attenuated one.
 
Just so you get the level of disingenuous approach that is being used here -
Quote:
Originally Posted by SY
Yet it isn't. 50% chance, single trial with coaching from the cable guy, is not exactly persuasive.
Nah then SY, that comes too close to calling me (and by inference, my forum friend) liars, which I seriously object to - if you cannot accept the following assurances then I am wasting my time and you do not want to be open-minded:
1) there was absolutely and categorically NO 'coaching' from the cable guy - see my post above, where I offer the possibility of confirming this with the guy concerned.
2) I would have no problem with your 50/50 chance between two cables if the difference had been close, but it wasn't - it was so distinct as to be a 'no brainer'.
If you can't accept these assurances unreservedly and at face value, then it negates the whole forum concept of sharing our experience - your choice, come out and smell the coffee or stay blinkered in your own little mindset - I have nothing to gain or lose either way.
Dave.

So I see the same avoidance tactics being used yet again here!

SY's post
Even if ONE listener could reliably distinguish between a BT and a Sony, that would be useful to know. So far, nope, no-one has been able to demonstrate anything even vaguely like that. Not one person.
doesn't exactly ring true now, does it?

So SY, as Rod has asked you already - was there a test between a BT & a Sony that you mentioned?
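
For what it's worth, here is a toy simulation of SY's "50% chance, single trial" point; the guessers below are entirely made up and hear no difference at all, yet must still name a preferred cable:

```python
import random

random.seed(0)
runs = 100_000
# Each simulated "listener" hears no difference and simply picks a tag at random.
lucky = sum(random.choice(["blue", "yellow"]) == "blue" for _ in range(runs))
print(f"Guessers who happened to prefer the blue (attenuated) cable: {lucky / runs:.1%}")
# -> roughly 50%: a single unrepeated choice, however confident it feels,
#    cannot be told apart from a coin flip without many repeated blind trials.
```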
 