Claim your $1M from the Great Randi

Status
Not open for further replies.
Hi,

The difference is metaphysical, admittedly, if you believe that artifacts 120 dB or more down could be audibly significant. That's where John and I part company.

This kind of reminds me of people saying: what the heck, 5% distortion isn't audible, so what if it has 5% distortion?
They all hear it when that same 5% isn't there, however.

Same with the -120 dB distortion artefacts: you don't hear them.
You quite probably don't hear it when one of them is removed either...
Remove a couple more, and all of a sudden it becomes obvious that -120 dB isn't as inaudible as we first thought it to be.
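One way to make this argument concrete: individually tiny artifacts can power-sum to something much less tiny, assuming they are uncorrelated. A quick sketch (the count of 100 artifacts is my own illustrative choice, not a figure from the thread):

```python
import math

def combined_level_db(levels_db):
    """Power-sum a list of uncorrelated artifact levels, each in dB
    relative to the signal, and return the combined level in dB."""
    total_power = sum(10 ** (db / 10) for db in levels_db)
    return 10 * math.log10(total_power)

# A single -120 dB artifact is vanishingly small, but 100 uncorrelated
# ones power-sum to -100 dB: 20 dB higher than any one of them.
combined = combined_level_db([-120] * 100)
```

That still leaves open whether -100 dB is audible, of course, but it shows why "each one is inaudible" doesn't automatically mean "all of them together are inaudible."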

Do two amplifiers that measure the same also sound the same?

Cheers,😉
 
SY said:
The difference is metaphysical, admittedly, if you believe that artifacts 120 dB or more down could be audibly significant. That's where John and I part company.

"Audibly significant" contains two notions: "audible" and "significant." Perhaps you and John part company on the "significant" part which IMHO concerns a value preference (ie, the audibility concerned is not worth the trouble).

As to "audible," Gregory Soo of Emm Labs recently told me Meitner's new DSD gear works as it does, in part, because he pushed the circuit's noise level to -145 dB. It seems, according to Meitner, that "120 dB or more down" is audible. Judging from the reviews his professional and consumer equipment has garnered, the "significance" is worth troubling about, at least if one is concerned with pushing the state of the art.
 
SY said:
Based on what I saw, it is not likely to be an intrinsic property of the cables but rather some sort of interaction between them and the test gear. My guess is that John concurs.

Apparently he doesn't, as even well after Bruno's measurements were made public (the ones in which Bruno measured the same cables John did), John said he stood behind his measurements.

So not only does he apparently not concur, he also apparently dismisses Bruno's measurements.

se
 
Hi,

Depends on what you're measuring, n'est-ce pas?

Assuming amp X and amp Y both undergo a battery of measurements and both show the same measured results, would they necessarily sound the same?

Je pense que non...

Conversely, could I make two amps X and Y that measure and sound the same given that same battery of tests?

Probablement bien...

Cheers,😉
 
Steve Eddy said:
Apparently he doesn't, as even well after Bruno's measurements were made public (the ones in which Bruno measured the same cables John did), John said he stood behind his measurements.

So not only does he apparently not concur, he also apparently dismisses Bruno's measurements.

se

I played around with his test setup and confirmed that he is indeed measuring what he claims to be measuring. The only dispute is the source of the distortion. And I don't even think there's a dispute there, if I understand John correctly.


fdegrove said:
Hi,

Assuming amp X and amp Y both undergo a battery of measurements and both show the same measured results, would they necessarily sound the same?
Cheers,😉

Depends on what's in that battery, nicht wahr?
 
SY said:
I played around with his test setup and confirmed that he is indeed measuring what he claims to be measuring.

I've never had any doubts that he's measuring the distortion he claims to be measuring.

The only dispute is the source of the distortion. And I don't even think there's a dispute there, if I understand John correctly.

Well, all I know is that Bruno's measurements went well below where John is able to measure, yet didn't turn up any sign of the distortion John is measuring, clearly indicating that the distortion wasn't being produced by the cables themselves. Even after that, John said that he stood behind his measurements and would continue to stand behind them until someone could show him exactly what was causing his test gear to produce the distortion he's been measuring.

In other words, he dismisses Bruno's measurements and continues to assume that the distortion is being produced by the cables, NOT until it can be shown that the cables aren't producing such distortion, but until someone can explain why his test gear is producing the distortion he's measuring.

I dunno about you, but it looks to me like there's still a dispute here.

se
 
john curl said:
For the record, this is the situation with my wire 'measurement'. I still measure DIFFERENCES in shielded cables with my test setup. I doubt, at this time, that it is due to distortion in the center wire itself. This was my original hypothesis, because Dr. Vandenhul had measured wire distortion with a different test setup. However, on further investigation of what Vandenhul had measured, it turned out he was operating at a much lower level than I can get my equipment to operate at. I can also see differences between clean and dirty contacts, and from the presence of mumetal near the wire.
At this time, however, I don't know where the distortion is coming from, and Steve Eddy doesn't either.
I do not promote this test any further, because I have run into a 'dead end' where I can measure differences, but they do not reflect similar measurements of similar cables on other equipment. Are there diodes in wires, etc.? Of course there are. Virtually any impurity or oxide should create a barrier of some kind and amount. I don't think, however, that this is the main component of what I am measuring, which is unique and repeatable for a given wire configuration.

This looks pretty clear to me, Steve.
 
It's a sorry thing that Steve Eddy can't let this go. If he can 'trash' me he will. He has called me just about everything over the years. Most of us think this is 'insecurity' on his part, but it does get old sometimes.
For the record, I have a lot of test equipment gathered over the last 30 years. It is NOT brand new, but averages about 15 years behind what is available today. I have modified some older test equipment, like my Sound Technology, with better, lower-noise ICs for lower distortion, and it was calibrated by the factory about 10 years ago; they entirely replaced my circuit boards while they were at it.
I developed this test setup, which would cost about $30,000 if purchased new when it was last available, in order to measure my power amp designs, especially at the transition between class A and class B with a 4-8 ohm load. As my designs use negative feedback and run fairly heavy idle current, this distortion is difficult to distinguish with a standard Sound Technology or the HP339, which were standards in the industry for many decades and are still useful for lab testing. As they are NOT computer controlled, they don't work as well on an assembly line.
Well, in my latest JC-1 design, this transition shows added distortion at -115 dB at about 10 W when I switch between the high bias and low bias settings available on the amplifier. Now, many of you don't design amplifiers, so it might surprise you that I think this distortion is important, but if I can measure it, then I can optimize the amplifier components for lowest distortion. It is like making a better-running engine in an automobile. Those of you with cheap American cars probably don't know what I am talking about ;-) but European and Japanese car owners know what I am referring to.
Now, do I believe that 110-115 dB down is directly audible? No, not as a single tone, BUT as multitone IM generated by a higher-order nonlinearity (a kink in the transfer function), perhaps. In fact, I am counting on it.
My wire tests come from developing this measurement, and unfortunately an individual connecting wire can measure -115 dB down in some cases, so I am stuck having to test the wires used in my test equipment. It may not be the wires themselves but a system interaction. I don't know yet.
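The "kink in the transfer function" point can be illustrated numerically. The sketch below is my own toy model, not John's test setup: it passes a sine through a symmetric crossover-style dead zone and inspects the spectrum. The distortion lands on many odd harmonics rather than one low-order term, which is why multitone IM from such a kink can be nastier than a single harmonic at the same level.

```python
import numpy as np

# -115 dB expressed as a linear amplitude ratio: about 1.8 parts per million.
ratio = 10 ** (-115 / 20)

def crossover_kink(x, dead_zone=1e-3):
    """Toy class-B-style nonlinearity: a small dead zone around zero."""
    return np.sign(x) * np.maximum(np.abs(x) - dead_zone, 0.0)

n = 4096
x = np.sin(2 * np.pi * 8 * np.arange(n) / n)   # 8 exact cycles, no leakage
spectrum = np.abs(np.fft.rfft(crossover_kink(x))) / (n / 2)

h2 = spectrum[16]   # 2nd harmonic: essentially zero (the kink is symmetric)
h3 = spectrum[24]   # 3rd harmonic: present
h9 = spectrum[72]   # 9th harmonic: still present; the series decays slowly
```

The dead-zone width here (1e-3) is an arbitrary illustration; the qualitative behavior, a slowly decaying series of odd harmonics, is the point.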
 
quote:
Originally posted by Arthur-itis
If you haven't figured it out by now, the subjective audio press doesn't like ABX because it's bad for sales.


Wow, thanks for the tip. Say, do you have a peer-reviewed article using appropriate statistical methods showing that the subjective audio press in fact doesn't like blah blah?

Just a history of rags like Stereophile printing whatever they can to try to discredit ABX, but nothing on the follow-ups that debunk them when they do.

I don't understand why it's so difficult for people to grasp: ABX is the standard for detecting small differences in audio.

The claims that it is designed to show no difference or that it is somehow flawed are not supported by any credible data.

www.pcabx has an ABX comparator you can download, and you can learn more about the process there.
 
Konnichiwa,

Arthur-itis said:
Citing a reference to a Stereophile article regarding ABX testing is on a par with popping your radiator cap to check your gasoline level.

Did you actually bother to read the article? If so, you might have noticed that all the criticism of ABX tests and their statistical evaluation was based upon the following AES paper:

Les Leventhal - How Conventional Statistical Analyses Can Prevent Finding Audible Differences In Listening Tests, Preprint 2275 (C-9)

And the JAES (and thus fully peer-reviewed) article:

Les Leventhal - Type 1 and Type 2 Errors in the Statistical Analysis of Listening Tests (JAES, Vol.34 No.6)

Arthur-itis said:
The article in question should then be compared to the following:

Comments on "Type 1 and Type 2 Errors in the Statistical Analysis of Listening Tests" and Authors' Replies
Author(s): Shanefield, Daniel; Clark, David; Nousaine, Tom; Leventhal, Les
Publication: Volume 35, Number 7/8, pp. 567–572; July 1987

Yes, absolutely. I assume you have actually READ the above and noted the clobbering the ABX Mob received (again)?

Arthur-itis said:
Transformed Binomial Confidence Limits for Listening Tests
Author(s): Burstein, Herman
Publication: Volume 37, Number 5, pp. 363–367; May 1989
Abstract: A simple transformation of classical binomial confidence limits provides exact confidence limits for the results of a listening test, such as the popular ABX test. These limits are for the proportion of known correct responses, as distinguished from guessed correct responses. Similarly, a point estimate is obtained for the proportion of known correct responses. The transformed binomial limits differ, often markedly, from those obtained by the Bayesian method.

This one I am not familiar with, but as it is still based on the binomial distribution, it does not address the fundamental issue of small sample size, and all the issues of Type 1 and Type 2 errors remain in force.
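For what it's worth, the standard correction for guessing, which I take to be roughly the transformation Burstein applies to the confidence limits, maps an observed proportion correct into the proportion of "known" (genuinely detected) responses, assuming 50% chance performance:

```python
def known_proportion(p_correct, chance=0.5):
    """Correct an observed proportion correct for guessing:
    observed = known + (1 - known) * chance, solved for known."""
    return max(0.0, (p_correct - chance) / (1 - chance))

# A listener scoring 75% correct on an ABX run is, under this model,
# genuinely detecting the difference on only half the trials.
known = known_proportion(0.75)
```

Applying the same map to the endpoints of a binomial confidence interval for the observed proportion gives limits on the known-correct proportion, which is how I read the abstract.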

Arthur-itis said:
Approximation Formulas for Error Risk and Sample Size in ABX Testing
Author(s): Burstein, Herman
Publication: Volume 36, Number 11, pp. 879–883; November 1988
Abstract: When sampling from a dichotomous population with an assumed proportion p of events having a defined characteristic, the binomial distribution is the appropriate statistical model for accurately determining: type 1 error risk (symbol); type 2 error risk (symbol); sample size n based on specified (symbol) and (symbol) and assumptions about p; and critical c (minimum number of events to satisfy a specified [symbol]). Table 3 in [1] presents such data for a limited number of sample sizes and p values. To extend the scope of Table 3 to most n and p, we present approximation formulas of substantial accuracy, based on the normal distribution as an approximation of the binomial.

I doubt that there are any findings materially different from those presented by Leventhal. After all, statistics have not changed.
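Leventhal aside, the textbook normal-approximation sample-size formula, presumably close in spirit to what Burstein derives, shows how many trials a small difference actually demands. The numbers below (60% true score, 5% Type 1 risk, 20% Type 2 risk) are illustrative choices of mine, not figures from the paper:

```python
from math import ceil, sqrt
from statistics import NormalDist

def abx_sample_size(p1, alpha=0.05, beta=0.20, p0=0.5):
    """Approximate trials needed for a one-sided binomial test of
    H0: p = p0 (pure guessing) against a true proportion p1 > p0,
    with Type 1 risk alpha and Type 2 risk beta."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(1 - beta)
    n = ((z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1)))
         / (p1 - p0)) ** 2
    return ceil(n)

# A listener who hears the difference well enough to score 60% needs
# on the order of 150 trials, far more than a typical 10-25 trial run.
n_trials = abx_sample_size(0.6)
```

The closer the true score is to chance, the faster the required trial count blows up, which is exactly the small-difference regime being argued about here.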

Arthur-itis said:
If you haven't figured it by now the subjective audio press doen't like ABX because it's bad for sales.

No, the "subjective audio press" do not like the ABX Mob because their behaviour is clearly and repeatedly unscientific, and because, on the evidence of the ongoing presentation of their results, they pursue an obvious agenda (as evidenced by the various complaints the ABX crowd has filed with trade regulators over time).

I personally do not care for people who present conclusions of "There is no audible difference between A and B!" as authoritative pronouncements when their data are not capable of supporting this thesis with ANY reasonable level of confidence. In fact, if I based testimony in a trial on a similar set of data and analysis, I'd be guilty of perjury.

Sayonara
 
Arthur-itis said:
I don't understand why it's so difficult for people to grasp: ABX is the standard for detecting small differences in audio.

Some people have an aversion to being classified as just a data point. The old quote from the '60s surreal TV series "The Prisoner" comes to mind: "I am not a number, I am a free man!"
 
Konnichiwa,

Arthur-itis said:
I don't understand why it's so difficult for people to grasp: ABX is the standard for detecting small differences in audio.

You are ALMOST right. I completely UNDERSTAND ABX testing.

ABX is the SELF-DECLARED standard for testing differences in audibility.

Good thing that SERIOUS RESEARCHERS do NOT employ it, for example when researching perceptual coding. If they did, Dolby Digital and MP3 would be a lot worse than they are now....

So yes, it is a standard, but NOT a standard that is widely acknowledged outside the ABX Mob.

Arthur-itis said:
The claims that it is designed to show no difference or that it is somehow flawed are not supported by any credible data.

Leventhal has proven, by the use of the most ELEMENTARY statistics, that the ABX test is highly likely to find "NO DIFFERENCE" in situations where the difference is audible but small. The ABX Mob failed to re-adjust their testing system and criteria following this criticism for nearly 20 years. That is, for nearly 20 years it had been PROVEN to them that their test is unlikely to detect small audible differences, and they failed to adjust their test. Surely you do not wish to suggest that the failure to correct their method is down to gross incompetence and negligence?
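Leventhal's core point is easy to reproduce with elementary binomial arithmetic. The sketch below uses my own illustrative numbers (a 16-trial run, the usual 5% Type 1 criterion, and a listener who genuinely scores 60%) and shows the test missing that listener more than 80% of the time:

```python
from math import comb

def binom_tail(n, c, p):
    """P(X >= c) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c, n + 1))

def critical_score(n, alpha=0.05):
    """Smallest number correct whose probability under pure guessing
    (p = 0.5) is at most alpha."""
    c = n
    while c > 0 and binom_tail(n, c - 1, 0.5) <= alpha:
        c -= 1
    return c

n = 16
c = critical_score(n)          # 12 of 16 correct needed to "pass"
power = binom_tail(n, c, 0.6)  # chance a 60% listener actually passes
type2 = 1 - power              # the test declares "no difference" this often
```

With only 16 trials the Type 2 error is about 0.83: the small-but-real difference is overwhelmingly likely to be reported as "no difference." Whether one draws Leventhal's conclusions from that is another matter; the arithmetic itself is not in dispute.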

Sayonara
 
Konnichiwa,

quote:
Originally posted by Arthur-itis
Citing a reference to a Stereophile article regarding ABX testing is on a par with popping your radiator cap to check your gasoline level.


Did you actually bother to read the article? If so, you might have noticed that all the criticism of ABX tests and their statistical evaluation was based upon the following AES paper:

Les Leventhal - How Conventional Statistical Analyses Can Prevent Finding Audible Differences In Listening Tests, Preprint 2275 (C-9)

And the JAES (and thus fully peer-reviewed) article:

Les Leventhal - Type 1 and Type 2 Errors in the Statistical Analysis of Listening Tests (JAES, Vol.34 No.6)

quote:
Originally posted by Arthur-itis
The article in question should then be compared to the following:

Comments on "Type 1 and Type 2 Errors in the Statistical Analysis of Listening Tests" and Authors' Replies
Author(s): Shanefield, Daniel; Clark, David; Nousaine, Tom; Leventhal, Les
Publication: Volume 35, Number 7/8, pp. 567–572; July 1987


Yes, absolutely. I assume you have actually READ the above and noted the clobbering the ABX Mob received (again)?

quote:
Originally posted by Arthur-itis
Transformed Binomial Confidence Limits for Listening Tests
Author(s): Burstein, Herman
Publication: Volume 37, Number 5, pp. 363–367; May 1989
Abstract: A simple transformation of classical binomial confidence limits provides exact confidence limits for the results of a listening test, such as the popular ABX test. These limits are for the proportion of known correct responses, as distinguished from guessed correct responses. Similarly, a point estimate is obtained for the proportion of known correct responses. The transformed binomial limits differ, often markedly, from those obtained by the Bayesian method.


This one I am not familiar with, but as it is still based on the binomial distribution, it does not address the fundamental issue of small sample size, and all the issues of Type 1 and Type 2 errors remain in force.

quote:
Originally posted by Arthur-itis
Approximation Formulas for Error Risk and Sample Size in ABX Testing
Author(s): Burstein, Herman
Publication: Volume 36, Number 11, pp. 879–883; November 1988
Abstract: When sampling from a dichotomous population with an assumed proportion p of events having a defined characteristic, the binomial distribution is the appropriate statistical model for accurately determining: type 1 error risk (symbol); type 2 error risk (symbol); sample size n based on specified (symbol) and (symbol) and assumptions about p; and critical c (minimum number of events to satisfy a specified [symbol]). Table 3 in [1] presents such data for a limited number of sample sizes and p values. To extend the scope of Table 3 to most n and p, we present approximation formulas of substantial accuracy, based on the normal distribution as an approximation of the binomial.



I doubt that there are any findings materially different from those presented by Leventhal. After all, statistics have not changed.


The above-referenced papers debunk Leventhal.

quote:
Originally posted by Arthur-itis
If you haven't figured it by now the subjective audio press doen't like ABX because it's bad for sales.


No, the "subjective audio press" do not like the ABX Mob because their behaviour is clearly and repeatedly unscientific, and because, on the evidence of the ongoing presentation of their results, they pursue an obvious agenda (as evidenced by the various complaints the ABX crowd has filed with trade regulators over time).

That's your opinion and theirs; it is not the opinion of everybody else doing audio research outside the high end.

I personally do not care for people who present conclusions of "There is no audible difference between A and B!" as authoritative pronouncements when their data are not capable of supporting this thesis with ANY reasonable level of confidence.

Good, because no one has done that. What they've done is say that this person or these persons couldn't hear any difference, unless of course they did hear a difference, which does happen.

I guess that Sean Olive, Floyd Toole, and Jim Johnston, and the BBC and hearing aid companies, are all charlatans being bought off by the pro-ABX lobby, since they all use and endorse ABX for its intended purpose.
 
quote:
Originally posted by Arthur-itis
I don't understand why it's so difficult for people to grasp: ABX is the standard for detecting small differences in audio.


You are ALMOST right. I completely UNDERSTAND ABX testing.

ABX is the SELF-DECLARED standard for testing differences in audibility.

Good thing that SERIOUS RESEARCHERS do NOT employ it, for example when researching perceptual coding.

Really? Jim Johnston at Bell Labs used it and endorses it, and I think he knows a fair amount about perceptual coding.

If they did, Dolby Digital and MP3 would be a lot worse than they are now....

Opinion stated as fact.

So yes, it is a standard, but NOT a standard that is widely acknowledged outside the ABX Mob.

You are completely and utterly incorrect.

quote:
Originally posted by Arthur-itis
The claims that it is designed to show no difference or that it is somehow flawed are not supported by any credible data.


Leventhal has proven, by the use of the most ELEMENTARY statistics, that the ABX test is highly likely to find "NO DIFFERENCE" in situations where the difference is audible but small. The ABX Mob failed to re-adjust their testing system and criteria following this criticism for nearly 20 years. That is, for nearly 20 years it had been PROVEN to them that their test is unlikely to detect small audible differences, and they failed to adjust their test. Surely you do not wish to suggest that the failure to correct their method is down to gross incompetence and negligence?

No, I wish to suggest that Leventhal was debunked in the papers I cited, and that's part of the reason I have no use for Stereophile. They'll cite him but not the people who debunk him.
 