Claim your $1M from the Great Randi

Status
Not open for further replies.
Mr. Curl:

I do know what ABX is. First, it's not a TEST; it's a comparison.

My contribution? What do I need to have contributed to understand that there is a lot of hype and ******** being passed off as science in the High End?

You are entitled to your opinion, I'm entitled to mine.

Until and unless there is a new, more reliable, more discerning way to evaluate audio gear that removes bias and allows one to use one's ears, I'll stick with ABX.

In my experience, the people screaming the loudest about ABX and DBTs being flawed are the ones who stand to lose. It is not just an opinion, however, that the ABX form of comparison is the standard for evaluating differences in audio everywhere but the high-end community, and I've pointed out some of its users.

Maybe you believe that Floyd Toole is wasting his employer's time and money by using ABX, but I don't, and I don't think Harman does either.

I'm a firm believer in maximizing my listening enjoyment by getting the most from my hi-fi dollar.

I know that that really means getting the best speakers one can afford, combined with fixing the listening environment and maybe some active EQ.
 
As a general question to the group: how many are aware of the facilities at the Harman campus for doing DBTs? They are used in the development of the products made by the various companies under the Harman umbrella.

Not to mention NRC. Of course, that's where Harman recruited Toole from.

There are two misconceptions that keep being said over and over:

1. Blind testing = ABX. This just is not the case. There are a variety of types of blind tests; ABX is just one of them. If you don't like ABX or it's unsuitable for a particular comparison, you can use a different method. There's a million bucks out there; isn't it worth setting up some sort of blind test to snag it?

2. ABX doesn't show differences. Total BS; there are LOTS of perceivable differences that ABX tests have demonstrated: very fine sensitivity to level mismatch, to EQ, to polar pattern, channel balance, amplifier overload characteristics...

ABX is often scorned because the results of the testing have not proved the "clearly audible" phenomena that many people wish could be shown. That's the downside to doing things scientifically: your cherished hypotheses may not withstand the search for evidence. So the poor messenger gets shot.
 
Hi,

In post #401 John Curl wrote

Yes, read that too....

and the presence of mumetal near the wire.

....only I don't think that necessarily means shielding of an I/C...Or does it?

The reason I asked is that I had the impression from your previous post that you automatically assumed mu-metal shielding.
Somehow I don't think that was implied but I could be wrong...

Cheers,😉
 
Well, there's not if you can't detect these things audibly in a blind test. Otherwise, the prize money is still there, metaphysical word games notwithstanding. Randi has confirmed this in writing at his web site and via email. If he doesn't cough up, there's no shortage of lawyers who are willing to extract it from him; I'm sure you'd take that one on contingency.
 
Arthur, do you know Floyd Toole? Do you work with him?
Most of you have little practical understanding of ABX testing. If you did, you would know that between properly equalized electronics, it is virtually impossible to hear any differences, BECAUSE of the test requirements (limitations). Since I design amps and preamps, I ignore ABX testing, but wish anyone else, who might be my competitor, to embrace it with all enthusiasm! ;-)
 
Kuei Yang Wang thinks that Leventhal had something damaging to say on ABX, but obviously missed this in the Stereophile article:

"Les Leventhal's critique of the statistical analysis commonly used in blind
subjective testing is misleading, erroneous, and borders on the incompetent.
His letter is written in a style that prompts the casual reader to think
"Someone has finally figured out what's wrong with all those blind tests
where they don't hear anything." Not only has Leventhal failed to prove his
case; he has demonstrated his own lack of understanding of how the
audiophile benefits from double-blind testing. "
 
Folks, just for completeness, I might mention that I have conversed with Floyd Toole, Drs. Lipshitz and Vanderkooy, David Clark (builder of the ABX box) and Les Leventhal on this subject over the years.
It has been shown that the worst caps (tantalum) that we could find, could not be detected in an ABX double-blind test, by one or two of the persons mentioned above. If the worst caps that we could find can't be detected, what is the point? Settle for an IC 150 and listen to music.
 
john curl said:
It has been shown that the worst caps (tantalum) that we could find, could not be detected in an ABX double-blind test, by one or two of the persons mentioned above. If the worst caps that we could find can't be detected, what is the point? Settle for an IC 150 and listen to music.


Funny you mention tantalums. Some of the most raved-about amps of all time (by the same golden ears who claim to have detected single-resistor directivity), the Krell KMAs, use AMPLE amounts of tantalums (and no, NOT in the power supply, but in the sensitive input differential double pair).
Did we hear any of the golden-eared reviewers say anything but the usual nonsensical high-end hyperbole ("airy", "liquid", "silky", "holographic", etc.)?

And if a person (*) cannot hear a tantalum in the circuit if he/she doesn't know if it is there, yes exactly - "what is the point" ?

(*) not an ordinary person, but the world's golden-eared high-end gods
 
Not ALL tantalum caps are bad, just some cheap offshore devices. In the '60s and '70s we all used tantalum coupling caps. Later, in 1978, I presented a paper at an IEEE conference on audio which showed significant nonlinear distortion in both tantalum and ceramic caps. Later, Walt Jung and Dick Marsh pointed out the effects of dielectric absorption in audio caps. So tantalum caps can have both linear and non-linear distortion, and the leads are usually magnetic as well, even with the best examples. What about a semi-defective component, made for the lowest possible price? Yes folks, you can actually measure differences between many cheap and expensive components.
Today, I tend to avoid all coupling caps, and use only the best bypass caps that can be used in the price range of the product. Don't tell HK! ;-) Let them find out for themselves.
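The dielectric absorption that Jung and Marsh described is conventionally modeled as a main capacitance with a slow R-C "soakage" branch in parallel. Here is a minimal numerical sketch of that model; the component values are purely illustrative, not measurements of any real tantalum part:

```python
# Dielectric absorption, modeled as a main capacitor with a slow
# parallel R-C "soakage" branch (the standard lumped DA model).
# All component values below are illustrative only.

C_MAIN = 1e-6    # main capacitance, 1 uF
C_DA = 0.01e-6   # soakage branch capacitance (~1% DA)
R_DA = 1e6       # soakage branch resistance, 1 Mohm

def recovery_voltage(t_end=0.1, dt=1e-5):
    """Soak the cap at 1 V, short the main terminals, then open-circuit
    it and integrate the charge creeping back from the DA branch."""
    v_main = 0.0   # main cap just shorted to 0 V
    v_da = 1.0     # branch cap still holds the soaked 1 V
    for _ in range(int(t_end / dt)):
        i = (v_da - v_main) / R_DA   # current through the DA branch
        v_main += i * dt / C_MAIN
        v_da -= i * dt / C_DA
    return v_main

print(f"recovered voltage: {recovery_voltage() * 1000:.2f} mV")
```

With these values the shorted cap "creeps back" to roughly C_da/(C_main + C_da), about 1% of the soak voltage; that stored charge is the memory effect that turns up as signal-dependent error when such a part is used for coupling.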
 
john curl said:
It has been shown that the worst caps (tantalum) that we could find, could not be detected in an ABX double-blind test, by one or two of the persons mentioned above. If the worst caps that we could find can't be detected, what is the point? Settle for an IC 150 and listen to music.

Just to clarify: even if just one person in a group can detect a difference correctly with greater-than-random chance, doesn't this prove the effectiveness of the test? Not everyone is going to have the same sensitivity to the same artefacts, as in video, where I know people who seem unable to detect motion blur in MPEG files until I hit the freeze-frame button and point out the blocks to them... if only audio were that easy to analyze 😉
 
Konnichiwa,

Arthur-itis said:
Kuei Yang Wang thinks that Leventhal had something damaging to say on ABX, but obviously missed this in the Stereophile article:

"Les Leventhal's critique of the statistical analysis commonly used in blind subjective testing is misleading, erroneous, and borders on the incompetent. His letter is written in a style that prompts the casual reader to think "Someone has finally figured out what's wrong with all those blind tests where they don't hear anything." Not only has Leventhal failed to prove his case; he has demonstrated his own lack of understanding of how the audiophile benefits from double-blind testing. "

Hmmm. This was the answer from Dave Clark, who had a clear financial interest in the ABX gizmo and a great emotional investment in the ABX method. Les Leventhal's criticism of the statistics employed was answered by Clark with incoherent, emotional rambling like the above, with not a SINGLE point Leventhal made about statistics being addressed in a manner that would actually disprove it.

With the letter from which you quoted above, Clark demonstrated an appalling lack of understanding of the most basic statistics and the usual sour-grapes attitude shown by his kind.

So, no I did not miss it at all.

Incoherent and statistically plainly wrong ramblings to answer a simple, straightforward and mathematically provable criticism of the statistics employed by Clark. I fail to be impressed. Your quoting this suggests that your perception is similarly removed from reality.

Sayonara
 
Konnichiwa

arniel said:
Just to clarify: even if just one person in a group can detect a difference correctly with greater-than-random chance, doesn't this prove the effectiveness of the test?

Well, it proves not just the effectiveness of the test but also that the observed effect is in fact "true".

THAT IS, UNLESS you choose to exclude the most outlying datapoints (a common practice in statistics), but doing so has implications for the analysis. So if, as has happened on occasions where one or two persons achieved a significant score, the "lucky coins" are excluded, you must also exclude the "unlucky coins" (i.e. the most extreme outliers in the other direction) AND you must consider the rest of the data as a block, analysed appropriately.

So if you have (say) 10 test subjects, each of whom participates in a 16-trial ABX test, and we have one person scoring 8/8 and another 14/16, and we exclude them from the dataset as "lucky/unlucky coins", with the remaining 8 people scoring on average 10/16 and none higher than 11/16, you would be in a position to conclude at better than the .05 significance level that the difference evaluated IS AUDIBLE.
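The pooled analysis sketched above can be checked with an exact one-tailed binomial test. The figures below are just the example numbers from this post (8 remaining subjects averaging 10/16 correct, i.e. 80 of 128 pooled trials) against a 50% guessing rate:

```python
from math import comb

def binomial_p_one_tailed(correct, trials, p_chance=0.5):
    """Exact probability of scoring `correct` or better by guessing."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
        for k in range(correct, trials + 1)
    )

# 8 subjects x 16 trials, averaging 10/16 correct -> 80/128 pooled
pooled = binomial_p_one_tailed(80, 128)
single = binomial_p_one_tailed(11, 16)
print(f"best individual score (11/16): p = {single:.3f}")
print(f"pooled block (80/128):        p = {pooled:.4f}")
```

No individual 11/16 score is significant on its own (P(X ≥ 11 | 16, 0.5) ≈ 0.105), yet the pooled block lands well under the .05 level, which is exactly the point about analysing the remaining data as a block.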

Nota bene, the ABX Mob usually prefers to focus on the individual sets of data and would conclude and publish that, after excluding the one "lucky coin", all other test data suggests that no difference is audible, as no subject could identify the difference with statistical significance, in effect ignoring the fact that their accumulated data actually suggests audibility. It is not confidence-inspiring that the ABX Mob almost never publishes the full experimental data to allow a more complete analysis (and never mind the other fundamental problems with many of the documented test setups).

In fact, the ABX Mob and their test FAILED to find differences already acknowledged as audible in other blind tests (which were monitored and peer-reviewed within the AES), a substantial pointer that their test method is insufficiently sensitive for the reliable exclusion of small differences, a fact however not acknowledged by the ABX Mob and carefully disguised in their publications.

To repeat it in the simplest terms: the limited datasets usually employed mean that it is NOT possible to reject with any certainty the hypothesis of a small audible difference EVEN if the test fails to show one, yet the ABX Mob regularly publishes pronouncements on the audibility (or rather the lack thereof) based on such data.
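Leventhal's Type 2 (statistical power) argument alluded to in this thread can be made concrete with a short calculation. The sketch assumes a single 16-trial ABX run, a one-tailed .05 criterion, and a hypothetical listener who answers correctly 60% of the time:

```python
from math import comb

def tail(k_min, n, p):
    """P(X >= k_min) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

N = 16
# Smallest score that reaches p < .05 under pure guessing:
criterion = next(k for k in range(N + 1) if tail(k, N, 0.5) < 0.05)

# Hypothetical listener who answers correctly 60% of the time:
power = tail(criterion, N, 0.6)
print(f"pass criterion: {criterion}/{N}")
print(f"power: {power:.2f}, Type 2 error: {1 - power:.2f}")
```

With these assumptions, a listener who genuinely hears the difference six times out of ten still fails the test more than 80% of the time, so a null result from so few trials cannot be read as proof of inaudibility.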

arniel said:
Not everyone is going to have the same sensitivity to the same artefacts, as in video where I know people who seem to be unable to detect motion-blur in MPEG files, until I hit the freeze-frame button and point out the blocks to them... if only audio was that easy to analyze 😉

This is the next problem.

Double-blind testing ABX-style suffers from a number of problems that ensure that in effect all findings presented by this group must be considered with extreme caution.

1) General test setups tend to vary between seriously sloppy and deliberately crippled to obscure any small (or even moderate) differences, for example as presented here:

"The Audio Alchemy DDE Version 3.0 DA converter was driven from the digital outputs of the Marantz CD-63. The same/different comparison was between the DDE's outputs and the Marantz analog outputs, however the analog signal was put through another AD-DA conversion in a Marantz CDR-610 in record mode."

http://www.pcavtech.com/abx/abx_cd.htm

For the record, the CDR-610 is a "Pro" CD recorder with pretty poor AD and DA conversion, and far from a "wire with gain".

2) The propensity of people who "believe" in audible differences to hear a difference where none exists (i.e. where the test pair is "the same"), and the equal propensity of people who "disbelieve" NOT to hear a difference even where one exists (in other words, the ability of people to hear exactly what they expect to hear), which should disbar anyone with a fixed belief or disbelief from participating in the test; something that especially applies to the Randi-style challenges regularly issued by the ABX Mob.

3) The lack of interest/relevance of certain potentially audible differences to a given person, as well as the lack of analytical hearing training in most audio engineers, audiophiles and the general public, which prevents many people from reliably identifying HUGE differences (I "ABX"-tested polarity reversal in one channel once; most participants failed their "ABX" test on this alone).

4) The data generated from the tests is subjected to inappropriate statistical analysis if the aim is to ascertain with reasonable certainty whether something is audible or not; the analysis applied only makes it reasonably likely that small differences are declared inaudible when in fact they are audible.

5) The often deliberate and fundamental misrepresentation of the results of the flawed statistical analysis as proof of the inaudibility of any differences, regardless of facts and reality, especially in areas where the ABX Mob feels differences SHOULD be inaudible.

The net result is that the data itself is highly suspect, the statistical evaluation of the data is fundamentally flawed (a fact repeatedly pointed out to the experimenters but studiously ignored), and finally the statistical results are misinterpreted in publication. To me at least this constitutes a comprehensive indictment and, AS ALL THE FACTS are known to the ABX Mob, makes me question their motives in the strongest possible terms.

Sayonara
 
If you did, you would know that between properly equalized electronics, it is virtually impossible to hear any differences, BECAUSE of the test requirements (limitations).

That's actually not correct. See, for example, "Some Amplifiers Do Sound Different," Carlstrom, Krueger, and Greenhill, Audio Amateur 13:3, 30 (1982) comparing an ARC D-120 to a CM Labs CM914a using ABX. Group score was 43/45.
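For the group score quoted, the exact binomial arithmetic shows how far 43/45 is beyond chance (assuming a 50% guessing rate per trial):

```python
from math import comb

# Probability of scoring 43 or better out of 45 by pure guessing
p = sum(comb(45, k) for k in range(43, 46)) / 2**45
print(f"chance of 43/45 or better by guessing: {p:.1e}")
```

A result that extreme under pure guessing has a probability on the order of 10^-11, so that test clearly resolved a difference between the two amplifiers.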
 
Kuei Yang Wang said:

1) General testsetups tend to vary between seriously sloppy and/or deliberatly crippled to obscure any small (or even moderate differences, for example as presented here:

Um, as opposed to the rigorous analysis performed by the proponents of those lumps of rock we were gassing about at the start of this thread 😉

Thorsten,
It appears you're catching up with DestroyerX in the posting-coherency stakes! :clown:

One of the most beautiful effects of the internet is how it can bring an East German in North London so close to a Brazilian native in such an entertaining way 😀 ...
 
Konnichiwa,

arniel said:
Um, as opposed to the rigorous analysis performed by the proponents of those lumps of rock we were gassing about at the start of this thread 😉

Nope, not at all. As remarked before, I have zip opinion on Shakti Stones themselves, never having had any exposure to them. However, I am familiar with the effects of crystals (yup, new age/feng shui stuff) in general and in the context of my system and I have blind tested them.

What I am taking umbrage at are two main issues:

1) The pseudo-scientific mindset prevalent in that poor excuse that calls itself modern "science", which will deny even the proof of its own eyes if it disagrees with established dogma and doctrine (in audio as much as outside it).

2) The making up or misinterpreting of data to support the dogmatically and doctrinally "correct" position.

Both of these effectively eliminate any possibility of a serious and level-headed investigation of phenomena that appear to contradict existing dogma and doctrine, regardless of the factual situation; for that group it is basically a case of "What cannot be (explained in their terms) must not (be allowed to) be".

Sayonara
 
SY, I don't know where you get your info about ABX testing, but it is virtually worthless for amps and preamps, UNLESS the devices under test significantly change the amplitude response, or have gross distortion. I generally don't design amps and preamps with gross distortion, or low damping factor, so it is worthless for me to use ABX to try to find any differences.
EVEN IF I could find a difference with an ABX test, it is one of the most insensitive ways to find anything. This follows partly from Les Leventhal's analysis of Type 1 and Type 2 error rates in ABX testing.
 
john curl said:
ABX testing, but it is virtually worthless for amps and preamps, UNLESS the devices under test significantly change the amplitude response, or have gross distortion.

Why? :scratch:

What method of blind testing is then appropriate? Would you say ABC/HR is also insensitive? That would eliminate the two most widely used methodologies. But maybe that's why...
 