I don't believe cables make a difference, any input?

Theoretical and Audible Effects of Jitter on Digital Audio Quality

Authors: Benjamin, Eric; Gannon, Benjamin
Affiliation: Dolby Laboratories Inc., San Francisco, CA
AES Convention: 105 (September 1998), Paper Number: 4826

Kaoru Ashihara et al., "Detection threshold for distortions due to jitter on digital audio,"
Acoust. Sci. & Tech. 26, 1 (2005)

Tomoharu Ishikawa, Yukio Kobayashi, Makoto Miyahara, "Improving the Transfer Function of a Sound
System to Constant and its Effect on the Reproduction of High Order Sensations," WESTPRAC VII, 175-1,
pp. 393-396, 2000.

Wishes

Jakob,

Is that last paper available on the 'net?

jd
 
Out of this approach, Prof. Meyer et al. probably had the best recent take on such an issue...

That's not encouraging. Meyer never bothered checking whether the players he used were actually capable of full SACD-spec performance. At least one was little better than CD. Nor do I recall that he confirmed all the source material was real SACD rather than remastered Red Book on SACD disks. Many times I've come across 96/24 audio online that a spectrum analyzer shows is brick-walled below 22 kHz.
I suspect that, 'knowing' in advance he would find no difference, he didn't bother with the details.
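
For what it's worth, a crude way to run that check yourself; this is only a sketch of mine, assuming Python with numpy, scipy and soundfile, and the file name is made up:

```python
# Compare the energy above 22 kHz with the energy in the audio band. Genuine
# hi-res material usually shows some ultrasonic content; upsampled Red Book
# drops straight to the noise floor above ~22 kHz.
import numpy as np
import soundfile as sf              # reads 24-bit WAV/FLAC
from scipy.signal import welch

data, fs = sf.read("suspect_96k24.flac")   # hypothetical file name
if data.ndim > 1:
    data = data.mean(axis=1)               # quick mono fold-down

f, psd = welch(data, fs=fs, nperseg=65536)

band_lo = psd[(f >= 1_000) & (f <= 20_000)].mean()
band_hi = psd[(f >= 22_000) & (f <= fs / 2)].mean()
print(f"mean level 22 kHz..Nyquist vs. 1-20 kHz: {10 * np.log10(band_hi / band_lo):.1f} dB")
```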
 
That's not encouraging. Meyer never bothered checking whether the players he used were actually capable of full SACD-spec performance. At least one was little better than CD. Nor do I recall that he confirmed all the source material was real SACD rather than remastered Red Book on SACD disks. Many times I've come across 96/24 audio online that a spectrum analyzer shows is brick-walled below 22 kHz.
I suspect that, 'knowing' in advance he would find no difference, he didn't bother with the details.

That's interesting. I would probably have assumed in his place that a disk sold as SACD would actually be SACD, and that a player sold as an SACD player would have performed according to that spec. It does weaken the test results, but it might be that he sincerely assumed what I would also assume.
Man, this double-blind testing stuff is so full of traps!

jd
 
@ SY,

you're welcome! 🙂


@ syn08,

Jacob, I am sure you also read some of the comments on those articles, e.g.

Comments on "Type 1 and Type 2 Errors in the Statistical Analysis of Listening Tests" and Author's Replies

Authors: Shanefield, Daniel; Clark, David; Nousaine, Tom; Leventhal, Les
JAES Volume 35 Issue 7/8 pp. 567-572; July 1987

Yes, I've read the comments back then and was very surprised. While Dan Shanefield acted in a very professional manner and reached an understandable conclusion (namely that, if not detected, the audible differences might not be of practical relevance; understandable, but controversial, since it wasn't shown what size of difference could have been detected), the replies of the others made me doubt whether they were really searching for the 'truth'.

Otherwise I would have expected a reaction like "oh, thanks, we'll retry taking your hints into consideration" (OK, maybe after exchanging some scientifically justified arguments, to avoid the impression they hadn't thought it through enough).

It's essentially a very interesting (and mathematically solid) way to refine ABX test results, by including a probability of validity of the tested hypothesis.

It is more an approach to refine the ABX (or any other test protocol) with respect to the tolerable error risk for each possible outcome.
This sort of test is conservative by nature, which means it normally tolerates a much higher error risk for false negatives.
But if negative test results are used for generalized claims like "differences must be inaudible, otherwise they would have been detected in the test", then that is unfair, and therefore Leventhal introduced a fairness coefficient to balance the error risks for both error types.
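
To make that asymmetry concrete, here is a minimal sketch of the two error risks under the usual binomial model; the trial count, criterion, and detection probability are assumptions of mine, not numbers from Leventhal:

```python
# A minimal sketch (my own numbers, not Leventhal's) of the error-risk asymmetry:
# for an N-trial ABX scored against chance, the Type I risk comes from the
# binomial tail at p = 0.5, the Type II risk from the tail at an assumed "true"
# detection probability p_true. Every number below is an assumption.
from scipy.stats import binom

N = 16           # number of ABX trials (assumed)
criterion = 12   # "pass" if 12 or more answers are correct (assumed)
p_true = 0.6     # hypothetical listener who hears the difference, but only weakly

alpha = binom.sf(criterion - 1, N, 0.5)      # false-positive risk: a guesser passes
beta = binom.cdf(criterion - 1, N, p_true)   # false-negative risk: a real detector fails

print(f"Type I risk (alpha): {alpha:.3f}")   # ~0.038
print(f"Type II risk (beta): {beta:.3f}")    # ~0.83
# A genuine but weak detector fails this test more than 80% of the time, which is
# why a null result alone says little unless the two risks are balanced.
```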

Out of this approach, Prof. Meyer et al. probably had the best recent take on such an issue when ABX testing the differences between the CD and SACD formats. After a negative result, he never said they are identical; all he stated is that if there are any differences, they are too small to be identified/heard. And that from now on, the burden of proving any differences is on those making the claim.

I think the Meyer/Moran study is a perfect example of the problems in the audio community (even the scientific part of it), because it doesn't discuss earlier studies that came to other results. I've cited some papers earlier, like Thiele's paper from the IRT in Munich, and others.

Meyer/Moran doesn't provide any positive control, so we just have to believe that the participants were able to detect at least something under the test conditions.
They didn't provide even a basic set of measurements covering all parts of the experiments, and AFAIR they themselves conceded that the piece was more informative than anything else.

Could it really be that the perfectly acoustically transparent way to downconvert from hi-res to CD format is the external unit they used?
What about the double-blind tests on the best dither algorithms? Why didn't they feel a need to discuss this obvious contradiction?

BTW, I still wonder about the very different appreciation of different studies; if, for example, Oohashi et al.'s work was 'awfully thin' or 'totally flawed', what does that mean with respect to the methodology Meyer/Moran were using?

This approach is valid for a hypothesis that could (theoretically) be investigated further if the results were positive. But when the tested hypothesis, if true, is on a direct collision course with the fundamental laws of physics (and here's my favourite example, the ByBeee device), then the upfront burden to provide positive testing results is on those making the extraordinary claims. Why do you think nobody is testing perpetuum mobile designs today?

The burden of proof is always on the claimant, but OTOH everyone who does tests is obliged to do _valid_ tests.

There might be examples of a direct collision with the laws of physics, but this is certainly not true for our cable topic. It is more a matter of psychoacoustics, and the findings in this field aren't normally that authoritative.

Otherwise, I have begged about a million times for a valid/good/workable example of what you call a "positive control" in ABX audio testing. All I got was silence.

I've answered your question about a million times... OK, OK, maybe we could agree that you asked twice and that I answered twice? 🙂
Look at this:

http://www.diyaudio.com/forums/mult...ake-difference-any-input-739.html#post1963423

and

http://www.diyaudio.com/forums/mult...ake-difference-any-input-539.html#post1902250

BTW, I really thought that my answers were helpful. 🙂

Wishes
 
That's interesting. I would probably have assumed in his place that a disk sold as SACD would actually be SACD, and that a player sold as an SACD player would have performed according to that spec. It does weaken the test results, but it might be that he sincerely assumed what I would also assume.
Man, this double-blind testing stuff is so full of traps!

jd

That is exactly why I argued about the different appreciation; please compare, for example, the effort Oohashi and his colleagues took in their studies on the 'hypersonic effect' to provide measurements of nearly every part of the systems used, with the Meyer/Moran approach.

Wishes
 
I've answered your question about a million times... OK, OK, maybe we could agree that you asked twice and that I answered twice? 🙂
Look at this:

http://www.diyaudio.com/forums/mult...ake-difference-any-input-739.html#post1963423

and

http://www.diyaudio.com/forums/mult...ake-difference-any-input-539.html#post1902250

Well, I think you should come up with something better than MUSHRA; I have already replied to this:

http://www.diyaudio.com/forums/mult...ake-difference-any-input-540.html#post1902300

At the end of the day, those having a vested interest in promoting cable differences (and I mean cable manufacturers and the associated salesforce) should pay for such testing, just as the drug manufacturers do for testing their products. Expecting others to do your work, while reserving only the right to point fingers and complain about so-called "flawed" test methodologies, is going nowhere. Want to prove physics and thermodynamics wrong? Go ahead and do the work; at least the Nobel is around the corner.

BTW, are what you call "positive controls" included in drug testing? And if not, why?
 
..........................
..........................
At the end of the day, those having a vested interest in promoting cable differences (and I mean cable manufacturers and the associated salesforce) should pay for such testing, just as the drug manufacturers do for testing their products. Expecting others to do your work, while reserving only the right to point fingers and complain about so-called "flawed" test methodologies, is going nowhere. Want to prove physics and thermodynamics wrong? Go ahead and do the work; at least the Nobel is around the corner.

BTW, are what you call "positive controls" included in drug testing? And if not, why?


My understanding is that in the pharmaceutical industry most of the testing is carried out by the industry itself! No doubt there are very strict and state-of-the-art procedures imposed by government agencies. I most certainly would not want any certification or specification of expensive audio equipment confirmed by the companies themselves without a firm industry standard having been imposed. As this thread illustrates, it seems to me that no meaningful set of valid criteria could ever be agreed upon. 🙂

When we see the numerous claims made in their promotional attempts and the use of marketing trickery, I most certainly would not trust the industry at large! Then take the international nature of production: the plastics can come from one place, the metals from another, and the manufacturing process from yet another. Even with the best intentions it would be impossible to adequately monitor such diverse sourcing. Generally speaking, cables used with a modicum of skill and common sense for audio connections - other than power cables - can do little harm other than to your wallet and subjective performance expectations. 😉
 
@ syn08,

You've asked in which way positive controls could be used, and I gave examples: like in the MUSHRA protocol, or, if you're doing paired preference tests, that you have to do additional runs with the control.

Also, ITU-R BS.1116 uses some (but very crude) positive controls to sort out participants who are totally unreliable under test conditions.

Why should we argue at that point about statistical analysis?

A test according to scientific standards has to be objective, valid and reliable.
A test can only be valid if it tests the ability it claims to test.

Let me try to explain it this way:

If a test claims to test the audibility of a difference between two DUTs, then it has to be shown that it doesn't merely test the ability of a listener to fail under test conditions.

Does that make more sense?

It is the baseline in sensory testing that it is as much a test for the participant as it is for the DUT.

Without a positive control it is simply impossible to show that a listener is able to distinguish something under the specific test conditions.
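
Purely as an illustration (the numbers and the simple pooling are my own, not a published protocol), this is roughly what gating on a positive control could look like:

```python
# A minimal sketch (assumed numbers) of gating on a positive control: only
# listeners who reliably detect a known-audible reference difference count
# toward a "no difference heard" conclusion about the DUT.
from scipy.stats import binom

N_TRIALS = 16
PASS = 12   # assumed per-listener criterion (~4% chance of passing by guessing)

# (correct on the positive control, correct on the DUT) per listener -- made up
listeners = [(15, 9), (14, 8), (7, 10), (16, 7), (9, 6)]

validated = [dut for ctrl, dut in listeners if ctrl >= PASS]
print(f"{len(validated)} of {len(listeners)} listeners passed the positive control")

# Pool only the validated listeners' DUT trials and test against guessing.
correct = sum(validated)
total = N_TRIALS * len(validated)
p = binom.sf(correct - 1, total, 0.5)
print(f"pooled DUT score {correct}/{total}, p = {p:.2f} vs. chance")
# Listeners who fail the control tell us nothing about the DUT: under these test
# conditions they could not even hear a difference that is known to exist.
```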

So, if you're asking about controls in drug testing: I don't know, but maybe they have another way to show that the test is valid?
Allergy tests, for example, usually use both a positive control and a negative control.

Wishes
 
If I grabbed the right paper from memory, here's another which employs impressive and properly scientific attention to detail.

That's about SACD vs. DVD-Audio, but you are right, it's an impressive study. Here's a quote from the conclusions:

These listening tests indicate that as a rule, no significant differences could be heard between DSD and high-resolution PCM (24-bit / 176.4 kHz) even with the best equipment, under optimal listening conditions, and with test subjects who had varied listening experience and various ways of focusing on what they hear.
 
My understanding is that in the pharmaceutical industry most of the testing is carried out by the industry itself! No doubt there are very strict and state-of-the-art procedures imposed by government agencies. I most certainly would not want any certification or specification of expensive audio equipment confirmed by the companies themselves without a firm industry standard having been imposed. As this thread illustrates, it seems to me that no meaningful set of valid criteria could ever be agreed upon. 🙂

When we see the numerous claims made in their promotional attempts and the use of marketing trickery, I most certainly would not trust the industry at large! Then take the international nature of production: the plastics can come from one place, the metals from another, and the manufacturing process from yet another. Even with the best intentions it would be impossible to adequately monitor such diverse sourcing. Generally speaking, cables used with a modicum of skill and common sense for audio connections - other than power cables - can do little harm other than to your wallet and subjective performance expectations. 😉

In the US, pretty much all testing is done outside the drug companies, by CROs (contract research organizations). They, however, have no control over the claims made by the drug companies and their marketing departments.
 
@ rdf & syn08,

Yeah, it is quite impressive, but there are some interesting points: first, they weren't using controls, and second, quite surprisingly and contrary to their abstract, 4 of their participants did pass the 20-trial ABX with 16 or more correct answers.
AFAIR, given the overall number of participants, that was well above the expected value (Erwartungswert).

So, one could argue that they didn't need a positive control, as the test provided a positive outcome.
The weak points were that they didn't have enough time to run additional tests with these 4 participants, and that they didn't use a negative control, so we can't be sure that the listeners were able to detect a difference caused only by the EUT.
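
Just to put a number on "above the expected value" (the panel size below is a guess of mine, not a figure from the paper):

```python
# How likely is 16-or-better out of 20 ABX trials by pure guessing, and how many
# such "passers" would chance alone produce in a panel of assumed size?
from scipy.stats import binom

p_single = binom.sf(15, 20, 0.5)   # P(>= 16 correct out of 20 when guessing)
n_listeners = 145                  # hypothetical panel size
expected_passers = n_listeners * p_single

print(f"P(>=16/20 by chance) = {p_single:.4f}")                                 # ~0.0059
print(f"expected chance passers among {n_listeners}: {expected_passers:.2f}")   # ~0.9
# Seeing 4 such scores where chance predicts fewer than one is indeed above the
# expected value -- though without retests or controls it remains suggestive only.
```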

Wishes
 
@ syn08,

You've asked in which way positive controls could be used, and I gave examples: like in the MUSHRA protocol, or, if you're doing paired preference tests, that you have to do additional runs with the control.

Nope, here's what I asked for:

a valid/good/workable example of what you call a "positive control" in ABX audio testing.

Also, ITU-R BS.1116 uses some (but very crude) positive controls to sort out participants who are totally unreliable under test conditions.

So should I understand there are no procedures developed to your satisfaction? Then it's time to work on that.

Why should we argue at that point about statistical analysis?

A test according to scientific standards has to be objective, valid and reliable.
A test can only be valid if it tests the ability it claims to test.

Let me try to explain it this way:

If a test claims to test the audibility of a difference between two DUTs, then it has to be shown that it doesn't merely test the ability of a listener to fail under test conditions.

Does that make more sense?

It is the baseline in sensory testing that it is as much a test for the participant as it is for the DUT.

Without a positive control it is simply impossible to show that a listener is able to distinguish something under the specific test conditions.

Not that it makes more sense, but I can somehow persuade myself to agree with everything you said: yes, current protocols are crap, studies are flawed, no positive controls, Meyer was not thorough, yada... yada... yada...

The only point you missed (or avoided) is who's supposed to do the right (and quite complex) work, to your satisfaction. Why should anybody but the manufacturers and the associated salesforce invest in such a massive study? And why is this not happening?

Again, sitting on the fence and pointing fingers doesn't prove anything.
 
That's about SACD vs. DVD-Audio, but you are right, it's an impressive study. Here's a quote from the conclusions:

How about this one, right from the link, bolding mine?

The results showed that hardly any of the subjects could make a reproducible distinction between the two encoding systems.

Implying the authors felt some could, which is what I further recall from reading the paper. At least one subject's results suggested a solid ability to differentiate, and the authors regretted not being able to pursue it further.
All of which speaks less to the question of cable audibility than to the way test results are reported here.
 