I don't believe cables make a difference, any input?

Yes, if you can get them. Usually, in well-funded DBTs, the listeners are trained and have passed standard audiometry tests. If you can't get them, I suppose any random selection of "cable believers" would do nicely - if any of them would be brave enough to put their hands up for the challenge.

Is an audiometry test good enough reason to expect someone to be able to detect cable differences? The "objectivists'" comments so far show it is not, and most subjectivists may agree.
Also, ask 100 people in the street if they believe cables make any difference. A huge percentage will answer yes. Of them, how many are trained, have a decent system, know a few things about music or audio, know what differences to look for? Very few. Is a test with these listeners therefore scientific enough, no matter what the result will be? I don't think so, but then, I'm just a believer. :)
 
And 'valid' means what here? Surely it doesn't mean 'eliminates all unconscious bias'? 'Known good practice' is known to whom?

Valid because thousands of dudes in cable threads all over the internet claim it to be "valid". Duh! :D

Known to all the Dudes in all the cable threads. Double Duh!! :D

Do you consider that they might influence cable audibility and if not, why not?

No because so many Dudes in all the cable threads all over the internet reject that. Triple Duh!!! :D
 
And 'valid' means what here? Surely it doesn't mean 'eliminates all unconscious bias'?

No, minimizes it to a level where it's unlikely that known bias effects will corrupt the result, hence the double-blind nature of the test.

'Known good practice' is known to whom?

Accumulated knowledge from audio engineering research and good experimental design. For instance, peer-reviewed articles usually explain in detail how their DBTs were designed and implemented, and SY's protocol seems aligned with the commonly held understanding of how a good DBT should be managed.

Oh, that word 'valid' once again. I do hope you elucidate it; to me it's starting to bear some of the hallmarks of a weasel word.

I'm curious, what exactly and specifically is the problem with the word 'valid' and with the concept of validity?

Well we're dealing with what constitutes audible reality here, so 'unlikely' just isn't rigorous enough.

Actually, we are dealing with experimental design. Part and parcel of that is dealing with probabilities. So words such as 'likely' and 'unlikely' come up all the time.

You seem to want a certain level of rigor to be achieved in the design of the test. Fair enough. So, I would be interested to know (a) your suggested protocol (or enhancements to the existing protocol) for ensuring the level of rigor that you expect, and (b) what the actual level of rigor looks like - is it a numeric confidence level, a qualitative description, or what?
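For what it's worth, here is one common way a numeric confidence level gets expressed for a forced-choice listening test: the probability of scoring at least as well by pure guessing. This is only a generic sketch - the 16-trial / 14-correct numbers are illustrative assumptions of mine, not anything specified in SY's protocol:

Code:
# Hedged sketch: exact binomial "confidence" for a forced-choice listening test.
# The trial counts below are illustrative assumptions only.
from math import comb

def p_by_guessing(correct, trials, p_chance=0.5):
    """One-sided probability of scoring `correct` or more purely by chance."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(correct, trials + 1))

print(p_by_guessing(14, 16))   # ~0.0021, comfortably below a 0.05 criterion
print(p_by_guessing(10, 16))   # ~0.23, not distinguishable from guessing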

Incidentally, did you read my earlier comments about potential noise modulation effects owing to RF ingress? Do you consider that they might influence cable audibility and if not, why not?

Sorry, not my area of expertise, although it sounds interesting. There are plenty of talented people who read this forum so there might be someone who can give you some feedback.

Also the Hydrogenaudio forum has some very technically oriented readers who might be able to make some useful comments.
 
You seem to want a certain level of rigor to be achieved in the design of the test. Fair enough. So, I would be interested to know (a) your suggested protocol (or enhancements to the existing protocol) for ensuring the level of rigor that you expect, and (b) what the actual level of rigor looks like - is it a numeric confidence level, a qualitative description, or what?

I've snipped out your other points and hopefully, they'll get dealt with as I respond to this. If not and you feel I've left out something vital, please do get back to me.

I'm after an extreme level of rigor for one reason: this question is one I originally put to Steve Eddy, and he has so far not given it the time of day. So perhaps you're answering the question I put specifically to him, but without bringing along his ideological baggage. The rigor is essential because Steve is specifically proposing to override perception when he gets a certain result out of this test. In short, he's saying that 'hear a difference' only applies when there's a rejection of the null hypothesis in a DBT; with all other results, 'perceive a difference' is to be used.

By doing this, he's set the result of an arbitrarily designed DBT over and above perception in certain cases. He thereby denies the experience of those who describe differences in cable sound. This denial is arbitrary, because the test design is arbitrary in various respects.

Within the scientific method, such a proposal is unprecedented. We pretty well all recognise that perception is not infallible, but in cases where it's likely to be a problem, the accepted solution is to cross-check perception with perception. Perception, whenever it does turn out to be fallible, is never fallible in an arbitrary way; there always turn out to be patterns which can be accounted for. Steve's proposal introduces a "back door" whereby any result gained through perception can be overturned on a whim. Put more simply, this is no longer science, but religion. Authority has been attributed to a DBT to override perception, and for this to be acceptable, it has to be totally reliable. Since I believe total reliability to be impossible, I contend that we accept this at our peril.
 
No, you didn't. You have suggested no appropriate "positive controls" whatever. Please remember, a positive control is a stimulus of the thing to be detected, at levels above the known detection thresholds. All you could come up with was frequency response, which is NOT the thing to be detected.

You are simply using a very narrow definition of the phrase "positive control"; in the more general scientific usage, a positive control is an instrument to show that the experimental design is functional with respect to the objective of the test.

Of course, it would be nice to cover every conceivable effect with varying positive controls at different sensitivity levels, but practical considerations (time frames for tests are usually not unlimited) will normally not allow that.

In general, we are trying to test whether participants in a test are able to detect an audible difference (if one exists) that is below the known thresholds.
Or in other words, we don't have a clue what the reason for any audible difference will be; in such a situation it is a very good idea to ensure that the participants reach the highest possible sensitivity level with respect to the detection of small audible differences. (Small means with respect to the measurements we are able to do and in comparison to the signals we are dealing with.)
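Purely as an illustration of what such a positive control could look like (nothing here is part of the actual protocol - the 0.5 dB offset, the file names and the use of the soundfile package are my assumptions), one simple version is the same programme material with a small, known level change applied:

Code:
# Hedged sketch: build a "known small difference" stimulus pair for a positive
# control. Assumes "reference.wav" exists and the soundfile package is installed;
# the 0.5 dB offset is only an assumed, plausibly just-detectable difference.
import soundfile as sf

x, fs = sf.read("reference.wav")        # hypothetical reference recording
offset_db = 0.5
y = x * 10 ** (-offset_db / 20)         # attenuate by the known amount (avoids clipping)

sf.write("control_same.wav", x, fs)     # unaltered copy
sf.write("control_diff.wav", y, fs)     # known-different positive control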

Wishes
 
@terry_J,

As said before (several times - did we already discuss redundancy in this thread? :) ), we are not only talking about the specific test of tubeguy/SY, but more generally about various aspects of testing and blind testing.

But the goal of every (even if only approximately) scientific procedure is to eliminate most of the things in which you have to "believe".
It simply doesn't make sense to rely on something because you "believe" that it is functional.

Of course, if tubeguy is using his system and his interconnects, and if he did a lot of (or at least some) training under the specific test conditions and is able to get positive results, then that is already a sort of positive control.
But that way you do not know if he is in good "working condition" at the final test.

Wishes
 
Sure, and SY's protocol is valid and follows known good practice.

Any comment on the validity of a test design in advance is pure speculation, as it does not take into account the most important part of the test system (i.e. the listener).

And parts of SY's protocol are already known to favor the null hypothesis.
See for example Tom Nousaine's article on this topic, or look up the experimental data of the well-documented DBTs, like the Stereophile DBT on amplifiers I've linked to many times.

Only the training of tubeguy will eventually help to overcome these issues (if he's able to get used to this protocol).

Wishes
 
@abraxalito,

of course you're right regarding the constructivism; I'm struggling with this language.

What I tried to say is that I am opting for the more pragmatic approach; while what we perceive might not be real, these procedures have shown, to a sufficient level (at least for me), that they work for the group.

If somebody really perceives a difference between two DUTs, and this perceived difference is based on a difference in the acoustical stimuli, then in principle he is able to perceive that difference under blind test conditions too.
Provided that the listener is able to get used to the specific test conditions, or that we (as really exceptional experimenters) are able to work out a test in which a listener does not even recognize that he is participating in a double-blind test. :)

I once tried that latter approach and was quite satisfied with the results.

Wishes
 
I was thinking. I know, that's a first!

Human hearing covers roughly the same range as the frequency specs on most commercial amps, right? 20 Hz to 20 kHz.

Has no one ever measured a cable's distortion levels within that range? Is there even a relatively easy way to do that?

I just keep thinking about how fragile and variable blind testing is, and that makes me want to go in the other direction, where accepted human hearing limits would be the only unstable part of the equation.

The only problem I see is that generating tones is not the same as putting music through the cable, so would there be a way to compare the frequency content of an entire song played through cable A with that of cable B?
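Purely as a sketch of how that comparison might be done (the file names, capture method and analysis details are all assumptions on my part, not something anyone here has actually run): record the same track through each cable at matched levels, then compare the long-term average spectra.

Code:
# Hedged sketch: compare long-term average spectra of one song captured through
# two different cables. Assumes level/time-matched WAV captures already exist
# and that numpy and soundfile are installed.
import numpy as np
import soundfile as sf

a, fs = sf.read("song_cable_A.wav")   # hypothetical capture through cable A
b, _  = sf.read("song_cable_B.wav")   # hypothetical capture through cable B
if a.ndim > 1: a = a[:, 0]            # use one channel for simplicity
if b.ndim > 1: b = b[:, 0]
n = min(len(a), len(b))

def avg_spectrum(x, seg=8192):
    """Average windowed FFT magnitude over the whole track."""
    win = np.hanning(seg)
    frames = [x[i:i + seg] * win for i in range(0, len(x) - seg, seg)]
    return np.mean([np.abs(np.fft.rfft(f)) for f in frames], axis=0)

diff_db = 20 * np.log10(avg_spectrum(a[:n]) / avg_spectrum(b[:n]))
freqs = np.fft.rfftfreq(8192, 1 / fs)
i = np.argmax(np.abs(diff_db))
print("largest spectral difference: %.3f dB near %.0f Hz" % (diff_db[i], freqs[i]))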
 
You are simply using a very narrow definition of the phrase "positive control"; in the more general scientific usage, a positive control is an instrument to show that the experimental design is functional with respect to the objective of the test.

The second part of this sentence is correct - and contradicts the first part when it comes to the issue at hand: non-mundane differences in cables. Well done!
 
In response to a direct question, I might point out that at normal test equipment measuring levels, most audio cables have no significant distortion. About 15 years ago, following VDH's lead in finding a change of resistance at very low voltage levels, I decided to make some distortion measurements on cables at approximately 30 times lower than normal measurement voltage, and I found differences between various audio connecting wires. I would have preferred measuring 100 to 1000 times lower, but my test equipment was never designed to measure in that region, as NOISE dominates any test done there due to the design of the test equipment and its method of distortion measurement.
My first 'hypothesis' or opinion was that I was measuring what VDH had measured earlier and refers to as 'Cross Crystal Distortion' or CCD. However, following up on VDH and what he wrote about his test, my test voltage levels were off by about 1000 times from his, so I am apparently measuring something else. I now think that the most likely hypothesis is non-linearity in the ground return connections in the cables themselves, often made with cheap shielding material and mechanical connections.
In any case, the measurements are consistent for a given time interval and use time on a particular cable. Some cables, like the JPS, measured outstandingly well, and are still used by me as calibration references. A similar result was obtained with a VDH cable that was designed specifically for video use. Both cables are 75 ohms and use Teflon insulation. Many other cables measured all over the map, some very expensive, others very cheap. It was impossible to know whether a specific cable would measure well, from either looks or cost, before measurement. However, specific brands of cables tended to measure the same, depending on burn-in, which sometimes made a 'bad-measuring' cable measure better at a later time. The real problem, and why I found it important, was the excessive amount of higher-order distortion added to the signal at these nominal working levels in an audio system, much akin to crossover distortion in some power amps.
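(Not the poster's actual method, just a generic sketch for readers wondering how higher-order products can be pulled out of such a measurement in principle. The file name, the 1 kHz test frequency and the number of harmonics summed are assumptions of mine.)

Code:
# Hedged sketch: estimate harmonic distortion of a low-level sine captured
# through a cable loop, via an FFT. All file names and parameters are assumed.
import numpy as np
import soundfile as sf

x, fs = sf.read("cable_loop_capture.wav")
if x.ndim > 1:
    x = x[:, 0]

spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), 1 / fs)

f0 = 1000.0                                              # assumed test-tone frequency
amp = lambda f: spectrum[np.argmin(np.abs(freqs - f))]   # nearest-bin amplitude

fund = amp(f0)
harmonics = [amp(k * f0) for k in range(2, 10)]          # 2nd through 9th
thd = np.sqrt(np.sum(np.square(harmonics))) / fund
print("THD (2nd-9th): %.4f %%" % (100 * thd))
for k, h in enumerate(harmonics, start=2):
    print("  H%d: %6.1f dB relative to the fundamental" % (k, 20 * np.log10(h / fund)))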
 
we (as really exceptional experimenters) are able to work out a test in which a listener does not even recognize that he is participating in a double-blind test. :)

I once tried that latter approach and was quite satisfied with the results.

Wishes

I think this is a good idea, as it addresses one of the major stumbling blocks that I can see with the methods mentioned so far. It was never the detecting part of my brain that was able to find differences and then realize what the differences were. This includes tubes and cables. It always went more like this: I add a new cable, and the first thing I do is try to detect any differences. This usually involved repeating a collection of tracks that highlight certain aspects of the music where I believed it easiest to hear a difference, based on positive results using tubes that were super easy to differentiate.

The initial out-of-the-box listen always created nothing more than confusion. I couldn't tell if there was a difference, and certainly could not tell what that difference was. So I would give up looking for "it" and resume my usual routine of listening through my 650's for about 4 hours per night.

I am one of those people who will listen to a song or collection of music I like with extreme repetition until I need a break for a few months. I know, it's crazy, but I have a few songs on the hard drive in our Jeep and I love the "repeat track" setting.

Anyway, when going back to my usual routine where I would wreak havoc with some of the lower life forms and chat about tubes and cables and gear with some of the nice folk over at head-fi, I would have my favorite Jazz, Soul, Rock, Trance, blah blah playing through my cans. The entire time, though, I was concentrating on what I was writing and thinking about all kinds of stuff, "but I was not focused on the music in the slightest". It was during these sessions, when I was concentrating on other things, that the not-so-easy-to-detect differences in my gear (cables & tubes) would become apparent. E.g.: I'm arguing with a psychotic, hypocritical moderator over at head-fi, and right in the middle of the conversation I realize I can hear silverware clanging on plates in the background during one of Frank's live performances at the Sands. I actually hear people putting forks etc. down on their plates. This really made an impression on me, as I had listened to that same track a zillion times and that detail had never been there, for it was masked by distortion or tipped highs caused by silver. Tipped highs, distortion, same same.

Sure I could be wrong but I have serious doubts that things like that would present themselves had I known I was being tested and would at some point be requested to provide some kind of results.

BTW: Anyone interested in getting into Frank, or maybe you are a fan but don't have this album, you really must give "Frank Sinatra Live At The Sands with Count Basie" from 1966 a spin. It's not just music but a window into his world at the pinnacle of his career. The guy is funny as hell too.


Now all we have to do is tip-toe into someone's house and swap out their cables. Know anyone who owns those nice JPS cables...?
 
The second part of this sentence is correct - and contradicts the first part when it comes to the issue at hand: non-mundane differences in cables. Well done!

You should admit that both parts of the sentence were obviously correct. :)

But in general I still don't get the idea behind mundane vs. non-mundane.
AFAIR, after the invention of LC copper and other specialities by Japanese manufacturers, Jean Hiraga was the first one (at least the first one I was aware of at that time) who published on audible differences between various cables; I can't even remember if he was writing about interconnects or speaker cables, but I think it was interconnects.

Back then, and ever since, it has been stated that no audible differences were possible because every parameter (within the usual .... and so on) was below the known hearing thresholds, if calculated from cable data and input/output impedance.

So, in reality it doesn't matter if the unknown factors are mundane or non-mundane; the key points are the known hearing thresholds.

Wishes
 
In response to a direct question, I might point out that at normal test equipment measuring levels, most audio cables have no significant distortion. About 15 years ago, following VDH's lead in finding a change of resistance at very low voltage levels, I decided to make some distortion measurements on cables at approximately 30 times lower than normal measurement voltage, and I found differences between various audio connecting wires. I would have preferred measuring 100 to 1000 times lower, but my test equipment was never designed to measure in that region, as NOISE dominates any test done there due to the design of the test equipment and its method of distortion measurement.
My first 'hypothesis' or opinion was that I was measuring what VDH had measured earlier and refers to as 'Cross Crystal Distortion' or CCD. However, following up on VDH and what he wrote about his test, my test voltage levels were off by about 1000 times from his, so I am apparently measuring something else. I now think that the most likely hypothesis is non-linearity in the ground return connections in the cables themselves, often made with cheap shielding material and mechanical connections.
In any case, the measurements are consistent for a given time interval and use time on a particular cable. Some cables, like the JPS, measured outstandingly well, and are still used by me as calibration references. A similar result was obtained with a VDH cable that was designed specifically for video use. Both cables are 75 ohms and use Teflon insulation. Many other cables measured all over the map, some very expensive, others very cheap. It was impossible to know whether a specific cable would measure well, from either looks or cost, before measurement. However, specific brands of cables tended to measure the same, depending on burn-in, which sometimes made a 'bad-measuring' cable measure better at a later time. The real problem, and why I found it important, was the excessive amount of higher-order distortion added to the signal at these nominal working levels in an audio system, much akin to crossover distortion in some power amps.

Thanks, Very Nice

This higher-order distortion - is it possible for you to give us an example of how it might affect the music? Loss of detail? Compression? Etc.
 
Anyway, when going back to my usual routine where I would wreak havoc with some of the lower life forms and chat about tubes and cables and gear with some of the nice folk over at head-fi, I would have my favorite Jazz, Soul, Rock, Trance, blah blah playing through my cans. The entire time, though, I was concentrating on what I was writing and thinking about all kinds of stuff, "but I was not focused on the music in the slightest". It was during these sessions, when I was concentrating on other things, that the not-so-easy-to-detect differences in my gear (cables & tubes) would become apparent. E.g.: I'm arguing with a psychotic, hypocritical moderator over at head-fi, and right in the middle of the conversation I realize I can hear silverware clanging on plates in the background during one of Frank's live performances at the Sands. I actually hear people putting forks etc. down on their plates. This really made an impression on me, as I had listened to that same track a zillion times and that detail had never been there, for it was masked by distortion or tipped highs caused by silver. Tipped highs, distortion, same same.

Sure I could be wrong but I have serious doubts that things like that would present themselves had I known I was being tested and would at some point be requested to provide some kind of results.

Now all we have to do is tip-toe into someone's house and swap out their cables. Know anyone who owns those nice JPS cables...?

Ever been to a party... experienced the din of numerous conversations, music, etc. going on for a period of time, then discovered you've zoomed in on a specific conversation some distance away in spite of all the generic noise around you?

Thought so... might have some relevance here.
 
Has no one ever measured a cable's distortion levels within that range? Is there even a relatively easy way to do that?



Just for fun, here is what an interconnect looks like in my CLIO loop. The reference is -10 dB at the top of the graph; it's a THD measurement.

Rob:)
 

Attachments

  • Distortion in Interconnect.jpg · 92 KB