• WARNING: Tube/Valve amplifiers use potentially LETHAL HIGH VOLTAGES.
    Building, troubleshooting and testing of these amplifiers should only be
    performed by someone who is thoroughly familiar with
    the safety precautions around high voltages.

Poor phase margin means speaker cables matter?

I am proposing a new idea for a dbt listening test.

2 identical stereo tube amplifiers (4 channels)
4 loudspeakers, 2 per stereo amplifier
2 identical loudspeaker cables from one manufacturer
2 identical loudspeaker cables from a different manufacturer (same length as the other set of cables)

The loudspeaker cables are to be soldered directly to the output transformers' secondary leads.
The other ends of the loudspeaker cables are to be soldered directly to the loudspeaker crossovers.
(no connectors allowed)

A signal source is connected to a switch box, to send the stereo signal either to one amplifier or to the other.

Now come the items that need to be solved:

1. What about the signal cables from the signal source to the switch box, and the signal cables from the switch box to the two amplifiers?
Do these all need to be soldered directly to the signal source, switch, and amplifier inputs?

2. What about the switch contacts: gold, silver, some sort of alloy, etc.?

3. What kind of solder should be used?
Eutectic 63/37 (forbidden by RoHS), or RoHS-approved solder?

4. I am sure there are many other problems to be solved to make this a scientific dbt test; can you list them all, please?
That is why I am enlisting all of you to help.

5. Who is willing to set up the test, when we get all the problems solved?

6. If the dbt is done scientifically, what percentage of people will accept the results, and what percentage of people will Not accept the results?

7. Is it reasonable, or is it not reasonable, to ever run a dbt listening test?

Thanking all of you in advance for your consideration, and your inputs!
 
What is the purpose of the test? IOW, what scientific question is being investigated? You need to write it out as a hypothesis to be tested (by disproving the null hypothesis) before you start planning anything.
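
The null-hypothesis framing has a concrete statistical payoff. As a minimal sketch (the trial counts here are purely hypothetical): under the null hypothesis "the listener is guessing," each trial is a fair coin flip, and the p-value is the chance of doing at least that well by luck.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` answers right out of
    `trials` if the listener is purely guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical run: 12 correct answers out of 16 trials.
print(f"p = {abx_p_value(12, 16):.3f}")  # about 0.038, just under the usual 0.05
```

Deciding the trial count and the rejection threshold before any listening starts is part of writing the hypothesis down.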

As to the question of, will anyone accept the results? Well, it probably depends on the quality of the work you do, the quality of your documentation, the procedures you followed, etc. There should be enough information so that someone else could replicate your experiment themselves. That means they should be able to do the exact same experiment exactly as you did it.

If you want more details and/or discussion about how to plan, etc., please feel free to PM.
 
Last edited:
Markw4,

I was proposing a dbt listening test to find out whether there is any difference between the sound of two identical complete stereo systems, with the only intended difference between the two systems being one pair of loudspeaker cables versus another pair (such as a pair from one manufacturer and a pair from another).

I think I made it clear, I am not running the test.
I was looking for others to further design the test to make it scientific (at least so, in the eyes of some readers and posters).
I was also soliciting someone with the interest, and with all the requirements (stereo systems, listeners, venue, and whatever else is designed into the test) needed to run the test scientifically, after it had been designed to be scientific.

I don't know what else I can add to the idea.

Thanks for your response!
 
Okay. It seems like it would be more practical to use one set of everything except for the cables to be compared. That means connectors and/or switches/relays would be needed. If they are needed, then it may be sufficient to measure the apparatus before and after the tests, to make sure, say, that connection resistance/linearity hasn't changed. Also, you have to make sure that test subjects don't have any way of knowing what is being switched or not switched.

That said, that speaker cables can make an audible difference is probably not all that controversial. If you measure cables with a VNA to determine their impedance, and you dig down into how the amplifier and speaker interact with that impedance, there will probably be enough evidence that the skeptics will say something like, "of course the cables sounded different, one of them was pathological." Cable capacitance alone is a pretty well understood factor. Characteristic impedance is a less commonly understood factor, but once it is explained properly and measurements are taken of the system, it's going to turn out there is an explainable difference. In fact, it's already been done. If Bateman's work is fully understood, reflections happen above audio frequencies, but they can still affect sound. There is already a thread in the forum with links to Bateman's papers, and examples of zip-cord speaker cable being compensated with lumped elements to lower Zo. And the sound changes as a result. The difference can probably be measured if one knows what to measure for. IOW, it's possible to measure physical changes. The question then is whether or not the changes are audible, and whether or not that audibility is consistent with known psychoacoustics.
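
To put a number on the characteristic-impedance point: for a lossless line, Zo = sqrt(L/C). The per-metre values below are generic textbook figures for zip cord, assumed for illustration only (they are not measurements from Bateman's papers):

```python
from math import sqrt

def char_impedance(l_per_m: float, c_per_m: float) -> float:
    """Lossless-line approximation Zo = sqrt(L/C); conductor resistance
    is neglected, which only holds well above the audio band."""
    return sqrt(l_per_m / c_per_m)

# Assumed zip-cord figures: ~0.5 uH/m inductance, ~50 pF/m capacitance.
zo = char_impedance(0.5e-6, 50e-12)
print(f"Zo = {zo:.0f} ohms")  # 100 ohms, far from a 4-8 ohm speaker load
```

That mismatch between a roughly 100-ohm line and a 4-8 ohm load is what the lumped-element compensation mentioned above is reducing.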

Once everything is explained and shown, people will remember for a few months that cables can sound different in some cases. Eventually another group of people will be in the forum and it's back to the same old things: "Cables are nothing more than resistors." "Power cords are only a small fraction of the total resistance going back to the power generator," etc. It's that folks pick up oversimplified models early on and then start believing that the models are the reality. And then you have a new batch of people to deal with.

Basically, the human issues with why this stuff keeps coming up were well described by @soundbloke :

I would suggest the most important misconception is the assumption of linearity in our hearing. Our learning capabilities are (predominantly) irreversible and therefore very non-linear. Unless rigorous blind testing is carried out on a per-listener arrangement (which will often be practically extremely difficult), there is little reliable information to separate the well-trained listener from a delusional one when we approach commonly accepted hearing thresholds. Regardless of our physiology, we all have a capability to train our hearing on minute details that an untrained listener would find inaudible. Likewise we all possess a substantive capability to delude ourselves into hearing something we are not. Perception and sensation are not the same thing...

https://www.diyaudio.com/community/threads/measuring-the-imaginary.409662/post-7613254

Just to clarify a few points where I may not have been sufficiently clear...

Learning occurs all the time in all that we do - at least all of the things that we are aware that we do. Purposeful learning (training) is just where we have some emotional driver to direct the process and highlight the relevant details. A skilled listener may not then need purposeful training, just the relevant experience.

When something is learned, it is generally not unlearned: It is a very non-linear process and makes our full hearing capabilities very difficult to model.

In forming our perception, there exists a substantial capacity for delusion - such as hearing a detail that is not actually sensed.

It is often stated that blind testing can identify delusions in listening evaluations, and I suspect for the most part this is actually true.

However, our inherent non-linearity also means that our hearing capability now can be highly dependent on various emotional drivers and what we might have learned previously: Thus self-proclaimed "golden ears" are afforded a valid defense too.

Hence perception and sensation are not the same thing, and the task of the audio engineer is restricted to optimizing the fidelity of the sensation.


https://www.diyaudio.com/community/threads/return-to-zero-shift-register-firdac.379406/post-7620209
 
Barth's Distinction says: "There are two types of people, those who divide people into two types, and those who don't"

A few corollary generalizations based on Barth's Distinction:

Some people have Good Audio Memory and can compare the sounds of two systems even when given a long time between the sounds of the two systems; some people are not able to do that.
That is the thought basis for doing reasonably quick AB testing. Switches, yes; soldered in, probably not.
Or, with two systems, one can do reasonably quick AB testing where different "parts" (capacitors, or speaker cables, etc.) are soldered into the otherwise identical systems (soldering was someone else's idea, but I decided to use it).

Some people believe in careful measurements; some people do Not believe in careful measurements.

Some people Will hear differences between two parts, setups, systems, etc.
Some people will Not hear differences between two parts, setups, systems, etc.
(And the above is true, whether there are, or are Not, any differences).
That includes the Audio Placebo, if one is employed.

All of the above, except Barth's Distinction, are my opinions.

All readers are allowed to disagree.
 
I am only going to provide a few questions, thoughts/comments. This is my only comment.

1. One has to prove audio dbt/abx testing is equal to normal listening. Nearly impossible to perform.

2. Switches, wire, connectors, termination techniques, mating different materials, component layout, minimizing channel-to-channel coupling, etc. etc.

As 6A3 correctly mentions, soldering a connection takes time etc.
A. Different types of solder influence the sound.
B. Simply unplugging and plugging in an IC alters the sonics for a period of time.
C. Silver contact toggle switches alter the sound differently than brass contacts.
D. Is each component optimally warmed up for the comparison?

3. E. Then there are the mental/brain/listening variables that need addressing.
F. Cochlea fatigue is one.
G. The atmosphere can and will cause Phonophobia and Testophobia. Even one out of 20 subjects with such will
skew the audio test towards no sonic difference.
H. If we have a test with 10, 20 or however many subjects, with statistically half placed in bass-increasing modes and the other half placed in bass-decreasing modes, how does one obtain a 95% confidence figure?
I. Related to H, the frequency response each subject hears at their listening position.
J. The timing involved in AB switching.
K. The number of listening "periods" of the test.
L. The length of each listening "period" has to be very closely monitored.
M. Takes weeks if not months to perform correctly, due to the habituation-to-stimuli variable.

These are a few variables that need to be addressed, off the top of my head.

cheers

pos
 
Positron,

You mentioned the # 1, Major flaw I had in my dbt listening test that I ran in 2008 at VSAC:
You said: "I. Related to H, the frequency response each subject hears at their listening position."
The problem of bass differences (H) was not the major part of my flaw, although it was a problem; instead, the mid and high frequencies were the biggest problem.

All other items you mentioned above take only second place to things related to the listening position (they are good items, but are not as bad as my major flaw).
As soon as I stood up in the front of the room, to run the test, I recognized the multi-faceted problem.

The idea of the test:
The sound of two mono-block amplifiers was being compared. The same signal was applied to both amplifiers, and one speaker cable was connected in reverse phase. The loudspeakers faced each other, about 3/4 inch apart; they were mirror images, so the tweeters faced each other, the woofers faced each other, and the ports faced each other.
The relative volume of the amplifiers was adjusted to give an extremely Deep Null of the sound (because of the equal amplitudes that were out-of-phase).
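
How deep such a null can go is limited by how closely the two channels match in level and phase. A rough sketch (the mismatch figures below are assumed for illustration, not measured at VSAC): the residual of subtracting two unit signals that differ by a gain ratio g and a phase error phi is |1 - g*e^(j*phi)|.

```python
from math import cos, log10, radians, sin, sqrt

def null_depth_db(gain_err_db: float, phase_err_deg: float) -> float:
    """Residual of A minus B relative to A, in dB, where B differs from
    A by a small gain error and phase error: |1 - g*exp(j*phi)|."""
    g = 10 ** (gain_err_db / 20)
    phi = radians(phase_err_deg)
    residual = sqrt((1 - g * cos(phi)) ** 2 + (g * sin(phi)) ** 2)
    return 20 * log10(residual)

# Assumed mismatches: 0.1 dB in level and 1 degree in phase.
print(f"null depth = {null_depth_db(0.1, 1.0):.1f} dB")  # about -33.5 dB
```

Even these small mismatches cap the null in the low -30s of dB, which is why trimming the relative volume carefully matters so much.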

Then the speaker cable of the one speaker was re-connected with the correct phase. The speakers were moved apart (flaw), and faced forward.
Then the signal was switched between the inputs of the two amplifiers, so they could be heard one at a time. That was the listening test.

The signal source for the null test, and for the listening test, was a CD player. Some recordings were re-issues of old but good mono recordings; other recordings were good stereo recordings. But in all cases, only the Left channel was connected to the switch box, which connected the signal to both amplifiers [null test], and then to one or the other amplifier for the sound comparison.

The major problem Factors were:

The room was full of many [too many] listeners. I simply entered the room, but I had no idea so very many would come.

Listeners with one ear, two ears, ears of different sensitivity, ears with different frequency response, etc.

Some listeners had one or more people in front of them, which cut off the direct sound from one, or from both speakers.
Some listeners had a clear straight shot of one speaker, or of both speakers, for the sound to directly reach their ear(s).

Some listeners with only one ear, or only one good ear, did not have a direct path from each speaker.


If I had thought of this when I designed the test, I would have used a second switch box.
During the process to find the deep null, when I found it, I would switch the speakers and the amplifiers (different pairing of amp & speaker).
If the sound still had a deep null, that would have meant that the speakers were close to each other in sensitivity, frequency response, and impedance versus frequency; and that the amplifier's damping factors and their gain were close to identical.
That would require a second switch box, for the speakers to connect them both, and then one amplifier, or the other amplifier to compare the sound of the two amplifier speaker pairs.
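
The damping-factor part of that cross-check can be illustrated with a simple voltage-divider model (all values below are assumed for illustration): an amplifier output impedance driving a speaker whose impedance swings with frequency produces a frequency-dependent level, so two amplifiers with different output impedances cannot stay nulled into a real speaker.

```python
from math import log10

def level_db(z_out: float, z_load: float) -> float:
    """Level across the load for a 1 V source behind an amplifier
    output impedance z_out, treated as a voltage divider, in dB."""
    return 20 * log10(z_load / (z_out + z_load))

# Assumed tube amp: damping factor 4 into 8 ohms, i.e. z_out = 2 ohms.
# Speaker impedance swinging from 4 to 16 ohms across the band:
for z in (4.0, 8.0, 16.0):
    print(f"{z:4.0f} ohm load: {level_db(2.0, z):+.2f} dB")
```

With these assumed numbers the level varies by about 2.5 dB over the impedance swing, exactly the kind of error the paired null comparison would expose.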

Live and Learn, I think I did.

I am putting a link below, of the only review of VSAC 2008 that I am aware of that mentions my listening test.
My picture is the first one, at the top of the report, and the brief write up is just under the picture.
Caution: wear your sun glasses, the reflection off the top of my head might blind you.

VSAC 2008 Report By Jeff Poth Page 1:
Link:


vsac_2008

Un-successful test:
I did not really want to spend all the effort to write all this up in this thread, but I finally made myself do it.
This covers the very Un-successful dbt I designed and ran.

"Successful" test
The reasonably good dbt test I designed and ran years earlier is still only written up in the very last issue of Glass Audio.
After that, Glass Audio was rolled into a single magazine, combining it with two other magazines.
I do not have the desire, the time, or the strength to write it up in a thread on diyAudio.

I wish there could be another VSAC, but after a number of them over a decade, there will never be another.
 
Are they all 12 volts? Even for larger speaker wires. I've learned that connecting the wrong color wires causes the radio to be out of phase. I learned this from an expert technician. So in actuality it's the make or model of the head unit that determines the wire gauges used. Though they are still 12 volts. And connected color to color.
 
One has to prove audio dbt/abx testing is equal to normal listening.
There is no "proof" in science.

"Science neither proves nor disproves. It accepts or rejects ideas based on supporting and refuting evidence, but may revise those conclusions if warranted by new evidence or perspectives."

https://undsci.berkeley.edu/understanding-science-101/the-core-of-science-relating-evidence-and-ideas/#:~:text=Correction: Science neither proves nor,by new evidence or perspectives.


Moreover, I would assert that "normal" listening is not the same as dbt/abx testing. The important point is that it doesn't have to be the same, and it shouldn't be the same. Those are two different ways of listening, for good reasons. What matters is that if an auditory perception is real rather than delusional, then someone can learn to hear it blind. It's also true that abx/dbt can be fatiguing and require sustained concentration. It's also true that abx as a protocol has a bias towards false negatives in untrained test subjects.

A properly conducted abx/dbt test must therefore provide sufficient training for the test subjects. If training is not desired or not possible, then another blind protocol besides abx should probably be used. There are ways to find out what you can really hear and what is delusional in a comfortable and relaxed way. However, it's only going to be comfortable if you can accept that some of what you normally hear probably isn't real. That's the nature of human hearing and our biological computer brains. So, if you are open to finding out how much is real and how much isn't, then I would suggest working with someone who isn't out to prove to you that you can't believe anything you hear, because that isn't right either. Once more I would point to the words of @soundbloke which I already posted in #115. What he said is supported by a lot of good evidence.
 
6A3, glad you are learning, a good sign. But it is a much more complicated process and takes a long time to actually perform, weeks or months. If one takes only a few days or less to perform a public audio dbt test, it is a flawed, skewed test.

Positron,

You mentioned the # 1, Major flaw I had in my dbt listening test that I ran in 2008 at VSAC:
You said: "I. Related to H, the frequency response each subject hears at their listening position."
The problem of bass differences (H) was not the major part of my flaw, although it was a problem; instead, the mid and high frequencies were the biggest problem.

> Interesting. Now you are an expert on variables, when you could not mention a single variable before, despite multiple opportunities. Continuing to reinvent yourself since post #72.

> A null test is just as flawed as a typical dbt audio test, because of problems with high compression in a small area leading to your "very deep" null. Compressed air distorts, and one has to consider masking problems.

All other items you mentioned above take only second place to things related to the listening position (they are good items, but are not as bad as my major flaw). As soon as I stood up in the front of the room, to run the test, I recognized the multi-faceted problem.

> But once again, you could Not mention even one variable (besides sight), since page 4 and until I mentioned them in my previous post.
It skews the listening test 100% of the time towards No sonic difference.

The idea of the test:
The sound of two mono-block amplifiers was being compared. The same signal was applied to both amplifiers, and one speaker cable was connected in reverse phase. The loudspeakers faced each other, about 3/4 inch apart; they were mirror images, so the tweeters faced each other, the woofers faced each other, and the ports faced each other.
The relative volume of the amplifiers was adjusted to give an extremely Deep Null of the sound (because of the equal amplitudes that were out-of-phase).

> So how deep a null, we don't know. What was the SPL of the two speakers?

Then the speaker cable of the one speaker was re-connected with the correct phase. The speakers were moved apart (flaw), and faced forward.
Then the signal was switched between the inputs of the two amplifiers, so they could be heard one at a time. That was the listening test.

> Yes, obvious flaw of course. So you also broke a connection and reconnected. Flaw.

The signal source for the null test, and for the listening test, was a CD player. Some recordings were re-issues of old but good mono recordings; other recordings were good stereo recordings. But in all cases, only the Left channel was connected to the switch box, which connected the signal to both amplifiers [null test], and then to one or the other amplifier for the sound comparison.

The major problem Factors were:

The room was full of many [too many] listeners. I simply entered the room, but I had no idea so very many would come.

Listeners with one ear, two ears, ears of different sensitivity, ears with different frequency response, etc.
Some listeners had one or more people in front of them, which cut off the direct sound from one, or from both speakers.
Some listeners had a clear straight shot of one speaker, or of both speakers, for the sound to directly reach their ear(s).
Some listeners with only one ear, or only one good ear, did not have a direct path from each speaker.

> Copy cat point, but only after I explained them to you in my previous post.
> You were radically altering the frequency response each was hearing, thus a fatal flaw, as per
my points H and I in my previous post #120, last post on page 6.

If I had thought of this when I designed the test, I would have used a second switch box.
During the process to find the deep null, when I found it, I would switch the speakers and the amplifiers (different pairing of amp & speaker).
If the sound still had a deep null, that would have meant that the speakers were close to each other in sensitivity, frequency response, and impedance versus frequency; and that the amplifier's damping factors and their gain were close to identical.

> Unfortunately, you still manipulated the information by the wires and switch box, the exact distance between the speakers, etc., thus invalidating the test by itself.

That would require a second switch box, for the speakers to connect them both, and then one amplifier, or the other amplifier to
compare the sound of the two amplifier speaker pairs.

> This would further manipulate the musical information as I pointed out above.

Live and Learn, I think I did.

I am putting a link below, of the only review of VSAC 2008 that I am aware of that mentions my listening test.
My picture is the first one, at the top of the report, and the brief write up is just under the picture.
Caution: wear your sun glasses, the reflection off the top of my head might blind you.

VSAC 2008 Report By Jeff Poth Page 1:
Link:


vsac_2008

Notice the bare wall and lack of room treatments. Echoes skew the test towards No sonic difference 100% of the time by masking effects.

Un-successful test:
I did not really want to spend all the effort to write all this up in this thread, but I finally made myself do it.
This covers the very Un-successful dbt I designed and ran.

> Good to see you are learning more of what you did wrong by the points I presented in #120.

> It takes a lot more work, and actual time, to perform an audio dbt test than anyone thinks, and it must equate to normal listening. Otherwise it is an apples-to-oranges comparison.

Cheers

pos
 
Positron said:

One has to prove audio dbt/abx testing is equal to normal listening.

Mark: There is no "proof" in science.

"Science neither proves nor disproves. It accepts or rejects ideas based on supporting and refuting evidence, but may revise those conclusions if warranted by new evidence or perspectives."

https://undsci.berkeley.edu/understanding-science-101/the-core-of-science-relating-evidence-and-ideas/#:~:text=Correction: Science neither proves nor,by new evidence or perspectives.

Sigh. Mark, please reread the article until you understand it in proper context. There exist facts. I will leave it at that.

Mark: Moreover, I would assert that "normal" listening is not the same as dbt/abx testing.

I agree. That is why typical dbt/abx testing, as practiced in the audio realm, is comparing apples to oranges. It does not simulate normal listening.

Mark: The important point is that it doesn't have to be the same, and it shouldn't be the same. Those are two different ways of listening, for good reasons. What matters is that if an auditory perception is real rather than delusional, then someone can learn to hear it blind. It's also true that abx/dbt can be fatiguing and require sustained concentration. It's also true that abx as a protocol has a bias towards false negatives in untrained test subjects.

Oh yes, it should. If the dbt skews 100% of the time, then it is not mimicking normal listening. You are now making the claim that, due to dbt/abx testing, all components/systems sound the same when one listens to different systems, which is a lie. I would rather have a mistake X% (less than 100%) of the time vs. skewed 100% of the time. 100% of the time is rigging the test to fail, in my book.

Important: The problem is no one is performing a proper, non skewed dbt/abx test. Performing a rigged test is not science in any way, shape or form.

Mark: A properly conducted abx/dbt test must therefore provide sufficient training for the test subjects. If training is not desired or not possible, then another blind protocol besides abx should probably be used. There are ways to find out what you can really hear and what is delusional in a comfortable and relaxed way. However, it's only going to be comfortable if you can accept that some of what you normally hear probably isn't real. That's the nature of human hearing and our biological computer brains. So, if you are open to finding out how much is real and how much isn't, then I would suggest working with someone who isn't out to prove to you that you can't believe anything you hear, because that isn't right either. Once more I would point to the words of @soundbloke which I already posted. What he said is supported by a lot of good evidence.

Important: Again, the problem is no one is performing a proper, non skewed dbt test. I cannot emphasize this enough. I am not against PROPER dbt testing, but no one is performing such.

Performing a rigged test is not science in any way, shape or form. That is marketing. And so is ABX testing, marketing.

Cheers

pos
 
You are now making the claim that due to dbt/abx testing all components/systems sounds the same when one listens to different systems...
Please show a quote where I said the words as you claim above. I don't think you can do it, because I said no such thing.

What you have just done is known as a strawman:

"A straw man fallacy occurs when someone distorts or exaggerates another person’s argument, and then attacks the distorted version of the argument instead of genuinely engaging."

https://owl.excelsior.edu/argument-...ogical-fallacies/logical-fallacies-straw-man/
 
You are now making the claim that due to dbt/abx testing all components/systems sounds the same when one listens to different systems...
Please show a quote where I said the words as you claim above. I don't think you can do it, because I said no such thing.

When you suggest we continue with the obviously flawed testing procedures, which are skewed 100% of the time, why would anyone want to perform a rigged test?

You need to perform a dbt test that equates to normal listening but does not have the flaws of normal comparison testing.
Is stating it this way clearer, and does it make more sense to you? Maybe I was not as clear as I should have been.

pos
 
When you suggest we continue with the obviously flawed testing procedures, which are skewed 100% of the time, why would anyone want to perform a rigged test?
Another strawman already? Would you stop trying to put words in my mouth? Eventually it starts to look like you are intentionally violating the rules.

Maybe try it this way: "When you suggest we continue with what Positron believes are the obviously flawed testing procedures, which Positron believes are skewed 100% of the time..."

Then you would be getting a little closer to the truth of what I said versus what you are saying.
 
Positron said:
When you suggest we continue with the obviously flawed testing procedures, which are skewed 100% of the time, why would anyone want to perform a rigged test.

Mark: Another strawman already? Would you stop trying to put words in my mouth? Eventually it starts to look like
you are intentionally violating the rules.


Because what you stated leads to that statement. It is not my fault you do not understand the subject matter.

You are the one who wrote a dbt test does not equate to normal listening. See quote below in bold.

Mark: "Moreover, I would assert that "normal" listening is not the same as dbt/abx testing."

When a dbt test is skewed towards no sonic difference 100% of the time, that is not science by any definition or stretch of the imagination. And I agreed with you: it does not equate, as it is skewed toward no sonic difference all the time.

Mark says: Maybe try it this way: "When you suggest we continue with what Positron believes are the obviously flawed testing procedures, which Positron believes are skewed 100% of the time..."

Would that not be a misleading statement, given my previous scientific posts (based on medical science) going back to page 4, while you have presented just your opinion and a misunderstanding of your own link?

pos
 
...what you stated leads to that statement...
Not in reality. Maybe in Positron's opinion. From my point of view, Positron has many personal opinions which do not comport well with generally accepted science. If he refuses to have any interest in understanding science and instead insists on substituting wildly exaggerated claims then there is probably nothing anybody can do to help.

Would not that be a misleading statement based upon my previous scientific posts...
No. Your recent responses contain strawman exaggerations based on your non-scientific opinions.
 