Speaker cable myths and facts

Hi,

FYI, soldering on gold is considered bad practice because the gold dissolves into the solder and produces brittle intermetallic compounds.

Not all solder leads to problems with gold embrittlement.

Switching from the tin/lead standard used in the industry pre-RoHS to new methods led to problems for some. I have long avoided lead-containing solder, and in much of the analysis, lead contamination combined with gold has been shown to be the problem.

For that reason, the European Space Agency requires that gold plating on components be removed by mechanical means before soldering. Now, most audio equipment does not experience the same high G-forces that space-based electronics do, but when we aim for the best, why not avoid issues known from other fields of application?

If I intended to shoot my gear into space and had limited control over subcontractors' use of solder, I would also view gold plating with concern. It is another issue if we do it intentionally and manage the process well.

Ciao T
 
Hi,
All this is meaningless. Any REAL calculation of the "damping factor" must include the speaker's voice-coil DCR and the resistance of any crossover inductors.

In which case we see that the real damping factor quickly approaches unity, as long as we follow the rule that "rated impedance must not be greater than the speaker's minimum impedance plus 30%".

In other words, the whole talk about damping factor is one big red herring or bottle of snake oil.
Huh? You seem to be trying to put words into my mouth and rebut something I didn't say - where exactly did I try to extol the virtue or legitimacy of so-called "damping factor"?

I'm fully aware of how electrical damping works on a speaker, how it affects the Qes, and that the often quoted "damping factor" of amplifiers is more or less meaningless as a measure of actual damping.

However, it does provide a way to calculate the output impedance of an amplifier. Many amplifier manuals provide a "damping factor" figure relative to a certain load impedance, but no amplifier manual I've seen provides an output impedance figure.

I merely mention damping factor as a way of taking a fairly useless specification that is often quoted in spec sheets and manuals, and using it to calculate something that does have some usefulness and relevance to the discussion - output impedance.

The output impedance can then be added to the speaker cable impedance to provide useful information on how the speaker's response will deviate from the "ideal" voltage-source response.
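To make the arithmetic concrete, here is a minimal Python sketch with made-up spec-sheet numbers (DF = 200 into 8 ohms, a 0.12 ohm cable, a 6 ohm minimum speaker impedance - all hypothetical):

```python
import math

def output_impedance(damping_factor, rated_load=8.0):
    """Amplifier output impedance implied by a quoted damping factor."""
    return rated_load / damping_factor

z_out = output_impedance(200)   # hypothetical spec: DF = 200 into 8 ohms -> 0.04 ohm
z_cable = 0.12                  # e.g. a short run of ordinary speaker cable, round trip
z_series = z_out + z_cable      # total source impedance the speaker actually sees

# Deviation from an ideal voltage source at the speaker's minimum impedance:
z_min = 6.0
droop_db = 20 * math.log10(z_min / (z_min + z_series))
print(f"source impedance: {z_series:.2f} ohm, worst-case shift: {droop_db:.2f} dB")
```

With these numbers the amplifier itself contributes almost nothing; the cable dominates the source impedance.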

Sure, there is a sensible limit to how much series resistance is too much. However, given that moving one's head a few inches easily causes frequency response changes of many dB, even a 1-2dB change in frequency response is in practice unlikely to be meaningful in the sense of degrading performance.
And here we must completely disagree. I can't seriously believe that you consider a 1-2dB change in frequency response due to speaker cable impedance (or any other cause) to be "unlikely to be meaningful".

If the change in frequency response is over many octaves, for example a shelf that boosts or cuts >300Hz then even a 0.5dB change is audible in A/B testing, and a 1dB change is pretty obvious. 2dB is a serious error over a broad frequency range.

Yet this is exactly the sort of thing that can happen with speaker cable resistance with some speaker designs. Many 2.5 way designs, or 3 ways with 2 woofers simply connect two 8 ohm woofers in parallel in the bass, resulting in a speaker that is nominally 4 ohms in the bass, but 8 ohms in the midrange and treble. (Something I consider to be "cheating", to get the 1m/2.83v sensitivity rating up)

Even with 0.5 ohms in series, such a speaker will see ~1dB of attenuation in the bass (except at the resonance impedance peaks) and ~0.5dB in the midrange and treble, causing a net boost of 0.5dB of midrange and treble over bass.
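The divider arithmetic behind those figures can be checked in a few lines of Python (the 4 ohm / 8 ohm impedances and 0.5 ohm series resistance are the values from the paragraph above):

```python
import math

def attenuation_db(z_speaker, r_series):
    """Level drop from the voltage divider formed by series cable resistance."""
    return 20 * math.log10(z_speaker / (z_speaker + r_series))

r_cable = 0.5                               # ohms of series resistance
bass = attenuation_db(4.0, r_cable)         # paralleled woofers, ~4 ohm
mid_treble = attenuation_db(8.0, r_cable)   # single driver, ~8 ohm
print(f"bass: {bass:.2f} dB, mid/treble: {mid_treble:.2f} dB")
print(f"net tilt toward mid/treble: {mid_treble - bass:.2f} dB")
```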

A huge error? No, but certainly audible with careful listening, and that's with a minimal 0.5 ohms of added resistance. Because such a speaker is ~4 ohms in the bass region, the series resistance also has more effect on raising Qes than it would in an 8 ohm design.
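The Qes point can be sketched the same way: electrical Q scales with the total series resistance in the voice-coil circuit, so the same 0.5 ohms moves a low-impedance driver further. The Qes of 0.40 and the Re values below are hypothetical round numbers:

```python
def qes_with_series_r(qes, re_vc, r_series):
    """Electrical Q scales with total resistance in the voice-coil circuit."""
    return qes * (re_vc + r_series) / re_vc

# Same hypothetical driver Qes of 0.40, same 0.5 ohm of cable resistance:
print(qes_with_series_r(0.40, 3.2, 0.5))   # ~4 ohm driver (Re ~3.2): 0.4625
print(qes_with_series_r(0.40, 6.4, 0.5))   # ~8 ohm driver (Re ~6.4): 0.43125
```

The 4 ohm driver's Qes rises roughly twice as much, which is the asymmetry the paragraph describes.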

The ear is also particularly sensitive to small changes in certain frequency ranges, such as the presence region from 2-4kHz, where a 1dB cut or boost over that single octave completely changes the characteristic sound of the speaker.

On many multiway designs there is a large dip or peak in the impedance curve at the crossover frequencies - one of which often falls between 2-4kHz. If the total speaker response was designed flat with no series resistance, too much speaker cable resistance will put either a dip or a peak in the very critical presence region, where I find even a half dB change quite noticeable, and 1dB unacceptable.

Why leave the frequency response of the speaker to chance ? Keep the speaker cable resistance as low as possible so it's not a significant factor. Practically speaking, keep the maximum shift induced by cable resistance to less than half a dB, which often works out to about half an ohm, but depends on the speaker.

Now it could be that on particular speaker designs, a bit of extra series resistance actually brings the speaker closer to a pleasant balance and sounds "better". So be it - but the same effect could have been achieved with a resistor, and more often than not if the speaker is well designed adding resistance will make the speaker less flat.
 
Hi,

{{SNIP}}For example, I know precisely why certain types of mains cables make a measurable and audible difference and I have it on good authority that the cables designer knows the same things, however you will not find anything about this in the marketing material for these cables...{{SNIP}}

Ciao T

Hello Thorsten. I read on Stealth Audio's website that the reason AC power cords have a "sound" mostly depends not on the wires carrying the AC, but on the ground wire, which NEEDS to be shielded since it's included in the sound SIGNAL PATH (ground path, to be more precise). This is what's mostly, but not entirely, responsible for why different AC power cords sound different! Is this what you're talking about? And whether or not it is what you were referring to, what do you think about their statement?

Thetubeguy1954 (Tom Scata)
 
I'm confused - why would the ground on an AC power cord need to be shielded? It's draining to ground. A decent-quality low-impedance path is important, but the cable isn't a radio transmitter.

The same reason the power cords on those $100,000+ EEG machines that measure micro-voltages on your scalp to accurately calculate brain waves use them.

Wait! The EEG machines I worked on for 25 years had 16 AWG, 3-conductor regular copper wire and a hospital-grade plug. They did use silver wire leads, but not braided, no crystals, and not the size of car jumper cables - they were rather thin.

The sound booths were calibrated daily with sound sources and none of them had "audiophile grade" anything on them. Regular wires, (Belden) signal cables and the headphones did not cost $10,000 or have a transformer. The money went into the signal generators, booth sound proofing and the transducer quality of the headphones. The headphone jack was not gold plated :eek: Granted, the headphone jack had to be unplugged daily, inserted in the cal box then reconnected to the jack in the booth. That kept any corrosion from ever forming.

I figure it this way: if an EEG machine does not need or use any "audiophile grade" materials to show accurate brain waves, neither does my audio equipment. None of my audio equipment has detachable power cords, so it is not an issue (I REFUSE to buy audio equipment that does).
 
@Thorsten: I have great respect for some of the products you developed, so I am confident you know which solders to use on gold-plated components. Most DIYers will not have these special solders, so I feel I gave some solid advice.

When discussing wires, always a sensitive issue, I think it is important to point out issues that are just plain wrong from an engineering point of view. Let's at least do those things like soldering right.

For the rest, it is often a bit like the discussion how many Angels you can balance on the tip of a needle. According to some, there are no Angels. Others even dispute there is a needle. I believe in what I can perceive, that is, what I can measure.

The Ohmic resistance of cables is the single most important variable with measurable effects at the output of a speaker system. I don't understand your position that it is not that important; it is. The problem is that there is no optimum, because it depends very much on how the loudspeaker behaves electrically and acoustically, in combination with the damping factor of the amplifier.

To quote you from an earlier post in this thread: "In other words, the whole talk about damping factor is one big red herring or bottle of snake oil". Actually, it is quite the opposite. For some setups, a relatively thin cable may be beneficial, e.g. a loudspeaker designed with a tube amplifier (high output impedance) as a design parameter, connected to a modern SS amplifier (low output impedance). In other situations, fat cables might be prescribed. My point being that the best cable is a function of the rest of the setup. From a measurable point of view.
 
Presumably, the EEG equipment was designed by real engineers who were concerned with real engineering issues, so it wouldn't need oddball things like shielded ground leads.

Which leads to a cable skeptic's assertion: equipment that reveals audible differences in power/speaker cables may actually have a design flaw, not special resolution. Conversely, imagine all the information missing in those EEGs.
 
This 10 AWG cable seems to get 5 stars on many forums, with most saying there is no point in paying more and that it is far superior to supposedly class-leading QED cables (i.e. What Hi-Fi BS).
'UP LCOFC' was developed by Hitachi in the 80s. Even if there is a hint of snake oil, it seems good value.
Van Damme 2x6.0mm UP LCOFC Hi-Fi Speaker Cable - v506000

3.0 Ohm/km.. is that a good spec?
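As a sanity check on that spec, the theoretical minimum for a 6.0 mm² copper conductor can be worked out from the resistivity of copper (a quick sketch, not the manufacturer's test method):

```python
rho_cu = 1.72e-8     # ohm*m, resistivity of annealed copper at 20 C
area_m2 = 6.0e-6     # one 6.0 mm^2 conductor, in m^2

r_per_km = rho_cu * 1000 / area_m2
print(f"theoretical minimum: {r_per_km:.2f} ohm/km per conductor")   # ~2.87

# Round-trip resistance of a typical 3 m run (out and back):
r_run = 2 * 3 * r_per_km / 1000
print(f"3 m run: {r_run * 1000:.1f} mohm round trip")                # ~17.2
```

So 3.0 Ohm/km is essentially as good as solid copper of that cross-section gets, and over a domestic-length run it amounts to milliohms.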
 
Hi,

And here we must completely disagree. I can't seriously believe that you consider a 1-2dB change in frequency response due to speaker cable impedance (or any other cause) to be "unlikely to be meaningful".

As I rarely if ever position my head with micrometer accuracy, I stand by that. You get a greater difference from sitting a few inches to one side or the other.

Of course, practically all of my own speaker designs tend to have a flat impedance (excluding LF enclosure tuning peaks, which I usually damp only a little, first mechanically and secondly with a modest parallel resistor), so it takes a lot of resistance to get a 1dB difference anyway.

For reference, a 3m cable of 18 gauge wire is around 0.12 Ohm total resistance (round trip), and even 24 gauge wire only amounts to 0.5 Ohm.
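Those figures can be reproduced from the standard AWG diameter formula (a sketch; real stranded cable will differ by a few percent):

```python
import math

RHO_CU = 1.72e-8   # ohm*m, resistivity of copper

def awg_ohms_per_m(awg):
    """DC resistance per metre of a solid copper conductor of the given AWG."""
    d_m = 0.127e-3 * 92 ** ((36 - awg) / 39)   # standard AWG diameter formula
    area = math.pi / 4 * d_m ** 2
    return RHO_CU / area

for awg in (18, 24):
    loop = 2 * 3 * awg_ohms_per_m(awg)         # 3 m cable, round trip
    print(f"AWG {awg}: {loop:.3f} ohm round trip over 3 m")
```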

Let us take a 1 Ohm speaker cable (so 6m of 24 gauge wire, which I doubt anyone in their right mind would attempt) and a speaker with 6 Ohm minimum impedance and 100 Ohm maximum impedance - not something I would consider "sound design" (pun intended) in any case.

How much real difference in FR can we get?

Around 1.25dB.
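A quick check of that figure, using the hypothetical 1 Ohm cable and 6-100 Ohm impedance swing from the previous paragraph:

```python
import math

def level_db(z, r_series):
    """Level at the speaker terminals relative to an ideal voltage source."""
    return 20 * math.log10(z / (z + r_series))

r_cable = 1.0    # the deliberately extreme cable resistance from the example
spread = level_db(100.0, r_cable) - level_db(6.0, r_cable)
print(f"max FR variation: {spread:.2f} dB")   # ~1.25 dB
```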

So I am sorry, but under real living/listening room conditions speaker cable DCR rarely amounts to more than a hill of beans.

In complex home cinema installations with the equipment sited very far from the speakers we may need to pay SOME attention; however, even there, flood-wiring with two Cat 5 cables for each speaker gives only 40 mOhm per metre, or 0.4 Ohm for a 10m run.

If the change in frequency response is over many octaves, for example a shelf that boosts or cuts >300Hz then even a 0.5dB change is audible in A/B testing, and a 1dB change is pretty obvious. 2dB is a serious error over a broad frequency range.

You get greater changes than that by moving your head a few inches, plus it does help to look at the impedance curves of real speakers to see just how much we really get.

Yet this is exactly the sort of thing that can happen with speaker cable resistance with some speaker designs. Many 2.5 way designs, or 3 ways with 2 woofers simply connect two 8 ohm woofers in parallel in the bass, resulting in a speaker that is nominally 4 ohms in the bass, but 8 ohms in the midrange and treble. (Something I consider to be "cheating", to get the 1m/2.83v sensitivity rating up)

Well, this speaker is clearly a 4 Ohm speaker, likely with 3 Ohm DCR (some are even lower), so we should limit the speaker cable's DCR to 0.3 Ohm, or at least 22 gauge wire if the cable is 3m long.

I find even a half dB change is quite noticeable, and 1dB unacceptable.

I would like to enquire how you repeatably position your head with accuracy sufficient to avoid larger errors, if you find them unacceptable (and yes, you can get such larger errors in the 2-4kHz region from small head movements).

Why leave the frequency response of the speaker to chance ? Keep the speaker cable resistance as low as possible so it's not a significant factor. Practically speaking, keep the maximum shift induced by cable resistance to less than half a dB, which often works out to about half an ohm, but depends on the speaker.

I prefer the 10%-of-DCR rule, which actually ends up lower than your figure. However, this never leads to cables one might call "low DCR". For "low DCR", let's consider a 10 AWG 3m cable; it has a DCR of 20mOhm.

I do not argue that such a cable is a "bad thing" due to very low DCR, however, it is also not a "good thing" because of low DCR. Other factors will become more relevant.

Now it could be that on particular speaker designs, a bit of extra series resistance actually brings the speaker closer to a pleasant balance and sounds "better". So be it - but the same effect could have been achieved with a resistor, and more often than not if the speaker is well designed adding resistance will make the speaker less flat.

If a speaker is well designed, adding series resistance should not make it less flat. Such designs have been made since the late 1930's (like the German Eckmiller Coaxial Studio Monitor Driver).

However, it seems that speaker designers prefer to make their lives easy and hope the amplifier designer will deal with all the problems they leave unaddressed, while amplifier designers hope speaker designers make speakers that are 8 ohm resistors, and the cable designer is stuck in between.

None of which strikes me as smart or sensible design. But that, as they say, is another story.

I repeat: the importance of DCR in speaker cables is largely overstated. Very little dedicated speaker cable of decent quality has enough DCR to cause any issues, and there is no need for 10 or 12 AWG wires either.

What I would concern myself with far more are such esoteric subjects as the action of the cable as an aerial for RF signals, inductance (which can be quite notable with thick figure-8 cables), resonances caused by the cable's inductance and capacitance (which can do amusing things to amplifier stability), and all those things that supposedly make no difference, but of course make a lot of difference, simply because basic electrics and electronics DEMAND that they do...

But hey, that's just me and WTFDIK.

Ciao T
 
Hi,

@Thorsten: I have great respect for some of the products you developed, so I am confident you know which solders to use on gold plated components. Most DIY will not have these special solders, so I feel I gave some solid advice.

When DIY'ing, given how much of the stuff we work with is gold-plated, it is very good practice to use high-silver-content lead-free solder anyway. I have used this for ages, since before RoHS, largely because we actually found in a quite interesting test (a single-blind preference test) that different solders do sound different, and preferences were for this kind of solder (I buy it in bulk, Multicore brand).

Arguably, the test items involved gold plating in a number of areas, and it may very well be that the gold plating reacting with the solder caused the differences observed. We were not interested in the theoretical background but in a practical "what works best" at the time.

When discussing wires, always a sensitive issue, I think it is important to point out issues that are just plain wrong from an engineering point of view. Let's at least do those things like soldering right.

Sure. Given how ubiquitous gold plating is in HiFi stuff, let's not argue that gold plating is bad, but give practical advice on best practice for soldering gold-flashed and gold-plated items, plus how to distinguish flashing from plating...

For the rest, it is often a bit like the discussion how many Angels you can balance on the tip of a needle. According to some, there are no Angels. Others even dispute there is a needle. I believe in what I can perceive, that is, what I can measure.

The problem is, of course, that you rarely if ever measure what happens in a real system. The problems with cables in HiFi systems do not stem from the first-order issues of getting some signal or power from A to B, but from all the unintended side effects.

I will repeat that cables with different electrical parameters are almost always bound to cause measurable differences in interconnected systems of multiple mains powered devices in an RF rich environment.

The Ohmic resistance of cables is the single most important variable with measurable effects at the output of a speaker system. I don't understand your position that it is not that important, it is.

It is not, if the resistance is kept at a reasonably low level, which is almost always inherent to dedicated speaker cables.

To quote you from an earlier post in this thread: "In other words, the whole talk about damping factor is one big red herring or bottle of snake oil". Actually, it is quite the opposite.

Okay, to be more precise:

"The whole concept that a large damping factor is a good thing is one big red herring or bottle of snake oil, as in reality the electrical damping in a speaker approximates unity."

For some setups, a relatively thin cable may be beneficial, e.g. a loudspeaker designed with a tube amplifier (high output impedance) as a design parameter, connected to a modern SS amplifier (low output impedance).

I would argue that using a resistor equal to the design-target impedance of the speaker would be a preferable solution, and I have been using that method in preference.

What I would worry about far more is what all that nice RF the speaker cable picks up at certain spot frequencies does when it is injected into the feedback loop of my amplifier - or perhaps not, as I often do not use looped feedback from the speaker terminals (but from before the build-out network, naturally).

In other situations, fat cables might be prescribed. My point being that the best cable is a function of the rest of the setup. From a measurable point of view.

On this we agree, but DCR really does not have much of a say in it.

But yes, it is a function of the rest of the setup and must be evaluated in context, as evaluation in isolation yields false negative conclusions where very real positive ones would have been drawn in the real context of use.

Ciao T

PS: While I have not worked with EEG equipment, I have worked with fairly high-precision industrial measurement and control systems. None had special mains cables, but all had a degree of effort put into avoiding leakage into the system and isolating the measurement systems from the power grid that is rare in HiFi.

And all the sensor cables (which, arguably, were quite long) were not your average "bog standard RG-59" either, and for good reasons.
 
Hi,

Which leads to a cable skeptics assertion. Equipment that reveals audible differences in power/speaker cables may actually have a design flaw, not special resolution.

This is in fact a very reasonable conclusion.

The same wilful blindness on the part of EEs when it comes to the real effects of cables also aids these built-in flaws, especially when combined with the requirements of having to conform to electrical safety codes, UL certification et al...

And given that none of these issues raise their heads when testing an individual item of a more complex system using (for example) an Audio Precision test set (because it is designed to avoid them), they are neither observed nor corrected.

That, coupled with the fact that a lot of DIY and high-end gear tends to be "modified from generics", means we tend to find bigger mains transformers and bigger capacitors, but the problems in grounding, circulating chassis currents, etc. are faithfully replicated...

Ciao T
 
ENIG is probably the most popular surface finish for PCBs these days, due to its shelf life, inherent flatness and good solderability in both leaded and lead-free soldering processes. ENIPIG is being pushed more by PCB manufacturers as it saves on the gold used and is less prone to black pad syndrome. Hard gold is used where there are PCB edge connectors, such as on plug-in cards. Very few commercial components these days have gold plating where the components are soldered, apart from some connectors; mostly they are tin-based, apart from some very specialised components for space and satellite applications.
I don't know of any designs that plate the tracks in gold, or why you would do it other than for protection if you weren't using a solder resist, so I would be interested in some examples of why it is being done. I did work with an engineer doing video cameras who insisted his tracks be gold-plated; there was no difference between his boards and the non-plated ones.
 
DBMandrake said:
And here we must completely disagree. I can't seriously believe that you consider a 1-2dB change in frequency response due to speaker cable impedance (or any other cause) to be "unlikely to be meaningful".
As I rarely if ever position my head with micrometer accuracy, I stand by that. You get a greater difference from sitting a few inch to one side or the other.
I don't buy the whole "I get more change in frequency response moving my head a few inches" argument as an excuse in the slightest.

The reason is quite simple - you're talking about the steady-state frequency response, i.e. the direct field plus time-delayed room reflections, summed and averaged over time. In other words, what you would measure at the listening position if you were to do a slow sine sweep or an un-gated FFT measurement with an omni microphone.

Indeed, in typical rooms such a measurement will reveal frequency response aberrations on the order of +/- several dB right through the entire frequency range, and at higher frequencies those aberrations will shift all over the place with a few inches of microphone movement. (Particularly with a stereo speaker pair reproducing the same dual-mono signal.)

However, this is not the same as introducing errors in the frequency response of the speakers, because the ear doesn't perceive like an un-gated omni measurement.

There are reams of research that show that for frequencies above approximately 200-300 Hz the ear perceives not the steady state frequency response of the room, but the frequency response of the first arrival, effectively windowing out the reflections unless they're very large in amplitude or have insufficient time delay. (in which case they can merge together)

In fact the ear is pretty damn accurate at perceiving the frequency response of the first arrival (usually the on axis response of the speaker if you're sitting reasonably near to on axis) such that small errors in on-axis frequency response can be easily distinguished despite far larger position dependent errors in the steady state response being present in the room at the same time.

An analogy would be if you were to measure the response of a speaker a couple of metres out into the listening room both un-gated and gated - the un-gated response is going to fluctuate wildly as you move the microphone around, the gated response will vary very little. The latter is what you perceive, except at bass frequencies.
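The analogy can be illustrated with a toy impulse response in numpy: one direct arrival plus a single delayed reflection. The un-gated spectrum shows deep comb-filter ripple, while gating out the reflection leaves the flat first-arrival response. (All the numbers below are invented for illustration.)

```python
import numpy as np

fs = 48000
n = 4096
ir = np.zeros(n)
ir[100] = 1.0            # direct arrival from the speaker
ir[100 + 256] = 0.5      # a single reflection ~5.3 ms later, at half amplitude

# Un-gated: the whole impulse response, reflection included -> comb filtering
ungated = np.abs(np.fft.rfft(ir))

# Gated: window out everything after the direct arrival -> quasi-anechoic
gated = ir.copy()
gated[100 + 128:] = 0.0
quasi_anechoic = np.abs(np.fft.rfft(gated))

print("un-gated max/min ripple:", ungated.max() / ungated.min())             # 3.0
print("gated max/min ripple:", quasi_anechoic.max() / quasi_anechoic.min())  # 1.0
```

Moving the microphone changes the reflection's delay and so shifts the comb pattern of the un-gated curve all over the place, while the gated curve stays flat - which is the distinction being drawn about what the ear tracks.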

So claiming that, because the steady-state frequency response in a room varies by +/- several dB with moderate head movements, several-dB variations in the frequency response of the speaker will automatically go unnoticed is completely untrue, and doesn't take into account any of the research into how the ear interprets frequency response in a reflective environment where you have both direct and reflected signals.

If it weren't for the ear's ability to sort between the direct and reflected signal paths in the time domain, listening to speakers in a typical room would be a horrible experience. Fortunately it's not.

Of course, practically all my own speaker designs tend to have a flat impedance (excluding LF enclosure tuning peaks, which I usually only damp a little first mechanically and secondly with a modest parallel resistor), so it takes a lot of resistance to get 1dB difference anyway.
I completely agree about aiming for a fairly flat impedance curve. Although I don't try to do anything about the bass resonance peak in the impedance, I will make an effort to make the rest of the curve as flat as reasonably possible, even if it means adding a few extra components that technically might not be necessary to get the desired transfer function to the drivers.

By doing so, the speaker's frequency response becomes far less sensitive to any unknown variations in cable impedance. It's not just cable impedance either - who is to say that someone might not try to drive your speakers with a valve amplifier with 3-4 ohms of output impedance?

A lot of designers just don't bother, including on many commercial designs. Personally, I consider a speaker which is nominally 4 ohms in the bass yet nominally 8 ohms elsewhere (and peaking up much higher in places) to be a lazy design; it's certainly not an approach I would ever follow.

I prefer the 10%-of-DCR rule, which actually ends up lower than your figure. However, this never leads to cables one might call "low DCR". For "low DCR", let's consider a 10 AWG 3m cable; it has a DCR of 20mOhm.

I do not argue that such a cable is a "bad thing" due to very low DCR, however, it is also not a "good thing" because of low DCR. Other factors will become more relevant.
I don't think I ever made the DCR of the cable out to be a big deal, despite it still being the most significant cable property. In fact, I don't think speaker cables are a big deal at all, to be honest. Make nice reliable connections with good-quality connectors, size the cable gauge so that DCR is acceptably low - job done.

Working out the maximum acceptable resistance is up for debate - it obviously depends on the maximum frequency response error you're willing to accept and the impedance curve of the speaker. 0.5 ohms is only a rule of thumb.

If you consider that a speaker cable's job is to get the signal from the amplifier to the speakers with the minimum amount of alteration, erring on the low side is the safe option, and with moderate cable runs of reasonable gauge I don't think it's ever an issue, at least for speakers with "reasonable" impedance curves.

It is still something that needs to be considered though, because although the errors could be small, all frequency response errors in a system are cumulative, so once you have a small error from the cable, a small error in the amplifier, a small error from the source device, and so on, the cumulative effect can be significant.
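A trivial illustration of that accumulation: cascaded gains multiply, so deviations expressed in dB simply add. The per-stage figures below are invented:

```python
# Hypothetical per-stage deviations, in dB, at the same frequency:
cable = -0.4    # droop from speaker cable resistance
amp = -0.3      # amplifier roll-off
source = -0.3   # source device roll-off

# In cascade, linear gains multiply, so the dB deviations simply add:
total = cable + amp + source
print(f"each stage is small on its own, but the system is {total:.1f} dB off")
```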

If a speaker is well designed, adding series resistance should not make it less flat. Such designs have been made since the late 1930's (like the German Eckmiller Coaxial Studio Monitor Driver).

However it seems that speaker designers prefer to make their life easy and hope the Amplifier designer will deal with all the problems they leave un-addressed, while amplifier designers hope speaker designers make speakers that are 8 ohm resistors and the cable designer is stuck in-between.

Non of which strikes me as smart or sensible design. But that, as they say is another story.
In an ideal world speakers would have flat impedance curves; in the real world most don't. Whether it's laziness in design, cost-cutting by leaving out components, or designers just not caring doesn't really matter - in the context of a cable, you need to consider the broad spectrum of impedance curves it might be expected to drive. Again, erring on the side of conservatism is the way to go.
What I would concern myself with far more are such esoteric subjects as the action of the cable as an aerial for RF signals, inductance (which can be quite notable with thick figure-8 cables), resonances caused by the cable's inductance and capacitance (which can do amusing things to amplifier stability), and all those things that supposedly make no difference, but of course make a lot of difference, simply because basic electrics and electronics DEMAND that they do...

But hey, that's just me and WTFDIK.

Ciao T
While you're right in theory, any amplifier which became unstable due to the inductance and/or capacitance of 2-5 metres of speaker cable attached to it deserves to go into the bin. Instead of spending a lot of money on exotic cables in such a case, the money could be better spent on a stable amplifier.

Of all the amplifiers I've owned over the years I've never once had one that had any instability problems, and I typically use fairly long runs of speaker cable on the order of 5-8 metres since I usually have the speakers at the opposite end of the room to the equipment.

As a former amateur radio operator, I'm quite familiar with the potential for RF interference - and on any decent amplifier it's the inputs where you usually have trouble, especially a turntable input. I've yet to see RF pick-up on the speaker leads working its way back into the amplifier as a problem that "better" quality cable could solve. Proper RF filtering inside the amplifier is the only real solution here.
 
I don't buy the whole "I get more change in frequency response moving my head a few inches" argument as an excuse in the slightest.

Well, here you don't even have to make a theoretical argument. Two minutes with an equalizer and a bypass switch will demonstrate that these EQ changes are quite audible.

While you're right in theory, any amplifier which became unstable due to the inductance and/or capacitance of 2-5 metres of speaker cable attached to it deserves to go into the bin. Instead of spending a lot of money on exotic cables in such a case, the money could be better spent on a stable amplifier.

Interestingly, you're far more likely to do better with spending LESS. I've never had a decently engineered non-voodoo amp oscillate due to speaker cables. I have seen some so-called High End amplifiers go crazy with some cables, but that's the tradeoff when you buy equipment which is heavily designed but lightly engineered.
 
Well, here you don't even have to make a theoretical argument. Two minutes with an equalizer and a bypass switch will demonstrate that these EQ changes are quite audible.
Something that I've done many hundreds if not thousands of times, and it really is surprising just how sensitive the ear can be to certain types of frequency response variations.

My EQ has a minimum step size of 0.5dB and I actually find that too coarse when making a broadband change - for example, increasing or reducing the entire midrange from 300-3000Hz by 0.5dB is a surprisingly obvious change if the speaker was quite close to flat before. Likewise, an octave-wide change of only 1dB either way in the presence region can entirely shift the character of the speaker from bright and forward to dull and laid back.

On the other hand, narrow-band fluctuations much greater than this (especially in less important frequency ranges), where the 1/3-octave smoothed response is still flat, can tend to go unnoticed.
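To put those step sizes in perspective, the amplitude ratios behind such small dB changes are easy to work out with the standard 20·log10 level definition. A quick sketch (the 0.5dB and 1dB steps are the ones discussed above):

```python
def db_to_amplitude_ratio(db: float) -> float:
    """Convert a level change in dB to a voltage/pressure ratio."""
    return 10 ** (db / 20)

for step in (0.5, 1.0):
    ratio = db_to_amplitude_ratio(step)
    print(f"{step} dB is an amplitude change of about {100 * (ratio - 1):.1f}%")
```

So a "surprisingly obvious" 0.5dB broadband change corresponds to only about a 6% change in sound pressure.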
Interestingly, you're far more likely to do better by spending LESS. I've never had a decently engineered non-voodoo amp oscillate due to speaker cables. I have seen some so-called High End amplifiers go crazy with some cables, but that's the tradeoff when you buy equipment which is heavily designed but lightly engineered.
Or a few snake oil cables that actually have L or C components embedded in them somewhere, which is not unheard of... ;)
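For a rough feel for how a high-capacitance cable can load an amplifier at RF, where stability margins matter, the capacitive reactance it presents is easy to estimate. In this sketch the pF/m figures, the run length, and the test frequency are all illustrative assumptions, not measurements of any particular cable:

```python
import math

def reactance_ohms(capacitance_f: float, freq_hz: float) -> float:
    """Magnitude of capacitive reactance: Xc = 1 / (2*pi*f*C)."""
    return 1.0 / (2 * math.pi * freq_hz * capacitance_f)

LENGTH_M = 5.0  # assumed cable run
FREQ_HZ = 1e6   # assumed frequency, up where feedback loops are running out of gain

# Assumed figures: ordinary zip cord ~50 pF/m vs. an exotic flat/woven geometry ~500 pF/m
for pf_per_m in (50.0, 500.0):
    c = pf_per_m * 1e-12 * LENGTH_M
    print(f"{pf_per_m:.0f} pF/m over {LENGTH_M:.0f} m -> {reactance_ohms(c, FREQ_HZ):.0f} ohm at 1 MHz")
```

An amplifier that omits the usual output isolating inductor and sees only tens of ohms of capacitive reactance near its loop's unity-gain frequency can lose phase margin, which is one plausible mechanism behind the "goes crazy with some cables" behaviour.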
 
Hi,

I don't buy the whole "I get more change in frequency response moving my head a few inches" argument as an excuse in the slightest.

It is not an excuse, it is an observation. Any piece of MLS based test software and a mike can illustrate this in seconds, no steady state signals required either.

Of all the amplifiers I've owned over the years I've never once had one that had any instability problems, and I typically use fairly long runs of speaker cable on the order of 5-8 metres since I usually have the speakers at the opposite end of the room to the equipment.

Oscillation can still result; it does not always cause amplifiers to blow up. Nor is it limited to "lightly engineered" High End equipment. What some may complain about is that some amplifiers in the High End audio market deliberately omit output build-out networks; the benefits and drawbacks of such an approach are debatable...

As a former amateur radio operator I'm quite familiar with the potential for RF interference - and on any decent amplifier it's the inputs where you usually have trouble, especially a turntable input. I've yet to see a case where RF picked up on the speaker leads and fed back into the amplifier was a problem that "better" quality cable could solve. Proper RF filtering inside the amplifier is the only real solution here.

Well, first, most common amplifier designs offer a very direct path for RF picked up by the speaker cables straight into the RF detector - oops, bipolar input stage. Secondly, performance can be strongly degraded long before we get audible radio reception.

As always, things are never as black and white as some would like to make out.

Ciao T
 
Van Damme 2x6.0mm UP LCOFC Hi-Fi Speaker Cable - v506000

3.0 Ohm/km.. is that a good spec?
The cable construction is 7 x 3 x 37 strands of 0.1mm diameter.
That comes to ~6.1mm².
Look up any copper cable, whether stranded or solid single core, OFC or six 9's, and all will show ~3Ω/km for ~6mm².

That tells me that 3Ω/km is neither good nor bad. It simply confirms that the conductor is copper with very few contaminants. Any ~6mm² copper cable should always give about 3Ω/km.
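The arithmetic above is easy to check from the quoted strand geometry and the bulk resistivity of annealed copper (~1.72e-8 Ω·m), using R = ρL/A. A quick sketch:

```python
import math

RHO_COPPER = 1.72e-8  # resistivity of annealed copper, ohm*m

# Quoted construction: 7 x 3 x 37 strands, each 0.1 mm in diameter
strands = 7 * 3 * 37
strand_area_m2 = math.pi * (0.1e-3 / 2) ** 2
total_area_m2 = strands * strand_area_m2

ohms_per_km = RHO_COPPER * 1000.0 / total_area_m2  # R = rho * L / A
print(f"{total_area_m2 * 1e6:.2f} mm^2 -> {ohms_per_km:.2f} ohm/km")
```

which lands close to the quoted figure: any ~6mm² copper conductor will measure near 3Ω/km regardless of branding.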
 
It is not an excuse, it is an observation. Any piece of MLS based test software and a mike can illustrate this in seconds, no steady state signals required either.
I did specifically mention "un-gated FFT" measurements as well as steady state sine-wave measurements. By un-gated I mean using the full FFT length, which depending on the software is typically 100-200ms and is more than long enough to capture the room reflections - thus the measured frequency response will be the same as the steady state response, and will show the same position-dependent variations.

When I referred to a gated response I'm talking about one with a ~6ms gate time, which is long enough to work down to about 200Hz, but short enough to window out most of the reflections. This is roughly equivalent to what the ear is doing above 200Hz. Try doing your MLS test with a gate time of around 6ms and moving the mic around a few inches.
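The relationship between gate length and the lowest resolvable frequency follows from the window needing to contain at least one full cycle; a minimal sketch of that rule of thumb:

```python
def gate_lower_limit_hz(gate_seconds: float) -> float:
    """Rule-of-thumb lowest frequency a gated measurement can resolve:
    the gate must contain at least one full cycle, so f_min ~ 1/T."""
    return 1.0 / gate_seconds

def reflection_gate_seconds(extra_path_m: float, c: float = 343.0) -> float:
    """Gate length that just excludes a reflection whose path is
    extra_path_m longer than the direct sound (speed of sound c m/s)."""
    return extra_path_m / c

print(f"6 ms gate resolves down to ~{gate_lower_limit_hz(0.006):.0f} Hz")
```

A 6ms gate therefore bottoms out around 167Hz, consistent with the "about 200Hz" figure above, and corresponds to excluding any reflection arriving via roughly 2m or more of extra path.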
Well, first, most common amplifier designs offer a very direct path for RF picked up by the speaker cables straight into the RF detector - oops, bipolar input stage. Secondly, performance can be strongly degraded long before we get audible radio reception.
True, but again, like any device that requires RF immunity, it's the job of the RF filtering in the amplifier, not some magical cable. If the amplifier doesn't have at least a toroidal choke and shunt capacitor on every input/output line, plus decent shielding, then you're in trouble.

Substituting one speaker cable for another is not going to help you here if the design is bad, and unfortunately a lot of consumer grade equipment has little or no attention paid to RF suppression and filtering.
 