
Modulus-86: Composite amplifier achieving <0.0004 % THD+N.

No, I think it is essential to evaluate whether cable imbalance will cause variation in sound quality. There are lots of unknown issues which affect sound quality, and I only ask for information I feel is valuable. Since so much discussion went into balanced input filtering, I think the impedance variation of the cable shows that the more significant issues are not being addressed.
Since we don't have access to the actual cable impedance variations, can we put numbers to some assumed variations, so we can compare them to the magnitude of variation that the impedances at the terminations cannot tolerate?
 
While that very well might be the curve for a particular unbalanced interconnect circuit that he measured, no way does it even resemble the audio frequency characteristic impedance of a coax cable.


I agree, the curves do not match characteristic impedance, but we must not forget that these are measurements: when you read the signals back into the instrument, you also have the instrument's input load impedance in parallel. So it is a more realistic picture of the interface.
 
Since we don't have access to the actual cable impedance variations, can we put numbers to some assumed variations, so we can compare them to the magnitude of variation that the impedances at the terminations cannot tolerate?

The impedance scale is linear, starting from 0, in the plot I posted. Most cables will go below 100 ohms at high frequencies, so that gives you some idea. The lower the input impedance at the receiving end, the flatter the impedance will be over the audio range. This is one reason why many instruments use 50 Ω as the standard input impedance. For audio systems, this interface needs to be more carefully optimized.
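A quick numerical sketch of that flattening claim, using an assumed lumped cable capacitance and the two termination values that come up in this thread (all values illustrative, not taken from the poster's measurements):

```python
import math

C_CABLE = 500e-12  # assumed lumped cable capacitance, ~5 m at 100 pF/m (illustrative)

def interface_z(f_hz, r_in):
    """|Z| of the receiver input resistance in parallel with the cable capacitance."""
    xc = 1.0 / (2 * math.pi * f_hz * C_CABLE)  # capacitive reactance
    return (r_in * xc) / math.hypot(r_in, xc)  # |R || jXc|

# 50 ohm instrument input vs. a ~48 kohm audio input (value from later in the thread)
for r_in in (50.0, 48e3):
    print(f"R_in = {r_in:>7.0f} ohm: "
          f"|Z| = {interface_z(20.0, r_in):8.1f} ohm at 20 Hz, "
          f"{interface_z(20e3, r_in):8.1f} ohm at 20 kHz")
```

With this simple model the 50 Ω termination stays essentially flat across the band, while the 48 kΩ termination drops noticeably by 20 kHz, which matches the shape of the argument being made.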
 

That is what I posted: the measurements. I did not consider balanced interconnects at the time of measurement. But if you have listened to Bill Whitlock's seminar, where he explained his experience of different sound because of different-colored insulation, it is not hard to understand how imbalance can occur.
 
The 5534 has pins brought out to the input pair's collector loads.
There are schematics showing how to convert the 5534 to a JFET input.
Would this conversion make the 5534 more tolerant of RFI?
Could the JFET conversion exceed the RFI performance of the OPA134?
I've done this sort of thing, but not with the 5534.

You have to ask whether you need better performance than the OPA134, and whether adding your own i/p stage would degrade the parts of the 5534's performance you like.

If you are going to this trouble & cost, a better solution might be OPA627 which is FET i/p with similar Env to 5532/4. It's also $$$ and has Golden Pinnae cred.
________________

soongsc, the solution to your cable problems is to use OPA627 at all critical points. They will restore the Clarity & Definition lost due to any unbalance and Optimise any interface. :D
 
Blimey. Spend a morning in meetings and it all happens :). I am rubbish at multi-quoting so excuse that

Krglee: Had forgotten about the tab some XLRs have, as the ones I have lying around don't. Plan to tap up a friend who has suitable kit to try some sweeps from DC to daylight to see how different schemes compare. Whilst I agree that you can classify an obvious RF break-in as 'fixed/not fixed', it would be nice to see what the difference is between approaches. After all, when one has an amplifier down in the weeds noise- and distortion-wise, it would be silly not to feed it as clean a signal as we can make :)

Haven't got the mod-86 complete yet, new babies have a habit of really cramping the free time and funds! Plus the balls to take a saw to working speakers!

Andrew: If you can find the Overture spreadsheet, great. I downloaded it once but never saved it. It may still be a 'might work, might not' situation, but new data is always good to have.
 

Overture Design guide and user guide (PDF): View attachment Overture Design Guide.zip

Mike
 
That is what I posted: the measurements. I did not consider balanced interconnects at the time of measurement. But if you have listened to Bill Whitlock's seminar, where he explained his experience of different sound because of different-colored insulation, it is not hard to understand how imbalance can occur.

That would be SCIN. These are from the Jim Brown page:
*****************************************
"Shield-Current-Induced Noise"
Current flowing on the shield of balanced audio cables will be converted to differential mode voltage on the signal pair by imperfections in cable construction.

http://www.audiosystemsgroup.com/Shield_Current_Induced_Noise.pdf
http://www.audiosystemsgroup.com/SCIN-2.pdf
*****************************************
"Common-Mode to Differential-Mode Conversion in Shielded Twisted-pair Cables" (Shield-Current-Induced Noise)
Neil Muncy has shown that audio frequency current flowing on the shield of balanced audio wiring will be converted to differential mode voltage by any imbalance in the transfer impedance of cables, and hypothesized that the effect increases linearly with frequency. Whitlock has shown that conversion also occurs with capacitive imbalance. This paper confirms Muncy's hypothesis, and shows that shield current induced noise can be significant in the MHz range.

http://www.audiosystemsgroup.com/AES-SCIN-ASGWeb.pdf
***********************************************
"A Novel Method of Testing for Susceptibility of Audio Equipment to Interference from Medium and High Frequency Radio Transmitters"
The author has shown that radio frequency (RF) current flowing on the shield of balanced audio wiring will be converted to a differential signal on the balanced pair by a cable-related mechanism commonly known as Shield-Current-Induced Noise. This paper investigates the susceptibility of audio input and output circuits to differential signals in the 200 kHz - 2 MHz range, with some work extending to 300 MHz. Simple laboratory test methods are described, equipment is tested, and results are presented. Laboratory data are correlated with EMI observed in the field.

http://www.audiosystemsgroup.com/AESPaperNY-SCIN-ASGWeb.pdf
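A back-of-envelope sketch of the SCIN mechanism those papers describe (all numbers here are illustrative assumptions, not values taken from the papers): a small imbalance dM in shield-to-conductor mutual coupling converts shield current into a differential voltage that rises linearly with frequency, which is why the effect becomes significant in the MHz range.

```python
import math

DM = 1e-9        # assumed 1 nH mutual-inductance imbalance over the cable run
I_SHIELD = 1e-3  # assumed 1 mA of shield current

def scin_vdiff(i_shield_a, f_hz, dm_henry):
    """Differential voltage induced by shield current via coupling imbalance."""
    return i_shield_a * 2 * math.pi * f_hz * dm_henry

# negligible at audio frequencies, tens of microvolts by the MHz range
for f in (1e3, 100e3, 10e6):
    print(f"{f:>10.0f} Hz: {scin_vdiff(I_SHIELD, f, DM) * 1e6:10.3f} uV differential")
```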
 
No, I think it is essential to evaluate whether cable imbalance will cause variation in sound quality. There are lots of unknown issues which affect sound quality, and I only ask for information I feel is valuable. Since so much discussion went into balanced input filtering, I think the impedance variation of the cable shows that the more significant issues are not being addressed.

I personally rather doubt that a few mΩ of impedance mismatch between the two leads in the cable will provide any measurable degradation when loaded by 48 kΩ of input impedance. Any imbalance in the input impedance will swamp out the imbalance of the cable and any imbalance of the input impedance is already covered by the measurements I post on my website.
That's not to say that you won't be able to detect a difference in sound quality in a sighted test. However, that's likely due to confirmation bias rather than any actual difference.

If you want to explore the effects of imbalance in the two differential leads, why don't you just set up an experiment where you introduce a known imbalance and run a double blind ABX test?

With an imbalance in resistance between the two leads in the differential input, you will be able to measure a degradation in CMRR. I've measured it... You can also easily calculate the amount of CMRR degradation by grabbing an op-amp reference, such as Franco, and the equivalent-circuit schematic for the THAT1200, and doing the math, if you find it interesting. Or run a simulation if math isn't your cup of tea. You're the one with the burning desire to figure these things out. You're the one who "feels" this is important. Do it! While you go do that, I'll develop more circuits and move the field forward for everybody's benefit.
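As a rough illustration of the kind of math being suggested (a simplified model, not the actual THAT1200 analysis): for a balanced input with common-mode input impedance Z_cm per leg, a source-resistance imbalance dR limits the achievable CMRR to roughly 20·log10(Z_cm/dR). The Z_cm values below are order-of-magnitude assumptions.

```python
import math

def cmrr_limit_db(z_cm_ohm, delta_r_ohm):
    """Approximate CMRR ceiling set by a source-resistance imbalance delta_r."""
    return 20 * math.log10(z_cm_ohm / delta_r_ohm)

# Assumed Z_cm: ~24 kohm for an ordinary resistor-based balanced input,
# ~10 Mohm for a bootstrapped (InGenius-style) input stage.
for z_cm in (24e3, 10e6):
    for dr in (1.0, 10.0):
        print(f"Z_cm = {z_cm:>10.0f} ohm, dR = {dr:4.1f} ohm "
              f"-> CMRR limit ~ {cmrr_limit_db(z_cm, dr):6.1f} dB")
```

The point the model makes: the higher the common-mode input impedance of the receiver, the less a given ohm-level imbalance in the cable or source degrades CMRR.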

Does this sound like a reasonable divide and conquer approach?

Tom
 
The impedance scale is linear, starting from 0, in the plot I posted. Most cables will go below 100 ohms at high frequencies, so that gives you some idea. The lower the input impedance at the receiving end, the flatter the impedance will be over the audio range. This is one reason why many instruments use 50 Ω as the standard input impedance. For audio systems, this interface needs to be more carefully optimized.

Sorry dude. This is flat-out wrong.

For RF (is the emphasis clear enough?), the choice of characteristic impedance of a cable is a tradeoff between power handling and loss. For a 10 mm diameter cable with air dielectric, the characteristic impedance should be 30 Ω if you're optimizing for power handling and 77 Ω if you're optimizing for low loss (Microwaves 101). On test equipment, 50 Ω is commonly used as a compromise between these two values (except for video test equipment, which uses 75 Ω).
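Those two numbers can be reproduced from the standard air-dielectric coax formula Z0 = 60·ln(D/d) Ω; the D/d ratios below are the textbook optima (quoted from memory, so treat them as approximate):

```python
import math

def coax_z0(ratio, eps_r=1.0):
    """Characteristic impedance of a coax with shield/centre diameter ratio D/d."""
    return 60.0 / math.sqrt(eps_r) * math.log(ratio)

print(f"power-optimal D/d ~ 1.65 -> Z0 = {coax_z0(1.65):.1f} ohm")  # ~30 ohm
print(f"loss-optimal  D/d ~ 3.59 -> Z0 = {coax_z0(3.59):.1f} ohm")  # ~77 ohm
```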

None of this is relevant for audio, however. Transmission line theory starts becoming relevant once the wavelength of the signal you're dealing with starts to approach the length of the transmission line itself. So for audio frequencies (20 kHz), you need to start thinking in terms of characteristic impedance once the cable length approaches about 10-15 km (λ = c/f). Some might argue one tenth of that. Still, the cables used in typical residential settings are 3-4 orders of magnitude shorter than this.
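Putting numbers on the wavelength argument (free-space propagation assumed; the in-cable velocity factor only makes the wavelength shorter):

```python
C = 3.0e8  # free-space propagation speed, m/s

def wavelength_m(f_hz):
    """Wavelength of a signal at frequency f_hz."""
    return C / f_hz

lam = wavelength_m(20e3)
print(f"wavelength at 20 kHz: {lam / 1000:.0f} km")       # 15 km
print(f"one tenth of that:    {lam / 10 / 1000:.1f} km")  # still over a kilometre
```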

Tom
 
I personally rather doubt that a few mΩ of impedance mismatch between the two leads in the cable will provide any measurable degradation when loaded by 48 kΩ of input impedance. Any imbalance in the input impedance will swamp out the imbalance of the cable and any imbalance of the input impedance is already covered by the measurements I post on my website.
That's not to say that you won't be able to detect a difference in sound quality in a sighted test. However, that's likely due to confirmation bias rather than any actual difference.

If you want to explore the effects of imbalance in the two differential leads, why don't you just set up an experiment where you introduce a known imbalance and run a double blind ABX test?

With an imbalance in resistance between the two leads in the differential input, you will be able to measure a degradation in CMRR. I've measured it... You can also easily calculate the amount of CMRR degradation by grabbing an op-amp reference, such as Franco, and the equivalent-circuit schematic for the THAT1200, and doing the math, if you find it interesting. Or run a simulation if math isn't your cup of tea. You're the one with the burning desire to figure these things out. You're the one who "feels" this is important. Do it! While you go do that, I'll develop more circuits and move the field forward for everybody's benefit.

Does this sound like a reasonable divide and conquer approach?

Tom
I think you fail to recognise that the impedance will be much lower than 47 kΩ as you gradually approach 20 kHz and above, and the difference will not be only a few mΩ in the range where the cable impedance starts to dominate the total impedance.
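For concreteness, here is what a 47 kΩ input resistance in parallel with an assumed 200 pF of cable capacitance looks like across the band (the capacitance is an illustrative value, not a measurement from this thread):

```python
import math

R_IN = 47e3        # amplifier input resistance from the discussion above
C_CABLE = 200e-12  # assumed ~2 m of 100 pF/m cable (illustrative)

def z_seen(f_hz):
    """|Z| the source sees: R_IN in parallel with the cable capacitance."""
    xc = 1.0 / (2 * math.pi * f_hz * C_CABLE)
    return R_IN * xc / math.hypot(R_IN, xc)

for f in (1e3, 10e3, 20e3):
    print(f"{f / 1e3:5.0f} kHz: |Z| = {z_seen(f) / 1e3:6.1f} kohm")
```

Under these assumptions the load does fall toward the top of the band, but it stays in the tens of kilohms, so both sides of the disagreement can check their numbers against a model like this.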
 
Plan to tap up a friend who has suitable kit to try some sweeps from DC to daylight to see how different schemes compare. Whilst I agree that you can classify an obvious RF break-in as 'fixed/not fixed', it would be nice to see what the difference is between approaches. After all, when one has an amplifier down in the weeds noise- and distortion-wise, it would be silly not to feed it as clean a signal as we can make :)
Bill, why don't you build a Calrec CB radio?

I can guarantee that if your gear is silent with this, it will pass ANY RFI/EMI test with flying colours.

See Figure 11 of the LMV851 App Note. Maybe the designers of the OPA134 got lucky. I'd prefer to see data before jumping to conclusions, though.
Finally got to read this properly.

I'm not sure their EMIRR is relevant to most of us. IM not so LE, RFI is usually a Yah/Nay thing (though I've mentioned one marginal case) so their linear-looking susceptibility is highly suspicious.

Their direct injection into inputs bla bla is also suspect.

An important caveat: your layout, grounding & earthing scheme is a MAJOR factor in RFI ... at least as important as the choice of OPA.

soongsc, I wasn't joking about OPA627. If you replace whatever OPAs you use with OPA627, I GUARANTEE you and your Golden Pinnae listening panel AND also any true golden pinnae will hear significant differences. :)

If you can't afford OPA627, I can sell you some Balancing Paint that when painted on even coaxial or RCA cables will fix any imbalance and the sound deterioration due to it. Send me US$100 in used bank notes (no Confederate money please) for a small sample. You will not be disappointed.
 
Sorry dude. This is flat-out wrong.

For RF (is the emphasis clear enough?), the choice of characteristic impedance of a cable is a tradeoff between power handling and loss. For a 10 mm diameter cable with air dielectric, the characteristic impedance should be 30 Ω if you're optimizing for power handling and 77 Ω if you're optimizing for low loss (Microwaves 101). On test equipment, 50 Ω is commonly used as a compromise between these two values (except for video test equipment, which uses 75 Ω).

None of this is relevant for audio, however. Transmission line theory starts becoming relevant once the wavelength of the signal you're dealing with starts to approach the length of the transmission line itself. So for audio frequencies (20 kHz), you need to start thinking in terms of characteristic impedance once the cable length approaches about 10-15 km (λ = c/f). Some might argue one tenth of that. Still, the cables used in typical residential settings are 3-4 orders of magnitude shorter than this.

Tom
The fact is the cable impedance is not constant in the audio range. I have shown my measurements, and you are welcome to prove me wrong with measurements.
 