My Balanced Line Receiver

Moderator
Joined 2003
Paid Member
The output can be configured as ground sensing (see above) or as balanced (meaning hot and cold outputs with equal impedances, not two signals with opposite phase)
Hi Alex,
I'm looking for a line driver for the Zeus amp. It should be able to drive the 600 ohm input transformer.
https://www.diyaudio.com/community/threads/zero-feedback-impedance-amplifiers.42259/post-488194
The current configuration, with the balanced DAC output driving the transformer through 100 ohm, doesn't work. 100 ohm seems to be too much.
1. You state "not two signals with opposite phase". I guess that's a problem?
2. Is the output impedance low enough?

I'd prefer to keep everything balanced from input to output, so I'm hoping this driver could do the trick?

Hugo
 
Member
Joined 2009
Paid Member
Driving the Zeus with a bunch of opamps?

1. Unlike some BTL amps, a transformer's primary does not need two opposite-phase signals; one is enough.
2. In the stock schematic, the balanced output impedance is 44 ohm - it is just two resistors 22 ohm each:
1690997076168.png

The output impedance of the opamp is effectively zero thanks to NFB. R17 decouples the opamp from possible capacitive loads (e.g. cables) that otherwise might affect stability. R18 provides an impedance on the "cold" wire identical to that on the "hot", which helps the common mode rejection of the downstream device.

Transformers generally like being driven from low (or even negative) impedance sources, both for distortion and for frequency response flatness. You can reduce R17/R18 as long as the opamp remains stable with the load, including when clipping and on fast transients. I think it should be easy to make it work.
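To put rough numbers on it, here is a quick sketch of what the stock 44 ohm source does into the transformer. The 600 ohm load and the 10 H primary inductance are assumptions for illustration, not measured values of the Zeus transformer:

```python
# Rough numbers for driving a 600 ohm input transformer from the stock
# 44 ohm balanced output (two 22 ohm build-out resistors).  The 600 ohm
# load and the 10 H primary inductance are assumptions for illustration,
# not measured values of the Zeus transformer.
import math

R_source = 2 * 22.0   # ohm, R17 + R18 in the stock schematic
R_load = 600.0        # ohm, assumed transformer input impedance
L_primary = 10.0      # henry, assumed primary inductance

# Level lost in the divider formed by the build-out resistors and the load.
loss_db = 20 * math.log10(R_load / (R_load + R_source))
print(f"insertion loss: {loss_db:.2f} dB")      # about -0.6 dB

# Low-frequency -3 dB corner: the primary inductance works against the
# source resistance in parallel with the load.
R_seen = R_source * R_load / (R_source + R_load)
f_low = R_seen / (2 * math.pi * L_primary)
print(f"LF corner: {f_low:.2f} Hz")             # lower source Z pushes this down
```

So even the stock build-out costs only a fraction of a dB into 600 ohm; lowering R17/R18 mainly improves the low-frequency behaviour and the damping of the transformer.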

But what doesn't work with just the trafo?
 
  • Thank You
Reactions: 1 user
Let me see if I understand the design goal here. You've reduced the already-well-below-audible THD by an additional 0.00028% compared to a THAT1206, but have thrown away 37dB of CMRR to do it? Are you aware that the main, and only, point of balanced transmission is common-mode rejection? You may never run into this in smaller installations, but in a studio, 53dB of CMRR is NOT adequate. I question the validity of the trade-off.
 
  • Like
Reactions: 2 users
Administrator
Joined 2004
Paid Member
I think that is a bit harsh, and while true - have you looked at the typical op-amp based balanced input circuit? A lot of them came with 1% tolerance resistors and achieved considerably less than 40dB CMRR - enough to still be useful in a lot of instances. I like the THAT Corp balanced drivers and receivers and have used the 1206 and others in my designs in the past. (I am not doing much audio design these days.)

Careful matching of all of the complementary resistors in the balanced input circuit would likely improve the CMRR. The lack of input capacitors is a nice feature, as long as the input current is limited by suitable means if the input is driven with no power present.
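As a rough sanity check, the textbook worst-case bound for a simple four-resistor difference receiver makes the same point. A minimal sketch, assuming unity gain and an ideal op-amp:

```python
# Textbook worst-case CMRR estimate for a four-resistor difference receiver:
# CMRR_min ~ (1 + G) / (4 * t), with G the differential gain and t the
# resistor tolerance.  This is the guaranteed figure; real boards usually
# do better because resistors from one reel track each other more tightly.
import math

def worst_case_cmrr_db(gain: float, tolerance: float) -> float:
    return 20 * math.log10((1 + gain) / (4 * tolerance))

for tol in (0.01, 0.001, 0.0005):   # 1%, 0.1%, 0.05% (array section match)
    print(f"tol {tol*100:.2f}%: >= {worst_case_cmrr_db(1.0, tol):.1f} dB")
# 1% -> ~34 dB, 0.1% -> ~54 dB, 0.05% -> ~60 dB worst case
```

Parts taken from one reel usually track far more tightly than the tolerance band, which is why real boards often beat these guaranteed figures.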
 
Last edited:
  • Like
Reactions: 1 user
I've built many. One "trick" is not to use individual resistors, but a laser-trimmed R-pack. The total resistance value might be 1%, but the section-to-section match is far better. But the monolithic solutions exist because this has always been an issue, and they have existed since the PMI/SSM days. There's simply no need to try to roll your own.

Harsh as it may seem, the one purpose of balancing is CMRR. It does not provide a thermal noise advantage, and never has. If you want the cleanest interface, unbalanced is the solution. If you don't need CMRR in the first place, unbalanced will outperform any pair of balanced interface devices. Always. And for short interconnects, say 1-2 meters, unbalanced works just fine.
 
Member
Joined 2009
Paid Member
@jaddie: first you say you need better CMRR, then in the next post an unbalanced connection with zero CMRR works fine for you. It would be interesting to look at one of your builds as an example and see what choices you made and why, and what outcomes they brought about. Are there any measurements you could share?

In any case, this is a topic I'd like to explore deeper, and on which I very much welcome opinions, so I am happy you raised the issue. But let me unpack it a little.

First, "well below audible" THD is not the same as inaudible. I learned this while debugging some power and headphone amps with ultra-low THD, as in below -140dBc (of which Omicron has been published on this forum). An SPL calculation tells you that either way the distortion products are below the auditory threshold, yet you can clearly hear the difference. Don't take my word for it: build a copy of Omicron on a solderless breadboard and hear for yourself. Or look at @tomchr's Modulus series of amplifiers. Tom used the THAT1200 initially but designed it out (see e.g. here, here, or here) in later versions.
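The SPL arithmetic I have in mind is nothing fancier than this, with an assumed playback level:

```python
# Back-of-envelope SPL arithmetic: the 105 dB SPL peak level is an assumed
# (loud) playback level, not a measurement.
peak_spl = 105.0          # dB SPL, assumed playback peaks
distortion_dbc = -140.0   # relative level of the distortion products
print(f"distortion products at about {peak_spl + distortion_dbc:.0f} dB SPL")
# -> about -35 dB SPL, far below the ~0 dB SPL threshold of hearing
```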

Second, as @kevinkr pointed out, the CMRR at lower frequencies largely depends on resistor matching. The measurements above were taken on a board with 1% resistors. I feel that -53dB is quite good compared to what can be achieved with a single ended connection and e.g. a ground loop breaking resistor aka GLBR. Still, choosing resistors with a tighter tolerance is an easy fix. Out of curiosity, I built one board with 0.1% resistors, and the results were, predictably, about an order of magnitude better:
Common Mode Matched Impedances.png
Common Mode 10ohm Mismatched Impedances.png
Common Mode 600ohm Mismatched Impedances.png
If one wants to go extreme, matched resistor networks such as the LT5400 are available, but IMO that would be overkill for a line-level DIY audio project.
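For the curious, here is a small Monte Carlo sketch of the same trade-off. It assumes a plain unity-gain four-resistor receiver with an ideal op-amp and resistors drawn uniformly within their tolerance band (real distributions are tighter), so treat the numbers as indicative only:

```python
# Monte Carlo sketch of a unity-gain four-resistor difference receiver:
# each resistor is drawn uniformly within its tolerance band (an assumption;
# real distributions are narrower) to see where the CMRR typically lands.
import numpy as np

rng = np.random.default_rng(0)

def cmrr_db(tol: float, n: int = 100_000, r: float = 10e3) -> np.ndarray:
    r1, r2, r3, r4 = (r * (1 + rng.uniform(-tol, tol, n)) for _ in range(4))
    a_diff = 0.5 * (r2 / r1 + (1 + r2 / r1) * r4 / (r3 + r4))   # ~1
    a_cm = (1 + r2 / r1) * r4 / (r3 + r4) - r2 / r1             # mismatch leakage
    return 20 * np.log10(np.abs(a_diff / a_cm))

for tol in (0.01, 0.001):
    d = cmrr_db(tol)
    print(f"tol {tol*100:.1f}%: median {np.median(d):.0f} dB, "
          f"worst 1% of builds {np.percentile(d, 1):.0f} dB")
```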

Third, the CMRR at higher frequencies depends on matching not only the resistors but also the input EMI filter parts, including capacitors. Bill Whitlock is super clear on this in his patent underlying the THAT1200:
1692067055840.png
The THAT1200 and other line receiver ICs benefit from matched resistors on the chip, but the EMI filter is still discrete, and I wish anyone luck finding capacitors with better than 1% tolerance. With that in mind, I would be interested to see a CMRR vs frequency chart (like the one above) for a good piece of studio equipment.
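To put a rough number on how quickly capacitor mismatch eats into high-frequency CMRR, here is a sketch; the filter values (1 kohm and 1 nF per leg) and the 2% mismatch are assumptions for illustration, not taken from any datasheet:

```python
# HF CMRR ceiling set by mismatch in the input RC (EMI) filters.
# The filter values and the 2% capacitance mismatch are assumptions.
import numpy as np

R = 1e3        # ohm, series resistor per leg (assumed)
C = 1e-9       # farad, shunt capacitor per leg (assumed)
delta = 0.02   # total capacitance mismatch between legs (assumed)

f = np.array([1e3, 10e3, 20e3, 100e3])
h_hot = 1 / (1 + 2j * np.pi * f * R * C)
h_cold = 1 / (1 + 2j * np.pi * f * R * C * (1 + delta))
cmrr_db = 20 * np.log10(np.abs(h_hot) / np.abs(h_hot - h_cold))
for fi, ci in zip(f, cmrr_db):
    print(f"{fi/1e3:>6.0f} kHz: CMRR limited to ~{ci:.0f} dB")
# the ceiling falls at roughly 20 dB/decade below the filter corner
```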

Fourth, and most practical: the THAT1200's CMRR is supposed to really shine with mismatched hot and cold source impedances. THAT Corp. calls it InGenius, but it is just bootstrapping that raises the common-mode input impedance. (I use a humble high-value resistor for that purpose.) I feel that this feature is a bit overrated. In a studio environment, I imagine one can expect balanced outputs with reasonably matched hot and cold impedances, so a superior ability to ignore an impedance mismatch has little practical significance. In a home system, where single-ended sources are common, one may never run into an issue because, as @jaddie mentioned, the installation is small. Do you think this feature is needed, desirable, and worth the additional complexity?
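The bootstrapping effect itself is easy to put numbers on. A rough sketch of the common-mode divider, where the two common-mode input impedances and the 10 ohm source imbalance are assumptions for illustration:

```python
# Common-mode divider view of source-impedance imbalance.  The common-mode
# input impedances (a plain resistor-based receiver vs a bootstrapped one)
# and the 10 ohm imbalance are assumptions for illustration.
import math

def cmrr_limit_db(z_cm: float, imbalance: float) -> float:
    # CM -> DM conversion ~ imbalance / z_cm, so the ceiling is z_cm / imbalance
    return 20 * math.log10(z_cm / imbalance)

imbalance = 10.0   # ohm difference between hot and cold source legs (assumed)
for label, z_cm in (("~25 kohm (plain receiver)", 25e3),
                    ("~10 Mohm (bootstrapped)", 10e6)):
    print(f"{label}: ceiling ~{cmrr_limit_db(z_cm, imbalance):.0f} dB")
# 25 kohm -> ~68 dB, 10 Mohm -> ~120 dB with a 10 ohm imbalance
```

So the feature clearly helps when the source legs are unbalanced; the question is whether that happens often enough in practice to matter.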
 
I explained why high CMRR is necessary, AND why in some cases no CMRR is necessary. Did you miss the explanation?

First point: THD at these figures is not audible. You might be confusing some other audible property, but harmonic distortion at any of those figures is not audible. This gets into a discussion of test signals, test methodology, and bias management that we might not want to have here. But assuming the application is music, there are no music signals that permit audibility of these levels of distortion, and none that haven't already been far more distorted along the way to release. Masking swamps harmonics. Again... another discussion.

Second point: Yes, resistor match is the problem, and that's why you don't build it with 1% resistors. -53dB CMRR is NOT adequate for professional applications. Absolutely not. Haven't you looked at resistor arrays? Is a 0.05% section match interesting?
Here's one example:
https://www.digikey.com/en/htmldatasheets/production/9832041/0/0/1/tdpt16032002auf

But why bother when the monolithic solution is already better...and done?

You might or might not need EMI filtering; it's application-specific.

My point is, you don't need ANY CMRR in small systems with 1-2 meter interconnects and proper system grounding. You just don't. But if you're building large systems, studios, etc., CMRR becomes essential very quickly. You might feel the THAT1200 series is "overrated", but it's used by the thousands in professional systems. You're already listening to dozens of them in series.
 
Neurochrome.com
Joined 2009
Paid Member
I feel that -53dB is quite good compared to what can be achieved with a single ended connection and e.g. a ground loop breaking resistor aka GLBR. Still, choosing resistors with a tighter tolerance is an easy fix. Out of curiosity, I built one board with 0.1% resistors, and the results were, predictably, about an order of magnitude better:
Any CMRR is better than no CMRR. I agree there. I'm surprised you're not seeing better than about 50 dB CMRR even with ±1% resistors, though. I wonder if the DC servo is mucking things up for you.

I do understand that 40 dB is the best you can guarantee with ±1% matching, but I've always found the matching of four resistors on the same tape to be much tighter than ±1% - at least as judged by the CMRR of the implemented differential receiver. I don't recall seeing a big improvement going from 1% to 0.1% resistors. The main reason I specify 0.1% resistors in critical places is so that it can't come back to bite me later.

I'm not sure where in Whitlock's patent(s) your snippet came from. I'm thinking it's from the prior art section of the patent. I think we can all agree that good matching of the impedances in the differential receiver is key for good CMRR. At least if these impedances influence the gain of the circuit. I think you may be missing the central part of Whitlock's patent, however: The common-mode bootstrap is where the secret sauce lies. That's the invention. That's how the THAT1200 achieves such good CMRR across frequency. The on-die resistor matching helps too, but unless THAT Corp uses a semiconductor process optimized for analog electronics (which they probably do) the resistor matching on die won't be any better than what you could do with discrete components and careful circuit layout.

Kudos to you for measuring your circuits, though. I appreciate that. Too many rely exclusively on simulations and forget to check their work. Keep it up!

Tom
 
  • Like
Reactions: 1 users
The only real issue with the THAT InGenius devices is noise. The driver (1646) is -101dBu and the corresponding (1200 series) receiver is around -106dBu.

You can do a whole lot better than that with opamps (around -120dBu), but lose the superb CMRR of the THAT units. As Tom says you end up in the territory of matched or tight tolerance resistors with low or matched tempco, or tight tolerance resistor networks (although most or all of these seem to be thick film - not the best technology for audio).
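Since uncorrelated noise sources add as powers, the combined figures work out roughly like this (just arithmetic on the numbers quoted above):

```python
# Power-summing the quoted output noise figures: uncorrelated sources add
# as powers.  The levels are those quoted above; the arithmetic is the point.
import math

def sum_dbu(*levels_dbu):
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_dbu))

print(f"THAT1646 driver + 1200-series receiver: {sum_dbu(-101, -106):.1f} dBu")
print(f"opamp driver + receiver (~-120 dBu each): {sum_dbu(-120, -120):.1f} dBu")
```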

Craig
 
  • Like
Reactions: 1 user
The total dynamic range of a THAT driver and receiver combo is over 21 bits, 20 kHz BW, unweighted. They're made for pro audio levels and systems. 128dB of DR is not reproducible in anything but a very special acoustic environment. However, common-mode noise down 75dB or so is.
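That figure is just the usual ~6 dB-per-bit conversion:

```python
# 128 dB of dynamic range expressed as equivalent bits (~6.02 dB per bit).
dr_db = 128.0
print(f"{dr_db / 6.02:.1f} equivalent bits")   # about 21.3
```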

You take your pick and apply technology that makes sense in the specific conditions.

IMO, bothering about THD below 0.01% is tilting at windmills because more than that is baked into all your recordings, and much, much more is added by your speakers. And it’s all masked by the music.
 
  • Like
Reactions: 1 user
Not to mention the many and various distortions involved in scraping a bit of diamond over a vinyl groove. But nonetheless it manages to sound glorious.

Low order distortions (particularly 2nd and 3rd) are harmonious with all major and minor western scales and are benign. But high order harmonics are truly noxious; manufacturers of first-rate pianos take acoustic measures to suppress high order harmonics for that reason.
 
Member
Joined 2009
Paid Member
Let me conduct a little experiment. Let's say we have an amplifier that adds 0.01% of the 2nd harmonic (H2) and 0.001% of the 3rd (H3):
1692241964387.png

The distortion is not particularly high and is all low order, harmonious and benign, perhaps even euphonic.

Now let me play a simple chord, A-C#-E:
1692242317361.png

Oops. In addition to H2 and H3, our benign test amplifier sputters out a bunch of intermodulation products, musically unrelated to the chord, with levels comparable to or in some cases above those of H2 and H3. These are not euphonic and would not be masked by music. Such an amplifier might be OK for simple material such as a solo vocal, but would get confused by anything moderately complex, and would mush a full orchestra.

So, while various components of the audio chain undoubtedly distort, 0.01% THD does not give your amplifier transparency, and you do get audible benefits from making one of them more linear.
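For anyone who wants to reproduce the experiment, here is a sketch of it in Python. The memoryless polynomial nonlinearity and the exact note frequencies are my modelling assumptions, with the coefficients chosen so that a full-scale single tone shows roughly 0.01% H2 and 0.001% H3:

```python
# Sketch of the experiment above: a memoryless nonlinearity set for roughly
# 0.01% H2 and 0.001% H3 on a unit sine, fed with an A-C#-E chord.
# For y = x + a2*x^2 + a3*x^3 and a unit sine, H2 = a2/2 and H3 = a3/4,
# hence a2 = 2e-4 and a3 = 4e-5.
import numpy as np

fs, n = 96_000, 1 << 18
t = np.arange(n) / fs

def on_bin(f):                      # snap to an exact FFT bin to avoid leakage
    return round(f * n / fs) * fs / n

chord = [on_bin(f) for f in (440.0, 554.37, 659.26)]   # A4, C#5, E5
x = sum(np.sin(2 * np.pi * f * t) for f in chord) / 3  # keep the peak near 1

a2, a3 = 2e-4, 4e-5
y = x + a2 * x**2 + a3 * x**3       # the "benign" low-order nonlinearity

spec = np.abs(np.fft.rfft(y)) / (n / 2)
freqs = np.fft.rfftfreq(n, 1 / fs)
ref = spec[np.argmin(np.abs(freqs - chord[0]))]        # level of the A

def level_dbc(f):                   # level relative to the A fundamental
    return 20 * np.log10(spec[np.argmin(np.abs(freqs - f))] / ref)

f1, f2, f3 = chord
for name, f in [("2*f1 (H2 of A)", 2 * f1),
                ("f2 - f1", f2 - f1),
                ("f3 - f1", f3 - f1),
                ("f1 + f2", f1 + f2),
                ("2*f1 - f2", 2 * f1 - f2)]:
    print(f"{name:>14s} at {f:7.1f} Hz: {level_dbc(f):6.1f} dBc")
# The difference tones (f2-f1, f3-f1, ...) land a few dB above the per-tone
# H2, at frequencies musically unrelated to the chord.
```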
 
  • Like
Reactions: 1 users
Oops. Someone needs to google "psychoacoustic masking".

And your example never happens in real life anyway. No actual musical instrument produces a chord made up of 3 pure sine waves. Once you throw in the harmonics made by real instruments (plus all the other stuff in real sound), and you apply the principles of masking, your argument fails completely.
 
  • Like
Reactions: 1 user
Member
Joined 2009
Paid Member
Someone needs to pause googling and actually listen to some music through some gear ;)

Ever since I got interested in making more linear amplifiers, I hear comments along the lines of "linearity makes no audible difference". Yet, if you actually try (I did), both distortion and intermodulation products and - remember what this thread is about? - common-mode noise can clearly be heard, even with imperfect speakers or headphones.

To give you one example, some years ago I designed a board for the LM3886 that has 0.0018% THD at 1kHz, 18V RMS into 8ohm. Another well known board, similar in size and layout, has 0.0052% under the same conditions, mostly because of its trace routing. The difference between these two boards can clearly be heard and is not masked even by simple musical material.

Now, some people may prefer the (slightly) distorted sound, and linearity is not just a single THD+N number (ever tried measuring it? THD depends on the measurement conditions, which no one here mentioned), but that does not invalidate the point that more linear gear sounds different. And when someone hears a difference, it's a real experience, even if Google tells you there shouldn't be any.
 
Last edited:
It is extremely rare that a synth (yes, it's a real musical instrument) patch is made up of ultra-pure sine waves that are not processed by some other modifier. True of all synth technologies. Sine waves from analog synths are not pure at all, and sine waves from digital synths also have impurities, though different ones.

Let me emphasize this again: pure sine waves are extremely rare in a synth. A sine wave, even one with 0.5% distortion, is a very bland, unexpressive sound that doesn't occur naturally in any acoustic instrument. It barely exists in electronics, and arguably does not exist once the tone is played through speakers. Transducers, especially speakers, are massively nonlinear, and produce over 1% THD without any problem. So, no, any test signal made with one or more sine waves is invalid.

Even more rare is that such a synth patch would exist as a solo with absolutely no other sound around it. Masking is powerful; look it up. The principle is that a low-level signal (fundamental or harmonic) becomes inaudible in the presence of a higher-level signal within a given masking frequency curve. All lossy codecs are based on the principle of masking. Some even work well enough to be transparent.

I have never once said "linearity makes no audible difference". That's not the point at all. Linearity is not a binary characteristic; it varies completely with the device, level, and frequency. THD numbers don't even reflect the complete reality of nonlinear distortion, because a single number doesn't reflect level or spectrum. However, vanishingly low THD numbers obtained over a wide range of levels and frequencies - those below 0.005% - are in fact inaudible for many reasons, and masking is the major swamping factor. We don't need to argue this; it's well researched by every codec developer.

So when you use a sine wave to judge linearity, but you're hearing it through a distortion generator producing 1% or more (a transducer), that's just not a valid test. Then, when music is the test signal, the signal is already rich in spectral density, and self-masking. It takes massive amounts of distortion to become clearly audible to every listener, less for trained listeners.

Now, I didn't want to get into this, but clearly it's inevitable: testing. When testing for audibility of anything where the effect is extremely small, it is vital that all biases and variables be considered carefully and brought under control. This is not trivial. The test methodology known as ABX/DBT must be followed strictly. Any direct A/B test (even one you might think is blind) is not bias-controlled. When we're dealing with minute, barely audible effects, bias swamps just about everything. So you absolutely must adhere to the protocol. Otherwise you don't have valid evidence, you have subjective opinion.
 
  • Like
Reactions: 1 user