Open-source USB interface: Audio Widget

USB is the most obvious interface. It allows data and clock to travel in opposite directions. Its main drawback is the lack of Windows drivers.

My hope is that one day soon an audiophile Windows USB expert joins our efforts :)

Alex does have some good points. The project is mainly for hackers. But it is possible to buy a kit which plays out of the box and needs no technical interaction.

Børge
 
I think this is an unfortunate line to take, there's plenty to indicate that 44k1/16 is perfectly adequate and that a 'superior SQ' is not achievable.

To describe somebody who is 'satisfied with ... 44.1/16 CD quality music' as a 'casual user' is to denigrate the judgement of the many users and professionals who have assembled and weighed up the evidence in a far from casual manner.

Doubtless, superior paper performance is achievable, and a performance margin is desirable, but encouraging people to believe that an audible superiority is achievable is to fly in the face of the available evidence.

I've thought a few times about offering assistance to what has appeared to be a very worthwhile project but I'm certainly not encouraged to put my skills behind a development driven by ideology. To my mind the purpose of a device such as the audio widget would be to put audible differences beyond question, not to encourage further in-fighting and an overkill 'arms race' in an already confused and confusing arena.

Please Counter Culture - don't count yourself out. There will always be all kinds of animals under the heavens - and it's OK - it helps to keep things dynamic in a group context. I have lots of ordinary CDs which are of very good quality and I also have lots that are of lesser quality. The lesser ones are perfect as remastering candidates - if and when time is given (or taken, sometimes). I have only done two so far but I got them to sound fine to me - sorry for the slight OT...

There will always be different camps, but as long as the discussions are kept on topic and everyone understands that UAC2, implementation-wise, is a moving target, this will undoubtedly be a successful project.

Brgds
 
USB is the most obvious interface. It allows data and clock to travel in opposite directions. Its main drawback is the lack of Windows drivers.

My hope is that one day soon an audiophile Windows USB expert joins our efforts :)
FireWire Audio also allows data and clock to travel in opposite directions. An advantage that FireWire has is that it is peer to peer, rather than host to device, so you can sometimes connect devices without a computer. There's even the possibility that FireWire could replace SPDIF in this aspect as a component to component audio transport.

I have no idea what the driver situation is like for Windows, though.
 
it would be interesting to create a board that has all three (USB, FW, EN), even though that would be even harder.
Sure, it may be interesting to do something like that (it would definitely be a big technical challenge...). But, from a practical point of view, I don't quite see the point in doing that.

For our purposes, USB and FW would be practically equivalent. AFAICT, any machine which has an FW interface also has some USB ones too. Unless windoze has native drivers for async FW audio (does it? I doubt it), we would need a driver anyway. So what would be the advantage?

Besides, IMHO FW is kind of vanishing, if not dead already.

FW400 was about on par with USB2, which is cheaper and simpler (and way more common). With USB3 already around, even the newer (and rare) FW800 has become basically useless. The Apple Mac was about the only architecture which was kind of "pushing" for it. But recently I've even seen new Macs which do not have any FW port (e.g. the "Air").

Thus, IMHO developing an FW interface would be just a waste of time and effort.

A network (Ethernet and possibly WiFi) based device would be a different story. Yet it is extremely simple to get some small and cheap "SBC", install Voyage MPD on it (on an SSD or other small SS media) and plug a USB AW into it...

Of course, one may consider the possible advantages (and disadvantages) of having some kind of SBC directly producing the data stream for the DAC, avoiding the extra interface, etc.
 
A few facts

I see a lot of assumptions but fewer facts flying here.

The I2S standard can have the clock originate at either end per the spec. It's the implementation that is the limiter.

AES and SPDIF can be locked to an external clock. Word Clock is used for that commonly in professional applications where several ADC's need to be locked to a common clock. You can spend a fortune on fancy word clock generators. It becomes unmanageable with a mixed sample rate playlist.

The AB 1.1 could be isolated pretty easily at the I2S end. Hardware to do it at the PC end really doesn't exist yet. I'm told that there will be a USB2 isolation solution from ADI next year. Probably expensive and hard to get. I would use transformers for master clock and bit clock and optos for word clock and data. I would resync the data returning at the DAC chip with a fast D latch so the edges are all within spec for the DAC chip.

This would be a re-layout and to get benefit you need separate isolated supplies on each side of the isolation barrier and very careful layout to minimize the coupling from input to output. Putting the MCU and isolation stuff inside a floating can with a double shield may be the best and the hardest to do.

When you are comparing different PCs, keep in mind that they might well be radiating a lot of noise, and no physical connection is required to degrade the sound. It has been many years since we have had EMI- and RFI-free listening spaces, so introducing a new source of EMI may be less obvious. I have had experience where digital displays (LEDs etc.) have made obvious degradations in sound even though they are not connected in any way to the audio system.

Adding a mix of ferrite and steel around a USB cable would increase the common mode impedance and should make an audible difference. Not too difficult to try.
 
The I2S standard can have the clock originate at either end per the spec. It's the implementation that is the limiter.
I do not doubt you, but where is this written down? Can you cite a reference?

AES and SPDIF can be locked to an external clock. Word Clock is used for that commonly in professional applications where several ADC's need to be locked to a common clock. You can spend a fortune on fancy word clock generators. It becomes unmanageable with a mixed sample rate playlist.
Read the white papers from Dan Lavry. Due to the complications of interconnects, no external clock can outperform a properly designed internal clock. It's a matter of physics. Granted, when you have more DAC channels than will fit in a single chassis, you must have at least one set running on an external clock if they are all to be locked together. That's a common situation in studios with 32 or more channels. But for the purposes of stereo or basic multichannel surround, sound quality is going to be much better when a properly designed internal clock is used.
 
I'm not saying that the open-source USB interface should be modified to add FireWire. I'm merely pointing out that FireWire is a viable option. It's probably not practical, but it's certainly technically possible.

Besides, IMHO FW is kind of vanishing, if not dead already.
There have been several new audio products released this year alone which are exclusively FireWire. People have been saying that "FW is vanishing" for a decade or more. So far, it hasn't come true.

FW400 was about on par WRT USB2, which is cheaper and simpler (and way more common). With USB3 already around, even the newer (and rare) FW800 has become basically useless. Apple Mac was about the only architecture which were kind of "pushing" for it. But recently I've even seen new Macs which does not have any FW port (e.g. the "air").
There is more to an interface than the raw clock rate. FW400 far exceeds USB2 because of the nature of the data, particularly the fact that in FW isochronous data gets assigned specific clock cells, whereas in USB there is just a vague idea that the host should fit the data "somewhere" in the frame. The reason that FW handles this more precisely is that it is a peer-to-peer interface, and cannot rely on the concept of a "host" to have full control over all timing. USB3 is even rarer than FW800 when you consider that there are no USB3 audio interfaces and there isn't even a specification for USB3 Audio Class devices. USB3 is nothing but a combination of USB2 and eSATA wires coiled into the same cable, which does nothing to make it compete with FW800 in terms of real performance. At least FW800 audio interfaces exist and work with all audio software.
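
As an aside (illustrative numbers of my own, not from the post above): a few lines of Python show how small a stereo stream is relative to a USB2 high-speed microframe, which is why the issue is not bandwidth but where within each 125 us window the host chooses to schedule the packet.

Code:
# Illustrative only: stereo audio payload vs. a USB2 high-speed microframe.
# Assumptions: 32-bit sample slots, 2 channels, raw 480 Mbit/s bus rate
# (protocol overhead ignored).
MICROFRAME = 125e-6                              # seconds
RAW_BITS_PER_MICROFRAME = 480e6 * MICROFRAME

for fs in (44_100, 96_000, 192_000):
    audio_bits = fs * MICROFRAME * 2 * 32
    share = 100 * audio_bits / RAW_BITS_PER_MICROFRAME
    print(f"{fs/1000:5.1f} kHz: {audio_bits:6.0f} bits per microframe "
          f"({share:.2f}% of raw capacity)")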

As for Apple, they periodically introduce lower models that do not feature FireWire, but the top, pro models always have FireWire. The Air is a space-constrained product targeted for people who aren't even doing audiophile music, so the lack of FW makes perfect sense. The Air doesn't have an optical drive, either, which is the most common physical medium for audio.

I do agree that USB and FW are basically equivalent for the purposes here. I just want to be clear that for every disadvantage that FW has, there is a bigger disadvantage with USB.
 
In keeping with the nomenclature above, the USB-I2S module is not a "push interface". The clock generators on the analog board are located right next to the DAC. This means MCLK goes from the analog side to the digital while the other I2S signals go from the digital side to the analog.

I see - thanks for the clarification. I'd just suggest then that the lowest frequency needs to be sent back from the DAC -> uC, not the master clock. Not all DACs have MCLKs (PCM1704 comes to mind) - how about just sending a word clock or the BCLK?

So my idea, although not yet implemented, is to use optos from AB to module for MCLK and from module to AB for I2S. Adding a bit of jitter to the clock seen by the MCU isn't such a big deal, I _believe_. But adding jitter to the DAC is absolute taboo.

So optos should work fine; choose optos which are just fast enough to keep power and cost down.
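
To put rough numbers on "just fast enough", here is a small sketch with assumed ratios of MCLK = 256*fs and BCLK = 64*fs (common, but by no means universal):

Code:
# Illustrative I2S clock rates, assuming MCLK = 256*fs and BCLK = 64*fs.
# The opto carrying each signal needs enough bandwidth for that square
# wave, with edges clean enough that the added jitter stays acceptable.
for fs in (44_100, 48_000, 96_000, 192_000):
    mclk, bclk, lrclk = 256 * fs, 64 * fs, fs
    print(f"fs={fs/1000:5.1f} kHz  MCLK={mclk/1e6:6.3f} MHz  "
          f"BCLK={bclk/1e6:6.3f} MHz  LRCLK={lrclk/1e3:5.1f} kHz")

At 192 kHz the MCLK opto has to pass nearly 50 MHz while an LRCLK opto only sees 192 kHz, so picking the slowest signal that actually has to cross the barrier keeps the opto requirements (and power and cost) modest.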

Making the DAC board the clock master makes the most sense from the point of view of jitter. But this does rather force multi-channel solutions to be all on the same board. There's no chance of adding on extra channels afterwards as it's impractical to have more than one clock master in a system. Do you intend to produce boards in various numbers of output channels?
 
The easiest point to break the loop should be PC-to-USB, since the data lines are differential and the power is optional. It seems like the safest choice is to not even connect the traces from USB jack for Vusb and GND.

As Demian points out, such a solution running at 480Mbits/s is not yet available, and even when it becomes available it may well be cost-prohibitive. If I were doing the design I'd be looking at powering the PHY from the PC's USB supply and seeing if it were possible to isolate the ULPI between PHY and uC. But I've not looked at the details - of course this could only work where the USB PHY isn't on-chip.

I personally prefer balanced audio, even with a DAC output, so that affords another opportunity to break a potential ground loop. Balanced outputs connect ground for shielding purposes, but proper balanced inputs should not connect the ground at all since the signal is differential and doesn't need a reference to ground. Thus, you can also break the DAC-amp ground.

This isn't quite correct. A professional (XLR) balanced connection does indeed connect pin1 (which is the GND). This is to control the common-mode voltage. Only a transformer isolated balanced feed can operate with no ground connection at all.

Between these two easy options, it should be fairly simple to avoid ground loops. Designing an audio grade power supply will be less easy in comparison.

In my view it's the opposite - it's the system-level considerations which are hardest, because so many variables are outside the designer's control (common-mode voltage and common-mode interference being just two). Totally self-contained designs (like power supplies) are relatively easy in comparison.
 
As Demian points out, such a solution running at 480Mbits/s is not yet available, and even when it becomes available it may well be cost-prohibitive.
As I understood it, Demian is talking about isolation solutions. I'm talking about skipping isolation altogether, and avoiding common mode noise by simply not connecting the USB power and ground. The differential data lines should not conduct common mode noise into the circuit's ground.

A professional (XLR) balanced connection does indeed connect pin1 (which is the GND). This is to control the common-mode voltage. Only a transformer isolated balanced feed can operate with no ground connection at all.
You are totally wrong on the last two points and only partially correct on the first.

Pin 1 can be connected to "something," but it should never be connected to the signal or signal ground portion of a circuit. The ultimate in audio frequency noise rejection is to ground cable shields only at the output driver where the signal is ground referenced. RF interference is another story, and improves when the shield is grounded at more than one point. However when grounding at the receiver the connection should be made to chassis ground or safety ground, not to signal ground; and even the connection to chassis ground should be via series capacitance and resistance, not direct. There is even a line of XLR input jacks that incorporate the capacitor on pin 1 for this. The capacitor makes sure that the receiver is only grounded at high (RF) frequencies, not at audio frequencies. That said, you will still find examples of incorrect circuits in many professional products, but that doesn't make them correct circuits just because they have a popular brand name attached.

Pin 1 does nothing to "control the common-mode voltage." All that a balanced input needs to reject common mode voltages is the + and - signal lines. You will need some voltage headroom in the differential amplifier to handle the largest expected common mode voltage without clipping the signal. You cannot "control" the voltage; you can only ignore it by canceling it.

A transformer is not the only balanced connection that works without a ground connection. I learned the above from Bill Whitlock of Jensen Transformer during his AES Seattle Section presentation on November 19, 2002. If ever there was an incentive to claim that transformers are the only solution, it would have been an employee of Jensen Transformer. The fact is that the only reference needed in a balanced system is the pair of differential signals. The + signal needs only refer to the - signal for reference, and you have a complete circuit with built-in noise rejection. Connection of ground only serves to mix in the common-mode noise with the signal. It helps to understand this when you realize that a proper differential interface is a Wheatstone bridge, which is every bit as good as a transformer, particularly without ground (or shield) connected.
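
As an illustration of the Wheatstone-bridge view (the numbers below are my own assumed examples, not taken from the references cited here), a short Python sketch showing that rejection of a common-mode voltage is set by how well the impedances of the two legs balance:

Code:
import math

def interface_cmrr_db(z_in, r_src, r_src_imbalance):
    """CMRR of the cable/receiver 'bridge' alone: a common-mode voltage
    only leaks into the differential signal through impedance imbalance."""
    v_cm = 1.0
    v_plus  = v_cm * z_in / (z_in + r_src)
    v_minus = v_cm * z_in / (z_in + r_src + r_src_imbalance)
    v_diff = abs(v_plus - v_minus)
    return float("inf") if v_diff == 0 else 20 * math.log10(v_cm / v_diff)

# Assumed values: 600 ohm source, 10 ohm imbalance between the two legs.
print(interface_cmrr_db(10e3, 600, 10))   # ~61 dB with a 10 kohm CM input impedance
print(interface_cmrr_db(10e6, 600, 10))   # ~120 dB with a 10 Mohm (bootstrapped) input

The better the bridge is balanced - and the higher the common-mode input impedance relative to any source imbalance - the less of the common-mode voltage turns into a differential error, with or without a ground connection.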

See IEC Standard 60268-3, but note that the first couple of editions of this standard even got some things wrong. Also see Neil Muncy's famous 1995 AES paper, "the pin 1 problem." My primary reference has been a custom draft of Bill Whitlock's Design of High-Performance Balanced Audio Interfaces.
 
As I understood it, Demian is talking about isolation solutions. I'm talking about skipping isolation altogether, and avoiding common mode noise by simply not connecting the USB power and ground. The differential data lines should not conduct common mode noise into the circuit's ground.

'Should not' is not sufficient when doing design; we need to understand the mechanisms at play to guarantee it does not, by design. There are common-mode chokes which reduce the coupling (I've cited one on Lorien's thread I think) but they mitigate it, they don't eliminate it. Since the bandwidth is so high, there's only a limited inductance that can be placed on these lines without corrupting the signal. The Murata choke I linked to has, if memory serves me correctly, a 90 ohm impedance @ 100MHz. Above its SRF (which is probably in the 100's of MHz) it will cease to operate as a choke.
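
For what it's worth, working backwards from that quoted figure (and treating the choke as an ideal inductor, which it certainly is not near its SRF), the implied common-mode inductance, and hence the impedance at lower frequencies, is small:

Code:
import math

# 90 ohm common-mode impedance at 100 MHz (figure quoted above), treated
# as an ideal inductor: Z = 2*pi*f*L  =>  L = Z / (2*pi*f)
z, f = 90.0, 100e6
L = z / (2 * math.pi * f)
print(f"L ~ {L*1e9:.0f} nH")                       # ~143 nH

for freq in (1e6, 10e6, 100e6):
    print(f"{freq/1e6:5.0f} MHz: {2*math.pi*freq*L:5.1f} ohm")

So a choke like that only nibbles at the common-mode coupling over a limited band, which is consistent with the point that it mitigates rather than eliminates it.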

However, it's unwise to rely on the two data lines to establish the common-mode voltage - so disconnect the USB GND at your own peril :D

You are totally wrong on the last two points and only partially correct on the first.

Goody, goody - I love it when I learn something new :D

Pin 1 can be connected to "something," but it should never be connected to the signal or signal ground portion of a circuit.

Reference please for the 'can' here - that implies there's a choice not to connect it at all and I'd like to know who (other than you) suggests that's an option? It most certainly should NOT be connected onto the PCB, but it normally (in a competent design) goes to chassis, which in turn is normally connected to the circuit's 0V, perhaps via some kind of ground-lifting impedance. Any impedance here will get a common-mode voltage imposed on it, so it's a design compromise between the CMRR of the receiver and the value of this ground-lifting impedance.

However when grounding at the receiver the connection should be made to chassis ground or safety ground, not to signal ground; and even the connection to chassis ground should be via series capacitance and resistance, not direct.

I see we're almost in agreement then. But direct most certainly is an option.

Pin 1 does nothing to "control the common-mode voltage."

Reference please - this sounds to me on first reading like a statement from ignorance, so I'd like to check out its provenance.

All that a balanced input needs to reject common mode voltages is the + and - signal lines. You will need some voltage headroom in the differential amplifier to handle the largest expected common mode voltage without clipping the signal. You cannot "control" the voltage; you can only ignore it by canceling it.

OK, so perhaps 'control' was the wrong word - I perhaps should have said 'mitigate' the common-mode voltage. Otherwise, when there are local shifts in the ground potential, large transient differences in CM voltage will exceed the common-mode input voltage range of your receiver circuit. That's something a designer would wish to avoid, is it not?

A transformer is not the only balanced connection that works without a ground connection. I learned the above from Bill Whitlock of Jensen Transformer during his AES Seattle Section presentation on November 19, 2002. If ever there was an incentive to claim that transformers are the only solution, it would have been an employee of Jensen Transformer.

Indeed, one would think so. But even transformers can't work when the ground differential goes beyond their insulation ratings. So there are practical limits to the CM voltage with them too.

The fact is that the only reference needed in a balanced system is the pair of differential signals. The + signal needs only refer to the - signal for reference, and you have a complete circuit with built-in noise rejection.

Another argument from authority. Do you have a link to something on Jensen's website (I know it's an excellent technical resource) where they say what you're saying here? I'd appreciate it. How I interpret your words is you are saying that in effect 'common-mode voltage is totally irrelevant'. Do I read you correctly?
 
'Should not' is not sufficient when doing design; we need to understand the mechanisms at play to guarantee it does not, by design.
By those standards, nothing can guarantee that noise will not be conducted. Every real circuit component has limitations and imperfections.

At the very least, any isolation solution that might be offered in chip form can be built now from discrete components. Some of the examples you gave are probably a great starting point.

OK, so perhaps 'control' was the wrong word - I perhaps should have said 'mitigate' the common-mode voltage. Otherwise, when there are local shifts in the ground potential, large transient differences in CM voltage will exceed the common-mode input voltage range of your receiver circuit. That's something a designer would wish to avoid, is it not?
Yes, we could have merely a semantic disagreement.

You can't really stop excessive common-mode voltages by simply grounding everything together - that's where ground loops come from. Every conductor has resistance, and even every earth ground has a different voltage potential. There is no absolute ground. Check the papers I've linked to for details.

Do you have a link to something on Jensen's website (I know it's an excellent technical resource) where they say what you're saying here? I'd appreciate it.
You asked for references several times in your reply. Did you read any of the references I've given so far? The link to Bill Whitlock's paper is going to explain far more than I can do here without plagiarizing.

However, I did find a link to a paper on the Jensen Transformers site. It's a bit heavy on the product sales content, but if you skip to section 3.6 - About Cables and Shield Connections, then you'll get almost exactly the same content that Bill Whitlock has been teaching since 1995. He goes into a great deal of detail, and I think he makes it clearer than the famous "pin 1 problem" paper that Neil Muncy published in the June 1995 Journal of the AES.

The linked document describes it best: Maximum CMRR is attained when grounding the shield at the output and not connecting it at all at the input. However, we also have RF noise to contend with. Guarding against both common-mode noise and RF noise becomes somewhat of a tradeoff. If you want the highest possible CMRR while also improving RF immunity, then the best practice is to ground the shield at the output and connect the shield through a capacitor to ground at the input. As I mentioned before, there are actually XLR input jacks which have the necessary capacitor integrated in the connector.
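
A quick illustration of why a series capacitor grounds the shield "only at RF" (the 10 nF value here is purely an assumption for illustration; real designs vary, and often add a small series resistance as well):

Code:
import math

C = 10e-9   # assumed shield-to-chassis capacitor, 10 nF
for f in (50, 1_000, 20_000, 1e6, 100e6):
    z = 1 / (2 * math.pi * f * C)
    print(f"{f:>12,.0f} Hz : |Z| = {z:10,.2f} ohm")

At mains and audio frequencies the capacitor is essentially an open circuit (hundreds of kilohms down to hundreds of ohms), so no audio-band ground loop forms through the shield, while at RF it looks like a fraction of an ohm and ties the shield to the chassis.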

How I interpret your words is you are saying that in effect 'common-mode voltage is totally irrelevant'. Do I read you correctly?
I do not see how you can "get rid of" common mode voltages, as you imply by suggesting that you just ground them out. Current only flows in loops, and every conductor has resistance and therefore a voltage drop. The only workable solution to common mode voltages is to accept them, making sure that your balanced circuit provides identical impedances to both arms of the differential signal, and therefore cancel out the common mode voltages on the output of the first input stage. I've seen no examples of how to get rid of the common voltage before the first stage (other than a transformer, which is not the only solution).
 
By those standards, nothing can guarantee that noise will not be conducted. Every real circuit component has limitations and imperfections.

Perhaps you're not taking my remarks in context. No guarantee is water-tight, but by diligence in design we understand the limitations of our design. Of course we never eliminate all imperfections. I was responding to your 'should' claim - do you wish to continue to assert that just saying 'X should happen' without giving explanations for why is sufficient when doing electronics design?

Yes, we could have merely a semantic disagreement.

Indeed we could. So then you're not really saying 'common-mode voltage is totally irrelevant' ? It would then follow that your 'you are totally wrong' claim was er, how to put this gently - totally wrong.

You can't really stop excessive common-mode voltages by simply grounding everything together - that's where ground loops come from. Every conductor has resistance, and even every earth ground has a different voltage potential. There is no absolute ground. Check the papers I've linked to for details.

I was aware of all of this, and none of it contradicts what I've been saying. Perhaps you're tilting at windmills? By connecting pin1 you stand a much better chance of stopping excessive CM voltages than by not connecting it. In the latter case, all bets are off.

You asked for references several times in your reply. Did you read any of the references I've given so far? The link to Bill Whitlock's paper is going to explain far more than I can do here without plagiarizing.

I found Jensen's website some time ago and downloaded everything I found there because it was so relevant to what I was working on at the time. I'm assuming that what you linked to is amongst the stuff I have already digested so no, I have not gone to your link this time around. Practically all of Bill's work is top-notch and I've learned plenty from what he's written. I'm a big fan so I avidly gobble up what he writes. Where does he say that CM voltage is irrelevant, or that it's an option NOT to connect pin1, please?
 
I see you've added some later edits, so I'll address those here

The linked document describes it best: Maximum CMRR is attained when grounding the shield at the output and not connecting it at all at the input. However, we also have RF noise to contend with. Guarding against both common-mode noise and RF noise becomes somewhat of a tradeoff. If you want the highest possible CMRR while also improving RF immunity, then the best practice is to ground the shield at the output and connect the shield through a capacitor to ground at the input. As I mentioned before, there are actually XLR input jacks which have the necessary capacitor integrated in the connector.

This is all good stuff but totally irrelevant. Pin1 is not the shield; pin1 goes to the drain wire. Shielding is optional for balanced cables (Cat5 Ethernet gets along quite nicely without it, but relies on transformers to make it immune to CM voltage variations), but grounding is not optional where the receiver is an electronic one.

I do not see how you can "get rid of" common mode voltages, as you imply by suggesting that you just ground them out.

I've never talked about "getting rid of" them to my knowledge. Rather reducing them to a manageable level, controlling, mitigating them. But if I've inadvertently used that phrase somewhere, point it out please. I'll be sure to correct it because it's never been my meaning.

Current only flows in loops, and every conductor has resistance and therefore a voltage drop. The only workable solution to common mode voltages is to accept them, making sure that your balanced circuit provides identical impedances to both arms of the differential signal, and therefore cancel out the common mode voltages on the output of the first input stage. I've seen no examples of how to get rid of the common voltage before the first stage (other than a transformer, which is not the only solution).

Well I'm certainly interested to learn new things. One new thing would be an electronic circuit that has the same level of immunity to common-mode voltages as a transformer. Do give us the link please to electronic solutions you're aware of. This is now the second time you've claimed that 'a transformer is not the only solution' but still no dice :)
 
So then you're not really saying 'common-mode voltage is totally irrelevant' ? It would then follow that your 'you are totally wrong' claim was er, how to put this gently - totally wrong.
The quote that you provide is your interpretation, not my words. You can't suggest that I'm wrong, gently or otherwise, by turning your own interpretations of what I actually said into any kind of refutation. That's known as a Straw Man argument.

I was aware of all of this, and none of it contradicts what I've been saying. Perhaps you're tilting at windmills? By connecting pin1 you stand a much better chance of stopping excessive CM voltages than by not connecting it. In the latter case, all bets are off.
Interesting. Now you're talking about a 'chance' of stopping CM voltages. Please explain exactly what these chances are; when they apply and when they do not.

My point is that you don't 'stop' the voltages, you merely let them cancel each other out within the differential amplifier so that they do not appear in the output of the first stage. You certainly don't need to take some chance on partially stopping the voltages at the input as if that would somehow improve the CMRR. I don't see any evidence that you understand how differential circuits work.

I found Jensen's website some time ago and downloaded everything I found there because it was so relevant to what I was working on at the time. I'm assuming that what you linked to is amongst the stuff I have already digested so no, I have not gone to your link this time around. Practically all of Bill's work is top-notch and I've learned plenty from what he's written. I'm a big fan so I avidly gobble up what he writes. Where does he say that CM voltage is irrelevant, or that it's an option NOT to connect pin1, please?
Ok, so what you're saying is that you read every possible reference already, and you're not willing to read any of the references that I provide. But you still want me to cite more references. Please explain what kind of reference I can cite that you will actually read. I'm not sure that I have time to collect a long list of references, weed through the ones that you say you've already read, and provide something that you will actually go and read, but maybe we'll get lucky.

Bill Whitlock said, and I quote, "The driven end of a balanced cable should always be grounded, whether the receiving end is grounded or not." He goes on to explain that leaving the receiving end ungrounded is not only an option, but it provides the maximum CMRR. Only in combating RF noise do you add a ground on the input side, but the best practice is to use a series capacitor so that you do not sacrifice CMRR. In other words, grounding the receiver kills CMRR.

Whitlock: "Myth 2: Wires have zero impedance and can "short out" noise voltages in a ground system." This refutes your claim that grounding pin 1 improves your "chances of stopping" common-mode voltages.

Maybe you should review everything you've read by Bill Whitlock with regard to grounding.
 
This is all good stuff but totally irrelevant. Pin1 is not the shield; pin1 goes to the drain wire. Shielding is optional for balanced cables (Cat5 Ethernet gets along quite nicely without it, but relies on transformers to make it immune to CM voltage variations), but grounding is not optional where the receiver is an electronic one.
In all of Bill Whitlock's papers, he uses the term "shield" - there is no distinction between drain and shield. The only time he even mentions the term drain is when explaining how cables with a foil shield and drain wire are inferior to various braided shield cables. All of his circuit diagrams depict balanced interconnects as three wires, + - 0, where only the + and - signals connect to the differential amplifier, and the shield connects via series capacitance to chassis ground (which eventually leads to signal ground via the common star ground point) on the input (but is connected to ground on the output).

By focusing on the term "drain," you've not created a new signal that must be connected, or that can reduce common-mode voltages. There is no fourth signal.

I've never talked about "getting rid of" them to my knowledge. Rather reducing them to a manageable level, controlling, mitigating them. But if I've inadvertently used that phrase somewhere, point it out please. I'll be sure to correct it because it's never been my meaning.
Ok, forget "get rid of" - you cannot reduce, manage, control, or mitigate common-mode voltages by "grounding them out." What you do is render common-mode voltages innocuous by building in enough voltage headroom in your differential amplifier to carry the common-mode voltages through the circuit long enough for them to cancel each other out, and then they are absent from the output of the differential amplifier. There is no "grounding" that can minimize them ahead of the differential stage, at least not in anything that Bill Whitlock has written. Since you've read everything he's written, maybe you can point out where he claims that you should ground the input of a balanced stage in order to reduce/manage/control/mitigate common-mode voltages - then I can maybe see where you're coming from.
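
A minimal sketch of the "headroom first, then cancellation" point, using an idealised differential stage with made-up numbers (the 13 V input range, the gain mismatch and the signal levels are all assumptions for illustration):

Code:
# Idealised differential receiver: the common-mode voltage rides on both
# inputs, must stay inside the stage's input range, and then cancels in
# the subtraction (to the extent the gains/impedances are matched).
def diff_stage(v_sig, v_cm, a_diff=1.0, cm_input_range=13.0, mismatch=0.0):
    v_plus  = +v_sig / 2 + v_cm
    v_minus = -v_sig / 2 + v_cm
    if max(abs(v_plus), abs(v_minus)) > cm_input_range:
        return None                       # stage overloaded: no cancellation happens
    # a small gain mismatch lets a fraction of v_cm leak into the output
    return a_diff * (v_plus - v_minus) + mismatch * v_cm

print(diff_stage(1.0, 5.0))                  # 1.0   -> CM fully cancelled
print(diff_stage(1.0, 5.0, mismatch=1e-3))   # 1.005 -> 5 mV of CM leaks through
print(diff_stage(1.0, 20.0))                 # None  -> CM exceeds the input range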

Well I'm certainly interested to learn new things. One new thing would be an electronic circuit that has the same level of immunity to common-mode voltages as a transformer. Do give us the link please to electronic solutions you're aware of. This is now the second time you've claimed that 'a transformer is not the only solution' but still no dice :)
Bill Whitlock himself introduced me to this one: An InGenious Way to Eliminate Noise. Since you seem to be a fan of Mr. Whitlock, you may be impressed that he designed and patented this circuit. I had the pleasure of attending an AES talk where he explained how he dreamed up the circuit in the shower one day and then later mentioned it to THAT Corporation at an AES meeting, whereupon they all decided to work together to bring the product to market.
 
The quote that you provide is your interpretation, not my words.

Sure, that's why I phrased it as a question. If you go back and review what I said, you'll see I asked if that was your meaning. So far you have not answered. Avoiding giving a direct answer to a direct question is the response of a politician, not one of an engineer.

You can't suggest that I'm wrong, gently or otherwise, by turning your own interpretations of what I actually said into any kind of refutation. That's known as a Straw Man argument.

And indeed I have not done so, because it's all part of the conditional 'if'. So your reference to a 'Straw Man argument' is in fact a great example of a straw man.

Interesting. Now you're talking about a 'chance' of stopping CM voltages. Please explain exactly what these chances are; when they apply and when they do not.

No, because discussion is a two-way street and there are pending requests of mine which you've left unaddressed :)

My point is that you don't 'stop' the voltages, you merely let them cancel each other out within the differential amplifier so that they do not appear in the output of the first stage.

Yes, with the proviso that their absolute magnitude does not exceed the supply voltage of the receiver. Do you reject that proviso?

I don't see any evidence that you understand how differential circuits work.

Simply put - you're not looking for it. No surprise.

Ok, so what you're saying is that you read every possible reference already, and you're not willing to read any of the references that I provide.

Nope, misunderstanding. Go back, read again.

But you still want me to cite more references.

'Want' would be a bit strong here. I dunno how badly you want me to believe what you're saying. 'More' is also erroneous, despite my requests you've not provided one reference in response to my detailed question - rather vague hand waving in the general direction of a whole paper.

Please explain what kind of reference I can cite that you will actually read.

False premise.

I'm not sure that I have time to collect a long list of references, weed through the ones that you say you've already read, and provide something that you will actually go and read, but maybe we'll get lucky.

Oooh, look another straw man :D

Bill Whitlock said, and I quote, "The driven end of a balanced cable should always be grounded, whether the receiving end is grounded or not."

Do please give us more context. Is this with a transformer receiver for example?

Whitlock: "Myth 2: Wires have zero impedance and can "short out" noise voltages in a ground system." This refutes your claim that grounding pin 1 improves your "chances of stopping" common-mode voltages.

Where did I claim that wires have zero impedance? Claim still stands because purported refutation doesn't apply to what I was saying. There will always be some common mode voltage - we want to keep that fairly small, and most certainly within the working range of the receiver. What's so hard to understand about that?

Maybe you should review everything you've read by Bill Whitlock with regard to grounding.

Maybe, but if you'd like me to then you'll need to be a tad more persuasive than you have been so far. Arguments from authority don't cut it.
 
Bill Whitlock himself introduced me to this one: An InGenious Way to Eliminate Noise. Since you seem to be a fan of Mr. Whitlock, you may be impressed that he designed and patented this circuit. I had the pleasure of attending an AES talk where he explained how he dreamed up the circuit in the shower one day and then later mentioned it to THAT Corporation at an AES meeting, whereupon they all decided to work together to bring the product to market.

Yep, it's neat. I copy a line from the datasheet:


Input Voltage Range, VIN-CM (common mode): ±12.5 / ±13.0 V

What happens if the common-mode voltage exceeds 13V? Every isolating transformer I've ever heard about beats the figure of 13V. So no, this is not something that replaces transformers where the CM voltage can be greater than 13V.