RF Attenuators = Jitter Reducers

Poll: Do you have a SPDIF transformer in your digital device?

  • Yes: 40 votes (71.4%)
  • No: 16 votes (28.6%)

  Total voters: 56
Well, I do know that SPDIF impedance-mismatch-induced jitter is probably number 20 in importance on a list of other things that can affect jitter to a greater degree.
Impedance mismatch is not a pass/fail number. It depends on many other factors related to the total link budget and how low your BER target is. jkeny, you are lost by focusing just on this silly attenuator scheme. I guess you will make me sorry I helped you look at the bigger picture.

I understand that - I never said this was the solution to all jitter problems (far from it). It was posted as a quick, simple, cheap experiment with a possible theory behind its operation. Where else would you find such a quick & cheap experiment that MIGHT address jitter?

If Wakibaki hadn't hijacked the thread it would probably have developed into something useful, but c'est la vie.

So don't get me wrong - most of what I'm doing now is defending myself from the slanderous statements Wakis levelled at me.

The whole point of the thread has been overthrown!
 
Generally, SPDIF receive ends don't have transformers on them since you don't need galvanic isolation: in 99% of cases you're connecting a television set, DVD player, etc. to a home receiver that's sitting beside it on the same shelf. Transformers are almost universally used on AES/EBU connections, but AES/EBU tends to get used in places like radio transmitter sites where you can get significant pickup, ground currents, etc. flowing around.

Anyway, to try and bring a few clues back into this thread, let me try answering something.

jkeny said:
You simply don't know what level of mismatch will cause a problem, do you!

This depends on so many factors that it's impossible to quantify. I'll start with a story.

My first experience with this whole topic was with an AES/EBU interface. One of my co-workers wired up a rack which used a bunch of different equipment connected with AES/EBU. On a particular piece of receiving equipment, AES wouldn't lock. Helping them out, I grabbed a cable off the shelf, plugged it in, and it worked! Aha, I thought, bad cable; I'll just have them change out the cable that's wired in. So they built another cable (off a drawing, with the same length) and wired it into the rack... still wouldn't work. So I grabbed a 'scope, looked at the AES at the receiving end, and it was garbage. Looked at the source, and it was different garbage, but still garbage.

So into the schematics I went. On the sending end, they used a CS84xx driving a transformer with a series capacitor, then a pair of 22 ohm resistors on each output wire, yielding something around 50 ohms for the source impedance. On the receiving end, it was terminated with 600 ohms, instead of the 110 it was supposed to be. Bingo.

So I soldered a resistor inside the XLR connector at the receiving end to terminate the line at 110 ohms, and everything worked. Then I proceeded to call up the manufacturer of the receiving end and bitch at them. They still haven't fixed their equipment to this day, but they now sell a short XLR cable with a resistor in it as an accessory. 😀

So in this case, the mismatch made it not work at all.
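
To put a rough number on how bad that termination was, here's a minimal Python sketch. The 110 and 600 ohm figures are from the story above; the rest is plain transmission-line arithmetic, not a measurement of the actual gear:

import math

def reflection_coefficient(z_load, z0):
    # Voltage reflection coefficient seen at the termination
    return (z_load - z0) / (z_load + z0)

def return_loss_db(gamma):
    # Return loss in dB; a perfect match reflects nothing (infinite return loss)
    return float("inf") if gamma == 0 else -20.0 * math.log10(abs(gamma))

Z0 = 110.0                        # AES/EBU characteristic impedance, ohms
for z_term in (110.0, 600.0):
    g = reflection_coefficient(z_term, Z0)
    print(f"{z_term:5.0f} ohm termination: gamma = {g:+.2f}, "
          f"return loss = {return_loss_db(g):.1f} dB")

The 600 ohm termination gives a reflection coefficient of about +0.69 (roughly 3 dB return loss), i.e. around 70% of each incident edge voltage bounces straight back toward the source, which is plenty to mangle the waveform at both ends.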

...

But based on the amount of mismatch and cable length, you'll get different effects *at the electrical interface*. The effect of this on subsequent stages (clock recovery, etc) depends on the design of that circuitry. So let's start with the electrical interface first.

If your mismatch gives you an "instantaneous" reflection (short cable), it will cause a glitch. Depending on how bad the glitch is, this can manifest itself as a ring, and the receiver can get double-clocked by it. This will generally cause bit errors. But assuming the ring settles before the next edge, you won't get much of a jitter contribution from this effect, since each glitch (time skew, whatever) will generally affect all edges the same way.

Longer cables are where the bad jitter happens. The reflection caused by a previous edge can land on top of your current edge if the reflection is strong and your cable is an unlucky length. This can cause electrical glitches, it can cause time skew, etc... but the worst part is that not every edge will be hit with a reflection, depending on the content of the data being sent across the link. The result is data-dependent jitter, which can have a hideous phase noise distribution.
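
As a feel for what "an unlucky length" means, here's a rough Python sketch for 48 kHz S/PDIF. The velocity factor and the cable lengths are assumptions for illustration only:

C = 299_792_458.0            # speed of light, m/s
VF = 0.66                    # assumed velocity factor of a typical 75 ohm coax
FS = 48_000                  # sample rate, Hz
UI = 1.0 / (FS * 128)        # one biphase-mark cell: 128 cells per frame, ~163 ns

for length_m in (1.0, 2.0, 5.0, 10.0, 16.0):
    round_trip = 2.0 * length_m / (VF * C)       # source -> far end -> back again
    print(f"{length_m:5.1f} m cable: reflection returns after {round_trip * 1e9:6.1f} ns"
          f"  ({round_trip / UI:4.2f} UI)")

Under these assumptions a roughly 16 m run puts the reflection from one transition almost exactly on the next cell boundary, and whether a given edge actually gets hit depends on the data pattern - which is exactly the data-dependent jitter mechanism described above.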

Now, at the *receiving end*, you're recovering a clock from the SPDIF interface using a PLL of some sort. The loop filter bandwidth of this PLL will determine how much of the incoming SPDIF jitter will show up on the recovered output clock.
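
A minimal sketch of that filtering, assuming a simple first-order PLL model and made-up loop bandwidths (real receiver PLLs differ in detail):

import math

def jitter_transfer_db(f_jitter, loop_bw):
    # Jitter below the loop bandwidth passes to the recovered clock;
    # above it, attenuation rolls off at roughly 20 dB/decade.
    h = 1.0 / math.sqrt(1.0 + (f_jitter / loop_bw) ** 2)
    return 20.0 * math.log10(h)

for loop_bw in (50e3, 1e3):                 # assumed loop bandwidths, Hz
    print(f"Loop bandwidth {loop_bw / 1e3:.0f} kHz:")
    for f_j in (100.0, 1e3, 10e3, 100e3):   # jitter (offset) frequencies, Hz
        print(f"  {f_j:8.0f} Hz jitter -> {jitter_transfer_db(f_j, loop_bw):6.1f} dB")

A wide-bandwidth receiver PLL passes most of the incoming interface jitter straight through, while a narrow one attenuates everything above its corner - which is why the same electrical mess can matter a lot on one DAC and hardly at all on another.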

And from there, assuming this is a DAC, the type and design of DAC (1-bit, multibit, NOS, etc) will affect the audibility of jitter. Then there's I/V converter bandwidth, filters, amplifiers, speaker/headphone frequency response... all of which can attenuate or emphasize the frequencies where spurs from the recovered clock's phase noise end up being audible. Finally you've got the listener, and depending on the music they're listening to, their mood, # of scotches, the ambient noise in the environment, etc... you may or may not have a detection of jitter.

In summary, it's absolutely impossible to say quantitatively what level of mismatch is required before the listener hops out of their chair and proclaims "this sucks".

Does that answer your question?
 
Jan, that paper should be the starting point of any further discussions..
I usually assume that people - especially those who call themselves "experts" and claim to be "trying to conduct an argument here about RF attenuators" - are already aware of the basic findings in that paper.
And so they would already understand why Gmarsh is saying this:
but the worst part is that not every edge will be hit with a reflection, depending on the content of the data being sent across the link. The result is data-dependent jitter, which can have a hideous phase noise distribution.
Or why I had been talking about inter-symbol interference right from the beginning here.

A nice collection of papers can be found also here:

DIYHiFi.org • View topic - Jitter

Ciao, George
 

Indeed. When I visited Malcolm Hawksford for my interview ( http://www.linearaudio.nl/beenthere-2.htm )
he actually had me listen to some of the 'data-dependent' jitter, and you could still hear the music in it, albeit heavily distorted!

But I am still not convinced that the attenuator actually gives an audible improvement, except maybe in pathological cases. ... ;-)

jd
 
Infinia,

I said:
There is nothing digital in clock recovery from the SPDIF data. It's an analog process. Data extraction is digital, with high enough safety margins to achieve very low error levels even with a noisy transmission channel. There are no data errors in SPDIF, in normal circumstances.

You said:

But the interface here is digital! The extraction of the clock from the random data stream is the only analog part.

The only difference that I see here is the order of the sentences? 😀

Then I continued: there are no data errors in SPDIF, in normal circumstances.
So I don't understand why you did/do want to see Bit Error Rate measurements of non-existent errors, or eye diagrams which will never even approach dangerous limits - in the sense of data recovery.

BTW, your single-shot O-scope plots of random data don't prove anything without showing a test setup, for starters. The only proper way to prove anything is using a BER test set. The next best is eye-diagram plots or phase-noise plots. It would be more helpful to show pseudo eye diagrams using a digital sampling scope.

Phase noise plots would be the right approach - anybody here with a nice Wavecrest, free for our experiments? Jocko has one..
I have done some preliminary phase noise tests with my 3585A. Down to the highest resolution of the analyzer, ~-110 dBc at the very close-in spectra, from ~10 Hz to 1 kHz, the Hiface driver behaves no differently from an XO.
And this is a result which was previously unimaginable with the classic USB converters, which could produce up to 70 dB higher close-in noise.

So the inherent jitter in the case of this driver is close to that of the crystal oscillator used, which should be in the range of 10 ps. (And the power supply mods should help here even more.)
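
As a rough sanity check of what a close-in floor like that means in time terms, here is a small Python sketch. The flat -110 dBc/Hz floor, the 10 Hz - 1 kHz integration band and the 24.576 MHz carrier are all assumptions for illustration, not the actual measurement conditions:

import math

L_DBC_HZ = -110.0                 # assumed flat SSB phase-noise floor, dBc/Hz
F_LO, F_HI = 10.0, 1_000.0        # integration band, Hz
F0 = 24.576e6                     # assumed master-clock (carrier) frequency, Hz

l_lin = 10.0 ** (L_DBC_HZ / 10.0)            # linear SSB phase noise, 1/Hz
ssb_power = l_lin * (F_HI - F_LO)            # integrated SSB power (flat floor)
phase_rms = math.sqrt(2.0 * ssb_power)       # radians, counting both sidebands
jitter_rms = phase_rms / (2.0 * math.pi * F0)

print(f"Integrated phase noise: {phase_rms * 1e6:.0f} microradians RMS")
print(f"Equivalent jitter: {jitter_rms * 1e12:.2f} ps RMS over {F_LO:.0f} Hz - {F_HI:.0f} Hz")

Under those assumptions the close-in band integrates to well under a picosecond RMS, i.e. comfortably within what a decent XO can do.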

Infinia, I think I understand your approach, which clearly shows professional experience - though it is field-specific, from another field.

Ciao, George

PS: The shots I made of the real SPDIF driver were not single-sequence shots; this is a Tek 3054 DPO, which by default overlays a series of acquisitions on the screen. So while it's not an infinite-persistence shot, it's not a single shot either.
If there were a high level of jitter present (inherent to the driver), it would have shown up. As I said earlier, I am aware that we are talking about subtle effects here, though the mechanisms pointed out by Gmarsh complicate the situation and maybe even amplify it?
 
Hi,
How about this: can we agree that simply adding an attenuator may or may not improve jitter, depending on the specifics of the Rx and Tx circuits used? Most importantly: "Any digital audio receiver IC has finite dynamic range and CMRR. Low-level, unbalanced signals (e.g. SPDIF from a weak source on a lossy cable) will increase the error rate and jitter since the receiver has less margin and interference rejection. A step-up transformer can increase the signal voltage level, improving both receiver performance and common-mode rejection."

Infinia,

You are absolutely right in this - and I have never claimed differently!

Then Gmarsh put all this in the right light, which is really important:
I would discuss attenuators ONLY in the special case of one particular device, the Hiface driver, because (at least for now) it is the only one which permits attenuation without compromising the standard levels at the receiver, and because it is fast enough to bring up the problem of reflections.

Also, at diyhifi.org I ~started with this:

I have only brought it up now because the Hiface is the special case where it's really relevant, because of its high output level.
Padding in the case of standard levels, 500 mVpp, only makes it worse, because it makes life even more difficult for the already very critical input receiver sections.
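
To put rough numbers on that, a minimal Python sketch of the level arithmetic. The 0.5 Vpp figure is the usual consumer S/PDIF level; the ~2 Vpp taken here for the Hiface is an assumption for illustration, not a verified figure:

def padded_level(v_pp, atten_db):
    # Voltage after an ideal matched attenuator of the given value
    return v_pp * 10.0 ** (-atten_db / 20.0)

for name, v_out in (("assumed HiFace (~2 Vpp)", 2.0), ("standard source (0.5 Vpp)", 0.5)):
    for att_db in (0, 3, 6):
        v_rx = padded_level(v_out, att_db)
        print(f"{name:>26s}, {att_db} dB pad: {v_rx * 1000:5.0f} mVpp at the receiver")

Padding a nominal 500 mVpp source quickly eats into the receiver's input margin, while padding a much hotter source still leaves it comfortably above typical receiver sensitivity.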

Ciao, George
 
As regards the Scientific Conversion transformers / paper, I would like to point out the (not so slight) controversy between different points of view here.

The "selling point" at Scientific Conversion seems to be the common mode noise rejection. The very low interwinding capacitance designs and their extra shielding are all maxing out CMRR but also minimizing SWR or reflection losses.
(Again, Jocko)
Minimizing stray capacitance means maximizing leakage inductance.
Also those 2:1, or even more, their non-integer transfer ratio models are less then ideal for the reflection loss.
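
A quick ideal-transformer sketch of why those ratios cost return loss. A 75 ohm S/PDIF system and purely resistive loads are assumed here, which is an oversimplification: real parts add the leakage inductance and winding capacitance mentioned above on top of this:

import math

def reflected_impedance(z_secondary, n_pri_to_sec):
    # An ideal transformer reflects the secondary-side load back to the
    # primary multiplied by the turns ratio squared.
    return z_secondary * n_pri_to_sec ** 2

Z0 = 75.0
for ratio in (1.0, 1.41, 2.0):
    z_seen = reflected_impedance(Z0, ratio)
    gamma = (z_seen - Z0) / (z_seen + Z0)
    rl = float("inf") if gamma == 0 else -20.0 * math.log10(abs(gamma))
    print(f"{ratio:4.2f}:1 ratio, 75 ohm load: source sees {z_seen:6.1f} ohm, "
          f"return loss {rl:5.1f} dB")

Unless the other side is re-matched with build-out resistors (which costs signal level), that mismatch is baked into the part.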

The question is: which hurts more in a home environment? I do not want to take sides here, but:
at home I'm not aiming at tens of metres of interconnection. Also, for other reasons, I keep as small a number of active appliances as possible: no WiFi, satellite dishes, wireless phones, desktop computers, etc. And it's a little mountain village.
So, from my side, I would vote for fewer reflections..

Ciao, George
 
Hi George or Joe K,
In my experience, common-mode interference from PC audio over single-ended connections can do much greater damage to an Rx clock than poor Tx return loss ever can, so yes, pulse XFMRs with greater isolation, i.e. less inter-winding capacitance, are the right approach.
I think phase noise plots are the best way to go forward, if you want.
 
The question is: which hurts more in a home environment? I do not want to take sides here, but:
at home I'm not aiming at tens of metres of interconnection. Also, for other reasons, I keep as small a number of active appliances as possible: no WiFi, satellite dishes, wireless phones, desktop computers, etc. And it's a little mountain village.
So, from my side, I would vote for fewer reflections..

Ciao, George


This statement is a contradiction in so many ways...
It is my understanding that we are talking specifically about the HiFace USB>SPDIF RCA into a consumer-level clock recovery/DAC. Now you are saying no desktop PCs, and maybe only laptops running on batteries, need apply?
 
Now you are saying no desktop PCs, and maybe only laptops running on batteries, need apply?

Exactly, a laptop on batteries. I would have thought anyway that USB is a prime "laptop solution"; otherwise an ESI Juli@ would already be satisfactory?

By the way, I did try the difference between batteries / wall brick. Obviously there is a difference; for me it is a bit less evident than the difference between attenuator / no attenuator.

By the way, if you are admittedly using a mis-terminated line because of a preference for other optimization points, then you could be a prime candidate here for trying some attenuators (maybe with a Hiface, to have low inherent jitter and a high initial output level) 😀 😀
 
............
It is my understanding that we are talking specifically about the HiFace USB>SPDIF RCA into a consumer-level clock recovery/DAC. ............
NO, we are not talking about RCA - I certainly hope that all concerned know the mis-termination that Joseph is referring to here:
By the way, if you are admittedly using a mis-terminated line because of a preference for other optimization points, then you could be a prime candidate here for trying some attenuators
 
Well, your view of CMRR and interference is to kill all sources of conducted noise, i.e. living on a mountain top, running on batteries off the grid, and eliminating desktop PCs and other high-tech gear. The fact is that 99% of people can't, or don't, choose this lifestyle.

My guess is that HiFace chose a really bad-quality pulse XFMR (i.e. brand X), pinching pennies, and now their customers are paying for it in the end! Now they need to pony up $12 (not including shipping, and the VSWR hit from RCA-to-BNC adapters, anyone?) for a 3 dB pad, when it would have been better to use the TI-recommended XFMR in the first place.
 
By the way, if you are admittedly using a mis-terminated line because of a preference for other optimization points, then you could be a prime candidate here for trying some attenuators (maybe with a Hiface, to have low inherent jitter and a high initial output level) 😀 😀

I'm not admitting to using mis-terminated connections. I'd just choose not to buy crappy USB solutions that claim Hi-Fi.
The title of this thread should be {How to get the most out of your HALF-A55'd USB audio system}, dontcha think?
 