Sample rate convertors and jitter (attenuation)

I would like to start a discussion thread on sample rate convertors and jitter.

I wholeheartedly agree that sample rate convertors "can" reduce jitter. However, knowing how they work (e.g. the AD1896), I also know they cannot remove it completely.

If you look at the AD1896, it does not perform a simple up-conversion of the input signal; it performs a complex conversion based on measuring the timing relationship between the input clock and the output clock. Mathematically, there is certainly a path for input jitter to appear on the output as some form of audio imperfection.

What I have not been able to determine is how well sample rate convertors reduce jitter, and over what jitter bandwidth they are effective. I have read some papers implying that some forms of jitter could actually be made "audibly" worse.

I think one of the big issues in "audio", and part of why digital has taken so long to achieve the qualities that are "expected", is that virtually every specification we look at (THD, SNR, IM, etc.) is an "average" measurement taken over a period of time. Transient behaviour may be much worse. I am concerned about the same issue with sample rate convertors.

Of course, here is another thought. If I have a large FIFO, assume that the input clock is pretty close to 44.1, 48, or 96kHz, etc., and do a simple 4x upsample, then all these problems go away. I could run the output at, say, 0.01% lower speed (audibly unnoticeable even to audiophiles) and be assured of never underflowing or overflowing, provided the FIFO is large enough. This may sound expensive, but it could easily be implemented in a $10 DSP.
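To put some rough numbers on that, here is a quick back-of-envelope sketch in Python. The 0.01% slowdown, the 100ppm source tolerance, and the 74-minute programme length are just the illustrative figures from above, not measurements of anything.

Code:
def fifo_frames_needed(fs_nominal, output_slowdown, program_seconds, source_ppm=100):
    """Worst-case frame accumulation when the output clock is run deliberately
    slower than a source clock that may itself be fast by `source_ppm`."""
    worst_ratio = (1 + source_ppm * 1e-6) / (1 - output_slowdown)
    surplus_per_second = fs_nominal * (worst_ratio - 1)
    return surplus_per_second * program_seconds

frames = fifo_frames_needed(fs_nominal=44_100,
                            output_slowdown=1e-4,       # run the DAC 0.01% slow
                            program_seconds=74 * 60)    # one full CD
print(f"FIFO headroom: {frames:,.0f} frames "
      f"(~{frames / 44_100:.2f} s of audio, ~{frames * 4 / 1024:.0f} KiB at 16-bit stereo)")

Worst case works out to roughly 40,000 frames, i.e. under a second of audio and on the order of 150KiB of buffer at 16-bit stereo, which is easily within reach of a cheap DSP plus a bit of RAM.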
 

Bob Katz has an interesting article, to be found at www.digido.com.

It is true that the signal output from a convertor will have minimal wordclock jitter. However, any jitter on the input clock that was not rejected by the digital PLL will be in the new data stream forever. Also, even assuming that the incoming signal had no jitter, I am still sceptical about picking the nearest interpolated sample, which was itself generated by interpolation from very few stored coefficients. Mathematically, the signal is going to be different. This is fundamentally different from oversampling, which is inherently a linear operation, with the only non-linearities being rounding errors. Let's hope the errors of sample rate converters are also beyond the 24th bit...
 
This is an interesting topic which I've been looking at for several years now. I've made numerous prior posts to this forum on the subject, so I won't re-hash all those posts... just a few points:

I'm generally skeptical of Bob Katz's writings. I do recall reading an article of his on ASRCs and jitter, and noticing several major fallacies and misleading statements. Perhaps he has reformed his ideas; I don't know. Maybe I'll check out that site. But I would take his words with a big grain of salt.

As far as I can determine, modern ASRCs are only susceptible to very low-frequency jitter, with a jitter sensitivity rolling off at a 2nd- or 3rd-order slope from a corner frequency of just a couple of Hz. This means that jitter at frequencies beyond about 100Hz or so should have essentially no effect. Perhaps this can be outdone by a good PLL, but then we have to deal with the PLL's inherent jitter, which can easily exceed that of a fixed oscillator. PLL design is also a very complex topic. I doubt many hobbyists could build a PLL circuit to rival or outdo the jitter rejection of an ASRC... I might attempt it sometime in the next year, but I'm not getting my hopes up.
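To sanity-check that, here is how I would model the roll-off in Python. The 2Hz corner and the 2nd-order slope are my assumptions based on the figures above, not datasheet values.

Code:
import math

def asrc_jitter_attenuation_db(f_jitter, f_corner=2.0, order=2):
    """Crude model: flat below the corner, then rolling off at order * 20 dB/decade."""
    if f_jitter <= f_corner:
        return 0.0
    return -20.0 * order * math.log10(f_jitter / f_corner)

for f in (10, 100, 1_000, 10_000):
    print(f"{f:>6} Hz jitter: {asrc_jitter_attenuation_db(f):7.1f} dB")

With those assumptions, jitter at 100Hz is already attenuated by nearly 70dB, which is why I say only the lowest-frequency jitter should matter.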

The audibility of low-frequency jitter is also debatable, leaving me with the sense that, as far as preventing jitter from affecting the sample rate conversion is concerned, modern ASRCs do an exceedingly good job. Certainly, as far as jitter above 100Hz or so is concerned, I am confident that modern ASRCs are totally immune. Given a transport with a stable local oscillator and a DAC with a nice clean, low-jitter oscillator, one would expect very little jitter in the 10-100Hz range, so an ASRC looks like a great solution.

Without yet commenting on the accuracy of the filter coefficients: the ASRC algorithm employs a polyphase FIR filter, fundamentally identical to what you find inside an oversampling interpolation filter. The only difference is that an ASRC will have a rotating phase, whereas an interpolation filter's FIR coefficient phase remains constant. In the specific case of the AD189x devices, I am satisfied, after a thorough review of the available documentation, that the coefficients will be accurate enough to give an output so close to the original waveform that the original A-D may as well have simply been taking its samples at a different rate.

Given the amount of digital manipulation that even the purest recordings will have endured on the way to your DVD-A disc or hi-fi CD or whatever, the effect of this digital resampling filter is quite insignificant (IMHO). Several times that amount of "damage" will subsequently be done by your 8x oversampling filter or sigma-delta DAC.

I'm also not convinced that transients present in the sampled waveform will have any effect whatsoever on the ASRC process (so long as our accumulators have sufficient word width to prevent clipping). If anything, I'd be far more concerned about the effect of transients on 1-bit and sigma-delta A-D and D-A converters, which appear to be much more common these days than ladder-type devices. Oh, and DSD streams like SACD as well... here we see just what alvaius refers to: average measurements which look fine, but is all the transient info really there?
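For anyone who wants to see the "rotating phase" idea in concrete form, here is a deliberately minimal polyphase interpolator in Python. It is nothing like a production ASRC: the AD189x uses a far longer prototype filter, many more phases, interpolation between adjacent phases, and it measures the input/output ratio continuously rather than taking it as a fixed parameter. The tap counts and the nearest-phase selection below are purely illustrative.

Code:
import numpy as np
from scipy.signal import firwin

def polyphase_resample(x, ratio, phases=64, taps_per_phase=16):
    """Resample x by `ratio` (output rate / input rate) with a polyphase FIR."""
    proto = firwin(phases * taps_per_phase, min(1.0, ratio) / phases)  # prototype low-pass
    bank = proto.reshape(taps_per_phase, phases).T * phases            # one row per phase

    out = []
    t = float(taps_per_phase)               # output position, in input samples
    while t < len(x):
        n = int(t)                          # integer part: which input samples
        frac = t - n                        # fractional part: which coefficient phase
        phase = int(round(frac * phases)) % phases
        history = x[n - taps_per_phase + 1 : n + 1][::-1]   # x[n], x[n-1], ...
        out.append(np.dot(bank[phase], history))
        t += 1.0 / ratio                    # fixed increment here; a real ASRC
                                            # measures this ratio on the fly
    return np.asarray(out)

# Quick check: resample a 1kHz tone from 44.1kHz to 48kHz.
fs_in, fs_out = 44_100, 48_000
tone = np.sin(2 * np.pi * 1_000 * np.arange(4096) / fs_in)
resampled = polyphase_resample(tone, fs_out / fs_in)

The point is simply that each output sample is an ordinary FIR dot product; only the choice of coefficient phase moves around from sample to sample, which is why the operation stays essentially linear apart from coefficient accuracy and rounding.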

Does this mean that I think ASRCs are the ultimate answer to jitter? Of course not. But, at present, they are a useful, inexpensive, and readily available tool, and I think that many people are afraid of them for no valid reason. I think there are other areas of the analog and digital audio chain which have a much greater effect.

There's one area where I find ASRCs indispensable, and that's when they're paired with a DSP processor. There's nothing like having all of your input data brought to a single, common sample rate and word width! You can mix and match input sources freely, without worrying about synchronization or sample rate mismatches, and the added benefit of excellent jitter rejection is just icing on the cake!

alvaius:

There's one thing I'm really looking forward to (though not necessarily as a hobbyist who will have to deal with all this increasingly complex technology...): the proliferation of isochronous digital links with two-way communication between devices. Then we can (potentially) finally forget about interface jitter and just worry about keeping buffers full and slaving transport devices to output/analog conversion devices.

But, as I mentioned, Firewire and the like are going to be painful for us hobbyists to deal with, especially since much of the technology will fall under licensed distribution, just like HDCD devices. Furthermore, Digital Rights Management is spreading like cancer, and that means encryption! Nasty encryption, with key revocation measures for compromised device keys and so forth. That stuff is going to be a real hurdle. At least for now, we'll be able to tap into I2S busses and snatch our precious raw audio data right out of the DAC's mouth, but for how long?
 
Quote:
The audibility of low-frequency jitter is also debatable, leaving me with the sense that, as far as preventing jitter from affecting the sample rate conversion is concerned, modern ASRCs do an exceedingly good job. Certainly, as far as jitter above 100Hz or so is concerned, I am confident that modern ASRCs are totally immune. Given a transport with a stable local oscillator and a DAC with a nice clean, low-jitter oscillator, one would expect very little jitter in the 10-100Hz range, so an ASRC looks like a great solution.
An excellent answer, IMHO. I think people often fail to discriminate between the effects of input jitter on a digital device such as the AD1896 and on a digital/analog interface like an ADC or DAC.

Correct me if I am wrong, but if we assume that an ASRC has excellent high-frequency jitter rejection, then jitter of several nanoseconds with most of its spectrum above 1kHz might actually be preferable to 500ps of jitter with significant low-frequency content.

For purely digital components the important statistic is the size of the data eye. Significant (nanosecond-scale) phase jitter can be tolerated within that margin. It is only at the interfaces to the analog world that we need to be concerned with jitter that is orders of magnitude smaller than what is acceptable at 12MHz bit rates.
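To put rough numbers on both budgets (the 12MHz link rate and the full-scale 20kHz sine criterion are just convenient round figures):

Code:
import math

bit_rate = 12e6                          # S/PDIF-class bit rate, as above
unit_interval = 1.0 / bit_rate
print(f"Data eye (unit interval): {unit_interval * 1e9:.0f} ns; "
      f"1 ns of jitter uses {1e-9 / unit_interval:.1%} of it")

def dac_jitter_limit(f_signal, bits):
    """Peak clock jitter keeping the slew-induced error on a full-scale
    sine to roughly half an LSB: t < 1 / (2 * pi * f * 2**bits)."""
    return 1.0 / (2 * math.pi * f_signal * 2 ** bits)

for bits in (16, 24):
    print(f"{bits}-bit DAC, 20kHz full-scale sine: "
          f"{dac_jitter_limit(20_000, bits) * 1e12:.1f} ps")

So a nanosecond of jitter consumes only a percent or so of the data eye on the link, while the DAC clock needs to stay within roughly 120ps (16-bit) or about 0.5ps (24-bit) to keep the error at the LSB level.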

I'd certainly welcome any responses from those more knowledgeable.
 
I beg to differ on the last statement. ASRCs are not exclusively "digital" in the same sense that a link from a transport to a DAC is "digital". From a purely mathematical standpoint, the input of an ASRC behaves very much like a PLL: the ASRC actually measures, with fine resolution, the timing of the input samples. If there is jitter on the input, then it can appear on the output. An article by Julian Dunn of Audio Precision goes over this a bit; in that case, he found jitter showing up well beyond a few kHz. I will need to look over the AD1896 again. I am hitting up the ADI FAEs for a better answer on just how well it does at a measurement level.

I still think they are great, and much, much better than not using them at all. However, I am leaning towards using a DSP for SRC, since my sources are of known frequency, as is my output, and I don't mind buffering enough to ensure I don't get underflow/overflow. That is for my personal use, though; for professional use, it is not always an option for me.
 
Agreed, although it must be pointed out that an ASRC will not suffer from any of the problems typically associated with PLLs, such as backlash error, phase noise in the VCXO, etc. Indeed, the ASRC does not use any kind of phase comparator as is found in almost all PLLs. So I do care to draw a distinction between them, insofar as their real-world performance will be affected by different factors and will differ significantly.

On page 18 of the AD1896 datasheet is a frequency response curve for the AD1896 sample rate ratio digital filter. I would presume that the jitter sensitivity would essentially match this curve. From this curve, we can deduce that jitter rejection should be 100dB at 1kHz. I imagine that a 20dB reduction in jitter levels from what might be expected in a typical data stream should be sufficient to sink the audible jitter effects below the threshold of human detection, particularly if the jitter is of a more random nature than a stressful lab test would produce. But, of course, I don't exactly have any hard data on that...
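Turning those dB figures into time, and assuming for the sake of argument 10ns of incoming jitter concentrated at a single frequency:

Code:
def residual_jitter(input_jitter_s, rejection_db):
    """Apply an amplitude rejection figure (in dB) to an input jitter level."""
    return input_jitter_s * 10 ** (-rejection_db / 20)

for db in (20, 60, 100):
    print(f"{db:>3} dB rejection: 10 ns in -> "
          f"{residual_jitter(10e-9, db) * 1e12:,.1f} ps out")

Even the modest 20dB case brings 10ns down to 1ns, and at the 100dB deduced from the datasheet curve the residue is of the order of 0.1ps.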

alvaius: I guess you're not concerned about the latency of running your audio through buffers large enough to absorb sample rate mismatches over a prolonged listening session? That's potentially quite a large buffer...

I do recall reading that Audio Precision paper by Julian Dunn some time ago... I'll have to fish it out and re-read it. Incidentally, Julian Dunn now has his own company ( http://www.nanophon.com/ ) where he has posted some excellent papers on digital audio. Of particular interest to anyone designing or using digital FIR filters of any sort is this one:

http://www.nanophon.com/audio/antialia.pdf

Perhaps the phenomenon explained in the aforementioned paper has a greater effect on the quality of the AD1896's output?
 