Jitter? Non Issue or have we just given in?

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
When the DAC is connected to audio equipment with high enough resolution to resolve 16 bits, jitter makes a night and day difference. It is the difference between synthetic, "digital", unnatural sound that causes listening fatigue, and natural, detailed music like we were used to getting from analogue sources like tape or vinyl. It is the difference between a noisy background and a pitch-black background.


Spot on !!!
 
I beg to differ. A small timing error on a small signal would give an unmeasurably tiny voltage error, so I see a great distinction between a 0.0003V voltage error which would be measurable at any time and the effect of a 12ps timing error which might not, depending on the signal.

Ah, but the context in which I said that was not multibit DACs (where I'm in total agreement with you) but low-bit ones. Low-bit DACs are always playing out some signal, even if it is totally inaudible.

Just to summarise, I agree that the internal workings of certain DACs would make them sensitive to jitter within their own internal clocks when 'assembling' the output voltage, but the accuracy of the user's sample rate would (should) be unrelated to this.

I wasn't even on to considering the accuracy of the user's sample rate yet :) But yeah, I think that's a non-issue.
 
I beg to differ. A small timing error on a small signal would give an unmeasurably tiny voltage error, so I see a great distinction between a 0.0003V voltage error which would be measurable at any time and the effect of a 12ps timing error which might not, depending on the signal.

Sure, tiny. So what is "normal" for you, then? Would 8 bit audio be good enough???
Jitter that degrades the original 16 bit accuracy is bad jitter. "Tiny" or not...
 
The designer of the Anedio DAC has an interesting white paper on jitter on their website. James, the gentleman mentioned, designed 20 GHz DACs in the particle physics field. I think he knows what he is writing about. I suggest you read this paper. I am extremely pleased with his design, finding it a SOTA product. Regards
 
Sure, tiny. So what is "normal" for you, then? Would 8 bit audio be good enough???
Jitter that degrades the original 16 bit accuracy is bad jitter. "Tiny" or not...
The point being that the 'noise' associated with sample rate jitter scales with signal amplitude - I think. On low amplitude signals it is proportionately the same as at higher amplitudes, relative to the output signal. If we reduce sample rate jitter so that it is inaudible on a full scale signal, it should remain inaudible at lower amplitudes, too. (This is different from quantisation error, where the effect becomes proportionately larger as signal amplitude decreases.)

This suggests that even if we cannot reduce jitter to 1 LSB on a 24 bit full scale signal, we still get the benefit of the higher bit depth at lower signal amplitudes. i.e. jitter doesn't negate the advantage of 24 bit over 16 bit: even if you can only reduce jitter noise to, say, 256 LSBs on a full scale 24 bit signal, by the time you're down below 16 bit amplitude the jitter noise isn't even 1 LSB of the 24 bit range. Maybe.
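The scaling argument above is easy to check numerically. Here is a small illustrative simulation (my own toy model, not taken from any paper discussed here): sample a sine wave at Gaussian-jittered sample instants and compare the resulting SNR at full scale and at 40 dB below it.

```python
import math
import random

def jitter_snr_db(amplitude, freq_hz=1000.0, fs=44100.0,
                  jitter_rms_s=100e-12, n=20000, seed=1):
    """Estimate the SNR of a sampled sine whose sample instants are
    perturbed by Gaussian clock jitter (a deliberately crude model)."""
    rng = random.Random(seed)
    sig_pow = err_pow = 0.0
    for k in range(n):
        t = k / fs
        dt = rng.gauss(0.0, jitter_rms_s)       # this sample's timing error
        ideal = amplitude * math.sin(2 * math.pi * freq_hz * t)
        jittered = amplitude * math.sin(2 * math.pi * freq_hz * (t + dt))
        sig_pow += ideal * ideal
        err_pow += (jittered - ideal) ** 2
    return 10 * math.log10(sig_pow / err_pow)

# Full scale versus 40 dB quieter: the SNR barely moves, because the
# jitter-induced error scales with the signal itself.
print(jitter_snr_db(1.0), jitter_snr_db(0.01))
```

Both calls return essentially the same SNR, whereas a quantisation-noise model would lose 40 dB on the quieter signal.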
 
Do you mean this one:

Anedio Affordable High-End Audio : Multi-stage jitter reduction circuits that virtually eliminate jitter in the DAC output

However, stage 3 looks like it's taken from ESS materials and it's gobbledygook to me. Is there anyone here who can explain it?

It sounds like total trash to me. 'Nearest neighbour' re-sampling of a sampled waveform combined with linear interpolation between sample transitions. You know it's total rubbish when he says

...a conventional sample-rate converter modifies every sample based on a constantly-changing estimate of the input sample rate, inevitably introducing audible artifacts

I'll bet you a fortune that that is just not true, and that people who know what they're doing can prove it, unlike this... person.
 
The designer of the Anedio DAC has an interesting white paper on jitter on their website. James, the gentleman mentioned, designed 20 GHz DACs in the particle physics field. I think he knows what he is writing about. I suggest you read this paper. I am extremely pleased with his design, finding it a SOTA product. Regards

I stopped reading after this sentence:

At the first stage of jitter reduction are high-performance digital transformers for rejecting high-frequency common-mode noise from digital sources. These transformers, made by Scientific Conversion,...

:rolleyes:
 
Well, if the ASRC did not modify every sample then it wouldn't be doing its job properly. But the 'inevitably introducing...' bit is indeed forked-tongue speak.

That's what I meant - I should have highlighted in bold.

(I was discussing sample rate converters yesterday in another thread, and I was making the point that yes, they change the samples but in a mathematically almost perfect way).
 
........they change the samples but in a mathematically almost perfect way).

Yes, in a mathematically ALMOST perfect way!

Here's a potted explanation of ASRC from that 2004 thread
Input data arrives at an Fs_in rate. We need to store the most recent 64 input samples, in a RAM or FIFO, on our device. Of course, this memory gets updated frequently ... every time a new input sample comes along, we kick out an old one. Out with the old, in with the new !

Also stored on our device is a set, or more accurately, an ARRAY of numbers, called coefficients. These coefficients can be stored in ROM, because they will never change. Now, it's quite a LARGE array ... it has 64 columns, but approximately 1 MILLION rows. We call each row a "polyphase filter" ... so each polyphase filter has 64 coefficients, and there are a million different polyphase filters. (In reality, we might not really need a ROM this big, because there are some clever ways to calculate some polyphase filter rows from others. But it's absolutely fine to imagine ALL these polyphase filters, with 64 coefficients each, stored on our device.)

Now what happens when an OUTPUT clock "tick", arriving at a totally asynchronous rate of Fs_out, comes along? Well, our device needs to CALCULATE an audio output sample. And it does this very simply: the device simply SELECTS one of the polyphase filters, which has 64 coefficients, and multiplies those 64 coefficients by the 64 input data samples stored in RAM. Then it adds up the result of all those multiplications, and provides that value as the output sample. And that's all there is to it.
By the way, we often call this multiply/add operation a "convolution".

Simple, right? Indeed it is !!! There's only ONE missing piece. How does the device know which polyphase filter to select, from the one million available, when an output clock (Fs_out) edge comes along?

THIS IS THE WHOLE "TRICK" OF ASYNCHRONOUS SAMPLE RATE CONVERSION !!!! We have to figure out which of the very many polyphase filters to select, for multiplication/addition with input data, when an output clock edge comes along. And we shall address this issue in the next post ... and ultimately find the key to understanding how the ASRC responds to JITTER.
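For the curious, the select-and-convolve step described above can be sketched in a few lines. This is a deliberately tiny toy (8 taps and 128 phases instead of 64 taps and a million rows, with a quick windowed-sinc coefficient table I made up), not the actual filter from any real ASRC chip:

```python
import math
from collections import deque

N_TAPS = 8          # toy stand-in for the 64-sample history in the post
N_PHASES = 128      # toy stand-in for the ~1 million polyphase rows

def make_polyphase_rom():
    """Build a windowed-sinc interpolator, split into N_PHASES rows of
    N_TAPS coefficients each (an illustrative filter, not a real design)."""
    rom = []
    for p in range(N_PHASES):
        frac = p / N_PHASES                      # fractional delay for this row
        row = []
        for t in range(N_TAPS):
            x = (t - (N_TAPS // 2 - 1)) - frac   # distance from interpolation point
            sinc = 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
            window = 0.5 + 0.5 * math.cos(math.pi * x / (N_TAPS // 2))  # Hann-like taper
            row.append(sinc * window)
        rom.append(row)
    return rom

ROM = make_polyphase_rom()
history = deque([0.0] * N_TAPS, maxlen=N_TAPS)   # the "RAM/FIFO" of recent input

def push_input(sample):
    """An Fs_in edge: out with the old, in with the new."""
    history.appendleft(sample)

def output_tick(phase_index):
    """An Fs_out edge: SELECT one polyphase row, convolve with the history."""
    return sum(c * s for c, s in zip(ROM[phase_index], history))
```

A handy sanity check: phase 0 corresponds to zero fractional delay, so its row is just a unit impulse and `output_tick(0)` returns the stored sample unchanged.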
 
This gives a bit more meat to the mathematically ALMOST perfect - sorry if I appeared to jump all over it -
Fs_out/Fs_in = N/M = ratio of the sample rates

Where N is known, and M can be found by recognizing that, after a sufficiently long period of time, a counter clocked by Fs_out and incremented by M should equal a counter clocked by Fs_in and incremented by N. In other words, N*Fs_in = M*Fs_out (on average).

So, every so often our counter clocked by Fs_out and incremented by M is COMPARED to a counter clocked by Fs_in and incremented by N. The DIFFERENCE between these two counters is attenuated by some gain factor, and adjusts the estimate for M. That's all there is to it: a simple feedback loop, operating in a similar fashion to a PLL that's measuring, and accumulating, the phase difference between two clocks in order to align their edges.

But of course, here we need to align "polyphase counts" in order to figure out how many polyphases "fit" between Fs_out edges. The loop will, of course, only perform this task in an AVERAGE sense ... it's got a pretty narrow bandwidth, given the precision required and the clocks available. How narrow? The bandwidth of the Polyphase Locked Loop in the AD1896 is about 3 Hertz.
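As a sanity check on the counter-compare loop just described, here is a toy numeric version. The loop gains (`kp`, `ki`) and counter width are invented for illustration, and the two real clocks are collapsed into a single per-edge increment; but the fixed point is the same one the post derives: N*Fs_in = M*Fs_out.

```python
def track_ratio(fs_in, fs_out, n_fixed=1 << 10, edges=50_000, kp=0.1, ki=1e-3):
    """Toy 'polyphase locked loop': estimate M such that N*Fs_in = M*Fs_out.
    Two counters (one incremented by N on input edges, one by the current M
    estimate on output edges) are compared, and the attenuated difference
    trims the M estimate. kp/ki are invented illustration values."""
    per_edge_in = n_fixed * fs_in / fs_out   # what the N-counter gains per output edge
    m_est = float(n_fixed)                   # initial guess: the two rates are equal
    err = 0.0                                # difference between the two counters
    for _ in range(edges):
        err += per_edge_in - (m_est + kp * err)  # counters advance, proportional trim
        m_est += ki * err                        # slow integral adjustment of M
    return m_est / n_fixed                       # converges to Fs_in / Fs_out

print(track_ratio(44100.0, 48000.0))  # ~0.91875, i.e. 44100/48000
```

The loop only tracks the ratio in an average sense, exactly as the post says; the narrow bandwidth is what filters input-clock jitter out of the estimate.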
 
This gives a bit more meat to the mathematically ALMOST perfect - sorry if I appeared to jump all over it -

(No problem!)

Intuitively I think I realise that all these things are basically equivalent: PLLs, sample rate converters and so on - but not bodged D-type linear interpolator-splicer re-samplers!

The best system of all has to be for the DAC to be clocked by its own almost perfect clock, drawing on a FIFO, and to have flow control feedback from the FIFO to the source. No PLLs, no re-sampling, jitter determined only by a single crystal super-oscillator.

The problems seem to start when we connect a source to a DAC without any flow control feedback. i.e. allowing the source to dictate the sample rate then sending it over a jitter-prone interface. I therefore pronounce most outboard DACs to be silly.
 
I just wondered how perfect this is if it is selecting the polyphase filter to apply to the incoming sample but doing so in an AVERAGE sense? I presume this works better than it might at first appear because the input and output clocks don't change much. Maybe some experts can answer about the accuracy attained here?

As another piece to chew on - an Audiophilleo is measured at 4 ps jitter on its output (I think) - yet when its USB power is substituted with an external 5V supply, it is reported to improve the sound. Even though its SPDIF output is said to be galvanically isolated? Anybody care to guess why the improvement with a cleaner PS?

Edit:
These data are from the 11.2896 MHz Audiophilleo1 clock which is used for the 176 kHz sampling rate. Measurements are taken at the BNC output using the complete hardware path used for SPDIF generation.

It shows just 3.8 ps RMS phase noise integrated from 1 Hz to 100 kHz taken with a Symmetricom 5120A.
[Attached image: phase noise plot, ap1_sn029_lnr_noiso.gif]
 
Hi,

The best system of all has to be for the DAC to be clocked by its own almost perfect clock, drawing on a FIFO, and to have flow control feedback from the FIFO to the source. No PLLs, no re-sampling, jitter determined only by a single crystal super-oscillator.

An alternative is to place the flow control in the DAC, but to make it very SLOW.

In practice, the FIFO and Clock in AMR's DAC will update once every few minutes with the smallest adjustment step (fractions of 1ppm of the clock) once the system has attained lock. So, a wide range of source clocks and even their wander can be adapted to, yet any source jitter is blocked.
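Out of curiosity, that very slow trim scheme can be mocked up as a toy control loop. All numbers here (FIFO depth, gain, update interval, the 2 ppm source error) are invented for illustration and are not AMR's actual values:

```python
def simulate_fifo_lock(minutes=600, source_err_ppm=2.0, fs=44100.0,
                       depth=8192, kp=0.01, update_every=3):
    """Very slow flow control inside the DAC: every few minutes, trim the
    DAC clock by a small amount proportional to how far the FIFO has
    drifted from half full. All parameters are invented illustration values."""
    target = depth / 2.0
    fill = target                 # start with the FIFO half full
    dac_trim_ppm = 0.0            # current correction applied to the DAC clock
    for minute in range(minutes):
        # samples gained (or lost) this minute from the clock-rate mismatch
        fill += fs * 60.0 * (source_err_ppm - dac_trim_ppm) * 1e-6
        if minute % update_every == 0:
            dac_trim_ppm = kp * (fill - target)   # sub-ppm nudge
    return dac_trim_ppm, fill
```

With a source running 2 ppm fast, the trim settles near 2 ppm and the FIFO stabilises a couple of hundred samples above half full; because the corrections are so small and so infrequent, source jitter never reaches the DAC clock.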

Of course, given the limits of current common logic technology, it is not easy to make a DAC (any DAC) with jitter much lower than 100 ps, no matter how good the clock.

The problems seem to start when we connect a source to a DAC without any flow control feedback. i.e. allowing the source to dictate the sample rate then sending it over a jitter-prone interface. I therefore pronounce most outboard DACs to be silly.

Quite right too, quite right.

It is possible to do it differently.

I found the biggest issue is to achieve a sufficiently quick initial "lock" to prevent the FIFO from over/under-flowing...

Ciao T
 