Jitter? Non-issue, or have we just given in?

Well eh, I designed something that may be similar in 2008. I called it a frequency tracker. It uses a microcontroller (Microchip) to measure the exact frequency difference between source and masterclock (using on-chip hardware timers).

There's no such thing as 'exact frequency difference'. What you can measure with a period timer is how many cycles of your local clock occur between cycles of an incoming clock. Something quite different.
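For illustration, here is a minimal sketch of what that measurement boils down to. The hardware capture is simulated with plain arithmetic so it can be run on a PC, and the clock values are just examples; on a microcontroller the two helper functions would be replaced by the actual timer/capture hardware.

/* Sketch: count how many ticks of the local clock occur over a number of
 * periods of the incoming clock. The result is a ratio against the local
 * clock, not an absolute frequency. Hardware capture is simulated here;
 * the two helpers are hypothetical stand-ins for real capture reads. */
#include <stdint.h>
#include <stdio.h>

#define LOCAL_CLOCK_HZ 12288000.0      /* 12.288 MHz local master clock      */
#define INPUT_CLOCK_HZ 11289600.0      /* e.g. 256 x 44.1 kHz incoming clock */

static double sim_time = 0.0;

static void wait_for_input_edge(void)          /* advance to next input edge */
{
    sim_time += 1.0 / INPUT_CLOCK_HZ;
}

static uint32_t read_local_timer(void)         /* free-running local counter */
{
    return (uint32_t)(sim_time * LOCAL_CLOCK_HZ);
}

int main(void)
{
    const int periods = 10000;          /* average over many input periods */
    uint32_t start = read_local_timer();
    for (int i = 0; i < periods; i++)
        wait_for_input_edge();
    uint32_t ticks = read_local_timer() - start;

    printf("%u local ticks over %d input periods (~%.4f ticks per period)\n",
           ticks, periods, (double)ticks / periods);  /* ~1.0885: a ratio   */
    return 0;
}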

The difference is loaded into a discrete 10-bit R2R DAC over a parallel interface in order to minimize noise levels. This DAC in turn drives the VCXO through a low-pass filter.

If you just do linear processing then you'll have something which to all intents and purposes acts like an analog PLL, but is digital. This solves nothing because the problem calls for a non-linear control solution. Can be done with software of course 🙂
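For completeness, the "linear processing" case is essentially just a PI loop filter in software. A minimal sketch follows; gains and the 512 mid-scale value are illustrative only, and this is the generic digital equivalent of an analogue PLL loop filter, not the patented design.

/* Sketch of the linear case: a software PI loop filter turning the measured
 * period error into a 10-bit DAC code for the VCXO. All constants are
 * illustrative. */
#include <stdint.h>
#include <stdio.h>

static int32_t integrator = 0;

static uint16_t pll_linear_update(int32_t period_error_ticks)
{
    const int32_t KP = 2;              /* proportional gain (illustrative) */
    const int32_t KI = 1;              /* integral gain (illustrative)     */

    integrator += KI * period_error_ticks;

    int32_t code = 512 + KP * period_error_ticks + integrator;

    if (code < 0)    code = 0;         /* clamp to the 10-bit DAC range */
    if (code > 1023) code = 1023;
    return (uint16_t)code;
}

int main(void)
{
    /* A constant +2-tick error winds the integrator up and the DAC code
     * ramps, just as the control voltage of an analogue PLL would. */
    for (int i = 0; i < 5; i++)
        printf("DAC code: %u\n", pll_linear_update(2));
    return 0;
}

Because a correction proportional to the error is applied every update, this behaves like an analogue PLL of the same bandwidth: source jitter inside that bandwidth is tracked rather than rejected, hence the need for something non-linear.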

Disclaimer - I have a patent (now expired) on a digital PLL. It's the only patent to my name 🙂
 
I have a patent (now expired) on a digital PLL

Hi abraxalito

>> Disclaimer - I have a patent (now expired) on a digital PLL. It's the only patent to my name

Well, I did something like this around 1990 (a personal project) and I named it the DACVCO 🙂

The DACVCO was also controlled by a Microchip 57.., using a 5842 & PCM63....

What I did, as mentioned by previous authors: the RF waves caused by each digital signal (the worst is the data stream itself) have to be taken into account in the implementation. I analysed these signals/waves with an HF spectrum analyzer (100 kHz - 0.5 GHz) using magnetic & electric near-field probes.

My conclusion so far: the higher in frequency the waves are (> 5 MHz), the more carefully the grounding & isolation have to be implemented. In other words: in the end it looked like heavy military equipment, with each part shielded in thick copper... The analogue output part of my solution did not pick up any RF noise from the digital part... but the RF noise on the I/V part coming from the DAC itself cannot be isolated (RF & intermodulated noise)... In other words: the RF from the digital part does not stop at the DAC. To solve this the digital signal has to be more static. But that is another story, to have 192 kHz & 24 bit with xx oversampling. The first step would be to have a parallel data feed rather than a serial data feed driving the DAC (gee: 24 data lines).

The final end, or death, of this prototype was: even playing a burned CD in master mode sounded better than the original....

Maybe I'll put up some pictures of the death of this nightmare... 😀

Hp
 
Take into account that the maximum permissible difference between the remote clock and the local clock can be ~200 ppm (a typical clock is ±100 ppm).
This means that at 12.288 MHz, 200 ppm is only 2457.6 Hz.
So you plan to reject the jitter that has a 2.5 kHz frequency or higher... Big deal, any SPDIF receiver can handle that. And any delta-sigma DAC. And so can the transport controller's RAM (it already has some 4 KB).
The problem not solved is low-frequency jitter - to go down to 10 Hz you still need 0.2 MB of memory.
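Just to spell out the arithmetic behind those figures (the second line is my own addition, showing the same 200 ppm as a drift rate at a 48 kHz sample rate):

/* The 200 ppm figure spelled out. */
#include <stdio.h>

int main(void)
{
    double ppm = 200e-6;

    printf("12.288 MHz x 200 ppm = %.1f Hz\n", 12.288e6 * ppm);               /* 2457.6 Hz */
    printf("48 kHz x 200 ppm     = %.1f samples/s of drift\n", 48.0e3 * ppm); /* 9.6       */
    return 0;
}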
 
Hi,

So you plan to reject the jitter that has a 2.5kHz frequency or higher... Big deal, any SPDIF receiver can handle that. And any delta-sigma DAC.

First, it would seem that "Any SPDIF Receiver" excludes any SPDIF Receiver currently made by Asahi Kasei Micro, Burr-Brown/TI or Cirrus Logic. They all feature ZERO rejection of Jitter at 2.5 kHz.

And I would not say that ANY DS DAC can handle that either, based on measuring them with an AP2 in "added jitter mode". The ones I looked at showed an appalling sensitivity to jitter.

Ciao T
 
Hi,

I'm really enjoying this entire thread, thanks.
Does anyone know how much "jitter (all effects lumped)" from the recording process is built into a typical digital file?

This heavily depends.

IF (and that's a big IF) a high-grade A/D converter with a low-jitter clock was used and the signal was kept in the digital domain, without asynchronous sample rate conversion etc., then the jitter will be that of the A/D converter and the D/A converter on playback, as jitter STRICTLY AND ONLY matters at the point of conversion, be it digital to analogue, analogue to digital or digital to digital.

Even wide ranges of digital signal manipulation and edits should not increase jitter. This really is one of the beauties of the digital recording process.

While I grew up with analogue film SLR cameras and analogue tape for recordings (and was able to get surprisingly good results using such primitive tech), seeing what is possible today using the best digital domain recording systems and digital SLR's coupled to the ease of both preserving fidelity and making edits where desired, I would never consider going back.

Ciao T
 
> It is a specific programmable oscillator, one with low audio band jitter...

Would that be Silabs' DSPLL, or something custom?

And I would not say that ANY DS DAC can handle that either, based on measuring them with an AP2 in "added jitter mode". The ones I looked at showed an appalling sensitivity to jitter.

Since jitter produces distortions which are proportional to the difference between two output samples, and DS DACs output lots of random-looking low-bit samples, they should be more sensitive to jitter than multi-bit DACs, but the frequency of the jitter that matters could also be very different.
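To put a rough number on that relationship (my own back-of-envelope, not from anyone's measurements): the error from clocking a sample at slightly the wrong instant scales with the signal slope at that instant, which is also what sets the step between adjacent samples.

/* Back-of-envelope: worst-case error from a timing error dt on a full-scale
 * sine is roughly 2*pi*f*A*dt. The 10 kHz / 1 ns numbers are illustrative. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI = 3.141592653589793;
    double f  = 10e3;      /* signal frequency, Hz */
    double A  = 1.0;       /* full-scale amplitude */
    double dt = 1e-9;      /* 1 ns timing error    */

    double err = 2.0 * PI * f * A * dt;
    printf("peak error ~ %.2e of full scale (about %.0f dBFS)\n",
           err, 20.0 * log10(err));                   /* roughly -84 dBFS */
    return 0;
}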
 
I'm sure no one noticed, but I've taken Abraxalito's advice to not contribute anything for a day or two, and to keep quiet and learn something.

Here's what I have learned so far. (It could be a useful summary for someone new to the thread):

* Jitter makes people's ears bleed, and outboard DACs suffer from a paradoxical problem in that while they look pretty and aren't inside the same box as the transport, they can't eliminate jitter and perfectly match the source clock rate at the same time.
* The first CD players available were a closed system - the CD transport was slaved to a fixed frequency crystal-controlled DAC. They are still like that.
* Thirty years later, some of the best brains in the business are discussing how best to get close to this level of performance using a few digital thingummies that are top secret.
* The difference between their solutions is that the inferior one adjusts the playback frequency by 0.001% every 30 seconds, while the other does it by 0.0001% every 60 seconds. The audible comparison is "like night and day".
* A financial opportunity has been sensed - ravening hordes of audiophiles are expected to batter the laboratory doors down any minute now.
* In the meantime, asynchronous mode USB has apparently been invented which makes the whole exercise a bit moot - although it took hundreds of hours to make it work (why?!).
* It's important to tune your waveforms so they look nice while a 'scope probe is attached, and to isolate transport from DAC using transformers because TOSLINK is too slow.
* Any of the DIY-ers around here could modify a CD player to make a closed loop transport for their DAC of choice, or play with a PC to provide data on demand, but it would necessarily be a non-standard solution.

Does that capture the essence of it?
 
First, it would seem that "Any SPDIF Receiver" excludes any SPDIF Receiver currently made by Asahi Kasei Micro, Burr-Brown/TI or Cirrus Logic. They all feature ZERO rejection of Jitter at 2.5 kHz.
And I would not say that ANY DS DAC can handle that either, based on measuring them with an AP2 in "added jitter mode". The ones I looked at showed an appalling sensitivity to jitter.
I see that you still like to create urban legends... whatever sells your stuff I guess.
For reference:
Jitter_performance_of_spdif_digital_interface_transceivers
Notably, the intrinsic jitter of the WM8805 is measured at 50ps, and the jitter rejection frequency of the onboard PLL is 100Hz. This can be directly compared with competitive S/PDIF solutions available today, with intrinsic jitter in the region of 150ps, and jitter rejection frequency greater than 20kHz.
Specifying_Jitter_Performance
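For a sense of scale from those quoted corner frequencies, treating the jitter transfer as a simple first-order low-pass (which real receiver PLLs are not exactly):

/* Back-of-envelope: attenuation of jitter at 2.5 kHz for a 100 Hz corner
 * versus a 20 kHz corner, assuming a first-order low-pass jitter transfer.
 * Real receiver PLLs are higher order; this only shows the scale. */
#include <math.h>
#include <stdio.h>

static double attenuation_db(double f_jitter, double f_corner)
{
    double ratio = f_jitter / f_corner;
    double mag   = 1.0 / sqrt(1.0 + ratio * ratio);   /* |H(jf)| of the low-pass */
    return -20.0 * log10(mag);                        /* positive = attenuation  */
}

int main(void)
{
    printf("100 Hz corner: %.1f dB attenuation at 2.5 kHz\n",
           attenuation_db(2500.0, 100.0));            /* ~28 dB   */
    printf("20 kHz corner: %.2f dB attenuation at 2.5 kHz\n",
           attenuation_db(2500.0, 20000.0));          /* ~0.07 dB */
    return 0;
}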
 
Hi,

* Jitter makes people's ears bleed,

Not sure who said this. Jitter audibility and objectionability depend on the spectrum of the jitter and the relative amplitudes.

and outboard DACs suffer from a paradoxical problem in that while they look pretty and aren't inside the same box as the transport, they can't eliminate jitter and perfectly match the source clock rate at the same time.

DAC's do not suffer from "paradoxical problems", but from very real and measurable problems.

Further, they CAN "eliminate jitter and perfectly match the source clock rate at the same time" if they are designed to do so. The majority of those available, however, make no attempt to do so, as they merely use generic receiver chips that by design have ZERO rejection of source jitter below, usually, around 20 kHz.

The reason for this is that, by their principle of operation with a simple PLL and a very noisy oscillator, the wide PLL bandwidth is needed to reduce the oscillator's noise. It is instructive to slow down the PLL and watch the random noise jitter levels creep up. Cirrus Logic has a good paper on this, where they compare two modes of their CS8416 in terms of jitter. One mode was the "standard PLL" with no rejection of source jitter; the other was a mode operating on only the preamble, which improves the rejection of source jitter significantly, however as a result the intrinsic jitter of the receiver is higher.
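A toy model of that trade-off, with invented scaling laws and constants chosen purely to show the shape of the curve (nothing here is taken from any datasheet):

/* Toy model: a wider loop passes more of the source jitter but leaves less
 * of the on-chip oscillator's own accumulated jitter. All constants and the
 * sqrt scalings are illustrative assumptions only. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double src_coeff = 5e-12;   /* passed source jitter grows ~sqrt(bw)   */
    const double osc_coeff = 2e-9;    /* residual oscillator jitter ~1/sqrt(bw) */

    for (double bw = 100.0; bw <= 102400.0; bw *= 4.0) {
        double src = src_coeff * sqrt(bw);
        double osc = osc_coeff / sqrt(bw);
        double tot = sqrt(src * src + osc * osc);
        printf("bandwidth %8.0f Hz: source %7.1f ps, oscillator %6.1f ps, total %7.1f ps\n",
               bw, src * 1e12, osc * 1e12, tot * 1e12);
    }
    return 0;
}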

* The first CD players available were a closed system - the CD transport was slaved to a fixed frequency crystal-controlled DAC. They are still like that.

More precisely, the DAC and transport are slaved to the same clock.

* Thirty years later, some of the best brains in the business are discussing how best to get close to this level of performance using a few digital thingummies that are top secret.

Hardly.

I used to own a 1980's DAC that used a secondary "slow" PLL; many of the true High End DACs in the 90's and early oughties contained similar means and they do work reasonably well (see also the Eindhoven Collective's DIY DAC), but are subject to limitations. For example, the Pass D1 used a secondary PLL. There are many reasons why these have fallen out of common use.

* The difference between their solutions is that the inferior one adjusts the playback frequency by 0.001% every 30 seconds, while the other does it by 0.0001% every 60 seconds.

That is incorrect. The difference is that one solution adjusts "on demand" and will not cause any adjustment to be effected unless a strict condition is met (namely the FIFO buffer is over- or under-filled to a specific degree), while the others are basically just digitally implemented PLLs.
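As I read that description, the "on demand" part amounts to something like the following; the thresholds and trim step are my own illustrative choices, not the actual product.

/* Sketch: leave the playback clock alone while the FIFO fill stays inside a
 * dead band, and only nudge it when the fill drifts past a threshold.
 * FIFO size, thresholds and the 1 ppm trim step are illustrative only. */
#include <stdint.h>
#include <stdio.h>

#define FIFO_SIZE      8192
#define LOW_THRESHOLD  (FIFO_SIZE / 4)
#define HIGH_THRESHOLD (FIFO_SIZE * 3 / 4)

/* Called periodically with the current FIFO fill; returns a clock trim in
 * ppm to apply (0 = leave the fixed low-jitter clock untouched). */
static int clock_trim_ppm(uint32_t fifo_fill)
{
    if (fifo_fill < LOW_THRESHOLD)
        return -1;    /* playback running fast relative to source: slow down */
    if (fifo_fill > HIGH_THRESHOLD)
        return +1;    /* playback falling behind the source: speed up        */
    return 0;         /* inside the dead band: no adjustment at all          */
}

int main(void)
{
    printf("%+d %+d %+d\n", clock_trim_ppm(1000), clock_trim_ppm(4096),
           clock_trim_ppm(7000));     /* -1 0 +1 */
    return 0;
}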

The audible comparison is "like night and day".

I do not remember saying anything about audibility or making Day/Night comments. I do not really remember anyone else doing so either.

* A financial opportunity has been sensed - ravening hordes of audiophiles are expected to batter the laboratory doors down any minute now.

Actually, during product development the sensitivity to source jitter, and hence the variability of sound quality with different sources, was judged too large, so a system was developed to solve this problem in a way that makes it "go away" instead of merely hiding it.

To be honest, the extra development time spent means the product is late to market and much of the actual opportunity this was meant to exploit has been lost to others. Another example of where an outstanding product is not enough; you can take lessons on that from Abraxalito.

* In the meantime, asynchronous mode USB has apparently been invented which makes the whole exercise a bit moot - although it took hundreds of hours to make it work (why?!).

Asynchronous USB only works for computer sources. For a DIY enthusiast this may be perfect; in the commercial market, USB-only designs do not do too well on sales.

As for why it took so long to make it work (it was a lot more than hundreds of man-hours): simply because no-one had done it before. Making it work in principle was actually rather quick. Making sure the system is resilient to all sorts of external conditions, is capable of locking quickly and so on is what took the time...

* It's important to tune your waveforms so they look nice while a 'scope probe is attached, and to isolate transport from DAC using transformers because TOSLINK is too slow.

Toslink has issues with speed which can impact reliable operation at quad speed (176.4 kHz/192 kHz). At lower speeds the output waveform is quite bad and can cause triggering problems.

Electrical SPDIF & AES/EBU tend to work better.

* Any of the DIY-ers around here could modify a CD player to make a closed loop transport for their DAC of choice, or play with a PC to provide data on demand, but it would necessarily be a non-standard solution.

Yes, any DIY enthusiast can use a number of approaches here. Many of these will be non-standard (which makes them unusable in the commercial market); others actually do not really work at all (asynchronous reclocking), or only "almost", and only under extremely stringent external conditions.

As with any problem, there are many possible solutions that can mitigate the problem, many more that do nothing or make it worse, plus there are a few solutions that actually address the problem.

My solution so far (AFAIK) has only been attempted once (by Lavery) who actually failed to make it work and used an ASRC instead (there was a massive thread on that on his own forum that was later deleted).

It is obviously the "correct" solution, as it effectively restores the situation with a CD transport: data is read into a FIFO (from the laser in the case of the CD transport, from the SPDIF receiver or USB subsystem in the DAC) and read out of the FIFO using a fixed, low-jitter clock.
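For a feel of what a fixed readout clock implies, a back-of-envelope with illustrative numbers (not any particular product):

/* With a genuinely fixed readout clock, how long does a given FIFO headroom
 * last against a worst-case clock mismatch? Numbers are illustrative. */
#include <stdio.h>

int main(void)
{
    double fs       = 44100.0;   /* audio sample rate, Hz              */
    double ppm      = 200e-6;    /* worst-case source/readout mismatch */
    double headroom = 4096.0;    /* samples of slack in each direction */

    double drift   = fs * ppm;              /* samples gained or lost per second */
    double seconds = headroom / drift;

    printf("drift %.2f samples/s -> about %.0f s (%.1f min) until the FIFO "
           "over- or under-runs\n", drift, seconds, seconds / 60.0);
    return 0;
}

So either the buffer is generous, the readout clock gets trimmed occasionally, or the source rate has to be estimated very accurately up front.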

Ciao T
 
Hi,

I see that you still like to create urban legends... whatever sells your stuff I guess.
For reference:
Jitter_performance_of_spdif_digital_interface_transceivers

Specifying_Jitter_Performance

Note that I excluded Wolfson Micro from my list.

They make the only receiver right now which has a jitter rejection corner below 20 kHz, and they are very rarely used (mainly because their programming interface has more undocumented features than Microsoft Windows and you cannot make it work at 176.4 kHz without using it in software mode).

AKM, BB/TI and CS, which I mentioned, all use essentially identical PLL designs with identical bandwidths, lock times etc., which is where the 20 kHz (approx.) figure comes from.

So if anyone is creating an urban myth it is the person that claims:

"reject the jitter that has a 2.5kHz frequency or higher... Big deal, any SPDIF receiver can handle that."

Which is patently untrue, instead of saying:

"There is one manufacturer of SPDIF receivers that makes receivers that do that easily and is rarely used, while most receivers manufactured and more commonly used are by a factor 10 worse."

Ciao T
 
Further, they CAN "eliminate jitter and perfectly match the source clock rate at the same time" if they are designed to do so.

Could you explain how that works? As far as I can tell, the method of choice for jitter elimination seems to be to stream buffered audio from a FIFO at a rate chosen so as to keep the FIFO from under/overflowing. There will always be a mismatch between source and playback sample rate and the Big Idea seems to be to estimate and set the playback frequency as accurately as possible so that adjustments are not necessary, possibly for several minutes. That's not the same thing as perfectly matching the source clock rate, however.
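A quick sketch of why the estimation window matters for that approach (the sample rate and window length are illustrative):

/* If the source rate is estimated by counting received samples against the
 * local reference over a window, the estimate is only good to about +/-1
 * sample per window, which bounds how accurately the playback clock can be
 * preset. */
#include <stdio.h>

int main(void)
{
    double fs     = 44100.0;   /* nominal sample rate, Hz */
    double window = 10.0;      /* measurement window, s   */

    double res_hz  = 1.0 / window;            /* +/-1 counted sample per window */
    double res_ppm = res_hz / fs * 1e6;

    printf("estimate resolution: +/-%.2f Hz (about %.1f ppm) with a %.0f s window\n",
           res_hz, res_ppm, window);          /* 0.10 Hz, ~2.3 ppm */
    return 0;
}

Even a couple of ppm of residual error still drifts the FIFO slowly, so either the window keeps growing or occasional adjustments remain necessary - which is the point above: the source clock is never matched exactly, only closely enough.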
 