Digital Receiver Chips

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
WM8804/05 has the same jitter (in specs) as DIR9001: 50ps.
CS8416 has 200ps in the specs.

There was a discussion somewhere on this forum about the WM8804/05 and whether or not it contains the buffer memory described in their white paper (the buffer necessary for proper reclocking/de-jittering). I said there:
I think they did implement the fractional PLL, but I didn't see any reference in the 8804/05 to the absolutely necessary FIFO buffer described in the white paper.
Actually, they say "S/PDIF recovered clock using PLL, or stand alone crystal derived clock generation."
The device described in the white paper would have had the word "and" there.
Also, the datasheet says nothing about a buffer. The external clock just "helps" the PLL loop, but the loop is still specified as being locked to the incoming signal.
The buffer would allow asynchronous operation of the receiver, decoupling the input PLL loop from the crystal-generated output.
 
DIR9001 really is better than CS8416 despite the same jitter specs; no doubt about that, just compare them in real life. I have a hard time deciphering both datasheets, as they seem to be written to create confusion. I know of some designers who thought the DIR9001 needs a low-jitter clock to do reclocking, just because the datasheet is so vague on that point. The two datasheets also describe jitter specs differently, so don't trust specifications on paper. It turns out that adding a low-jitter clock to the DIR9001 makes no difference, as it locks onto the incoming S/PDIF stream with its PLL just as all other chips do. Nothing new, except maybe tighter specs.

The WM8805 does reclocking and de-jittering when a low-jitter clock is used, and its jitter on paper is the same as the DIR9001's, so why not try it? Together with its brother the WM8804, it is the only one-chip solution at this moment that can do that. I have not used the WM8805 yet, but I ordered some ready-built units with it and hope it lives up to expectations. If it functions as described, and as told by those who have tried it, the chip is one of the very best developed until now. Reading the (also) complex datasheet, I cannot help thinking it does true reclocking.
 
It seems Crystal got their classic CS8412 very right; most of the chips that came after it perform worse. Don't take my word for it, though: I am just a simple hobby guy and by no means an expert with the right equipment to measure such specs.
 
Reclocking

In most RX chips, the crystal clock is only used to compare the PLL against a reference so that the incoming rate can be displayed, or to supply a clock to the PLL when there is no digital input. The incoming rate will never be exactly synchronous with a crystal, so if reclocking occurs, some samples will have to be repeated or skipped, unless SRC is implemented. Does anyone know whether SRC is implemented on the WM8805 RX chip, and does it outperform the AD1896?
 
Even an SRC doesn't eliminate the jitter; the output will be locked to the input signal via a PLL and a programmable divider.
The only way to eliminate the jitter (without skipping or repeating audio data) is to use a buffer memory: fill it halfway with data clocked by the incoming clock, and read it out with a stable local crystal clock.
From what I know, none of the simple off-the-shelf receivers do that.
The only place I have seen it done is with a DSP, which usually uses some kind of DMA and memory to achieve that buffering.
 
The most important difference between DIR chips is not their intrinsic jitter specification. Intrinsic jitter can be thought of as the jitter that would still remain even if the incoming signal contained absolutely no jitter. As has been noted, many DIRs quote intrinsic jitter figures of 50 ps. The more interesting specification is the jitter transfer mask profile: a curve showing how much, and at what frequencies, received signal jitter is suppressed. One of the big problems is that not all datasheets show this profile. The absence is particularly annoying with the DIR9001, given its hyping of SpAct jitter-reduction technology.

As far as ASRCs are concerned, they are capable of tremendous jitter suppression, particularly the AD1896. Just compare the transfer mask profile of the AD1896 to that of any DIR chip, or to that of the CS2300. Before anyone informs me otherwise: the fact that an ASRC converts residual jitter to an amplitude error does nothing to diminish the effectiveness of this technology. Here's a secret: all residual jitter, whatever its source, is converted to an amplitude error by the DAC chip.
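The time-to-amplitude conversion mentioned here can be demonstrated numerically. This is a generic sketch (the tone frequency and jitter magnitude are arbitrary illustrative values, not tied to any chip): sampling a sine at a slightly wrong instant produces an amplitude error approximately equal to the signal's slew rate times the timing error.

```python
import math

f = 1000.0      # 1 kHz test tone
A = 1.0         # amplitude
t = 0.0001      # nominal sample instant (100 us)
jitter = 1e-9   # 1 ns timing error on this sample

ideal = A * math.sin(2 * math.pi * f * t)
actual = A * math.sin(2 * math.pi * f * (t + jitter))

# first-order prediction: amplitude error ~ slew rate * timing error
slew = A * 2 * math.pi * f * math.cos(2 * math.pi * f * t)
predicted = slew * jitter

print(actual - ideal, predicted)
```

The two printed numbers agree to first order, which is the whole point: a timing error on the sampling clock is indistinguishable, at the output, from a small amplitude error proportional to how fast the signal is changing.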
 
Output clock for SRC chips is not related to input clock

Sonic_real_one:

You are mistaken. Look at the datasheet for the AD1896: the output side uses a totally separate clock which has nothing to do with the data-in clock. It can be something like the new Crystek clocks with ~1 ps jitter.
 
As far as ASRCs are concerned, they are capable of tremendous jitter suppression, particularly the AD1896. Just compare the transfer mask profile of the AD1896 to that of any DIR chip, or to that of the CS2300. Before anyone informs me otherwise: the fact that an ASRC converts residual jitter to an amplitude error does nothing to diminish the effectiveness of this technology. Here's a secret: all residual jitter, whatever its source, is converted to an amplitude error by the DAC chip.

If SRC passes incoming jitter through as amplitude errors, is there any real improvement by reclocking?
 
You are mistaken. Look at the datasheet for the AD1896: the output side uses a totally separate clock which has nothing to do with the data-in clock. It can be something like the new Crystek clocks with ~1 ps jitter.

That's right, Tom. An ASRC enables two independent clock domains. The jitter at the output side of an ASRC chip is determined by two factors: the residual jitter from the input-side clock after strong suppression by the ASRC, and the intrinsic jitter of the output-side clock generator.
 
If SRC passes incoming jitter through as amplitude errors, is there any real improvement by reclocking?

It depends on how you are generating the output side clock. If you are using the ASRC chip in master mode, where the ASRC itself is generating the word-clock and bit-clock signals going to the DAC, then, yes, you will want to synchronously re-clock those signals from a low jitter clock source. Which should be the same source serving as the master clock for the ASRC chip. If you instead are using the ASRC in slave mode, where you are dividing down a low jitter local clock source to generate word-clock and bit-clock, then the divider itself inherently serves to "re-clock" those signals for lowest intrinsic output side jitter.
 
That's right, Tom. An ASRC enables two independent clock domains. The jitter at the output side of an ASRC chip is determined by two factors: the residual jitter from the input-side clock after strong suppression by the ASRC, and the intrinsic jitter of the output-side clock generator.

I see. There IS real improvement, because incoming jitter is suppressed before its effect is imposed on the output. Of course, the 1 ps MCK has to be divided down to get SCK and WDCK, so jitter goes back up, especially on WDCK, which is divided the most and is also the only one that counts.

Does anybody have a high quality divide circuit that introduces low jitter?
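For reference, the divide ratios in question are simple arithmetic. The figures below are generic assumptions (a common 24.576 MHz audio master clock and I2S-style 64-bit frames), not taken from any particular datasheet:

```python
mck = 24_576_000      # common 24.576 MHz audio master clock (Hz)
fs = 48_000           # word clock = sample rate (Hz)
bits_per_frame = 64   # 2 channels x 32 bit slots, I2S-style

bck = fs * bits_per_frame   # bit clock: 3.072 MHz
wck_div = mck // fs         # master-to-word-clock divide ratio
bck_div = mck // bck        # master-to-bit-clock divide ratio
print(wck_div, bck_div)     # 512 8
```

So the word clock is the master clock divided by 512, and the bit clock the master clock divided by 8, for this assumed clock family.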
 
It depends on how you are generating the output side clock. If you are using the ASRC chip in master mode, where the ASRC itself is generating the word-clock and bit-clock signals going to the DAC, then, yes, you will want to synchronously re-clock those signals from a low jitter clock source. Which should be the same source serving as the master clock for the ASRC chip. If you instead are using the ASRC in slave mode, where you are dividing down a low jitter local clock source to generate word-clock and bit-clock, then the divider itself inherently serves to "re-clock" those signals for lowest intrinsic output side jitter.

I don't believe the AD1896 can operate in master mode with 192 kHz out. Other ASRCs from TI and AKM can, but to do so they skimped on the algorithm. The AD1896 needs more computation clocks than are available if the control clock is used to generate CKOUTs at 192 kHz. I would think AD's engineers considered it important enough to keep this implementation complication in order to achieve the best sound.
 
Tom,

Just to be certain this is clear: it's only the residual jitter, that which remains after strong suppression, which the ASRC converts to an amplitude error. This conversion of a jitter (time-domain) error into an amplitude (frequency-domain) error occurs with ALL residual jitter sources. With an ASRC it occurs just before D/A conversion; with PLLs, FIFO-based technologies, etc., it occurs after D/A conversion. Anyone who doubts that timing jitter manifests in the frequency domain as an amplitude error should ask themselves exactly how FFT-based jitter analyzers function.
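The FFT-analyzer point can be illustrated numerically. In this generic sketch (tone, jitter frequency, and jitter amplitude are all arbitrary illustrative values), sinusoidal jitter on the sampling instants of a pure tone shows up in the spectrum as sidebands offset from the tone by the jitter frequency, which is exactly what an FFT-based jitter analyzer measures:

```python
import cmath
import math

N = 4096
fs = 48000.0
f_tone = 3000.0   # test tone, falls exactly on FFT bin 256
f_jit = 1500.0    # sinusoidal jitter frequency, bin offset 128
tau = 50e-9       # 50 ns peak timing jitter

# sample instants perturbed by sinusoidal jitter
x = [math.sin(2 * math.pi * f_tone *
              (n / fs + tau * math.sin(2 * math.pi * f_jit * n / fs)))
     for n in range(N)]

def dft_mag(x, k):
    """Magnitude of DFT bin k, normalized by N (naive, for clarity)."""
    N = len(x)
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                   for n in range(N))) / N

k_tone = round(f_tone * N / fs)            # bin 256: the tone itself
k_side = round((f_tone + f_jit) * N / fs)  # bin 384: upper jitter sideband
print(dft_mag(x, k_tone), dft_mag(x, k_side))
```

The tone bin reads about 0.5 (a unit sine split across positive and negative frequencies), while the sideband sits at roughly half the phase-modulation index, 2*pi*f_tone*tau/2: timing jitter has become a measurable amplitude in the spectrum.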
 