Digital Receiver Chips

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Jitter is converted into amplitude errors @ reclocking / ASRC

Tom,

Just to be certain this is clear: it's only the residual jitter - that which remains after strong suppression - which ASRC converts to an amplitude error. This conversion of a jitter (time-domain) error into an amplitude (frequency-domain) error occurs with ALL residual jitter sources. With ASRC it occurs just before D/A conversion; with PLLs, FIFO-based technologies, etc., it occurs after D/A conversion. Anyone who doubts that timing jitter manifests in the frequency domain as an amplitude error should ask themselves exactly how FFT-based jitter analyzers function.
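As an illustration of the point above, here is a small sketch (my own, not from the thread; all values are made up and the jitter is grossly exaggerated for visibility): sampling a pure tone at sinusoidally jittered instants produces sidebands around the tone in an FFT, i.e. the time-domain timing error shows up as a frequency-domain amplitude error.

```python
# Illustrative sketch: sinusoidal sampling jitter turns into spectral
# sidebands. Values are assumptions chosen for clarity, not realistic.
import numpy as np

fs = 48_000          # sample rate, Hz
f0 = 1_000           # test tone, Hz
fj = 100             # jitter modulation frequency, Hz
jitter_pk = 100e-9   # 100 ns peak sinusoidal jitter (grossly exaggerated)

n = np.arange(8192)
t_ideal = n / fs
t_actual = t_ideal + jitter_pk * np.sin(2 * np.pi * fj * t_ideal)

x = np.sin(2 * np.pi * f0 * t_actual)   # tone sampled at jittered instants
spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
spectrum /= spectrum.max()              # normalize to the carrier peak

# Narrowband PM theory predicts sidebands at f0 +/- fj with relative
# amplitude ~ pi * f0 * jitter_pk (about -70 dBc for these numbers).
bin_hz = fs / len(n)
sideband = spectrum[round((f0 + fj) / bin_hz)]
print(f"sideband level: {20 * np.log10(sideband):.1f} dBc")
```

The sideband level scales with both the signal frequency and the jitter amplitude, which is why jitter matters most for high-frequency content.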

It's been a good day - I learned something new. I never thought about jitter being converted to amplitude errors during reclocking / ASRC; the phenomenon had simply never occurred to me. :)
 
I don't believe the AD1896 can operate in master mode with 192 kHz out. Other ASRCs from TI and AKM can, but to do so they skimped on the algorithm. The AD1896 needs more computation clocks than are available if the control clock is used to generate CKOUTs at 192 kHz. I would think AD's engineers considered it important enough to keep this implementation complication in order to achieve the best sound.

That's correct. In audio-port master mode, you have to run the AD1896's master clock at 256FS, which for a 192 kHz sample rate is 49.152 MHz - well beyond what the AD1896 will support at its MCLK input.
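The arithmetic behind that claim is simple enough to check:

```python
# Master-mode MCLK requirement discussed above: MCLK = 256 * Fs.
# At a 192 kHz sample rate this works out to 49.152 MHz, beyond the
# AD1896's rated MCLK input.
fs = 192_000
mclk = 256 * fs
print(mclk)   # 49152000 Hz, i.e. 49.152 MHz
```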
 
Sonic_real_one:

You are mistaken. Look at the data sheet for the AD1896. The output side uses a totally separate clock which has nothing to do with the incoming data clock. It can be something like the new Crystek clocks with ~1 ps jitter.

I am not.
Without a serious buffer memory, either you are locked onto the incoming clock (and its jitter), or you are locked onto the external clock and you will skip and repeat samples, based on the instantaneous difference between the incoming jittery clock and the precise one.
The AD1896 USES that buffer memory - the FIFO buffer. It is just rather small for attenuating low-frequency jitter: 512 words deep for both channels. How long would it take for half of that to empty?
Anyway, it's better than zero :)
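A back-of-envelope answer to the half-buffer question above (a sketch with assumed numbers: I'm assuming the 512 words split into 256 stereo frames, and a 100 ppm offset between the two clocks - the buffer only drains at the *difference* between the clock rates, not at the sample rate itself):

```python
# How long until half of the AD1896's FIFO empties? Assumptions:
# 512 words organized as 256 stereo frames, fs = 192 kHz, and a
# 100 ppm frequency offset between the incoming and local clocks.
fs = 192_000
slack_samples = 512 // 2 // 2        # half the buffer, per channel = 128
offset_ppm = 100
drain_rate = fs * offset_ppm * 1e-6  # net drift in samples per second
seconds = slack_samples / drain_rate
print(f"{seconds:.1f} s")            # ~6.7 s at 100 ppm
```

Against static frequency offset that slack vanishes in seconds, but against jitter (zero-mean timing wander) even a small FIFO provides useful low-pass filtering.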
 
Does anybody have a high quality divide circuit that introduces low jitter?

If you want divide by 2, one of these ought to do a reasonable job. One of the main sources of jitter in digital logic is ground bounce, so the package size needs to be small to reduce lead inductance.

http://focus.ti.com/lit/ds/symlink/sn74lvc1g80.pdf

Why do you think the word clock is the one that counts? I think DAC chips these days tend to synchronise word clock with MCLK internally.
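The divide-by-2 suggestion above amounts to a toggle-connected D flip-flop: feed the inverted output back to D, and Q changes state on every rising clock edge, halving the frequency. A tiny behavioral sketch (illustrative only, not a substitute for the hardware considerations in the datasheet):

```python
# Behavioral model of a toggle-connected D flip-flop used as a
# divide-by-2: Q flips on each rising clock edge, so the output
# runs at half the input frequency.
def divide_by_two(clock_edges):
    q = 0
    out = []
    for _ in range(clock_edges):   # one rising edge per iteration
        q ^= 1                     # D = not Q, latched on the edge
        out.append(q)
    return out

print(divide_by_two(8))            # [1, 0, 1, 0, 1, 0, 1, 0]
```

A single-gate package keeps the ground return short, which is the lead-inductance point made above.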
 

Need to divide by 2 for BCK and by 128 for WDCK / LRCK. Currently using two '161s, but there must be a better way.

The PCM1704 K grade (whose overall implemented performance has not been surpassed) does not use MCK. Output timing is determined by LRCK, and that is where jitter will manifest itself. Jitter on the other clocks is irrelevant unless it is so severe as to cause data errors. (Never happens.)
 
I see. There IS a real improvement, because incoming jitter is suppressed before its effect is imposed on the output. Of course, the 1 ps MCK has to be divided to get SCK and WDCK, so jitter goes back up - especially on WDCK, which is divided the most and is also the only one that counts.

Does anybody have a high quality divide circuit that introduces low jitter?

I'm currently dividing MCLK, using 74VHC161 synchronous counters, to derive both BCLK and WDCK for an AD1896 based DAC of my own design. It's hard to know what jitter we are, or are not, ridding ourselves of without the proper test equipment, although I have found certain improvements to be immediately audible. One tweak I've found to make a big improvement is to damp all digital signal lines with series resistors of significant values (on the order of 100 to 470 ohms; exact values depend). There is much overshoot and undershoot (ringing) evident on such undamped signal lines, visible with a good scope. My Tektronix DSO read 4.5 Volts pk-pk on one particular line driven by a 3.3 Volt supply chip! I damped it with 100 ohms. The improvement (a more at-ease sound quality) was immediately audible.
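For reference, the classical starting point for sizing such a series resistor is source termination: the resistor plus the driver's output impedance should roughly match the trace's characteristic impedance. A sketch with assumed values (the impedances here are illustrative guesses, not measurements; the larger 100-470 ohm values mentioned above deliberately overdamp, trading edge speed for less ringing):

```python
# Source-termination sizing sketch. Both impedance values below are
# assumptions for illustration; real boards need measurement or
# stackup calculation.
z0 = 60          # assumed trace characteristic impedance, ohms
r_driver = 20    # assumed CMOS driver output impedance, ohms
r_series = z0 - r_driver
print(r_series)  # matched starting point; tune on the scope from here
```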

One other comment, regarding the PCM1704. It seems that the critical clock signal for the PCM1704 may not be WDCK, but rather BCLK. I know, this is controversial, but it appears that while WDCK is used to latch each sample word into the input register, BCLK may be used to actually trigger the conversion. Such is vaguely suggested, but not explicitly stated, on page 7 of the datasheet under "Stopped Clock Operation". I would give as much care to the handling of BCLK as to WDCK, just to be safe.
 
I have seen Rs used in all the signal lines to/from all the ICs before (some as high as 2K), and set up a proto board with provision for them. Currently I have 100 ohms out of the DF1706 to the PCM1704s (AD1896 @ 192 out > DF1706 @ 4x @ 768 out > PCM1704). I could not hear any difference with/without them. I'm using AC161s - maybe I should try VHC161s? Why would VHC be the better choice?

I was not aware of the inference contained in "Stopped Clock Operation". Thanks. Anyway, low jitter on all CKs would be good design practice.

I could definitely hear an improvement when I installed one of the Crystek 1 ps jitter clocks.
 
Hi Tom,

HC series parts are notoriously noisy. VHC series are supposed to be quieter, although I haven't experimentally verified this. Perhaps the 74HCs were too noisy to let you hear the improvement from those damping resistors. Give the VHCs a try and let us know.

Right page, wrong section of the PCM1704 datasheet. Following is the quote I was alluding to, under "Basic Operation" on page 7: "The serial-to-parallel data transfer to the DAC occurs on the falling edge of WCLK." Just as we would expect. Then this: "The change in the output of the DAC occurs at the rising edge of the 2nd BCLK after the falling edge of WCLK." I'd say that last sentence rather clearly states that the instant of conversion is determined by the rising edge of BCLK - two whole cycles after the edge of WCLK.
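One way to picture the quoted behavior (this is just my reading of the datasheet text above, with made-up time units): find the WCLK falling edge, then step forward to the second BCLK rising edge after it - that is where the output changes, which is why its timing rides on BCLK rather than WCLK.

```python
# Timeline sketch of the quoted datasheet behavior: the DAC output
# changes on the rising edge of the 2nd BCLK after WCLK falls.
# Tick values are arbitrary illustration units.
def conversion_instant(wclk_fall_tick, bclk_rising_ticks):
    """Return the BCLK rising edge on which the output changes."""
    after = [t for t in bclk_rising_ticks if t > wclk_fall_tick]
    return after[1]                       # the 2nd rising edge after the fall

bclk_rises = [0, 10, 20, 30, 40, 50]      # BCLK rising edges
print(conversion_instant(5, bclk_rises))  # -> 20
```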
 
Hi Tom,
"The serial-to-parallel data transfer to the DAC occurs on the falling edge of WCLK." Just as we would expect. Then this: "The change in the output of the DAC occurs at the rising edge of the 2nd BCLK after the falling edge of WCLK." I'd say that last sentence rather clearly states that the instant of conversion is determined by the rising edge of BCLK - two whole cycles after the edge of WCLK.
That means that the DATA is sampled on the rising edge of the 2nd BCLK. That is the IMPORTANT moment.
After that the DAC does the conversion, and there is a wait period necessary for settling the internal switches. At the end of that period (measured by 2 BCLK periods), on a rising BCLK, the converted data is sent to the analog output.
In my opinion BCLK is the one that is affected by jitter; WCLK is used just for internal timing of the settling - it could just as well be an RC oscillator, like on the TDA1541...
 
Actually, I had skimmed over this part of the data sheet and missed the full implications. This seems like good news, because BCK is less likely to have excess jitter, being divided down only by 2.

So long as you dedicate a chip just to that function. If you're using the synchronous 4-bit counters for the divide-by-2 function, then they'll suffer from more ground bounce, owing to having more outputs and more logic on the chip - so the divide-by-2 output will have just as much jitter as the divide-by-16.
 
What's nice about the PCM1704 conversion instant being determined by BCLK is that it presents the opportunity to drive BCLK (headed to the DAC chip) directly from the local master oscillator, without any intervening dividers. This enables the lowest jitter means of clocking the DAC chip. WCLK can be created by dividing BCLK (which is the local master clock) by 64. Of course, this assumes that your design doesn't require any clock signals faster than BCLK.
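The clocking scheme described above reduces to simple arithmetic (the 192 kHz word rate here is just an example value for illustration):

```python
# Direct-BCLK clocking sketch: the master oscillator drives BCLK with
# no intervening dividers, and WCLK is derived by a single divide-by-64.
fs = 192_000          # example word rate
bclk = 64 * fs        # master oscillator frequency = BCLK = 12.288 MHz
wclk = bclk // 64     # one divider stage produces WCLK
print(bclk, wclk)     # 12288000 192000
```

The appeal is that the jitter-critical clock (BCLK, per the datasheet reading above) never passes through a divider at all.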
 
DF1706 needs MCK, so this optimization won't work for me.

While I am here, some observations:

I tried 96 KHz out of AD1896, then 8X @ DF1706
I tried 192 KHz out of AD1896, then 4X @ DF1706

By far, 192 KHz out of AD1896 and 4X @ DF1706 had superior high-freq clarity.
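For context, both configurations land on the same final rate into the PCM1704; only the split of work between the ASRC and the digital filter differs:

```python
# The two configurations compared above both reach 768 kHz at the
# DF1706 output; the difference is where the rate multiplication happens.
configs = {"96 kHz x 8": 96_000 * 8, "192 kHz x 4": 192_000 * 4}
print(configs)   # both 768000 Hz
```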
 
You can't do that. BCLK needs to be in sync with the data, otherwise you will lose samples at some times and add samples at other times, periodically.

Wrong - of course you can do it. It's no trouble to keep BCLK, WCLK, and DATA synchronized if your D/A conversion circuit topology was planned with that in mind from the start. Seriously now, do you think through the consequences of such flat declarations before posting them?
 
If the incoming SR to AD1896 is 192 and output is also 192, will there be increased distortion?

Statistically, it's almost impossible for them to be precisely the same. Even if they were, I can't imagine why it would increase distortion. If anything, I think distortion would get lower, but it's already negligible.

If I use bypass mode, then the reclocking would be lost. Would it be better to use a non-standard output SR - say, a 24 MHz MCK instead of 24.576 MHz?

Benchmark uses a non-standard SR output, IIRC around 110 kHz, for all input SRs to their DAC. They chose it because the DAC chip (AD1853) works best at that frequency.
 