Asynchronous Sample Rate Conversion

Nice work, werewolf!

Quick question: How does one optimally interface an ASRC to the DAC chip? I guess this can be rephrased as: do the output lines, particularly BCLK, L/RCLK and MCLK, have low jitter, or would they benefit from going through some sort of buffer?

Petter
 
Thanks Petter! Well I hope some others will help answer your question as well, but let's look at the data sheet for the AD1896 (it's a bit easier to understand than ... some others ;) )

The 1896 needs a standard, external S/PDIF receiver connected to its INPUT port ... like the CS8412/14. The AD evaluation board uses the CS8414.

Now your question really pertains to the OUTPUT port, because that's where the DAC is connected. You will certainly supply an MCLK to the 1896 ... this will be derived from a local, clean crystal oscillator ... and that same MCLK (or a simple divided version) will be supplied to your DAC. You have a choice on the other two clocks associated with the output port ... SCLK, LRCLK. If you configure the 1896 output port as a MASTER, it will supply these two clocks to the DAC (through a simple internal divider from MCLK). OR, you can configure the output port as a SLAVE, in which case you must externally supply SCLK & LRCLK ... which you will probably generate by externally dividing down the MCLK.

Now I believe that configuring the output port as a SLAVE is the best choice. That way, YOU control all the output port timing, probably with the same clocks you supply to the DAC. In other words, you generate MCLK from a local, ultra-clean crystal oscillator. And you use some of this forum's favorite techniques :) for dividing down the MCLK (with fast logic) to generate LRCLK & SCLK ... which you send to the 1896 and the DAC. That's the beauty of ASRC ... YOU control the output port & DAC timing with local crystal oscillator-based clocks. No recovered clocks necessary for the output port & DAC.
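
To make the divider arithmetic concrete, here is a minimal Python sketch of the slave-mode clock chain, assuming a typical 256 x Fs MCLK and a 64 x Fs bit clock; check the ratios your DAC and the 1896 output port actually require:

[code]
# Sketch of the slave-mode clock chain described above.
# Assumes MCLK = 256 x Fs and a 64 x Fs bit clock; verify the
# ratios your DAC and the AD1896 output port actually need.

FS_OUT = 96_000          # output sample rate, Hz
MCLK = 256 * FS_OUT      # local crystal oscillator: 24.576 MHz
SCLK = MCLK // 4         # 64 x Fs bit clock (MCLK divided by 4)
LRCLK = MCLK // 256      # word clock = Fs (MCLK divided by 256)

print(f"MCLK  = {MCLK / 1e6:.3f} MHz")    # 24.576 MHz
print(f"SCLK  = {SCLK / 1e6:.3f} MHz")    # 6.144 MHz
print(f"LRCLK = {LRCLK / 1e3:.1f} kHz")   # 96.0 kHz
[/code]

In hardware the same ratios come from a synchronous divider (fast logic) running straight off the MCLK oscillator.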

Hope that makes sense? Users, PLEASE add your thoughts !!!
 
I must admit that this thread is way above my head. But I've gone through it, because I've recently acquired the AD1896 evaluation board. This board uses the Crystal CS8414 input receiver, the AD1896 ASRC, the AD1852 DAC, and also the Crystal CS8404 output transmitter, if you want to simply upsample and send to another DAC.

My question is this: I want to improve the board, mainly the clock circuitry and power supply, I think. The board has a lot of flexibility for upsampling, bit lengthening, etc. I do notice that some schemes sound better than others. For example, having the AD1852 in 192kHz mode does not sound good (there is a weird "swirling" sound), while 96kHz is fine. I've also listened to it in 16 bits dithered, and in 24 bits. The 24 bits has more air, but not as firm an image as 16 bits with dither.

Any suggestions would be welcome!

Ron
 
Hi werewolf,

Thank you for your great explanations. I would like to know something more about master clock requirements for ASRCs running in slave mode. As I understand it, when using either the AD1896 or the SRC4192 in slave mode, the master clock can be asynchronous to both Fs_in and Fs_out, since it clocks only the internal rate estimation logic. The only requirement is that the master clock be greater than 138 x min(Fs_in, Fs_out) for the AD1896 and 128 x min(Fs_in, Fs_out) for the SRC4192.
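
As a quick numeric check of those minimums (a Python sketch using the figures quoted above; please confirm them against the current datasheets):

[code]
def min_mclk(fs_in, fs_out, factor):
    """Minimum master clock per the quoted requirement:
    factor x min(Fs_in, Fs_out)."""
    return factor * min(fs_in, fs_out)

# Example: 44.1 kHz in, 192 kHz out
print(min_mclk(44_100, 192_000, 138) / 1e6)  # AD1896:  ~6.086 MHz
print(min_mclk(44_100, 192_000, 128) / 1e6)  # SRC4192: ~5.645 MHz
[/code]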

I would like to know how jitter on the master clock affects the precision of the rate estimation logic, and subsequently the precision of the output samples. Is there any advantage in running the master clock any higher than the minimum requirement?

Best Regards,

Jaka Racman
 
Jaka - my understanding of the AD1896 is that yes, the master clock can indeed be asynchronous to the input port clocks & output port clocks. In this case the master clock will be used for internal calculations, but jitter on this clock should not matter ... the important counters used for polyphase selection are clocked by the respective port (Fs_in, Fs_out) clocks. And even jitter on these clocks will be heavily filtered by the averaging process of the counters (although I don't think the filtering process is the same for Fs_in and Fs_out clocks).
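
To illustrate why heavy averaging makes the ratio estimate insensitive to period jitter, here is a toy Python sketch. It is NOT the AD1896's actual circuit, just a first-order averaged estimator model:

[code]
import random

# Toy model: estimate Fs_in/Fs_out from jittery instantaneous periods,
# then average with a long time constant.  Not the AD1896's actual
# algorithm, just a sketch of why slow averaging filters out jitter.

FS_IN, FS_OUT = 44_100.0, 48_000.0
JITTER = 1e-3        # 0.1% peak period jitter on the measured clock
ALPHA = 1 / 4096     # averaging coefficient (long time constant)

estimate = 1.0
for _ in range(200_000):
    period_in = (1 / FS_IN) * (1 + random.uniform(-JITTER, JITTER))
    raw_ratio = (1 / FS_OUT) / period_in        # instantaneous, jittery
    estimate += ALPHA * (raw_ratio - estimate)  # first-order low-pass

print(f"true ratio {FS_IN / FS_OUT:.6f}, estimate {estimate:.6f}")
[/code]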

However, this does raise an IMPORTANT issue, one to which we must pay close attention in any ASRC application. The issue is simply this ... by definition, the ASRC application will have ASYNCHRONOUS clocks running in our system. It's the job of the ASRC chip to manage the DIGITAL interface correctly ... but what about ANALOG circuit sensitivities? The problem is that any two asynchronous clocks are likely to have some high-order harmonics that differ in frequency by less than 20kHz ... and if these harmonics couple into analog circuitry, through electrical or magnetic coupling paths, those harmonics can cause ... through intermodulation ... beat frequencies in the audio band :(
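
A quick way to see the problem is to search the harmonics of two asynchronous clocks for pairs that land within the audio band of each other. A Python sketch, with arbitrary example frequencies (substitute your own clocks):

[code]
# Search harmonics of two asynchronous clocks for pairs whose
# difference falls inside the audio band: candidate intermodulation
# products.  Example: a 256 x 96kHz output-side crystal against a
# 512 x 44.1kHz input-side clock that happens to run 5 ppm high.

f1 = 24_576_000.0                # output-side crystal
f2 = 22_579_200.0 * (1 + 5e-6)   # input-side clock, 5 ppm high
AUDIO = 20_000.0                 # audio band, Hz
MAX_HARMONIC = 200               # how far up the harmonic series to look

for m in range(1, MAX_HARMONIC + 1):
    n = round(m * f1 / f2)       # nearest harmonic of f2
    if n < 1:
        continue
    beat = abs(m * f1 - n * f2)
    if beat < AUDIO:
        print(f"{m} x f1 vs {n} x f2: beat at {beat / 1e3:.2f} kHz "
              f"(harmonics near {m * f1 / 1e9:.2f} GHz)")
[/code]

With these numbers the 147th/160th harmonics (up around 3.6 GHz) beat at roughly 18 kHz; which pairs matter in practice depends entirely on your actual clocks and coupling paths.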

So one needs to be careful to practise good noise management techniques:

1. At the board layout level, make sure analog receiving & digital transmitting loops are small in area to minimize magnetic interference: close local decoupling, good use of ground planes, etc.

2. At the design level, minimize the number of asynchronous clocks! In other words, although the AD1896 can support THREE (3) asynchronous clocks (two ports and one master), I would seriously consider using a master clock that is synchronized to the output port, and just minimize the amount of circuitry that is clocked by input port clocks ... work hard to make sure that most of your system is clocked by Fs_out-synchronized clocks!

This issue must really be added to the disadvantage column of ASRC. It can be managed, but just remember that analog circuits are not particularly fond of asynchronous clocks running around the system :)

Do you guys know that oversampled, delta-sigma type converters themselves are essentially ASYNCHRONOUS oscillators ... even when clocked by perfectly clean, synchronous clocks? This is the reason for the EXTREME sensitivity of the VREF pin to external noise coupling ... perhaps a topic for yet ANOTHER thread :)
 
Hi werewolf,

from your answer it seems that the SRC4192 is the more suitable chip when used in slave mode with 192kHz Fs_out. One can use a 24.576MHz system clock, which is 128 x Fs_out, while the AD1896 would require a 27MHz crystal used with its internal oscillator. Now I see the reason why the internal oscillator is omitted in the SRC4192. It also seems that there is no advantage in using a 256 x Fs_out master clock.

Thank you for your explanation.

Best regards,

Jaka Racman
 
Werewolf, I really appreciate your effort! Now I'm able to back up my subjective impressions with maths, which helps in discussions with narrow-minded people.

For best jitter performance in an ASRC environment, it is important to know which signal actually "switches" the DAC's analogue output to the next sample. This very signal should be as clean as possible.

For the Burr Brown PCM1704, it is the bit clock signal. This comes in handy, as we can use it as the master clock for the ASRC too. Thus, the ASRC can run in master mode without performance degradation, and no external dividers are necessary. The clean, local clock should be physically close to the DACs (shielded, of course, and fed by a separate supply) and drive their bit clock inputs directly.

One caveat, though: the DAC bit clock might have to be inverted relative to the ASRC's master clock. The Kwak clock, with a high-speed comparator at its output, provides both inverted and non-inverted clock signals.

A (now outdated) sample scheme incorporating CS8420, DF1704 and PCM1704:
 

Attachments

  • dac-clock.zip
    23.5 KB
werewolf, this has been an impressive thread.

As far as PLLs are concerned, the ones implemented in the 8412/14 are just barely acceptable. Normally we think of a PLL as having some jitter-filtering capability; in these parts that comes only from the long time constant and slow loop response. Therefore, very precise clocks were designed to overcome the problems that were not resolved in the PLL. Of course, some of the problems are caused by transmission line mismatches.

An improvement in the interface and in the clock recovery and PLL method is needed to address these problems. I would think that one such improvement would be to implement a dual PLL with high divide-by-N flops to reduce system jitter. Then you would slave your ASRC to the second PLL.

The dual PLLs would have an improved phase detector, filter and VCOs. For example, the first PLL would have a wide capture range with a slow response; it would use a VCO tuned with a cap having a low Q, but not too low. The second PLL, with a narrow capture range and a high-Q VCXO loop, would divide the clock by 8 or more. This would improve system jitter by a bunch.
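
To put a rough number on "a bunch": if each loop is modeled as a first-order low-pass on input jitter, the cascade attenuation can be sketched in Python like this (the bandwidths are invented for illustration, not measured values for any real part):

[code]
import math

# Illustrative cascade of two first-order jitter transfer functions:
# a wide, slow acquisition loop followed by a narrow VCXO cleanup
# loop.  Bandwidths are made up for the example.

def jitter_gain_db(f, bw):
    """Magnitude (dB) of a first-order low-pass at offset f, bandwidth bw."""
    return -10 * math.log10(1 + (f / bw) ** 2)

f_offset = 10_000        # jitter offset frequency of interest, Hz
bw1, bw2 = 5_000, 100    # loop bandwidths: acquisition PLL, VCXO PLL

total = jitter_gain_db(f_offset, bw1) + jitter_gain_db(f_offset, bw2)
print(f"attenuation at {f_offset} Hz: {total:.1f} dB")
# roughly -7 dB from the first loop plus -40 dB from the VCXO loop
[/code]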

We implemented a system with a similar PLL and data clock recovery for biphase-mark encoded data at TI. We could put well over 60 systems with these interfaces running in series without significant build-up of jitter.

Also, what are the problems when cascading multiple ASRCs?

:)
 
AMT-freak said:
For the Burr Brown PCM1704, it is the bit clock signal.


This might not be entirely correct. When I spoke with one of the key people at Analog Devices a few years back (subject: AD1853), I assumed that jitter was only interesting on MCLK.

To my surprise, I was challenged on that, and the phrase "jitter rejection" (or "jitter sensitivity") on all input lines (MCLK, L/RCLK, BCLK, Data) was pulled out of a hat.

Thus I believe it would be unwise to assume that MCLK is the only DAC input sensitive to jitter.

Petter
 
From the PCM1704 datasheet:
The audio interface of the PCM1704 accepts TTL-compatible input levels. The data format at the DATA input of the PCM1704 is Binary Two's Complement, with the most significant bit (MSB) being first in the serial input bit stream. [...] Any number of bits can precede the 24 bits to be loaded since only the last 24 bits will be transferred to the parallel DAC register after WCLK has gone LOW. Audio data is supplied to the DATA input. The bit clock is used to shift data into the PCM1704 and is supplied to BCLK. All DAC serial input data bits are latched into the serial input register on the rising edge of BCLK. The serial-to-parallel data transfer to the DAC occurs on the falling edge of WCLK. The change in the output of the DAC occurs at the rising edge of the 2nd BCLK after the falling edge of WCLK.

The last sentence is important. As long as BCLK comes from a local low-jitter master clock and the data stream is slaved to it (by clocking the transport from this clock or by using an ASRC), we have no problems with jitter.

The PCM1704 datasheet clearly states that the BCLK line is used to clock a new sample out. I don't see how jitter on the other clock or data lines could affect the timing of the analog output, as they are only used to internally collect the serial data bits and transfer them to/from parallel registers. In fact, these signals can be really jittery; it won't affect the overall performance as long as they are clean enough not to corrupt the data (something must really go wrong before that happens).

Things might be different for AD DACs, I didn't check the data sheets as I don't use them.

jewilson, I don't get your point?
 
AMT-freak,
Whilst the section of the PCM1704 datasheet you quote backs your position on bitclock, the paragraph on "Stopped Clock" operation undermines it. It clearly states, and experience shows, that bitclock need not run for longer than the duration required to clock the data into the serial register.

ray.
 
In the PCM1704 the BCLK line is the only clock that matters wrt jitter. WCLK is simply used to enable the parallel latching of data.

To get technical about it, only one BCLK edge per sample is critical for jitter: the aforementioned second rising edge after WCLK goes low. All other clock edges are just used for reading in data.

You can actually tell from the timing diagram in Figure 2 of the PCM1704 datasheet that BCLK is the true clock of the chip. Note that the timing specs for WCLK and DATA are all setup/hold times relative to BCLK. This implies that only BCLK drives edge-sensitive devices.
 
AMT-freak said:
jewilson, I don't get your point?

The point I was making is that when you have jitter on the S/PDIF clock-and-data stream, which is your master clock plus data, the receiver and PLL will not remove it all. Therefore, we have jitter on the bit clock, DATA and WCLK. The change in the output of the DAC occurs at the rising edge of the 2nd BCLK after the falling edge of WCLK. So when the rising edge of the bit clock moves from jitter, it will cause the output samples to jitter by some small amount.

The cause of the jitter can be the transport clock, poor interface design, crummy logic, an impedance mismatch, noise, or just a sloppy PLL that can't filter jitter.
 
jewilson said:
The point I was making is that when you have jitter on the S/PDIF clock-and-data stream, which is your master clock plus data, the receiver and PLL will not remove it all. Therefore, we have jitter on the bit clock, DATA and WCLK. The change in the output of the DAC occurs at the rising edge of the 2nd BCLK after the falling edge of WCLK. So when the rising edge of the bit clock moves from jitter, it will cause the output samples to jitter by some small amount. The cause of the jitter can be the transport clock, poor interface design, crummy logic, an impedance mismatch, noise, or just a sloppy PLL that can't filter jitter.

jewilson, we have some misunderstanding here. I was talking about having a local low-jitter clock in the DAC and a setup in which the incoming data stream is slaved to this local master clock. A setup like a slaved transport or a DAC with ASRC.

rfbrw said:
Whilst the section of the PCM1704 datasheet you quote backs your position on bitclock, the paragraph on "Stopped Clock" operation undermines it. It clearly states, and experience shows, that bitclock need not run for longer than the duration required to clock the data into the serial register.

rfbrw, I overlooked this. If it is true, then my above statement about the PCM1704 is definitely wrong; I will have a closer look. The rest is still valid: only the clock lines which affect the analog output's timing are relevant for jitter performance.

Let's stop polluting this highly informative thread with chip-specific debates ;)
 
mikewu99 said:
AMT-freak:

The stopped-clock paragraph in no way undermines your statement regarding BCLK. The PCM1704 is not an oversampling DAC - if you stop the clock, the DAC value just stays constant.

Alas, mikewu99, despite the datasheets, you clearly do not understand "Stopped Clock" operation.
DACs with built-in OS filters are NOT fully static. The PCM1704, like its ancestors, has no MCLK or built-in OS filter and IS fully static; if you were so inclined, you could single-step it. The DF1700/SM5813 only operates in, and the SM5842/3/7 all support, the "stopped clock" or burst-clock mode, as NPC calls it. Analog Devices also uses this mode, in AN207, to connect the SAA7220 to the AD1856. Even the TDA1541 seems to support it, and I've used it to bypass the DF1700 in my AN DAC3.
It's very simple: BCLK loads the serial input register and then it stops.

ray
 
Ray, your points are valid; however, I don't think they are relevant in the setup I described.

The PCM1704 is definitely a fully static device. That means the change in the analogue output signal is initiated by a change in a clock line's logic level, no matter which mode it is working in. There is no delay except the unavoidable propagation delay of the chip's internal logic.

The PCM1704 is controlled by what you could call an "internal state machine". One output of this state machine advances the analogue output to the next sample, and the state machine is driven from outside only by the clock lines. In "stopped clock" (burst) mode, WCLK going low steps the output while BCLK remains low. In "normal" mode, the second rising edge of BCLK (after WCLK has gone low) steps the output. Thus, there is only ONE clock line which actually steps the output, the other having remained static for some time before this event. (Things would be different if they were changing their levels simultaneously to step the output.) There is only ONE clock line which matters for jitter performance.
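
A toy Python model of that state machine (my reading of the datasheet text quoted earlier, not vendor code) makes the point explicit: in normal mode, exactly one BCLK edge per sample fires the output update.

[code]
# Toy model of the PCM1704 update logic in "normal" mode, as described
# in the datasheet quote above: the analogue output steps on the 2nd
# rising BCLK edge after WCLK falls.  A sketch, not vendor code.

class PCM1704Model:
    def __init__(self):
        self.edges_since_wclk_fall = None   # None = counter not armed
        self.output_updates = 0

    def wclk_falling_edge(self):
        self.edges_since_wclk_fall = 0      # serial-to-parallel transfer; arm

    def bclk_rising_edge(self):
        if self.edges_since_wclk_fall is None:
            return                          # just shifting in data bits
        self.edges_since_wclk_fall += 1
        if self.edges_since_wclk_fall == 2:
            self.output_updates += 1        # THE jitter-critical edge
            self.edges_since_wclk_fall = None

dac = PCM1704Model()
for sample in range(3):
    for _ in range(24):
        dac.bclk_rising_edge()   # clock in 24 data bits
    dac.wclk_falling_edge()      # latch serial data to the DAC register
    dac.bclk_rising_edge()       # 1st rising edge after WCLK falls
    dac.bclk_rising_edge()       # 2nd rising edge: output steps here

print(dac.output_updates)        # 3, exactly one update per sample
[/code]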

In the setup I described I used a DF1704, which may or may not work in burst mode; honestly, I don't know and I'm too lazy to look it up. However, the PCM1704 is getting a continuous BCLK directly from the master clock, so I'm running it in "normal" mode and BCLK is the only signal which must be as jitter-free as possible. I don't see anything wrong with my posts.

I suggest the moderators split and move the last posts to a new thread. I really don't want to pollute this valuable information here.
 