Building a proper input & oversampler (i.e. front-end) for PCM1704

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
So I've done some reading and have decided to go the oversampling route, please don't try to convince me otherwise :D

That said, I wanted to use the DIR9001 followed by the DF1704, but I don't want to use any of the clk signals they generate, to avoid jitter. I want to take the data from the DIR and generate new clean SCKO, BCKO and LRCKO signals, feed that into the DF1704, and then generate new clean BCKO and WCKO signals that feed into the dac chips (PCM1704).

I would only support fs of 44.1k and 96k; I don't care about other input rates. How hard would it be to do this and what would be involved?

I can easily get a clean clock that generates 44.1kHz for example (which would be my new LRCKO), then use a precise PLL to generate SCKO and BCKO, but how would I synchronize that with the LRCKO generated by the DIR? (Similarly the same process applies for the clks out of the DF1704, but again synch would be the issue.)

Thanks for your feedback

P.S. I don't have FPGA experience so I don't want to go down that route, hence why I want to use DIR and DF
 
Your 'clean' clock will only be as clean as what the DIR9001 gives you from its on-chip PLL. Not very clean, in my experience. To really clean up jitter you'd need to implement a secondary PLL, either an analog or a digital one. It's fairly pointless to clean up the DF1704's jitter without addressing the jitter from the DIR9001.
 
If you're going to use an external clock then your transport ( or whatever you're using as your source) will also need to be slaved to the same clock. Is this going to be the case?

Not necessarily, because my idea is to have the DIR deal with the source, extracting the data, and from there use cleaner clocks than what the DIR puts out (same clk frequencies as whatever the DIR outputs, just cleaner). That said, I may need to store the data in memory and then clock it out, but there's got to be a way to just replace the clks the DIR is outputting with new ones at the same frequency and synched (the synched part is where I don't know what to do).
 
You'll need some way to regenerate the original clock though, even if you store the data in memory and then clock it out. Having memory means you don't need to get so close to the original clock frequency (you get some slack by virtue of the storage), but you'll still need some way to approximate to it. That's the function of a PLL - with a memory buffer you'll probably be implementing some kind of digital PLL.

<edit> What you want to do is something like what's described on the last page of this paper by Dan Lavry : http://www.lavryengineering.com/white_papers/jitter.pdf
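For concreteness, the buffer-plus-digital-PLL idea can be simulated in a few lines. This is my own illustrative sketch, not the circuit from the Lavry paper; the FIFO depth, loop gain and frequency offset are all made-up numbers.

```python
# Toy digital PLL: the FIFO fill level acts as the phase detector, and a
# proportional correction steers the local output rate until it matches
# the incoming (jittery) rate.  All constants here are illustrative.

def simulate_digital_pll(n_samples=100_000, f_in=44_100.0,
                         f_nominal=44_103.0, depth=64, gain=50.0):
    fill = depth / 2.0          # start the FIFO half-full
    f_out = f_nominal           # steerable local sample rate, Hz
    for _ in range(n_samples):
        fill += f_in / f_out - 1.0         # net fill change per output sample
        error = fill - depth / 2.0         # deviation from half-full
        f_out = f_nominal + gain * error   # steer the local rate
    return f_out

print(round(simulate_digital_pll(), 3))    # 44100.0 once the loop settles
```

The loop converges to the source's average rate while the FIFO absorbs the cycle-to-cycle jitter; a real implementation would steer a VCXO rather than a software variable.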
 
The only way to reconstruct the signal and get rid of the jitter is to have a buffer memory between the incoming signal and the DAC clock. That memory will require some DSP for the read/writes. Some proper upsampling+oversampling with dithering, linear interpolation (or other) is possible at this stage.

Denon does it with "AL24 Processing Plus", Harman Kardon does it with "Real-time Linear Smoothing" (RLS III, IV), CA, in the Azur line, calls it ATF (licensed from Anagram) and Q5... usually done with something from the Analog Devices Blackfin DSP family or the TI TMS320 DSP family.
 
You'll need some way to regenerate the original clock though, even if you store the data in memory and then clock it out. Having memory means you don't need to get so close to the original clock frequency (you get some slack by virtue of the storage), but you'll still need some way to approximate to it. That's the function of a PLL - with a memory buffer you'll probably be implementing some kind of digital PLL.

<edit> What you want to do is something like what's described on the last page of this paper by Dan Lavry : http://www.lavryengineering.com/white_papers/jitter.pdf

If I use a pure memory solution, I would probably set flag bit(s) along with the data to indicate what the original fs was, but that would involve DRAM (or some other high-speed memory) along with at least a micro-controller to control the memory timings, addressing, etc; that's getting too complicated. So, I'm after simply splicing/replacing the clock outputs that the DIR generates and outputs (three clks: LRCKO, which is nothing more than fs extracted from the SPDIF signal, plus SCKO and BCKO; the last two can be generated via a PLL off of the new stable fs that generates LRCKO)

by the way, I appreciate everyone's input and feedback!
 
The only way to reconstruct the signal and get rid of the jitter is to have a buffer memory between the incoming signal and the DAC clock. That memory will require some DSP for the read/writes.

DSP isn't necessary - no processing (the 'P' in DSP) of the data is called for. A fairly simple microcontroller could handle it. Even a PIC or AVR could do it, given a big enough buffer memory size and appropriate I/O interfaces (I2S or similar).
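As a sketch of that "simple microcontroller" job (my own toy model, not any specific part): samples are written on the recovered clock and read on the clean clock, with nothing but buffering in between.

```python
# Toy model of 'bit perfect' re-clocking: audio data arrives on the
# jittery recovered clock and leaves on the clean local clock.  No
# processing of the samples takes place, only buffering.

from collections import deque

class ReclockBuffer:
    def __init__(self, depth=64):
        self.fifo = deque(maxlen=depth)  # on overrun, oldest sample is dropped
        self.last = 0                    # sample repeated on underrun

    def write(self, sample):             # called at the recovered rate
        self.fifo.append(sample)

    def read(self):                      # called at the clean local rate
        if self.fifo:
            self.last = self.fifo.popleft()
        return self.last                 # underrun: repeat the last sample

buf = ReclockBuffer()
for s in (10, 20, 30):
    buf.write(s)
print([buf.read() for _ in range(4)])    # [10, 20, 30, 30]
```

The over/underrun handling here is the crude part; avoiding it entirely is exactly what the clock-tracking discussion below is about.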

Some proper upsampling+oversampling with dithering, linear interpolation (or other) is possible at this stage.

That kind of stuff would probably call for a DSP or DSC.
 
...A fairly simple microcontroller could handle it. Even a PIC or AVR could do it, given a big enough buffer memory size and appropriate I/O interfaces (I2S or similar).
True, but I have the feeling that a usual microcontroller won't have enough horsepower to do all that "cleaning", upsampling and oversampling... What's the point of reclocking and buffering if you don't do some linear/bicubic interpolation and/or dithering? Go from 44.1kHz/16bit to 192kHz/24bit in a proper way.
Otherwise, the oversampling will be done anyway in the DAC - with no linear or bicubic interpolation... the cheap and dirty way.

If I use a pure memory solution, I would ... So, I'm after simply splicing/replacing the clock outputs that the DIR generates and outputs (three clks, LRCKO which is nothing more than fs extracted from the SPDIF signal, SCKO and BCKO, the last two can be generated via a PLL off of the new stable fs that generates LRCKO)
You won't achieve anything with that. The DIR already has an advanced PLL (50ps). Trying to lock a new clock to the incoming signals will lock your "new" PLL to the incoming jitter - it is inevitable, regardless of how stable and jitter-free the free-running oscillator is. If you don't lock the clocks, you will periodically lose samples - that will generate more distortion than the jitter you are trying to remove.
 
True, but I have the feeling that a usual microcontroller won't have enough horsepower to do all that "cleaning", upsampling and oversampling...

Sure, if you feel it's necessary, go for it :p You're quite right, a normal uC doesn't have that kind of horsepower. It's just that I think there's value in simply re-clocking without any processing. You know, 'bit perfect' re-clocking:D

What's the point of reclocking and buffering if you don't do some linear/bicubic interpolation and/or dithering? Go from 44.1kHz/16bit to 192kHz/24bit in a proper way.

We all have our own idea of what's 'proper' - to me linear interpolation doesn't measure up. No real-world signal that I've seen goes in a straight line.:eek: I can see some value in upsampling though (thus bypassing any nasties that will be lurking in the off-the-shelf DF).

Otherwise, the oversampling will be done anyway in the DAC - with no linear or bicubic interpolation... the cheap and dirty way.

Actually linear interpolation is the cheap way (in terms of gates, not price, as it ain't mass produced); a proper FIR filter is far more computationally intensive. Nothing dirty about doing FIR properly - off-the-shelf chips don't usually do it properly, they compromise to make it cheaper.
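To put some rough numbers behind that, here's a sketch of my own (a sine test tone and made-up filter lengths, not any particular chip's filter): 4x oversampling by linear interpolation versus a windowed-sinc FIR interpolation filter.

```python
# Compare 4x oversampling of a sine by linear interpolation vs a
# windowed-sinc FIR lowpass.  Illustrative parameters only.

import numpy as np

L = 4                                    # oversampling ratio
n = np.arange(256)
x = np.sin(2 * np.pi * 0.11 * n)         # band-limited test signal

up = np.zeros(len(x) * L)                # zero-stuff to the high rate
up[::L] = x

# Linear interpolation == filtering the zero-stuffed stream with a
# triangular kernel
tri = np.concatenate([np.arange(1, L + 1), np.arange(L - 1, 0, -1)]) / L
lin = np.convolve(up, tri)[L - 1:][:len(up)]

# Windowed-sinc FIR lowpass cutting off at the original Nyquist rate
half = 8 * L
k = np.arange(-half, half + 1)
h = np.sinc(k / L) * np.hamming(len(k))
h *= L / h.sum()                         # normalize DC gain to unity
fir = np.convolve(up, h)[half:][:len(up)]

# Error against the ideal oversampled sine (skip filter edge effects)
true = np.sin(2 * np.pi * 0.11 * np.arange(len(up)) / L)
sl = slice(half, len(up) - half)
lin_err = np.max(np.abs(lin[sl] - true[sl]))
fir_err = np.max(np.abs(fir[sl] - true[sl]))
print(lin_err > fir_err)                 # True
```

For these parameters the FIR output tracks the ideal waveform far more closely than the straight-line interpolator, at the cost of a 65-tap convolution per output sample instead of one multiply-add.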
 
I think this is getting off topic slightly. I don't want to do any FIR filtering, upsampling or anything like that. Just DIR9001->DF1704. The DIR handles extracting the data and the DF1704 handles oversampling.

Let's start with the DIR9001 and ignore the DF1704 for the moment. The DIR generates 3 clk signals: LRCKO, SCKO and BCKO. So say fs is 44.1kHz; that means LRCKO out of the DIR is 44.1k but is 'dirty' and jittery. So why can't I simply get a very accurate VCXO at 44.1kHz and send it straight to the DF1704 (ie getting rid of LRCKO from the DIR altogether)? All I would have to do is make sure the first rising edge of LRCKO lines up with the first rising edge of my VCXO; thereafter I don't care what LRCKO is doing because I'm not tracking it. And if LRCKO stops (ie the music stream stops), I just shut off my VCXO.

Does this make sense? Why won't this work? The only issue that I can foresee is the data being shifted because it is based on the jittery DIR LRCKO output, but because of long setup and hold times, there might be enough slack not to worry about it.
 
Let's start with the DIR9001 and ignore the DF1704 for the moment. The DIR generates 3 clk signals: LRCKO, SCKO and BCKO. So say fs is 44.1kHz; that means LRCKO out of the DIR is 44.1k but is 'dirty' and jittery.

Yes, although most DACs these days don't take their timing from that signal, but from the master clock which let's say will be at 11.2896MHz. That will be dirty too.

So why can't I simply get a very accurate VCXO at 44.1kHz and send it straight to the DF1704 (ie getting rid of LRCKO from the DIR altogether)?

It's a good question. The answer is that practically no sources use very accurate clocks - there's really no need. A few hundred ppm difference means a very cheap XTAL osc can be used, and the pitch difference isn't a worry; nobody can hear that small a change.

All I would have to do is make sure the first rising edge of LRCKO lines up with the first rising edge of my VCXO; thereafter I don't care what LRCKO is doing because I'm not tracking it. And if LRCKO stops (ie the music stream stops), I just shut off my VCXO.

Input clocks drift with time. So even if you lined it up and had an incredibly stable master oscillator, your source frequency will drift around somewhat. You don't want to drop samples or double up (repeat a sample) as a result as this sounds terrible. Hence PLLs are used which track these small changes in input frequency.
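The scale of the problem is easy to put numbers on. These are my own illustrative figures, not measurements of any particular source:

```python
# How fast a free-running local clock slips against the source, for an
# assumed 100 ppm frequency difference at fs = 44.1 kHz.

fs = 44_100             # samples per second
ppm = 100               # assumed source-vs-local frequency offset
slip = fs * ppm / 1e6   # samples gained or lost per second
print(slip)             # 4.41 samples/s

# A half-full 1024-sample FIFO gives 512 samples of slack:
seconds = 512 / slip
print(round(seconds))   # 116 s until the buffer over- or underruns
```

So even a generous buffer only postpones the dropped/doubled sample; it doesn't eliminate it unless the output clock tracks the source.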
 
Yes, although most DACs these days don't take their timing from that signal, but from the master clock which let's say will be at 11.2896MHz. That will be dirty too.

I would replace that too, I just wanted to make the example simpler ;)

It's a good question. The answer is that practically no sources use very accurate clocks - there's really no need. A few hundred ppm difference means a very cheap XTAL osc can be used, and the pitch difference isn't a worry; nobody can hear that small a change.

So are you saying to just live with whatever clks are generated from the DIR? How do companies improve their front-ends then? I can't imagine they just accept DIR9001->DF1704 and call it good.

Input clocks drift with time. So even if you lined it up and had an incredibly stable master oscillator, your source frequency will drift around somewhat. You don't want to drop samples or double up (repeat a sample) as a result as this sounds terrible. Hence PLLs are used which track these small changes in input frequency.

Yes, you are right; I was hoping the drift wouldn't be so bad that it would go out of synch. I guess it would at some point.


So at this point are you suggesting some kind of memory based solution that stores all the data from the DIR, and clocks it out independently/asynchronously with clean clocks? Sorry I'm just trying to understand everything before I tackle this project.

Thanks
 
So are you saying to just live with whatever clks are generated from the DIR? How do companies improve their front-ends then? I can't imagine they just accept DIR9001->DF1704 and call it good.

To improve the front-end, it's important to know what the main sound quality issues are. It turns out that jitter isn't the biggie - common-mode RF interference is, and it should be addressed with sufficient isolation/filtering. Jitter is the second one - to address this, companies with pretensions to the highest quality implement their own PLLs (either analog or digital).

So at this point are you suggesting some kind of memory based solution that stores all the data from the DIR, and clocks it out independently/asynchronously with clean clocks?

Yeah, Dan Lavry describes such a technique in the paper I cited. It turns out that not very much memory is needed; the regenerated clock is controlled by a uC feeding a DAC into a VCXO via a LPF.
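Rough sizing of why "not very much memory is needed" (my own assumed numbers, not Lavry's): once the VCXO is locked it follows the source's average rate, so the FIFO only has to absorb the transient slip while the slow control loop settles.

```python
# FIFO sizing for a VCXO-based reclocker, with assumed worst-case
# frequency offset and loop settling time.

fs = 44_100
offset_ppm = 100        # assumed worst-case source-vs-VCXO offset
settle_s = 2.0          # assumed control-loop settling time
transient_slip = fs * offset_ppm / 1e6 * settle_s
print(transient_slip)   # 8.82 samples -> a few dozen words of FIFO suffice
```

Compare that with the half-megabyte buffer of the Genesis Digital Lens mentioned below, which free-runs for much longer stretches instead of steering its clock tightly.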

Sorry I'm just trying to understand everything before I tackle this project.

Apologies not required, it's very sensible to get some kind of clarity about what you're trying to achieve before starting on building it. :)
 
You forget that not only the clocks are jittery, but the whole stream, including the DATA - because it comes through the same fluctuating chain.
The clock carries the timing information for that data; you cannot just remove it and replace it with something else - as I said already, at best you will periodically "lose" data (and repeat it at other times). And the jitter will still be in the DATA.
You need that cache/buffer at a minimum for proper jitter elimination.

If it were easy/cheap, it would have been done in off-the-shelf chips already... in the 30 years of digital audio. And companies wouldn't be making serious money with this:
Key to the abilities of the 740C is ATF™ (Adaptive Time Filtering) an upsampling process developed in conjunction with Anagram Technologies of Switzerland. This system intelligently interpolates 16-bit/44.1 kHz CD (or other) audio data to 24-bit/384 kHz through the use of a 32-bit Analog Devices Blackfin DSP (Digital Signal Processor) for the very best sound quality. The ATF system applies sophisticated polynomial curve fitting interpolation and incorporates a time domain model which allows data buffering and re-clocking. This almost completely eradicates digital jitter.
Or: http://www.stereophile.com/content/genesis-technologies-digital-lens-page-2
The Lens achieves its jitter rejection by putting the audio data through the half megabyte of buffer memory. The clock recovered from the transport clocks the audio data into the memory, but the output clock that feeds the data to your digital processor is generated by a precise, carefully realized clocking circuit in the Lens. This technique totally isolates your digital processor from the transport's clock—and its jitter.
 
I think this is getting off topic slightly. I don't want to do any FIR filtering, upsampling or anything like that. Just DIR9001->DF1704. The DIR handles extracting the data and the DF1704 handles oversampling.

Let's start with the DIR9001 and ignore the DF1704 for the moment. The DIR generates 3 clk signals: LRCKO, SCKO and BCKO. So say fs is 44.1kHz; that means LRCKO out of the DIR is 44.1k but is 'dirty' and jittery. So why can't I simply get a very accurate VCXO at 44.1kHz and send it straight to the DF1704 (ie getting rid of LRCKO from the DIR altogether)? All I would have to do is make sure the first rising edge of LRCKO lines up with the first rising edge of my VCXO; thereafter I don't care what LRCKO is doing because I'm not tracking it. And if LRCKO stops (ie the music stream stops), I just shut off my VCXO.

Does this make sense? Why won't this work? The only issue that I can foresee is the data being shifted because it is based on the jittery DIR LRCKO output, but because of long setup and hold times, there might be enough slack not to worry about it.

THE PROBLEM:
You can't get to where you are trying to go via the road you want to take. Any analog PLL solution (VCO, VCXO, etc.) will still need to track the incoming clock of the DIR, and therefore will have to pass low-frequency jitter - which some feel is at the root of most audible jitter-related effects. Implementing a digital PLL using semiconductor memory and microcontroller-based pointer management would require some non-trivial software design and PLL design expertise on your part, and would STILL have to track the incoming clock from the DIR. Any such tracking inherently provides a door through which jitter of some low frequency can pass.

OPTIONS:
You could simply free run the local (DAC resident) clock generator, without regard to source synchronization. This is called asynchronous re-clocking and is recommended by Kusunoki himself, but it requires a very high re-clock rate (as high as 100MHz) and will INCREASE total jitter, not reduce it. Advocates claim that although increased, the additional jitter created by asynchronous re-clocking is less annoying to the ear. I have my doubts about that. Or, you could reduce your performance goals a little bit and utilize a newer DIR chip.
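For what it's worth, the arithmetic behind that jitter increase is simple (my own back-of-envelope, assuming the 100MHz re-clock rate mentioned above):

```python
# A free-running re-clocking flip-flop quantizes every data transition to
# its own clock grid, adding up to one full re-clock period of timing
# uncertainty.

f_rc = 100e6                  # assumed asynchronous re-clock rate, Hz
worst_ps = 1 / f_rc * 1e12    # worst-case added timing error, picoseconds
print(worst_ps)               # about 10,000 ps (10 ns), vs ~50 ps from the DIR9001
```

That is orders of magnitude above the recovered-clock jitter of the receiver chips discussed here, which is why the claim that it sounds better rests on the *character* of the jitter, not its magnitude.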

RECOMMENDATION:
Therefore, I suggest the following simple solution for you. Toss the Burr-Brown DIR9001 and use instead the Wolfson WM8804. The WM8804 features a memory (FIFO) based PLL that will filter source jitter down to near 100Hz, which is much lower than the analog PLLs of any other DIR chip of which I'm aware. You could even add a second stage of strong jitter cleaning by inserting the Cirrus CS2300 (another memory-based PLL) in the bit-clock signal, between the WM8804 and the PCM1704s. That's about as much source jitter-rejection as you are going to obtain using off-the-shelf components without significantly increasing the design effort required to realize your DAC.
 
That new (2009) CS2300 is an interesting chip... I hadn't seen it before.
A PLL filter going down to 1Hz sounds interesting. And it can take XTAL inputs too (I am thinking about cleaning up some existing Xtal/PLL divider jitter).
As for the DIR9001 - TI claims 50ps recovered jitter. WM claims the same 50ps for the 8804... I know that WM says their part goes down to 100Hz and the "competitor" only to 10kHz... but I don't know if that is referring to the TI part :)
 