I've seen a number of claims on this forum that feeding the PCM1704's bit clock directly from the master clock gives better results than taking BCLK from the DF1704's serial port. They say ignorance is bliss, but I see no reason why this scheme should actually work. Since the phase relationship between BCLK, WCLK, and DATA is then completely undefined, the DAC may be:
1) Sampling DATA while DATA is in transition; or
2) Shifting 23 or 25 bits per word instead of 24 (see the sketch below)
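To show why point 2 is so destructive, here's a toy illustration of my own in plain Python (nothing from the TI datasheet, just the framing logic): the PCM1704 takes 24-bit MSB-first two's-complement words, and a single extra bit clock misframes every word that follows.

```python
def to_bits(word, width=24):
    """Serialize a two's-complement word, MSB first."""
    return [(word >> (width - 1 - i)) & 1 for i in range(width)]

def from_bits(bits):
    """Reassemble a two's-complement word from MSB-first bits."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    if bits[0]:                       # sign bit set -> negative
        value -= 1 << len(bits)
    return value

samples = [0x400000] * 4              # steady half-scale positive level
stream = [b for s in samples for b in to_bits(s)]

# Correct framing: exactly 24 bits per word.
good = [from_bits(stream[i:i + 24]) for i in range(0, len(stream), 24)]

# One slipped bit: the shift register clocks 25 bits for the first word,
# so every later word is deframed one position late.
slipped = [from_bits(stream[i:i + 24]) for i in range(1, len(stream) - 23, 24)]

print(good)     # [4194304, 4194304, 4194304, 4194304]
print(slipped)  # [-8388608, -8388608, -8388608] -- full-scale negative
```

With the frame off by one position, the sign bit is corrupted, so a clean half-scale level decodes as negative full scale; quieter material merely gets doubled, which is hardly better.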
It seems to me that the supposed improvement in jitter performance (which nobody has ever quantified) would not make up for the potential bit errors, which are major. If a BCLK rising edge lands during a DATA transition, the shifted bit will be who-knows-what. Setup and hold requirements are both 10 ns on this chip, and if you are running at the maximum 24.576 MHz bit clock (96 kHz input to the DF1704), that leaves only about 600 ps of timing margin.
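For anyone who wants to check the arithmetic, here's the back of the envelope. The 10 ns setup and hold come straight from the PCM1704 datasheet; the 20 ns I allow for where the DATA edge can land relative to the master clock is my own rough budget for the DF1704's clock-to-output delay, not a TI number.

```python
t_bclk_ns = 1e9 / 24.576e6       # BCLK period: ~40.69 ns
t_su_ns, t_h_ns = 10.0, 10.0     # PCM1704 data setup and hold (datasheet)
t_data_uncertainty_ns = 20.0     # assumed spread of the DATA edge vs MCLK

margin_ns = t_bclk_ns - t_su_ns - t_h_ns - t_data_uncertainty_ns
print(f"BCLK period:    {t_bclk_ns:.2f} ns")
print(f"Leftover slack: {margin_ns * 1000:.0f} ps")   # ~690 ps, same ballpark
```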
Basically, I see no reason why this idea, popularized by the Broadhurst DAC, should be taken seriously. Can anyone explain how the required relationship between BCLK, WCLK, and DATA is supposed to be maintained?