Synchronised AD/DA Converters and Phase Noise

I know of some high-spec DACs where phase noise from the clocking is the dominant contributor. AFAIK these use PLLs.

To me these types of measurements are not about audibility but just one way to assess implementation quality. It's a pity ASR does not make such measurements, as the phase-noise top-20 list would probably look quite different from the SINAD list.
 
Yeah, some sites forget to look to the left of the test tone - distortion claimed to be better than -130 dB down, but I see a component at -92 dB. That's also distortion, no?

[attachment: dist.jpg]


//
 
If the timing noise is random, I do not see how the deviations could be correlated when sampling the generated signal after it has passed through the DUT (non-negligible delay from the MCLK pulse point of view). But I have no solid analysis behind my "feeling".
 
But if you run both ADC and DAC from the same "faulty" clock with close-in phase noise, won't these errors cancel, as both ADC and DAC are seeing the same timing inconsistencies?
There may be some noise cancellation, but since the DAC and ADC work independently and have their own noise sources, I would not expect substantial cancellation. And as some members have already pointed out, even with synchronous clocks the DAC and ADC do not have synchronous processing, and unequal clock wire/trace lengths cause timing skew. Moreover, as I already said, asynchronous clocks cause spectral leakage, which can only be reduced, not removed, with FFT windowing. IME these types of phase noise measurements do not make much sense with asynchronous clocks.
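To put the leakage point in made-up numbers (a 1 kHz tone analysed on a record whose clock is an assumed 20 ppm off; nothing here is a real measurement), a quick numpy sketch:

```python
# Minimal sketch, assumed values only: a tone generated on one clock and
# analysed on another ~20 ppm away never lands exactly on an FFT bin, so
# it leaks into neighbouring bins. A window lowers the skirt but does
# not remove it.
import numpy as np

fs = 48_000
n = 48_000                          # 1 s record -> 1 Hz bins
ppm = 20e-6                         # assumed clock mismatch
f_tone = 1_000.0 * (1 + ppm)        # nominally bin 1000, slightly off

x = np.sin(2 * np.pi * f_tone * np.arange(n) / fs)

def level_db(sig, window, bin_index):
    w = window(len(sig))
    spec = np.abs(np.fft.rfft(sig * w)) * 2 / w.sum()
    return 20 * np.log10(max(spec[bin_index], 1e-30))

for name, win in (("rectangular", np.ones), ("hann", np.hanning)):
    print(name,
          "- tone bin:", round(level_db(x, win, 1000), 1), "dB,",
          "10 bins away:", round(level_db(x, win, 1010), 1), "dB")
```

The Hann window pushes the skirt down by many tens of dB, but the leakage never goes to zero as long as the two clocks are not locked.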
 
But if you run both ADC and DAC from the same "faulty" clock with close-in phase noise, won't these errors cancel, as both ADC and DAC are seeing the same timing inconsistencies?
Not exactly. If you draw a graph of a music waveform, then put timing marks on the x-axis at the intervals where the samples are supposed to be taken. Say the top waveform is the DAC and the bottom waveform is the ADC. Say the DAC outputs a waveform first; it then goes into a DUT, and from there back into the ADC. In that case the waveform going into the ADC will be displaced in time relative to the DAC waveform, since there is a time delay through the DAC output stage, the DUT, and the ADC input filter. If the clock is running at 45 MHz, you don't need much delay to be off by one sample.

Now mark the graph with actual sample times randomly offset a bit from the ideal sample times. You can see which part of the DAC waveform is getting sampled, and how much later the ADC receives that analog data and starts digitizing it. You can also see that at that point in time the random clock error is a different one from when the DAC output that sample.

This thought experiment can be done on paper or in your head, whichever you prefer. Having worked through it, do you think the DAC and ADC errors cancel?

Also, some additional timing error/phase noise can be added in the DAC and the ADC because of random fluctuations in their clock input aperture window threshold, which are known to occur due to 1/f noise in the receiver.
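For anyone who prefers numbers to paper, here is a rough first-order sketch of the same thought experiment (every value is an assumption: a 1 kHz tone, 10 ps rms broadband jitter, one sample period of analog delay; the amplitude error is modelled simply as waveform slope times timing error):

```python
# First-order model, assumed values only: amplitude error = slope * timing error.
import numpy as np

rng = np.random.default_rng(0)
fs = 96_000
f0 = 1_000.0
n = 1_000_000
t = np.arange(n) / fs
slope = 2 * np.pi * f0 * np.cos(2 * np.pi * f0 * t)     # derivative of sin(2*pi*f0*t)

jitter = rng.normal(0.0, 10e-12, n)          # assumed 10 ps rms, roughly white
delay_samples = 1                            # assumed one sample period through DUT + filters

j_adc = jitter                               # ADC samples "now" with error j(k)
j_dac_seen = np.roll(jitter, delay_samples)  # DAC edge that produced this bit of waveform: j(k-1)

err_dac_alone = slope * j_adc
err_loopback = slope * (j_adc - j_dac_seen)  # what is left with a shared clock and a delay

print("rms amplitude error, DAC alone   :", np.std(err_dac_alone))
print("rms amplitude error, shared clock:", np.std(err_loopback))
```

With broadband jitter like this the two contributions add in rms rather than cancel; how much actually survives depends on how quickly the jitter moves compared with the loop delay.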
 
OK, I hear you. I don't want to derail the OP's excellent project thread, so just one last question: we're talking about close-in phase noise here, i.e. the phase noise components we're interested in are <10Hz, meaning a time constant of ~100ms. How would trace length deviations, which introduce time delays in the nS range, be of any concern?

And - maybe - if the main contributor to these noise skirts around the fundamental frequency is indeed much more related to Vref amplitude fluctuations than to clock phase noise, then the synchronous clock approach would still have its benefits, enabling us to look deeper into these Vref-induced skirts.

EDIT: if you are talking about the time delay introduced by the inherent LPF nature of any analog stage, maybe there will be time delays in the uS range. Still orders of magnitude away from the mentioned 100mS range.

EDIT2: I realize that the SI abbreviation for a second is not S but rather s. My bad. I will leave the previous content unchanged in this regard and be more careful next time.
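To put my own back-of-the-envelope comparison in numbers (the skew and delay values below are assumptions, not measurements):

```python
# Assumed example values: how far a 10 Hz close-in phase wobble gets
# during a ns-scale trace skew or a us-scale analog group delay.
f_wobble = 10.0                      # Hz, close-in component (assumption)
period = 1 / f_wobble                # ~100 ms

for name, skew in (("trace skew", 5e-9), ("analog group delay", 5e-6)):
    print(f"{name}: {skew:.0e} s is {skew / period:.0e} of one wobble period")
```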
 
There are still some benefits from synchronous clocks, though. Timing errors between the DAC and ADC will exist, but they may still be smallish in terms of what is needed for music reproduction. Humans are most sensitive to timing errors for ITD localization, which may be related to timing errors between stereo channels of as little as a few microseconds (according to some research, anyway). Given the claimed numbers, it doesn't offhand seem like a few nanoseconds of skew between chips should be a problem. That said, using better and better clocks always seems to pay some audible dividends, but only if other problems are addressed first. It seems to me that very small timing errors that fold down into the audio band are easily masked by other problems, so go after the bigger problems first. That includes grounding and shielding types of concerns, clean power, etc.; it's not just what is seen on a schematic.

As an aside: sometimes people PM me for help with projects. A relative newbie told me he was trying to connect a Chinese AK4137 board to an SPDIF transmitter board, and then to his DAC (the reasons were complicated, I won't go into them here). He said it didn't sound right but the distortion measured very low. I said distortion isn't your problem. He said, "I know. It's noise." It surprised me that a newbie would figure that out. So I asked how he knew that. He said when he moved the boards around relative to each other the noise floor changed. Wow, he's an experimentalist! That's great. So we talked about fixing that noise problem and the noise problems in the power supplies.
 
...If the clock is running at 45 MHz, you don't need much delay to be off by one sample. ... Having worked through it, do you think the DAC and ADC errors cancel?

I would agree if we were discussing the phase noise floor, but the question was about close-in phase noise, so the phase fluctuations of interest are very slow compared to the clock frequency. That is, you have hundreds of thousands of your 45 MHz clock cycles in a row all displaced by about the same amount.
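Rough arithmetic behind that, with the 45 MHz clock from the post above, an assumed 10 Hz wobble, and an assumed ~2 us loopback delay:

```python
# Assumed example values, except the 45 MHz clock from the thread.
import math

f_clk = 45e6              # master clock
f_wobble = 10.0           # Hz, close-in phase fluctuation (assumption)
loop_delay = 2e-6         # s, DAC -> DUT -> ADC delay (assumption)

print("clock cycles per wobble period:", f"{f_clk / f_wobble:.1e}")

# For a sinusoidal timing wobble j(t) = A*sin(2*pi*f*t), the part that does
# not cancel between DAC and ADC over the loop delay is bounded by
# |j(t) - j(t - tau)| <= 2*pi*f*tau * A.
print("worst-case uncancelled fraction of the wobble amplitude:",
      f"{2 * math.pi * f_wobble * loop_delay:.1e}")
```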
 
Add the mechanical vibration situation in a typically built DAC and your -117.6 dBc/Hz at 1 Hz clock has gone down the drain. In a quiet lab, sure... but when you play music at home, reaching say 80 dBA at the favourite chair, everything in that room vibrates violently by comparison... And if you live in a busy town... well...

I was a believer, but not so much any more, due to some of my own experiments but also some pondering of the general state of affairs...

Monotonicity, low signal-correlated noise, good attenuation at Fs/2 and reasonably low distortion is probably what really matters.

//
 
...have hundreds of thousands of your 45 MHz clock cycles in a row all displaced by about the same amount.
If the clock were simply phase modulated with something like a smooth sine-wave phase deviation over time, then I could see that. However, if the phase deviation over time is sort of phase modulated at LF, but with lots of various higher-rate phase deviations on top (more like a complex music waveform), then it's not so clear what the effects would be (IOW, it's not clear that it's a linear system such that the frequencies of the phase-deviation waveform can be treated as separate signals). The clock phase deviation could bridge the whole range of close-in and far-out phase noise in less than one cycle of the lowest frequency of phase deviation, or it could be very smooth, or sometimes it could be one or the other. Makes me think that what we really want to know is not what we can measure: we can measure the amount of time the clock spends on average at a given offset, but we can't derive the time-domain waveform of the phase deviation from that.
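A toy illustration of that last point (arbitrary values): two phase-deviation records that spend exactly the same amount of time at every offset, yet behave completely differently over time:

```python
# Same dwell-time distribution, very different time-domain behaviour.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
slow = np.sin(np.linspace(0, 2 * np.pi, n))   # one slow, smooth wobble
fast = rng.permutation(slow)                  # same values, shuffled in time

print("identical histograms:", np.allclose(np.sort(slow), np.sort(fast)))
print("rms sample-to-sample change, slow:", np.std(np.diff(slow)))
print("rms sample-to-sample change, fast:", np.std(np.diff(fast)))
```

Both records look the same to any measurement that only averages the time spent at each offset, yet their sample-to-sample behaviour is wildly different.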
 
...is probably what really matters.
Yeah. I have said that the clock is one of the last things that needs fixing, so fix all the other stuff first. Once you do that, then go back and try a better clock. The masking sources should be gone and an SOA clock can then be appreciated. Also, DSD DACs are more sensitive to clock jitter than PCM DACs simply because the step size is much larger; a given clock timing error therefore produces a bigger area under the curve of the output pulse that will be low-pass filtered to produce the audio. For PCM the amplitude steps are small, so the timing error only integrates out to a small error at the DAC output.
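A very rough sketch of the step-size argument with assumed numbers (a 50 ps edge error, a full-scale DSD-style step versus the sample-to-sample steps of an 8x-oversampled 1 kHz PCM waveform); it only compares step height times timing error, nothing more:

```python
# Assumed values: error area added by one mistimed edge ~ step height * timing error.
import numpy as np

dt = 50e-12                          # assumed 50 ps edge timing error

dsd_step = 2.0                       # DSD-like: each update swings (nearly) full scale
fs_os = 44_100 * 8                   # assumed 8x oversampled PCM
t = np.arange(10_000) / fs_os
pcm_steps = np.abs(np.diff(np.sin(2 * np.pi * 1_000.0 * t)))   # small adjacent-sample steps

print("error area per edge, DSD-like:", dsd_step * dt)
print("error area per edge, PCM-like:", pcm_steps.mean() * dt)
```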
 
So you guys know, I reported my first post here, kindly asking the moderators to split this discussion into its own thread. I'll probably have some more questions about this really interesting topic, but I hope we can move this to a new place and not pollute ska's very interesting thread any more.
 