Here is a sim of the glue logic board I designed for the LTC2642 DAC (16-bit, Vout, binary-offset input).
The logic board, built like the sim, did not work as intended but I managed to fix it.
As is, the sine produced by the DAC with this logic board looked something like this:
The fix was to add 1 extra bit of delay to the data, easily done as the neighbouring shift register output was unused on the board.
I presume the shift was causing the 2nd data bit to be written to the MSB of the DAC, and a zero padding bit to become the new LSB, resulting in something like the above waveform,
e.g. 1 1 1 1 0 0 0 0 0
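To convince myself this framing error explains the waveform, here is a minimal sketch of it (the function name, sine parameters, and sample count are my own, not from the board): dropping the original MSB and shifting in a zero LSB makes every code in the top half of the binary-offset range wrap to the bottom half, which produces exactly the kind of full-scale discontinuities shown above.

```python
import math

def shifted_word(code):
    """One-bit framing error: the original MSB is lost and a
    zero pad becomes the new LSB (16-bit truncation)."""
    return (code << 1) & 0xFFFF

# Binary-offset sine: mid-scale is 0x8000 on a 16-bit DAC
samples = [int(0x8000 + 0x7FFF * math.sin(2 * math.pi * n / 64))
           for n in range(64)]
corrupted = [shifted_word(s) for s in samples]
# Any sample >= 0x8000 (the whole top half of the sine) wraps low,
# so the output folds over instead of tracing a clean sine.
```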
All good but I want to understand why the data needed to be delayed by an additional clock cycle.
My arrangement for the shift registers is like so:
A single shift register delays the data by half a BCLK cycle, then the inverted BCLK clocks the data into and through the preceding shift registers after another half cycle...
This was to avoid the problem with traditional DAC glue logic circuits where data is shifted through the delay registers and clocked into the DAC on the same clock edge, and relies on propagation delay to work at all.
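A toy margin calculation makes the difference between the two schemes concrete. This is my own illustration with an assumed register clock-to-output delay (`t_pd`), under the assumption that the DAC samples data on rising BCLK edges: clocking the registers on the same rising edge leaves only `t_pd` of hold margin (the race that relies on propagation delay), while clocking them on inverted BCLK buys roughly half a period of margin on both sides.

```python
def margins(launch_offset, t_pd, period=1.0):
    """Setup/hold margins at the DAC's rising-edge sample.

    launch_offset: fraction of the period at which the shift
    register is clocked (0.0 = same rising edge as the DAC,
    0.5 = inverted BCLK). t_pd is the register's clock-to-output
    delay. Margins are relative to the sample edges at t = 0
    and t = period.
    """
    change = launch_offset + t_pd   # when the data line actually moves
    setup = period - change         # stable time before the next sample
    hold = change                   # stable time after the previous sample
    return setup, hold

# Same-edge clocking: hold margin is nothing but the propagation delay
s1, h1 = margins(0.0, 0.05)   # h1 == t_pd
# Inverted-BCLK clocking: about half a period of margin each way
s2, h2 = margins(0.5, 0.05)
```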
When I sim trad glue logic, where the data is clocked into and through the shift registers with normal BCLK, I see what I would expect:
The LSB is half a cycle ahead of the latch signal, but presumably there is enough delay in the real world that it gets clocked into the DAC on the rising edge of BCLK at its tail end, before the latch:
In the sim for my logic board, things are aligned more appropriately and in time for the latch:
I can't understand why my board needed the data delayed by an extra clock cycle.
Hopefully I gave a clear enough explanation of the setup and the issue that someone might be able to shed some light on this.