Preprocessing data while driving a DAC.

Suppose a DAC is driven in the following way:

Since a DAC essentially converts a binary string into a voltage or current, and does so every time a binary string is presented at its input, it makes sense to hold a digital input at logic 1 if the corresponding bit in the succeeding string is also 1. This should avoid the corresponding bit going down and back up when the next binary string arrives. So, it goes something like this:

If the present bit is 1 and the succeeding bit is also 1, hold the input at 1. If the next bit is 0, drop the input to 0 normally. The same holds for successive 0 bits. This, according to my logic, should minimise some noise issues at the output.
 
Markw4 said:
Is the question about the I2S/DSD data going into a dac, or where the actual conversion from digital to analog is being done?
What I am writing about applies to what happens inside the DAC chip. One has to have access to the individual bits of the actual digital-to-analog conversion circuit.

Returning to what I wrote in the OP, having non-identical rise and fall times is definitely a problem, but it can be resolved. A short delay should let the outputs of the falling bits drop far enough not to cause issues when the next full bit string is applied.

This is an algorithm that may address the issue of different rise and fall times.
Code:
1) Are the bits of the current and the next bit string the same?
2) If yes, do nothing to the bits.
3) If no, check whether each changed bit becomes a 1 or a 0.
4) Apply the new 0s.
5) Pause processing long enough for the falling bit outputs to reach a low
   value suitable for applying the new 1s.
6) Apply the new 1s.
7) Repeat from 1 for the next bit string.
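Below is a minimal sketch of how steps 1 to 7 could look if the DAC's parallel input latch were directly writable. DAC_INPUT, dac_two_phase_update(), delay_ns() and the 50 ns settling figure are all assumptions for illustration, not part of any real chip or driver.
Code:
#include <stdint.h>

extern volatile uint16_t DAC_INPUT;   /* assumed write-only parallel input latch   */
extern void delay_ns(unsigned ns);    /* assumed settling-delay helper             */

static uint16_t cur;                  /* shadow copy of the word currently applied */

void dac_two_phase_update(uint16_t next)
{
    uint16_t changed = cur ^ next;            /* step 1: which bits differ?          */
    if (changed == 0)
        return;                               /* step 2: identical word, hold inputs */

    uint16_t falling = changed & cur;         /* step 3: bits going 1 -> 0           */
    uint16_t rising  = changed & next;        /*         bits going 0 -> 1           */

    DAC_INPUT = cur & ~falling;               /* step 4: apply the new 0s            */
    delay_ns(50);                             /* step 5: placeholder settling delay  */
    DAC_INPUT = (cur & ~falling) | rising;    /* step 6: apply the new 1s            */
    cur = next;                               /* step 7: remember word for next time */
}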
 
If the present bit is 1 and the succeeding bit is also 1, hold the input at 1. If the next bit is 0, drop the input to 0 normally. The same holds for successive 0 bits. This, according to my logic, should minimise some noise issues at the output.

This will give you noise that depends on the digital bit values and the number of transitions; in other words, signal-dependent noise, or nonharmonic distortion.

On the other hand, if you output each bit normally, let it be sampled by the DAC, and then invert every bit for half a period of whatever your sample clock is, you get constant, uniform noise that is not signal dependent.

This also applies to noise radiated by traces carrying digital data signals, btw.
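For what it is worth, here is a minimal sketch of how that suggestion could be read: each sample word is driven normally while the DAC samples it, then its bitwise complement is driven for the second half of the clock period, so every input line makes the same number of transitions per sample whatever the data. dac_write(), delay_ns() and the timing are assumptions, not a real API.
Code:
#include <stdint.h>

extern void dac_write(uint16_t word);   /* assumed parallel-bus write            */
extern void delay_ns(unsigned ns);      /* assumed delay helper                  */

/* Drive one sample with constant per-line activity: word, then its complement. */
void dac_output_constant_activity(uint16_t sample, unsigned half_period_ns)
{
    dac_write(sample);                  /* data valid while the DAC samples it   */
    delay_ns(half_period_ns);
    dac_write((uint16_t)~sample);       /* complement for the second half period */
    delay_ns(half_period_ns);
}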
 
What exactly is this idea meant to convert, as it seems pretty much useless for standard digital audio of 16 bits and up?

IIRC, it was something that Rob Watts mentioned. My intuitive take on it might be something like this: for sigma-delta DACs, the HF pulses are integrated in the output-stage LP filters and thus transformed into a continuous signal. The accuracy of that signal is in part a function of the energy per pulse being integrated. In other words, current is charge passing a point over time, and the area under the pulse curve is proportional to the charge transferred.
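To put that charge argument in symbols (my own restatement, assuming an idealised trapezoidal pulse with linear edges; the symbols below are not from the original posts):
Code:
Q = \int_0^{T} i(t)\,dt \approx I_{\mathrm{pk}}\left(T_{\mathrm{flat}} + \frac{t_r + t_f}{2}\right)
Each complete pulse therefore carries an edge-related charge of roughly I_pk(t_r + t_f)/2 on top of the flat-top contribution. Because the number of pulses per unit time in a sigma-delta stream depends on the signal, that edge-related charge error is signal dependent, which ties back to the rise/fall-time issue discussed above.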
 