Building the ultimate NOS DAC using TDA1541A

I am not sure you have been reading this thread very thoroughly, as there has been much talk about the benefits of stopped clock operation and of how many things influence the performance of the TDA1541A. Jitter is not only a matter of the clock entering the TDA1541A being low jitter, but also of the timing uncertainty with which the TDA1541A converts the stored data to the analog output.
Here are just a few comments from ECdesigns on these matters, and I recommend you read much more of the thread to look into his observations and measurements.
In post 6101 there is a little about the noise when the clock is active.
In post 6126 you can see the timing of the stopped clock operation.
It has nothing to do with sampling frequency. The timing is the same even when the CLK is 384 kHz. The main feature is that at the time LE goes high, there is NO activity on any of the DATA or CLK inputs.
I am not the right one to explain it correctly; look for ECdesigns' explanations.

I looked at those posts. Thank you for referencing them. Since this thread is over 600 pages it's hard to read or remember it all.

Stopped clock will not function at an Fs of 384 kHz, as that means the bit clock runs at 6.144 MHz. There is no time to stop the clock. If you look at my logic analyzer waveform, the latch enable occurs while the clock and the data are unchanged, on the falling edge of the clock, and is reasserted with the next rising edge.

I can't control internal delays that might affect the exact time the data is latched, but by giving it this quick, precise pulse, that uncertainty should be minimized as much as possible. The only extrinsic source of controllable jitter in my design is the I2S bit clock input, as the data and latch signals are derived from it.

I will have to get my full setup on a scope and look at the switching noise. If I am making an error here, please enlighten me as to what could be improved.
 

You're right concerning 384 kHz, but at any other sampling frequency the transition from data register to output happens while CLK and DATA are silent.
 
If I am making an error here, please enlighten me as to what could be improved.

You could unhook the latch enable from the bit clock, feeding it directly from a low phase noise oscillator divided down according to the sample rate.
The FPGA should be slaved to the latch enable, providing bit clock and data when requested, and afterwards you could stop the bit clock.
 
I am very interested in ways to improve my implementation, so I apologize in advance for being contrary. It is only for the sake of learning more about this subject and to pose questions more directly in hopes of spurring accurate discussion on the matter.

You could unhook the latch enable from the bit clock, feeding it directly from a low phase noise oscillator divided down according to the sample rate.
The FPGA should be slaved to the latch enable, providing bit clock and data when requested, and afterwards you could stop the bit clock.

How would I synchronize the latch to the bit-clocked datastream in this case? How would this improve on a low phase noise oscillator used to control the incoming bit clock, with the latch enable then based upon that? My current implementation assumes only a low phase noise BCLK input and times everything off of that. I2S only has a word clock, not a specific latch enable. The FPGA could be arranged to time itself off the word clock, but that would require internal PLL reclocking, which I don't think would be as efficient or accurate an implementation as allowing the user to supply their own low phase noise bit clock (or MCLK) signal.

You're right concerning 384 kHz, but at any other sampling frequency the transition from data register to output happens while CLK and DATA are silent.

Does that transition need to be silent after the latch enable has finished toggling? As currently implemented, there is no transition while LE is high. I'm not sure about the internal state of the TDA1541, but the LE signal should be internally delayed by a few ns, as it requires no setup time according to the datasheet. As far as I can tell from my own experience, the output data will be held until the next latch signal, and any noise during this time shouldn't disturb the states of any bits unless that noise is of sufficient level to affect the registers themselves (which would be insane noise levels); ground bounce and other analog issues that could affect the output notwithstanding.

One must clock in the next word at some point while the output is held, as data doesn't magically appear, so there will be a burst of noise at some point during the supposed quiet time as the next word is toggled in. I'm not sure what confining this burst to a small period between LE ticks buys over spreading it out over the entire inter-LE time. If you have a reference to a previous post or more information, I'd really like to know more about this phenomenon.
 
I'm not sure if that is feasible at 384kHz.

It is not, nor at any frequency where you have 16-bit data in 16-bit frames; that is as slow as the bit clock can be relative to the sample frequency. You cannot stop the clock when it takes the entire sample period to load the DAC. For stopped clock operation the frame length has to exceed the audio data length.
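The arithmetic above can be sketched quickly; the 16-bit word length is from the thread, while the 32-bit frame at lower rates is an assumed example for illustration:

```python
# Back-of-envelope check (assumptions: 16-bit audio words, one data line
# per channel as in TDA1541A simultaneous mode, frame lengths chosen for
# illustration) of how much idle bit-clock time a given framing leaves.

def idle_bit_slots(fs_hz, frame_bits, data_bits=16):
    """Return (idle bit slots per frame, implied bit clock in Hz)."""
    bck_hz = fs_hz * frame_bits            # bit clock implied by this framing
    return frame_bits - data_bits, bck_hz

for fs, frame in [(44_100, 32), (192_000, 32), (384_000, 16)]:
    idle, bck = idle_bit_slots(fs, frame)
    print(f"Fs={fs:>7} Hz, {frame}-bit frame -> BCK={bck/1e6:.4g} MHz, "
          f"{idle} idle slots to stop the clock in")
```

At 384 kHz with 16-bit frames the bit clock is exactly 6.144 MHz and there are zero idle slots, which is why the frame length has to exceed the data length for stopped-clock operation.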
 
I am very interested in ways to improve my implementation, so I apologize in advance for being contrary. It is only for the sake of learning more about this subject and to pose questions more directly in hopes of spurring accurate discussion on the matter.



How would I synchronize the latch to the bit-clocked datastream in this case? How would this improve on a low phase noise oscillator used to control the incoming bit clock, with the latch enable then based upon that? My current implementation assumes only a low phase noise BCLK input and times everything off of that. I2S only has a word clock, not a specific latch enable. The FPGA could be arranged to time itself off the word clock, but that would require internal PLL reclocking, which I don't think would be as efficient or accurate an implementation as allowing the user to supply their own low phase noise bit clock (or MCLK) signal.

The bit clock is not worth much; it's not the most important signal. You only have to ensure that the data is synced with it when loading the registers of the DAC. The crucial signal of an R-2R DAC is the latch enable, when data is presented to the switches that drive the ladder network. To avoid any interference, the bit clock should be stopped before the latch occurs.

You don't need any PLL; the FPGA simply should not provide the latch enable to the DAC. If you use the TDA1541A in simultaneous mode, pin 1 is the latch enable, so you need to provide this signal absolutely free of jitter.

To do this you have to feed the latch enable directly from a very low phase noise (in the frequency domain) oscillator. To reach the best phase noise performance you should use a crystal around 5 MHz, where the Q of the crystal is maximized; let's say 5.6448 MHz and 6.144 MHz to cover both sample rate families. But you need a specific clock for each sample rate, from 44.1 kHz up to 192 kHz (or 384 kHz), so you have to divide the master clock down to the specific requested frequency, by 16 to 128 times. Obviously the divider also has to be jitter-free in the time domain (the best phase noise performance possible in the frequency domain).
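Under those assumptions (two master crystals at 5.6448 MHz and 6.144 MHz, one per sample-rate family), the division ratios work out as follows; the helper function and its name are mine, purely for illustration:

```python
# Division ratios from a family master clock down to the latch-enable
# rate (= Fs). Master frequencies are the ones proposed above.

MASTER_HZ = {44_100: 5_644_800, 48_000: 6_144_000}

def le_divider(fs_hz):
    """Integer ratio from the family master clock to the LE rate."""
    family = 44_100 if fs_hz % 44_100 == 0 else 48_000
    mclk = MASTER_HZ[family]
    assert mclk % fs_hz == 0, "Fs must divide the master clock exactly"
    return mclk // fs_hz

for fs in (44_100, 88_200, 96_000, 176_400, 192_000, 384_000):
    print(f"Fs={fs:>7} Hz -> divide by {le_divider(fs):>3}")
```

The ratios span 16 (for 384 kHz) up to 128 (for 44.1 kHz), matching the "16 to 128 times" range stated above.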

The FPGA has to be slaved to the divider that derives the latch enable from the external oscillator (master clock).
Moreover, the FPGA should provide the signal to switch between the two oscillators, one for each sample rate family, and it should tell the divider by how much to divide.

In other words, the FPGA should receive both the master clock and the latch enable clock. On the falling edge of the latch enable, the FPGA should start to load the registers of the DAC (16-bit parallel data for both channels), synced to the master clock provided by the external oscillator. After the registers of the DAC are fully loaded, the bit clock should be stopped.

This way you provide a very clean and jitter-free latch enable to the DAC, without interference from other signals like the bit clock.
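The load-then-stop sequence described above can be sketched behaviorally (plain Python, not HDL; the function name and the 128-cycle sample period are illustrative assumptions):

```python
# Behavioral sketch of the slaved FPGA: after the LE falling edge the
# 16 data bits are clocked out (bit clock gated on), then the bit clock
# is held stopped for the rest of the sample period.

def load_frame(left, right, mclk_per_fs=128, bits=16):
    """Return per-master-clock-cycle (bck_gate, data_l, data_r) lists."""
    bck, dl, dr = [], [], []
    for cyc in range(mclk_per_fs):
        active = cyc < bits                  # burst right after LE falls
        bck.append(1 if active else 0)       # 1 = bit clock gated on
        if active:
            shift = bits - 1 - cyc           # MSB first
            dl.append((left >> shift) & 1)
            dr.append((right >> shift) & 1)
        else:
            dl.append(0)                     # lines quiet until next LE
            dr.append(0)
    return bck, dl, dr

bck, dl, dr = load_frame(0x8001, 0x7FFE)
print(f"bit clock active for {sum(bck)} of {len(bck)} cycles")
```

With a 128x master clock, the bit clock (and both data lines) are active for only 16 of 128 cycles per sample, leaving the rest of the period completely quiet before the next latch.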
 
I will have to get my full setup on a scope and look at the switching noise. If I am making an error here, please enlighten me as to what could be improved.
Still, I am not the expert here, so I cannot give you the right answers. And there is no single post that has all the answers, so I am afraid you will have to skim through at least the last 1000 posts to find them. Look for posts by ecdesigns; that will narrow it down a bit. I think John has found that the latch process is safest (least jitter) when there has been no other activity on the TDA1541A chip for some time.
Look at posts 6069, 6077, 6078 and 6079.
In post 6095 he talks about the problems with running DEM at high frequencies because of the "ground bounce".
In 6098 there are some measurements and so on.
Try to look for yourself; it will be rewarding, I think.
By the way, he also thinks it will be very hard to avoid the noise from an FPGA. The glue logic way of doing it requires only the bit clock as the highest frequency in the whole process, and that is easier to decouple than higher frequencies.
 
You don't need any PLL; the FPGA simply should not provide the latch enable to the DAC. If you use the TDA1541A in simultaneous mode, pin 1 is the latch enable, so you need to provide this signal absolutely free of jitter.

To do this you have to feed the latch enable directly from a very low phase noise (in frequency domain) oscillator.
...
This way you provide a very clean and jitter-free latch enable to the DAC, without interference from other signals like the bit clock.
I am using a reclocker for the LE signal, composed of an SN74S74N dual D-type flip-flop. The two flip-flops are connected in series in order to prevent metastability (LE -> D1, Q1 -> D2, Q2 -> TDA1541A pin 1), clocked in parallel by the low-jitter master clock that the LE is derived from. The master clock is 11.2896 MHz in my case. This arrangement will clean up the LE pulse. Here is the datasheet of the dual D flip-flop:

http://www.ti.com/lit/ds/sdls119/sdls119.pdf
 
Still for sale, 40 euro apiece. I really need the money to cover some bills.

Hi,
Sorry, but maybe they are a little bit too expensive; they go for around 25 USD on this forum. For higher prices, try eBay.
And most people need one or two, not 12. On the other hand there may be many people who would take two, so finding 6 people shouldn't be too much of a problem.
Best Regards
 
Hi Xaled,

How is testing going with the I2S board?
Hi Sven,

Tests are not finished, as I have some family things to do in the next few weeks/months.

If anyone is interested I can send the current PCBs (I have 8 to give) for free to play with, as they are not fully tested. You would have to provide feedback ;)


Be aware that some parts are really small; I had to use a USB microscope to solder them and still had some shorts. I could not trace all the mistakes yet, which is why the tests are not finished.

Let me know if anyone is interested.
 
You've got PM.
 
As it is very difficult for me to read all the pages on this subject, I want to ask a question related to a newly acquired Sony CDP750, which I found to have no low pass output filter, just 4x oversampling and a simple I/V stage. But there's one aspect I don't really understand about it:


It's using 22 nF decoupling capacitors on the TDA1541, instead of 100 nF or 220 nF as in many other players based on the same DAC. Is this right or wrong?


What would be the ideal values for those capacitors? Is this why it needs no low pass filter? Distortion-wise, it seems a pretty low distortion toy... I don't know if there's a relationship between the capacitor values and the lack of a low pass filter, or whether it's the oversampling chip that solves the problem and the capacitors should simply be higher in value for better precision.


I also have the impression that I might mod it to look like a Nakamichi CDP-2, as it is using the same DAC, oversampling and memory chips.
Unfortunately I have no detailed schematic of the DAC/I-V/filter section in the service manual.
Could someone also help me with a detailed schematic of the Nakamichi CDP-2A(E)?
 