Post-DAC stages should approximate a cumulative THD+N of around 0.0003% at 0 dBFS (~-110 dB) across the audio pass-band to guarantee 16-bit resolution.
Although everyone needs and wants to follow their own targets and desires: if low HD from the entire chain is the target, by all means pursue it, and I support the individual or group decision. In essence, the beauty of this hobby lies in the individual targets. But I and several people here have had a different experience, so I feel compelled to share it...
This is because it is not quite about "resolution" per se; it is more complicated than that... Even with, say, a 0.1% THD (2nd-harmonic-only) stage, as long as it has low noise you can see all the non-monotonic behaviour of multibit DACs; it is not "hidden", even in the FFT. And you can see (and perhaps hear) beyond 16 bits thanks to the low noise. Good behaviour from a multibit DAC is a necessity because of its character. It is not wasted.
It is like comparing apples with oranges, because of the nature of the HD produced (DAC HD vs. simple-circuit HD).
But since this is a hobby and not dialysis, one can deviate from some "standard" defined targets, playing freely (for fun, amusement, etc.) with circuits that perhaps do not provide low low-order HD... In fact, a good simple circuit with non-zero low-order HD can have high-order HD as low as, or even lower than, those circuits with crushed low-order HD, and the other messes from the DAC chips remain clearly visible.
In fact, a long time ago I changed my system to all-tube, and guess what: the importance of a clean DAC remained (in the sense of jitter, non-monotonicity etc.). Good DACs like the Sabre parts, the PCM1704 and the TDA1541 remained the kings in my systems. Even with SE amplifiers and their own HD it is possible to discriminate between DACs thanks to their different harmonic spectra (in this case, to differentiate the mundane integrated DACs in complete products from a well-executed dedicated one).
For another example, it is impossible to tame a multibit DAC like the LC78820 by using low-order HD for masking; its behaviour is more complicated than that. It is an 18-bit DAC with a monotonicity of perhaps 14 bits guaranteed at best in practice (a heroic Sanyo attempt at a multibit DAC in a CMOS-only, single-supply IC). The HD in absolute numbers is quite close to some tube circuits, but the nature of the HD is completely different, and so is the sound (I listened to it some years ago).
So don't worry, all those TDA1541s are NOT being wasted in any sense, unless they are used with a bad layout and without any of the guidelines presented here (routing, stacked supplies, smoothed input signals etc.). And other people WILL pursue different end targets.
This is the old, well-known can of worms, I know, but... it is VERY difficult, elaborate and expensive to achieve this with transformers at the lower end of the spectrum.
Of course one has the choice of neglecting this... Many designs do. ☹
TDA1541A is such an incredible sounding DAC that it would be a shame to throw away resolution.
Cheers
Great post.
Attached below is the full article that I mentioned a few pages back.
Fig. 9 shows a photograph of the most difficult bit transition (from 0111.1 to 1000.0), where the largest glitch occurs. The output current of the converter is fed directly into the 50 Ohm, 1 GHz CRT input. The total glitch charge is within 0.4 picocoulombs. The contribution of the glitch current to the output current of the D/A converter is 0.25 I(LSB).
A 384 kHz DEM clock divided from BCK by ÷4 works fine; for me this is the optimal synchronous DEM clock frequency. 768 kHz increased the low-level distortion, it is unusable... Do you want me to measure the THD with a synchronous DEM clock above 200 kHz? If it is not free-running the THD won't increase, and we could use smaller filter capacitors (or get better filtering with the standard 100 nF), is that the idea? I can try 352.8 kHz, one divide-by-2 flip-flop less.
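The flip-flop divider arithmetic above can be sketched numerically. A minimal sketch, assuming BCK = 64 × fs (some interfaces run BCK at 32 × fs, which shifts every candidate one octave down):

```python
# Candidate synchronous DEM clocks reachable by dividing BCK with
# divide-by-2 flip-flops. Assumes BCK = 64 * fs; some interfaces run
# BCK at 32 * fs, which shifts every candidate one octave down.

def dem_candidates(fs_hz, bck_multiple=64, max_stages=4):
    """Map number of flip-flop stages -> DEM clock in Hz."""
    bck = fs_hz * bck_multiple
    return {n: bck / 2 ** n for n in range(1, max_stages + 1)}

for fs in (44_100, 48_000):
    print(fs, dem_candidates(fs))
# Three stages give 352.8 kHz at fs = 44.1 kHz and 384 kHz at 48 kHz.
```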
Interesting, given that I have seen a lot of uses with higher FDEM.
You find 384k better than 192k?
So this suggests dividing MCK down to 352.8/384 kHz for DEM for best performance.
And, based on the results in the other thread, add a DC offset of ~+45 mV on the outputs and a 3.3...10 nF capacitor (IMHO to DGND [dirty ground], not AGND) for an essentially free reduction of signal- and/or sample-rate-correlated noise.
Thor
A 768 kHz DEM clock also worked, and there was no difference in THD compared to 384 kHz. However, the distortion at -60 dB increased from ~0.25% to 1% or more.
Most tests I have seen are done at full scale, which is not very revealing. I prefer testing at low level with an FFT spectrum (it must be a dithered sine), and with a 100x-amplified ±1 LSB step signal visualized on the oscilloscope. This way we can magnify the low-level behavior.
Yes the above resulted in best performance in my setup.
Ok, excellent work.
Thanks for bringing up the Nakamichi patent; it also gives some improvement almost for free (two resistors dividing down from +5 V, maybe add a capacitor too).
If we have 15 V: a divider down to 47 mV, with 1k/100 uF for noise filtering, then 47R + 14k will give very low noise.
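As a sanity check on those values (my arithmetic, not from the post, and assuming the 1k filter resistor ends up in series with the 14k as seen by the divider):

```python
import math

# Resistive divider from the 15 V rail down to the ~47 mV offset.
# Assumption (mine): the 1k noise-filter resistor sits in series with
# the 14k, so it takes part in the division.

def divider_out(v_in, r_top, r_bottom):
    return v_in * r_bottom / (r_top + r_bottom)

v = divider_out(15.0, 1_000 + 14_000, 47)
print(f"offset = {v * 1000:.1f} mV")   # ~46.9 mV

# Corner of the 1k/100uF noise filter (first-order RC):
fc = 1 / (2 * math.pi * 1_000 * 100e-6)
print(f"filter corner = {fc:.2f} Hz")  # ~1.59 Hz
```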
As said, the cap is there mainly to sink HF/VHF noise before it reaches the I/U conversion op-amp. So it belongs at (under) the TDA1541.
We might add a 10...33R resistor in series with the input. This will not cause distortion to rise materially, but will isolate the op-amp from high-frequency noise that might upset its feedback loop. With 10 nF/10R we create a 1.6 MHz lowpass; with 33R, a 482 kHz lowpass.
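Those corner frequencies follow directly from the first-order RC formula; a quick check with the values from the post:

```python
import math

# First-order RC lowpass corner: fc = 1 / (2 * pi * R * C)
def rc_corner(r_ohm, c_farad):
    return 1 / (2 * math.pi * r_ohm * c_farad)

print(f"10R/10nF : {rc_corner(10, 10e-9) / 1e6:.2f} MHz")  # ~1.59 MHz
print(f"33R/10nF : {rc_corner(33, 10e-9) / 1e3:.0f} kHz")  # ~482 kHz
```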
This capacitor also allows the compensation cap on the NE5534 to be removed, or a decompensated op-amp to be used, as it sets a noise gain of ~5.5 (assuming 10 nF Chf, 1.5...18k RFB I/U and 2.2 nF CFB I/U).
At these frequencies the op-amp is a pure integrator (the feedback capacitor is dominant), so the open-loop output impedance, open-loop gain and respective closed-loop gain dominate.
For the NE5534 it looks like 45 Ohm, which is consistent with ~1 mA output-stage bias, and it may be lowered by applying extra bias to the emitter follower. If (say) we draw an extra ~5 mA (3k from output to -15 V), the open-loop output impedance will drop to 20 Ohm. The NPN output of the 5534 should handle many MHz as a pure open-loop emitter follower.
At (say) 10 MHz the open-loop gain is unity (with compensation), the closed-loop gain is ~-30 dB and the noise gain ~5.5 (assuming 10n/2.2nF), all of which allows pretty effective operation of the lowpass (as the 2.2 nF is "virtually" increased to ~75 nF, terminating into ~1.5 Ohm).
An LT1028 may be a better choice for I/U conversion. The LM318 is a bit on the noisy side, but with 2.3 uV noise on a 2 V+ output we still end up not too far outside 120 dB SND for the analogue stage. FWIW, the LM118 remains available in a metal can. The OPA637 is another excellent choice, also still available in a metal can at Mouser.
I would stay away from most modern op-amps with rail-to-rail outputs (e.g. OPA165x and more recent TI audio op-amps).
Thor
From the data sheet: "The OPA627 is unity-gain stable. The OPA637 is stable in gains ≥ 5"
What gain can we calculate with in an inverting I/V converter application? Sorry for my ignorance.
Excellent. I'd be curious to see the FFT, but don't go out of your way.
Thor
Stability is determined by noise gain.
The output impedance of the TDA1541 I estimate as 160pF//some MOhm.
So closed loop gain with 1.5k//2.2nF will be a very large negative gain.
Without the 10 nF capacitor from output to ground, the OPA637 is not stable. The 10 nF capacitor together with the 2.2 nF feedback capacitor sets the noise gain to ~5.5, so it will be stable. This was originally shown here; though the stability issue is not mentioned there, it applies:
In this case the noise gain is actually very high at almost 30 dB, but the turnover frequency is also very high, at 636 kHz.
Anyway, if you have a compensation capacitor at the NE5534, with 10 nF at the TDA1541 you can remove it.
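The noise-gain figure can be checked with the high-frequency capacitive approximation (my sketch: where both caps dominate, the noise gain approaches 1 + Cin/Cf):

```python
# High-frequency noise gain of the inverting I/U stage.
# At frequencies where the input capacitance (10 nF at the DAC output)
# and the feedback capacitor dominate, the noise gain approaches
# 1 + Cin/Cf, a purely capacitive divider in the feedback path.

def hf_noise_gain(c_in, c_fb):
    return 1 + c_in / c_fb

ng = hf_noise_gain(10e-9, 2.2e-9)
print(f"noise gain ~ {ng:.1f}")  # ~5.5

# OPA637 is specified stable for gains >= 5, so this just clears it.
assert ng >= 5
```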
Thor
I found on my computer a deglitch circuit from Nakamichi, 1989.
From the description it seems to concern the TDA1541A DAC.
TDA1541 doesn't need a deglitcher.
The "glitch" addressed in the patent is a consequence of the bit-switches that have a small amount of charge injection.
It's shown here: the MSBs use these "uncompensated" switches, which cause the "glitches".
Here is a "compensated" switch as used on the LSBs:
The idea is to cancel this using an additional externally generated "counter glitch".
As this "glitch" is an extremely narrow needle pulse, I'm dubious about its practical success.
Thor
Last edited:
It is a partial solution, because it is about the code itself and the transition within the MSB.
Another coding method should be used to solve this, but that is another topic.
With this Nakamichi solution the problem is implementation, because it requires a parallel bit data stream, and we have serial data.
It could be done with serial-to-parallel-to-serial glue logic or a CPLD/MCU, but that complicates the circuit, with questionable results...
It could be interesting where parallel bit data is already present, e.g. for some discrete DAC or an IC DAC with a parallel data bus.
Hi, did you maybe measure the THD with 1/2 of 384 kHz = 192 kHz as the DEM frequency?
The "glitch" addressed in the patent is a consequence of the bit-switches that have a small amount of charge injection.
Just to be clear what happens there.
The Bit Switches work by turning on/off one transistor (outlined in Orange):
Vref must be more positive than AGND.
Iout is stated to be +/-25mV of AGND (in practice more is possible).
The node outlined in purple is where the switch forks.
Below that switch are two sets of cascodes, the DEM Filter and the DEM current SINK (not source).
Ibit sinks whatever is the bit current.
When the switch transistor is on, the current into the sink flows from +5V via Vref (internal rail) into the current sink and then into the -15V Rail.
The purple node is biased at a voltage that reliably turns off the diode-connected transistor connected to Iout. Iout is close to zero (a few nanoamperes). This voltage must be slightly more positive than the voltage at which the transistor would conduct.
Now we turn off the Transistor to turn on the bit current. This is illustrated in the diagram, incidentally.
The stored charge in the transistor is rapidly drained by the driver circuit; the base, and thus the emitter (which follows it), drop in voltage until the current in the circuit no longer flows through the transistor and instead flows from Iout through the diode-connected transistor and into the node outlined in purple.
In order to turn on this transistor, the voltage on its emitter must drop a little; this is actually not the real problem, as it happens at the leading edge of the current step-up. All that happens is that the edge is slowed and the DAC settles a little more slowly:
This photo is from @lcsaszar's post in the other thread. Remember this is after the inverting I/U conversion, so a negative current step (turning on a bit current) will show as positive and a positive step (turning off a bit current) as negative; a positive glitch will actually show up negative. We see here a ±1 LSB step, around digital zero I presume.
Now we see a narrow Dirac or needle pulse when we turn off the MSB and return to digital silence. The pulse is approx. 2 LSB tall but of extremely short duration.
It happens when the Diode switch turns off. The step function at the emitter of the diode connected transistor is conducted to the output, while the capacitance in the transistor discharges. Then everything is "Off".
A DC offset can shift the glitch around its "null" point: apply a DC offset of +45...55 mV to the TDA1541 output. Here again a 'scope shot from @lcsaszar:
What we have done here is to raise the Iout node by 45mV and now we get positive and negative going spikes that are the opposite of the direction of the switching. Here is how Nakamichi (where this originates) explains it:
Without the DC offset to compensate the glitches, the glitches create both even- and odd-order harmonics; compensating them removes the even-order components but not the odd-order ones. Still, it seems a simple and worthwhile measure.
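That even/odd behaviour can be illustrated with a toy DFT (my illustration, not from the Nakamichi patent): a glitch train with spikes of only one polarity contains all harmonics, while a train satisfying x(t + T/2) = -x(t), which is what the offset-symmetrized glitches approximate, contains only odd harmonics.

```python
import cmath

# Toy model, one period of a glitch train sampled at 8 points.
# asym: a single positive spike per period (uncompensated glitch).
# sym:  equal positive and negative spikes half a period apart,
#       i.e. x(t + T/2) = -x(t); such a waveform has no even harmonics.

def dft_mag(x, k):
    n = len(x)
    return abs(sum(x[i] * cmath.exp(-2j * cmath.pi * k * i / n)
                   for i in range(n)))

asym = [1, 0, 0, 0, 0, 0, 0, 0]
sym = [1, 0, 0, 0, -1, 0, 0, 0]

print(dft_mag(asym, 2))  # 2nd harmonic present in the asymmetric case
print(dft_mag(sym, 2))   # ~0: even harmonics cancelled
print(dft_mag(sym, 3))   # odd harmonics remain
```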
The rather complex added pulse D/A that Nakamichi proposes to create a "counter glitch" is questionable IMNSHO. Given that the pulse is rather narrow, in the region of a 48 kHz FS I2S BCK pulse width (~160 ns), simple RC filtering should do.
I would propose something like this as I/U conversion and analogue stage (I found some gold-plated-lid ceramic-case OPA620SG [does SG stand for Solid Gold?] and decided I like them):
The 10R/10n combo keeps the AC component at the TDA1541 to less than +/-25mV PP with 48mV DC offset and forms a 1.6MHz lowpass.
So the now-symmetrical 1/2 LSB pulses, equal to around one cycle of 3 MHz, are attenuated around 8 dB, or 2.5 times, passively before reaching the I/U conversion. That should clean up everything significantly.
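As a cross-check on that figure (my arithmetic: a bare first-order 10R/10nF comes out a little below 8 dB at 3 MHz, so the quoted value presumably includes the DAC source impedance or other series elements):

```python
import math

# Magnitude of the 10R/10nF first-order lowpass at 3 MHz.
R, C, f = 10, 10e-9, 3e6
fc = 1 / (2 * math.pi * R * C)        # ~1.59 MHz corner
ratio = math.sqrt(1 + (f / fc) ** 2)  # attenuation factor at f
db = 20 * math.log10(ratio)
print(f"fc = {fc / 1e6:.2f} MHz, ~{ratio:.1f}x ({db:.1f} dB) at 3 MHz")
```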
Even an NE5534 should improve considerably. If using NE5534 or LT1028/1115 of course +/-15V can be used. I'd be curious if @lcsaszar wants to give this a try on his test board.
The rest is a simple I/U circuit with unequal -2.6 V/+7.8 V supplies (equivalent to ±5.2 V). That makes sure all inputs remain within their common-mode range and allows a ~0...6 V pp output swing, while not exceeding the ±6 V PSU limit of the OPA620SG. The OPA620SG rails decouple to the +5 V DAC supply, so the analogue current returns to the DAC rail.
The output stage runs in SE class A and the output always remains at a positive voltage, never crossing "zero" (mA). The OPA620SG is a chunky op-amp, meant to drive 2 V RMS into 50 Ohm at 10 MHz or so, so there is a lot of quiescent current in the output stage anyway.
The RLC filter and DC-blocking cap (Nichicon Muse bipolar) give a build-out element, a 2nd-order lowpass and DC removal. If not using an op-amp that cruises while driving ~300 Ohm loads, better scale the filter to something like 10 mH/1 nF/47 uF/2.4k.
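For reference, the corner of that scaled LC section works out as follows (my arithmetic, treating it as a plain series-L/shunt-C 2nd-order lowpass):

```python
import math

# Corner of the suggested 10 mH / 1 nF LC section, treated as a plain
# 2nd-order series-L / shunt-C lowpass: f0 = 1 / (2 * pi * sqrt(L * C)).
L, C = 10e-3, 1e-9
f0 = 1 / (2 * math.pi * math.sqrt(L * C))
z0 = math.sqrt(L / C)  # characteristic impedance
print(f"f0 ~ {f0 / 1e3:.1f} kHz, Z0 ~ {z0:.0f} Ohm")
# ~50 kHz corner; Z0 ~ 3.2k wants a load in the few-kOhm region
# (hence the 2.4k) for reasonable damping.
```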
Thor
It is a partial solution, because it is about the code itself and the transition within the MSB.
Maybe, maybe not. It looks to me like something not directly related to MSB switching, but rather to the nature of the bit-switch circuit.
It seems to me more a capacitive phenomenon that is best left to capacitive filtering to attenuate.
Thor
Interesting explanation. Would a differential DAC setup neutralize the glitch, assuming 'matched' TDA's? Or is the likelihood of catching the spike timing just right near impossible, which I understand is one of the problems with the complex Nakamichi patent solution?
Interesting explanation, would a differential DAC setup neutralize the glitch
I already addressed this before.
The two DACs will switch in opposite directions, so the glitches will add, not subtract.
In my view, the solution is the DC Offset to make the glitch symmetrical and lowpass filtering to reduce the amplitude to the point where it disappears.
This is basically another version of the capacitive feed-through which gives us (mainly) BCK feedthrough on the output.
Thor
Hi Thorsten,
If I may: I recall reading years back about using 100 Ohm resistors on the I2S lines to slow or damp the signals. Do they have any positive effect on SQ?
Apologies and many thanks again
This was already mentioned here: some kOhm resistor alone, or a little less (e.g. 680R as in the MVLabs website example) with some pF to ground (I used 680R and 39 pF to ground). It reduces the coupling of pulses from the bit input pins to the output pin via the substrate. Definitely recommended.
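For the 680R/39pF values mentioned, the corner lands comfortably above the bit clock but well below the fast edge harmonics (my arithmetic, assuming a typical 64 × fs BCK at 44.1 kHz):

```python
import math

# Corner frequency of the 680R / 39pF damping network on a bit input.
R, C = 680, 39e-12
fc = 1 / (2 * math.pi * R * C)
print(f"fc ~ {fc / 1e6:.1f} MHz")  # ~6.0 MHz
# Well above a 2.8224 MHz (64 * fs at 44.1 kHz) BCK, so the data edges
# survive, while the fast edge harmonics that couple through the
# substrate are rounded off.
```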