Use digital attenuation to improve 16-bit CD playback on a 24-bit DAC
Digital volume control has traditionally been regarded sceptically by purists because it involves a loss of resolution. This was certainly true in the days of 16-bit ladder DACs, and it is still true when heavy attenuation is used.
However, a slight, constant attenuation might even be beneficial with modern DACs. A well-dithered CD carries no more than 18 bits of information, while a modern 24-bit DAC fades into noise at about -125 dB, which is more than 20 bits. On the other hand, all DACs seem to show some increase in THD near full scale. This can be seen from the fact that THD+N at 0 dB input is always less than (THD+N at -20 dB input) + 20 dB. Also, THD+N vs. frequency plots are usually given at -3 dB. Furthermore, there are usually plots of THD+N (or dBr) vs. input amplitude, and the otherwise linear plot always levels off somewhere between -10 and -3 dB.
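The bit/dB bookkeeping above is easy to check with the usual 6.02 dB-per-bit rule (the -125 dB figure is the number used in this post, not from any specific datasheet):

```python
import math

db_per_bit = 20 * math.log10(2)      # ~6.02 dB per bit

print(round(125 / db_per_bit, 1))    # a -125 dB noise floor ~ 20.8 bits
print(round(18 * db_per_bit, 1))     # 18 bits of information ~ 108.4 dB
```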
I am not sure about the nature of this distortion mechanism. It cannot be exclusively due to compressive effects in the I/V amps, as even current-output DACs like the AD1853 and the PCM1738 exhibit this kind of behavior.
Attenuating the input signal digitally would move the MSB to the start of the linear region of the THD vs. input plot, while the noise floor would still be far enough below a 16-bit signal.
This can be done using the digital attenuator implemented in the digital filter of most DACs. The operation involves multiplying all filter coefficients by a factor < 1; provided that the internal word width is sufficient, e.g. 32 bits, this should be perfectly harmless.
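As a sketch of why the multiplication itself is harmless: a 16-bit sample times a 16-bit fixed-point coefficient fits exactly in a 32-bit product, so nothing is lost until the result is requantised later. The function and names below are mine, purely illustrative:

```python
def attenuate(sample, atten_db):
    """Scale a 16-bit sample by a fixed attenuation using a 16-bit
    fixed-point coefficient; the full product is kept in a wide
    (32-bit-sized) result, so nothing is lost before requantisation."""
    coeff = round(10 ** (-atten_db / 20) * (1 << 15))   # Q1.15 coefficient
    return sample * coeff        # 16x16-bit product fits easily in 32 bits

# A full-scale sample attenuated by 6.02 dB comes out at half amplitude:
ratio = attenuate(32767, 6.02) / (32767 * (1 << 15))
print(round(ratio, 4))  # ~0.5
```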
In practice it can be done by connecting a microcontroller to the DAC's serial control port. It can even be done with the remote control of many commercial players!
The other way to do it is to use a left-justified input mode such as I2S and delay SDATA by one bit clock with a flip-flop, resulting in an effective attenuation of 6 dB. If SDATA does not idle at 0, it will have to be forced to 0 for that one BITCLOCK period.
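The 6 dB figure can be sanity-checked: one extra bit clock of delay in a left-justified frame lands the MSB one position lower, which is an arithmetic right shift by one bit. A quick sketch of the arithmetic only (no serial-port timing):

```python
import math

def delay_one_bitclock(word):
    """One extra bit-clock of delay in a left-justified frame is
    equivalent to an arithmetic right shift: it halves the value."""
    return word >> 1

full_scale = (1 << 23) - 1   # largest positive 24-bit sample
atten_db = 20 * math.log10(full_scale / delay_one_bitclock(full_scale))
print(round(atten_db, 2))    # ~6.02 dB
```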
What do you think?
The only DAC I am aware of that delivers full 16-bit performance with a 16-bit input is the CS43122. For a 0 dB input, THD+N is specified at 95 dB, which is about 3 dB below the theoretical limit; this might be due to the full-scale distortion mentioned above. Strangely, THD+N at -20 dB is given as just 75 dB. Probably that was just a conservative guess.
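The "theoretical limit" referred to here is the ideal 16-bit SNR of roughly 98 dB, from the standard formula 6.02*N + 1.76 dB for a full-scale sine:

```python
import math

def ideal_snr_db(bits):
    """Theoretical SNR of an ideal N-bit quantiser with a full-scale
    sine input: 6.02*N + 1.76 dB."""
    return 20 * math.log10(2) * bits + 10 * math.log10(1.5)

print(round(ideal_snr_db(16), 1))  # 98.1 dB; 95 dB is about 3 dB short of this
```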
Digital attenuation isn't a bad idea, and not just to reduce full-scale distortion in the DAC. If the digital attenuation happens prior to any digital filtering stages, it can reduce the possibility of clipping when the filter's output exceeds full scale (e.g. on transient overshoots and so forth). I believe the HDCD filters like the PMD-100 perform a fixed 1 dB attenuation before anything else.
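The clipping-headroom point can be illustrated numerically: a signal whose every sample sits exactly at 0 dBFS can reconstruct, between the samples, to about +3 dBFS, which is what an oversampling filter will try to produce. A pure-Python sketch using truncated sinc interpolation (illustrative only, not any particular filter):

```python
import math

def sinc(x):
    """Normalised sinc, sin(pi*x)/(pi*x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# A full-scale +1,+1,-1,-1,... pattern: every sample sits exactly at 0 dBFS.
N = 2000
ns = range(-N, N)
samples = [1 if (n // 2) % 2 == 0 else -1 for n in ns]

# Bandlimited (truncated-sinc) interpolation halfway between two +1 samples:
t = 0.5
peak = sum(s * sinc(t - n) for n, s in zip(ns, samples))
print(round(peak, 2))  # ~1.41, i.e. about +3 dBFS between the samples
```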
I've previously thought about combining a little digital attenuation with a relay-controlled analog stepped attenuator, to reduce the stepped attenuator's cost and improve its precision and inter-channel matching.
Can you comment on the idea of performing some analog attenuation at the current reference of a DAC like the AD1853? I've started experimenting with a dual system (uC and analog) that attenuates the current reference in 6 dB steps and then uses digital attenuation to obtain a 1 dB step size. My logic was that 18-24 dB of attenuation at Iref would mean no resistor noise in the output, as induced noise at Iref would be common-mode at the output. (Ha!) Unfortunately my test equipment really isn't good enough to detect distortion figures on a DAC this good, and Analog Devices doesn't provide any information for non-standard Iref values.
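For what it's worth, the coarse/fine bookkeeping of such a hybrid scheme is simple. A sketch (the function name is mine, purely illustrative):

```python
def split_attenuation(total_db):
    """Split a requested attenuation (in whole dB) into coarse 6 dB
    analog (Iref) steps plus a 0-5 dB digital remainder, as in the
    hybrid scheme described above."""
    coarse_steps = total_db // 6           # number of 6 dB Iref/relay steps
    digital_db = total_db - 6 * coarse_steps
    return coarse_steps, digital_db

print(split_attenuation(20))  # (3, 2): 18 dB at Iref plus 2 dB digitally
```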
Stepping the Iref current is another very appealing attenuation method. I've heard of it being used on at least one of the high-end Sony DSP models (one which was highly reviewed by Stereophile, for whatever that's worth...). Unfortunately, I don't have any of the nitty-gritty technical details, or even the part number of the DAC they're using.
My only question at this point is what the effect of Iref noise will be... I'm not entirely certain it will produce common-mode noise at the output, although perhaps the sigma-delta operation of the DACs can counteract Iref noise to a certain extent anyway. Also, one can't be certain what will happen to the other performance parameters of the DAC. The datasheet specifically states that Iref can be used for attenuation "up to -50dB", so presumably it should work just fine. Despite a couple of unknowns, it's still a very intriguing possibility, and I bet it will work very well. I guess the only way to know for sure is to try it out and do a little comparison testing. If you get around to it, I'd love to hear about your results!
A couple of comments about changing gain digitally - sorry if I am covering things you already know.
Multiplying a digital signal by a constant can be 'perfect' (if you use enough decimal places - or should that be binary places?); it is the requantisation process that is non-linear and consequently introduces distortion. Whenever a requantisation is performed, dither must be added in an attempt to linearise the operation, at the expense of adding some noise. The traditional dither process 'converts' the distortion into white noise (mathematically, it decorrelates the quantisation noise from the input signal).

You mention 18 bits in regard to CD; however, as I'm sure you are aware, CD can only store 16 bits of data per sample, so I presume you are referring here to noise shaping, whereby the spectrum of the quantisation noise is un-whitened, so that the noise is reduced at some frequencies (and consequently increased at others - no free lunches in audio!). When re-dithering such a noise-shaped signal, unless also using a noise-shaping technique, the perceived noise floor will be raised. This excludes power-of-two gain, as no requantisation is required provided that no bits are lost (it could be argued that this is not a gain at all, but merely a change of perspective!??)
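The dither-then-requantise step described above can be sketched in a few lines, assuming plain TPDF dither of about +/-1 LSB at the 16-bit level (no noise shaping; the function and variable names are mine):

```python
import random

def requantise_tpdf(sample_24):
    """Requantise a 24-bit sample to 16-bit precision with TPDF dither:
    the sum of two uniform random variables gives triangular-PDF noise
    spanning about +/-1 (16-bit) LSB, added before truncation so the
    quantisation error is decorrelated from the signal."""
    lsb = 1 << 8                                    # one 16-bit LSB in 24-bit units
    dither = random.randint(0, lsb - 1) + random.randint(0, lsb - 1) - (lsb - 1)
    return ((sample_24 + dither) >> 8) << 8         # result still in 24-bit units
```

Each call gives a slightly different result for the same input, which is exactly the point: averaged over many samples, the error behaves like noise rather than signal-correlated distortion.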
I would expect the THD+N with a -20 dB input to be 20 dB lower than that at 0 dB, as the noise floor doesn't know how big your sample values are! Or did I misunderstand what you have said?
Looking at the data sheet for the PCM1738, for the 44.1 kHz case, THD+N seems to bottom out at about 0.0004% at -3 dB and remains flat to 0 dB. This would suggest that there is no benefit in reducing the gain below -3 dB (after which the THD+N ratio rises); however, it also suggests that there would be no penalty with this device in reducing the gain by up to about 3 dB, although no benefit either.
Interestingly, the 96k data shows a small increase above -6 dB - is this what you are talking about? If so, it is likely that a larger bandwidth has been used for the THD+N measurements in the 96k case. The actual increase at the end is probably due to the delta-sigma modulator in the converter (when driven harder, they tend to increase their out-of-audio-band noise, which will most likely be visible in a 40 kHz bandwidth, but much less so in a 20 kHz one - a clue on page 8, top right?)
As always, the data sheet is somewhat lacking. The spectral plot is only shown at -60 dB, where all the distortion is well hidden below the noise floor. We can only guess what frequency was used as input for the THD measurements, but I'll bet it was carefully chosen to show the device in the best light. I'll wager that cranking up the input for the spectral plot would reveal obvious harmonic lines fairly quickly.
Hope that wasn't too much of a rant - I've probably totally missed the point you were trying to make and blithered on uncontrollably - sorry! In short, looking at the PCM1738 data sheet, I can see no reason to attenuate the input in the name of distortion or SNR - the smaller the THD+N the better, and the smallest number corresponds with the biggest input.
The PCM1738 data sheet has a couple of obvious errors, e.g. 48 and 96 kHz reversed on some plots.
I think you grasped the point of how more than 16 bits of information can be stored on a CD at lower frequencies pretty comprehensively. Bob Katz has a couple of enlightening articles on this at www.digido.com, if I may suggest further reading.
I guess I didn't phrase my point clearly enough. A modern DAC will have a linear range in the plot of THD vs. output level that is significantly broader than 16 or even 18 bits. However, this linear range does not begin at 0 dB, but rather at -3 to -8 dB, depending on the DAC in question. Right-shifting the 16-bit data by 1 bit (i.e. 6 dB) will not result in the loss of any information, provided the DAC is sufficiently linear. However, it will move the 16-bit range away from full scale, where the nonlinearities occur.
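The "no information lost" claim is easy to verify: with 8 spare LSBs in a 24-bit word, a 1-bit right shift of 16-bit data is exactly reversible. A sketch (names are mine):

```python
def shift_into_24bit(sample_16, right_shift=1):
    """Place a 16-bit sample into a 24-bit word and attenuate it by one
    bit (-6 dB). With 8 spare LSBs below, every original bit survives."""
    word = sample_16 << 8            # left-justify 16 bits in a 24-bit word
    return word >> right_shift

x = 32767                            # full-scale positive 16-bit sample
assert (shift_into_24bit(x) >> 7) == x   # perfectly reversible: no data lost
```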
Look at any of these plots: the graph will be linear between about -3 to -8 dB and -110 dB or so. The 16-18 bits of information correspond to a certain interval on the x-axis. Moving this interval slightly to the right gives you a bigger interval in the y-direction.
If it still isn't clear enough, don't hesitate to ask.
Thanks for the reply. What I am having trouble understanding is why, in your opinion, there is any virtue in aligning oneself to a straight-line portion of the THD+N plot. THD+N is something we wish to have as little of as possible regardless of the signal level, and we would much rather it stayed at a uniformly low-as-possible level - these THD+N graphs do not represent the transfer function of the converter, merely the accumulated THD+N _of_ the transfer function - a THD of 0.001% would be completely undetectable by eye if graphed in this manner. When we see that the graph has flattened out between -8 to -3 dB and 0 dB, this means that there is no more THD+N (ratiometrically) at 0 dB than there is at -3 dB, and hence no reason to choose one point over the other if all we wanted was a full-scale output at that level - not very useful, admittedly. Let's assume we have a flat THD+N of 0.001% from 0 dB to -3 dB, which rises to 0.002% at -6 dB. If we align ourselves MSBit to MSBit, then with a full-scale sine wave we will have a THD+N of 0.001%. If we applied a 6 dB attenuation as you suggest, then with our full-scale sine wave we would now only be able to manage 0.002% THD+N, which is worse.
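In dB terms (an illustrative conversion, using the example figures above rather than any datasheet): 0.001% is -100 dB and 0.002% is about -94 dB, so the attenuated case ends up 6 dB worse:

```python
import math

def thd_percent_to_db(percent):
    """Convert a THD+N ratio given in percent to dB."""
    return 20 * math.log10(percent / 100)

print(thd_percent_to_db(0.001))           # -100.0 dB
print(round(thd_percent_to_db(0.002), 1)) # ~-94.0 dB
```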
You are right, I didn't think about it this way. But then, the graphs I was looking at at the time didn't just flatten out, they turned around. Given what you have alerted me to, I guess it doesn't really make sense to attenuate if your converter's THD+N plot just flattens out rather than turning around.
Copyright ©1999-2014 diyAudio