DAC/digital filter question

Hi

I am planning to make a CD-player and currently I'm wondering how to construct the DAC and the digital filter.

I'm wondering if an extremely simple circuit would be better than a state-of-the-art one with oversampling and all the rest.

The simple version would be just a plain 16-bit DAC, no oversampling and no filter.

The way I see it, oversampling adds something to the signal (the extra samples themselves) that wasn't there to start with.

But on the other hand, a simple circuit will have some of the same "features" when converting the digital signal to an analog one. The big question is: which sounds best?

What are your thoughts on this?

/Flemming J P
 
Actually, properly implemented oversampling by sinx/x convolution (i.e. FIR filtering) adds absolutely nothing to the signal... one of the great beauties of working in the digital domain. Oversampling doesn't work so well when the digital filter is compromised by economic decisions, which tend to decrease the number of "taps" used in the filter, and hence the ultimate quality of the interpolation. Anyway, a well-implemented interpolation filter will spit out the original samples with nothing more than some extra samples added in between... samples which follow precisely the curve of the signal (i.e. the technique is not an approximation method) - no phase shift, no frequency response changes, no added signals, nothing subtracted. I guess unless you have a really intuitive understanding of the algorithm and how it works, it's perhaps difficult to visualize this, so you'll have to take my word for it...
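
Or, rather than take my word for it, here's a minimal sketch in Python of the zero-stuff-and-filter method (the parameters are assumptions for illustration: an 8x ratio and a 129-tap windowed-sinc filter, nothing to do with any particular chip), showing the original samples coming out untouched:

```python
# Minimal sketch of zero-stuff-and-filter interpolation (assumed: 8x ratio,
# 129-tap windowed-sinc filter) showing the original samples come out untouched.
import numpy as np

L = 8                                   # oversampling ratio
N = 16 * L + 1                          # filter length (odd)
n = np.arange(N) - (N - 1) // 2
h = np.sinc(n / L) * np.hamming(N)      # windowed sinc, cutoff at the original Nyquist
# h equals 1 at its centre tap and ~0 at every other multiple of L, so the
# input samples reappear exactly at their original positions in the output.

x = np.random.randn(256)                # stand-ins for the 44.1 kHz samples
up = np.zeros(len(x) * L)
up[::L] = x                             # insert L-1 zeros between the samples
y = np.convolve(up, h)                  # the FIR interpolation filter

delay = (N - 1) // 2                    # constant group delay of a linear-phase FIR
back = y[delay::L][:len(x)]             # the output values at the original instants
print(np.max(np.abs(back - x)))         # ~1e-15: nothing added, nothing taken away
```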

If you have any doubts about what an oversampling interpolation filter does to the signal, you'd be absolutely flabbergasted by what happens inside a sigma-delta DAC. These little buggers "throw away" the vast bulk of the original signal's resolution (ever hear the term "1-bit"?), and add huge amounts of frequency-shaped pseudo-random noise... yes, *noise* (it's all ultrasonic, and gets filtered out in the analog domain), while oversampling and "interpolating" up to 128 times!!! What's more, most bitstream-type DACs use various feedback techniques to correct and linearize the output. Anyway, what comes out the other side of a sigma-delta DAC isn't anything even remotely resembling the original input samples, and yet the best sigma-delta DACs are audiophile-approved (e.g. the CS43122)... in fact, 99% of the DACs you hear on a daily basis are probably sigma-delta types, because they're much cheaper to manufacture.
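
If you're curious, the basic trick is only a few lines. Here's a toy first-order, 1-bit modulator in Python - purely illustrative, with an assumed 64x rate and a crude moving average standing in for the analog filter; real chips use far more sophisticated multi-order loops:

```python
# Toy first-order 1-bit sigma-delta modulator - an illustration of the
# principle only, with an assumed 64x rate and a half-scale 1 kHz test tone.
import numpy as np

osr = 64
fs = 44100 * osr
t = np.arange(20000) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)

integ, prev = 0.0, 0.0
bits = np.empty_like(x)
for i, s in enumerate(x):
    integ += s - prev                   # integrate (input minus fed-back output)
    prev = 1.0 if integ >= 0 else -1.0  # 1-bit quantiser
    bits[i] = prev

# 'bits' looks nothing like x sample-for-sample, yet a crude moving average
# (standing in for the analog low-pass filter) recovers the tone; the huge
# quantisation noise has been pushed up to ultrasonic frequencies.
rec = np.convolve(bits, np.ones(256) / 256, mode="same")
print(np.corrcoef(rec[1000:-1000], x[1000:-1000])[0, 1])   # ~0.999...
```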

Anyway, my point is, the minimal and very benign processing done to accomplish simple oversampling interpolation is absolutely nothing to worry about, and reports of "audible" effects are quite exaggerated (though I won't say false - since practical FIR filters aren't *quite* perfect, and there are bad filters). All that oversampling does is deliver a much smoother output signal, so that a simpler analog filter can be used... thus relieving the potential for much greater sonic damage caused by a high-order analog filter. To be honest, I am amazed and dismayed at the fascination with non-oversampling, unfiltered DACs, and the piles of baseless "evidence" supporting the belief that they are sonically superior. So far, I have never seen a valid argument why interpolation should produce any negative sonic effects.

My advice: go for a good 8x oversampling chip (NPC, DF1704/1706, or PMD100 if you can get one), and use a low-order analog ultrasonic filter centered at say 60kHz. I wouldn't risk going filterless (oversampling or not), as you risk sending high-level ultrasonic trash to your subsequent amps / speakers and causing unnecessary distortion or damage.
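
As a rough guide for the analog side, here's a back-of-envelope in Python for a single RC pole, as one example of a low-order filter (the 1 kohm value is just an assumption - pick R to suit your output stage, then solve for C):

```python
# Back-of-envelope for a single RC pole near 60 kHz (R is an assumed value;
# choose it to suit your output stage, then solve for C).
import math

f_c = 60e3                              # corner frequency, Hz
R = 1000.0                              # assumed series resistance, ohms
C = 1 / (2 * math.pi * f_c * R)
print(f"C = {C * 1e9:.2f} nF")          # ~2.65 nF

# attenuation of one pole up at the first 8x image band (around 352.8 kHz),
# ignoring the extra rolloff from the DAC's own sin(x)/x hold response:
f_img = 8 * 44100.0
att = 1 / math.sqrt(1 + (f_img / f_c) ** 2)
print(f"{20 * math.log10(att):.1f} dB at {f_img / 1e3:.0f} kHz")   # ~ -15.5 dB
```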
 
I would say that eight times oversampling is probably enough, a single SM5847 should be great - I am about to build a DAC with it.

Here is something else to consider - the higher the sampling rate the DAC chips run at, the lower the jitter needs to be to maintain a given level of resolution in the time domain. Thus the more oversampling you use, the lower the jitter you must achieve. This, I think, is the only reason a zero-oversampling DAC may sound 'better' than a similar DAC with a good oversampling filter.
 
Yes, you're right.

Yes, you're right. But I just ordered an ultra-high-performance TCXO; its worst-case jitter spec is 10 ps p-p = 1.3 ps RMS, and I will use the AD1896 as an ASRC. So I think jitter is a non-issue - it will not affect my system.
 
hill,

That sounds good, where did you get your TCXO?

To get the lowest jitter at the DAC chips, do not run the master clock through the oversampling chip. Instead, connect it directly to the BCLK of the PCM1704s and then loop it back to the input of your oversampling filter (you may need to invert it). If you're using the SM5847 you could also investigate the jitter-free mode that the NPC chips have.
 
Dave,

I got the TCXO from the Raltron Co., TX-125 series. I ordered the sine-wave output type TCXO, and I am using an ECL IC from Mitrel to convert it to a PECL signal for long-distance transmission, with very careful layout. That is how I reach this performance.

Thanks for your advice.
 
Dave,
I'm also considering building a DAC with the SM5847 (w/CS8420 & PCM1704's). Have you had a chance to experiment with the jitter-free mode, or have any other feedback for us who are considering this chip?
Also, I had not considered a TCXO... any opinions on how the TI or National parts stack up against the Valpey Fisher XOs that seem to have a cult-like following among some (like Mr. Erland Unruh, whose articles on DAC design I much admire)?
 
hill,
try this link:
http://www.galstar.com/~ntracy/ACG/
Mr. Unruh has a few articles posted on the Audio Crafters Guild site under Articles & Essays. He also contributes occasionally to Audio Electronics magazine (now called AudioXpress) - issue 2/99 has a short description of a DAC with a FIFO buffer and a 4-layer PCB (very impressive!). Photos are available at the ACG site under Guild Craftsmen Projects.
 
Dave:

I think you're spot-on with the jitter requirement when using oversampling. This is probably the first decent argument I've heard in favour of non-oversampling. Hmm... I might have to sit down with a pad and pencil sometime to try and quantify the effect of jitter on oversampled vs. non-oversampled outputs, since I'm not quite sure the relationship will be linear. It seems to me that as we increase the sample rate, we may have some dither effects working in our favour, subject to the following caveat: the nature of the jitter should become more important as we increase the oversampling rate, such that truly random jitter up to a certain amount may have a net positive effect, via the aforementioned dither-linearizing properties of genuine randomness. OTOH, data-correlated jitter could be proportionately more damaging at higher data conversion rates. So, one might draw the conclusion that increasing the oversampling rate makes the DAC more sensitive to poor clocks (particularly clocks which are susceptible to data-modulated jitter).

While pondering the problem further, I have come up with another possible argument for lower conversion rates: I-V converter settling time. An I-V conversion stage will not settle instantly on the new voltage when the DAC current output changes. The step size plays an important role in how long it takes an I-V stage to slew to the new voltage and settle on the proper value. For small steps (e.g. an LSB change), almost any reasonable opamp could slew its output to the new value very quickly, and arrive at the new voltage with minimal overshoot. However, a large step will cause a longer transition time for the I-V stage to slew and settle on the new value. This is then a phenomenon which is data-correlated - the errors produced will depend on the slope of the analog signal being reproduced. Higher frequency notes and larger amplitude signals will lead to greater I-V conversion errors during the portions of the waveform with the steepest slope. Clearly, a faster I-V stage can go a long way towards minimizing this problem. So, here again we can see that higher oversampling rates could potentially degrade the sound if the I-V stage isn't up to the challenge. Perhaps this is one reason the resistor I-V stage is subjectively quite popular (even though it introduces another type of non-monotonicity error) - a resistor has no slew rate or overshoot to worry about, if it's sufficiently free from parasitics. Anyway, a good I-V design is paramount to achieving the best performance from R2R DACs like the PCM1704, especially at the higher sample rates.
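
To get a feel for the numbers, here's a rough slew-rate budget in Python. All the figures are assumptions for illustration (a 2.4 mA full-scale current swing, a 1.5 kohm feedback resistor, and a quarter of the sample period allowed just to traverse the step):

```python
# Rough slew-rate budget for an opamp I-V stage. All figures are assumptions
# for illustration: 2.4 mA full-scale current swing, 1.5 kohm feedback R,
# and a quarter of the sample period allowed just to traverse the step.
i_pp = 2.4e-3                           # full-scale current swing, A
r_fb = 1.5e3                            # I-V feedback resistor, ohms
v_step = i_pp * r_fb                    # worst-case output step: 3.6 V

for fs in (44100.0, 8 * 44100.0):
    t_budget = 0.25 / fs                # quarter of a sample period, s
    print(f"{fs / 1e3:6.1f} kHz: >= {v_step / t_budget / 1e6:4.1f} V/us just to slew")
# ~0.6 V/us at 44.1 kHz vs ~5.1 V/us at 352.8 kHz - before any settling time,
# which is why a slow I-V stage hurts more at higher conversion rates.
```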

Anyway, that's the best I could come up with. I'm not convinced that these two effects warrant a reduction in the oversampling, so long as one is careful to design the clock and I-V stages well. Nonetheless, a valuable insight. Thank you Dave. :)
 
Perhaps viewing the subject from a completely different perspective might shed a bit more light on the o/s vs. no-o/s debate.
Putting aside the type of digital filter (which is a whole new topic for debate, given that at least two manufacturers have strayed from the sinx/x path), unless you have 16-bit samples and 8-bit coeff's you are going to have to lose bits, be it by rounding off or truncating, and more often than not you also need to dither.
But what type of dither, and at what level? It is clear that dithering is not an open-and-shut case, otherwise why would Sony bother with Super Bit Mapping, Apogee with UV22, Prism Sound with D-Ream, JVC with its XRCD processing, and a host of other proprietary dithering schemes out there? Closer to home, the PMD-100 has 8 dither settings, and the CS5396 ADC has a user-adjustable 9-bit psychoacoustic filter if I remember correctly. Once you accept there is no one-size-fits-all approach to dither, and that all these schemes ultimately have a subjective effect, it is no great leap to suppose there might be some who prefer their DACs without oversampling, and thus dither-free.
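
For anyone who hasn't seen the effect, here's a minimal sketch in Python of word-length reduction with and without plain TPDF dither (flat dither only - none of the shaped or proprietary schemes above; the ~1.5 LSB test tone is just an assumption for illustration):

```python
# Word-length reduction with and without plain TPDF dither (flat dither only;
# none of the shaped or proprietary schemes above). Signal is an assumed
# ~1.5 LSB test tone, expressed in units of one 16-bit LSB.
import numpy as np

rng = np.random.default_rng(0)
n = np.arange(1 << 16)
x = 1.5 * np.sin(2 * np.pi * 1000 * n / 44100)

plain = np.round(x)                                # straight requantisation
tpdf = rng.random(len(x)) - rng.random(len(x))     # triangular dither, 2 LSB p-p
dith = np.round(x + tpdf)                          # same requantisation, dithered

win = np.hanning(len(x))
for name, q in (("undithered", plain), ("dithered  ", dith)):
    spec = np.abs(np.fft.rfft((q - x) * win))
    print(name, "largest error component:",
          round(20 * np.log10(spec.max()), 1), "dB (relative)")
# the undithered error piles up in discrete tones correlated with the signal;
# the dithered error is a flat, signal-independent noise floor instead.
```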
ray.
 
hifiZen,

I think the calculation goes something like this:

Take the time interval of one sample period and divide it by the number of different output levels the DAC can have, i.e. 2^16.

e.g. no oversampling: 1 second / 44100 = 22.676 us. Now divide that by 2^16 and we get 346 ps. So at 44100, 346 ps p-p jitter will preserve 16 bits of resolution in the time domain.

So basically max p-p jitter equals 1 / (Fs * 2^(bits))

Where Fs is the sampling rate of the DACs.

So, out of interest, 44100 with 8 times oversampling works out to 43.25 ps of jitter.
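
In Python, for anyone who wants to plug in other rates (same rule of thumb as above):

```python
# the rule of thumb above, max p-p jitter = 1 / (Fs * 2^bits), at a few rates:
for os in (1, 2, 4, 8, 16):
    fs = 44100 * os
    print(f"{os:2d}x ({fs / 1e3:6.1f} kHz): {1 / (fs * 2 ** 16) * 1e12:6.1f} ps p-p")
# 1x gives ~346 ps and 8x gives ~43.25 ps, matching the figures above.
```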

I don't know a great deal about DSP etc. (is it hard to learn?) but I agree that jitter up to a certain level may provide a degree of dither, resulting in a positive effect. Also, won't the noise floor provide a form of dither? Especially in 24-bit electronics.

As for I/V conversion, you provide an interesting argument. Have you tried a common-gate MOSFET design? I think it is the best I/V converter around. It is high speed, provides the DAC chip with a low-impedance load (lower than a resistor-based I/V converter), uses no negative feedback and should be very linear.
I believe it is used in the Passlabs D1 DAC, and Wadia use something similar I think (they have a custom-made I/V chip) along with 16 times oversampling...
 
oversampling and jitter requirements

The argument that a highly oversampled signal is more sensitive to jitter is as old as it is misguided. It was first brought up by the Japanese guy who re-invented the non-oversampling DAC.

It is true that a high-bit, high-oversampling rate signal can tolerate less jitter. This is because it contains more information that one would like to conserve (e.g. the LSB is pretty small).

On the other hand, a 16/44 signal contains comparatively little information. The oversampling will simply interpolate more samples, but each sample remains there for a shorter time. If you get its length wrong, it only contributes, say, 1/8th of what a single sample in the original signal would have contributed.

The same argument goes for the resolution. In interpolation, you gain apparent resolution. You have to properly dither it, or output it at 20 bits, in order not to introduce rounding artefacts. But it will not contain more than 16 bits (or 18 bits of information, if proper dithering was used in mastering from the original tapes), i.e. it makes no sense whatsoever to divide the sample length by 2^20 to determine the theoretical jitter requirement.

Also, the argument about the slewing of the I/V converters might be wrong. Oversampling will replace one big step every 22.6 us with 8 smaller steps every 2.8 us. So the frequency of the transitions goes up, but the slewing itself is diminished.
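
To put numbers on that (assuming an ideal interpolator, so the oversampled points sit on the same waveform as the original samples):

```python
# Worst-case step between consecutive samples of a full-scale sine, as a
# fraction of the full-scale peak-to-peak range (assumes an ideal interpolator,
# so the oversampled points lie on the same waveform).
import math

for f in (1000.0, 20000.0):
    for fs in (44100.0, 8 * 44100.0):
        step = math.sin(math.pi * f / fs)
        print(f"{f / 1e3:4.0f} kHz tone at {fs / 1e3:6.1f} kHz: step = {step:.3f} of full scale")
# a 20 kHz tone can step by ~99% of full scale at 44.1 kHz but only ~18% at
# 352.8 kHz: the transitions come more often, but each one is much smaller.
```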


There is one point that I consider relevant in the non-oversampling advocates' arguments. Oversampling will introduce a constant delay, which is harmless. But it will generate any one sample from a weighted average of samples that came before and after it. This means that you have some smearing in the time domain that may account for a lack of localization.
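
To put a rough number on that smearing (the tap counts here are assumptions, since as noted below the manufacturers don't publish their impulse responses):

```python
# Time span of a linear-phase FIR interpolation filter at the 8x output rate,
# for a few assumed tap counts (the real chips' figures aren't published):
fs_out = 8 * 44100.0
for taps in (64, 256, 2048):
    span_ms = taps / fs_out * 1e3
    print(f"{taps:5d} taps: {span_ms:5.2f} ms total, "
          f"i.e. +/-{span_ms / 2:.2f} ms around each output sample")
# e.g. 2048 taps spreads each output sample over roughly +/-2.9 ms of the input.
```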

Preferably, a digital filter should be as short as possible. Burr-Brown does not give the impulse response of their DF1700, 1704 and 1706 filters.

On the other hand, Philips DAC7 systems were highly acclaimed in their time. They used either an NPC5842/SAA7350 or a TDA1307 to do the upsampling.
 