What type of interpolation is most commonly used in oversampling/upsampling DACs?

There are different types of interpolation, ranging from quick and easy (linear/averaging) to complex (trigonometric/polynomial) implementations.

Typically, what type of interpolation is used in oversampling/upsampling DACs?
 
To simplify things, it's a filter running at the desired (higher) sample rate, with a corner frequency below the original Nyquist frequency. It's fed with a signal at the upsampled rate whose samples at the original sample instants are the original samples, and zero elsewhere.

For example, say you want to upsample from 44.1 to 88.2 kHz. Design an 88.2 kHz lowpass filter with a corner frequency no higher than 22.05 kHz, and feed it with a pulse train where the even samples are the 44.1 kHz samples and the odd samples are zero. The original signal aliases because of the zero samples, but all the alias products lie above 22.05 kHz, so the lowpass filter removes them.
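For the curious, here's a minimal sketch of that zero-stuff-and-filter approach in Python with NumPy/SciPy. The tap count and Kaiser window are arbitrary illustrative choices, not anything a real DAC is known to use:

```python
import numpy as np
from scipy import signal

fs_in = 44100
fs_out = 2 * fs_in  # 88.2 kHz

# Test signal: a 1 kHz sine at the original rate.
t = np.arange(4096) / fs_in
x = np.sin(2 * np.pi * 1000 * t)

# Zero-stuff: even output samples carry the input, odd samples are zero.
up = np.zeros(2 * len(x))
up[::2] = x

# Lowpass running at the new rate, corner just under the original Nyquist.
taps = signal.firwin(101, 21000, fs=fs_out, window=("kaiser", 9.0))

# The gain of 2 makes up for the energy lost to the inserted zeros.
y = 2.0 * signal.lfilter(taps, 1.0, up)
```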

Trigonometric/polynomial interpolations won't have the proper frequency domain behaviour unless they're carefully designed to do so, in which case you're back at the filter design stage of the problem again.

Here's more:

http://en.wikipedia.org/wiki/Upsampling
 
DSP_Geek said:

For example, say you want to upsample from 44.1 to 88.2 kHz. Design an 88.2 kHz lowpass filter with a corner frequency no higher than 22.05 kHz, and feed it with a pulse train where the even samples are the 44.1 kHz samples and the odd samples are zero. The original signal aliases because of the zero samples, but all the alias products lie above 22.05 kHz, so the lowpass filter removes them.
So there are really no new samples that are actually calculated. For the sake of argument, that's no better or worse than linear interpolation or even simple averaging.

DSP_Geek said:

Trigonometric/polynomial interpolations won't have the proper frequency domain behaviour unless they're carefully designed to do so, in which case you're back at the filter design stage of the problem again.
Hmm... interesting. As theoretical as it may sound, I was kind of thinking FFT, followed by an inverse FFT. For example, first you would figure out the dominant harmonics (fundamental + harmonics) in the signal based on a few samples (possibly two?), i.e. determine the trigonometric polynomial from those samples, and then actually calculate points between those samples from that polynomial.
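Something like this, blockwise (a rough sketch; this is essentially how scipy.signal.resample works internally, and it assumes the data is roughly periodic over the block):

```python
import numpy as np

def fft_upsample_2x(x):
    """Blockwise 2x upsample: FFT, zero-pad above the old Nyquist, inverse FFT."""
    n = len(x)  # assumed even
    X = np.fft.rfft(x)
    # The old Nyquist bin becomes an interior bin of the longer spectrum;
    # halving it keeps the reconstructed component at the right amplitude.
    X[-1] *= 0.5
    X_up = np.concatenate([X, np.zeros(n // 2, dtype=complex)])
    # irfft normalizes by the (doubled) length, hence the factor of 2.
    return 2.0 * np.fft.irfft(X_up, 2 * n)
```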
 
So there are really no new samples that are actually calculated.

Absolutely not! There's no way to get more information from the signal you already have. No extra resolution can *ever* be gained by oversampling (regardless of method). That would break Shannon's sampling theorem.

All oversampling does is allow your DAC to use a shallower and more transparent analogue reconstruction filter at its output - a worthwhile gain, for sure. But oversampling isn't some magical process which generates extra resolution.
 
As far as I know, the only workable realtime solution is what has already been stated by DSP_Geek - upsample your signal by stuffing it with zeros, then lowpass filter the result. A perfect lowpass filter will create the intermediate values *exactly*, given the bandwidth constraints of the digital channel. The question is really how close you can get your digital filter to perfect, both in terms of frequency response and in terms of numerical precision.
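As a sketch of what "close to perfect" means in practice, one can inspect a candidate filter's passband ripple and stopband rejection; the tap count and window below are arbitrary illustrative choices:

```python
import numpy as np
from scipy import signal

fs = 88200  # 2x oversampled rate
taps = signal.firwin(255, 21000, fs=fs, window=("kaiser", 12.0))

w, h = signal.freqz(taps, worN=8192, fs=fs)
mag_db = 20 * np.log10(np.maximum(np.abs(h), 1e-12))

passband = mag_db[w < 20000]   # audio band
stopband = mag_db[w > 24100]   # alias region
print("passband ripple: %.4f dB" % (passband.max() - passband.min()))
print("worst-case stopband: %.1f dB" % stopband.max())
```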

Silicon area is always limited, and shortcuts may be taken in onboard filters in both FIR length and word length. The numerical precision of such filtering is probably limited (and probably undithered). This could explain why many report superior results when using DSP-based outboard oversampling filters and the like.

A dedicated DSP oversampling filter will almost certainly produce a better result than the one built into a DAC. The main problem is that none of the chip makers seem to publish any data about the implementation of their on-board filters, so it's hard to know if it's worth the extra effort.
 
Wingfeather said:

A dedicated DSP oversampling filter will almost certainly produce a better result than the one built into a DAC. The main problem is that none of the chip makers seem to publish any data about the implementation of their on-board filters, so it's hard to know if it's worth the extra effort.

Manufacturers do provide filter information for some parts. For example, page 8 of

http://focus.ti.com/lit/ds/symlink/pcm1794a.pdf

shows all you need to know about the filter response. Implementation details, such as an 8x oversampling filter actually being composed of successive 2x filters, are not disclosed, as they should be obvious to the experienced practitioner.

There are ways of making a better filter than what most DACs currently use, but curve-fitting and trigonometric approximations are not the answer. Most DACs oversample using half-band filters, which are -6 dB at the Nyquist frequency - not great, but every other coefficient is zero, so computation can be halved by skipping the zero multiplies. Real gains can be had by running the last filter not as a half-band, but with the corner frequency set a bit lower and a gentler rolloff: moving the corner lower minimizes possible aliasing, and the gentler rolloff improves transient response near the corner frequency.
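To illustrate the half-band property (a sketch; the length and window are arbitrary): a windowed-sinc lowpass with its corner at exactly a quarter of the oversampled rate comes out with every other tap, apart from the centre one, at zero, and its response at the corner is about -6 dB:

```python
import numpy as np
from scipy import signal

# Corner at half the (oversampled) Nyquist, i.e. a quarter of the rate.
taps = signal.firwin(31, 0.5, window=("kaiser", 8.0))

# Every other coefficient away from the centre tap is (numerically) zero,
# so a 2x oversampler can skip those multiplies.
print(np.isclose(taps, 0.0, atol=1e-9))

# Gain at the corner frequency is approximately 0.5, i.e. about -6 dB.
w, h = signal.freqz(taps, worN=[np.pi / 2])
print("gain at corner: %.2f dB" % (20 * np.log10(np.abs(h[0]))))
```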

One could throw in a bit of dither, but with DACs running at 20 bits or more it's rather gilding the lily when the source is merely 16 bits.
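If one did want dither, a TPDF requantiser is only a few lines (a sketch; the 20-bit target is just an example, and real DACs' internal word lengths differ):

```python
import numpy as np

def requantize_tpdf(x, bits=20):
    """Requantize samples in [-1, 1) to `bits` with +/-1 LSB TPDF dither."""
    rng = np.random.default_rng()
    lsb = 2.0 ** -(bits - 1)
    # The difference of two uniforms gives a triangular PDF over +/-1 LSB.
    dither = (rng.random(x.shape) - rng.random(x.shape)) * lsb
    return np.round((x + dither) / lsb) * lsb
```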

All that would have taxed older DSPs a fair bit, but newer parts can handle it with ease, even for 5.1 channels.
 