Time domain interpolation instead of the usual low pass filtering?

This has been bugging me for some time now.
We all know the good old textbook method of low-pass filtering in the digital domain in order to reconstruct the sampled signal.
But what are the implications of doing some sort of interpolation in the time domain instead? Just like the upsampling filters whose results you see when you zoom in on a photo in a picture viewer.
Why would you want to do it, you ask. I read a paper by Anagram about adaptive time filtering. They claim that the type of interpolation they do generates a signal with better jitter-induced distortion figures (uncorrelated distortion).
 
In my view upsampling is oversampling combined with extending the bit depth, and it is the only correct way to do OS.
If you double the sample rate (2x OS) and keep the same bit depth you cannot do correct linear interpolation - that would require adding samples at 1/2-step heights between the two original samples. You will end up with a signal that has 2x the sampling rate but only 15-bit quality. In order to maintain the original sound quality, you need to have that 1/2 step available, therefore you need to add 1 bit to the final bit depth.
That means for 2x OS you need a minimum output of 17 bits, for 4x OS you need 18 bits, for 8x OS you need 19 bits...
That is how linear interpolation is done in some modern chips.

If you want something fancier than linear interpolation, you need to add more bits to the original depth. For example, if you want three steps available (1/4, 1/2, 3/4) between two adjacent samples, you need to add 2 bits at every doubling (2x - 18 bit, 4x - 20 bit, 8x - 22 bit).
That's what Anagram/Cambridge Audio, AL Processing/Denon and others do (I think Harman Kardon tried it with Real-Time Linear Smoothing - RLS III).
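A toy sketch (my own illustration, not any particular chip's logic) of the bit-growth argument above: the midpoint of two adjacent 16-bit samples generally lands on a half LSB, so a 2x linearly interpolated stream needs one extra output bit to stay exact, and three intermediate points per original interval need two extra bits per doubling.

```python
# Toy illustration of the bit-growth argument (values in LSB units).
a, b = 1000, 1001                 # two adjacent 16-bit sample values
print((a + b) / 2)                # 1000.5 -> needs a half-LSB step, i.e. a 17th bit

# Widen to a 17-bit grid (scale by 2) and the 2x-OS midpoint is exact again:
a17, b17 = a * 2, b * 2
print((a17 + b17) // 2)           # 2001, an exact 17-bit value

# Three intermediate points (1/4, 1/2, 3/4) need quarter-LSB steps,
# i.e. two extra bits (an 18-bit grid):
a18, b18 = a * 4, b * 4
print([(3 * a18 + b18) // 4, (a18 + b18) // 2, (a18 + 3 * b18) // 4])
# [4001, 4002, 4003] -> 1000.25, 1000.5, 1000.75 on the original scale
```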

Reading material.
 
I believe that interpolation combined with OS (upsampling) leads to a closer reproduction of the original analog signal than just digital filtering when we are dealing with 16-bit/44.1 kHz sources.
But what Wadia does sounds... overkill to me:
Using digital signal processing (DSP), a curve is fitted that conforms to the current sample plus future and prior samples calculated at 48 bit resolution and output at 24-bit precision. The DigiMaster 1.4 operates at the rate of 64-times re-sampling of 44.1 kHz, that is, 63 interpolated values are calculated for each original sample from the CD.

Of course, a better alternative is to use something better from the start - like 24-bit/96-192 kHz or DSD64.
 
I believe if you work out "derivative matching" "time domain" interpolation you can generate "Gaussian window" FIR filter coefficients.

The impulse and frequency response both look Gaussian - which means no overshoot and slow frequency roll-off.

Once you start multiplying in a digital filter you need at least the sum of your signal and coefficient bit depths for the calculation.

You can still use noise-shaped dither at the truncation/rounding stage to reduce the bit depth back to the original signal's or the DAC's bit depth for output - with the upsampling you get a lot more room for pushing the dither noise into inaudible frequencies.
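A rough sketch of that idea (my own illustration, not Anagram's or anyone's actual filter; the 4x ratio, tap count and width sigma are all just picked for the example): generate a Gaussian-shaped interpolation kernel and quantize its coefficients, which is where the bit-depth growth at the multiplies comes from.

```python
import numpy as np

L = 4                          # 4x oversampling ratio (assumed for the example)
taps = 8 * L + 1               # kernel length, long enough to let the Gaussian decay
n = np.arange(taps) - taps // 2
sigma = L / 1.5                # width picked by eye for the example
h = np.exp(-0.5 * (n / sigma) ** 2)   # Gaussian impulse response -> roughly Gaussian frequency response
h *= L / h.sum()               # DC gain of L to make up for zero-stuffing by L

# A 16-bit sample times a 16-bit coefficient needs a 32-bit accumulator;
# noise-shaped dither then brings the result back down to the output word length.
h_q = np.round(h * 2**15) / 2**15      # coefficients quantized to 16-bit (Q15)
print("peak coefficient:", h.max())
print("worst coefficient quantization error:", np.abs(h - h_q).max())
```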
 
Linear interpolation between sample points looks superficially better at low frequencies, but is actually worse than the normal sample-and-hold DAC as it creates even more HF cut. Two ways to see this: do the maths, or get some graph paper (or a spreadsheet) and draw the output for sample-and-hold and linear interpolation for a 22.05kHz signal. Interpolation has a smaller fundamental component. Interpolation gives smaller ultrasonic images, but also more HF cut.

The mathematically ideal solution is a DAC outputting spikes (Dirac deltas) into a brick-wall reconstruction filter. Sample-and-hold is a reasonable compromise. Interpolation takes us even further from the ideal. A few years ago I thought interpolation was the answer, but investigation showed that it is worse not better.
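The "graph paper" experiment can be done numerically. A quick sketch of my own (using a 20 kHz tone rather than exactly 22.05 kHz so the fundamental is well defined over a 10 ms window): build the sample-and-hold and linearly interpolated waveforms on a fine grid and measure the 20 kHz component in each.

```python
import numpy as np

fs = 44100.0          # original sample rate
f0 = 20000.0          # test tone near the top of the audio band
K = 16                # fine grid factor used to "draw" the waveforms
N = 441               # 441 samples = 10 ms = exactly 200 cycles of 20 kHz

x = np.sin(2 * np.pi * f0 * np.arange(N) / fs)     # the CD-rate samples
t_fine = np.arange(N * K) / (fs * K)               # fine time grid (705.6 kHz)

# sample-and-hold: repeat each sample K times
y_zoh = np.repeat(x, K)
# linear interpolation between samples (wrap one sample for the last segment)
y_lin = np.interp(t_fine, np.arange(N + 1) / fs, np.append(x, x[0]))

def fundamental(y, f):
    # amplitude of the f component, by projection onto sin/cos over whole cycles
    c = 2 * np.mean(y * np.cos(2 * np.pi * f * t_fine))
    s = 2 * np.mean(y * np.sin(2 * np.pi * f * t_fine))
    return np.hypot(c, s)

print("S&H fundamental:    %.3f" % fundamental(y_zoh, f0))   # ~0.69 (about -3.2 dB)
print("interp fundamental: %.3f" % fundamental(y_lin, f0))   # ~0.48 (about -6.3 dB)
```

The interpolated waveform really does carry a smaller fundamental, even though it looks smoother on paper.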
 
I suspect that if you try higher-order interpolations (e.g. spline) then the situation gets worse, while looking better at low frequencies.

Sampling at f (e.g. 44.1 kHz) means that the output frequency response after the DAC must have a zero at f, and also that there will be images appearing between f/2 and f. This can't be avoided. All you can control is the shape of the frequency response: 0 to f/2 is the audio band, f/2 to f is the first image band. Assuming the frequency response will be a fairly smooth curve, you can't have whatever you like: full level in the audio band and zero in the image band. You can of course get this by adding filters after the DAC, but here we are talking about the DAC output itself.

Dirac delta spikes from the DAC give a flat frequency response, until it reaches the zero at f. Images are just as strong as the wanted audio, so you need to filter them out.

Sample-and-hold from the DAC, the usual situation, gives a sinc frequency response which smoothly curves down to the zero at f. As a result the top of the audio band at f/2 is about 3dB down (I can't remember the actual figure), but images are somewhat suppressed. NOS DACs usually accept this as it is; OS DACs use digital filtering to boost HF audio and block images.

Linear interpolation makes the frequency response curve down even more, so you get smaller images from LF audio, but more HF reduction. Higher-order interpolations go even further in this direction, I think. Anything which makes the LF look better, in the sense that the raw DAC output looks more like the original analogue input, will necessarily mean more HF droop because with a smooth frequency response curve you can't reduce images without reducing HF audio too. It will sound smoother, which some people may misinterpret as being 'better'.
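To put rough numbers on this (my own calculation using the standard results: sample-and-hold gives a sinc(f/fs) response, linear interpolation a sinc-squared response), compare the droop at f/2 with the suppression of the first image of a 1 kHz tone:

```python
import numpy as np

fs = 44100.0

def zoh_mag(f):
    # zero-order hold (sample-and-hold): |H| = |sinc(f/fs)| (normalized sinc)
    return np.abs(np.sinc(f / fs))

def lin_mag(f):
    # linear interpolation = triangular pulse: |H| = sinc^2(f/fs)
    return np.sinc(f / fs) ** 2

db = lambda x: 20 * np.log10(x)

# droop at the top of the audio band, f = fs/2 (Dirac spikes would be 0 dB here)
print("at fs/2:  S&H %.1f dB, linear %.1f dB" % (db(zoh_mag(fs / 2)), db(lin_mag(fs / 2))))
# -> roughly -3.9 dB and -7.8 dB

# first image of a 1 kHz tone sits at fs - 1 kHz = 43.1 kHz
f_img = fs - 1000.0
print("1 kHz image: S&H %.1f dB, linear %.1f dB" % (db(zoh_mag(f_img)), db(lin_mag(f_img))))
# -> roughly -33 dB and -65 dB: smaller images, but more HF droop
```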

The maths of all this will be in a graduate-level textbook on signals and communication theory.
 
Anything that is closer to the ORIGINAL signal will sound better.
The extra HF that you claim S&H provides is not "naturally" present in the sound; it was created during the initial S&H, at the ADC stage.
Interpolating while increasing the bit depth as required (not at the same bit depth) will get the resulting analog output closer to the original signal (that means before the ADC).
 
Anything that is closer to the ORIGINAL signal will sound better.
Yes, of course. Provided that it really is closer, and not just at lower frequencies. Interpolation improves LF at the expense of HF droop, as I explained.

The extra HF that you claim S&H provides is not "naturally" present in the sound; it was created during the initial S&H, at the ADC stage.
No. Images are created at the DAC stage, but I agree that they were not present in the original analogue signal. The ADC stage, sampling, has the opposite effect of folding aliases down into the audio range - hence the need for an anti-alias filter before the ADC.

Interpolation, whether done by analogue or digital means, will have the same effect. Oversampling, with appropriate digital filtering, provides a method to avoid the problem. In effect, it simulates narrower pulses followed by a near-ideal brick-wall reconstruction filter.
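A minimal sketch of that idea (my own, hypothetical upsample4x function: 4x zero-stuffing followed by a windowed-sinc lowpass standing in for the near-ideal brick-wall reconstruction filter):

```python
import numpy as np

def upsample4x(x, taps=129):
    """4x oversampling: zero-stuff, then lowpass at the original Nyquist."""
    L = 4
    # zero-stuffing: insert L-1 zeros between samples (the "narrower pulses")
    up = np.zeros(len(x) * L)
    up[::L] = x
    # windowed-sinc lowpass with cutoff at the original fs/2, running at L*fs
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(n / L) * np.hamming(taps)
    h *= L / h.sum()              # restore the gain lost to zero-stuffing
    return np.convolve(up, h)

# e.g. 10 ms of a 1 kHz tone sampled at 44.1 kHz, upsampled to 176.4 kHz
y = upsample4x(np.sin(2 * np.pi * 1000 * np.arange(441) / 44100))
```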
 
I know that low-pass filtering is the way to go as far as theory is concerned.
Interpolation is not even filtering, because the shape of the output depends only on the shape of the input signal and not on filter coefficients/a kernel (which are non-existent in this case).

But what drew my attention was the fact that they claim the harmonics generated are not correlated with the input signal and thus sound better; otherwise I would not even mention it. Maybe they use some combination of low-pass filtering and interpolation that does not generate the high-frequency droop? I mean best of both worlds. Just maybe...?
 
They are not harmonics, but images. They are correlated with the input signal, with the lowest appearing at 'sampling frequency'-'signal frequency'.
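For example (simple arithmetic, assuming a 19 kHz tone sampled at 44.1 kHz): the images sit at fs ± f0, 2fs ± f0 and so on, none of which are integer multiples of the signal frequency, which is why they are images rather than harmonics.

```python
fs, f0 = 44100, 19000                        # sample rate and signal frequency in Hz
images = [fs - f0, fs + f0, 2 * fs - f0, 2 * fs + f0]
print(images)                                # [25100, 63100, 69200, 107200] Hz
print([f / f0 for f in images])              # none are integer multiples of 19 kHz
```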

You can always correct for the droop using filtering, but that will undo the interpolation you started with. The mathematically correct method will exactly reproduce the signal at the output of the anti-alias filter before the ADC; you can't do better than that. Anything else is by definition a distortion, although some of these distortions may sound 'better'. It depends what you want: accuracy (within the limits imposed by the CD system), or pleasant sounding distortion.
 