The king of all upsampling/oversampling questions...

I have been researching upsampling and oversampling (not to say that they are the same, but they are very similar) and I have come across a very interesting question.

I understand the theory behind the system - interpolate between real samples to create new samples at a higher sampling frequency (and almost always higher resolution) - but can anyone tell me the method used to interpolate these new samples?

I have been toying with some ideas of my own, but fear they have already been used.

Are the new samples based on statistical functions of adjacent samples or, more elaborately, on FFT analysis - windowing many adjacent samples and using the frequency content?

I would be intrigued to know what techniques are currently in use - I hope someone out there knows how the big guns do this.
 
nothing strange

Upsampling and oversampling are completely different things.

Well, oversampling is well known and quite easy. Look at some given data, maybe the output voltage of a DAC:

0.3, 0.7, 0.1, -0.9, ...

A non-OS DAC will hold the output voltage exactly until the next sample arrives. Now we spread the time axis a little bit, let's say 4 times:

0.3 0.3 0.3 0.3 0.7 0.7 0.7 0.7 0.1 0.1 0.1 0.1 -0.9 -0.9 -0.9 -0.9

This is exactly the same as before, only the time resolution in this example is 4 times higher, so there are more numbers in it.

The oversampling filter itself is very easy: it keeps the first of every four samples and replaces the next 3 with zeros. Our oversampled data now looks like

0.3 0 0 0 0.7 0 0 0 0.1 0 0 0 -0.9 0 0 0

Unfortunately the energy of the signal is now 4 times lower than before, so multiply it by 4:

1.2 0 0 0 2.8 0 0 0 0.4 0 0 0 -3.6 0 0 0

That's it, our oversampling filter is ready. Now you only need a simple low-pass filter to get rid of the side bands (they are mirrored at the sampling rate, and if you have a 4-times oversampling filter the side bands are much further away). A simple FIR filter with about 100 taps does the mathematical filtering (only multiplying, adding and a little time shifting); the coefficients can be found nearly everywhere on the web. That is how you get rid of the zeros in our example and of the high frequency pulses.

I think programming a DSP to implement this oversampling feature will take some hours (or days?) if you have the correct coefficients.
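
If anyone wants to play with this on a computer first, here is a minimal sketch of the zero-stuffing scheme described above, in Python; the tap count and cutoff are just placeholder choices of mine, not values from any real DAC filter:

import numpy as np
from scipy.signal import firwin, lfilter

# the example data from above
x = np.array([0.3, 0.7, 0.1, -0.9])
L = 4                      # oversampling factor

# zero-stuff: keep each sample and insert L-1 zeros after it
up = np.zeros(len(x) * L)
up[::L] = x
up *= L                    # compensate the 4-times energy loss

# ~100-tap low-pass FIR at the original Nyquist frequency
# (firwin's cutoff is normalized to the NEW Nyquist, hence 1/L)
taps = firwin(101, 1.0 / L)
y = lfilter(taps, 1.0, up)
print(y)

The filtering smears the impulses back into a smooth waveform, which is exactly the "get rid of the zeros and the high frequency pulses" step.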

Upsampling is very much more complicated: it uses a (digital) PLL to synchronise two clocks, it filters the data in a much more elaborate way, and so on.

Look at the Analog Devices AD1896 datasheet; there is a very simplified description of the theory (and even that runs over several pages...). But neither upsampling nor oversampling involves any FFT or statistical analysis.
 
Re: nothing strange

bocka said:

Upsampling is very much more complicated: it uses a (digital) PLL to synchronise two clocks, it filters the data in a much more elaborate way, and so on.

Not necessarily. What you are describing is asynchronous sample rate conversion, where the incoming and outgoing sample rates are not synchronized; this is what the AD1896 does.

Upsampling is the correct term when the output sample rate is synchronous to (and usually an integer multiple of) the incoming sample rate, and in this case a simple FIR filter is sufficient, as you showed in your example above.

To annex666: http://www.dspguide.com/ is a good free book that describes basic DSP without going into too much heavy math. There you can read how to do FIR filters in the time domain using convolution.
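
Since the book comes up here: time-domain FIR filtering really is nothing more than convolution. A bare-bones sketch in Python, written as the explicit multiply/add/shift loop the book describes rather than a library call (the 3-tap moving average is just a toy filter for illustration):

def fir_filter(x, h):
    # direct-form convolution of input x with FIR coefficients h
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj   # multiply, add, time-shift
    return y

# trivial low-pass: 3-tap moving average
print(fir_filter([0.3, 0.7, 0.1, -0.9], [1/3, 1/3, 1/3]))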
 
Re: Re: The king of all upsampling/oversampling questions...

Steve Eddy said:

How can you get any higher resolution?

Well, the filter is usually executed on a processor with higher resolution (32-bit) than that of the incoming signal (16-bit), so the output signal will have more resolution, and then dither can be applied to produce a final 24-bit dithered signal.

However the SNR will not increase, it is limited by the input signal. So there is always this never-ending discussion about what the point is of increasing the bit-depth when the signal already is limited by the 16 bits on CD. Maybe this is what you had in mind?
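
For the curious, here is a minimal sketch in Python of what that last requantization step can look like: TPDF dither added at the size of the new LSB before the 32-bit intermediate result is rounded down to 24 bits (the scaling is my own assumption for illustration):

import numpy as np

def requantize_24bit(x32):
    step = 2 ** 8            # one 24-bit LSB expressed in 32-bit counts
    rng = np.random.default_rng(0)
    # TPDF dither: sum of two uniform variables, +/-1 LSB peak
    d = rng.uniform(-0.5, 0.5, len(x32)) + rng.uniform(-0.5, 0.5, len(x32))
    return (np.round(x32 / step + d) * step).astype(np.int64)

x32 = np.array([123456789, -987654321, 4242])   # 32-bit intermediate values
print(requantize_24bit(x32))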


Peter Daniel said:

Thanks for the link Peter. I just read this paper and I don't know whether to laugh or cry. All I want to say is that I am always sceptical of technical papers coming from equipment makers. You have to consider whether the conclusion was reached first, and the rest of the paper then written to back it up.
 
Re: Re: Re: The king of all upsampling/oversampling questions...

ojg said:
Well, the filter is usually executed on a processor with higher resolution (32-bit) than that of the incoming signal (16-bit), so the output signal will have more resolution, and then dither can be applied to produce a final 24-bit dithered signal.

I still don't see how you get any more resolution when the original signal has already been quantized at 16-bits. To me, more resolution implies more information. And you've already fixed the amount of information when you did the original 16-bit quantization.

However the SNR will not increase, it is limited by the input signal. So there is always this never-ending discussion about what the point is of increasing the bit-depth when the signal already is limited by the 16 bits on CD. Maybe this is what you had in mind?

Yes. Again, I fail to see how one achieves any greater resolution when the resolution has already been established by the 16-bit quantization of the original signal.

se
 
Re: Re: Re: The king of all upsampling/oversampling questions...

ojg said:

Thanks for the link Peter. I just read this paper and I don't know whether to laugh or cry. All I want to say is that I am always sceptical of technical papers coming from equipment makers. You have to consider whether the conclusion was reached first, and the rest of the paper then written to back it up.

Unfortunately I can't comment, because I didn't read it.;)
 
Re: Re: The king of all upsampling/oversampling questions...

Steve Eddy said:


How can you get any higher resolution?

se


It goes something like this. For whatever reason a digital filter is required. A desired specification for said filter is laid down. From the desired specification flow the coefficients for the filter. These coefficients are unlikely to be nice round numbers, so in order to remain close to the calculated value, these coefficients need to be as long as is practically possible. Assume a bunch of cheapskates with a job lot of 16-bit DSP chips. This chip will implement the filter using its multipliers. If you multiply an x-bit number by a y-bit number you get an (x+y)-bit number. So the 16-bit input is multiplied by the 16-bit coefficients and produces a 32-bit result, which is dithered down to 24 bits. The data has now been requantized and its resolution increased, but the full-scale value originally represented by 16 bits has not changed. It is analogous to going from a 30cm ruler to a 300mm ruler. They are both the same length but the latter has finer increments, or greater resolution.
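
The bit growth itself is easy to check with plain integers (a toy illustration in Python, not any particular DSP's arithmetic):

# worst-case magnitude of a 16-bit x 16-bit signed multiply
worst = (-32768) * (-32768)
print(worst)                # 1073741824 = 2**30
print(worst.bit_length())   # 31 -> fits in a signed 32-bit accumulator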

ray
 
Re: Re: Re: The king of all upsampling/oversampling questions...

rfbrw said:
It goes something like this. For whatever reason a digital filter is required. A desired specification for said filter is laid down. From the desired specification flow the coefficients for the filter. These coefficients are unlikely to be nice round numbers, so in order to remain close to the calculated value, these coefficients need to be as long as is practically possible. Assume a bunch of cheapskates with a job lot of 16-bit DSP chips. This chip will implement the filter using its multipliers. If you multiply an x-bit number by a y-bit number you get an (x+y)-bit number. So the 16-bit input is multiplied by the 16-bit coefficients and produces a 32-bit result, which is dithered down to 24 bits. The data has now been requantized and its resolution increased, but the full-scale value originally represented by 16 bits has not changed. It is analogous to going from a 30cm ruler to a 300mm ruler. They are both the same length but the latter has finer increments, or greater resolution.

Then "resolution" is just semantics. You don't actually get anything in the end. You get no more information (the waveform doesn't change). So what's the point?

se
 
Re: Re: Re: The king of all upsampling/oversampling questions...

ojg said:
Thanks for the link Peter. I just read this paper and I don't know whether to laugh or cry. All I want to say is that I am always sceptical of technical papers coming from equipment makers. You have to consider whether the conclusion was reached first, and the rest of the paper then written to back it up.

Actually it's quite the opposite. The article isn't from anyone at 47 Labs. It was written by Ryohei Kusunoki who as far as I can tell isn't a manufacturer and isn't associated with 47 Labs. What 47 Labs did was design their DAC around the principles put forth by Kusunoki in a series of articles for MJ magazine.

se
 
Re: Re: Re: Re: The king of all upsampling/oversampling questions...

Steve Eddy said:


Then "resolution" is just semantics. You don't actually get anything in the end. You get no more information (the waveform doesn't change). So what's the point?

se


It is what it is. A closer approximation to what was originally encoded. The original intention was to ease the analogue filter requirements, and the wordlength increase was simply a byproduct.
Things differ somewhat in the processing arena. Again, the required processing determines the type of filtering required, and that determines the wordlength. At least part of Sony's reasoning behind the 24-bit OXF-R3 desk was that when all 100+ channels were summed, there was still at worst a 20-bit noise floor.
I suppose it is about what you do with the information you have. Even though no new information is created, it is still pretty easy to see the difference between 8- and 10-bit video under the right circumstances.

ray.
 
Re: Re: Re: Re: Re: The king of all upsampling/oversampling questions...

rfbrw said:
It is what it is. A closer approximation to what was originally encoded.

How is it a closer approximation to what was originally encoded when the original analogue signal is long gone and ALL you can know about it is what's contained in the 16-bit data?

I suppose it is about what you do with the information you have. Even though no new information is created, it is still pretty easy to see the difference between 8- and 10-bit video under the right circumstances.

You mean 8 and 10 bit video where the sampling was originally done at 8 and 10 bits or do you mean video that was originally quantized at 8 bits and jacked up to 10 bits?

se
 
I think it is important to distinguish between over-/upsampling and the bit width. The bit width matters for the signal processing, and no information is added by oversampling a 16-bit quantized signal. But if you use too little resolution in the filtering, and filtering is what oversampling requires, you can lose some information to quantization.

Perhaps you don't really know how oversampling works in a chip like the AD1896; it is quite easy to understand. Look at your time signal with samples spaced by a fixed time offset. If you want to oversample the signal, you have to generate signal between two of these samples, to get your signal at the right time. This is done by simply adding zeros between two samples, followed by low-pass filtering. The more zeros you insert, the higher the resolution of your oversampled signal, as it is easy to pick the most correct value between the two samples. ADI uses 2^20 points, I think.

And now there is another problem: you need the low-pass filter to interpolate your source signal. If you oversample your 48kHz signal by 128, the new stopband frequency has to be 24kHz, which corresponds to 0.5/128 as normalized frequency. The quality of this filter (stopband attenuation) directly influences the signal quality; about 100dB of stopband attenuation is necessary to get a resolution of about 16 bits. Having interpolated about 2^20 samples in between two input samples, you simply pick the one at the right time and you have your output samples.
Now, using bad resolution in the computation, perhaps with a fixed-point DSP, the filtering will produce much distortion.
In reality the chip will not compute such an amount of samples in between. It uses a polyphase filter: knowing that many input samples of the FIR filter are zero, the effective filter length is much smaller.
You can look at the datasheet; there is a small explanation there, too.
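
To make the polyphase remark concrete, here is a minimal sketch in Python: the long prototype low-pass is split into L short sub-filters, and for each output sample only the sub-filter for the wanted phase is evaluated, since all the other taps would land on the stuffed zeros anyway (the factor and lengths are toy values of mine, nowhere near the 2^20 phases mentioned above):

import numpy as np
from scipy.signal import firwin

L = 4                        # interpolation factor (toy value)
h = firwin(8 * L, 1.0 / L)   # prototype low-pass, length a multiple of L
poly = h.reshape(-1, L).T    # row p is sub-filter h[p::L] for phase p

def one_output(x, phase):
    # y[n*L + phase] of the zero-stuffed-and-filtered signal, computed
    # from the input samples alone; the zeros are never touched
    return L * np.dot(poly[phase], x[::-1])

x = np.array([0.3, 0.7, 0.1, -0.9, 0.2, -0.1, 0.4, 0.0])  # last 8 inputs
print([one_output(x, p) for p in range(L)])

Instead of one multiply per tap of the long filter, you do one multiply per tap of a single short sub-filter, which is where the factor-of-L saving comes from.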
 
Re: Re: Re: Re: Re: Re: The king of all upsampling/oversampling questions...

Originally posted by Steve Eddy


How is it a closer approximation to what was originally encoded when the original analogue signal is long gone and ALL you can know about it is what's contained in the 16-bit data?

It stands to reason that a close approximation is not the same as the original, and as I said before it's a question of doing the best with what you have. But then, if one is dogmatically opposed to the concept in the first place, none of this really matters.

You mean 8 and 10 bit video where the sampling was originally done at 8 and 10 bits or do you mean video that was originally quantized at 8 bits and jacked up to 10 bits?

se

Both

ray
 
I tend to agree with SE on this one. The information has already been lost in the recording/mastering process when truncating to 16-bits. What was in the signal before this procedure we don't know, we can only make a qualified guess.

I like the ruler analogy, so here's an example. Let's say that at a certain point in time the signal is 123456789nm. Now the recording studio measures this with a 300mm ruler to be 123mm. The mastering house truncates this by "remeasuring" this signal with a 30cm ruler to be 12cm. This is what gets stored on the CD.

Now let's throw another variable into the discussion: dithering. All CDs today are dithered with different degrees of noiseshaping. Dithering works by modulating the least significant digit, so that on average a better approximation of the original signal is stored.

So to continue the example: the mastering house, when trying to approximate their 123mm signal with a 30cm ruler, decides to set three out of ten samples to 13cm and the others to 12cm. On average this becomes (7x12 + 3x13)/10 = 12.3cm = 123mm.

So by playing some tricks with the least significant digit (and thereby creating some high-frequency noise) the signal has a higher resolution than the bit-depth implies. Now here is what I am still struggling to understand: When over/upsampling dithered signals, is the extra resolution caused by dithering ruined in the process or does the new signal contain the information that was in the dithering?
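
The averaging effect is easy to demonstrate numerically; a toy sketch in Python, with coarse whole-number steps standing in for the 16-bit LSB:

import numpy as np

rng = np.random.default_rng(0)
true_value = 12.3                    # the sub-LSB value we want to keep
n = 100_000
# TPDF dither, +/-1 step peak, added before rounding to whole steps
d = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)
stored = np.round(true_value + d)
print(stored[:10])                   # individual stored samples are coarse
print(stored.mean())                 # but the average comes back near 12.3

(This only demonstrates the encoding side, of course, not what a resampling filter does to that information afterwards.)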

As a sidenote: SACD is just a 1-bit oversampled signal with dithering and very heavy noiseshaping. Still everybody seems to agree that SACD has much higher resolution than the 1-bit quantization implies. So bit-depth does not always correlate with resolution...
 
Re: Re: Re: Re: The king of all upsampling/oversampling questions...

Steve Eddy said:


I still don't see how you get any more resolution when the original signal has already been quantized at 16-bits. To me, more resolution implies more information. And you've already fixed the amount of information when you did the original 16-bit quantization.

Yes. Again, I fail to see how one achieves any greater resolution when the resolution has already been established by the 16-bit quantization of the original signal.
---------------------------------------------------

You don't and you don't. See the white papers at dcsltd.co.uk. dCS is well known for its own upsampling software and hardware and for its high-sampling-frequency products. They also sound exceedingly good.

Chip-based upsamplers such as the CS8420 and the AD ASRCs sound different, but rather imperfect. I have not heard a CS8420 product that sounds similar. The best implementation I know is the Assemblage D2D with 2 PLLs. It also doesn't sound good on a long-term basis, as it seems to add its own dry, sharpish character to the music. However, when you first put these things in the chain, the improvement on 16/44.1 impresses.
 
ojg said:
I tend to agree with SE on this one. The information has already been lost in the recording/mastering process when truncating to 16-bits. What was in the signal before this procedure we don't know, we can only make a qualified guess.

I like the ruler analogy, so here's an example. Let's say that at a certain point in time the signal is 123456789nm. Now the recording studio measures this with a 300mm ruler to be 123mm. The mastering house truncates this by "remeasuring" this signal with a 30cm ruler to be 12cm. This is what gets stored on the CD.

Now let's throw another variable into the discussion: dithering. All CDs today are dithered with different degrees of noiseshaping. Dithering works by modulating the least significant digit, so that on average a better approximation of the original signal is stored.

So to continue the example: the mastering house, when trying to approximate their 123mm signal with a 30cm ruler, decides to set three out of ten samples to 13cm and the others to 12cm. On average this becomes (7x12 + 3x13)/10 = 12.3cm = 123mm.

So by playing some tricks with the least significant digit (and thereby creating some high-frequency noise) the signal has a higher resolution than the bit-depth implies. Now here is what I am still struggling to understand: When over/upsampling dithered signals, is the extra resolution caused by dithering ruined in the process or does the new signal contain the information that was in the dithering?

As a sidenote: SACD is just a 1-bit oversampled signal with dithering and very heavy noiseshaping. Still everybody seems to agree that SACD has much higher resolution than the 1-bit quantization implies. So bit-depth does not always correlate with resolution...


When I use the term resolution, I use it purely in a technical sense. Using the ruler analogy, if I wish to measure 12.75cm I would get closer to the distance required with a ruler calibrated in mm than with one calibrated in cm. In the specific case of a digital oversampling filter, it is used in order to simplify the analogue filter requirements by performing the more extreme filtering in the digital domain. The resulting output has a wordlength greater than 16 bits; that is a byproduct of performing the filtering. In the absence of a DAC with the same wordlength as the product of the input data and the coefficients, it makes sense to use as many of the bits as possible.

ray.
 
Re: Re: Re: Re: Re: Re: Re: The king of all upsampling/oversampling questions...

rfbrw said:
It stands to reason that a close approximation is not the same as the original, and as I said before it's a question of doing the best with what you have. But then, if one is dogmatically opposed to the concept in the first place, none of this really matters.

I'm not dogmatically opposed to anything. I just don't see that you achieve any greater resolution seeing as you're not adding any more information.

As for approximations, while certainly a close approximation is not the same as the original, what was said originally was that you get a closER approximation. What I'm saying is that I don't see how you can get any closER once the original signal has been quantized.

Basically all you're doing is rescaling the waveform. The waveform remains the same. If the waveform remains the same, you're no closer in approximation than you were with the 16-bit data.

se
 