Non OS opinions

Hell no!

For (*^*(^sake (*), I make these darn things every day.

A sampled signal has its images above fs/2, right?

Oversampling that train N times by inserting N-1 dummies between every two samples still gives a signal with images above the original fs/2. The spectrum may be a little bit different, but that depends on the nature of the dummies (copies, zeroes, ...) and their related aperture effects.
Why? Simply because no information was modified in the process.

Only when you combine oversampling with steep digital filtering at the original fs/2 do you get the intended result of oversampling (and DF): widening the transition band for the subsequent analogue filter.

Please look into the datasheet of e.g. the DF1704.

If you do not digitally filter at the original fs/2, then, quite simply, part of the original images will still be there. Back to square zero.
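To make that concrete, here is a minimal numpy sketch (my own illustration, not anybody's production code; the 4x ratio, the fs/4 test tone and the windowed-sinc filter are arbitrary choices): zero-stuffing leaves the images exactly where they were, and only a digital low-pass at the original fs/2 removes them.

import numpy as np

fs = 44100                                  # original sample rate
n = np.arange(4096)
x = np.sin(2 * np.pi * (fs / 4) * n / fs)   # 11.025 kHz tone, an exact FFT bin

N = 4                                       # oversample 4x: 3 zero dummies
y = np.zeros(N * len(x))
y[::N] = x                                  # new rate is 176.4 kHz

f = np.fft.rfftfreq(len(y), d=1 / (N * fs))
Y = np.abs(np.fft.rfft(y))
print(np.round(f[Y > 0.5 * Y.max()] / 1e3, 1))   # [11.  33.1 55.1 77.2] kHz:
                                                 # the original images, untouched

# Only a steep digital low-pass at the original fs/2 (22.05 kHz) removes
# them; a windowed sinc stands in here for the filter inside a DF1704.
t = np.arange(255) - 127
h = np.hamming(255) * np.sinc(t / N) / N         # cutoff at 22.05 kHz
z = np.convolve(y, N * h, mode='same')           # gain N restores the level
Z = np.abs(np.fft.rfft(z))
print(np.round(f[Z > 0.5 * Z.max()] / 1e3, 1))   # [11.] kHz only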




Alvaius,

I would love to do the MATLAB stuff, but I have a massive lack of time, and it would not teach me, or you, anything new. Except that you prefer the rather artificial concept of the impulse-response-of-a-sampled-system to let you predict how something would work out in the real world, while I know deeply how suspect this is. Like testing a car for buoyancy: interesting, but potentially misleading when the car is meant to be driven on a road.

Given time, I would run the impulse/square responses alongside the band-limited-impulse and band-limited-square responses. Ringing will be gone, barring any ringing caused by the (ADC-level) band-limiting. But that ringing would then just be the proper reproduction of the input signal.

I'll keep it at this, but if I find a document clarifying these things in an unambiguous way I'll post it here.









(*)

I'd like a cup of sake now indeed.
 
What are you all talking about, you crazy people???? :xeye:


I don't understand why, in a thread about Non OS opinions, nobody has posted about the similarity to the Pass Zen projects:

Simple to build, educational, easy to understand, and bad specs... so what... this is for fun, right?


gr,
Thijs ;)
 
ultranalog said:

Here, you said it yourself.

Remco

What did I say myself?

If you mean the "spectrum may be a little bit different" part then I can assure you that these differences are not quite what you have in mind. They are only the consequence of spreading the original energy over a wider frequency band, and simple linear gain is all that is required to restore the original levels.

And if the dummies are copies-of-the-left-neighbour then the aperture effect would only cause slight drops in level around the odd multiples
of the original fs/2.
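To put a rough number on that (a sketch of my own, assuming 4x copy-padding and a test tone at fs/4): padding with copies is zero-stuffing followed by an N-sample hold, whose Dirichlet-kernel response dips at, but does not remove, the images.

import numpy as np

fs, N = 44100, 4                               # original rate, copy-padding ratio
f = np.array([11025.0, 33075, 55125, 77175])   # tone at fs/4 and its images
w = np.pi * f / (N * fs)                       # half the angular frequency, new rate
H = np.abs(np.sin(N * w) / (N * np.sin(w)))    # N-sample hold (boxcar) response
print(np.round(20 * np.log10(H), 1))           # [ -0.9  -9.9 -13.4 -14.9] dB:
                                               # images reduced somewhat, not gone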

Go to dCS and ask Mike Story, if you want a second opinion.
 
And if the dummies are copies-of-the-left-neighbour then the aperture effect would only cause slight drops in level around the odd multiples
of the original fs/2.
The higher the polynomial order, the better the approximation of the original, and the less image information.

In any case, since we're 'making up' data in between the real samples, it follows that the sinc is no longer the one and only perfect filter shape.

Go to dCS and ask Mike Story, if you want a second opinion.
He's certainly a nice person to talk to. But from the paper I read this afternoon, it seems he favours non-sinc filters too.

Remco
 
A sampled system does have its images above FS/2. However, when you oversample, those new images are now at a new FS, not the original FS (assuming you padded with 0s if I remember correctly).

If I use your logic, most of which I agree with, then a system "sampled" at 44kHz cannot have any information above 22kHz. This is not logic, this is absolute truth! So here I am sitting with a mathematical representation of a signal. There are no components in that representation over 22kHz. Now if I suddenly change the sampling frequency to 176kHz, my original signal is still only made up of components 22kHz and under. Components between 22kHz and the new FS/2 = 88kHz cannot magically appear. All artifacts will actually be above the new FS/2, or 88kHz! However, you do need to maintain enough precision in your arithmetic.

I may not do this every day, but I do it enough to know what I am doing... sometimes the brain needs a bit of jogging every once in a while.
 
What Werner means, and what I agree with, is that a 44k digital signal has the analog content copied between 24k and 64k. By Nyquist. If you insert zeros when oversampling, the new data stream is still only defined once every 1/44100th second, and the mirrors between 24k and 64k still appear. That is what the filter is for.

However, if you interpolate proper data instead of zeros, you create a system that has a significant value once every 1/88200th second, and the images are fiercely suppressed.
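A quick illustration of the difference (my own sketch with assumed values, going from fs to 2fs with a tone at fs/4):

import numpy as np

fs = 44100
n = np.arange(4096)
x = np.sin(2 * np.pi * (fs / 4) * n / fs)   # 11.025 kHz, an exact FFT bin

z = np.zeros(2 * len(x)); z[::2] = x        # zeros: defined once per 1/44100 s
li = z.copy()
li[1::2] = 0.5 * (x + np.roll(x, -1))       # linear: a value once per 1/88200 s

f = np.fft.rfftfreq(len(z), d=1 / (2 * fs))
tone, img = np.argmin(np.abs(f - 11025)), np.argmin(np.abs(f - 33075))
for name, s in (("zeros ", z), ("linear", li)):
    S = np.abs(np.fft.rfft(s))
    print(name, round(20 * np.log10(S[img] / S[tone]), 1), "dB image")
# zeros    0.0 dB image  (the mirror at 33.075 kHz is untouched)
# linear -15.3 dB image  (higher interpolation orders do better still)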

Remco
 
ultranalog said:

The higher the polynomial order, the better the approximation of the original, and the less image information.

In any case, since we're 'making up' data in between the real samples, it follows that the sinc is no longer the one and only perfect filter shape.

If you are talking polynomials, you are talking digital filtering. I was talking about oversampling, and what it does (and does not) do to the original signal's spectrum.

Look at your polynomials then, and observe that they constitute a low-pass at the original fs/2.

And since when does 'making up samples' give you the power to evade Shannon's theorem???

Please go back to the very basics of this all, and proceed from there. Don't base assumptions on latter-day lore (read: simplifications) and real-world engineering tradeoffs.
Otherwise the discussion explodes into something unmanageable, with little transfer of knowledge.
 
If you are talking polynomials, you are talking digital filtering.

No, you're mixing up two different phenomena again.

Digital filtering:
1. ensures 'no' content is found above the new Fs/2, so that it can be uniquely decoded
2. removes the remainder of the interpolation between the old Fs/2 and the new Fs/2

The two extremes are a zeroth-order interpolator and the perfect interpolator, which doesn't exist.

And since when does 'making up samples' give you the power to evade Shannon's theorem???
Since the signal is altered, just like you said.

Remco
 
Didn't someone quote dCS?

Well if you are going to throw a quote in my face.....


From Hi-Fi News and Record Review - August 2000

'There is apparently no extra information in the upsampled signal that was not present in the initial signal,' says Mike Story of dCS. 'With a 44.1 kS/s input, both the input data stream and the upsampled data stream will only contain a spectrum that must be between 0 and 22.05 kHz and is probably only between 0 and 20 kHz.' Yes, in a 44kHz system the image will be folded from 44kHz down to 22kHz.

********************************************

Now my comments....

If you pad with a copy of the data, then you are replicating the sampling (or, if you want to think of it that way, the modulating) waveform at 44.1kHz, and hence if you increase to 176kHz you will still have images between 22 and 44 and there would be no benefit to oversampling. However, if you pad with 0s, you are adding impulse responses and you increase the effective sample rate. The images are now around 176kHz, and not 44kHz. And the images are still only 22kHz wide! Hence you could filter at 40kHz and not 20kHz. That is why oversampling works.
 
ultranalog said:


No, you're mixing up two different phenomena again.


I do? Really?

Every low-pass filter is an interpolator. And every interpolator is, in this context, a low pass filter.
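One way to see this (a sketch of my own, with assumed parameters): low-pass filtering a zero-stuffed stream fills the gaps with essentially the values a 2x-rate sampling of the same band-limited signal would have produced.

import numpy as np

fs = 44100
n = np.arange(4096)
x = np.sin(2 * np.pi * 3000 * n / fs)            # 3 kHz tone sampled at fs

up = np.zeros(2 * len(x)); up[::2] = x           # zero-stuffed to 2*fs
t = np.arange(255) - 127
h = np.hamming(255) * np.sinc(t / 2)             # low-pass at fs/2, DC gain ~2
y = np.convolve(up, h, mode='same')              # the 'filter'...

ref = np.sin(2 * np.pi * 3000 * np.arange(2 * len(x)) / (2 * fs))
print(np.max(np.abs(y - ref)[300:-300]) < 0.01)  # True: ...is the interpolator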


Story's papers do not touch our little affair here. He talks about digital anti-aliasing filters at the (oversampling) ADC. Seemingly similar, but oh so different when you look at the details. While the ringing introduced by a DAC's digital-filter-based reconstructor is a necessity to get to proper sinc reconstruction, when the same filter is used as an anti-alias filter at an ADC its ringing indeed spreads original signal energy and harms perceived fidelity. So what is good for an oversampling DAC appears to be bad for an oversampling ADC.

And that's the real mess of digital audio: Shannon's sweet "given f(t) with no components above fs/2", without telling us how to obtain that filtered f(t) without killing a lot of it.
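For reference, the reconstruction behind the theorem, in standard textbook form; note it presumes the band-limited f(t) is already in hand:

f(t) = \sum_{n=-\infty}^{\infty} f(nT)\,\operatorname{sinc}\!\left(\frac{t - nT}{T}\right), \qquad T = \frac{1}{f_s}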

That's why 44.1k initially was a very bad idea.

Remember my initial 'let's do without filtering altogether and sample at 192k or more'?
 
Every low-pass filter is an interpolator. And every interpolator is, in this context, a low pass filter.
These are separate blocks. I'd like to see a filter that leaves every Nth sample intact.

Story's papers do not touch our little affair here. He talks about digital anti-aliasing filters at the (oversampling) ADC.
No, he's looking at both. Check the table on page 3.

That's why 44.1k initially was a very bad idea.
I'm still not against that. But unless we build a non-oversampling time machine that will take us back to 1970 or so, there's not a terrible lot we can do about it. We'll just have to work with what's out there.

Remco
 
Re: Didn't someone quote dCS?

alvaius said:

If you pad with a copy of the data, then you are replicating the sampling (or, if you want to think of it that way, the modulating) waveform at 44.1kHz, and hence if you increase to 176kHz you will still have images between 22 and 44 and there would be no benefit to oversampling.

Right. But the aperture effect of the copy padding would reduce the treble portions of all images somewhat.

However, if you pad with 0s, you are adding impulse responses and you increase the effective sample rate. The images are now around 176kHz, and not 44kHz. And the images are still only 22kHz wide! Hence you could filter at 40kHz and not 20kHz. That is why oversampling works.

No. It would still have all the original images at their original positions, only (all of them) lower in level as total signal energy has been lowered by addition of the zeroes.

Think of it: if your statement were true, then digital filter components would be unnecessary. It would amount to magic: you don't do a thing (well, not much), and the result would be massive.

I think there was a Hifi News article by Keith Howard documenting the spectrum of a zero-padded stream.
 
ultranalog said:

These are separate blocks. I'd like to see a filter that leaves every Nth sample intact.


y = x[0] + 0.57 x[1] - 0.06 x[3] + 0.57 x[-1] - 0.06 x[-3]

Feed it a twice oversampled zero-padded input, as is customary in this business.

-30dB at the original (pre-oversampled) 0.75fs.
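A quick sanity check (my own, assuming numpy) that this really does leave the original samples intact on a 2x zero-padded stream: all taps other than the unity centre tap sit at odd offsets, so at the original sample positions they only ever land on the padding zeroes.

import numpy as np

h = np.array([-0.06, 0, 0.57, 1, 0.57, 0, -0.06])  # taps at offsets -3..+3

x = np.random.default_rng(0).standard_normal(256)  # arbitrary original samples
up = np.zeros(2 * len(x)); up[::2] = x             # 2x zero-padded input
y = np.convolve(up, h, mode='same')
print(np.allclose(y[::2], x))                      # True: every 2nd sample intact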

Another example: our famous Sinc function.


No, he's looking at both. Check the table in page 3.

Maybe because he ... wants to listen to the signal? A bit hard to do without a DAC. Read properly: the article is about ADC AA filters.
 
y = x[0] + 0.57 x[1] - 0.06 x[3] + 0.57 x[-1] - 0.06 x[-3]
Since you're not using even-order denominators, the filter is not very efficient. Making it separate blocks allows for better suppression, even with non-sinc coefficients.
Maybe because he ... wants to listen to the signal? A bit hard to do without a DAC.
Interesting then, isn't it, that combinations 3, 5 and 6 all use the same ADC but differ in their images?

Remco
 