Kurt,
Well, I do not know where you got this idea, but it surely is not compliant with the Nyquist theorem.
Surely you realise that Nyquist does not come into this at all? It is strictly Shannon/Weaver and basic information theory.
If I take a 16-bit/1Fs signal and use some form of digital manipulation to re-describe it with more data points, in both the amplitude and time domains, the minimum loss (some loss is always inherent to such a process) occurs if I can precisely decode the new data, in both the time domain and the amplitude domain, with the full accuracy of the new data.
Of course, if we use a Delta-Sigma DAC or a "hybrid" DAC we may as well give up any hope of ever getting anything resembling the original signal, as the underlying system is severely deficient in actual resolution anyway.
So in this case the challenge is to find a combination of filter, modulator and whatnot that gives good sound.
In oversampling the data converted is the original 16-bit data, but with estimated data interpolated between the original samples.
You realise that this is patently untrue, I hope?
If you apply 4x oversampling you do not get three new data points plus the original data point; you get four completely new data points.
What you describe would be pretty much linear interpolation, which is not used in digital filters but can be implemented with multi-bit DACs, with or without additional digital filtering (see the Cambridge Audio CD-2 and the "ultimate TDA1541 DAC" Metathread for more on that approach).
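For illustration, here is a minimal numpy sketch (not any particular product's implementation) of 4x linear interpolation between existing samples, the approach described above:

```python
import numpy as np

def linear_upsample_4x(x):
    """4x 'oversampling' by straight-line interpolation: the original samples
    are kept and three estimated points are inserted between each pair."""
    x = np.asarray(x, dtype=float)
    n = np.arange(len(x))                          # original sample instants
    n4 = np.arange(4 * (len(x) - 1) + 1) / 4.0     # 4x denser time grid
    return np.interp(n4, n, x)

print(linear_upsample_4x([0.0, 1.0, 0.5, -0.5]))
# the originals reappear at every 4th position, with straight-line values in between
```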
To me the choice is obvious, but I would actually prefer upsampling, which is the real jitter killer.
Hmm, it "kills" time domain jitter, but it cannot remove the fact that our incoming data is subject to timing variations. So what it does, it shifts the variation out of the simple time domain modulation into a complex amplitude/phase modulation.
NOS DAC discussions belong in NOS DAC related threads.
Agreed, Non-OS is not the only way to get good sound (but it is one of the easier ones). I for one would like to see a DAC that is not Non-OS and still produces state-of-the-art sound quality and especially realism.
So again, K+H, get on with it instead of badmouthing Non-OS DACs and getting involved in arguments about basic information theory, which you will lose, given your grasp of what really goes on in digital filters etc.
Ciao T
Agreed, Non-OS is not the only way to get good sound (but it is one of the easier ones). I for one would like to see a DAC that is not Non-OS and still produces state-of-the-art sound quality and especially realism.
Ciao T
OK, I'm sure most of us have realized by now that your preferences are different from ours. Maybe you should realize that you can talk badly about CS chips and well about NOS until X-mas without any effect on this project.
If you are so much into NOS, I really do not understand why you haven't made your own NOS project in here. Feel free to do so!
We already invited you to a session where you can enjoy a CS-based DAC that will bring you all the realism you ever wanted. Until you accept this offer, please keep on track and stop wasting both your own and our time 🙂.
[snip]If you apply 4x oversampling you do not get three new data points plus the original data point; you get four completely new data points. [snip]Ciao T
Thorsten,
Doesn't that depend on whether the os is synchronous or asynchronous? Isn't it the case that for sync os the original datapoint is kept and augmented with 3 'new' data points?
jd
Hi,
OK, I'm sure most of us have realized by now that your preferences are different from ours.
They should not be, assuming we are interested in producing realistic reproduction of music.
Maybe you should realize that you can talk badly about CS chips
I have not even mentioned CS for probably a dozen posts by now.
If you are so much into NOS, I really do not understand why you haven't made your own NOS project in here. Feel free to do so!
I did, not particularly here of course, and about a decade ago. 😛
(and it did use shunt regulators too!)
Now there are many Non-OS DACs of varying quality; no need for another one.
We already invited you to a session where you can enjoy a CS-based DAC that will bring you all the realism you ever wanted. Until you accept this offer, please keep on track and stop wasting both your own and our time 🙂.
Hmm, I said several times that it was your project and you should do it the way you like.
I am not the one who badmouths anything, nor do I make seriously wrong and equally seriously dismissive statements about a given technology.
Right now I am merely correcting these.
Past that, if you really want to max out your DAC I'd first not use 3-pin regulators to directly supply any given part of the circuit (nor as a CCS), and I would not use the LC Audio shunts but look at something which does not have the extra noise from the Zener diodes.
Discrete analogue stages we have covered; as you guys dislike (J)FETs without good reason, just make sure the base current modulation is taken care of (keep all impedances in the bases of the transistors very low).
I'd still suggest using a receiver that provides low jitter before the ASRC, and making the ASRC bypassable as well.
Use any DAC you like, even CS if you must. If a DAC has some form of "Vref" pin(s), use local shunts to produce the required reference voltage directly, rather than via chip-internal resistor dividers and external capacitors.
Looking carefully at the digital filter response and out-of-band noise when selecting the DAC would be another suggestion.
Of course, there is no need to consider any of my suggestions and I shall not repeat them again.
Ciao T
Hi,
Doesn't that depend on whether the os is synchronous or asynchronous? Isn't it the case that for sync os the original datapoint is kept and augmented with 3 'new' data points?
Most (almost all) digital filters do not work like that. They use a long delay line with multiple taps and then sum the outputs of different taps along the length, according to certain weighting factors, to give the desired response. This means any given sample at the output is actually made up of many input samples, "smeared around" in time, for want of a better word.
So the original sample is gone, forever. The new sample is something new, with a more or less tenuous relationship to the source (and it is even further removed from the original acoustic event, which was captured by a microphone and then amplified and converted to digital, at a minimum).
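As a rough sketch of the tapped-delay-line picture above (a generic direct-form FIR, not any particular filter chip), where each output sample is a weighted sum over many input samples:

```python
import numpy as np

def fir_filter(x, taps):
    """Direct-form FIR: a delay line holding past inputs; every output sample
    is the weighted sum of all the taps along that delay line."""
    delay_line = np.zeros(len(taps))
    y = np.empty(len(x))
    for i, sample in enumerate(x):
        delay_line = np.roll(delay_line, 1)    # shift the delay line by one
        delay_line[0] = sample                 # newest sample enters at the first tap
        y[i] = np.dot(taps, delay_line)        # weighted sum over all taps
    return y

# a single impulse comes out "smeared" over the whole length of the filter
taps = np.hanning(9) / np.hanning(9).sum()
impulse = np.zeros(16); impulse[0] = 1.0
print(fir_filter(impulse, taps))               # the output is just the tap weights, padded with zeros
```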
Ciao T
Hi,
Most (almost all) digital filters do not work like that. They use a long delay line with multiple taps and then sum the outputs of different taps along the length, according to certain weighting factors, to give the desired response. This means any given sample at the output is actually made up of many input samples, "smeared around" in time, for want of a better word.
So the original sample is gone, forever. The new sample is something new, with a more or less tenuous relationship to the source (and it is even further removed from the original acoustic event, which was captured by a microphone and then amplified and converted to digital, at a minimum).
Ciao T
OK, I thought you were talking about an SRC ahead of the DAC, not a digital filter (these are different, no?). I assumed that a sync SRC would need an accurate clock somehow linked to the incoming sample stream and would then synchronously insert the additional samples. And if that SRC clock was not accurately linked (locked?) to the incoming rate you would actually increase your jitter and/or increase sample value errors.
jd
Hi,
OK, I thought you were talking about an SRC ahead of the DAC, not a digital filter (these are different, no?).
Not really. Why should they be fundamentally different? Both are at the core digital filters. The responses differ (the ASRC filters tend to be much more primitive than the reconstruction filters in DACs), and if an ASRC is used for "upsampling" it will mainly dominate the subsequent system, but you will also get fun interactions. These interactions are best seen as "effects", but switching among them can be endless fun (as long as you are paid for it anyway; if not, I have better things to do).
I assumed that a sync SRC would need an accurate clock somehow linked to the incoming sample stream and would then synchronously insert the additional samples.
Then its impulse response would still be a single pulse, not the usual ringing mess. The ringing, to be precise, is in fact "images" of the original pulse added into the signal by the delay line, just as the rounded top is a result of the summing factors of the same delay line.
In fact, a suitable delay line with suitable taps can make very steep analogue filters; they used to be used in colour TV decoders, where they were mostly surface-wave devices.
And if that SRC clock was not accurately linked (locked?) to the incoming rate you would actually increase your jitter and/or increase sample value errors.
Interesting view.
In reality what a sample rate converter does is much more prosaic. It keeps a lot of samples in a memory buffer and uses a rate estimator (this is the actual crucial part) to work out the difference between the incoming and outgoing sample rates. It then selects a suitable configuration from a lookup table.
Clearly, the rate estimator can at best only use as many samples as are in memory for the whole buffer. This generally represents the limit in terms of real jitter rejection; most SRCs have around 100 or so samples in memory, so for a 44.1kHz sample rate the jitter corner for the ASRC would be around 441Hz.
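As a toy sketch of the idea (invented numbers, not any real SRC chip): a first-order averaging rate estimator, plus the back-of-envelope corner-frequency figure quoted above, which assumes the averaging spans only about 100 samples:

```python
import numpy as np

def track_period(measured_periods, n_avg=100):
    """Toy rate estimator: running (first-order) average of the input sample
    period as measured against the chip's own master clock."""
    est = measured_periods[0]
    out = []
    for p in measured_periods:
        est += (p - est) / n_avg               # longer averaging = lower jitter corner
        out.append(est)
    return np.array(out)

fs = 44_100.0
rng = np.random.default_rng(1)
jittery_periods = (1.0 / fs) * (1.0 + 1e-6 * rng.standard_normal(2000))
print(track_period(jittery_periods)[-1] * fs)  # ~1.0: the tracked period, in units of 1/fs
print(fs / 100)                                # 441 Hz: the corner if only ~100 samples are averaged
```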
Of course, the output of the ASRC is not jittered in the time domain (if locked to a suitably precise reference clock). Yet the jitter cannot have been buffered by the ASRC.
So, where has it gone? That is the question for Sherlock "The Fiddler" Holmes...
Of course, most DIYers and also most professional EEs only see ASRCs as "black boxes" with miraculous powers.
They remove jitter, they magically create 192/24 music from a 44/16 source. They bring world peace (especially in the Middle East and Kreplakisthan), feed the hungry, cure cancer and walk on water...
Of course, they do none of this whatsoever (and not just the walking on water part).
Ciao T
[snip]Interesting view.
In reality what a sample rate converter does is much more prosaic. It keeps a lot of samples in a memory buffer and uses a rate estimator (this is the actual crucial part) to work out the difference between the incoming and outgoing sample rates. It then selects a suitable configuration from a lookup table.
Clearly, the rate estimator can at best only use as many samples as are in memory for the whole buffer. This generally represents the limit in terms of real jitter rejection; most SRCs have around 100 or so samples in memory, so for a 44.1kHz sample rate the jitter corner for the ASRC would be around 441Hz.
Of course, the output of the ASRC is not jittered in the time domain (if locked to a suitably precise reference clock). Yet the jitter cannot have been buffered by the ASRC.
So, where has it gone? That is the question for Sherlock "The Fiddler" Holmes...[snip]Ciao T
Right, it's the rate estimator I meant, didn't know the word. So, if the RE is off, you get samples that come out unjittered (assuming a perfect clocking out) but they may have the wrong value, correct?
jd
Hi,
Right, it's the rate estimator I meant, didn't know the word. So, if the RE is off, you get samples that come out unjittered (assuming a perfect clocking out) but they may have the wrong value, correct?
Not quite. First, the rate estimator cannot really be "off". It will have to follow the source sample rate or risk running out of samples (if that sounds like a PLL, that is correct; it is basically a digital PLL).
Since it does have to track the source, it will dynamically switch between different digital filters. So the input jitter is converted into changes of the digital filter response, which means phase (and frequency) response modulation, on top of some amplitude modulation.
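A toy fractional resampler (again a sketch, not a real ASRC) shows the mechanism: the fractional read position selects the interpolation coefficients, so an error in the tracked rate shows up as coefficient (and hence amplitude) modulation rather than as output timing error:

```python
import numpy as np

def resample(x, ratios):
    """Toy ASRC core: the read pointer advances by the estimated rate ratio each
    output sample; the fractional part of the pointer selects the interpolation
    weights, so a wobbling estimate changes coefficients, not output timing."""
    y, pos = [], 0.0
    for r in ratios:
        if pos >= len(x) - 1:
            break
        i, frac = int(pos), pos % 1.0
        y.append((1.0 - frac) * x[i] + frac * x[i + 1])   # 2-tap "filter" picked by frac
        pos += r
    return np.array(y)

x = np.sin(2 * np.pi * np.arange(64) / 16)
steady = resample(x, [0.5] * 200)                                    # exact 2:1 ratio
wobbly = resample(x, [0.5 + 0.02 * (-1) ** k for k in range(200)])   # jittering estimate
n = min(len(steady), len(wobbly))
print(np.max(np.abs(steady[:n] - wobbly[:n])))   # the timing error re-emerges as an amplitude error
```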
Whether the result is more or less audible than the original jitter, and/or more annoying, is another story and in general seems to be quite a complex issue.
Ciao T
Thorsten,
Doesn't that depend on whether the os is synchronous or asynchronous? Isn't it the case that for sync os the original datapoint is kept and augmented with 3 'new' data points?
jd
This question always bugged me, too...😕
Let's have a look at the frequency response of a classic digital filter (sync 4x OS, SM5841). Assuming this curve is generated using a full-scale 16-bit file/track, and assuming that we can live with 0.2dB scaling to avoid overflow... then what about that pesky +/-0.03dB ripple... it seems that most of the output samples are new/rewritten.
[Attachment: frequency response plot of the SM5841 4x OS digital filter]
Thorsten,
Doesn't that depend on whether the os is synchronous or asynchronous? Isn't it the case that for sync os the original datapoint is kept and augmented with 3 'new' data points?
jd
Jan is correct: an exact-frequency-multiple convolutional sin(x)/x interpolation preserves the original samples exactly. The sin(x)/x is zero at all the "right" places. If you think about it, it could be no other way; otherwise it would violate first principles.
FIR filter misunderstandings
Most (almost all) digital filters do not work like that. They use a long delay line with multiple taps and then sum the outputs of different taps along the length, according to certain weighting factors, to give the desired response. This means any given sample at the output is actually made up of many input samples, "smeared around" in time, for want of a better word.
Must've been a while since you really looked in detail at the datasheets then. They don't in general use a long delay line; they tend to use several shorter ones, running at different rates. Take a plain vanilla 8x OS chip from NPC as an example - from memory it has 3 stages: the first one does fs->2fs and it's by far the longest, as it has the most stringent requirements. A shorter stage does 2fs->4fs and the third stage is the shortest of all, from 4fs->8fs. Such a multirate structure is a much more efficient use of resources than a single filter, which would need to run at the output rate (8fs).
So the original sample is gone, forever. The new sample is something new, with a more or less tenuous relationship to the source (and it is even further removed from the original acoustic event, which was captured by a microphone and then amplified and converted to digital, at a minimum).
As Scott has pointed out, the sin(x)/x function ensures that the original samples get multiplied by unity and all other samples find a zero as their multiplier, thus preserving the original values. In the first filter, the even positions (say) are stuffed with zeros and the odd positions hold the originals. The impulse response being sin(x)/x gives us the originals back at the odd positions, and sin(x)/x interpolations at the even ones. Rinse and repeat a further couple of times.
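A small numpy check of this (a windowed-sinc toy, not the actual NPC coefficients): two cascaded 2x zero-stuff-and-filter stages, after which the original samples reappear unchanged at every fourth output position:

```python
import numpy as np

H = 15                                         # half-length of the toy interpolation filter

def halfband_taps():
    """Windowed sinc(n/2): zero (to rounding) at every even offset except the
    centre tap, which is exactly 1 - so original samples pass through unchanged."""
    n = np.arange(-H, H + 1)
    return np.sinc(n / 2.0) * np.hanning(2 * H + 1)

def upsample_2x(x, taps):
    stuffed = np.zeros(2 * len(x))
    stuffed[::2] = x                           # zero-stuffing: originals in the even slots
    return np.convolve(stuffed, taps)          # interpolation filter fills in the zeros

x = np.random.default_rng(0).standard_normal(32)
taps = halfband_taps()
y = upsample_2x(upsample_2x(x, taps), taps)    # two 2x stages = 4x, as in the NPC-style chips

delay = 2 * H + H                              # position of the first original sample in the output
print(np.allclose(y[delay:delay + 4 * len(x):4], x))   # True: the originals survive intact
```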
This question always bugged me, too...😕
Let's have a look at the frequency response of a classic digital filter (sync 4x OS, SM5841). Assuming this curve is generated using a full-scale 16-bit file/track, and assuming that we can live with 0.2dB scaling to avoid overflow... then what about that pesky +/-0.03dB ripple... it seems that most of the output samples are new/rewritten.
Yep, in the case of a 4x filter it would indeed be the case that 75% of the samples are new/rewritten. Here, of course, the 0.2dB scaling means all bets are off...
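A quick check of the scaling point (hypothetical sample values): even a -0.2dB gain rewrites every 16-bit value, including the samples an ideal interpolator would pass through untouched:

```python
import numpy as np

scale = 10 ** (-0.2 / 20)                      # the -0.2dB headroom scaling, about 0.9772
x = np.array([32767, -20000, 12345, 100])      # a few hypothetical 16-bit sample values
print(np.round(x * scale).astype(int))
# every value is altered, including samples an ideal interpolator would pass through unchanged
```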
Irony in digital
In reality what a sample rate converter does is much more prosaic. It keeps a lot of samples in a memory buffer and uses a rate estimator (this is the actual crucial part) to work out the difference between the incoming and outgoing sample rates. It then selects a suitable configuration from a lookup table.
Clearly, the rate estimator can at best only use as many samples as are in memory for the whole buffer.
The rate estimator needs to know nothing about what's in the samples themselves. It just keeps track of the ratio between input and output frequencies - in actuality, it will be the periods which are measured, relative to its own master clock. It does some averaging of those periods in order to get a better estimate.
This generally represents the limit in terms of real jitter rejection; most SRCs have around 100 or so samples in memory, so for a 44.1kHz sample rate the jitter corner for the ASRC would be around 441Hz.
Your explanation falls over when datasheets are inspected - AD's 1896 does quite a bit better than 441Hz in its slow mode - it has a corner frequency of about 3Hz with a 30MHz master clock. That's with a 192kHz output frequency too - over 4x more data coming out, so your explanation would require its corner frequency to be around 1900Hz. Quite a big difference.
Of course, the output of the ASRC is not jittered in the time domain (if locked to a suitably precise reference clock). Yet the jitter cannot have been buffered by the ASRC.
So, where has it gone? That is the question for Sherlock "The Fiddler" Holmes...
The simple answer is - it was never there in the first place, assuming that the samples were indeed recorded with a clean clock. The jitter was just an artifact of transmission. Elementary...
Of course, most DIYers and also most professional EEs only see ASRCs as "black boxes" with miraculous powers.
The (rather delicious) irony here is that you're clearly in the 'camp' you yourself demarcate in that you've not looked much 'under the hood'. By thine own words shall ye be justified....😀
More importantly, why do the myths of what is going on "under the hood" persist? This is a mathematical exercise: either understand it or stand aside. People like Wadia have done nothing but create a smokescreen of Bull Dada with their "French curve" fitting, which does nothing but allow images to occur.
Hi Scott,
Jan is correct: an exact-frequency-multiple convolutional sin(x)/x interpolation preserves the original samples exactly. The sin(x)/x is zero at all the "right" places. If you think about it, it could be no other way; otherwise it would violate first principles.
Hmmm, okay, I take your word for it.
It does leave the question, though, of why the actual output of the filter(s) clearly shows that the original sample is also manipulated. It may not be manipulated by the actual oversampling process, but it does not pass the digital filter intact. I do not, in all honesty, see the "factor one multiplication" applying even in pure theory; I'll have to refresh my theory, it seems.
Ciao T
Hi Scott,
Hmmm, okay, I take your word for it.
It does leave the question, though, of why the actual output of the filter(s) clearly shows that the original sample is also manipulated. It may not be manipulated by the actual oversampling process, but it does not pass the digital filter intact. I do not, in all honesty, see the "factor one multiplication" applying even in pure theory; I'll have to refresh my theory, it seems.
Ciao T
You can draw the convolution process on a couple of sheets of paper and slide them past each other. What a particular implementation does is not of concern. People tweak filter coefficients to get what they want; you need to understand the underlying process and then move on to other possibilities.
Hi,
The rate estimator needs to know nothing about what's in the samples themselves. It just keeps track of the ratio between input and output frequencies
Never said anything different. But it has a limited number of samples to work with, and as the usual chips are designed for varispeed etc., they will not use a very large number of samples.
Your explanation falls over when datasheets are inspected - AD's 1896 does quite a bit better than 441Hz in its slow mode - it has a corner frequency of about 3Hz with a 30MHz master clock. That's with a 192kHz output frequency too - over 4x more data coming out, so your explanation would require its corner frequency to be around 1900Hz. Quite a big difference.
Now, under what conditions is the "slow mode" engaged? When you have tons of input jitter? Or if the input jitter is minimised?
The simple answer is - it was never there in the first place, assuming that the samples were indeed recorded with a clean clock. The jitter was just an artifact of transmission. Elementary...
It is amusing to send a large amount of jitter into an ASRC and then see what it does, in a close-in FFT. It is even more instructive to do the same thing afterwards with a well-designed secondary PLL reclocker.
The (rather delicious) irony here is that you're clearly in the 'camp' you yourself demarcate in that you've not looked much 'under the hood'. By thine own words shall ye be justified....😀
Actually, I have looked "under the hood" to a fair degree, and I find the observed performance inconsistent, to varying degrees, with the supposed and/or assumed theory of operation, enough to question whether things "under the hood" really work the way they should.
Ciao T
Hi,
Must've been a while since you really looked in detail at the datasheets then. They don't in general use a long delay line; they tend to use several shorter ones, running at different rates. Take a plain vanilla 8x OS chip from NPC as an example - from memory it has 3 stages: the first one does fs->2fs and it's by far the longest, as it has the most stringent requirements. A shorter stage does 2fs->4fs and the third stage is the shortest of all, from 4fs->8fs. Such a multirate structure is a much more efficient use of resources than a single filter, which would need to run at the output rate (8fs).
The result in mathematical terms for a multi-stage filter should be the same as for a single pass. As I said, I still remember essentially identically working filters in analogue hardware.
As Scott has pointed out, the sin(x)/x function ensures that the original samples get multiplied by unity and all other samples find a zero as their multiplier, thus preserving the original values.
So why do my measurements of sending an impulse to a DAC, first without digital filtering and then with it, show different peak amplitudes? If the original sample is preserved, should I not get the same peak amplitude?
Ciao T