The best resamplers are totally transparent, whether or not you think the process is "imperfect".
Whether something is transparent or not depends on the definition of transparency; for example, some people think delays are transparent while others don't.
Linearity is similar. In mathematics, y = mx + c is a linear relation from x to y; however, in systems theory the same relation is non-linear because it does not satisfy the superposition principle.
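To spell the superposition check out (my addition, just the standard definition applied to the example above): for f(x) = mx + c,
f(x1 + x2) = m(x1 + x2) + c, whereas f(x1) + f(x2) = m(x1 + x2) + 2c,
so additivity fails unless c = 0; likewise f(a*x) = a*m*x + c differs from a*f(x) = a*m*x + a*c. Only with c = 0 is the relation linear in the systems sense.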
Yet another example is that of a perfect code which, according to Information Theory, is neither a code capable of detecting and correcting all errors nor one that corrects all the errors it detects.
One problem is that there are few, if any, modern audio DAC chips that officially support sample rates below 44.1 kHz.
Also, simultaneous multirate support is something most operating systems and hardware platforms have problems with, even dedicated audio DSP chips.
Much easier to throw more computing power at the problem ;-)
I see the issue: I wasn't clear in stating that after the subwoofer FIR you convert the signal back up to the original sample rate.
The applicable term for y = mx + c is "affine".
All you need is a high-quality sound card, the correct software and a modern PC; you will have all the computing power you need.
I am waiting for the day that somebody claims (assuming identical implementations) that software executed on one processor sounds better than on another processor.
Obviously you are not reading any true audiophile forums, haha ;-)
They are discussing the sonic signatures of different RAMs and SSDs and some much more fancy stuff than these simple things, for example: https://www.audiosciencereview.com/...hancer-software-a-perfect-placebo-test.20955/
....after the subwoofer FIR you convert the signal back up to the original sample rate....
OK. Then again, you will need quality down-/upsampling algorithms, which usually means larger FIR filters for those stages, so at some point there is a break-even.
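As a rough sketch of where that break-even sits (my own back-of-the-envelope numbers, nothing from this thread): an EQ FIR spanning a fixed time window needs roughly 1/R as many taps at fs/R, and each tap runs R times less often, so the EQ itself gets about R^2 cheaper, while the two resampling filters add a cost that is independent of the EQ.
Code:
% Illustrative multiply-accumulate budget (all figures made up)
fs = 48000;                 % original sample rate
taps_full = 8192;           % bass EQ FIR length at the full rate
macs_full = fs * taps_full  % ~393 million MAC/s for direct convolution

R = 4;                      % decimation ratio
taps_low = taps_full / R;   % same impulse-response time span at fs/R
taps_rs  = 256;             % each resampling (anti-alias / anti-imaging) FIR
macs_low = (fs/R) * taps_low + 2 * fs * taps_rs   % EQ at fs/R plus the two resamplers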
....the sonic signatures of different RAMs and SSDs....
Just when I think the world can't get any more absurd, it does.
The problem with audiophiles (I know, I used to be one) is that they spend too much time listening to how music is reproduced as opposed to listening to the music. Mozart is Mozart whether played on a Baldwin or a Steinway.
....you will need quality down-/upsampling algorithms which usually means larger FIR filters....
No need for fancy upsampling algorithms if the upsampling ratio is an integer. For example:
Code:
% Read the source file (wavread/wavwrite are legacy functions; current MATLAB uses audioread/audiowrite)
[x, Fs, bits] = wavread('Come September.wav');
% 2x upsampling by simple sample repetition (zero-order hold)
y = zeros(2*size(x,1), size(x,2));   % preallocate at twice the length
y(1:2:end, :) = x;                   % odd output samples take the input samples
y(2:2:end, :) = x;                   % even output samples repeat them
wavwrite(y, 2*Fs, bits, 'C:\...\....\Resampled.wav');
Output file for 88.2kHz attached, rename to *.wav after downloading. However, with this method the reconstruction filter would have to be designed for the original Fs and not the resampled one, as the samples are simply being repeated.
....No need for fancy upsampling algorithms if the upsampling ratio is an integer....
That's what's assumed anyway, of course.
But the effort is the FIR filter, not the insertion of repeated samples -- which, by the way, just complicates the filter. It would be easier to insert zeros, because then the convolution algorithm can be simplified, even more so with a half-band filter kernel where every other coefficient is also zero except for the centre tap. But that was my point: a half-band filter, for example, is not recommended for lower sample rates because its images fall fully within the audible range. We need a filter that is well down in its stop band at (the downsampled) fs/2, and for a short filter with low ripple this means a wide transition region, etc. etc.
And the requirements for the downsampling filter are just as hard, because potential aliasing would again fall fully within the audible range (even though it is filtered thereafter, provided the stop band of the target filter is reached before the aliases appear).
So my take is that unless very specific conditions are met, there is not much benefit in using an intermediate downsampling stage to reduce the computational overhead of a low-frequency FIR filter for a woofer/subwoofer path, at least not when excellent quality is the goal. When we can live with a certain amount of artifacts it might be a worthwhile option, though.
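To illustrate the earlier point about inserted zeros simplifying the convolution, here is a small sketch (mine, not from the thread; it assumes MATLAB with the Signal Processing Toolbox for fir1/upsample): filtering the zero-stuffed signal is arithmetically identical to running the filter's polyphase branches at the low rate, which is where the saving comes from, and with a half-band kernel one of the two branches collapses to little more than a single tap.
Code:
L  = 2;
x  = randn(1000, 1);               % low-rate test signal
h  = fir1(62, 1/L);                % 63-tap lowpass, cutoff at the old Nyquist

y_direct = filter(h, 1, upsample(x, L));   % convolve the zero-stuffed signal at the high rate

h0 = h(1:2:end);  h1 = h(2:2:end);         % polyphase branches of the same filter
y0 = filter(h0, 1, x);                     % each branch runs at the LOW rate
y1 = filter(h1, 1, x);
y_poly = zeros(L*length(x), 1);
y_poly(1:2:end) = y0;                      % interleave the branch outputs
y_poly(2:2:end) = y1;

max(abs(y_direct - y_poly))                % agrees to within rounding error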
....easier would be to use inserted zeros because then the convolving algo can be simplified....
The upsampler, in my understanding, is the last stage before D/A conversion. What convolution would you want to perform after that? The reconstruction filter I mean is analogue, and its phase linearisation is already assumed to be included in the FIR filter, just as you'd want to do for the loudspeaker.
Input => anti alias => downsample => EQ / net phase EQ => upsample => DAC.
The delay introduced by using repeated samples is literally nothing, as all we do is pass the same word n times within every sample period. The nominal Fs is increased so that the otherwise incompatible equipment becomes compatible:
[--------a--------] [--------b--------] [--------c--------] [--------d--------]
[---a---] [---a---] [---b---] [---b---] [---c---] [---c---] [---d---] [---d---]
If downsampling is acceptable then upsampling shouldn't be an issue at all. Besides, sub-band coding/decoding has been used very successfully in many codecs since the 1980s, for example Dolby AC-3.
....there is not much benefit from using an intermediate downsampling stage to achieve reduced computational overhead in a low frequency FIR filter....
There may not be much benefit in using sub-band resampling, but if the user has already chosen downsampling (despite its drawbacks), an essentially "lossless" upsampling should not discourage him/her from trying the method !!

What are the products where increased processor efficiency is still essential? Asking because it seems like it shouldn't matter much anymore...
....The upsampler in my understanding is the last stage before D/A conversion. What convolution would you want to perform after that ?....
....Input => anti alias => downsample => EQ / net phase EQ => upsample => DAC....
Input => anti-alias filter => downsample (remove samples) => EQ / net phase EQ => upsample (insert zero or repeated samples) => anti-imaging filter => DAC.
I see what you mean with moving that anti-imaging filter after the DAC in the analog domain, but the requirements for that filter are quite brutal as we all know.
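Here is one way that chain could be strung together, as a sketch only (my code, with made-up filter lengths, a placeholder EQ filter and the MATLAB Signal Processing Toolbox assumed; it is not anyone's actual implementation):
Code:
% anti-alias -> decimate by R -> low-rate EQ FIR -> zero-stuff -> anti-imaging
fs = 48000;  R = 4;
x  = randn(fs, 1);                     % one second of test input

h_aa = fir1(255, 0.9/R);               % anti-alias lowpass, cutoff just below fs/(2R)
x_lo = downsample(filter(h_aa, 1, x), R);

h_eq = fir1(511, 0.5);                 % placeholder for the low-frequency EQ / phase EQ FIR
y_lo = filter(h_eq, 1, x_lo);

u    = upsample(y_lo, R);              % insert R-1 zeros between samples
h_ai = fir1(255, 0.9/R);               % anti-imaging lowpass back at the full rate
y    = R * filter(h_ai, 1, u);         % gain of R restores the level lost to the zeros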
The repeated samples simply retain the waveshape, assuming an R-2R NOS DAC would be used. But the transition between fs/2 and fs is exactly one octave, and a filter generally gives 6 dB/octave/order, so we're talking about at least a 10th-order filter here, maybe a 20th !!
The zero padding method only reserves the extra time slots for upsampling but does not interpolate between existing samples and therefore requires an interpolation filter, which is typically FIR.
Links to down-/upsampling modules available for some of the Analog Devices DSPs:
https://wiki.analog.com/resources/tools-software/sigmastudiov2/modules/advanceddsp/downsampling
https://wiki.analog.com/resources/tools-software/sigmastudiov2/modules/advanceddsp/upsampling
....The repeated samples simply retain the waveshape, assuming an R-2R NOS DAC would be used....
Well, you are basically introducing a ZOH with the associated HF drop (3.6dB@fs) which must be compensated, and you have the ZOH images, which are not any less severe than the ones created by zero samples, so the requirements for the anti-imaging filter are no different.
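For anyone who wants to put numbers on that droop and those images, here is a quick sketch (mine; repeating every sample N times is the same as convolving the zero-stuffed signal with N ones, and freqz is assumed from the Signal Processing Toolbox):
Code:
% Frequency response of N-times sample repetition (a digital zero-order hold)
N = 2;                            % repetition factor
h = ones(1, N);                   % repeating = filtering the zero-stuffed signal with N ones
[H, w] = freqz(h, 1, 1024);       % w runs from DC towards the Nyquist of the NEW (higher) rate
plot(w/pi, 20*log10(abs(H)/N)); grid on
xlabel('frequency relative to the new Nyquist'); ylabel('dB');
% the old Nyquist sits at w/pi = 1/N: read the droop off there, and note that
% the response above it does not reject the images by much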
The zero padding method only reserves the extra time slots for upsampling but does not interpolate between existing samples and therefore requires an interpolation filter, which is typically FIR.
4:1 upsampling example:
....you are basically introducing a ZOH with the associated HF drop which must be compensated....
Well, it shouldn't be difficult to compensate that sin(x)/x envelope in the EQ stage, along with the response of the loudspeaker.
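A sketch of how that compensation could be folded into the low-rate EQ design (my own illustration, not a recommendation from the thread; fir2 is assumed from the Signal Processing Toolbox and the target response is the exact droop of repeat-by-N):
Code:
% Pre-emphasis for the repeat-by-N droop, designed at the LOW rate
N = 4;                                      % repetition factor used later in the chain
f = linspace(0, 1, 64);                     % 0..1 = DC..Nyquist at the low rate
w = pi * f / N;                             % same frequencies expressed at the high rate
droop = abs(sin(N*w/2) ./ (N*sin(w/2)));    % magnitude of repeating each sample N times
droop(1) = 1;                               % fix the 0/0 at DC
h_pre = fir2(254, f, 1./droop);             % FIR whose magnitude target is the inverse droop
% in practice this target would simply be multiplied into the loudspeaker EQ target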
....the ZOH images which are not any less severe than the ones created by zero samples....
I'm not saying that the images would be very different. They need to be adequately rejected by the filter that follows, since they are still within 20kHz and possibly quite audible. To be honest, this issue is caused by the act of downsampling; the upsampler simply passes the signal through and is therefore not to blame.
Proper upsampling interpolation is done by stuffing zeros in the intermediate samples between the existing ones, so for example upsampling at 1:2 would have:
sample at time 1
0.0
sample at time 2
0.0
sample at time 3
0.0
...and so on.
This does not introduce extra energy into the signal and also avoids the zero-order hold problem. However, you really need an anti-imaging filter at the upper sample rate, because the input is effectively modulated by a carrier at Fs/2, so images will appear at the same level as the input signal. If the input signal is full bandwidth then the filter has to be a brick wall, but the passband for the aforementioned subwoofer EQ is a small fraction of the sampling rate, so the filter requirement can be relaxed considerably. I'll elaborate further, but the margins of this comment are too narrow to contain the proof.
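A small sketch of that relaxation (mine, with arbitrary example numbers; upsample/fir1 from the Signal Processing Toolbox assumed): for a signal that only occupies the bottom of the band, the first image after 1:2 zero-stuffing lands far from the passband, so even a short, gentle lowpass removes it, whereas a full-bandwidth signal really would need a brick wall.
Code:
% Images after 1:2 zero-stuffing of a band-limited (subwoofer-like) signal
fs_lo = 6000;                     % example low rate, e.g. a decimated sub path
t = (0:fs_lo-1)'/fs_lo;           % one second
x = sin(2*pi*60*t);               % 60 Hz test tone, far below fs_lo/2
u = upsample(x, 2);               % zero-stuff: an image of the tone appears at fs_lo - 60 = 5940 Hz
h = fir1(30, 0.1);                % short, relaxed anti-imaging lowpass (cutoff 600 Hz at 12 kHz)
y = 2 * filter(h, 1, u);          % factor 2 restores the level lost to the zeros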
It's probably a lot easier if one takes a good look at the frequency spectrum (at the various stages of the process) to immediately understand what is going on:
http://smartdata.ece.ufl.edu/eee5502/2019_fall/media/2019_eee5502_slides24.pdf
Any sampling (including natural sampling or pulse amplitude modulation) will have (more or less) the same baseband signal repeating (aliases or images) infinitely at regular frequency intervals. Downsampling attempts to move this interval down, while upsampling moves it up.
It is thus not possible for an upsampler to be "transparent", as it involves either interpolation or the insertion of fake samples in time. The "most transparent" upsampling method is repeating samples, which keeps the time-domain waveform but requires reconstruction at the original Fs/2 (not the upsampled one), which is clearly a disadvantage, as if the "upsampling" never happened in the first place !!