Passive pre-filtering can slow the signal seen by the opamp down to the region where the opamp is well behaved.
Passive pre-filtering like what?
The op-amp is usually used as a low-pass filter; the article makes a case for using video op-amps as LPFs, and that's very interesting!
It is fast heading for the usual causality denial paradigm.
Causality = latency is true, yes, but latency can be vastly reduced using IIR, minimum-phase or filterless / NOS.
I'd lean toward agreeing that you and others are right that latency is not very important in pure audio, aside from audio/visual sync.
I just cited the "media players / ASIO / Kernel Streaming sounds different" for completeness.
If you connect a MIDI synthesizer, an electric guitar or similar to a PC at home, though, then the audio playback needs to be in sync with the physical strike.
That kind of latency is perceivable well under 100 ms; I'd suspect it's more like 10 ms.
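For a rough sense of scale, here is a tiny Python sketch (the buffer sizes are illustrative assumptions, not claims about any particular driver or interface) showing how buffer size and sample rate translate into per-buffer latency:

```python
# Back-of-envelope latency figures; buffer sizes here are illustrative,
# not measurements of any particular driver or interface.
for frames in (64, 256, 1024):
    for rate in (44100, 96000):
        latency_ms = 1000.0 * frames / rate
        print(f"{frames:5d}-frame buffer @ {rate} Hz -> {latency_ms:5.1f} ms per buffer")
```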
It is fast heading for the usual Shannon-Nyquist denial paradigm.
I am not personally aware of whether DSD-Wide, DSD128, DSD256, DSD512 or something else I've missed is in conflict with Shannon-Nyquist.
If you are, I'm sure your knowledge would be appreciated.
It is fast heading for the usual Fourier denial paradigm.
I looked around for a while for the controversy and haven't really seen any.
A waveform is a sum of sines, and each sine's frequency is defined by its cycle time.
I don't personally see how this is in conflict with video op-amps or DSD.
Ah, that's not engineering dynamic range, that's perceptual dynamic range. Easy to confuse the two, but they're not equivalent. With shaped dither the engineering (measured) dynamic range will actually be lower than 96 dB by quite a bit, seeing as the shaping can only move noise around (within the 22 kHz bandwidth); it can't remove it. The shaped dither gets its increased perceived dynamic range by moving noise away from the 2-4 kHz band, where the ear is most sensitive, and piling it up at higher frequencies (where the ear isn't so sensitive).
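To make the "moving noise around" idea concrete, here is a minimal Python sketch (my own illustration, not from the thread): it quantizes a 1 kHz tone to 16 bits once with plain TPDF dither and once through a simple first-order error-feedback shaper, then compares how much of the noise power lands in the 2-4 kHz region versus the top octave. A real shaped-dither curve is higher order and psychoacoustically weighted, but the redistribution effect is the same in kind.

```python
import numpy as np

fs = 44100
n = 1 << 16
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)   # 1 kHz test tone at -6 dBFS
q = 1.0 / 32768                          # one 16-bit LSB (full scale = +/-1.0)

def tpdf(size):
    # Triangular (TPDF) dither, +/-1 LSB peak
    return (np.random.rand(size) - np.random.rand(size)) * q

# Plain 16-bit quantization with flat TPDF dither
flat = np.round((x + tpdf(n)) / q) * q

# First-order error-feedback noise shaper: the quantization error of the previous
# sample is subtracted from the next input, which pushes the noise toward high
# frequencies.  Real "shaped dither" uses higher-order, psychoacoustically
# weighted curves, but the effect is of the same kind.
shaped = np.empty(n)
err = 0.0
d = tpdf(n)
for i in range(n):
    u = x[i] - err
    y = np.round((u + d[i]) / q) * q
    err = y - u
    shaped[i] = y

def band_share_db(noise, lo, hi):
    # Fraction of the total noise power falling between lo and hi, in dB
    spec = np.abs(np.fft.rfft(noise * np.hanning(n))) ** 2
    f = np.fft.rfftfreq(n, 1 / fs)
    sel = (f >= lo) & (f < hi)
    return 10 * np.log10(spec[sel].sum() / spec.sum() + 1e-30)

for name, y in (("flat dither  ", flat), ("shaped dither", shaped)):
    noise = y - x
    print(name,
          "| 2-4 kHz share: %6.1f dB" % band_share_db(noise, 2000, 4000),
          "| 16-20 kHz share: %6.1f dB" % band_share_db(noise, 16000, 20000))
```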
OK, with shaped dither we can move noise away from the 2-4 kHz area.
However, can the music signal then vary by 120 dB in the 2-4 kHz area as well, or is it still limited to a 96 dB ceiling?
Passive pre-filtering like what?
I have several examples of passive post-I/V filters on my blog. I'm even using a kind of opamp after the LC filter in my current DAC incarnation (AD815).
OK, with shaped dither we can move noise away from the 2-4 kHz area, but can the music signal vary by 120 dB in the 2-4 kHz area, or is it still limited to 96 dB?
It's an interesting question, one I'm not sure I know the answer to. Instantaneously the answer would have to be 'no': the instantaneous SNR would have to be limited by the basic quantization noise. However, with a bandpass filter in place (passing only 2-4 kHz), better than 96 dB can certainly be achieved (engineering-wise), as the bandwidth is considerably lower than the full 22 kHz.
I saw an FFT the other day (over here: Conclusive "Proof" that higher resolution audio sounds different - Page 82) where the effect of using shaped dither can clearly be seen in the plot. Note how the noise 'floor' disappears off the plot in the critical 4 kHz region (the second FFT in the post; the first uses normal dither).
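As a rough check of the bandwidth argument (my own arithmetic, assuming flat quantization noise and the standard 6.02N + 1.76 dB figure), restricting the measurement to the 2-4 kHz band improves the in-band SNR by 10·log10(full bandwidth / band width):

```python
import math

bits = 16
full_bw = 22050.0          # Nyquist bandwidth of 44.1 kHz audio, Hz
band_bw = 4000.0 - 2000.0  # width of the 2-4 kHz band discussed above, Hz

snr_full = 6.02 * bits + 1.76                    # full-scale sine vs full-band quantization noise
band_gain = 10 * math.log10(full_bw / band_bw)   # flat noise power scales with bandwidth
print("full-band SNR: %.1f dB" % snr_full)
print("2-4 kHz SNR  : %.1f dB" % (snr_full + band_gain))   # ~108 dB before any shaping
```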
Human hearing does use bandpass filters: critical bands are ~20% of the center frequency above 500 Hz. http://en.wikipedia.org/wiki/Critical_band is a bare start.
Lots of poorly stated/confusing things about human hearing and noise, noise thresholds and masking can be better understood if you use critical band theory and recognize that hearing has frequency and bandwidth dependencies that make simple-minded engineering's single flat full-bandwidth RMS S/N number less useful.
Weighting curves, and the reasons to use them, have been around a while now, since 1936: http://en.wikipedia.org/wiki/A-weighting
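For reference, the A-weighting curve mentioned above is easy to evaluate. Here is a small sketch using the standard IEC 61672 formula (my transcription; worth cross-checking against the Wikipedia page linked above):

```python
import math

def a_weight_db(f):
    """A-weighting (IEC 61672), relative response in dB at frequency f in Hz."""
    f2 = f * f
    ra = (12194.0**2 * f2 * f2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20 * math.log10(ra) + 2.00   # normalized to 0 dB at 1 kHz

for f in (100, 1000, 3000, 10000, 16000):
    print(f"{f:5d} Hz : {a_weight_db(f):6.1f} dB")
```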
It's an interesting question, one I'm not sure I know the answer to. Instantaneously the answer would have to be 'no': the instantaneous SNR would have to be limited by the basic quantization noise. However, with a bandpass filter in place (passing only 2-4 kHz), better than 96 dB can certainly be achieved (engineering-wise), as the bandwidth is considerably lower than the full 22 kHz.
With 16-bit music made completely in software, without an ADC, the range from the softest note to the loudest note is limited to 96 dB, isn't it?
I mean... without quantization noise. =)
With 16-bit music made completely in software, without an ADC, the range from the softest note to the loudest note is limited to 96 dB, isn't it?
No.
I very much like monosyllabic answers. Really, they are very edifying...
Hmm... very interesting answer. It needs a thorough analysis...
But which reality? Virtual reality or real reality? Nothing more perfect than reality exists.
"Do you mean digital is more perfect than reality"
[A] digital [very high quality camera lens visual, on a very high quality screen,] is more perfect than [human] reality.
K fixed.
No, that doesn't seem true at all.

You're pretty much disagreeing with the definition of "Nyquist frequency": perfect reproduction is only possible if the waveform which is sampled contains no frequency components higher than the Nyquist frequency. There's a nice explanation here: Nyquist-Shannon sampling theorem - Wikipedia, the free encyclopedia. Further down it explains the problems caused by aliasing. Some relevant snips:

The sampling theorem introduces the concept of a sample-rate that is sufficient for perfect fidelity for the class of bandlimited functions. And it expresses the sample-rate in terms of the function's bandwidth.
................
If a function x(t) contains no frequencies higher than W cps, it is completely determined by giving its ordinates at a series of points spaced 1/2W seconds apart.
In more modern notation, we use hertz for cps, and B instead of W for bandwidth...
A sufficient sample-rate is therefore 2B samples/second, or anything larger. Conversely, for a given sample rate fs the bandlimit for perfect reconstruction is B ≤ fs/2 . When the bandlimit is too high (or there is no bandlimit), the reconstruction exhibits imperfections known as aliasing.
................
The two thresholds, 2B and fs/2 are respectively called the Nyquist rate and Nyquist frequency.
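A quick numerical sanity check of the theorem quoted above (my own sketch, not part of the thread): sample a signal whose components all lie below the Nyquist frequency and rebuild it between the sample points with the Whittaker-Shannon (sinc) interpolation formula. With a finite block of samples the reconstruction is only approximate, especially near the edges, but it gets arbitrarily good as the block grows.

```python
import numpy as np

fs = 8000.0                 # sample rate in Hz; Nyquist frequency is 4000 Hz
n = 64
ts = np.arange(n) / fs      # sample instants

# A signal containing only components below the Nyquist frequency:
def x(t):
    return np.sin(2 * np.pi * 440 * t) + 0.3 * np.cos(2 * np.pi * 2500 * t)

samples = x(ts)

# Whittaker-Shannon interpolation: rebuild the continuous-time signal from its
# samples with sinc kernels.  Exact with an infinite sample set; approximate
# here because only 64 samples are used.
def reconstruct(t):
    return np.sum(samples * np.sinc((t - ts) * fs))

for t in np.linspace(0.002, 0.006, 9):      # points well inside the sampled window
    print(f"t={t*1000:5.2f} ms  original={x(t):+8.5f}  reconstructed={reconstruct(t):+8.5f}")
```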
This isn't an ultrasonic discussion the way you paint it.

The reason I mention ultrasonics is because you said this. The point is that if the original input signal contains no ultrasonic components (above the Nyquist frequency), then reproduction is perfect.

...after all, if there's pre-echo it's not perfect, is it?

The DAC only exhibits pre-echo if the original input signal does contain ultrasonic components (above the Nyquist frequency).
Kastor L said: With oversampling you can achieve the equivalent of 17-bit resolution or 102 dB.

Oversampling does not change resolution; it makes filter design easier.
We can also avoid quantization error and dither entirely by simply making a 16-bit file via software alone, like in Reason; what is the dynamic range then?

Such a file might not be bandlimited, unless you make it very carefully.
You need to gain more understanding. When you understand your questions, the answers will make more sense.
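The "No." above is easier to see with a toy experiment (my own sketch, not from the thread): a 1 kHz tone 100 dB below full scale vanishes entirely if you simply round it to 16 bits, but it survives, buried under the noise floor, once TPDF dither is applied. So the dynamic range of a dithered 16-bit file is not a hard softest-to-loudest wall at 96 dB.

```python
import numpy as np

fs = 44100
n = 1 << 16
t = np.arange(n) / fs
q = 1.0 / 32768                      # one 16-bit LSB, full scale = +/-1.0

# A tone 100 dB below full scale -- "quieter" than the naive 96 dB figure.
x = 10 ** (-100 / 20) * np.sin(2 * np.pi * 1000 * t)

plain = np.round(x / q) * q                              # rounded to 16 bits, no dither
dither = (np.random.rand(n) - np.random.rand(n)) * q     # TPDF dither, +/-1 LSB peak
dithered = np.round((x + dither) / q) * q

def tone_level_db(y):
    # Level of the 1 kHz component, read from the FFT bin nearest 1 kHz.
    spec = np.fft.rfft(y * np.hanning(n)) / (n / 4)      # rough amplitude scaling for a Hann window
    k = int(round(1000 * n / fs))
    return 20 * np.log10(np.abs(spec[k]) + 1e-30)

print("undithered 16-bit: %.1f dBFS (the tone is gone; this is just the log floor)" % tone_level_db(plain))
print("dithered   16-bit: %.1f dBFS (the tone survives below the 16-bit noise floor)" % tone_level_db(dithered))
```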
But which reality? Virtual reality or real reality?
Real reality, not amplified instruments.
You're pretty much disagreeing with the definition of "Nyquist frequency"
I'm not; I just couldn't follow what you were saying. It looked like you were implying that as long as we apply the rules of Nyquist-Shannon, the reconstruction is perfect.
The point is that if the original input signal contains no ultrasonic components above the Nyquist frequency then reproduction is perfect.
The DAC only exhibits pre-echo if the original input signal contains ultrasonic components above the Nyquist frequency
Alright, I have a question: if there are precisely zero ultrasonic components above the Nyquist frequency during the recording, are you saying minimum-, linear- and maximum-phase filters in the DAC will all look/perform exactly identically on a scope, or just that we will hear them as identical?
Alright, I have a question: if there are precisely zero ultrasonic components above the Nyquist frequency during the recording, are you saying minimum-, linear- and maximum-phase filters in the DAC will all look/perform exactly identically on a scope, or just that we will hear them as identical?

Linear phase reconstruction will look identical to the original.
Minimum phase reconstruction will look different.
I don't know about maximum phase. Is that ever used with DACs?
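To see the difference at a scope-like level, here is a small Python/scipy sketch (my own illustration; the filter length and cutoff are arbitrary): a linear-phase FIR has its impulse response centered, with some energy arriving before the main peak (pre-ringing), while a minimum-phase version derived from it concentrates the energy at the start.

```python
import numpy as np
from scipy.signal import firwin, minimum_phase

fs = 44100
taps = 101
# A linear-phase lowpass FIR near the band edge (cutoff is just illustrative).
h_lin = firwin(taps, 20000, fs=fs)
# A minimum-phase FIR derived from it.  Note that scipy's homomorphic method does
# not preserve the magnitude response exactly; this sketch only compares where the
# impulse-response energy sits in time.
h_min = minimum_phase(h_lin, method='homomorphic')

for name, h in (("linear-phase ", h_lin), ("minimum-phase", h_min)):
    peak = int(np.argmax(np.abs(h)))
    pre = np.sum(h[:peak] ** 2) / np.sum(h ** 2)   # rough proxy for pre-ringing
    print(f"{name}: {len(h)} taps, peak at sample {peak}, "
          f"{100 * pre:.1f}% of impulse energy before the peak")
```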
With your latest comment above I think I can see where you're coming from now: you think linear-phase is more accurate to the original than minimum-phase?
Oversampling does not change resolution; it makes filter design easier.
http://en.wikipedia.org/wiki/Audio_bit_depth
"For example, 14-bit ADC can produce 16-bit 48 kHz audio if operated at 16x oversampling, or 768 kHz"
Such a file might not be bandlimited, unless you make it very carefully.
It's a 16-bit / 44.1 kHz file just like any other file, except it's purely digitally encoded.
You're confusing recording with playback.
Taking poorly understood stuff out of context and arriving at a deep misunderstanding is the hazard of googlefishing without doing the work of first learning the basics.
Real reality, not amplified instruments.
Damn, I listen to music from the non-real reality; therefore I do not exist in the real reality, I am just a figment of my imagination 😱