Highest resolution without quantization noise

Yes!
The sampling theorem states: "If a function x(t) contains no frequencies higher than W cps, it is completely determined by giving its ordinates at a series of points spaced 1/2W seconds apart." See: https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem
In other words: if a signal is band-limited (and a real-world signal always is), a digital system with a sampling frequency of (a bit more than) twice that bandwidth can COMPLETELY reproduce it.
can't argue with that... for it's true.
So everything humans can hear can be COMPLETELY reproduced at a 50 kHz sampling rate. (I'm being very careful here; well-implemented Red Book is enough.)
that is 100% correct...
Oversampling makes the engineering of the anti-aliasing and reconstruction filters much easier: most of the filtering can be done in the digital domain. All modern converters use oversampling.
In DSP you use oversampling when frequencies above fs/2 are generated. This reduces or eliminates aliasing artefacts (the sketch below shows what aliasing looks like).
correct too..
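
As a minimal illustration of that last point (a numpy sketch with arbitrary example frequencies, not from any poster): a tone above fs/2 produces exactly the same samples as a lower-frequency tone, which is why such content must be filtered out before sampling.

```python
# Minimal sketch (assumed): aliasing at 44.1 kHz. A 30 kHz tone and a
# 14.1 kHz tone (44.1 - 30) produce identical sample values, so content
# above fs/2 must be removed before (or during) sampling.
import numpy as np

fs = 44100
n = np.arange(64)
tone_30k = np.cos(2 * np.pi * 30000 * n / fs)
tone_14k1 = np.cos(2 * np.pi * 14100 * n / fs)
print(np.max(np.abs(tone_30k - tone_14k1)))  # ~0: the samples coincide
```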
Here we're talking about the digital part of the anti-aliasing and reconstruction filters. A perfect filter would have infinitely many taps, but human perception is not perfect, so we only need "blameless" filters. "Blameless" means we can't hear a difference between input and output.
The anti-aliasing and reconstruction filters need to be of a certain quality, and therefore a certain number of taps, to be "blameless" (see the tap-count sketch below).
yes...
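
To make the taps-versus-quality trade-off concrete, here is a minimal sketch (mine, assuming scipy; the tap counts are arbitrary round numbers): windowed-sinc low-pass filters with more taps get closer to the ideal brick-wall response.

```python
# Minimal sketch (assumed): more FIR taps -> sharper, deeper filtering.
# Windowed-sinc low-pass at 20 kHz for Red Book rates, with the stop-band
# measured at 22 kHz and above.
import numpy as np
from scipy import signal

fs, cutoff = 44100, 20000
for numtaps in (31, 255, 2047):
    taps = signal.firwin(numtaps, cutoff, window="blackman", fs=fs)
    w, h = signal.freqz(taps, worN=16384, fs=fs)
    stop = 20 * np.log10(np.abs(h[w >= 22000]).max() + 1e-12)
    print(f"{numtaps:5d} taps: worst response above 22 kHz = {stop:7.1f} dB")
```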
The time when anti-aliasing and reconstruction filters (or digital systems in general) were not properly designed is decades ago. Even very cheap modern converters have excellent performance and are "blameless". Of course there are exceptions where no reconstruction filter or oversampling is used, but you'll always have some nutters around.
sometimes irritating, but sometimes very funny...
Although I can't directly answer your questions, I hope this will explain it.
'nuf said..
 
I never said that and I can assure you that it is complete nonsense

I thought that's what you were trying to say.

What is it you were trying to say then? That linear phase is perfect without the frequencies above the Nyquist rate? Or minimum phase? Or neither of the two?

Minus latency, which has already been covered by Ken Newton, D. Mills and me in previous posts.
 
Tattoo,

I'm limited to my phone right now, so I can't write long answers with links or try your files yet; my computer seems to have broken down yesterday.

Ken Newton,

I can't answer your post in detail either, but yes, I'm looking for the *most ideal* system when it comes to DAC performance: theoretically ideal/perfect first and perceptually second.

I'm unravelling some concepts as well, to get a "hard" idea of them, such as the various types of dynamic range and the various kinds of resolution.

Inadvertently I'm "debunking" a few products, or at least their advertised concepts/performance, as well: the Metrum, the Phasure NOS1, and the Chord Hugo, which we seem to be getting somewhere with right now.

By the way, it seems you were incorrect earlier about the maximum 96 dB dynamic range of a 16-bit digitally encoded file, so I'm not sure why you are talking to me about "not learning" when you didn't seem to have the "basic concepts" nailed down yourself.
 
Are power supplies noise-free enough for the better digital schemes you talk about? Or do they necessarily need a battery PS with big filtering caps to keep the noise floor down and stay drift-free? Or is that just an urban legend (i.e. are ordinary power supplies quiet enough)?

I'm trying to understand when what we hear depends more on the PS design in a DAC than on the schematic around the digital parts themselves. Isn't the main drawback of LDO regulators the compactness required to place them close to the active devices, a necessary trade-off (EMI, etc.), all things being equal of course (with the same output, whether current-output or analogue stage)?

Or are you speaking only in theory, setting layout issues aside because they're off-topic in this discussion?

I was not referring to power supply noise and I agree with you that a lot of it is urban legend.

I have a portable amplifier which can run on battery or AC power, and I have never heard any real difference; however, virtual ground versus ground does make a difference.

I have a current-output DAC with an AD828 (a video op-amp) in the LPF and an LME49720 in the I/V stage. I think it sounds very high quality; when I swapped the AD828 for a few ordinary op-amps the sound changed a lot, for the worse. That article offers a theory why; I was surprised!

Just my experience.


What do you mean by "micro dynamic range"?

Time resolution does not depend on sample rate (as long as the Nyquist criterion is satisfied).

This one is easy: macro is softest-to-loudest volume; micro is, for example, a ±0.5 dB variation in the music, which I call quiver; some call it tremolo.

With a 4-bit file or 4-bit DAC that volume variance will disappear, right? That's all.

Nyquist is when the sampling rate is sufficient to capture all of the time impulses, yes.

5 MHz DSD recording does nothing then in your view? A straightforward question as well.
 
This one is easy: macro is softest-to-loudest volume; micro is, for example, a ±0.5 dB variation in the music, which I call quiver; some call it tremolo.

Thank you - it helps to know what you mean by those words - it would be easier if you used the same terminology as everybody else.

With a 4-bit file or 4-bit DAC that volume variance will disappear, right?
No, of course it won't "disappear". Especially not if properly dithered. There will be a lot of noise, but the original signal (including the "volume variance") will still be there.

I suggest you try it yourself (as I did earlier). Make or record a 24-bit test signal, including your 0.5 dB "volume variances", and then use audacity or sox to attenuate it 72 dB, convert it to 16 bits with proper dithering, and amplify it back with 72 dB. Listen to the result. What do you hear?
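
For anyone without those tools handy, here is a minimal numpy sketch of the same idea (mine, not from the post above; the -100 dBFS level is an arbitrary choice): a tone well below one 16-bit LSB is erased by plain rounding but survives TPDF dithering.

```python
# Minimal sketch (assumed): dither preserves signals below 1 LSB.
# A 1 kHz tone at -100 dBFS (~0.3 LSB at 16 bits) disappears when simply
# rounded, but stands well above the noise floor when TPDF-dithered.
import numpy as np

fs = 44100
t = np.arange(fs) / fs
tone = 10 ** (-100 / 20) * np.sin(2 * np.pi * 1000 * t)

def quant16(x, dither):
    s = x * 32767.0
    if dither:  # TPDF dither: sum of two uniform noises, +-1 LSB peak
        s = s + np.random.uniform(-0.5, 0.5, s.size) \
              + np.random.uniform(-0.5, 0.5, s.size)
    return np.round(s) / 32767.0

for name in ("undithered", "dithered"):
    y = quant16(tone, dither=(name == "dithered"))
    spec = np.abs(np.fft.rfft(y * np.hanning(y.size)))
    floor = np.median(spec) + 1e-20
    ratio = 20 * np.log10((spec[1000] + 1e-20) / floor)  # bin 1000 = 1 kHz
    print(f"{name}: 1 kHz bin is {ratio:.1f} dB above the median bin")
```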

5 MHz DSD recording does nothing then in your view? A straightforward question as well.
You might think it is a straightforward question, but I still don't understand it. What do you mean by "does nothing"? A 5 MHz DSD recording describes an approximation of the original signal, so it definitely "does" something.
 
"Perfect filters have infinite taps, but human perception is not perfect and so we only need "blameless" filters."

Surely it can't take infinite taps in strict terms; there must be a point at which it mirrors reality or reaches a "terminal velocity", like when the electrons or air movement stop responding to added filter tap length.

The Chord Hugo thread gave the figure "1 million".

In a head-fi post he wrote that human hearing samples at an evidenced 250 kHz, I think it was, and that higher-resolution files have higher timing accuracy as well; I will try to find the link now.

My take on it is that a 192 kHz file, even if excessive, will put less stress on the filter tap length: the same performance result at fewer taps in the filter. Or do you think they are not connected like that? (See the sketch below.)
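
On the tap-count side of that question, a minimal sketch (mine, assuming scipy's Kaiser design estimate and an arbitrary 100 dB rejection target): at 192 kHz the transition band can stretch from 20 kHz all the way to 96 kHz, which needs far fewer taps than squeezing it into 20-22.05 kHz at Red Book rate.

```python
# Minimal sketch (assumed): estimated FIR length (Kaiser window design)
# for 100 dB stop-band rejection. The much wider transition band available
# at 192 kHz needs roughly an order of magnitude fewer taps.
from scipy import signal

for fs, stopband_edge in ((44100, 22050), (192000, 96000)):
    width = (stopband_edge - 20000) / (fs / 2)  # normalized to Nyquist
    numtaps, beta = signal.kaiserord(100, width)
    print(f"fs = {fs:6d} Hz: about {numtaps} taps")
```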
 
http://www.head-fi.org/t/702787/chord-hugo/3015

"I agree very much with these two quotes, and it is simply down to the very big WTA tap length that Hugo enjoys - 26,368 taps, way way bigger than any other DAC I have seen.

Having all those taps means the interpolation filter does a more accurate job of reconstructing the original timing of the recording. Timing is an incredibly important cue for the brain, and we know that the ear/brain can resolve down to 4 micro seconds - so the brain via the inter-aural network is sampling at 250 kHz!

///

Anyhow, if the interpolation filter has an infinite number of taps, then it will reconstruct the timing and amplitude of the original bandwidth limited signal perfectly. That is a mathematical certainty. So increasing tap length will give better sound, because you are reconstructing the timing more accurately. Is 26,368 the last word? No its not, there is a huge difference going from 18,432 to Hugo's 26,368, I can't imagine that increasing it further won't make a big difference. When would increasing tap length stop improving the sound - 100k? 1M? 10M?

Nobody knows

Since Hugo has more taps than any other DAC, then the timing problems of red-book CD will be better handled by Hugo than any other DAC, and so the timing benefits of higher sample rates will get much smaller.

But why the suggestion that red-book has maybe better than higher sample rate recordings? I am starting to see this too, and I think the problem maybe down to the problems that high sample rate has - they have better timing resolution than red-book, but they let in a lot of HF rubbish from the ADC noise shapers. Now I know out of band noise creates big SQ problems, as it inter modulates in the analogue sections, it increases the DAC's sensitivity to jitter, with the result of more noise floor modulation, giving a harder more aggressive SQ. I hear this with DXD recordings, a brightness that sounds just like noise floor modulation. I am experimenting on filtering out this noise, to see if there is some benefit in doing this. Now red-book has timing problems, but it has no noise above 22.05 kHz (if you do the interpolation filter correctly!). So Hugo goes a very long way to fix the timing problems, so high rez recordings no longer enjoys better timing than red-book"
 
Thank you - it helps to know what you mean by those words ///

No, of course it won't "disappear". Especially not if properly dithered. ///

I suggest you try it yourself (as I did earlier). ///

You might think it is a straightforward question, but I still don't understand it. ///

Thank you for the answer. I listened to a 4-bit file without dither earlier in this thread and was surprised at the "resolution"; it sounded fine. Then I realised that the "resolution" normal people speak of, without EE terminology, usually refers to time resolution, low THD and microphone quality; that counts as "high resolution" even with close to zero dynamic range.

Next I need to downsize a file like you did so I can compare them.

With only, let's say, 16 or 32 amplitude levels and 64 dB of macro dynamic range, I thought the micro dynamic range would disappear "within a single level"? I'm speaking of an undithered, purely digitally encoded file here.

Yes, sorry, I was not clear: normal DSD fulfills Nyquist, so double-rate DSD and octa-rate DSD "do nothing" in terms of time resolution, in your view?
 
Yes, sorry, I was not clear: normal DSD fulfills Nyquist, so double-rate DSD and octa-rate DSD "do nothing" in terms of time resolution, in your view?

Well, "does nothing" is an oversimplification, but just like a 44.1 kHz sample rate PCM signal can reproduce timing differences much smaller than the sample interval, a standard-rate DSD file can represent timing differences way beyond what the human ear can detect.
 
Oh, look! There's someone that is wrong on the Internet! Wow!

Haha =)

So far I can deduce that it has really high / unsatisfactory latency for audio/visual use, plus unsatisfactory pre-ringing.

He says 192 kHz recordings have much higher timing resolution, within the 20 Hz - 20 kHz bandwidth.

If that is true, then they should sound different; if they don't, then why would tap length matter?

Although SY said these are not related, I think.

I'm really not sure, that's why I'm checking in detail, to shine light on these considerations.

Even if a 20,000+ filter tap length is perceivable, I'm pretty sure it could be made for a few dollars, not a few thousand =)

Or you'd think the equivalent could be made in PC software.
 
Chord Hugo - Page 202

"I agree very much with these two quotes, and it is simply down to the very big WTA tap length that Hugo enjoys - 26,368 taps, way way bigger than any other DAC I have seen. ///" (quoted in full earlier in the thread)
This is just a lot of misinformation and as mentioned before it goes against sampling theory.
I believe this myth comes from people looking at sample editor software and thinking this is actually what a digital wave looks like. :headbash:
 
/// as mentioned before it goes against sampling theory.

I believe this myth comes from people looking at sample editor software and thinking this is actually what a digital wave looks like.

Strictly speaking you can't say it goes against "the Nyquist-Shannon sampling theorem".

It's in conflict with the chain of events, Nyquist-Shannon --> oversampling --> reconstruction filter.

In a filterless, non-oversampling DAC, a recording with 192,000 samples per second will have higher timing resolution in 20 Hz - 20,000 Hz than a recording with 48,000 / 96,000 samples per second, right?

Just trying to be concise and factual.



Nyquist–Shannon sampling theorem
 
In a filterless, non-oversampling DAC, a recording with 192,000 samples per second will have higher timing resolution in 20 Hz - 20,000 Hz than a recording with 48,000 / 96,000 samples per second, right? ///

"If a function x(t) contains no frequencies higher than B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart."

If it is completely determined, it is completely determined. So the timing is completely determined too. So if your signal is bandwidth-limited to 20 kHz, it will be just as completely determined with a 48 kHz sample rate as with a 192 kHz sample rate.
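
A minimal sketch of that claim (mine, assuming scipy; the 18 kHz band limit and 4x ratio are arbitrary): a signal band-limited below 20 kHz survives a 48 kHz -> 192 kHz -> 48 kHz round trip to within the resampler's small passband ripple, i.e. the extra samples carry no extra information.

```python
# Minimal sketch (assumed): a sub-20 kHz signal is equally determined at
# 48 kHz and at 192 kHz. Resampling up and back returns (very nearly) the
# original samples; edges are trimmed to ignore filter transients.
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 48000
x = signal.sosfilt(signal.butter(8, 18000, fs=fs, output="sos"),
                   rng.standard_normal(fs))      # band-limited noise

up = signal.resample_poly(x, 4, 1)               # 48 kHz -> 192 kHz
back = signal.resample_poly(up, 1, 4)            # 192 kHz -> 48 kHz

mid = slice(2000, -2000)                         # skip edge transients
err = np.max(np.abs(back[mid] - x[mid]))
print(f"peak signal {np.max(np.abs(x)):.3f}, max round-trip error {err:.1e}")
```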
 
All bets are off in Nyquist-Shannon land with a filterless NOS DAC since bandlimiting is a tenet of the theory.

Well, a truly filterless NOS DAC is broken by definition. In practice they all have some sort of low-pass filter, due to the limited bandwidth of the amp and speaker they are feeding; it is just sub-optimal and not under the control of the designer.
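
To illustrate why, a minimal sketch (mine; the 10 kHz tone and 16x emulation factor are arbitrary): the zero-order-hold output of a NOS DAC carries images of the signal around multiples of fs, and without a reconstruction filter only the uncontrolled bandwidth of the downstream analogue chain attenuates them.

```python
# Minimal sketch (assumed): NOS/zero-order-hold images. A 10 kHz tone held
# at 44.1 kHz leaves image energy near 34.1 kHz (44.1 - 10) and 54.1 kHz
# (44.1 + 10), only partly attenuated by the hold's sinc roll-off.
import numpy as np

fs, up = 44100, 16                       # emulate the analogue domain at 16x
n = np.arange(4096)
x = np.sin(2 * np.pi * 10000 * n / fs)
zoh = np.repeat(x, up)                   # zero-order hold (stair-step) output

spec = np.abs(np.fft.rfft(zoh * np.hanning(zoh.size)))
freqs = np.fft.rfftfreq(zoh.size, d=1.0 / (fs * up))
for f in (10000, 34100, 54100):
    k = np.argmin(np.abs(freqs - f))     # nearest bin to each frequency
    print(f"{f/1000:5.1f} kHz: {20 * np.log10(spec[k] / spec.max()):6.1f} dB")
```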
 