Hypex Ncore

Thank you, Julf. Your reply is highly appreciated.

No need to go into metaphysics. I am not saying the theory is flawless or perfect. It is just that the issues discussed here are such fundamental parts of the Nyquist–Shannon sampling theorem that, if they weren't true, none of our current digital audio systems would work.


I would absolutely not use the word "absolutely". Doing a linear scaling operation (that is what a volume/gain adjustment is) will have a very small effect due to rounding errors from the finite (24-bit) precision. What we are saying is that this effect is lower than the noise contribution of either the DAC (which has a dynamic range of 20 bits) or the source material (which probably has a dynamic range of less than 12 bits). It will probably also be much smaller than the effect of an analog attenuator.
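As a rough back-of-the-envelope sketch (my own illustration with arbitrary example values, not anything from the Hypex documentation), here is the size of the rounding error a single digital volume step leaves on a 24-bit sample:

```python
# Hypothetical illustration (arbitrary example values): rounding error of a
# digital volume adjustment applied to one 24-bit PCM sample.
sample = 5_000_001        # some signed 24-bit sample (range -2^23 .. 2^23-1)
gain = 0.5                # a -6 dB volume setting

ideal = sample * gain     # infinite-precision result of the scaling
stored = round(ideal)     # what a 24-bit fixed-point output must store

print(ideal, stored, stored - ideal)   # error is at most 0.5 LSB
# 0.5 LSB of a 24-bit word sits near the -144 dBFS level, far below the
# ~120 dB dynamic range of the DAC or the ~12 bits of the source material.
```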

Just a simple reminder, the dynamic range / SNR of an undithered digital system is:

16 bits - 96 dB
20 bits - 120 dB
24 bits - 144 dB
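These figures come straight from the roughly 6 dB-per-bit rule; a minimal sketch to reproduce them:

```python
import math

# Dynamic range of an undithered N-bit system: 20*log10(2^N) ≈ 6.02*N dB
for bits in (16, 20, 24):
    print(f"{bits} bits - {20 * math.log10(2**bits):.0f} dB")
# 16 bits - 96 dB
# 20 bits - 120 dB
# 24 bits - 144 dB
```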



I assume you have done an ABX with two versions of exactly the same material, and ruled out things like different gain or EQ between the HDCD and CD versions?



It is of course possible for a digital volume control to be badly implemented, just as it is possible for an analog control to be badly implemented.

Agreed! -no metaphysics will solve this :)

The 16 vs 20 bit observations would probably not fall inside your category of approved ABX. However, if I may describe the details a bit, they probably speak more clearly than my subjective interpretations ;)

Arcam DVD player capable of HDCD decoding vs. Naim CD5 with only 16-bit decoding.

With all the 16-bit source material we tried, the Naim is far superior to the Arcam -both in terms of hi-fi and musicality.

When trying HDCD material, the game was clearly reversed in hi-fi terms, whereas musicality seems to be more a matter of taste.

Everyone to whom I demonstrated this agreed in a jaw-dropping manner. I/we used the exact same CDs for direct A/B comparison.

I'd say that adjusting for the same sound pressure would be difficult, since the experienced difference between quiet and loud passages was so different (far more expressive with the HDCD decoding).
-would the volume have had to be equalized in terms of average output, minimum or maximum? That would make quite a difference. (I'd say the average was about the same; min and max weren't.) As for EQ, both players should be more or less flat in the audio band. Whether the different decoders in the DVD player have an equalizing effect, I don't know, but it's a valid point...


Yes, absolutely no "absolute" in the world of hi-fi or the world in general.
The way I read you is that this is relative -meaning that the probable loss of information is relatively small in comparison to other losses/distortions/deformations of the digital signal, right?

If I read you right, an important assumption must be that these types of deformations of the signal are comparable not just quantitatively, but also qualitatively, in terms of how we perceive them. Of course, if this is true, we can disregard sources of signal deformation that should be "hidden" behind other types of limitations. Practically speaking, time and money would be better spent somewhere else. Am I still on the right track?

Well, my thinking, if I may be so bold as to present it, is that although speakers distort magnitudes more than, say, amps and sources, most people report big audible differences in the latter -even though the distortion of the former, in a quantitative sense, should "shadow" the comparatively insignificant differences in the latter. My personal interpretation is that this is because of the qualitative differences between these distortions -meaning that we can't necessarily compare them based on quantitative figures and thus rule out the impact of the source with the smaller numbers. (I hope you can accept that there is some mileage in this assessment.)

When relating this to the digital domain, my question is: how can we be sure that the same thing is not happening -that quantitatively small sources of distortion can still be significantly audible, even though other apparent limitations seem much larger?

Please excuse my ignorance on this subject, but would the argument be that because digitized information IS nothing but quantized levels, nothing can possibly be "hidden" between these levels? And therefore we should be able to go as far as to reduce the bit depth to the dynamic range of the signal, right?

Dithering I am not qualified to discuss, but from my understanding it is a method that works by introducing low-level noise to the signal in order to modulate the bit depth of the signal from, say, 16 bits to 24 or even 32? How does this method actually alter the quantized levels of the source material into more bit depth? I assume it must somehow be adding information that was not present in the source material, right?

Regarding the Sonos implementation, I assume it must have been poorly executed -the difference was quite staggering!
As always implementation is really everything :)
 
The bitrate IS the dynamic range of the signal.

Bit rate is the amount of information transmitted or processed per time unit. It is measured in bits/s. You can calculate the bit rate by multiplying the number of channels by the word length ("number of bits") and the sample rate, so the bit rate of a CD is
2 (for 2 stereo channels) x 16 (bits) x 44100 (samples/s), or 1411200 bits/s (1.4 Mbits/s).
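Just the arithmetic from the paragraph above, as a quick check:

```python
# Bit rate of CD audio: channels x word length x sample rate
channels, bits, sample_rate = 2, 16, 44100
print(channels * bits * sample_rate)   # 1411200 bits/s, i.e. ~1.4 Mbit/s
```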

The dynamic range, measured in decibels, is determined by the word length (number of bits). As I posted earlier:

16 bits - 96 dB
20 bits - 120 dB
24 bits - 144 dB
 
Arcam DVD player capable of HDCD decoding vs. Naim CD5 with only 16-bit decoding.

Actually, your description is pretty much exactly what can be expected when playing an HDCD-encoded recording on a non-HDCD-aware player. See HFN: The HDCD Enigma

"One curious consequence of this is that if the listener tries an HDCD with soft peak limiting on two players – one HDCD and the other CD – they may not like the uncorrected distortion (introduced by HDCD recording) when played on the CD player. So such a comparison may give an impression that HDCD is inherently ‘better’, when in reality a normal CD made of the same source material might also have sounded much better than the HDCD played on the CD player. In essence, the use of the peak alterations may cause CD replay to sound ‘worse’ rather than show HDCD as genuinely ‘better’."

Yes, absolutely no "absolute" in the world of hi-fi or the world in general.
The way I read you is that this is relative -meaning that the probable loss of information is relatively small in comparison to other losses/distortions/deformations of the digital signal, right?

If I read you right, an important assumption must be that these types of deformations of the signal are comparable not just quantitatively, but also qualitatively, in terms of how we perceive them. Of course, if this is true, we can disregard sources of signal deformation that should be "hidden" behind other types of limitations. Practically speaking, time and money would be better spent somewhere else. Am I still on the right track?

Definitely.

When relating this to the digital domain, my question is: how can we be sure that the same thing is not happening -that quantitatively small sources of distortion can still be significantly audible, even though other apparent limitations seem much larger?

Nothing is ever 100% sure, but a heck of a lot of research and pretty massive controlled listening tests have been done by Bell Labs and others.

Please excuse my ignorance on this subject, but would the argument be that because digitized information IS nothing but quantized levels, nothing can possibly be "hidden" between these levels? And therefore we should be able to go as far as to reduce the bit depth to the dynamic range of the signal, right?

Right.

Dithering I am not qualified to discuss, but from my understanding it is a method that works by introducing low-level noise to the signal in order to modulate the bit depth of the signal from, say, 16 bits to 24 or even 32? How does this method actually alter the quantized levels of the source material into more bit depth? I assume it must somehow be adding information that was not present in the source material, right?

It doesn't add anything that wasn't there, but it shifts the noise. There is a pretty good illustration here: Australian Hifi: What is Dither?
 
Dithering I am not qualified to discuss, but from my understanding it is a method that works by introducing low-level noise to the signal in order to modulate the bit depth of the signal from, say, 16 bits to 24 or even 32? How does this method actually alter the quantized levels of the source material into more bit depth? I assume it must somehow be adding information that was not present in the source material, right?

Once again I am recommending Monty Montgomery's Digital Show & Tell; the part about dither is quite illustrative.

If you don't have the time or patience to watch the video, there is a text version as well, but actually seeing the waveforms is an educational experience.
 
Once again I am recommending Monty Montgomery's Digital Show & Tell; the part about dither is quite illustrative.

If you don't have the time or patience to watch the video, there is a text version as well, but actually seeing the waveforms is an educational experience.

Thanks again Julf!

I don't have the time right now, but I will take a look at your recommendations as soon as time allows :)

all the best,
 
Dithering I am not qualified to discuss, but from my understanding it is a method that works by introducing low-level noise to the signal in order to modulate the bit depth of the signal from, say, 16 bits to 24 or even 32? How does this method actually alter the quantized levels of the source material into more bit depth? I assume it must somehow be adding information that was not present in the source material, right?
I think you're misunderstanding dithering: it's not something that fixes what is broken, it prevents the signal from "breaking". Once you sample without dither it's done and over with; you can't get the data back.
I don't think it's possible to really prove that dithering is "perfect" because it would require an absolute understanding of human hearing.
The best way to understand it intuitively is this: think of a sine wave that's sampled with, say, 3 bits of resolution, one step corresponding to 1 volt. Obviously you can't encode, say, 3.6 volts that way; only 3 and 4 are possible. But if you add noise, you'll actually be sampling 3.6+x (where x is the dither). The result will be quantized as either 3 (x < 0.4) or 4 (x >= 0.4). Statistically it will be 3 four times out of 10 and 4 six times out of 10, which averages to... 3.6.
The only plausible argument against dither is that the ear/brain system is not actually perceiving the averaged value but the instantaneous one (3.6+x, in our case), but as far as current understanding goes, that's not the case.
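A quick toy simulation of that averaging argument (my own sketch of the 3.6 V example, nothing more):

```python
import math
import random

# The "3.6 volt" example above: the quantizer can only output whole
# steps (..., 3, 4, ...), one step corresponding to 1 volt.
true_value = 3.6
N = 100_000

# Without dither the quantizer always gives the same answer; the 0.6 is gone.
print(math.floor(true_value))    # 3

# With dither: add uniform noise of one quantization step before quantizing.
# The output is 3 when the dither x < 0.4 (40% of the time) and 4 when
# x >= 0.4 (60% of the time), so the long-run average converges to 3.6.
mean = sum(math.floor(true_value + random.random()) for _ in range(N)) / N
print(f"{mean:.3f}")             # ~3.600
```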
 
Reading up more on this, does anyone know offhand what the value of R141 is? I am running 101 dB sensitive speakers, and I want to either remove it or lower the value so that the gain is lower.

1.2K.

To quote Bruno: "Not much has changed since http://www.hypex.nl/docs/appnotes/gain_appnote.pdf

Gain is 4.17*(1+2*Rf/Rg). Rg=1.2k, Rf=2.2k. Rg, as noted in the bugs section of the data sheet is not marked (will be in the future), but it's called R141 and it's the one furthest to the left of the input connector. The maximum gain reduction you can get is 13.4dB, which is when you remove R141 altogether."
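Plugging Bruno's numbers into that formula (just the arithmetic, assuming the stock Rf = 2.2k stays in place):

```python
import math

def gain_db(rf_ohms, rg_ohms=None):
    """Voltage gain per the formula above: 4.17*(1 + 2*Rf/Rg).
    With R141 (Rg) removed, only the 4.17 factor remains."""
    av = 4.17 if rg_ohms is None else 4.17 * (1 + 2 * rf_ohms / rg_ohms)
    return 20 * math.log10(av)

stock = gain_db(2200, 1200)    # Rf = 2.2k, Rg (R141) = 1.2k -> ~25.8 dB
no_r141 = gain_db(2200)        # R141 removed                -> ~12.4 dB
print(f"gain reduction: {stock - no_r141:.1f} dB")   # ~13.4 dB
```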
 