What do you think makes NOS sound different?

Because the CM noise will be completely removed, what remains at the end is just the quantizer noise plus the almost insignificant 0.5 dB of DM dither noise that was added.

The usual way of adding triangular (TPDF) dither results in a 4.8 dB increase in noise, so this technique gains roughly 4 dB less noise.

Hans

Doesn't the amplitude of CM dither required depend on whatever amount is necessary to randomize a given DAC's harmonics? And isn't the amplitude of the DM dither therefore a function of the CM dither amplitude, if it is to fully suppress the parasitic noise modulation, and not always 0.5 dB?
 
No, the DM dither should be just enough to prevent noise modulation, and 0.1 LSB rms already achieves that.
More will only result in more noise at the output and would be a complete waste.
The CM dither should have a minimal value, but since it is removed by the diff amp, its level is completely uncritical as long as it is ≥ 0.3 LSB rms.
Both values are shown in the diagram.

Hans

P.s. quantization noise is 0.288 LSB rms; adding the 0.1 LSB rms DM dither in power gives √(0.288² + 0.1²) ≈ 0.305 LSB rms, an increase of about 0.5 dB.
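For the record, both figures follow from power-summing the uncorrelated noise sources against the ideal quantization noise of 1/√12 LSB rms. A short Python check (my own sketch, not from the thread):

```python
# Verify the noise figures quoted above (all levels in LSB rms).
# Assumes the quantization noise of an ideal quantizer: 1/sqrt(12) LSB rms.
import math

q = 1 / math.sqrt(12)  # quantization noise, ~0.289 LSB rms

# 0.1 LSB rms DM dither adds in power (uncorrelated sources):
dm_total = math.sqrt(q**2 + 0.1**2)
dm_increase_db = 20 * math.log10(dm_total / q)
print(f"DM dither increase:   {dm_increase_db:.2f} dB")   # ~0.49 dB

# Conventional TPDF dither has 1/sqrt(6) LSB rms:
tpdf = 1 / math.sqrt(6)
tpdf_total = math.sqrt(q**2 + tpdf**2)
tpdf_increase_db = 20 * math.log10(tpdf_total / q)
print(f"TPDF dither increase: {tpdf_increase_db:.2f} dB")  # ~4.77 dB
```

This reproduces both the ~0.5 dB penalty of the DM dither and the 4.8 dB penalty of conventional TPDF dither mentioned earlier.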
 
True, but you can presumably also use a larger than normal random dither for reducing distortion due to DNL, as long as you cancel it afterwards with a second DAC, like in the differential or paralleled DAC system. Anyway, I think I've mentioned that about half a dozen times now, so it's time to stop repeating myself.
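The cancellation idea can be illustrated with a toy simulation. This is my own sketch under assumed levels (±1.5 LSB uniform CM dither ≈ 0.87 LSB rms, ±0.17 LSB uniform DM dither ≈ 0.1 LSB rms), and it only demonstrates the principle, not any specific implementation:

```python
# Toy model of a differential DAC pair: the same large CM dither feeds both
# DACs, so subtracting the outputs removes it; only the small DM dither and
# the (now decorrelated) quantization error survive.
import random

random.seed(0)

def quantize(x):
    # Ideal quantizer with a step of 1 LSB.
    return round(x)

n = 10000
signal = 0.3                                          # constant test level, in LSB
cm = [random.uniform(-1.5, 1.5) for _ in range(n)]    # CM dither, ~0.87 LSB rms
dm = [random.uniform(-0.17, 0.17) for _ in range(n)]  # DM dither, ~0.1 LSB rms

out_a = [quantize(signal + c + d) for c, d in zip(cm, dm)]   # DAC A
out_b = [quantize(-signal + c - d) for c, d in zip(cm, dm)]  # DAC B (inverted)

# Differential output: the CM term cancels, the dithered quantization error
# averages out, and the sub-LSB signal level is recovered.
diff = [(a - b) / 2 for a, b in zip(out_a, out_b)]
mean = sum(diff) / n
print(f"recovered level: {mean:.3f} LSB")  # close to 0.3
```

Note that the 0.3 LSB signal is well below one quantization step, yet its average is recovered: the larger-than-normal dither has linearized the quantizer, and the diff amp throws the CM dither away again.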
 
You can presumably also use a larger than normal random dither for reducing distortion due to DNL, as long as you cancel it afterwards with a second DAC, like in the differential or paralleled DAC system.
Anyway, I think I've mentioned that about half a dozen times now, so it's time to stop repeating myself.
Lol, that's why I was so surprised when you suggested a miniature minimalistic CM dither of +/- 0.25 LSB with 0.25 LSB DC offset :D :D :D

Hans
 
Reverse engineered it, hell, you guys improved it! :wiz:

It might be helpful to all of us non-experts in dither techniques to show a simplified conceptual diagram of what the final solution vision is. Perhaps something like the simplified diagram shown at the top of page 14 in Anagram’s data sheet. :D

Sorry for joining in late here, these latest discussions are an interesting diversion, nice work.

WRT Anagram (Edel/ABC/Engineered....) modules utilizing sub dither, I have a couple of modules in the parts store somewhere, picked up cheap ages ago, but never had time to evaluate them.

Looking at S8's data sheet, https://www.google.com.au/url?sa=t&...-DS-152E.pdf&usg=AOvVaw0xypbUIugmJlZL-onYBDiP and assuming they used the PCM1794 example on P16 for the FFT on P17, it appears they extracted very good measured performance from this DAC, considering that the baseline of the dScope measuring system may to a large extent be what is seen.

TCD
 
Sorry for joining in late here, these latest discussions are an interesting diversion, nice work...TCD

Welcome, Terry.

The ad hoc cooperative re-engineering of Anagram's Sonic Scrambling technique by Marcel and Hans was fun to watch. The additional processing required to implement their improved solution over the original base solution is minimal. Conceptually, the original's TPDF dither generator is decomposed into the two needed independent RPDF dither generators, while the original's adder and limiter would require three inputs instead of two. Obviously, the most practical new implementation would be in firmware on an MCU or FPGA, which requires programming experience with such devices. Although the processing blocks are not complex, a purely hardware implementation would prove cumbersome, I should think.
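The decomposition mentioned above is easy to check statistically. A short Python sketch of my own (not code from the posts), showing that summing two independent RPDF (uniform) sources reproduces the variance of a ±1 LSB TPDF dither:

```python
# TPDF dither is statistically the sum of two independent RPDF sources:
# each +/-0.5 LSB uniform source has variance 1/12, so their independent
# sum has variance 1/6, exactly that of +/-1 LSB triangular dither.
import random

random.seed(1)
n = 200_000
rpdf_a = [random.uniform(-0.5, 0.5) for _ in range(n)]
rpdf_b = [random.uniform(-0.5, 0.5) for _ in range(n)]
tpdf = [a + b for a, b in zip(rpdf_a, rpdf_b)]

var = sum(x * x for x in tpdf) / n  # sample variance (zero-mean sources)
print(f"TPDF variance: {var:.4f}")  # ~1/6, i.e. ~0.1667
```

Running the original scheme's TPDF generator as two separate RPDF generators therefore changes nothing statistically; it just exposes the two halves so one can be routed as CM dither and the other as DM dither.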
 
Regarding section 8.1, I'd just like to point out that you can make as good a digital interpolation filter as you like with one or more FPGAs and put its or their main clocks at whatever frequencies cause least harm in your DAC. That's a third option when you don't like the standard interpolation filters.

As an aside, digital designers tend to regard configuring an FPGA as a kind of hardware design. It's illogical because you are in fact just programming a programmable digital chip. It's probably because the tools used for it are very similar to those used for front-end digital chip design, although it is easier because there are things you don't need to worry about, like inserting scan chains for testability or your company having to pay loads of money for new masks if there is a small bug somewhere.
 
Regarding section 8.1, I'd just like to point out that you can make as good a digital interpolation filter as you like with one or more FPGAs and put its or their main clocks at whatever frequencies cause least harm in your DAC. That's a third option when you don't like the standard interpolation filters.

I completely agree, Marcel. Such solutions take the topological position that dedicated OS filter chips used to occupy, before their function was integrated onto the DAC chip. These could also remove the need to have a PC/Mac involved. I only left that third option out of the report because we did not evaluate any such solutions in the investigation; I tried to constrain my report to cover only that which we investigated or tested. Coincidentally, in an exchange with Abraxalito early in this thread, he mentioned that he planned to create a chip-based interpolation filter around an ARM-core MCU. I asked him to keep the thread apprised of his progress, but we never communicated about it again. https://www.diyaudio.com/forums/digital-line-level/371931-makes-nos-sound-2.html#post6644231

Your comment makes me wonder whether you might be thinking of producing an FPGA-based, high-performance OS interpolation filter for your own DAC designs?

As an aside, digital designers tend to regard configuring an FPGA as a kind of hardware design. It's illogical because you are in fact just programming a programmable digital chip. It's probably because the tools used for it are very similar to those used for front-end digital chip design, although it is easier because there are things you don't need to worry about, like inserting scan chains for testability or your company having to pay loads of money for new masks if there is a small bug somewhere.

They would at least need to learn some version of an HDL; that was the case when I was last involved with FPGAs, many years ago. Are VHDL and Verilog still the most popular choices? Schematic-based FPGA visual design tools were just gaining a foothold then. I can see, however, why HDLs might seem more than a bit strange to engineers used to IF/THEN/ELSE programming structures.
 
I have no intention to do anything with FPGAs in the near future except debugging a decimation filter for TNT, but there is this thread:

16x Digital interpolation filter - drive PCM56, PCM58, AD1865 and so on up to 768 kHz

No idea if it is good enough for you all, but it's better than the standard interpolation filters.

The digital designers I know use Verilog. No idea why, as it is a very quirky and inconsistent language. For example:

There are two ways to define constants (literals). One of them silently gives the wrong value, not an error message, when the number doesn't fit in a 32-bit signed constant.

When you add a couple of signed variables and an unsigned variable, the whole addition becomes unsigned. This can be fixed with $signed().

You have to declare whether variables are of the reg or the wire type, but there is also a default, so you don't get a normal error message when you forget to declare one. This can be solved with the statement `default_nettype none.
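To make the three quirks concrete, a hypothetical Verilog fragment (my own sketch; the identifier names are invented):

```verilog
// Errors out on implicit net declarations instead of silently inferring wires:
`default_nettype none

module quirks_demo;
  // Unsized decimal literals are treated as 32-bit: the first line can yield
  // a wrong value without any error when the number needs more than 32 bits.
  localparam [39:0] BAD  = 1099511627775;       // may silently misbehave
  localparam [39:0] GOOD = 40'd1099511627775;   // sized literal is safe

  // One unsigned operand makes the whole expression unsigned:
  reg signed [7:0] a, b;
  reg        [7:0] u;
  wire signed [9:0] sum = a + b + $signed(u);   // cast keeps the addition signed
endmodule
```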