27bit DAC -> 162 dB dynamics...

Looking at the Katz review and measurements, it seems like the “single-path” one measures very poorly? Several hundred microvolts of output noise?

A fairer review would compare it with a SOTA conventional DAC in at least the same price range. Those can produce a fairly clean sine wave at -90 dBFS.

This review seems rigged to me.
 
I have a little problem following your analysis... I would assume each of the two DACs could be optimised around handling signal levels equal to half of the full output - if so, there would be quite different circumstances for the two DACs. I can't quite follow your reasoning around accuracy... please help me?
The idea of sticking two DACs together to get more bits is hardly new. Segmented DACs have been around a long time. People have also stacked DAC chips like the TDA1541 so that each DAC plays only the top half or the bottom half of the AC waveform.

The question is more about how far the techniques can practically be taken. Apparently for the new Millennia DAC the claim is that when the high-level DAC is doing most of the loud reproduction, the lower bits of the low-level DAC are being masked anyway by human perceptual limits. Therefore the claim is that, since the lowest bits are not being heard, there isn't a problem with the exact LSB size.
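Roughly, the bit-splitting idea looks like this. A minimal sketch only, assuming a 20 + 7 split into a 27-bit composite as discussed in this thread; the names and structure are illustrative, not the actual product's topology:

```python
# Sketch of a segmented / "stacked" DAC: a 27-bit code is divided into a
# 20-bit upper segment and a 7-bit lower segment, each converted by its own
# DAC and summed with the right analogue weights. Illustrative only.

FULL_SCALE = 1.0                     # assumed full-scale output in volts
N_BITS     = 27
UPPER_BITS = 20
LOWER_BITS = N_BITS - UPPER_BITS     # 7

LSB = FULL_SCALE / (2 ** N_BITS)     # ideal 27-bit step size

def split(code: int) -> tuple[int, int]:
    """Split a 27-bit code into (upper 20 bits, lower 7 bits)."""
    upper = code >> LOWER_BITS
    lower = code & ((1 << LOWER_BITS) - 1)
    return upper, lower

def recombine(upper: int, lower: int) -> float:
    """Ideal analogue sum: one upper-DAC LSB is worth 2**7 lower-DAC LSBs."""
    return (upper * (2 ** LOWER_BITS) + lower) * LSB

code = 0x5A5A5A5                     # arbitrary 27-bit test code
u, l = split(code)
assert recombine(u, l) == code * LSB # lossless when both DACs are ideal
```

The whole debate is about what happens when the two DACs are not ideal, i.e. when the upper DAC's step size doesn't line up with the lower DAC's range.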
 
Regarding "veiled or grainy" sound, didn't that come from Bill Whitlock?

Please see the attached, then search for "veiled or grainy". Also, it seems to me there was a video of him on a discussion panel where he was asked how to know if you have veiled sound. Again IIRC, he answered to the effect that you know it when it's gone.
 


for a "multi-path" DAC - but will it really do 162 dB N when level is, say -44 dBfs?

I say: no.

I call bogus 😎

It probably works OK. TI has a similar feature on their ADCs called "Dynamic Range Enhancer".

It uses a PGA (programmable gain amplifier) that is part of the ADC anyway and that can adjust gain in < 1 sample.

You can select up to 30 dB extra gain at a threshold of -66 dBFS and anything up to (say) 2 dB at -12 dBFS.

As long as the analogue side has less noise than the converter itself, you can improve resolution for anything below the threshold.

It can of course also be applied to a DAC.
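In pseudocode the idea is something like the sketch below. The two (threshold, gain) pairs are just the example numbers above; the structure is a toy model, not TI's actual register map or algorithm:

```python
# Toy model of a DRE-style scheme: below a level threshold, extra analogue
# gain is applied ahead of the converter and the same amount is taken out
# again in the digital domain, so converter noise is pushed down by the gain
# amount (as long as the analogue side itself is quieter).

# (threshold_dBFS, extra_gain_dB) -- lowest threshold first
GAIN_TABLE = [(-66.0, 30.0), (-12.0, 2.0)]

def select_gain(level_dbfs: float) -> float:
    """Pick the largest extra gain whose threshold the signal is still below."""
    for threshold, gain in GAIN_TABLE:
        if level_dbfs <= threshold:
            return gain
    return 0.0

def process(sample: float, level_dbfs: float, converter_noise: float) -> float:
    """Analogue boost -> noisy conversion -> exact digital cut."""
    g = 10 ** (select_gain(level_dbfs) / 20.0)
    analogue = sample * g                   # PGA ahead of the converter
    converted = analogue + converter_noise  # converter adds its own noise
    return converted / g                    # digital compensation: noise / g

# A -70 dBFS sample now sees the converter noise reduced by ~30 dB:
x = 10 ** (-70 / 20)
print(process(x, -70.0, converter_noise=1e-5))
```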

Something similar was applied to guitar amplifiers as the "powersoak" in the '80s, if for totally different reasons. The idea was that you crank up the amp (say a Marshall 100 W tube head) to give you the "overdriven" tone, the power supply sag etc., plus the tone of a 4x12" Marshall cabinet in a small studio room, at ear-, engineer- and even guitarist-friendly levels.

For iFi I took the powersoak concept and turned it into an attenuator for those pesky 130 dB/1 V sensitive in-ear monitors that also block the ear canal and provide significant isolation from outside noise, so every bit of noise is highly audible. To get 85 dB SPL at -20 dBFS you need to turn the gain down about 24 dB, often in the digital domain. Instead, use a 15R + 1R "powersoak" with nice, highish-power thin-film resistors, and a -20 dBFS signal now gives about 86 dB SPL with the volume set so that 0 dBFS produces 1 V at the attenuator's input.

If we started with an amplifier with -120 dBV noise and 25 dB of digital attenuation for a 95 dB SNR, we have now turned the situation into the equivalent of an amplifier with -144 dBV output noise (and 62.5 mV maximum output). It sure works this way.
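A quick back-of-the-envelope check of those numbers (all figures taken from the two paragraphs above):

```python
# 15R + 1R divider, 130 dB SPL / 1 V in-ear monitors, -120 dBV amplifier noise.
import math

def db(ratio: float) -> float:
    return 20 * math.log10(ratio)

divider = 1 / (15 + 1)                  # 15R series + 1R shunt "powersoak"
attenuation_db = db(divider)            # ~ -24 dB

iem_sensitivity = 130.0                 # dB SPL for 1 V
spl_at_0dbfs   = iem_sensitivity + db(1.0 * divider)  # 1 V at 0 dBFS into the pad
spl_at_m20dbfs = spl_at_0dbfs - 20      # ~ 86 dB SPL at -20 dBFS

amp_noise_dbv = -120.0
effective_noise_dbv = amp_noise_dbv + attenuation_db   # ~ -144 dBV after the pad
max_out_v = 1.0 * divider                               # 62.5 mV maximum output

print(round(attenuation_db, 1), round(spl_at_m20dbfs, 1),
      round(effective_noise_dbv, 1), max_out_v)
```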

Next stop: why not automate the whole gig? There are many ways of doing it. Do it right and you start with a -120 dBV system and improve it significantly in the real world. It's all about scaling the "0 dBFS" level, roughly as sketched below.
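One possible way to automate it, purely as a sketch: split the requested volume between coarse analogue attenuation and a small digital trim, so the digital domain never has to throw away much resolution. The available analogue steps here are hypothetical values I made up for illustration:

```python
# Hybrid volume control sketch: pick the deepest analogue pad that does not
# overshoot the requested volume, then handle the small remainder digitally.

ANALOG_STEPS_DB = (0.0, -12.0, -24.0)   # assumed relay/pad settings (hypothetical)

def split_volume(volume_db: float) -> tuple[float, float]:
    """Return (analogue step, digital trim) for a requested volume in dB (<= 0)."""
    analog = min(s for s in ANALOG_STEPS_DB if s >= volume_db)
    digital = volume_db - analog         # small residual handled digitally
    return analog, digital

print(split_volume(-20.0))   # (-12.0, -8.0): only 8 dB lost in the digital domain
print(split_volume(-30.0))   # (-24.0, -6.0)
```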

Will it create mind blowing sonic improvements? If you believe it will, it probably will.

TU Dresden - Faculty of Psychology - We hear what we expect to hear

Thor
 
I have a little problem following your analysis... I would assume each of the two DACs could be optimised around handling signal levels equal to half of the full output - if so, there would be quite different circumstances for the two DACs. I can't quite follow your reasoning around accuracy... please help me?



//
To remain monotonic - a key linearity requirement for a DAC - the LSB of the 20-bit upper DAC must have the same 'accuracy' (not sure that this is the correct term in this context) as the 7-bit lower DAC's LSB. There is no advantage in cutting up the DAC into two pieces and then going through heroic measures to try to stitch it together again into what it was before.

As Google mentioned: A DAC is termed monotonic if the analog output always increases or remains constant as the digital input increases. If the DNL is less than -1 LSB, the DAC's transfer function is non-monotonic.
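A tiny numerical illustration of that point, using a 4 + 3 bit composite so the whole curve can be enumerated (a 20 + 7 split behaves the same way, just with far tighter matching demands); the step error is an arbitrary value chosen to show the effect:

```python
# Composite DAC from a 4-bit upper and 3-bit lower segment. If the upper DAC's
# step is off by more than one composite LSB, DNL dips below -1 LSB and the
# transfer function stops being monotonic.

UPPER_BITS, LOWER_BITS = 4, 3
LSB = 1.0                                   # composite LSB, arbitrary units
UPPER_STEP_ERROR = -1.5 * LSB               # hypothetical upper-DAC step error

def output(code: int) -> float:
    upper = code >> LOWER_BITS
    lower = code & ((1 << LOWER_BITS) - 1)
    ideal_upper_step = (2 ** LOWER_BITS) * LSB
    return upper * (ideal_upper_step + UPPER_STEP_ERROR) + lower * LSB

codes = range(2 ** (UPPER_BITS + LOWER_BITS))
steps = [output(c + 1) - output(c) for c in codes[:-1]]
dnl   = [s / LSB - 1.0 for s in steps]      # DNL in LSB, per code transition

print("min DNL:", min(dnl), "LSB")          # -1.5 LSB at every major carry
print("monotonic:", all(s >= 0 for s in steps))   # False
```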

Jan
 
To remain monotonic - a key linearity requirement for a DAC - the LSB of the 20-bit upper DAC must have the same 'accuracy' (not sure that this is the correct term in this context) as the 7-bit lower DAC's LSB. There is no advantage in cutting up the DAC into two pieces and then going through heroic measures to try to stitch it together again into what it was before.

As Google mentioned: A DAC is termed monotonic if the analog output always increases or remains constant as the digital input increases. If the DNL is less than -1 LSB, the DAC's transfer function is non-monotonic.

Jan
You are one of the few here that understand the actual problem.