oversample or NOS DAC ??

Status
Not open for further replies.
That's where I saw it...
Figure 2 shows the output signal of the D/A converter downstream of a 60-kHz lowpass filter, as implemented on the Burr-Brown demo board.
They are saying that at 8x there is not too much HF left... Whatever.

And anyway, nowadays there are devices like DSD 1792A that can be hooked to external DSP/OS if you don't like their version of OS 🙂
Actually I have a few of those waiting for a new project.
 
The Trinity DAC first applies 8x oversampling using a digital brickwall filter; up to this point it's just a plain 8x OS DAC like many others.

Next, the 8x oversampled output is interpolated 8x again, this time using linear interpolation, with eight PCM1704s per channel. The linear interpolation (on the already oversampled and brickwall-filtered signal) is done with a delay-add scheme.
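The delay-add idea can be sketched numerically. This is a hypothetical model, not the actual Trinity circuit: eight zero-order-hold DACs latch the same sample stream, each staggered by 1/8 of a sample period, and their averaged sum forms a staircase that linearly interpolates between consecutive samples:

```python
def delay_add_interpolate(samples, n_dacs=8):
    """Model n_dacs zero-order-hold DACs driven by the same sample
    stream, each latching 1/n_dacs of a sample period later than the
    previous one. The averaged sum is a staircase approximation of
    linear interpolation between consecutive samples."""
    out = []
    prev = samples[0]
    for cur in samples[1:]:
        for j in range(n_dacs):
            # j+1 DACs have already latched `cur`; the rest still hold `prev`
            out.append(((j + 1) * cur + (n_dacs - 1 - j) * prev) / n_dacs)
        prev = cur
    return out

# A step from 0 to 8 becomes a linear staircase ramp at 8x the rate:
print(delay_add_interpolate([0, 8]))  # [1.0, 2.0, 3.0, ..., 8.0]
```

The summed output never jumps by more than 1/8 of the sample-to-sample difference, which is exactly why the HF image energy is reduced compared to a single zero-order-hold DAC.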

With a 44.1/16 input signal, the sample rate is boosted to 44,100 * 64 = 2.8224 MHz! It's assumed that the RCA interconnects and the power amplifier's input filter sufficiently attenuate these HF signals, so no extra analogue reconstruction filter is required.


The question remains: does linear interpolation (OS) have the same effect on jitter sensitivity as a sin(x)/x brickwall filter with the same OS factor?

If both interpolation methods cause a similar increase in jitter sensitivity, the Trinity DAC's jitter sensitivity equals that of a 64x OS DAC!


Running at 44.1/16, the Trinity DAC would require the following maximum jitter amplitude:

1 / (44,100 * 64) / 2^16 / 2 = 2.7 ps rms.

This value is extremely difficult, if not impossible, to obtain with 16 DAC chips (8 per channel) connected to the master clock.

When processing a 192/24 input signal, things get much worse:

1 / (192,000 * 64) / 2^24 / 2 = 2.4 femtoseconds rms

It's safe to say that these low jitter values are indeed impossible to obtain using this configuration. So with given practical masterclock (distribution) limitations, sound quality will degrade.
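For reference, the limits quoted above all follow from the same formula (attributed later in the thread to Kusunoki): maximum jitter = 1 / (fs * OS) / 2^bits / 2. A quick sketch:

```python
def max_jitter_s(fs, bits, os=1):
    """Kusunoki-style maximum allowable jitter: half of one
    LSB-equivalent time step at the (oversampled) conversion rate,
    returned in seconds."""
    return 1 / (fs * os) / 2**bits / 2

print(max_jitter_s(44100, 16))          # ~1.73e-10 s (173 ps, NOS)
print(max_jitter_s(44100, 16, os=8))    # ~2.16e-11 s (21.6 ps, 8x OS)
print(max_jitter_s(44100, 16, os=64))   # ~2.7e-12 s  (2.7 ps, 64x OS)
print(max_jitter_s(192000, 24, os=64))  # ~2.4e-15 s  (2.4 fs, 192/24 64x)
```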


NOS DACs have the lowest jitter sensitivity (173 ps rms @ 44.1/16).
 
I still have a weakness for linear (direct) interpolation done with PCM1704 chips. But I also think that Trinity overdid it.

2.7 ps is possible to reach, but not with that many DACs. I'm also pretty sure it's made only for 44.1k (another limitation).
 
-ecdesigns- said:

It's safe to say that these low jitter values are indeed impossible to obtain using this configuration. So with given practical masterclock (distribution) limitations, sound quality will degrade.

Then it is just as well that the performance target all the dacs have to meet is that associated with the output of the digital filter.
 
-ecdesigns- said:

The question remains: does linear interpolation (OS) have the same effect on jitter sensitivity as a sin(x)/x brickwall filter with the same OS factor?


That's a good question... My gut feeling is that since the interpolation is done in the analogue domain, it shouldn't have increased jitter sensitivity; it should have the same jitter requirements as the individual DACs.
Jitter only affects the conversion itself and is a DAC spec. Interpolation AHEAD of the DAC increases the equivalent jitter the DAC sees, but after the DAC... I guess it doesn't matter anymore.

But I am not sure about that. Hmmm, if that's true then I am more attracted to analogue delay-based linear interpolation.
 
Where did you get those 20 ps from??? The formula posted by ecdesigns gives 173 ps for 44.1 kHz. The BB seminar gives 2 ns.
 

Attachments

  • jitter.jpg (64.8 KB)
-ecdesigns- said:
The 20 ps figure (21.6 ps, to be more precise) probably refers to 44.1/16 with 8x OS. The 173 ps rms refers to 44.1/16 NOS.
44.1/16 NOS: 1 / 44,100 / 2^16 / 2 = 173 ps rms.
44.1/16 8x OS: 1 / (44,100 * 8) / 2^16 / 2 = 21.6 ps rms.

How come the BB design seminar talks about 2000 ps as the necessary limit for 16-bit (NOS)? The link to the doc was posted above.
That's a 10x difference.

And nobody has an opinion about my statement above?

My gut feeling is that since the interpolation is done in the analogue domain, it shouldn't have increased jitter sensitivity; it should have the same jitter requirements as the individual DACs.
Jitter only affects the conversion itself and is a DAC spec. Interpolation AHEAD of the DAC increases the equivalent jitter the DAC sees, but after the DAC... I guess it doesn't matter anymore.
 
And what, pray tell, does this figure refer to ? Wordclock jitter, bitclock jitter or data jitter ? Perhaps you could explain the derivation of this formula.

This is the formula used by Kusunoki (1, Oversampling and Jitter), and it refers to the timing signal that determines the exact moment a digital sample is converted to an analogue value. With most DACs this is determined by the bit clock; with delta-sigma DACs it's the system clock (SCK).

Here is the link to Kusunoki's article:

http://www.sakurasystems.com/articles/Kusunoki.html

All I can say, based on years of measurements and listening tests, is that bit clock jitter has a significant impact on sound quality, and there doesn't seem to be a minimum value. Even with a NOS DAC at, say, 100 ps rms bit clock jitter, reducing it to 50 ps rms still provides further improvements in sound quality (tighter bass, better transparency, more detail, and a more open sound stage), despite the jitter amplitude already being below the calculated maximum allowable value.


The jitter amplitude (ps) only specifies the deviation from the fundamental (bit) clock frequency; it's also very important to consider the jitter frequency spectrum. Basically, the bit clock phase/frequency is modulated with a wide-bandwidth jitter spectrum that can range from very low frequencies of, say, 1 Hz up to the GHz range. This jitter spectrum is composed of many contributions, such as intrinsic master clock jitter, power supply noise and hum, interference from digital signals (I2S, SPDIF), EM interference, and so on.

So when choosing a master clock based on jitter specs, it's very important to look not only at jitter amplitude (ps) but also at the produced jitter frequency spectrum. Crystal oscillators may be specified at 0.5 ps rms from 10 kHz up; this basically means they can still have considerable jitter amplitude in the audio range, despite the 0.5 ps spec. The master clock will also be affected by the power supply it runs on, the loads connected to it, and the way the clock signal is distributed. The actual timing signal jitter (bit clock) inside the DAC chip, right at the latches or circuits that determine the exact moment of D/A conversion, is also determined by chip properties (layout), not only the external master clock jitter.

Looking at the DAC output signal (prior to reconstruction filtering), it consists of the audio spectrum plus its reflected mirror images. Depending on the DAC configuration, the first mirror image starts at fs (NOS) or at a multiple of fs (OS), and the images repeat at multiples of fs. This entire wide-bandwidth spectrum (audio + mirror images) appears at the D/A conversion stage and then enters the first connected analogue stages without prior analogue filtering.
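A small sketch of where those mirror images land: for a tone at f0, images appear at k * fr ± f0, where fr is the actual conversion rate (fs for NOS, fs * OS after oversampling). The function name and interface here are illustrative only:

```python
def image_frequencies(f0, fs, os=1, n_images=2):
    """Frequencies (Hz) of the first n_images spectral image pairs of a
    tone at f0 after D/A conversion at rate fs*os, before any
    reconstruction filtering."""
    fr = fs * os
    return [k * fr + s * f0
            for k in range(1, n_images + 1)
            for s in (-1, 1)]

# 1 kHz tone, 44.1 kHz NOS: the first image pair sits just above the audio band
print(image_frequencies(1000, 44100))  # [43100, 45100, 87200, 89200]

# With 8x OS, the first image is pushed out to ~352 kHz
print(image_frequencies(1000, 44100, os=8, n_images=1))  # [351800, 353800]
```

This is the point being made: oversampling doesn't remove the images, it moves them far enough up in frequency that they are easier to filter (or, the argument goes, to ignore).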

When timing jitter is added, the entire frequency spectrum (audio + all mirror images) is modulated with the timing jitter spectrum. On top of that, the wide-bandwidth signal directly after D/A conversion is polluted with interference signals present in the DAC (power supply noise/hum, EM interference, SPDIF/I2S interface signals, and so on). Even if the DAC output signal is filtered afterwards (analogue reconstruction filters), the damage is already done: the analogue signal is distorted before it enters the first analogue filter stages.

Jitter affects the entire audio frequency spectrum, from the deepest bass to the highest trebles, despite the fact that the timing deviations are in the ps range and theoretically should be inaudible. Based on this, I suspect the major cause of jitter-induced sound quality degradation is disruption of the connected analogue circuits, preventing them from performing correctly. As the analogue circuits process the entire audio frequency range, they can also affect the entire audio frequency range.
 
I am aware of the Kusunoki article, and as far as I am concerned that formula does not hold water. It takes the sample period divided by the number of possible sample values for that wordlength, then divides the lot by 2. Good luck factoring the system clock into that. Me, I'll stick with the likes of BB and LeCroy when it comes to determining jitter.
BTW, BCK and SCK do not determine conversion, wordclock does.
 