why is oversampling in CD players considered bad?

Status
Not open for further replies.
In the oscillograph below the yellow trace is the output of a non-oversampled AD1865 DAC with no output filtering. The green trace is the output of an AKM oversampling delta-sigma DAC. The digital signal fed to both DACs is a dithered 10 kHz sine wave sampled at 44.1 kHz. Incidentally, the rise/fall times of the steps in the yellow trace measure as much as 6 MICROseconds. Why does everyone obsess over a few PICOseconds of jitter when the accuracy of a typical multibit DAC is in the realm of MICROseconds?
 

Attachments

  • akm1865d10k1.png (70.8 KB)
Werner said:
The saving graces of non-OS are

1) typical spectral contents of music fall off above 3kHz

Of course, that has to be compensated.
It beats me how some designs with uncompensated analog stages (passive or active) get so much praise; they sound bad, muted.
Or is it just me that has a neutral system? (!)
(Sin x)/x compensation is needed, and when well done a NOS DAC sounds VERY good.

PS: the TDA1541(A) sounds 'smoother' than the TDA1543 in NOS, but still, both need to be compensated. Slightly different, though.
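A quick way to put numbers on the compensation that is needed: the zero-order hold of a NOS DAC multiplies the spectrum by sinc(f/fs). A minimal sketch (the function name is mine):

```python
import math

def zoh_droop_db(f_hz, fs_hz=44100.0):
    """Amplitude droop of a zero-order-hold (NOS) DAC at frequency f.

    The held output multiplies the spectrum by sinc(f/fs), i.e.
    sin(pi*f/fs) / (pi*f/fs).  This is what the (sin x)/x compensation
    in the analog stage has to undo.
    """
    x = math.pi * f_hz / fs_hz
    return 20.0 * math.log10(math.sin(x) / x)

# roughly -0.74 dB at 10 kHz and -3.17 dB at 20 kHz for 44.1 kHz sampling
print(round(zoh_droop_db(10e3), 2), round(zoh_droop_db(20e3), 2))
```

At CD rates the droop is only a few dB at the top of the band, but it is a fixed, known curve, which is why a simple analog compensator can work well.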
 
Ulas said:
In the oscillograph below the yellow trace is the output of a non-oversampled AD1865 DAC with no output filtering. The green trace is the output of an AKM oversampling delta-sigma DAC. The digital signal fed to both DACs is a dithered 10 kHz sine wave sampled at 44.1 kHz. Incidentally, the rise/fall times of the steps in the yellow trace measure as much as 6 MICROseconds. Why does everyone obsess over a few PICOseconds of jitter when the accuracy of a typical multibit DAC is in the realm of MICROseconds?

because you still do not understand how jitter affects conversion

you confirm the diyAudio experts are right again
 
Jitter in non OS DACS

Hello Guido

You made a point earlier in the thread that one of the benefits of non-OS DACs is lower jitter than oversampled DACs. My question is: all things being equal, and using the CS8412/4 receiver, why would that be the case?

Regards
Arthur
 
This place is getting harder to read every day. Self-appointed experts who don't understand what jitter is, and its effects, telling those of us who do this for a living that we are the ones who don't get it.

Anyway.........as you increase the oversampling integer, the DAC's sensitivity to jitter goes up. Inside a typical CD player, you have a better chance of getting an acceptable sound with non-o/s if you have a lousy clock. With 8x oversampling, you will hear how bad it is. Or to put it in different terms: you won't hear how good 8x can sound if you don't have a good clock.

As for a SPDIF RX scheme, things are a mess. So much of the jitter is correlated to the signal. The absolute jitter numbers are so high that I am not sure either scheme sounds better.

Jocko
 
Re: Jitter in non OS DACS

PHEONIX said:
Hello Guido

You made a point earlier in the thread that one of the benefits of non-OS DACs is lower jitter than oversampled DACs. My question is: all things being equal, and using the CS8412/4 receiver, why would that be the case?

Regards
Arthur

Because commonly the 8414 does not clock the DAC in oversampled systems.

In non oversampled systems pin 12 is used. Apart from the fact that the jitter on pin 19 is high anyway, at pin 12 it is worse.

To all 1543 fans in the world: Go and reclock pin 12 with pin 19. Or better, use a decent PLL......

best
 
Jocko Homo said:

...as you increase the oversampling interger, the DACs sensitivity to jitter goes up.

Jocko

wrong - in multibit dacs having the same bit resolution reproducing the same bandwidth signal, jitter sensitivity doesn't increase with sample rate

a fixed jitter interval is a ~proportionately larger fraction of the oversampled sample width (for jitter much smaller than the sample time), but the same signal slewing at the same rate is represented by proportionately smaller voltage steps (for large signals, reasonable bit resolution) - the result is a wash

you could argue that for a given technology switching errors become a more significant contribution to dac dynamic nonlinearity at higher sample rates, but jitter by itself is not the issue in oversampling

practically, many oversampled dacs use lower bit resolutions with massively higher oversampling rates - the difference in bit resolution does cause differing jitter sensitivity
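jcx's point can be checked against the textbook jitter-limited SNR of a sampled full-scale sine, which contains no sample rate at all. A minimal sketch (the function name is mine; this assumes random jitter and a full-scale tone):

```python
import math

def jitter_limited_snr_db(f_signal_hz, jitter_rms_s):
    """SNR ceiling imposed by random clock jitter on a full-scale sine.

    Standard result: the error is e(t) = dV/dt * tj, so for A*sin(2*pi*f*t)
    the RMS error is (2*pi*f*A/sqrt(2)) * tj, giving
    SNR = -20*log10(2*pi*f*tj).  Note that fs appears nowhere.
    """
    return -20.0 * math.log10(2.0 * math.pi * f_signal_hz * jitter_rms_s)

# a 10 kHz tone with 1 ns RMS jitter is limited to about 84 dB,
# whether sampled at 44.1 kHz, 8x 44.1 kHz, or any other rate
print(round(jitter_limited_snr_db(10e3, 1e-9), 1))
```

The bound depends on signal frequency and jitter amplitude only, which is the "wash" argument in formula form; how the jitter energy is distributed in frequency (the correlated case debated below) is a separate question.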
 
jcx said:

wrong - in multibit dacs having the same bit resolution reproducing the same bandwidth signal, jitter sensitivity doesn't increase with sample rate

a fixed jitter interval is a ~proportionately larger fraction of the oversampled sample width (for jitter much smaller than the sample time), but the same signal slewing at the same rate is represented by proportionately smaller voltage steps (for large signals, reasonable bit resolution) - the result is a wash

Not really. The above is only true when considering random (uncorrelated) jitter. If you have signal-correlated jitter the analysis does not work, and the jitter sensitivity does rise with oversampling rate; at the limit it rises at the same rate as the sampling rate. Since it is signal-correlated jitter that worries us, the overall assertion holds.
 
Re: Re: Jitter in non OS DACS

Guido Tent said:


Because commonly the 8414 does not clock the DAC in oversampled systems.

In non oversampled systems pin 12 is used. Apart from the fact that the jitter on pin 19 is high anyway, at pin 12 it is worse.

To all 1543 fans in the world: Go and reclock pin 12 with pin 19. Or better, use a decent PLL......

best

In the dddac I reclock pin 12 and pin 11 with a Tent XO clock; a very audible difference (more detail & stage) compared with the normal reconstructed clock.

doede
 
experts - more trouble than they're worth?

Jocko Homo said:
This place is getting harder to read every day. Self-appointed experts who don't understand what jitter is, and its effects, telling those of us who do this for a living that we are the ones who don't get it.

Anyway.........as you increase the oversampling integer, the DAC's sensitivity to jitter goes up. …

Jocko
jcx said:


wrong - in multibit dacs having the same bit resolution reproducing the same bandwidth signal, jitter sensitivity doesn't increase with sample rate

a fixed jitter interval is a ~proportionately larger fraction of the oversampled sample width (for jitter much smaller than the sample time), but the same signal slewing at the same rate is represented by proportionately smaller voltage steps (for large signals, reasonable bit resolution) - the result is a wash

you could argue that for a given technology switching errors become a more significant contribution to dac dynamic nonlinearity at higher sample rates, but jitter by itself is not the issue in oversampling

practically, many oversampled dacs use lower bit resolutions with massively higher oversampling rates - the difference in bit resolution does cause differing jitter sensitivity
Francis_Vaughan said:


Not really. The above is only true when considering random (uncorrelated) jitter. If you have signal-correlated jitter the analysis does not work, and the jitter sensitivity does rise with oversampling rate; at the limit it rises at the same rate as the sampling rate. Since it is signal-correlated jitter that worries us, the overall assertion holds.

Jocko may be in good company but I don’t see how the attached sim can be reconciled with the expert’s opinion

I can't see how, for a given jitter amplitude (ns, pk/rms), a multibit DAC's jitter sensitivity is increased by higher clock frequency

at the dac output dT*dV is the jitter induced error - correlate dT and dV all you want and you haven't increased its sensitivity to sample rate - perhaps its audible annoyance value can increase vs random jitter but sample rate falls out of the equation

please debug this sim if a certain “expert” wants to keep his credibility:
1 kHz sine, sampled at 48 kHz and 8x 48 kHz, +/- 200 ns jitter
jitter.gif

yellow correlated jitter spectrum is same for both sample rates, red random jitter is 12 dB lower for 8x rate

uncorrelated random jitter's spectrum is spread by a higher sample rate, so less of it falls in the audio band; but I'm saying the worst case is that even if all of the jitter energy is concentrated in the audio band by complete signal correlation, it does not increase with sample rate

I'm sure many clock multiplication schemes worsen jitter but that is not the multibit DAC's fault
 

Attachments

netlist is short enough to post for those without LTspice:

* C:\Program Files\LTC\SwCADIII\jitter_v_rate.asc
B1 nom 0 V=sin(2*pi*int(time*48000*{n})/({n}*48)) tripdv=10uv tripdt=1ns
R1 nom 0 1
B2 early 0 V=sin(2*pi*int((time+200n)*48000*{n})/({n}*48)) tripdv=10uv tripdt=1ns
R2 early 0 1
B3 late 0 V=sin(2*pi*int((time-200n)*48000*{n})/({n}*48)) tripdv=10uv tripdt=1ns
R3 late 0 1
B4 corr_jitter 0 V=((1+sgn(sin(2*pi*(time-1u)*1000)))*v(early)-(sgn(sin(2*pi*(time-1u)*1000))-1)*v(late))/2
R4 corr_jitter 0 1
B5 rand_jitter 0 V=((1+sgn(v(white)))*v(early)-(sgn(v(white))-1)*v(late))/2
R5 rand_jitter 0 1
B6 white 0 V=rand(int((time+1u)*48000*{n}))-1/2
R6 white 0 1
.tran 0 10ms 0 100n
.options plotwinsize=0
.step param n list 1 8
.backanno
.end
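For those without LTspice at all, the same experiment can be sketched in NumPy. Everything below (names, the fine evaluation grid, the 20 kHz band edge, the crude per-sample random model) is my own choice, not from the thread:

```python
import numpy as np

# Rough re-creation of the netlist above: a 1 kHz sine through a
# zero-order-hold DAC clocked at 48 kHz and at 8x 48 kHz, with +/-200 ns
# bi-level jitter, either signal-correlated or random.

F_SIG = 1_000.0       # test tone (Hz)
FS0 = 48_000          # base sample rate (Hz)
TJ = 200e-9           # +/-200 ns jitter, as in the netlist
GRID = 1024 * FS0     # fine time grid for evaluating the held output
T_END = 0.01          # 10 ms, matching the .tran directive

t = np.arange(int(T_END * GRID)) / GRID

def zoh(shift, fs):
    """Held DAC output when every conversion instant is shifted by `shift`."""
    return np.sin(2 * np.pi * F_SIG * np.floor((t + shift) * fs) / fs)

def inband_rms(err):
    """RMS of the error content below 20 kHz (one-sided FFT, Parseval)."""
    spec = np.fft.rfft(err) / len(err)
    f = np.fft.rfftfreq(len(err), 1.0 / GRID)
    return np.sqrt(2.0 * np.sum(np.abs(spec[f <= 20_000.0]) ** 2))

results = {}
rng = np.random.default_rng(0)
for n in (1, 8):
    fs = n * FS0
    nominal = zoh(0.0, fs)
    early, late = zoh(+TJ, fs), zoh(-TJ, fs)
    # correlated: clock early on the positive half-wave, late on the negative
    corr = np.where(np.sin(2 * np.pi * F_SIG * t) >= 0.0, early, late)
    # random: an independent early/late coin flip per sample (crude model)
    idx = np.floor(t * fs).astype(int)
    flips = rng.integers(0, 2, size=idx[-1] + 1).astype(bool)
    rand = np.where(flips[idx], early, late)
    results[n] = (inband_rms(corr - nominal), inband_rms(rand - nominal))

# expectation: correlated in-band error barely moves between 1x and 8x,
# while random in-band error drops at 8x as it is spread out of band
for n, (c, r) in results.items():
    print(f"{n}x: correlated {c:.2e}  random {r:.2e}")
```

This reproduces the qualitative result claimed for the LTspice plot: the correlated-jitter error is essentially the same at both rates, and only the random-jitter error falls as the clock rate rises.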
 
I am in good company. For me to be wrong, all of my customers would have to be deaf. BB would have to be lying to us. Notorious clock-mongers that they are............

Sorry, your sim and netlist mean nothing to me. I am not an "expert" in those areas.

Anything else?????

Here is something to try, and it doesn't need a simulator:

Listen to the rail on an SPDIF RX chip......or the filter chip after one. Tell us what you hear, and then explain why it should not make any difference.

Jocko
 
now you're saying "don't confuse expertise with facts"

face it, Jocko: your statement I quoted is simply wrong in the terms I explicitly laid out

now you're confounding the issue with anecdotal subjective evidence based on a complex system - really expert engineering practice

to know where to put your efforts in engineering requires understanding the separate sensitivities of the pieces of a complex system

increased jitter sensitivity with sample rate is not a property of a high resolution multibit dac

If you have references you think say otherwise I'm willing to learn

are you?
 
Re: Re: Re: Jitter in non OS DACS

dddac said:


In the dddac I reclock pin 12 and pin 11 with a Tent XO clock, very audible difference (more detail & stage) with the normal reconstructed clock.

doede

This is asynchronous reclocking by the way.
It works, since the CS8412 uses double buffering internally, preventing metastability issues.
 
my 1st sim addresses that,

the yellow trace is from a sequence of jittered samples that is 200 ns "early" when the 1 kHz sine wave signal is positive and 200 ns "late" when it is negative - any suggestions on how to get more correlated?

maybe this helps:
jitter2.gif

(increased jitter to 2 µs for visibility)

the sim plot in my previous post is 2 passes on top of each other; the yellow correlated jitter traces are the same for the 48 kHz and 8x 48 kHz sampling rates, both with the same +/-200 ns bi-level jitter

I am claiming that when the multibit dac has enough resolution to resolve 8x more voltage steps when the clock is increased 8x, the jitter error (from the same jitter amplitude in ns) is constant in rms sum - ie a high-resolution multibit dac's correlated jitter sensitivity is not changed by clock rate. random jitter is seen to go down in the sim, as the increased clock rate spreads it beyond the audio frequency range

single bit dacs with delta-sigma modulation are a different story
 