Jitter? Non-issue, or have we just given in?

Whatever happened to the focus on jitter? I remember years ago seeing the dCS Elgar DAC and a reclocking device... the brand has escaped me... but there was a lot of talk then about jitter reduction, clock upgrades, etc.

Nowadays, with all the iPod/iPad/laptop digital sources, I don't see much focus on jitter. When I first saw the dCS, I thought surely other companies would follow with some sort of data-buffering system that would clock the data out to the DAC with extreme accuracy for ultra-low jitter.

We sat listening today to the Audio Research DAC8 playing hi-res audio files off a laptop, and it made me wonder: how much jitter were we hearing? Is jitter really a non-issue, or have we all just given in, resigned to the fact that jitter is a fact of life, and we just ignore it?
 
Artifact Audibility Comparisons

Jitter question - Hydrogenaudio Forums

No. Audible jitter related to digital equipment exists only when there are serious mistakes and incompetence.

A few months back I looked at the specs of the best analog tape machines that have ever existed. This was the best sound we had before digital.

While audiophiles are obsessing over dozens or hundreds of picoseconds of jitter, the best analog tape machines ever made had millions of picoseconds of jitter. Millions! Now, the nature of that jitter may have been a little different, but that can't overcome the fact that analog tape is, at its best, thousands of times worse than even mediocre digital gear.

If jitter is such a problem in digital equipment, why aren't all these audiophiles running screaming, with blood trickling out of their ears, when they listen to analog playback? We now know that up to 50% of all SACD and DVD-A titles ever released came from legacy sources, including analog master tapes. Why aren't people complaining about all the audible jitter in their new SACDs and DVD-As?

The simple fact of the matter is that jitter associated with digital equipment was always an overblown issue. I'm not saying that there is *no* digital equipment with audible jitter, but I'm telling you that you have to look long and hard to find it among equipment with any pretensions of quality at all.
 
If jitter is a matter of a signal being distorted by timing mismatch (tape speed at playback vs. speed during recording, plus speed variations during both playback and recording), how is wow and flutter different from digital jitter as far as the impact on the signal is concerned?

It's a big IF. Do you have any evidence to support that it is? I note your inaccuracy in the above - is it deliberate or accidental?
 
Jitter is a fact, but can we actually hear it? That's what I'm curious about. How much of a difference does it really make...

When the DAC is connected to audio equipment with high enough resolution to resolve 16 bits, jitter makes a night-and-day difference. It is the difference between the synthetic, "digital", unnatural sound that causes listening fatigue and the natural, detailed music we were used to getting from analogue sources like tape or vinyl. It is the difference between a noisy background and a pitch-black background.

With analogue sources, jitter can be much higher because we don't use sample pulses but have a continuous signal.


DACs output pulse sequences rather than analogue signals. DAC chips basically output an RF spectrum (the audio spectrum plus images) before reconstruction filtering takes place. All components between the DAC output and the filter output are exposed to RF and therefore need to remain fully stable and linear in this frequency range. If not, the pulses get distorted (in amplitude and duration), which leads to distortion in the filtered / averaged output signal.

The pulse duration of each sample determines the energy that's finally delivered to the speaker voice coils.

Deviations in sample pulse width have a similar effect to PWM (Pulse Width Modulation). In other words, two sample pulses with the exact same amplitude but different durations will deliver different amounts of energy to the speaker voice coil.

Filtering the samples will average these errors, but will not remove them.
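
To make the PWM analogy concrete, here is a minimal numeric sketch (mine, with arbitrary example values): two pulses with identical amplitude but widths differing by 100 ps carry different areas, and since an averaging (low-pass) filter preserves area, the error survives filtering.

```python
# Minimal sketch: a pulse-width error changes the delivered energy (area),
# and a moving-average filter conserves area, so the error is not removed.
import numpy as np

dt = 1e-12                       # fine time grid: 1 ps per point
n_nom = 1_000_000                # nominal pulse width: 1 us (1e6 x 1 ps)
n_jit = n_nom + 100              # same pulse, 100 ps too long

pulse_nom = np.ones(n_nom)                       # ideal sample pulse
pulse_jit = np.ones(n_jit)                       # slightly-too-long pulse

# Delivered energy is proportional to the time integral (area) of the pulse.
area_nom = pulse_nom.sum() * dt
area_jit = pulse_jit.sum() * dt
print(f"relative area error: {(area_jit - area_nom) / area_nom:.1e}")  # ~1e-4

# A moving-average filter smooths the shape but conserves the area,
# so the same error remains after filtering.
kernel = np.ones(1000) / 1000
filt_nom = np.convolve(pulse_nom, kernel).sum() * dt
filt_jit = np.convolve(pulse_jit, kernel).sum() * dt
print(f"after filtering:     {(filt_jit - filt_nom) / filt_nom:.1e}")  # same
```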

In order to maintain the target resolution, the maximum allowable sample-duration deviation (jitter) can be calculated as:

1 / (sample rate * oversampling factor) / (2^bit depth / allowed bit error).

Based on this we can calculate exactly how much jitter can be tolerated for a target resolution:

For 44.1/16 NOS and a maximum tolerable bit error of 0.5 LSB (15.5 bit resolution), this gives 1 / (44,100 * 1) / (2^16 / 0.5) = 173 ps.

Similarly:

44.1/16 NOS, 0.1 LSB (15.9 bit resolution): 1 / (44,100 * 1) / (2^16 / 0.1) = 34.6 ps.
44.1/16 NOS, 0.01 LSB (15.99 bit resolution): 1 / (44,100 * 1) / (2^16 / 0.01) = 3.46 ps.
44.1/16, 8x oversampling, 0.5 LSB (15.5 bit resolution): 1 / (44,100 * 8) / (2^16 / 0.5) = 21.6 ps.
44.1/16, 8x oversampling, 0.1 LSB (15.9 bit resolution): 1 / (44,100 * 8) / (2^16 / 0.1) = 4.33 ps.
44.1/16, 8x oversampling, 0.01 LSB (15.99 bit resolution): 1 / (44,100 * 8) / (2^16 / 0.01) = 433 femtoseconds.

96/24 NOS, 0.5 LSB (23.5 bit resolution): 1 / (96,000 * 1) / (2^24 / 0.5) = 310 femtoseconds.
96/24 NOS, 0.1 LSB (23.9 bit resolution): 1 / (96,000 * 1) / (2^24 / 0.1) = 62 femtoseconds.
96/24 NOS, 0.01 LSB (23.99 bit resolution): 1 / (96,000 * 1) / (2^24 / 0.01) = 6.2 femtoseconds.
96/24, 8x oversampling, 0.5 LSB (23.5 bit resolution): 1 / (96,000 * 8) / (2^24 / 0.5) = 38.8 femtoseconds.
96/24, 8x oversampling, 0.1 LSB (23.9 bit resolution): 1 / (96,000 * 8) / (2^24 / 0.1) = 7.76 femtoseconds.
96/24, 8x oversampling, 0.01 LSB (23.99 bit resolution): 1 / (96,000 * 8) / (2^24 / 0.01) = 776 attoseconds.

192/24 NOS, 0.5 LSB (23.5 bit resolution): 1 / (192,000 * 1) / (2^24 / 0.5) = 155 femtoseconds.
192/24 NOS, 0.1 LSB (23.9 bit resolution): 1 / (192,000 * 1) / (2^24 / 0.1) = 31 femtoseconds.
192/24 NOS, 0.01 LSB (23.99 bit resolution): 1 / (192,000 * 1) / (2^24 / 0.01) = 3.1 femtoseconds.
192/24, 8x oversampling, 0.5 LSB (23.5 bit resolution): 1 / (192,000 * 8) / (2^24 / 0.5) = 19.4 femtoseconds.
192/24, 8x oversampling, 0.1 LSB (23.9 bit resolution): 1 / (192,000 * 8) / (2^24 / 0.1) = 3.88 femtoseconds.
192/24, 8x oversampling, 0.01 LSB (23.99 bit resolution): 1 / (192,000 * 8) / (2^24 / 0.01) = 388 attoseconds.

1,000 attoseconds equal 1 femtosecond; 1,000,000 attoseconds equal 1 picosecond.
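
For convenience, here is a short Python sketch (my own) that reproduces the table above directly from the formula:

```python
# Reproduces the jitter-tolerance figures above:
#   t_j = 1 / (sample_rate * oversampling) / (2**bits / lsb_error)
def max_jitter(sample_rate, oversampling, bits, lsb_error):
    """Maximum tolerable sample-duration deviation, in seconds."""
    return 1.0 / (sample_rate * oversampling) / (2**bits / lsb_error)

for rate, bits in ((44_100, 16), (96_000, 24), (192_000, 24)):
    for osr in (1, 8):
        for err in (0.5, 0.1, 0.01):
            t = max_jitter(rate, osr, bits, err)
            print(f"{rate/1000:g}/{bits}, {osr}x OS, {err} LSB: {t*1e12:.3g} ps")
```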

For 44.1/16 NOS and a 15.99 bit target resolution, jitter needs to be below 3.46 ps. This is not masterclock jitter, but the jitter present on the DAC chip at the D/A stage. CMOS logic creates peak currents during switching; these cause a lot of on-chip noise and ground bounce, which in turn makes it very difficult to maintain low on-chip jitter levels. It would be better to use (P)ECL, which produces far less noise and has lower propagation delay. The TDA154x are some of the very few DAC chips around that have CML (current mode logic). Current mode logic uses low-voltage signal swings (low ground bounce), and each logic building block draws a constant (bias) current to keep switching noise levels very low.

Some of the very best slaved-source configurations (SPDIF data transfer) just manage to achieve 20...50 ps. This would just meet the 44.1/16 NOS spec for 15.9 bit resolution (0.1 LSB error).

Jitter sensitivity increases as the sample pulse width gets smaller relative to the given timing deviations (jitter). So jitter sensitivity increases with both oversampling and higher sample rates.


Jitter also has a specific spectrum, and this spectrum depends on external factors like noise and interference signals.

This also means that power supply noise, ripple and hum modulate the sample pulse width and thus affect the jitter spectrum.

Because sample pulse-width deviations end up as actual signals at the speaker, the jitter frequency spectrum, and all intermodulation with other interference sources, becomes audible as well. So if we were to modulate the masterclock with 1 kHz, we should be able to detect a low-level 1 kHz ripple plus harmonics at the DAC output. This is sound coloration caused by jitter, and it can be measured and verified by spectrum analysis.
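
This claim is easy to check numerically. Below is a small simulation (my sketch; the 10 kHz tone, 1 kHz modulation and 1 ns peak jitter are arbitrary illustration values): sampling a tone at instants that are displaced sinusoidally at 1 kHz produces sidebands around the tone at +/- 1 kHz, exactly the kind of spectral fingerprint described.

```python
# Spectrum-analysis check: sinusoidally modulated sample timing creates
# sidebands around the signal, visible by FFT.
import numpy as np

fs, f0, fj, tj = 44_100, 10_000, 1_000, 1e-9   # rates in Hz, jitter in s
n = 1 << 16
t_ideal = np.arange(n) / fs
t_jittered = t_ideal + tj * np.sin(2 * np.pi * fj * t_ideal)

x = np.sin(2 * np.pi * f0 * t_jittered)        # tone sampled at wrong times

spectrum = np.abs(np.fft.rfft(x * np.hanning(n))) / n
freqs = np.fft.rfftfreq(n, 1 / fs)
for f in (f0 - fj, f0, f0 + fj):
    k = np.argmin(np.abs(freqs - f))
    print(f"{f:6d} Hz: {20 * np.log10(spectrum[k] + 1e-30):7.1f} dB")
# The sidebands at 9 and 11 kHz sit roughly 90 dB below the carrier here;
# more jitter, or a higher signal frequency, raises them proportionally.
```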

This also puts a unique "fingerprint" on each DAC, as it is impossible to get exactly the same jitter amplitude and spectrum on multiple DACs of the same type and make.

The jitter amplitude and spectrum give each DAC its characteristic sound (coloration), as interference signals are added to the audio signal. Most CD player and DAC tweaks change the jitter spectrum rather than reducing jitter levels to the point where target jitter specs are met.

Best transparency and highest resolution are likely to be achieved when the jitter specs are met and the jitter spectrum is neutral (a white-noise spectrum).

Failing to meet the jitter specs leads to reduced resolution, and the jitter spectrum can introduce dynamic distortion that becomes audible with specific sounds.
 
is it deliberate or accidental?

Deliberate, as I am still trying to make sure I grasp the jitter problem, to which I never paid much attention.

To my understanding, it is the difference in clock between the sampling unit (the analogue-to-digital converter) and the receiver (the DAC). Those timing differences can lead to distortions of the analogue signal.
This is, to my mind, equivalent in principle to the cutting speed of the lathe vs. the playback speed of the turntable, or to a signal taped at a slightly non-constant speed and played back on a machine with a similarly non-constant speed.

The digital data stream generally comes out of the optical pickup with a large amount of jitter. The data is read into a buffer at whatever rate it is actually read, and clocked out of that buffer with a steady clock. As long as there is data in the buffer, the process works well.

Another example is playing a digital music file on a PC. The data is supplied to the audio interface in blocks - a clear example of data showing up in fits and starts. Jitter is maximized. But there is a data buffer in the audio interface that provides audio data to the DAC, and the DAC is clocked by a stable oscillator.
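
A toy model of that buffer-and-reclock arrangement (my sketch, not any particular product's implementation):

```python
# Samples arrive in irregular bursts but leave the FIFO paced by a steady
# output clock, so arrival-time jitter never reaches the DAC as long as
# the buffer neither runs dry nor fills up.
from collections import deque
import random

fifo = deque()
random.seed(0)

def producer_burst():
    """Source side: a burst of samples delivered at some irregular moment."""
    for sample in range(random.randint(128, 512)):  # fits-and-starts arrival
        fifo.append(sample)

def dac_tick():
    """DAC side: called once per period of the stable output clock."""
    if not fifo:
        raise RuntimeError("buffer underrun - audible glitch")
    return fifo.popleft()          # output timing set only by the DAC clock

producer_burst()
for _ in range(100):
    dac_tick()    # evenly spaced output, regardless of how the data arrived
```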

Jitter question - Hydrogenaudio Forums

Buffer underrun/overrun does not seem to be a problem anymore (better control of, and faster, CD-ROM drives?), so maybe jitter is also no longer a problem, thanks to buffering and the subsequent reclocking during readout.
 
To my understanding, it is the difference in clock between the sampling unit (the analogue-to-digital converter) and the receiver (the DAC).
Those timing differences can lead to distortions of the analogue signal.

Yep, no disagreement there.

This is, to my mind, equivalent in principle to the cutting speed of the lathe vs. the playback speed of the turntable, or to a signal taped at a slightly non-constant speed and played back on a machine with a similarly non-constant speed.

But that's the part I'd like evidence for before going further. What's in your mind doesn't comport with my understanding of digital audio theory. To wit, the discrete-time world is quite a separate, distinct world from the continuous-time one, so what applies in one does not necessarily carry over to the other. If you think it does, then it's for you to provide the theoretical underpinning.
 
When the DAC is connected to audio equipment with high enough resolution to resolve 16 bits, jitter makes a night-and-day difference. It is the difference between the synthetic, "digital", unnatural sound that causes listening fatigue and the natural, detailed music we were used to getting from analogue sources like tape or vinyl. It is the difference between a noisy background and a pitch-black background.

OK, so to explore this further, please describe the experimental set-up: that is, the way you managed to vary only jitter between two otherwise identical systems. Without firm controls we can't be sure you were actually hearing jitter. Of particular interest are the DAC in use and the subsequent analog signal processing.

The other important aspect is the nature of the jitter. Was it random or in some way correlated with the signal?
 
The effect of jitter scales with signal amplitude, doesn't it? I.e. the amplitude of the noise/distortion it causes reduces towards zero at low signal amplitudes. Therefore its effect is only heard alongside a much higher-amplitude signal (the desired output signal), even if you turn the volume up during quiet passages, unlike some other types of digital noise and distortion. Does this make it less of an issue than it first appears?
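
Here's a quick numerical sanity check of that scaling (my own back-of-envelope sketch; the 10 kHz tone and 100 ps RMS jitter figures are arbitrary):

```python
# The voltage error from a timing error is roughly slew rate x timing
# error, so it shrinks with signal amplitude while staying at a fixed
# level *relative* to the signal.
import math

def jitter_error_rms(amplitude, freq_hz, jitter_rms_s):
    """Approximate RMS voltage error ~ RMS slew rate * RMS jitter (sine)."""
    slew_rms = amplitude * 2 * math.pi * freq_hz / math.sqrt(2)
    return slew_rms * jitter_rms_s

for amp in (1.0, 0.001):                    # full scale vs. a -60 dB signal
    err = jitter_error_rms(amp, 10_000, 100e-12)
    rel_db = 20 * math.log10(err / (amp / math.sqrt(2)))
    print(f"amplitude {amp}: error {err:.2e} rms ({rel_db:.0f} dB re signal)")
# The absolute error falls 60 dB with the signal; the relative level
# stays at about -104 dB for these numbers.
```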
 
The effect of jitter scales with signal amplitude, doesn't it? I.e. the amplitude of the noise/distortion it causes reduces towards zero at low signal amplitudes.

I think this might depend on which kind of DAC you're using. Low-bit DACs have huge amounts of out-of-band noise which is still present when there's no signal. In fact their OOB noise is at a maximum when the in-band signal is at a minimum. That's a consequence of Parseval's theorem (I think, correct me if I'm mistaken).

So what DAC have you got? :)
 
Deliberate, as I am still trying to make sure I grasp the jitter problem, to which I never paid much attention.

Analog tape had speed variations of very low frequency - a few hertz down to below 1 Hz. That is FREQUENCY MODULATION at that low frequency.
All the tape players had an inertial flywheel inside that kept those variations small in value and, more importantly, in period. Your brain wasn't bothered too much if they were below a decent level, because it was a linear distortion.

DACs convert digital jitter from the time domain into the amplitude domain. The frequency of the jitter is a variable mix (1-10,000 Hz) that changes very quickly, because it is the product of multiple jitter sources. You get AMPLITUDE MODULATION at a higher frequency. Now we have non-linear distortion, and that modulation frequency is also variable. Very easy to pick up.

Now, the way to reduce that is to use bigger buffers. Who has bigger buffers? Modern DSPs, which you can find in numerous products today. Memory is cheap and small today.

Just an example: any Denon player with AL24 Processing uses a DSP (AD Blackfin family) that has its own dedicated clock. The PCM signal comes from the transport, which is clocked with a general PLL divider-type clock and has the usual jitter of an optical transport. Data is stored in the DSP's RAM at that frequency but is read out at the DSP's crystal frequency. The RAM is several megabytes in size, not the puny 2 kB of the usual antique CD controllers.
 
Sure, I seem to remember reading that story when it originally came out. A wonderful tale. I am left wondering, though: why did those guys study PLL theory only after they'd produced the first chip? Mask charges for ICs are truly horrendous. :eek:

I haven't re-read through to the end yet (still studying p2) but do they comment on the non-USB related jitter issues anywhere? This story seems to me relevant to the problems of audio-over-USB but not generally applicable to jitter in audio.
 
I did, and that's right. Audio-over-USB was still new then. The issue of jitter where it counts (at the DAC) had been known and understood for decades; when I worked at Nicolet back in the '80s, it was something we accounted for in the design of our interfaces.

Re: analog tape analogies, think "bias."
 
... do they comment on the non-USB related jitter issues anywhere? This story seems to me relevant to the problems of audio-over-USB but not generally applicable to jitter in audio.
No, I think they were just trying to achieve 'crystal-like' jitter performance, and assumed that this was sufficient. They achieved measured distortion performance close to the theoretical limit, so they presumably thought that the jitter performance was adequate. Their "golden-eared" listening panel previously appears to have been able to spot 0.03% worth of (a certain type of) jitter distortion, which seems like quite an interesting fact in itself.
 
I think this might depend on which kind of DAC you're using. Low-bit DACs have huge amounts of out-of-band noise which is still present when there's no signal. In fact their OOB noise is at a maximum when the in-band signal is at a minimum. That's a consequence of Parseval's theorem (I think, correct me if I'm mistaken).

Not sure I understand the significance of this. I had been assuming that the effect of jitter was to randomly, or non-randomly, delay or advance the output of samples against the desired moment when they should be output. Subtracting the resulting signal from the original would reveal the jitter 'noise'. Clearly against a zero output signal, the jitter noise would be zero, and against quiet signals it would be unmeasurably low - unlike, say, quantisation distortion. Therefore attempting to reduce jitter sufficiently to prevent measurable distortion of a 24 bit full scale signal would be a red herring - even without such astounding jitter performance, the finer quantisation benefits of 24 bit resolution would still be valid at lower amplitudes.

Are we saying that, because of the internal workings of certain types of DAC, jitter can also be responsible for a degradation of the accuracy of the sample voltages, as well as their timing? Depending on the type of DAC, that could be revealed as noise even on 'silence'...?

I would hope that DACs could be designed to take care of their own internal clocking when 'assembling' the output voltage, regardless of the accuracy of the user's sample rate. I don't see why they couldn't.
 
Not sure I understand the significance of this. I had been assuming that the effect of jitter was to randomly, or non-randomly, delay or advance the output of samples against the desired moment when they should be output. Subtracting the resulting signal from the original would reveal the jitter 'noise'. Clearly against a zero output signal, the jitter noise would be zero, and against quiet signals it would be unmeasurably low - unlike, say, quantisation distortion. Therefore attempting to reduce jitter sufficiently to prevent measurable distortion of a 24 bit full scale signal would be a red herring - even without such astounding jitter performance, the finer quantisation benefits of 24 bit resolution would still be valid at lower amplitudes.

Yep, makes sense for traditional multi-bit or R2R ladder DACs.

Are we saying that, because of the internal workings of certain types of DAC, jitter can also be responsible for a degradation of the accuracy of the sample voltages, as well as their timing? Depending on the type of DAC, that could be revealed as noise even on 'silence'...?

Not sure it's relevant at this stage to distinguish between a voltage error and a timing error. As jkeny recently pointed out to me in another thread, the right data at the wrong time = the wrong data.

Take the Philips 'bitstream' concept, as that's the one I'm most familiar with. At digital silence the DAC is still outputting a full-scale signal; that's all it can do, being a 1-bit DAC. So in this case the silence is actually the presence of very high frequency noise (5.6 MHz being the main frequency, from memory). A DAC putting out a 5.6 MHz full-scale squarewave is going to be very sensitive to the position of the edges of that signal in order not to introduce noise, is it not?
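
For what it's worth, here's a crude simulation of that sensitivity (my sketch; no real delta-sigma modulator, just an idle +1/-1 pattern on a fine time grid): with perfect edges the low-pass-filtered output of the 'silent' bitstream is exactly zero, while randomly displaced edges leave a baseband noise floor.

```python
# Edge jitter on a full-scale 1-bit idle pattern turns ultrasonic
# "silence" into baseband noise that survives the averaging filter.
import numpy as np

FINE = 64                # fine-grid points per bit period
N_BITS = 1 << 14         # bit periods simulated
DECIM = 128              # bits averaged per output sample (the "filter")
rng = np.random.default_rng(1)

def idle_stream(edge_jitter_pts):
    bits = np.tile([1.0, -1.0], N_BITS // 2)      # idle bitstream: silence
    w = np.repeat(bits, FINE)
    if edge_jitter_pts:
        for k in range(1, N_BITS):                # displace each bit edge
            d = int(rng.integers(-edge_jitter_pts, edge_jitter_pts + 1))
            if d > 0:     # edge late: previous level lingers
                w[k * FINE : k * FINE + d] = bits[k - 1]
            elif d < 0:   # edge early: next level starts sooner
                w[k * FINE + d : k * FINE] = bits[k]
    # crude low-pass + decimation: average the fine grid per output period
    return w.reshape(-1, DECIM * FINE).mean(axis=1)

for j in (0, 4):          # 0 = perfect edges; 4 = up to 4 grid points of jitter
    print(f"edge jitter {j}: baseband rms = {idle_stream(j).std():.2e}")
# Perfect edges -> 0.00e+00; jittered edges -> a nonzero noise floor,
# even though the data itself is still all "silence".
```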

I would hope that DACs could be designed to take care of their own internal clocking regardless of the accuracy of the user's sample rate. I don't see why they couldn't.

Many of them use switched-capacitor filters on their outputs to get around this very problem. So it's not an issue on all low-bit DACs.
 
Not sure it's relevant at this stage to distinguish between a voltage error and a timing error. As jkeny recently pointed out to me in another thread, the right data at the wrong time = the wrong data.
I beg to differ. A small timing error on a small signal would give an unmeasurably tiny voltage error, so I see a great distinction between a 0.0003 V voltage error, which would be measurable at any time, and the effect of a 12 ps timing error, which might not be, depending on the signal.

Just to summarise, I agree that the internal workings of certain DACs would make them sensitive to jitter within their own internal clocks when 'assembling' the output voltage, but the accuracy of the user's sample rate would (should) be unrelated to this.
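
To put rough numbers on that distinction (my own illustrative figures, not from the thread): the error a timing offset produces is bounded by the signal's slew rate times the offset, so a 12 ps error on even a full-scale 2 V, 1 kHz sine amounts to at most 2 V x 2*pi*1000 Hz x 12 ps = 0.15 microvolts, more than three orders of magnitude below a 0.0003 V error.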
 