Guido Tent said:
The problem is both, as I have never heard nor designed an SPDIF-driven DAC that is fully transparent to the properties of the transport in front of it.
What everyone seems to talk about is jitter, which we all know is suppressed a lot using a narrow PLL and a good clock. I guess that you are looking for something other than timing errors in the SPDIF interface; do you have any clue as to what?
I have heard a difference running on SPDIF from a reeeaaally old player, and so has my friend. But as soon as we listened blindly we couldn't pick it out, which may be saying something about the giant transfer function called "the brain" 😀
Did you check to see if the transport was sourcing a signal from a true 75 Ohm source, and check that the DAC was a true 75 Ohm load? Oh, and that the cable was 75 Ohm characteristic impedance? The reason I ask is that older players often weren't properly 75 Ohm, and mis-termination of a transmission line causes reflections and distortion of pulses.
Don't forget the connectors: even the most expensive RCA is still dire at those frequencies. There is a reason pro kit uses 75 Ohm BNCs or 110 Ohm digital XLRs.
If SPDIF is really so dire why do some people still think external DACs are always better?
Personally from a purely engineering POV the best way is just to buffer and reclock to a fixed DAC clock. Unless you're actually getting data errors the transport jitter won't make any difference at all.
A VCXO will always have worse performance than a fixed XO.
lambda = v / f = (0.7 × 3×10^8 m/s) / (6×10^6 Hz) ≈ 35 m
With a 35 m wavelength versus a 1 m cable, any problems with impedance mismatches are unimportant. Once the cable reaches 1/10 of a wavelength or more, we could start to worry. The reflections do happen in any case, but they don't mess up the following bit, and reflections within the same bit do not corrupt the triggering edges, since they bounce back after the receiver has already detected the edge.
I can't hear any difference between 50 Ohm cable, or even 92 Ohm. And while I have not tested it, I really doubt that there would be any measurable difference between them if you look after the receiver 🙄
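The wavelength estimate above can be sketched as a quick check. The 0.7 velocity factor, the ~6 MHz frequency, and the 1/10 rule of thumb are taken from the posts; the function names are mine:

```python
# Hypothetical sketch of the post's estimate: lambda = v / f, with the
# cable velocity factor and frequency as assumed in the thread.
C = 3e8              # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.7

def wavelength_m(freq_hz: float, vf: float = VELOCITY_FACTOR) -> float:
    """Wavelength inside the cable at a given frequency."""
    return vf * C / freq_hz

def needs_termination(cable_len_m: float, freq_hz: float,
                      ratio: float = 0.1) -> bool:
    """Rule of thumb: start worrying about matching once the cable
    exceeds ~1/10 of a wavelength at the frequency of interest."""
    return cable_len_m >= ratio * wavelength_m(freq_hz)

print(wavelength_m(6e6))             # ~35 m
print(needs_termination(1.0, 6e6))   # False: 1 m is well under 3.5 m
```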
Asgard said:
What everyone seems to talk about is jitter, which we all know is suppressed a lot using a narrow PLL and a good clock. I guess that you are looking for something other than timing errors in the SPDIF interface; do you have any clue as to what?
I know about PLLs, and bandwidth (see the projects I am involved with - Tentlabs & Grimmaudio). I also use 75ohm connectors and ditto cables.
When the data is correct, the only difference is jitter. The data IS correct, as we have checked with a variety of transports. What is left is jitter, unless I am missing something.
best
It's not the signal period that determines the reflections, it's the edge rise/fall time.
You can have a 1Hz square wave with 1GHz edges
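BlackCatSound's point can be put in numbers using the common bandwidth ≈ 0.35 / t_rise rule of thumb. The 5 ns edge and the 0.7 velocity factor below are assumed illustration values, not figures from the thread:

```python
# Sketch: the spectral content that matters for reflections is set by
# the edge speed, not the repetition rate of the waveform.
C = 3e8    # speed of light in vacuum, m/s
VF = 0.7   # assumed cable velocity factor

def edge_bandwidth_hz(t_rise_s: float) -> float:
    """Approximate bandwidth implied by a rise time (0.35/t_r rule)."""
    return 0.35 / t_rise_s

def critical_length_m(t_rise_s: float, vf: float = VF) -> float:
    """Cable length at which the edge spectrum reaches ~1/10 wavelength."""
    return 0.1 * vf * C / edge_bandwidth_hz(t_rise_s)

# Even a 1 Hz square wave with 5 ns edges carries ~70 MHz of edge bandwidth,
# so matching can matter on quite short runs:
print(edge_bandwidth_hz(5e-9))    # ~7e7 Hz
print(critical_length_m(5e-9))    # ~0.3 m
```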
BlackCatSound said: It's not the signal period that determines the reflections, it's the edge rise/fall time.
You can have a 1Hz square wave with 1GHz edges
Correct.
Let us see what we can do with such a signal 🙂
Hi BlackCatSound,
BlackCatSound said: If SPDIF is really so dire why do some people still think external DACs are always better?
That has more to do with things other than technical issues. They really are inferior to the same D/A mounted within the CD transport running off the dedicated signals.
Now you can compare a good external D/A design to an internal one that has all kinds of evils committed. I think that's where the silly cheap transport + good DAC started.
Hi Guido,
I'm not sure what happens when different DSP chips create the SPDIF signal, there may be differences. This could be a reason you may get varying performance from transports. Not my area of study.
-Chris
Guido Tent:
Well then, if jitter is all that remains, a good PLL should take care of it, provided the PLL itself does not generate jitter and the clock is not bad. The only jitter frequencies that a PLL has problems with (as I'm sure you know) are really low ones, which are the least harmful.
I guess the search goes on: can you blindly identify an SPDIF source with low jitter after the receiver? I don't think I can, anyway.
BlackCatSound said: It's not the signal period that determines the reflections, it's the edge rise/fall time.
Yes, that is true, but we are talking about a sine wave and its odd overtones. The steeper the rise time, the higher the odd multiples, so you could say that only the very highest spectral components are subject to impedance-matching problems; the lower ones have a long enough wavelength not to be subject to the workings of a transmission line (~1/4 wavelength according to textbooks). This is how I was educated, but I am always open to suggestions 🙂
anatech said: That has more to do with things other than technical issues. They really are inferior to the same D/A mounted within the CD transport running off the dedicated signals.
Well yes, obviously ignoring players where the designer had a total brain fade while designing the output section 🙂
Assuming identical DACs and analogue sections:
Transport -> DAC
or
Transport -> SPDIF encode -> long cable (compared to PCB trace lengths) -> SPDIF decode -> Clock recovery -> DAC
The S/PDIF signal leaving a CD player is a 64-bit frame at a frequency of 44.1kHz, making the bit rate 2.82Mbit/s. That, plus its odd harmonics, is certainly into transmission line territory even on a short cable. It's precisely because I have measured pulse distortions on this signal that I queried correct matching. And, as was pointed out earlier, RCA phonos are not a terribly accurate 75 Ohm. A correctly fitted 75 Ohm BNC is better.
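The arithmetic behind those figures, as a quick sanity check (the 2x transition-rate line reflects the biphase coding's shortest cells):

```python
# One S/PDIF frame carries 64 bits (two 32-bit subframes) and is sent
# once per audio sample, so the bit rate is sample_rate * 64.
sample_rate_hz = 44_100
bits_per_frame = 64

bit_rate = sample_rate_hz * bits_per_frame   # 2_822_400 bit/s, i.e. ~2.82 Mbit/s
max_transition_rate = 2 * bit_rate           # biphase coding: up to 2 transitions/bit
print(bit_rate, max_transition_rate)
```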
EC8010:
The more careful RF designers use a 1/10 cable-to-wavelength ratio as an indication of whether correct termination is necessary. But OK, say that we need proper impedances to x% accuracy: how would the reflected edges affect the SPDIF receiver? At each reflection the higher frequencies will reflect more than the lower ones, meaning that the fundamental sine will play a larger part in triggering than the odd harmonics. Less steep edges mean less accuracy and more jitter. Jitter that a PLL will suppress.
So I still can't really see how nulling reflections would improve quality. Does it affect the receiver in any way other than plain jitter? Please share your thoughts 😀
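For reference, the fraction of an incident wave reflected at a mismatched load is given by the standard reflection-coefficient formula Gamma = (ZL - Z0) / (ZL + Z0). A small sketch; the example impedances are illustrative, not measured values from the thread:

```python
# Reflection coefficient at a load ZL on a line of characteristic
# impedance Z0. Zero means matched (no reflected wave).
def reflection_coefficient(z_load: float, z0: float = 75.0) -> float:
    return (z_load - z0) / (z_load + z0)

print(reflection_coefficient(75.0))   # 0.0   - matched
print(reflection_coefficient(50.0))   # -0.2  - 50 Ohm cable in a 75 Ohm system
print(reflection_coefficient(30.0))   # ~-0.43 - a hypothetical poor connector
```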
Hi Asgard,
Any data book I've read goes to great lengths to point out the source and load characteristics. I'd say it's important for the termination to be done properly.
I don't understand RF design very well compared to audio design, but I get the feeling that terminations are important and reflections are bad. The shape of pulses seems to be the real issue.
-Chris
anatech said: Any data book I've read goes to great lengths to point out the source and load characteristics. I'd say it's important for the termination to be done properly.
Yes they do, but every application has its own reasons, and all occur at "high" frequencies. In data communication you want to avoid corruption of data due to smearing (ISI - intersymbol interference), and in RF you want to make full use of the signal power (maximum power transfer theorem). The first one is easily demonstrated by pulling the 50 Ohm terminator from an old Ethernet segment running on coax.

Of course you should design good cables if you have the chance, but I still need someone to show me either the theory of why impedance matching sounds better in SPDIF, or measurements of the same, before I'm convinced that it does (there is surely something I haven't thought of).
Still, I respect the opinions of others if they have a subjective opinion that certain hard-to-prove techniques make a big difference in SPDIF.
Hi Asgard,
I had one of those networks with the coax and terminators. Lose a terminator and that run is down. I got a hub that disconnected those runs so as not to take the rest down.
I'm still thinking pulse shape has a lot to do with it.
-Chris
Pulse shape does have a lot to do with it.
SPDIF is data communications, just like LAN. The difference is most people don't have 100m of cable between their CD player and their external DAC.
Well, the question still stands: give me something other than jitter as a result of poor pulse shaping. Otherwise we're not getting anywhere.
Worst case you will get errors.
How many external DACs show the status of the SPDIF decoder error flag?
First off, it's not difficult to check whether you genuinely have a 75 Ohm source. All you need to do is to measure the amplitude of the signal on an oscilloscope with and without a 75 Ohm termination. If adding the 75 Ohm termination drops the level to exactly half that of the unterminated signal, you have a 75 Ohm source. Similarly, if you want to test to see if you have a 75 Ohm destination, loop the signal from the transport through your oscilloscope and see if the destination drops the level by exactly the same amount as a 75 Ohm termination. If you really want to, it's perfectly possible to calculate source and destination impedances from the relative amplitudes of terminated and unterminated signals. I used this method recently to fine tune an AES3 transmitter that I'd knocked up out of some 74-series inverters to make sure it was a true 110 Ohm source.
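The divider arithmetic behind that measurement can be sketched as follows. This is a hypothetical helper, not from the thread; it assumes the unterminated reading approximates the open-circuit source voltage:

```python
# With an unterminated (effectively open-circuit) reading V_u and a
# reading V_t across a known termination R_t, the source impedance forms
# a potential divider: V_t = V_u * R_t / (R_s + R_t), hence
# R_s = R_t * (V_u / V_t - 1).
def source_impedance(v_unterminated: float, v_terminated: float,
                     r_term: float = 75.0) -> float:
    return r_term * (v_unterminated / v_terminated - 1.0)

# A true 75 Ohm source drops to exactly half when terminated in 75 Ohm:
print(source_impedance(1.0, 0.5))   # 75.0
# Dropping to 0.6 instead of 0.5 would imply a ~50 Ohm source:
print(source_impedance(1.0, 0.6))   # ~50.0
```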
Asgard: I'm not sure that applying the RF criterion is helpful here. Bear in mind that at RF the signal normally has a very small bandwidth compared to the carrier frequency. As an example, even at MW, and using simple AM (as used for broadcast), a radio station at 999kHz would only have a bandwidth of 9kHz. For RF, it's perfectly reasonable to treat the signal as a sine wave because there really isn't much of a difference between 994.5kHz and 1003.5kHz - it's near enough a 999kHz sine wave.
But the Manchester coding used for S/PDIF means that you have a 2.82MHz square wave, so you need the fifth harmonic at the very least. That makes it a wide bandwidth signal. If you then consider that the signal is modulated by either doubling the frequency or not to signify a 0 or a 1 (I can never remember which way round it is), you now have a signal with a very wide bandwidth.
I think it's more useful to consider the signal in time rather than frequency, and to think about pulses. Remember that the leading edge of each pulse is not vertical (that would require infinite bandwidth). Since it's not vertical, any distortion of the shape of that edge means that for a given comparator voltage, the timing changes. And that means you've generated jitter.
I'm not suggesting that the jitter is sufficient to cause data errors - far from it. What I am suggesting is that it makes it harder to recover a clean clock from it.
Oh, and finally, remember that by converting from analogue to digital, what you have done is to convert information in the voltage domain into the time domain. Timing becomes all-important and jitter is crucial.
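The edge-slope argument above can be put in numbers. This is an illustrative sketch with assumed values (0.5 V swing, 25 ns rise time, 20 mV of reflection ripple), not measurements from the thread:

```python
# On a finite-rise-time edge, an amplitude disturbance dV at the
# comparator threshold becomes a timing error dt = dV / slew_rate.
def edge_jitter_s(dv_volts: float, v_swing: float, t_rise_s: float) -> float:
    slew = v_swing / t_rise_s        # V/s, assuming a linear edge
    return dv_volts / slew

# 20 mV of ripple on a 0.5 V swing with a 25 ns edge:
print(edge_jitter_s(0.02, 0.5, 25e-9))   # ~1e-9 s, i.e. about 1 ns of jitter
```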
S/PDIF Jitter: Myth or Reality?