Hello,
In the last few months I've been reading a lot about digital sources and related issues, and I've also been arguing with some enthusiasts of the CMP2 system.
I'm not an expert, so I may have misunderstood something, but here is how I think a D/A converter should work:
(1) store the input data in a buffer
(2) generate a clock signal
(3) make a conversion of the stored data using the internally generated clock
If things were that simple, then as long as the digital source provides a bit-perfect digital stream, every other variable (e.g. the jitter of the stream) should be irrelevant.
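The three steps above can be sketched in code. This is a hypothetical illustration of the buffered, locally-clocked model (class and method names are my own), not how any real DAC chip is programmed:

```python
# Sketch of the idealized "buffer + local clock" DAC described above.
from collections import deque

class IdealizedDAC:
    def __init__(self):
        self.buffer = deque()   # (1) store the input data in a buffer

    def receive(self, samples):
        """Accept incoming samples, however jittery their arrival timing was."""
        self.buffer.extend(samples)

    def clock_tick(self):
        """(2)+(3) On each edge of the local, low-jitter clock,
        convert exactly one buffered sample."""
        if self.buffer:
            return self.buffer.popleft()  # output timing set by local clock only
        return None                       # buffer underrun: source fell behind

dac = IdealizedDAC()
dac.receive([0.1, 0.2, 0.3])
print(dac.clock_tick())  # 0.1 - output timing decoupled from input timing
```

In this idealized model the only thing the source contributes is the sample values, which is exactly why bit-perfect delivery would seem to be all that matters.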
Instead, we're reading a lot of threads about:
- differences between digital buses and interfaces (USB vs. Firewire vs. S/PDIF coax vs. S/PDIF optical)
- differences in the source hardware (with a lot of computer-optimization effort)
- differences in the source software (when the source is a computer)
So, how can all this make sense?
If the technology exists to obtain a result that does not depend on the digital source, why aren't enthusiasts going after it, instead of spending their time on endless computer optimizations?
What's the problem with the currently available technologies?
Thank you to anyone who will help me answer this question.
(1) store the input data in a buffer
(2) generate a clock signal
(3) make a conversion of the stored data using the internally generated clock
One small missing piece is the synchronisation between your input data buffer and wherever your data is coming from. If you have asynchronous USB, or you feed the source with your DAC's clock, you are OK; otherwise you have to do something about the end-to-end synchronisation issue.
If your buffer is small, you will eventually either overrun or underrun it, as the clock of your source will run slightly slow or fast compared to your DAC. If you use a large buffer instead, you get significant delay/latency - a nuisance with pure recorded audio, but a real problem with simultaneous video or in a live situation.
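To put an illustrative number on that drift (the 44.1 kHz rate, 100 ppm clock offset, and buffer size below are assumptions for the sake of example, not measurements of any real gear):

```python
# Sketch: how fast does a FIFO drift when source and DAC clocks differ?
def seconds_until_overrun(fs_hz, ppm_offset, buffer_slack_samples):
    """Time until a half-full FIFO overruns (or underruns, for a slow
    source), given a source clock ppm_offset away from the DAC clock."""
    drift_samples_per_s = fs_hz * abs(ppm_offset) * 1e-6
    return buffer_slack_samples / drift_samples_per_s

# An 8192-sample FIFO kept half full has 4096 samples of slack.
# 44.1 kHz with a 100 ppm clock mismatch drifts 4.41 samples/s:
t = seconds_until_overrun(44100, 100, 4096)
print(f"{t:.1f} s")  # 928.8 s - even a large buffer gets in trouble in minutes
```

This is why a buffer alone doesn't solve the problem: without rate feedback to the source, the only remedies are a huge (high-latency) buffer or resynchronisation.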
If the technology exists to obtain a result that does not depend on the digital source, why aren't enthusiasts going after it, instead of spending their time on endless computer optimizations?
Lack of knowledge/information/understanding combined with audiophile folklore and superstition?
If things were that simple, then as long as the digital source provides a bit-perfect digital stream, every other variable (e.g. the jitter of the stream) should be irrelevant.
Bear in mind that a digital source doesn't exist in total isolation. Well actually maybe a few do - a battery powered SD-card player springs to mind. This is the closest you're likely to get to a 'perfect' digital source.
Instead, we're reading a lot of threads about:
- differences between digital buses and interfaces (USB vs. Firewire vs. S/PDIF coax vs. S/PDIF optical)
- differences on the source hardware (with a lot of computer optimization efforts)
- differences on the source software (when it is a computer)
So, how can all this make sense?
It's not just bits that come across the digital interface; it's also noise. Called 'common-mode noise', this is an elephant in the room for digital systems. A large proportion of what are popularly called jitter issues are related to it.
If the technology exists to obtain a result that does not depend on the digital source, why aren't enthusiasts going after it, instead of spending their time on endless computer optimizations?
I guess the answer to that one is 'convenience' - battery-powered SD players aren't yet up to the storage capacity of computers, nor do they have the fancy features.
What's the problem with the currently available technologies?
Not much wrong IME if you choose the right technologies and implement them with plenty of attention to detail.
You may find this post interesting: http://www.diyaudio.com/forums/digital-source/70456-john-westlake-products-info-2.html#post803344
It's not just bits that come across the digital interface; it's also noise. Called 'common-mode noise', this is an elephant in the room for digital systems. A large proportion of what are popularly called jitter issues are related to it.
This shouldn't be an issue with optical S/PDIF (TOSLINK), and USB interfaces can be isolated either with magnetics (transformers) or optics.
And if a device does interfere with TV reception, it is either badly designed or faulty; that's an EMC problem, and anyone involved with real-world design will know that such gear would fail the EMC tests and should not be sold. But it's a nice comment to feed the fear of digital systems.
I second Julf's comments earlier.
The other problem is 'Dark Bits', as discovered by Qusp; luckily these only affect digital systems that have to handle audio. Fortunately the rest of the world's systems that rely on digital - the internet, transport systems, CERN, the banking system, BSkyB, etc. - don't suffer from them and seem to work quite well.
Put your CDs on a hard drive and you get rid of the real weak link; just don't move your data from disk to disk (like I do) or you may induce low-level digital distortion (and yes, it's the second time I've brought this up, but as no believer will discuss this phenomenon with me, I shall keep digging till they do).
Keep on fretting.
Bear in mind that a digital source doesn't exist in total isolation. Well actually maybe a few do - a battery powered SD-card player springs to mind. This is the closest you're likely to get to a 'perfect' digital source.
Could you explain this bit of fallacy, please? Though again it shows the audiophile view towards digital: non-linear PSUs, battery power (not the perfect solution people think it is, without some regulation and filtering).
This shouldn't be an issue with optical S/PDIF (TOSLINK)
Agreed - it's not a matter of 'should'; it's simply not an issue. But those links have other issues - limited bandwidth, and non-linear bandwidth to boot - so jitter does tend to become an issue IME.
and USB interfaces can be isolated either with magnetics (transformers) or optics.
They can indeed, though not with infinitesimally low capacitance - only at DC.
could you explain this bit of fallacy please
If you'd be so kind as to point out the nature of the alleged fallacy, I'll endeavour to help.
They can indeed, though not with infinitesimally low capacitance - only at DC.
Well, yes, but any two boxes in the same room will be coupled to each other by "not infinitesimally low capacitance". The question is: why would a circuit be so badly designed that it would matter?
I dunno - why do circuits designed by (presumably) self-proclaimed competent designers suck so badly sound-wise in the presence of HF noise?
No idea. Do they? Analog or digital circuits?
Analog.
Ah, OK, then my guess would be that a lot of those designers still live mentally back in the '60s, '70s and '80s, when pervasive EMF/HF noise was not as much of an issue as in the modern world of high-speed digital circuits, mobile phones, wifi, bluetooth and other sources of HF crap.
Yep, undoubtedly that's a big part of the issue. But only in the past two or three years have I noticed IC designers (National it was, now subsumed into TI) marketing their opamps as having superior RF rejection. I haven't seen any others do this yet. The opamp designs remain pretty much the same architecturally as back in the 1970s.
You may find this post interesting: http://www.diyaudio.com/forums/digital-source/70456-john-westlake-products-info-2.html#post803344
"....To make matters worse, no attempt has been made to de-correlate or “randomized” the SPDIF Data during transmission, so that the clock recovered by SPDIF receiver is guaranteed to be heavily contaminated by Data correlated Phase Noise – Jitter of the very worst kind."
This may not be entirely true, as a DC-free line coding (biphase-mark) is applied to the S/PDIF stream precisely to avoid a DC component on the line.
Any data correlated jitter is due to poor receiver design and not inherently a property of the interface.
Still, I agree that the interface is unfortunate in many respects. "We" didn't know better at the time - perfect sound forever - 1s and 0s, how can it go wrong?
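S/PDIF's line code is biphase-mark, which keeps the stream DC-free and guarantees a transition in every bit cell. A toy encoder (a sketch of the line code only, not of a full S/PDIF framer with subframes and preambles) shows both properties:

```python
# Toy biphase-mark (BMC) encoder: each bit occupies two half-cells,
# the level flips at every cell boundary (guaranteeing clock content),
# and flips again mid-cell when the bit is a 1.
from itertools import accumulate

def bmc_encode(bits, level=1):
    out = []
    for b in bits:
        level = -level            # transition at every cell boundary
        out.append(level)         # first half-cell
        if b:
            level = -level        # extra mid-cell transition encodes a 1
        out.append(level)         # second half-cell
    return out

wave = bmc_encode([1, 0, 0, 1, 1, 0, 1, 0])
# The running sum (DC offset) stays bounded near zero, whatever the data:
worst = max(abs(s) for s in accumulate(wave))
print(worst)  # 2
```

The guaranteed cell-boundary transitions are what make clock recovery from the line possible at all; the complaint quoted above is about data-correlated timing of those transitions, not about DC on the line.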
Any data correlated jitter is due to poor receiver design and not inherently a property of the interface.
Dunn and Hawksford's findings do not support this conclusion - http://www.scalatech.co.uk/papers/aes93.pdf
Still, I agree that the interface is unfortunate in many respects. "We" didn't know better at the time - perfect sound forever - 1s and 0s, how can it go wrong?
Part of the problem is also that the circuit complexity needed to implement proper flow control (the major shortcoming of S/PDIF) was way beyond what could reasonably be included in audio gear 20 years ago. A modern USB controller probably has more processing power than many mainframe computers had back then.
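That flow-control idea can be sketched as a feedback loop in the style of asynchronous USB audio: the DAC drains its buffer at its own fixed clock rate and reports the fill level, and the host adjusts how much it sends per interval. The rates, buffer target, and 0.5 gain below are arbitrary illustrative values:

```python
# Sketch of asynchronous-style flow control: the DAC consumes samples at
# its own clock rate, and the host corrects its delivery rate using
# feedback about buffer fill - so the host's clock accuracy stops mattering.
def simulate(host_nominal, dac_rate, target_fill, intervals, gain=0.5):
    fill = target_fill
    for _ in range(intervals):
        send = host_nominal + gain * (target_fill - fill)  # feedback term
        fill += send - dac_rate      # DAC drains at its own fixed rate
    return fill

# Host nominally sends 44 samples/interval; DAC really consumes 44.1:
print(round(simulate(44.0, 44.1, 1000.0, 200), 1))  # 999.8 - fill stays stable
```

With S/PDIF there is no such return channel, so the receiver must instead slave its conversion clock to the incoming stream, which is where recovered-clock jitter enters the picture.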