XMOS-based Asynchronous USB to I2S interface

Excellent summary.

It is important to keep in mind the design constraints of any existing solution. I2S was never designed to connect two separate systems that do not share an identical ground reference, so I would say that not only does I2S work best over short distances, it practically requires both ends to run from the same power supply. As distances get longer, ground loops appear and ground potential differences grow, and once the run spans two enclosures you almost certainly have separate power supplies, making the problem even worse.

If you use a standard outside of its design constraints, then you're very likely to lose the benefits it was designed to deliver. Whenever you connect two separate enclosures with different power supplies, you need some kind of interface that is designed to deal with the differing references. I2S was not designed for that. But it makes sense that "LVDS balanced I2S" could enhance plain I2S, with the balanced signalling handling the potential differences in reference levels (ground).

There's no need to get into the drawbacks of SPDIF and AES3, but the one thing those ancient standards have going for them is that they were absolutely designed to connect separate pieces of gear. While it's obvious that we need to look for modern replacements so that we can finally say goodbye to the failures of SPDIF and AES3, that doesn't mean we can shoehorn something that was not designed to overcome the same obstacles into the role without considering the additional requirements.

I think it's great that PS Audio, Wyred 4 Sound, and Twisted Pear Audio have pioneered a better solution. Are there any convenient links to the technical details of these interconnects? Sorry if the question has been answered in this thread already, but I recall suggesting LVDS interconnects on diyAudio and someone suggested that it would be a bad idea due to increased jitter. If the above three companies are having success with LVDS I2S, then I'd like to learn more.

The I2S/LVDS solution was originally developed in 2008 by Rockna Audio and PS Audio. An interface schematic is freely available here: I2S lvds interface | AD LABS
Feel free to ask for any other info regarding the interface.
 
Could you please send a link to that? In addition, I would like this thread to stay on topic since, for now, the WaveIO card does not support any kind of I2S transmission using differential PHYs.
Kind regards,
L

Some insight on how jitter is measured.

I have the Legato, so it's async USB - DIR9001 - PMD100 - PCM1704. There is no reclocking in the DAC, so if the cable/impedance match is good then the best I am doing is 3 ps RMS (Legato) + 50 ps (DIR9001 intrinsic) + PMD100 (?) + PCM1704 (?).
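Assuming those stages are uncorrelated, their RMS jitter contributions combine as a root-sum-of-squares rather than a straight sum. Here is a minimal sketch of that budget; the PMD100 and PCM1704 numbers are placeholders I've assumed, not published figures.

```python
import math

# Rough jitter budget: uncorrelated RMS jitter sources add as root-sum-of-squares.
# The PMD100 and PCM1704 entries are placeholder guesses, not published figures.
sources_ps = {
    "legato_usb": 3.0,          # 3 ps RMS, quoted above
    "dir9001_intrinsic": 50.0,  # 50 ps RMS, quoted above
    "pmd100_filter": 10.0,      # unknown -- assumed for illustration
    "pcm1704_dac": 10.0,        # unknown -- assumed for illustration
}

total_rms_ps = math.sqrt(sum(v ** 2 for v in sources_ps.values()))
print(f"combined jitter ~ {total_rms_ps:.1f} ps RMS")  # dominated by the DIR9001 term
```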

The only galvanic isolation is the SPDIF pulse transformer, but the USB +5 V supply isn't used, and the power supply is quite unusual, as is the reclocking before the SPDIF output transformer. The sound of hi-res (24/96) material downsampled and dithered with Ozone 4 to 16/44.1 is better than a LiFePO4-powered Hiface playing the music's native 24/96.

I guess what I am saying is that SPDIF done right doesn't really add more jitter than I2S over GMR isolators.


I will remove the following paragraph if you feel it's off topic, but I think it shows the direction this technology will be heading, and LVDS isn't part of it.

The best possible solution is to place the async USB clock(s) at the DAC, providing synchronous reclocking (alignment on the critical I2S lines for the DAC). Then send this clock signal back to the XMOS via GMR, and the remaining I2S over GMR as well.
This way the GMRs add no jitter, neither does the digital filter, and of course there is no receiver with its intrinsic additive jitter. I think the WaveIO can be set up this way by a competent modder. This is the future for computer as transport, but obviously we don't want just I2S output: we want it split left and right, right-justified binary two's complement, up to 768 kHz, and we want to get rid of the digital filter and let the computer do the over/upsampling, filtering and data conversion with more precision than a chip. Proper conversion down from the 32-bit output and compatibility with everything from the classic TDA1541 to the PCM1704 is important. This is how things will end up in five years or so, I hope.
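To put the "proper conversion down from the 32-bit output" in concrete terms, here is a minimal sketch of my own (nothing WaveIO-specific): 32-bit samples are TPDF-dithered and truncated to the 16-bit, right-justified two's-complement words a classic DAC chip such as the TDA1541 expects.

```python
import numpy as np

def to_16bit_right_justified(samples_32bit: np.ndarray) -> np.ndarray:
    """Reduce signed 32-bit PCM to 16-bit two's complement with TPDF dither.

    In a right-justified frame these 16 bits are simply LSB-aligned in the slot.
    """
    rng = np.random.default_rng()
    # TPDF dither spanning +/-1 LSB of the 16-bit target (1 LSB = 2**16 at 32-bit scale)
    dither = (rng.random(samples_32bit.shape) - rng.random(samples_32bit.shape)) * (1 << 16)
    dithered = samples_32bit.astype(np.int64) + dither.astype(np.int64)
    # Floor-divide by 2**16 (an arithmetic shift) and clamp to the 16-bit range
    out = np.clip(dithered // (1 << 16), -(1 << 15), (1 << 15) - 1)
    return out.astype(np.int16)
```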
 
This is the future for computer as transport, but obviously we don't want just I2S output: we want it split left and right, right-justified binary two's complement, up to 768 kHz, and we want to get rid of the digital filter and let the computer do the over/upsampling, filtering and data conversion with more precision than a chip. Proper conversion down from the 32-bit output and compatibility with everything from the classic TDA1541 to the PCM1704 is important. This is how things will end up in five years or so, I hope.

If this isn't too off topic I'd just like to mention that all the reports from audiophiles I've noticed say that having the computer do less computation sounds better than with more. So they say things like '.wav sounds better than .flac' and 'I stripped down the OS to only the bare minimum number of processes and that improved the sound'. So why would we want to add more things for the CPU to do? Using a dedicated digital filter will generate less noise than asking a sweaty 65W+ Intel CPU (with attendant local buck regulator) to do it, that's for sure.

Why would we want to tie in our audio systems that much more closely in with our PCs when the PC is already on the way out?
 
This is...

"The best possible solution is to place the asynch USB clock(s) at the DAC providing synch reclocking (alignment on the critial i2s line for the dac). Then send this clock signal back to the Xmos via GMR, and the remaining I2S GMR."

how competent USB DACs work now (Ayre, Wavelength Audio), OK, except they use optocouplers instead of GMRs.

It would be really cool if Lorien could add this functionality to the WaveIO: it seems a small daughter board for placement right at the DAC, with the two oscillators, the necessary regulation, and I2S reclocking, could be made. Then a new WaveIO which accepts MCLK (master clock) input and can communicate the clock switching necessary to the daughter board for the two sample-rate families. This approach would also allow the I2S lines to be a little longer without problems, giving more flexibility for placement of the WaveIO in the chassis, where its associated RF field can be isolated from the DAC and analog circuitry.
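For the two sample-rate families that clock switching boils down to selecting between two fixed local oscillators; here is a trivial sketch of the selection logic (the 512 x fs clock values are common choices I'm assuming, not WaveIO specifics).

```python
# Hypothetical clock-family switch for such a daughter board. The oscillator
# frequencies are common 512 x fs choices, assumed here for illustration only.
MCLK_44K1_FAMILY = 22_579_200  # 512 x 44.1 kHz: 44.1 / 88.2 / 176.4 / 352.8 kHz
MCLK_48K_FAMILY = 24_576_000   # 512 x 48 kHz:   48 / 96 / 192 / 384 kHz

def select_master_clock(sample_rate_hz: int) -> int:
    """Return which local oscillator to enable for a given sample rate."""
    if sample_rate_hz % 44_100 == 0:
        return MCLK_44K1_FAMILY
    if sample_rate_hz % 48_000 == 0:
        return MCLK_48K_FAMILY
    raise ValueError(f"unsupported sample rate: {sample_rate_hz}")
```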
 
Wolfsin can just imagine the Audio Alchemy designers reading that forecast and getting a good belly laugh. That something as simple as reconstituting digital audio could have taken so many years is a testament to marketing over engineering. Red Book dates to the early '80s.

Funny thing, wolfsin, is I still own an I2S-input Audio Alchemy DAC. Believe me when I tell you they didn't understand the concept; there is so much jitter added in their handling of the I2S input to the DAC it is scary. I've read that theirs were some of the highest-jitter DACs ever measured. Still a good-sounding little unit (PMD100 + AD1862), going to get an upgrade soon.
 
If this isn't too off topic I'd just like to mention that all the reports from audiophiles I've noticed say that having the computer do less computation sounds better than with more. So they say things like '.wav sounds better than .flac' and 'I stripped down the OS to only the bare minimum number of processes and that improved the sound'. So why would we want to add more things for the CPU to do? Using a dedicated digital filter will generate less noise than asking a sweaty 65W+ Intel CPU (with attendant local buck regulator) to do it, that's for sure.

Why would we want to tie in our audio systems that much more closely in with our PCs when the PC is already on the way out?

That was before async USB, when the computer was supplying the clock and there was some validity to memory access and CPU load affecting jitter. If you go to that forum now, one of the most popular DACs for them does the upsampling in the computer to 16x, and what's left of the DAC is an "NOS" DAC. It only makes sense to do the digital manipulations in the computer: the horsepower to run the algorithms is there, and a PC is always going to have more capability and, more importantly, more flexibility. Digital filter programming is the one area in high-end audio that is still fertile for development. As it moves to the computer there will be more programmers and more open-source projects; imagine a foobar plug-in with code of the caliber of Berkeley's Alpha? Plus you get more RF away from your DAC, and you have the flexibility to use good but forgotten chips like the PCM56k or the AD1865 with the proper data shifts. Just huge flexibility in replacing the digital filter with the computer.
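As one concrete example of what "doing the filter in the computer" looks like, here is a minimal 16x oversampling sketch using SciPy's polyphase resampler; the Kaiser window parameter is just an assumption on my part, nothing like a commercial filter design.

```python
import numpy as np
from scipy.signal import resample_poly

# Minimal PC-side 16x oversampling: a polyphase FIR interpolator takes 44.1 kHz PCM
# to 705.6 kHz before it ever reaches the DAC board. The Kaiser window beta is an
# illustrative assumption; a serious filter design is where the real work lives.
fs_in = 44_100
x = np.random.default_rng(0).standard_normal(fs_in)         # one second of test signal
y = resample_poly(x, up=16, down=1, window=("kaiser", 12.0))

print(len(x), "samples in ->", len(y), "samples out at", fs_in * 16, "Hz")
```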
 
That was before async USB, when the computer was supplying the clock and there was some validity to memory access and CPU load affecting jitter.

But the problem isn't jitter, it's common-mode noise. Async USB is still affected by various software issues (people do report sound-quality differences with async), so that's evidence which falsifies the 'jitter' hypothesis.

If you go to that forum now, one of the most popular DACs for them does the upsampling in the computer to 16x, and what's left of the DAC is an "NOS" DAC.

I'd not touch it with an infinitely long barge pole :)

It only makes sense to do the digital manipulations in the computer: the horsepower to run the algorithms is there, and a PC is always going to have more capability and, more importantly, more flexibility.

I don't buy that for one moment. Yes the horsepower is there but the future is lower horsepower, with much much lower energy footprint. Why would you wish to exclude the tablet users of today (and tomorrow) from decent sound quality? Horsepower has been too cheap (a horsepower bubble if that's not too mixed a metaphor :p) with the result that software has gotten hugely bloated. This bubble will pop.

Digital filter programming is the one area in high-end audio that is still fertile for development. As it moves to the computer there will be more programmers and more open-source projects; imagine a foobar plug-in with code of the caliber of Berkeley's Alpha?

Agreed that filter development will blossom in the future; that's great for DIYers. It's not necessary for it to be on the PC for projects to start up using open source.

Plus you get more RF away from your DAC,

Indeed you do get more RF, and it's conducted down to the DAC via the cable. So distance isn't really much of an issue.

you have the flexibility to use good but forgotten chips like the PCM56k or the AD1865 with the proper data shifts. Just huge flexibility in replacing the digital filter with the computer.

None of the flexibility is lost by having the digital filter done locally.
 
Funny thing, wolfsin, is I still own an I2S-input Audio Alchemy DAC

wannaBuy two more? :) My parcel from Lucian is stashed in the P.O. until Monday, but part of my plan was to lash up an AA DAC first to make sure the bits are flowing, and then go with Opus. Is their jitter caused by their 'dejitter' boxes?

I am not quite ready to return to the digital world right now but am quite anxious to see how LucianSolution + Dual WM8740 compares against four BB1704s and the Apogee clocking.
 
But does it work?

Lucian wraps so that tanks and closely detonated explosives could not damage these little jewels. A friend happened to be at the post office and we had a nice long talk as we struggled to open the "adult proof" packaging :D

It really is beautiful (my friend, formerly with IBM, agrees) so now I need to get serious and hook it up. 'twas worth the wait!
 
I have heard that DSD was designed for lossless translation into PCM at half the sample rate. Is this true?
The frequency response at half the sample rate would be rather constricted, due to the limited slew rate, but that's really just the nature of DSD, not a side-effect of any translation. If you run at one sixty-fourth of the sample rate then there is more amplitude available at the higher frequencies near Nyquist. The reality is that most material encoded as DSD is severely band-limited before being sampled, so there isn't really anything there at half the sample rate.

However, DSD can be translated into PCM without loss at any multiple of the sample rate, provided that you have sufficient dynamic range and, ideally, some sort of adaptation to find the "zero" in a signal represented by a +1/-1 data stream; otherwise you'll get clipping. DSD is relative and PCM is absolute, so the only tricky part of the translation is going from a relative system to an absolute one.
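To make that translation step concrete, here is a toy sketch (illustration only, not a mastering-grade decimator): a +/-1 bitstream is low-pass filtered and decimated by 64 down to 44.1 kHz PCM. The random "bitstream" and the filter parameters are my own assumptions.

```python
import numpy as np
from scipy.signal import firwin, resample_poly

# Toy DSD -> PCM sketch: a +/-1 bitstream at 64 x 44.1 kHz is low-pass filtered and
# decimated by 64 to 44.1 kHz PCM. The random "bitstream" and the filter parameters
# are purely illustrative; a real DSD stream is noise-shaped, not random.
fs_dsd = 64 * 44_100
bits = np.sign(np.random.default_rng(1).standard_normal(fs_dsd))

lpf = firwin(numtaps=4095, cutoff=20_000, fs=fs_dsd)   # ~20 kHz low-pass at the DSD rate
pcm = resample_poly(bits, up=1, down=64, window=lpf)   # filter, then keep every 64th sample

print(len(bits), "DSD bits ->", len(pcm), "PCM samples at 44.1 kHz")
```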

Looking at Wikipedia, there is an implication that DSD is equivalent to 20-bit PCM at 44.1 kHz (where they say 120 dB dynamic range, that's about 20 bits; and where they quote 20 kHz response, that implies better than a 40 kHz sampling rate). Later in the article, DSD is compared to 20-bit at 96 kHz, but I'd say the high-frequency response of DSD is severely limited in dynamic range compared to PCM. The thing to remember about DSD is that it acts as a significant low-pass filter, because the higher the frequency, the lower the maximum amplitude possible. PCM has no frequency-dependent amplitude limit, although frequencies near Nyquist should ideally be attenuated on input to avoid aliasing.
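The "120 dB is about 20 bits" reading follows from the usual rule of roughly 6 dB per bit (about 6.02·N + 1.76 dB for an ideal N-bit PCM quantizer); a quick check:

```python
# Rule-of-thumb check for the "120 dB ~ 20-bit" equivalence: ideal PCM dynamic
# range is about 6.02 * N + 1.76 dB for an N-bit quantizer.
for n_bits in (16, 20, 24):
    print(f"{n_bits} bits -> {6.02 * n_bits + 1.76:.1f} dB")
# 16 bits -> 98.1 dB, 20 bits -> 122.2 dB, 24 bits -> 146.2 dB
```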

If you want to run a PCM DAC at 352.8 kHz from translated DSD input, then you'd get very little amplitude at higher frequencies, and practically nothing above 100 kHz.