XMOS-based Asynchronous USB to I2S interface

So I don't understand your complaining. :confused:

I wasn't complaining about anything, merely trying to suggest that the notion of "custom" drivers seems a bit arbitrary to me and thus unhelpful even if soundchekk disagrees. (He of course has the benefit of Linux expertise.)

To revert to topic, I have no issue with buying a device that uses Thesycon drivers - they may not be "stock" but they are certainly mainstream and much more likely to be refined over time and kept current than any drivers supplied with XP (or Vista or Win 7 or Win 8 or . . . ).
 
Windows/OSX

"There's now a thriving market in after-market (custom?) players for the Mac with users anguishing over soundstage (depth, width and height all according), "inner detail", ambience and all the rest with a zeal that makes some of us PC wallahs queasy."

With ASIO, and W7, many of the previous "problems" of Windows with audio are solved, and this brings W7 to the level of OSX regarding software, but hardware is another story...

I would not get queasy... It appears that a good W7 setup is now on a close to level playing field with OSX for sound quality: that is, better to try jPlay if you want the best performance (or HQPlayer, etc., for tweaky options).

My experience is that better audio quality is going to result from a low-power computer server running Linux (vs W7 or OSX), Voyage/MPD in my case. But my experience may be hardware related, as my custom server has a low-noise dedicated USB output card running on an isolated linear power supply.
 
With ASIO, and W7, many of the previous "problems" of Windows with audio are solved, and this brings W7 to the level of OSX regarding software, but hardware is another story...

I would be interested to hear what improvements MS made in W7 to their audio software that you are basing that statement on. Genuinely interested, as I run W7 and am very happy with it. But they still do not support USB Audio Class 2 in W7 or, I believe, Windows 8.

Thank you
 
Great questions. I'll try to answer where I can.

Sorry if I'm being dumb but I can't follow your argument here. What's the essential difference between a USB designer who develops his/her own firmware and drivers on the one hand and one who licences from the likes of Thesycon or Ploytek on the other? (Except of course that the latter are likely to be much cheaper and give the designer access to significant expertise.)
I would say that licensing code is orthogonal to USB Audio Class compliance. The choice to license should not force the choice to comply with standards. I probably should have been more precise in my terminology. Thus, if you don't understand my argument then it's probably my fault!

I am not familiar with the offerings of Thesycon or Ploytek, but if either of them offers UAC firmware and/or drivers then there is nothing wrong with paying for a license versus paying for firmware or driver development in-house. Firmware development is often handled by a different engineer than the hardware designer, so there's certainly nothing wrong with licensing firmware from a third party. The same goes for the driver that lives on the computer.

The biggest issue is that Mac OS X supports USB Audio Class with a system driver, whereas Windows does not (to my knowledge). This means that hardware developers targeting Windows are forced to develop a Windows driver as well as their device firmware. My comments were that when a hardware developer fails to create a UAC compliant firmware in order to make their custom driver easier to develop, they've done a disservice to their customers and have made any OSX customers deal with unnecessary third-party system drivers.

Also, taking your word on the fact that the USB Audio specs are all-inclusive (I've read some of them but not carefully), I don't see how it follows that Apple's implementation of those specs is, as you seem to imply, perfect almost by definition.
It's not rocket science. It either works or it doesn't. I'm not saying that Apple is "perfect" so much as I'm saying that it works. I mean, nobody questions whether USB hard drives work "better" on Apple or Microsoft systems. The fact is that when you save a file to a USB drive, your files are bit-perfect whether you're working on Mac or PC. The same should be true for audio, except that Windows does not provide USB Audio Class drivers and OSX does.

So, I turn this question around to you: Are you implying that we should be concerned about whether Apple's USB Storage Class (disk drive) code is "perfect?" Should we be pitting USB hard drive reliability under Windows against the same drive under OSX? If not, then why would there be a vendor battle over USB Audio, except for the fact that Microsoft isn't even trying.

I simply expect it to work as reliably for audio as it does for a file storage device. I've gone so far as to confirm that the audio data is bit-perfect on a Mac, but to be honest those exhaustive tests have been for FireWire Audio more than USB Audio. I just don't see any reason to doubt that it's perfect. On Windows, there is no USB Audio Class driver, so I'm not saying that Microsoft couldn't do an equally perfect implementation, I'm merely acknowledging that they haven't tried (or haven't released anything).

The notion that Apple's audio software was, unlike those clunky PCs, as good as it could ever be was aired almost daily by a small group of Mac users on AA's computer audio forum a year or two back. It took a knock when a music-player program called Amarra - which cost more than my entire PC audio system (inc custom driver . . .) - was released.

There's now a thriving market in after-market (custom?) players for the Mac with users anguishing over soundstage (depth, width and height all according), "inner detail", ambience and all the rest with a zeal that makes some of us PC wallahs queasy.
We're not going to get anywhere with this line of discussion, because those same users probably think that changing the power cord on their computer changes the sound stage. In my estimation, it is impossible for "those" people to ever be satisfied with anything. There will always be snake oil that makes the best technology "better."

Granted, a few iTunes devotees (curiously, mostly programmers) can still be heard muttering under their breath from time to time but the emerging consensus is that Macs are prone to much the same issues as PCs once the user starts to push the boundaries.
I use iTunes and get bit-perfect performance out of it. I have designed hardware tests to confirm that the bits reaching my DAC are exactly the same bits in my audio files, and I know that the clocking is controlled by my DAC, not by iTunes, CoreAudio, OSX, or anything on the computer (which is slaved to the DAC).

Granted, I'm always learning new things, and I would love for someone to prove the merits of Amarra to me. At this time, I do not see the point. Then again, I have Logic and my own audio software for those times when I doubt iTunes' quality.

In short, I'm not saying you're wrong but I'm not convinced you're right either.
I'm struggling with staying on topic versus making a convincing argument that encompasses all possible questions. The challenge is to be concise without leaving holes that raise questions. Hopefully, a conversation that isn't one-sided will be most informative.
 
Well...

As most of the aftermarket playback software available offers free trials (Amarra, Pure Music, etc.), if you are interested in improving the playback performance of your system it is quite simple for YOU to test for yourself.
Yes, you can get bit-perfect playback from iTunes.
But, it is not up to anyone to "prove" anything about the sonic benefits of different playback software, it is simple enough for people to try it for themselves, for free, and come to their own conclusions about the sonic benefits.
I prefer to make decisions for my system based on what I hear, not what someone on the internet might tell me I "should" hear, and I suggest that you might want to do the same. Why not, considering it is free to find out...
 
Everyone agrees the I2S bits from WaveIO are bit-perfect. At issue, it appears, is jitter. Given the stock WaveIO being shipped by Lucian today, using external power but not modified to accept an external clock, is the jitter of the PCM stream affected by which OS is supplying the bits?

This thread is about WaveIO. If the answer to the above is that macs, linux, and windows machines all suffer the same jitter then everything about any application running on any OS is OT because the few picoseconds of measured jitter is inherent in the WaveIO clock.
 
Everyone agrees the I2S bits from WaveIO are bit-perfect. At issue, it appears, is jitter. Given the stock WaveIO being shipped by Lucian today, using external power but not modified to accept an external clock, is the jitter of the PCM stream affected by which OS is supplying the bits?
No.

According to the web site, WaveIO is based on a reference design that is a High Speed USB 2.0 Device that implements Audio Class 2.0 (and 1.0) using asynchronous synchronization. Thus, although the WaveIO master clock may not be perfect, it is not controlled by the OS or the USB Host computer. Using external power, there is very little, probably no opportunity for the computer to affect the on-board clock. In other words, the only way to improve the clock jitter reliably is to change the hardware, not the OS.
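For what it's worth, this is also something you can check for yourself on a Linux host rather than taking the web site's word for it. A rough sketch (the vendor:product ID is a placeholder - substitute whatever lsusb reports for the WaveIO):

lsusb
lsusb -v -d xxxx:xxxx | grep -A2 "Transfer Type"

On an asynchronous UAC device the isochronous audio endpoint should report Transfer Type Isochronous and Synch Type Asynchronous; an adaptive or synchronous device, whose clock does follow the host, will say so there instead.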

This thread is about WaveIO. If the answer to the above is that macs, linux, and windows machines all suffer the same jitter then everything about any application running on any OS is OT because the few picoseconds of measured jitter is inherent in the WaveIO clock.
Thank you for focusing the discussion on the purpose of this thread.
 
Well...

"According to the web site, WaveIO is based on a reference design that is a High Speed USB 2.0 Device that implements Audio Class 2.0 (and 1.0) using asynchronous synchronization. Thus, although the WaveIO master clock may not be perfect, it is not controlled by the OS or the USB Host computer. Using external power, there is very little, probably no opportunity for the computer to affect the on-board clock. In other words, the only way to improve the clock jitter reliably is to change the hardware, not the OS."

Consider that even when supplying a separate power supply for the USB interface, the computer ground is still connected to the WaveIO ground, and hence the computer's ground is connected to the ground of the oscillators. It is entirely possible (likely) that noise on the computer's ground does raise the jitter level because of this connection.

But I also think there is more to this than just jitter and bit perfection. I suspect that noise from the computer will couple through the WaveIO interface and into the DAC, perhaps even into the analog circuitry of the DAC. The following is speculation: it may be that better playback software allows the computer to run with less noise (most playback software developers remark that their software reduces processor load), and less noise then gets into the output of the DAC. This factor would also be a possible explanation for why low-power computers often sound better.
 
WaveIO on VoyageMPD

...an ALIX board with Voyage MPD as source, and for some reason ALSA always mutes the WaveIO. When I connected my other DAC it was not muted. To solve this I added the following lines to /etc/rc.local:

amixer -c 0 sset 'Luckit Clock Selector',0 100% unmute
amixer -c 0 sset 'Luckit Clock Selector',1 100% unmute

Thought I would post this in case someone else will use ALIX (or other Linux installations) and run into the same problem.
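One small addition, in case it saves the next person some hunting: the exact control names can be listed straight from ALSA, and the unmuted state can often be made to stick without touching rc.local at all. A rough sketch, assuming the WaveIO is card 0 as above (on Voyage you may need to run remountrw first so the state file can actually be written):

amixer -c 0 scontrols
amixer -c 0 sget 'Luckit Clock Selector',0
alsactl store 0

The first command lists the simple controls (so you can see the exact spelling ALSA expects), the second shows the current volume/mute state of one of them, and the last saves the mixer state so it should be restored at boot.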

I too use ALIX with Voyage MPD. I had the ALSA mute issue on first use (also on Ubuntu). In my case I set mute off using the alsa-utils GUI. It stayed off without further issue.

I am having a challenge getting the MPD DB to update. I tested the ALIX with a few WAV files on a USB stick. It remembers the USB but refuses to update with the contents of my NAS music folder. It reads the directory and will play, but there are two issues: it only sees about 70% of the tracks, and it refuses to update the DB. Anyone else seen issues like this? I have RW permissions on the NAS directory and other files like .PID and state are being written.

BTW, the sound of WaveIO with the ALIX Voyage combo is outstanding. Yes, for some reason the OS makes quite a difference. To the best of my knowledge I am getting unaltered bit-perfect playback on the ALIX MPD setup.
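On the DB update problem, for anyone chasing the same thing, the obvious first checks would be something like the following (paths are the stock Debian/Voyage defaults, so adjust to suit your install):

mpc update
tail /var/log/mpd/mpd.log
grep music_directory /etc/mpd.conf
mount | grep -E 'nfs|cifs'

That is: force a rescan, see what MPD itself logs about the NAS files it skips, confirm that music_directory points where you think it does, and confirm the share is actually mounted there.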
 
Consider that even when supplying a separate power supply for the USB interface, the computer ground is still connected to the WaveIO ground, and hence the computer's ground is connected to the ground of the oscillators. It is entirely possible (likely) that noise on the computer's ground does raise the jitter level because of this connection.
You do realize, I hope, what the term "circuit" means?

Current cannot flow from the computer ground to the separate power supply ground unless there is a circuit - i.e. some other connection between computer and power supply besides ground. You have to connect at least two wires before electricity will flow. Ground loops can cause a problem with ground-referenced signals, but when the ground is the only connection there can't really be a problem.

I'd say that it's nearly impossible (unlikely) that connecting just the ground will cause a problem.

If you dive into the details, the amount of noise will be proportional to the amount of current flowing on the ground. Since the USB power line is open-circuited, 0 A of current is flowing along the power lines. The USB data lines are differential, and ideally referenced to each other instead of ground. If there are any slight errors in the differential matching, there might be some ground current, but the noise level should be much lower than it would be with a 500 mA power-supply demand from the USB device.
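To put very rough, purely illustrative numbers on it (the resistance figure is a guess, not a measurement): the voltage the ground connection can inject is just V = I × R. Take something like 0.2 ohms for a metre of USB ground conductor plus connectors. At the 500 mA a bus-powered device could draw, that is on the order of 100 mV of ground shift; with the power pin open and only a few microamps of differential imbalance flowing, you are down around a microvolt. The exact figures do not matter much - the point is the several orders of magnitude between the two cases.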

I suppose there might be something I'm missing, so hopefully someone can explain how connecting ground but not power would conduct noise.
 
My thoughts regarding noise are these. If the SMPS is generating noise on the USB ground, that noise is present at the far end of the USB cable's ground; since the noise consists of voltage spikes of varying peaks, the ground at the WaveIO will be at a different potential at any point in time. As voltage can be present without current flow, to me it is entirely likely we are getting uA in the ground plane being drained off. The only way to be sure is to disconnect the USB ground at the input and note any differences. Obviously it would then need to be connected to the power supply ground of the WaveIO.
Mine hopefully got posted yesterday so I can't do this yet!

Who's game?

Drew.
 
I'd say that it's nearly impossible (unlikely) that connecting just the ground will cause a problem.

But when is a ground a ground? Obviously, dedicating a good PSU to the WaveIO provides it with, by definition, a clean and steady Ground line. Let's assume (reasonably) that the board is laid out to minimise issues with noise on its ground.

However, the Gnd line on a USB cable, far from being rock steady, fluctuates slightly in line with events on the motherboard, something I believe the engineers call "ground bounce". The reference point for the Data+ and Data- lines is Ground, not the +5-volt line or, as you imply, each other. You can usually bypass the USB's +5v line but you cannot bypass the Gnd - the devices don't have to be connected via the USB cable's Gnd (though that's the usual way) but they do have to be connected.

Noise on a USB cable's Ground line tends to disrupt the timing of data. Why that matters with conventional USB audio is fairly clear - the data are read in real time. Why it matters with an asynch device such as the WaveIO is much less clear but it does nevertheless seem to be an issue. (Why it is an issue is discussed in one of the links I cited earlier.)

One reason why an isolator such as the ADuM4160 is effective in the PC Audio context is that it isolates (though, obviously, not completely) the target device from noise on both the 5-volt line and the Ground. But they too need a clean power supply to work at their best.


. . . the amount of noise will be proportional to the amount of current flowing on the ground.

It's one variable - but one of many.
 
Do remove the Trident from the DAC's VDD_XO circuit. If you don't have the Trident on your DAC, then simply remove the ferrite bead that controls VDD_XO, which is located on the underside (L8 - mmm, not sure it's that one, so follow the trace), which should disconnect the onboard Crystek.

Now locate the R17 pads on the Buffalo II. Looking at the DAC with the Twisted Pear logos facing upright, connect your MCLK live wire only, from the WaveIO, to the right-hand pad of the two R17 pads, then connect the ground wire of your MCLK to the left-hand pad of R17. Keep all your I2S connections from WaveIO to Buffalo II as short and of as similar length as possible.
Hope that helps to synchronize, and do convey your results please.
regards
rol

My Wave IO board arrived today - many thanks Lucian, and well done for packing it so well. Looks like someone played football with the parcel...
It's a truly lovely-looking board, almost a shame to put it away in an enclosure.

I'm working out how to wire it to my dual-mono Buffalo II but I'm uncertain how to do this. Is it essential to disable the on-board clocks on the Buffalo as per rol's post above?

I'm going to be using a TP "Teleporter" to send the I2S via CAT6 cabling and something I read from the TP people indicated that you don't need to connect the ground, just use DATA, LRCK and BCLK. I'm a novice at DIY Hi-Fi and I find this I2S connection matter very confusing! All help gratefully received!
 
G'day Ryelands,

Query so I understand:

If we disconnect the USB ground and attach a ground from another power supply, say a shunt reg, does this stop the USB from working? I'm just trying to get a handle on your post. If it does, then separating the USB/PC power supplies is much harder (shock!) than I figured. Is the USB ground used to notify the host something is connected? Aside from the data reference you mentioned, what is its purpose?

Thanks muchly!

Drew.
 
We can disconnect the USB ground and attach a ground from another power supply . . .

Sorry for the confusion but, emphatically, No. As I understand it, the signal in a USB lead is the difference between D+ and D- referred to ground. No ground? No signal. Normally, the ground line goes from the PSU via who-knows-where to the USB socket on (usually) the back panel down a lead up to five metres long to the attached device. (There's a proper term for attached device but I forget it for now.) The ground line collects interesting things on the way inc noise and significant resistance.

The same goes for the +5v line except that it isn't a reference point for the data signal though, together with the ground line, it often supplies power to the attached device. Crucially, it also almost always tells it that it has a computer attached.

Where the likes of a DAC or similar is attached permanently to a computer dedicated to audio reproduction, we can safely cut the USB's noisy 5v line and obtain a clean, low-noise +5v signal (for signal is all it usually is) either from the DAC's PSU or, for bus-powered devices, from an external PSU. The latter is AFAIK what the AQVOX after-market USB PSU does. Whatever, it's a pretty common tweak that I've done on several systems and IMHO a good one.

Note that if you are in the habit of connecting and disconnecting the DAC while the computer is running, the cable is best left as spec. Also, WRT rsdio's earlier comment, there is no point whatsoever in doing the tweak to the likes of a disk drive as the data transfers in that case are not real time. So long as the thing works, it works fine.

It was list member wlowes who first gave me the idea of replacing the leads in a USB cable with a couple of CAT5 twisted pairs, one for the D+/D- lines and another for the Gnd and +5v lines. CAT5 is good quality cable, its impedance is pretty close to the USB spec, it can be made nice and short and it's cheap enough even for a Scotsman. wlowes prefers to screen his lead, I leave mine unscreened partly to keep capacitance low but mainly because I'm lazy. (The audible difference from screening that he reports presumably reflects different conditions.)

So in no time we have a USB lead with D+/D- as normal and a Gnd lead but no +5v as that's obtained separately. My "idea" was to try cutting the ground line as well. I got it from a discussion on the AA Computer Audio forum where a "manufacturer" of very expensive USB cables was baffled why omitting the Gnd lead sometimes markedly improved SQ but as often as not wouldn't work at all. I thought the answer pretty damn obvious . . .

In short, if there is another connection between the computer's PSU ground and the DAC's ditto, you can safely cut the USB cable's Gnd lead and perhaps improve SQ. If there isn't, you can't - the link doesn't work.

The Gnd-Gnd connection is provided either accidentally by the likes of safety earth lines or through the back door by your connecting the Gnd line of the two PSUs together at appropriate places. The reference point for the USB's data signal is thus likely to be significantly cleaner than it is with any USB cable no matter how expensive.

I tried the idea successfully on one system and got a definite improvement in SQ. I also tried it on a system using an ADuM4160 isolator where it didn't work but, before I worked out why, I discovered the WaveIO and didn't bother fault finding. When I get my WaveIO working well (it's sounding pretty damn good even now), I'll try again. It may well be that it doesn't have much effect.

I must stress that this "back-door ground" tweak does not conform to USB specs and thus has a slight element of risk. Unless your system is "set-and-forget", it's best to stick with the Gnd line in the USB cable.

Hope that's clearer.
 