A question for the pc audio experts

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Apparently... USB-linked external DACs have an issue with the synchronisation of the data between the PC and the DAC itself. I believe this issue is addressed by the new asynchronous converter technology?
The question is: do the same limitations apply, in whatever form, to an audio card (PCI) installed within the PC? And what about SPDIF links from the audio card to an external DAC or to the digital input of a CD player?
 
Well, I had an interesting reply from a member called "wwenze", but it seems to have appeared only in my emails even though it was not a PM... ?
Here it is anyway:

"Asynchronous USB clocking reduces jitter on the final 48kHz (or its multiple) audio output. Works by making the receiver the "master" and telling the computer when to send the data.

AFAIK, sound cards have always done that. On sound cards with an external clock input, if you change the input clock without telling the system that the sampling rate has changed, the playback speed changes accordingly (or stops completely if you remove the clock, unless the card auto-detects and reverts to its internal clock). Almost all sound cards have a 24.576MHz crystal (some have 22.5792MHz), or a multiple of it, for the generation of 48kHz (44.1kHz). In comparison, adaptive-clocking USB solutions usually (but not necessarily) PLL the 48kHz from the incoming data.

So the 48kHz generated by the sound card is clean, but the receiver is the slave which recovers the clock from the SPDIF input, so the jitter is back. Some feel that adaptive USB is worse than SPDIF, some feel vice versa.

Another issue is guaranteed delivery and bandwidth (or latency). I have a Musiland Monitor which uses asynchronous bulk (as opposed to asynchronous isochronous), a PCM2702 which is adaptive, and a US-144 which is isochronous (although I don't know about its clocking). Isochronous has guaranteed bandwidth, while bulk has guaranteed delivery. But if I enable the motherboard's WLAN adapter, the sound will still pop and crackle with all of them. The last time I had a sound card do that was with a Pentium 4 and the motherboard's onboard sound.

SPDIF gets guaranteed bandwidth since the channel isn't shared with anything else, so the only issue is the sound card getting data to the SPDIF output fast enough, which is not a problem."
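The crystal frequencies wwenze mentions divide down exactly to the standard sample rates, which is why those two values show up on nearly every card. A quick arithmetic check (plain Python, using only the numbers from the quote):

```python
# Sanity check: both crystal frequencies from the quote are exact
# integer multiples (512x) of the corresponding audio sample rate.
for crystal_hz, rate_hz in [(24_576_000, 48_000), (22_579_200, 44_100)]:
    ratio = crystal_hz // rate_hz
    assert crystal_hz % rate_hz == 0
    print(f"{crystal_hz/1e6} MHz / {rate_hz/1000} kHz = {ratio}")
    # -> 512 in both cases: a simple divider chain yields the bit,
    #    word and master clocks with no fractional PLL needed.
```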
 
I'll try to give a very simple explanation of how digital audio transfer works, to the best of my knowledge. Please feel free to make corrections.

Digital-to-analog converter chips usually expect to be fed a digital number to convert to an analog voltage, plus a control signal telling them when to execute the conversion. The conversions must happen at precisely periodic intervals. Jitter occurs when the digital-to-analog conversions happen at irregular intervals.
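To make the effect of irregular conversion instants concrete, here is a toy simulation (my own illustration, not from any real DAC): a 1 kHz sine nominally sampled at 48 kHz, with each conversion instant perturbed by random jitter. The error grows with the jitter amplitude.

```python
import math, random

random.seed(0)
fs, f = 48_000, 1_000   # sample rate and test-tone frequency, Hz

def rms_error(jitter_s, n=4800):
    """RMS difference between ideally-timed and jittered conversions."""
    err = 0.0
    for i in range(n):
        t_ideal = i / fs
        # each conversion happens up to +/- jitter_s away from nominal
        t_real = t_ideal + random.uniform(-jitter_s, jitter_s)
        err += (math.sin(2*math.pi*f*t_real)
                - math.sin(2*math.pi*f*t_ideal)) ** 2
    return math.sqrt(err / n)

for j in (1e-9, 10e-9, 100e-9):     # 1 ns, 10 ns, 100 ns of jitter
    print(f"{j*1e9:5.0f} ns jitter -> RMS error {rms_error(j):.2e}")
```

The error scales roughly linearly with the jitter, and also with the signal frequency: high-frequency content suffers first, which matches the usual description of jitter as a high-frequency noise/distortion mechanism.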

USB is great for transferring signals from devices like a computer mouse or a hard drive because they don't require precise timing. When such a device needs to send a signal, it asks the USB bus for permission, and the USB controller transfers the message when it decides the bus is no longer busy. A USB bus can support up to 127 devices competing to transfer data. Inside a computer, the USB controller itself sits on the PCI bus, which has a controller of its own. Current computer operating systems cannot guarantee precise timing.

Any USB audio implementation that relies on the operating system to clock the DAC will not work right. A hardware clock on the receiving end that controls the DAC directly works better, provided the computer can guarantee that data is always available. The best way to implement USB audio is with the bulk transfer method: the computer sends bursts of data over USB to a buffer on the DAC, and a clock on the device feeds the data into the converter chip at a steady rate.

Good sound card implementations have hardware buffers and precise onboard clocks. Bad sound cards have no clocks and rely on the OS clock and the high speed of the PCI bus. Sound cards in general have worse audio performance than external DACs because of the electrical interference inside a computer.

Dedicated audio transfer interfaces like SPDIF are more reliable in theory because the hardware is devoted to that single task. However, they can fall victim to poor implementations as well.

When comparing digital transfer interfaces, the correct question is not which format is better but how well each device was implemented.
 
The USB controller will transfer the message when it decides that bus is no longer busy. The USB bus can support up to 127 devices fighting to transfer data.

Isochronous packets have a higher priority and can occupy up to 80% of the USB bandwidth (see the last section of USB in a NutShell - Chapter 4 - Endpoint Types). Unless too many USB soundcards are hooked to the same bus, you are guaranteed enough bandwidth every single 1ms frame. That is the theory; there may be bugs in implementations, which would not surprise me :)
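A back-of-envelope check of that bandwidth claim (assumed figures: full-speed USB at 12 Mbit/s and 16-bit stereo PCM at 48 kHz):

```python
# How much of the isochronous budget does plain stereo audio need?
audio_bps = 48_000 * 2 * 16        # 16-bit stereo at 48 kHz, bit/s
usb_full_speed = 12_000_000        # full-speed USB raw rate, bit/s
iso_budget = 0.8 * usb_full_speed  # the ~80% isochronous cap cited above

bytes_per_ms_frame = audio_bps // 8 // 1000
print(f"audio needs {audio_bps/1e6:.3f} Mbit/s "
      f"({bytes_per_ms_frame} bytes per 1 ms frame)")
print(f"isochronous budget: {iso_budget/1e6:.1f} Mbit/s")
```

So a single stereo stream needs only 192 bytes per 1 ms frame, a small fraction of the reservable budget, which is consistent with the post's point that raw bandwidth is not the bottleneck.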


Current computer Operating Systems can not guarantee precise timing.
...
Any USB audio implementations that rely upon the Operating System to clock the DAC will not work right.
...
Good sound card implementations have hardware buffers and precise onboard clocks. Bad sound cards have no clocks and rely on the OS clock and the high speed of the PCI bus.

USB is not clocked by CPU-controlled software timers. The USB controller has a hardware clock (crystal-based, but of course derived via a PLL, like all other clocks on the motherboard). The data are transferred directly from memory via DMA, operated by the USB controller; no CPU is involved, just as with a PCI sound card. The controller uses interrupts, again very much like PCI. The difference is that the USB driver (i.e. the CPU) must process the memory buffers of samples to prepare the USB packets (URBs) and store them in a different part of RAM for DMA, while in most cases a PCI soundcard directly transfers the memory buffers (i.e. parts of RAM) filled by the player software. That is why USB drivers use so-called double buffering - one buffer for reading from the player, another buffer for storing the prepared URBs for the USB controller to read via DMA.
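The double-buffering handoff described above can be sketched roughly like this (a hypothetical simplification: a "driver" thread repackages player buffers into URB-like chunks, and a "controller" consumer drains them independently, as the DMA engine would; all names and sizes are illustrative):

```python
import threading, queue

player_buffers = queue.Queue()       # buffers filled by the player
urb_queue = queue.Queue(maxsize=4)   # a few URBs prepared in advance

def driver():
    """Read player buffers, wrap them as URB-like chunks (cheap work)."""
    while True:
        samples = player_buffers.get()
        if samples is None:          # end-of-stream marker
            urb_queue.put(None)
            return
        urb = {"header": {"len": len(samples)}, "payload": samples}
        urb_queue.put(urb)           # second buffer: ready for "DMA"

def controller(out):
    """Consume prepared URBs independently, as the hardware DMA would."""
    while True:
        urb = urb_queue.get()
        if urb is None:
            return
        out.extend(urb["payload"])

out = []
t1 = threading.Thread(target=driver)
t2 = threading.Thread(target=controller, args=(out,))
t1.start(); t2.start()
for chunk in ([1, 2], [3, 4], [5, 6]):
    player_buffers.put(list(chunk))
player_buffers.put(None)
t1.join(); t2.join()
print(out)   # all samples arrive in order via the two-stage buffers
```

The bounded `urb_queue` mirrors the "several URBs prepared in advance" point in the next paragraph: its depth is the latency/CPU-load tradeoff knob.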

The preparation of URBs is not CPU intensive; it is just adding some headers, joining the URBs from various streams, etc. - trivial for contemporary CPUs. I am talking about audio, with its few megabits per second at most; processing URBs for fast USB storage can be much more CPU demanding.

Several URBs are prepared in advance; there is always a tradeoff between CPU demand and latency. The lower the latency, the fewer URBs the CPU can prepare in advance and the more often it is asked for work. On Linux this parameter is tunable; even a low-powered machine can transfer USB audio without glitches while almost ground to a halt with other work - see my test at http://www.diyaudio.com/forums/pc-based/93315-linux-audio-way-go-23.html#post1719044

Supposedly profound articles on the internet by professional USB DAC manufacturers, claiming the CPU must serve the USB bus every 1 ms, make for sad reading.
 