USB cable quality

...the buffer at the receiving end typically has 4+ packets in advance of play, in waiting. It therefore doesn't experience any "resynchronizing" of clocks or anything else.
Not sure what that means. In isochronous mode, the DAC must always (re-)synchronize its sample clock to the rate of packet transmission. The presence of a FIFO only helps to smooth the (re-)adjustments, not eliminate them altogether. A jittery transmission will still cause jitter in the DAC's sample rate.
When the transmission rate is either close-but-higher, or close-but-slower than the receiver's clock...
...which will always be the case using independent boxes linked only by a USB cable, using the isochronous mode.

... there certainly is the possibility of data drop or interpolation (respectively). Very smart DACs have precisely controllable synthetic clock circuits that can adjust the play rate to match the transmission stream by as much as ±2% ... though of course such adjustment also comes with a shift in tone. This is not desired for most music. A change of a couple of parts per 10,000, though, is inaudible, even to persons with finely honed perfect pitch. And that is usually the on-the-ground, real-world difference in clocks between unsynchronized but crystal-referenced independent systems.
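
(A rough worked example of my own, just to put numbers on it: a 2-parts-per-10,000 rate difference, i.e. 200 ppm, shifts pitch by 1200·log2(1.0002) ≈ 0.35 cents, far below the few-cent differences that even trained listeners can reliably detect.)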

I don't think the actual amount of adjustment matters for this argument. It can be viewed as a pitch shift, or as a series of tiny discontinuities or momentary 'jitters', but the sad thing is that such a system is something of an analogue/digital hybrid and 'non-deterministic'; the playback is never 'bit perfect' over an extended period, and the system behaves differently each time it runs. Unfortunately, the nature of the cable may have some tiny, tiny influence on this. Maybe, just maybe, a 'higher quality' cable with separately shielded sections etc. might be shown to reduce the number of re-adjustments the DAC's clock has to make - it could be measured perhaps.

You and I know it couldn't be audible, but the way is open for people to be persuaded that 'golden ears' could detect it.

(Personally, the most I ever paid for a cable was £15, and I still feel embarrassed at having been so gullible.)
 
I thought USB transfer modes were discussed here extensively :)

USB isochronous means data are generally sent every frame (i.e. 1 ms for USB 1.x, 125 µs microframes for USB 2.0 high speed), and the USB standard reserves about 90% of the bandwidth for isochronous transfers. BTW this provision is implemented by the core USB driver; the actual USB controller is pretty dumb HW :)

Commonly used isochronous modes are

1. adaptive, where the computer sends an equal (best-effort) number of samples (audio, video) in each frame and the USB device must adjust its clock via a PLL to the incoming rate.

2. asynchronous, where the device has an independent clock, features a small buffer, and sends control feedback messages to the transmitter to speed up or slow down the transmission rate. Based on this information the USB audio driver increases or reduces the number of audio samples in the new frames prepared for USB transmission. Standard feedback technology, no rocket science.
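
To make the frame arithmetic concrete, here is a minimal sketch of how a host driver might decide how many samples go into each 1 ms frame (my own illustration, not lifted from any real driver; the function name is made up). In adaptive mode the nominal sample rate drives it; in asynchronous mode the device's feedback value would take its place:

Code:
/* Hypothetical per-frame packetisation arithmetic.  At 44.1 kHz and 1 ms
 * frames the nominal rate is 44.1 samples/frame, so the host queues 44
 * samples in most frames and 45 in every tenth. */
#include <stdio.h>

static unsigned samples_for_next_frame(unsigned sample_rate, unsigned frames_per_sec)
{
    static unsigned acc = 0;            /* fractional samples carried over */
    acc += sample_rate;
    unsigned n = acc / frames_per_sec;  /* whole samples owed this frame   */
    acc %= frames_per_sec;              /* keep the remainder              */
    return n;
}

int main(void)
{
    unsigned total = 0;
    for (int f = 0; f < 10; f++) {
        unsigned n = samples_for_next_frame(44100, 1000);
        printf("frame %d: %u samples\n", f, n);
        total += n;
    }
    printf("total over 10 frames: %u samples\n", total);  /* prints 441 */
    return 0;
}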

Regarding the internal buffer size, every USB audio device is required to report its internal processing delay, which is basically the size of its internal buffer. For details see page 59 of http://www.usb.org/developers/devclass_docs/audio10.pdf . My dirt-cheap USB soundstick reports a bDelay of 1 frame, i.e. 1 ms of audio data. It may be true, maybe not (of course it could be verified by measuring loopback latency).
 
Yep, and very little of this has any effect whatsoever on the output clock of the DAC. As mentioned, if there is a link between the USB clock and the DAC clock at all, it's an incompetent and ancient design.

Any decent USB audio interface will have a dedicated MCU clock; the audio clock domains for 22.1x/24x are completely independent of this, and any timing error/jitter on the clock used for the USB bursts is utterly irrelevant and has no effect on the output jitter, none at all.
 
Yep, and very little of this has any effect whatsoever on the output clock of the DAC. As mentioned, if there is a link between the USB clock and the DAC clock at all, it's an incompetent and ancient design.

Well, all adaptive USB DACs fall into this category, i.e. the majority of devices on the market. Asynchronous DACs have become common only recently. The Well-Tempered Computer

Any decent USB audio interface will have a dedicated MCU clock; the audio clock domains for 22.1x/24x are completely independent of this, and any timing error/jitter on the clock used for the USB bursts is utterly irrelevant and has no effect on the output jitter, none at all.

Decent: yes; standard: hardly :) There are very few USB Audio v.1 asynchronous devices on the market, and USB Audio v.2, which most async DACs on the market use, does not even have an MS driver in MS OSes yet :)
 
The USB Audio stuff linked from post 82 is interesting, but rather long! However, I note that it says that if people follow its rules then phase jitter is limited to ±1 audio sample. Now in the world of computer audio via tiny speakers when playing games that might be acceptable, but it can't be taken seriously for proper audio. So my reading is that USB is not a serious audio interface. Have I missed something?

In any case a PC is not a real-time system, even though NT (the underlying operating system) can trace its roots back to VAX/VMS and RSX-11 which were genuine real-time systems. Even if it were, would you want your audio bit transitions synchronised to either a cheap PC clock or an even cheaper USB interface clock? No, you need to re-clock in the DAC, and as soon as you do that the cable disappears as an issue.

My conclusion is that USB audio appears to be relying on something which cannot be relied on, therefore it is at best a lucky coincidence if it works at all. In this case cable voodoo is hardly surprising.
 
The USB Audio stuff linked from post 82 is interesting, but rather long!

It is the official USB audio v.1 specification; I am glad it is extensive :)

However, I note that it says that if people follow its rules then phase jitter is limited to ±1 audio sample. Now in the world of computer audio via tiny speakers when playing games that might be acceptable, but it can't be taken seriously for proper audio. So my reading is that USB is not a serious audio interface. Have I missed something?

That section talks about keeping separate audio streams synchronized. This synchronization is by design limited to ±1 sample at the driver level, if the driver knows the delay (overall, to the analog output) of the audio function (i.e. stream) in number of samples. It is not about the clock jitter you are talking about.

In any case a PC is not a real-time system, even though NT (the underlying operating system) can trace its roots back to VAX/VMS and RSX-11 which were genuine real-time systems. Even if it were, would you want your audio bit transitions synchronised to either a cheap PC clock or an even cheaper USB interface clock? No, you need to re-clock in the DAC, and as soon as you do that the cable disappears as an issue.

Oh not again please :)

First, the USB frame clock is generated by a PLL at the USB hardware controller level; it has NOTHING to do with the timing performance of the OS. It is a common audiophile myth I have been trying to dispel for years :)


Second, adaptive DACs do not use the incoming signal for clocking directly (that mode is called synchronous and used to be used only by dirt-cheap Skype headsets), but align their own clock to the incoming rate by means of a PLL. Sophisticated to varying extents, depending on the manufacturer. Apparently some do the clock recovery pretty well, as there are adaptive DACs with very good performance on the market.
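
For illustration only, here is a toy model of what such clock recovery amounts to (my own sketch, not any particular chip's algorithm; real devices do this in a hardware PLL or fractional divider rather than in software). A small proportional-integral correction on the buffer fill error steers the recovered rate until it matches the incoming rate:

Code:
/* Toy model of adaptive-mode clock recovery (illustration only).  The host
 * delivers 44.1 samples per 1 ms frame on average; the DAC's own crystal
 * thinks a frame is worth 44.0 samples.  A small PI correction on the
 * buffer fill error steers the recovered rate onto the incoming rate. */
#include <stdio.h>

int main(void)
{
    const double arrival = 44.1;    /* samples arriving per frame (host)   */
    const double local   = 44.0;    /* DAC's uncorrected samples per frame */
    const double target  = 64.0;    /* desired buffer fill (samples)       */
    const double kp = 0.014, ki = 0.0001;

    double fill = target, integ = 0.0;

    for (int frame = 0; frame <= 5000; frame++) {
        double err  = fill - target;        /* >0: DAC consuming too slowly */
        integ      += err;
        double rate = local + kp * err + ki * integ;  /* recovered rate     */

        fill += arrival - rate;             /* net buffer change this frame */

        if (frame % 1000 == 0)
            printf("frame %5d  fill %6.2f  recovered rate %8.3f kHz\n",
                   frame, fill, rate);      /* samples per 1 ms frame = kHz */
    }
    return 0;
}

Run it and the recovered rate walks from 44.000 kHz up to 44.100 kHz while the fill level settles back to the target; only the average packet rate matters, the cable never enters into it.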

My conclusion is that USB audio appears to be relying on something which cannot be relied on, therefore it is at best a lucky coincidence if it works at all.

Well, it is a conclusion of yours. Not much we can do about it...
 
Well, all adaptive USB DACs fall into this category, i.e. the majority of devices on the market. Asynchronous DACs have become common only recently. The Well-Tempered Computer

Recently? Come on! UAC2 is hardly recent when talking computer audio; hell, even in real-life terms UAC2 is at minimum a three-year-old standard.



Decent: yes; standard: hardly :) There are very few USB Audio v.1 asynchronous devices on the market, and USB Audio v.2, which most async DACs on the market use, does not even have an MS driver in MS OSes yet :)
Simple solution: use a decent OS instead of that piece of crap... sorry, but using the length of time MS has taken to get their **** together with UAC2 as some sort of benchmark is ridiculous. MS has been woefully slow with this; it's shameful, and it doesn't even seem to be on their agenda. It's been standard on MacOS and more recently Linux for years, so yes, I would call it standard. Besides, it really doesn't have any bearing on my words: anything that uses any type of clocking information, be it SPDIF, USB, whatever, for final clocking, be it recovered or direct, is a bad design that isn't worth our attention. A local clock is the only decent option.
 
I asked "Have I missed something?". Thank you for responding. If I understand correctly there are two ways of doing USB Audio:
- the cheap and cheerful way can't work properly, so it doesn't matter whether the cable is expensive or not
- the proper way does not rely on incoming data timing, so it doesn't matter whether the cable is expensive or not
Have I got it right now?
 
I asked "Have I missed something?". Thank you for responding. If I understand correctly there are two ways of doing USB Audio:
- the cheap and cheerful way can't work properly, so it doesn't matter whether the cable is expensive or not
- the proper way does not rely on incoming data timing, so it doesn't matter whether the cable is expensive or not
Have I got it right now?

Correct. Well, even the first way can be used to good effect if you use a DAC clocking method that does not rely on an embedded clock; it doesn't have to be async USB to manage that. There are some more recent, source-agnostic methods like that used by Ian's FIFO, which uses a large local FIFO buffer and clocks the data out of RAM using a local high-quality clock and a couple of FPGAs.

Forgoing that, then yes, you have it about right.

It's best if some sort of isolation takes place before the reclocking stage, to sever the ground link, which may or may not be troublesome depending on the quality of the PC and DAC PCB layouts. The cable can carry both common-mode and differential noise, but there isn't that much a cable can do about that, since it's part of the signal/ground. Some USB receivers/converters require a ground connection to be present, some don't.
 
I asked "Have I missed something?". Thank you for responding. If I understand correctly there are two ways of doing USB Audio:
- the cheap and cheerful way can't work properly, so it doesn't matter whether the cable is expensive or not
- the proper way does not rely on incoming data timing, so it doesn't matter whether the cable is expensive or not
Have I got it right now?

Technically, yes.

Practically it is always the eternal question of the price/benefit ratio, which everyone sets at a different level. I have no problem with buying an inexpensive adaptive USB DAC if it performs well enough for the specific need.

Plus it all comes down to blind listening tests. If I cannot tell the difference, I will not care about adaptive or asynchronous. I know that adaptive can be made very good. In fact many DIY projects on this website (if not most) use adaptive and users are happy :)
 
there are some more recent, source-agnostic methods like that used by Ian's FIFO, which uses a large local FIFO buffer and clocks the data out of RAM using a local high-quality clock and a couple of FPGAs.

Well, I personally do not like such a design. The buffer is either large enough to avoid under/overflows over a reasonable time span, which makes the latency too large for watching movies or doing audio work, or small (milliseconds), with the danger of the above in long-term operation. The buffer may hold for a few hours of glitch-less operation, but it is not a "technically clean" solution. And if the output clock is slowly adapted to follow the buffer fill level, we are back to the adaptive mode :)
 
If you can't rely on incoming data timing, then a local buffer and local clock is the only solution. If you can't rely on (or can't control) even the average data timing then the buffer has to be rather large. A large buffer needs to start about half-full, so there will be an initial delay when starting the audio. In either case the USB is merely delivering bits, so the cable doesn't matter as long as it is good enough for normal IT purposes. Which is where we started!
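
(Back-of-envelope, assuming ordinary crystal tolerances: with the two clocks mismatched by 100 ppm, the fill level drifts by 0.1 ms of audio every second, so half a second of headroom buys roughly 0.5/0.0001 = 5000 s, about 83 minutes, before an under- or overrun. Hence "rather large".)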
 
Well, I personally do not like such a design. The buffer is either large enough to avoid under/overflows over a reasonable time span, which makes the latency too large for watching movies or doing audio work, or small (milliseconds), with the danger of the above in long-term operation. The buffer may hold for a few hours of glitch-less operation, but it is not a "technically clean" solution. And if the output clock is slowly adapted to follow the buffer fill level, we are back to the adaptive mode :)

aren't we talking about music?

We've been through all of this crap in the FIFO thread. In addition to my own use over the last 18 months, not a single user out of many hundreds has reported an over/under-run, not one, yet periodically we get someone like you poking their head in to point out this rather obvious factor that was known since the initial stages of prototyping....

It has smart logic that watches the memory, and it's large enough that it's a complete non-issue. If you must, there are video players that allow you to delay the video so that it syncs. With the Si570 clock board, two-way communication is possible via USB that queries exactly how full the memory is and what the sample rate is, and communicates the delay needed to the computer over an isolated connection.

This FIFO allows a total of ~1.2-1.4 ps phase jitter at the output of the clock buffer (galvanically isolated), which has impedance-controlled layout and connectors. This is verified and consistent across TOSLINK, AES3, BNC, and USB->I2S.
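
To put that number in perspective (a back-of-envelope of my own, taking the figure as RMS jitter): for a full-scale 20 kHz sine, the jitter-limited SNR is about -20·log10(2π · 20 kHz · 1.3 ps) ≈ 136 dB, comfortably beyond what any converter's analogue stage resolves.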

It doesn't really get much better; you can pick all you like on a theoretical level, but unless you actually present a better solution, it's just noise...

Not something involving adding low-frequency jitter (like wow/flutter) to keep levels constant either, meaning that an MCU is on clock ground creating noise and ground bounce. This, when there is no actual problem evident with the current system, is pointless complexity/expense in an already complex system.

If you can't rely on incoming data timing, then a local buffer and local clock is the only solution. If you can't rely on (or can't control) even the average data timing then the buffer has to be rather large. A large buffer needs to start about half-full, so there will be an initial delay when starting the audio. In either case the USB is merely delivering bits, so the cable doesn't matter as long as it is good enough for normal IT purposes. Which is where we started!

yep
 
When you look at USB AC1, you're driven to wonder why on earth it was designed like that, but then to marvel at the fact that despite being an imperfect system it has been honed to operate with remarkable success.

Then when you look at UAC2, you see a system that is sensibly designed with timing controlled locally at the DAC and data delivered on demand as is possible in the older (admittedly multi-wire) RS232 serial port, but then you're driven to wonder why Microsoft won't make a commitment to support it.

...and then, of course, you remember that there seems to be a commitment on the part of OS designers of all flavours to make the lives of 'users' as difficult as possible short of forcing them to give up and revert to using an abacus.

I see you've posted again in the meantime, qusp. It's not really fair to say that 'it doesn't really get much better'. A FIFO is just a kludge, and if Microsoft introduced a UAC2 driver, all this stuff would go away.
 
When you look at USB AC1, you're driven to wonder why on earth it was designed like that, but then to marvel at the fact that despite being an imperfect system it has been honed to operate with remarkable success.

Then when you look at UAC2, you see a system that is sensibly designed with timing controlled locally at the DAC and data delivered on demand as is possible in the older (admittedly multi-wire) RS232 serial port, but then you're driven to wonder why Microsoft won't make a commitment to support it.

...and then, of course, you remember that there seems to be a commitment on the part of OS designers of all flavours to make the lives of 'users' as difficult as possible short of forcing them to give up and revert to using an abacus.

I see you've posted again in the meantime, qusp. It's not really fair to say that 'it doesn't really get much better'. A FIFO is just a kludge, and if Microsoft introduced a UAC2 driver, all this stuff would go away.

No it wouldn't; it has nothing to do with the OS... and it's perfectly fair to say. MS getting their **** together would not result in all UAC2 interfaces outputting just over 1 ps jitter on an isolated connection no matter what the source... that's a ridiculous statement! :rolleyes: Clearly you know not of what specifically I speak...

Noise on clock ground alone, if left unaddressed, can create significant jitter, and isolation itself adds jitter, which necessitates a buffer/reclocker.

There may be other ways to achieve a similar thing (for example, if you build the chipset onto the DAC PCB, use the same clock domain for USB->I2S conversion as the DAC, and reclock it after isolation, for which you would need FIFO memory), but nothing on our landscape here is as successful, as flexible, or as immune. The XMOS UAC2 interface by itself has over 1.5 ns jitter; that's 1000x higher than Ian's FIFO.

To be clear, I'm not talking about the USB FIFO, rather an I2S FIFO local to the DAC and using the same clock domain. These elements can be arranged in a number of ways for the same result, but regardless, speaking objectively, it doesn't really get better than 1-2 ps jitter; at that point it's best to shift attention back towards speakers...
 
We've been through all of this crap in the FIFO thread. In addition to my own use over the last 18 months, not a single user out of many hundreds has reported an over/under-run, not one, yet periodically we get someone like you poking their head in to point out this rather obvious factor that was known since the initial stages of prototyping....

I am glad the large-buffer solution exists and that its users, who do not mind the large latency, are happy. I would not use it, as I need the flexibility of watching movies too.

It has smart logic that watches the memory, and it's large enough that it's a complete non-issue. If you must, there are video players that allow you to delay the video so that it syncs. With the Si570 clock board, two-way communication is possible via USB that queries exactly how full the memory is and what the sample rate is, and communicates the delay needed to the computer over an isolated connection.

I am glad it works for you. However, facing such a technical workaround, I would go with the USB async solution. Its built-in feedback is WAY less complicated and stays encapsulated in the realm of the USB driver.

This, when there is no actual problem evident with the current system, is pointless complexity/expense in an already complex system.

Well, I would call what you describe above, with the second feedback link communicating with the playback application, complicated. But we all have different views and needs. Fortunately for all of us :)
 
I am glad the large-buffer solution exists and that its users, who do not mind the large latency, are happy. I would not use it, as I need the flexibility of watching movies too.



I am glad it works for you. However, facing such a technical workaround, I would go with the USB async solution. Its built-in feedback is WAY less complicated and stays encapsulated in the realm of the USB driver.



Well, I would call what you describe above, with the second feedback link communicating with the playback application, complicated. But we all have different views and needs. Fortunately for all of us :)

Yet you are stuck with USB and USB only... and without a chunk of local memory and an isolated clock domain, you will have higher jitter. I'm not aware of any USB audio interfaces that adjust video delay; they just have a smaller buffer so that the delay is very small. I believe the EXADEVICES board has a custom driver that allows this; it's not part of the UAC2 standard AFAIK. You can know what the delay will be with pro audio interfaces, of course, and this can be adjusted for in the driver or in audio editing software based on the sample rate, but I don't think that's built into turnkey UAC2. I could be wrong; I haven't paid much attention to commercial audio kit for a while.

Is it the only solution? No, but it's the lowest-jitter solution, and to match it or come close you would need the very same building blocks.

Myself, I have a second ESS DAC (USB) for video, so it doesn't bother me, and I would need to align video anyway due to digital crossover delay.

counter culture: what you are prepared to do or not is irrelevant to the argument. I mentioned it doesn't really get any better objectively, and it doesn't; suggesting solutions that have orders-of-magnitude worse performance isn't an argument. You can play the 'won't hear a difference' card if you choose as well; that too is irrelevant to an objection to my statement, which was purely objectively based.
 
Given access to suitable test equipment I could design a device meeting or exceeding the jitter performance you quoted, and under UAC2 it would be considerably less complicated, less expensive, and have lower latency.

You will see such devices appear in the future; when you do, you can come back and I will explain them to you.
 
When you look at USB AC1, you're driven to wonder why on earth it was designed like that,

Then when you look at UAC2, you see a system that is sensibly designed with timing controlled locally at the DAC

UAC1 offers adaptive and asynchronous modes. Asynchronous DACs for UAC1 The Well-Tempered Computer


UAC2 did not introduce any major changes to the actual transmission protocol. The main change is official support for 192+ kHz sample rates and expanded specifications for communicating and setting the device features and controls. Look at occurrences of the UAC_VERSION_2 identifier in the Linux kernel source code: LXR linux/
 