USB cable quality

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
OK, I will go and have another look, but I understood that UAC1 asynchronous was available using bulk mode (though this requires a custom driver and there is no guaranteed bandwidth), and that UAC2 will support true bursty download controlled from the DAC end under the standard. If it doesn't, then it damn well ought to, because that is the nub of the problem we are discussing.
 
Given access to suitable test equipment, I could design a device under UAC2 meeting or exceeding the jitter performance you quoted, while being considerably less complicated, less expensive and lower in latency.

You will see such devices appear in the future; when you do, you can come back and I will explain them to you.

I doubt that very much...

The jitter I mention is an actual period jitter measurement of the sum total of the clock buffer and clock (just those 2 parts' jitter in series) at the output of the buffer; that's all the jitter there is. You still don't understand the mechanism after several iterations of the function blocks, so it's a bit laughable that you think you can do better.

But if you are developing a commercial product... then I shouldn't explain further someone else's IP that isn't mine to give away.

Predicted jitter will not be accepted; I'll be very interested to see you even produce a meaningful measurement below 1 ps...
 
Yet you are stuck with USB and USB only... and without a chunk of local memory and an isolated clock domain, you will have higher jitter.

?? Do you really understand the way USB asynchronous audio works? There is a local buffer and there is an isolated clock domain, just like in your solution. The major and key difference is that the device sends feedback messages to the host, telling it to speed up/slow down the data transmission rate to keep the buffer optimally filled.

As a result, the local buffer can be kept short, allowing for low-latency operation.
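To make the feedback mechanism concrete, here is a small sketch of how a UAC asynchronous device might encode its rate feedback and how the host might react. It assumes the USB full-speed convention (the feedback value is the average number of samples per 1 ms frame, in 10.14 fixed point); the function names and the exact host-side strategy are illustrative, not taken from any particular product.

```python
# Sketch of UAC asynchronous rate feedback (full-speed, 10.14 fixed point).
# Function names and the host-side accumulator strategy are illustrative.

def encode_feedback_10_14(sample_rate_hz: float) -> int:
    """Encode samples-per-frame as a 10.14 fixed-point feedback value."""
    samples_per_frame = sample_rate_hz / 1000.0   # full speed: 1000 frames/s
    return round(samples_per_frame * (1 << 14))

def decode_feedback_10_14(value: int) -> float:
    """Decode a 10.14 fixed-point feedback value back to samples per frame."""
    return value / (1 << 14)

def samples_for_next_frame(feedback_value: int, accumulator: float):
    """Host side: decide how many whole samples to send in the next frame.

    The fractional remainder is carried over, so over many frames the host
    tracks the device's measured rate and the device buffer stays near its
    target fill level.
    """
    accumulator += decode_feedback_10_14(feedback_value)
    n = int(accumulator)            # whole samples to send this frame
    return n, accumulator - n       # carry the fraction forward
```

For 44.1 kHz the device reports roughly 44.1 samples per frame, so the host ends up sending 44 samples in most frames and 45 in some, steering the long-term average onto the DAC's actual clock.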


I'm not aware of any USB audio interfaces that adjust video delay; they just have a smaller buffer so that the delay is very small.

In one of my previous posts I talked about the bDelay UAC parameter, which reports the delay incurred within the device itself.

Parameters like that (together with the usually much larger delays caused by DMA buffers) can be reported upstream - see the function snd_pcm_delay in ALSA project - the C library reference: PCM Interface of the Linux ALSA API. And it gets used in video applications for delaying the video:

mplayer2 - master mplayer2 repository

mplayer2 - master mplayer2 repository

The reality is that most USB interfaces report either 0 or 1 frame (following from a Google search of lsusb outputs), which may be correct or incorrect. As a result, the snd_usb_delay method does not add the value into the total delay calculation https://git.kernel.org/cgit/linux/kernel/git/tiwai/sound.git/tree/sound/usb/pcm.c#n42 . I will ask about it on the alsa-devel mailing list.
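The delay components mentioned above can be combined into a single latency figure that a video player would use to delay its frames. The sketch below is illustrative only (it is not the actual mplayer2 or ALSA code); the breakdown into DMA-buffered frames, USB-queued frames and the device's bDelay value is an assumption about where the audio can be "in flight".

```python
# Illustrative sketch (not the mplayer2/ALSA implementation): summing the
# delay sources discussed above into one audio latency figure for A/V sync.

def total_audio_delay_s(dma_buffered_frames: int,
                        usb_queued_frames: int,
                        bdelay_frames: int,
                        sample_rate_hz: int) -> float:
    """Audio still 'in flight' between the application and the speaker.

    dma_buffered_frames - frames sitting in the ALSA ring buffer
    usb_queued_frames   - frames queued in already-submitted USB transfers
    bdelay_frames       - the device-internal delay reported via bDelay
                          (often 0 or 1, and possibly simply wrong)
    """
    frames = dma_buffered_frames + usb_queued_frames + bdelay_frames
    return frames / sample_rate_hz
```

With, say, 4096 frames in the DMA buffer, 441 frames queued to USB and bDelay = 1 at 44.1 kHz, the video would be delayed by roughly 103 ms; a wrong bDelay of 1 frame contributes only about 23 microseconds, which is why the kernel ignoring it matters little for lip sync.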
 
Going back to the topic, however: is it not the case that in isochronous mode the cable will have some effect on the timing, no matter how slight? Maybe it only results in a 1 ppm clock change every hour, but it's hard to guarantee that absolutely, and that's all that's needed for some people to justify spending hundreds on the cable.
 
If 1 ppm is enough for people to justify spending hundreds on some tinned copper, insulation and spruced-up connectors (gold plating, of course!)... then the only thing to remind oneself is, "a fool and his money are soon parted". How many wonderful systems have I seen, with little pointy cones raising speakers off the ground to supposedly squash Doppler shifts, with little pyramidal foam supports to lower the capacitance of their $100/foot speaker cables, with their wicked-cool monoblocks and all-separates everything, including totally re-capacitored high-end pre's... and their music collection is just pathetically thin. One guy I know has spent most of his adult life endlessly tuning things, being a real DIYer, but has fewer than 50 great recordings. Had he spent 50% on the system, it still would have been utterly awesome. The other 50% could have gone into buying thousands of great performances to actually use the system to play back.

GAS. Gear Acquisition Syndrome. At some point a lot of us slump into it, and get a kick out of it. I've bought cameras for years ... that I've barely used, and hit myself over the head for ever buying ... yet, I look at photography magazines, and the GAS lust comes back. Same for audio. Same for computers. And sadly, I think GAS lies at the very core of the current fantasy/mania that has gripped so many "kids" with their iPhones and iDevices.

Like my brother-in-law... who has literally camped out overnight to get each successive new iPhone and iPad and iBook. Yet, analogous to the above, he hardly uses the devices for anything but their most banal purposes. His sound system is top shelf, and his recordings are all on the bottom shelf. His camera has almost no peers, yet his shots are, with remarkably few exceptions, banal, trite, badly framed and overwrought.

GAS

All the rest of the foregoing re: USB cables, isochronous, asynchronous, and so on... is just abstruse. Bottom line: there is no difference between nominally made, "working to the optimum spec" cables. Good cable, good wire, good connectors... not fancy, just "made right" by large manufacturers... delivers the bits. There is NO "jitter" that the wire somehow introduces that is improved upon by having reconstituted-unicorn-horn insulation, unobtainium wire, or blessed-on-the-thighs-of-virgins connector ends.

GoatGuy
 
Argh, we are not talking about the memory for buffered USB packets here; we are talking about i2s/PCM buffered directly at the DAC clock. That buffer is asynchronous with respect to the input, but the DAC is clocked synchronously: the data is clocked out of local FIFO memory using that same DAC master clock, which also clocks the flip-flops after galvanic isolation.

The USB clock that controls the async bursts is a different matter, and often an order of magnitude less accurate. Yes, the buffer can be kept small with USB due to feedback, but the same elements I'm talking about are needed to have i2s and an MCLK that is the sum total of the MCK at the buffer/flip-flop that drives the DAC. All of the available USB interfaces and interface chips on the market, aside from custom solutions that also include a memory block and flip-flops, are higher jitter. XMOS, CMedia etc. are all significantly higher jitter without further cleaning up.

I suggest you read my post again, keeping in mind that I'm talking about sample-accurate video/audio sync. Read the part where I say the same elements can be arranged in a couple of other ways for the same result, but the same basic elements are needed.

The reason for the large local buffer is that it was designed to work with all manner of sources: USB, SPDIF etc.

But look, I suggest you check it out. I'm not selling it, and I have no need to explain this all again or argue it.

OK, I will go and have another look, but I understood that UAC1 asynchronous was available using bulk mode (though this requires a custom driver and there is no guaranteed bandwidth)

The protocol you are talking about is not the USB Audio Class v.1 standard, which has always been isochronous. I already posted a link to UAC1 async devices on the market (some no longer available). A well-known example of UAC1 async (with a minor deviation from the standard) is the EMU 0202/0404 USB soundcard.

You are talking about proprietary protocols developed by some DAC producers, a typical example being the M2Tech HiFace1 and its "clones".


and that UAC2 will support true bursty download controlled from the DAC end under the standard. If it doesn't, then it damn well ought to, because that is the nub of the problem we are discussing.

UAC1/2 runs isochronous, i.e. continuous, fluent delivery of data to the device every frame. Bursty download is typical for bulk-mode transmission and has never been part of the UAC standard.
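The "fluent delivery every frame" can be made concrete with a small sketch. For a 44.1 kHz stream over 1 ms full-speed frames, a simple sample-count accumulator (an assumption about the host implementation, but the standard nominal behaviour) spreads the data as 44 samples in nine frames out of every ten and 45 in the tenth, rather than sending big bursts:

```python
# Sketch: nominal isochronous pacing of 44.1 kHz audio over 1 ms USB frames.
# Each frame carries only the samples corresponding to its time slice.

def isochronous_packet_sizes(sample_rate_hz: int, frames: int):
    """Yield the whole-sample count for each successive 1 ms frame."""
    acc = 0
    for _ in range(frames):
        acc += sample_rate_hz          # accumulate in units of samples/second
        n, acc = divmod(acc, 1000)     # 1000 full-speed frames per second
        yield n                        # 44 or 45 samples for 44.1 kHz
```

Over any 10 consecutive frames this delivers exactly 441 samples, i.e. precisely 10 ms of 44.1 kHz audio; a bulk transfer, by contrast, would move as much data as bus arbitration allows in each transaction.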
 
Argh, we are not talking about the memory for buffered USB packets here; we are talking about i2s/PCM buffered directly at the DAC clock. That buffer is asynchronous with respect to the input, but the DAC is clocked synchronously: the data is clocked out of local FIFO memory using that DAC master clock, which also clocks the flip-flops after galvanic isolation. The USB clock that controls the async bursts is a different matter, and often an order of magnitude less accurate. Yes, the buffer can be kept small with USB due to feedback, but the same elements I'm talking about are needed to have i2s and an MCLK that is the sum total of the MCK at the buffer/flip-flop that drives the DAC.

Sorry, I do not understand which clock you are talking about. The crystal-based clock in the async USB receiver drives the I2S bus; I do not see any other.


I suggest you read my post again keeping in mind i'm talking about sample accurate video/audio sync. read the part where I say the same elements can be arranged in a couple other ways for the same result, but the same basic elements are needed.

Sure, there are always many ways. I gave you my reasons why I prefer the encapsulated feedback of async USB. You like your solution, fair enough. I just do not see any way your solution is technically superior to async USB. It is perfectly possible that the ready-made existing implementations, which all manufacturers use to avoid the USB stack details, are not optimal, and that your specific implementation measures better. But there is no technical reason a USB async implementation could not improve to reach the same level.
 
Sorry, yes, my use of "burst" above is incorrect for this case too, but the rest stands; just remove that word. I thought I'd post that here rather than as another edit, which could be missed - as I think may have happened with at least one of my previous posts, based on your reply.
 
Sorry, yes, my use of "burst" above is incorrect for this case too, but the rest stands; just remove that word. I thought I'd post that here rather than as another edit, which could be missed - as I think may have happened with at least one of my previous posts, based on your reply.

That may well be, since I honestly have no idea which clock you are talking about. Your solution is USB async audio with no feedback and a much longer buffer to compensate for the missing incoming bitrate control.
 
Sorry, I do not understand which clock you are talking about. The crystal-based clock in the async USB receiver drives the I2S bus; I do not see any other.




Sure, there are always many ways. I gave you my reasons why I prefer the encapsulated feedback of async USB. You like your solution, fair enough. I just do not see any way your solution is technically superior to async USB. It is perfectly possible that the ready-made existing implementations, which all manufacturers use to avoid the USB stack details, are not optimal, and that your specific implementation measures better. But there is no technical reason a USB async implementation could not improve to reach the same level.

It's not my solution; iancanada here created it, I was just part of prototype testing and made some suggestions. Clearly you aren't aware of the project/product. Here's the FIFO wiki that hochopeper put together; it has all the important links.

It was developed as a purely audio solution and is DAC- and source-agnostic, basically enabling bit-perfect, extremely low jitter i2s and master clock to your DAC of choice. Most are using it with ESS, but it doesn't have to be. It produces the same quality output no matter what you plug into it (within reason).

I never said USB couldn't improve to the same level, but it's pretty much impossible to better it; it's limited only by the quality of the clock used. But to match it you need most of the same parts: galvanic isolation, or otherwise being isolated from USB ground (without this, matching it is out the window); local memory/FIFO; a clock; flip-flops; an impedance-controlled, preferably buffered connection to the DAC; and a very low noise clock power supply. And you would still be stuck with USB only, not SPDIF.
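A back-of-the-envelope sketch shows why a FIFO without rate feedback needs a much deeper buffer than an async USB endpoint. Without feedback, the source and DAC clocks drift apart at their frequency offset, and the FIFO slack has to absorb that drift until the stream ends or the buffer is recentred. The numbers below are illustrative assumptions, not the actual FIFO project's parameters.

```python
# Sketch: how long a feedback-less FIFO lasts before under/overrun, given
# a clock offset between source and DAC.  All numbers here are illustrative.

def seconds_until_slip(buffer_frames: int,
                       sample_rate_hz: int,
                       clock_offset_ppm: float) -> float:
    """Time until a half-full FIFO under- or overruns.

    With no feedback, samples arrive at the source rate and leave at the
    DAC rate; the fill level drifts by the rate difference.
    """
    slack_frames = buffer_frames / 2                       # start half full
    drift_frames_per_s = sample_rate_hz * clock_offset_ppm * 1e-6
    return slack_frames / drift_frames_per_s
```

For example, a 2-second buffer (88200 frames at 44.1 kHz) with a 100 ppm offset between clocks drifts only about 4.4 frames per second, so it runs for roughly 10000 seconds before slipping; with async USB feedback the buffer can instead stay a few milliseconds deep because the host continuously recentres it.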

It doesn't have a USB input; it has an i2s input and i2s + MCLK output, and there is a matching SPDIF module.

I think we are actually mostly on the same page, except you haven't seen the project I'm talking about before. It's pretty physically large with all the options, which is a pain, but the pricing is pretty much a gift; DACs with this sort of tech cost pretty serious dollars.

Check out the wiki and the linked manuals for an overview.
 
Do you really understand the way USB asynchronous audio works? There is a local buffer and there is an isolated clock domain, just like in your solution. The major and key difference is that the device sends feedback messages to the host, telling it to speed up/slow down the data transmission rate to keep the buffer optimally filled.

UAC1/2 runs isochronous, i.e. continuous, fluent delivery of data to the device every frame. Bursty download is typical for bulk-mode transmission and has never been part of the UAC standard.

I fail to see what useful distinction you are seeking to draw here. "The device sends feedback messages to the host telling it to speed up/slow down the data transmission rate to keep the buffer optimally filled." How is this distinct from "bursty", and how is it "isochronous"?

May I ask a few questions? Please feel free to ignore any of them.

1. "Asynchronous USB 2.0 input module"
Does this mean "USB Audio Class 2.0" compatible or simply "USB 2.0 (High-speed)" compatible?
If it is "USB Audio Class 2.0" compatible, it requires driver software in a Windows environment and does not in a Mac OS environment. Is that correct?

Thanks! I am happy to answer.

1) USB Audio Class 2 :)

Now, perhaps I am misled, but here we have an "Asynchronous USB 2.0 input module" described as operating under 'USB Audio Class 2' by its developer.

What useful contribution are you seeking to make to the conversation?
 
I see, it is an asynchronous FIFO. Hats off to everyone involved; certainly a useful device. But it is not really related to USB.

USB async works basically the same way as if this FIFO were fed by standard adaptive USB, but it is less complicated (only one buffer involved, no PLL in the adaptive USB part) and enjoys the luxury of a rather fast control feedback, allowing it to keep the buffer much shorter. As a bonus it offers standardized support for switching sample rates and sample sizes, support for various controls (volume, switching), and lots of other options - all supported by one standard driver. In reality some devices deviate from the standard, and specific quirks have to be coded into the driver.
 
I didn't relate it to USB... you made that connection, and I've been fighting with that confusion ever since, due to the confusingly similar architecture, I imagine ;)

I mentioned it only because of the excellent end result, and because it uses similar techniques to USB and is completely impervious to the effects of different USB cables...

There is no PLL; it's strictly what-goes-in-comes-out as far as sample rate. The data is separated from its incoming clock domain (if it had one) and clocked out of memory using a fresh, local, very high quality clock of the matching clock multiple, 22.1x or 24x (44.1->384kHz). It's only async with respect to the input, not async with respect to the sample rate: it reclocks, it doesn't resample.
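The reclocking idea can be sketched numerically. By common convention (an assumption here, not a statement about this specific FIFO's parts), audio master clocks come in two families: 22.5792 MHz (512 x 44.1 kHz) for the 44.1 kHz multiples and 24.576 MHz (512 x 48 kHz) for the 48 kHz multiples. Reclocking just picks the matching family and an integer divider; the sample values themselves pass through untouched.

```python
# Sketch (common convention, not this specific FIFO's design): picking the
# master clock family and integer divider for a given sample rate, so the
# data can be reclocked without any resampling.

MCLK_44K1 = 22_579_200   # 512 x 44.1 kHz: covers 44.1k, 88.2k, 176.4k, 352.8k
MCLK_48K  = 24_576_000   # 512 x 48 kHz:   covers 48k, 96k, 192k, 384k

def pick_mclk(sample_rate_hz: int):
    """Return (master clock in Hz, integer divider) for a sample rate."""
    for mclk in (MCLK_44K1, MCLK_48K):
        div, rem = divmod(mclk, sample_rate_hz)
        if rem == 0:                     # exact integer relationship only:
            return mclk, div             # reclocking, never resampling
    raise ValueError(f"{sample_rate_hz} Hz is not in either clock family")
```

Because the divider is an exact integer, every output sample edge is derived directly from the local low-jitter oscillator, which is why the output jitter is limited only by the quality of that clock.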


@ CC regarding the TP device

Its i2s output is asynchronous with respect to the input, and it is USB2 while also being UAC2 compliant.
 
I fail to see what useful distinction you are seeking to draw here. "The device sends feedback messages to the host telling it to speed up/slow down the data transmission rate to keep the buffer optimally filled." How is this distinct from "bursty", and how is it "isochronous"?

I am trying to avoid confusion. UAC1 and UAC2 are a different categorization from adaptive and asynchronous: there can be adaptive UAC2 devices (not async), and there are UAC1 async devices. Just clearing up the terminology :)

Bursty and isochronous are basically opposites. In burst (bulk) mode the device tries to transport as much data as possible in one frame. In isochronous mode the amount transported in each frame is kept as constant as possible, each frame transferring only the amount corresponding to the frame's time length. The async feedback causes the amount to fluctuate only a little. The Linux ALSA driver actually uses notions like "current rate" and reports values such as 44050 Hz.
 
OK, I think we're all pretty much on the same page now, sorry for any friction.

Given that it is indeed the case that "the device sends feedback messages to the host telling it to speed up/slow down the data transmission rate to keep the buffer optimally filled" under some regime, regardless of how it is described, then the size of the buffer and the latency could be arbitrarily reduced, with a consequent saving in complexity and cost.

This is how the damn thing should have been organised in the first place and if such an arrangement is not supported under UAC2 then all I can say is 'Why the hell not?'

Were it not (as you correctly surmised, qusp) that I believe any difference to be inaudible, or at least of no consequence, I would write the driver and build the device myself.
 
This is how the damn thing should have been organised in the first place and if such an arrangement is not supported under UAC2 then all I can say is 'Why the hell not?'

The async mode is supported in UAC1 as well as UAC2.

The adaptive mode is supported in UAC1 as well as UAC2.

IOW:

UAC1 supports adaptive as well as async mode.

UAC2 supports adaptive as well as async mode.

Because UAC2 is a refined advancement of UAC1.

I agree the async mode should have been used from the very beginning. But we should realize that most manufacturers use ready-made chips, and there was none implementing async mode until the relatively recent XMOS for UAC2. And manufacturers in general do not want to complicate their lives too much, which coding their own async-mode support would certainly do.

And there is always the question of actual audible difference :)
 