What am I missing (async reclocking)?

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Yeah - they do a nice reclock but then blow it totally by using OP27 in the I/V stage. I can't think of a worse choice at the price :eek:

Hello

No need to use those opamps; I have some OPA627, AD825, and LM4562. Or maybe a good discrete I/V output circuit.

There is the WM8741 DAC, but some find it less musically engaging than the TDA1541.

Thanks

Bye

Gaetan
 
Yes, all of those suggestions are considerably better than the OP27s. They are also being used with no feedback capacitors and with gain-stealing resistors, so I reckon they'll be in slew limiting every time the DAC output updates (every 5.5 µs). If they made those gain-stealing resistors (R3, R103) a little smaller they could have chosen the OP37.
 
I currently listen to an AD1955 that I bought cheap on Taobao and heavily modified. You can find some details on it on my blog. I don't think the DAC chip itself, though, is as important as the circuitry around it - clocking, power supplies and the output stage. I have listened to other DACs - I enjoyed the TDA1541 in NOS (but not with the SAA7220), and don't much care for the TDA1543 used the same way, as it lacks depth.
 
There is an old thread about reclocking here: http://www.diyaudio.com/forums/digital-source/28302-asynchronous-reclocking.html

I have been looking at the DDDAC: DDDAC 2000
It uses a Tent clock, but as far as I can tell it simply feeds this into the CS8414 receiver and ignores the resulting timing differences, which cause the loss of a sample or the repetition of a sample from time to time. Evidently this is considered preferable to jitter.

I also looked at the d-type reclockers. They also have their problems, both when run from clocks close to the transmit clock frequency and clocks unrelated to the transmit frequency.

How does the Arcam work? I can't see the schematic in enough detail to tell.

w
 
How about if one of the packets is corrupted? Then it will be retransmitted at a later time, no?
Real-time streams like audio can never tolerate retransmission delays. USB-Audio uses Isochronous exclusively. Isochronous packets have error detection, but no automatic retry like Bulk packets (but, as I've discussed, Bulk is entirely inappropriate for real-time).

P.S. Isochronous is the common packet type for all USB-Audio, but then the clock master or slave status is determined by whether the audio is synchronous to the USB host or asynchronous. The USB and USB-Audio Specifications are available for free download, but they are not easy to decipher without practical USB Device design experience.
 
The majority of USB audio is implemented as isochronous mode.
All USB-Audio is isochronous, by definition. If you have a USB device which handles audio without isochronous packets, then it is not compliant with the USB-Audio Specification.

A small buffer is used in isochronous mode, but it is a FIFO. The information is not delivered in bursts, although the rate at which it is delivered may fluctuate.
You misunderstand how USB works, particularly Isochronous packets. USB operates on a 12 MHz clock (for Full Speed), and the Isochronous packets actually are delivered in bursts. The bursts are basically the same in High Speed USB except for the 480 MHz clock. The USB host determines when the USB device gets its time slot, which is equally important for ADC or DAC. These isochronous bursts are smoothed out using buffering. I'm not sure why you say "but it is a FIFO," because there is nothing about a FIFO that makes it inferior to any other kind of buffering, i.e., there isn't really any special kind of buffering that is superior to a FIFO. (any buffer that is not equivalent to a FIFO would distort the audio)
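To make the buffering point concrete, here is a toy sketch (pure illustration, not real USB code - the names and numbers are hypothetical) of bursty per-frame packet delivery being smoothed by a FIFO that the DAC drains at a constant rate:

```python
from collections import deque

def smooth_bursts(packets, samples_per_tick):
    """Feed bursty packets into a FIFO and drain it at a steady rate.

    packets: one list of samples per USB frame (burst sizes may vary)
    samples_per_tick: samples the DAC consumes per frame
    """
    fifo = deque()
    out = []
    for burst in packets:
        fifo.extend(burst)            # the burst arrives all at once
        for _ in range(samples_per_tick):
            if fifo:                  # the DAC drains at a constant rate
                out.append(fifo.popleft())
    return out

# Packet sizes vary, but the samples come out in order, two per tick.
assert smooth_bursts([[1, 2, 3], [4], [5, 6], [7, 8]], 2) == [1, 2, 3, 4, 5, 6, 7, 8]
```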

The clock in the DAC must synch with the transmit clock which is controlling the overall bitrate. In this sense, adaptive mode USB is indistinguishable from SPDIF
I hope that nobody here is seriously considering adaptive mode USB-Audio. That technology is why USB-Audio has a deservedly bad name. It is, indeed, no better than SPDIF. A big problem is that consumers cannot "see" whether any particular USB-Audio product is adaptive or asynchronous, which is a serious problem for audiophiles or anyone interested in quality.

I'm talking about a true bursty system where the buffer filling is controlled by the receiver and the download bitrate greatly exceeds the playback bitrate, which I understand is the case with asynch USB, although I haven't dug into it that deeply. If it doesn't break the link between tx and rx clocks though, what's the point?
Again, it seems you're getting confused about clocks - perhaps because there are so many. The clock rate of USB, whether Full Speed or High Speed, has nothing to do with the clock rate of the DAC in asynchronous USB. As far as you're concerned, the USB is writing down the audio samples on punch cards and delivering them by carrier pigeon (although there would be more latency if that goofy analogy were actually implemented). Also, there is no requirement that the bus bitrate exceed the playback sample rate, other than that the usual USB overhead must be accounted for.

Basically, asynchronous USB-Audio changes the number of samples per packet in order to force the media source to match the DAC rate. If the clock rates aligned such that the DAC might otherwise need to skip a sample (e.g. reclocked SPDIF), then async USB-Audio causes an extra sample to be delivered before it's needed. Conversely, if the clock rates aligned such that the DAC might otherwise need to repeat a sample, then async USB-Audio causes a smaller packet to be delivered at the right time. By and large, the isochronous packets have a constant number of samples, but every once in a while there will be +1 or -1 samples in the packet to ensure long-term matching of the media rate to the DAC rate. Asynchronous USB-Audio establishes a second communication link from the DAC back to the USB host, and somewhere in the audio software system there is flow control to make sure that the correct amount of data is delivered without repeating or dropping samples, ever.

So, I guess it goes without saying that the USB bit rate must be at least high enough to allow one or two extra audio samples per isochronous buffer, as needed. Other than that small percent overhead, there is no need for the "download" bit rate to greatly exceed the playback bit rate.
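A rough simulation of that sample-scheduling idea (illustrative only - the real mechanism uses an explicit feedback endpoint from the device back to the host; the function name and numbers here are hypothetical):

```python
from fractions import Fraction

def packet_sizes(dac_rate_hz, frames):
    """Sketch: choose how many samples go in each 1 ms packet so that
    the long-term average delivery rate matches the DAC clock exactly.

    The occasional packet carries one sample more than the others,
    which is the +1/-1 adjustment described above."""
    rate = Fraction(dac_rate_hz)      # exact arithmetic for the sketch
    owed = Fraction(0)
    sizes = []
    for _ in range(frames):
        owed += rate / 1000           # samples the DAC consumes per frame
        n = int(owed)                 # whole samples to send this frame
        sizes.append(n)
        owed -= n
    return sizes

# A DAC clock of 44100.5 Hz: mostly 44-sample packets, occasionally 45.
sizes = packet_sizes(44100.5, 2000)
assert set(sizes) <= {44, 45}
assert sum(sizes) == 88201            # exactly 2 s of audio, nothing dropped
```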
 
You are correct that my understanding of USB Audio Class 2 was and is incomplete, but I did point out that I had not dug into it very deeply.

All USB-Audio is isochronous, by definition. If you have a USB device which handles audio without isochronous packets, then it is not compliant with the USB-Audio Specification.

Isochronous is generally understood in contrast to asynchronous. USB Audio Class 2 is generally characterized as asynchronous, inferring that the data transfer is on an irregular basis as opposed to isochronous which means that data are delivered within time constraints, and synchronous, which means that data must be delivered at a specific time.

In Audio Class 1 adaptive mode the packets are sent willy-nilly and the receiving device has no control; in asynchronous mode the client device controls the transfer and clocks the conversions. Consequently Class 2 is described as asynchronous, and it is petty and misleading to insist that it is isochronous.

You misunderstand how USB works, particularly Isochronous packets. USB operates on a 12 MHz clock (for Full Speed), and the Isochronous packets actually are delivered in bursts. The bursts are basically the same in High Speed USB except for the 480 MHz clock. The USB host determines when the USB device gets its time slot, which is equally important for ADC or DAC. These isochronous bursts are smoothed out using buffering.

No. You misunderstand how the clock is derived in adaptive mode.

The data rate in adaptive USB is entirely dependent on the transmit device. The packets are sent at 1 ms intervals. In adaptive mode the receive device estimates the rate of delivery of bits and synchronises its clock to that. This is the clock synchronisation to which I am referring, and the clock synchronisation which needs to be broken in order to reduce DAC clock jitter.
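As an illustration of that rate estimation (a crude sketch with hypothetical names, not how any particular receiver chip actually does it), the adaptive-mode receiver can average recent packet sizes to estimate the source sample rate it must track:

```python
def estimate_rate(sizes, window):
    """Estimate the source sample rate from the per-millisecond packet
    sizes using a moving average over the last `window` packets."""
    rates = []
    for i in range(len(sizes)):
        recent = sizes[max(0, i - window + 1): i + 1]
        rates.append(1000.0 * sum(recent) / len(recent))  # samples per second
    return rates

# Three 44-sample packets then a 45-sample one nudges the estimate upward.
r = estimate_rate([44, 44, 44, 45], window=4)
assert r[0] == 44000.0
assert r[-1] == 44250.0
```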

I'm not sure why you say "but it is a FIFO," because there is nothing about a FIFO that makes it inferior to any other kind of buffering, i.e., there isn't really any special kind of buffering that is superior to a FIFO. (any buffer that is not equivalent to a FIFO would distort the audio)

Yes, that was a bad choice of words on my part, and reflects a misapprehension. If I had been designing the interface, I would have done it differently. Latency is of minor concern in playback, and minimizing it without eliminating it does not eliminate it as a problem when recording.

(any buffer that is not equivalent to a FIFO would distort the audio)

There is no reason why buffers should not be conventional buffers. While one is being filled at high speed under the control of the transmit clock, the other could be read by the DAC. Yes, this would be equivalent to a FIFO in practice, but would not require the maintenance of pointers as would be the case for a FIFO.
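A toy sketch of that alternating-buffer scheme (hypothetical, just to show the equivalence): while one buffer is filled from the link, the other is drained by the DAC, and the output is identical to what a FIFO would produce:

```python
def ping_pong(bursts):
    """Two conventional buffers: fill one while the DAC reads the other,
    then swap. The output order is exactly that of a FIFO."""
    fill, play = [], []
    out = []
    for burst in bursts:
        fill.extend(burst)        # receiver fills this buffer at link speed
        fill, play = play, fill   # swap roles
        out.extend(play)          # DAC drains the freshly filled buffer
        play.clear()
    return out

assert ping_pong([[1, 2], [3, 4, 5]]) == [1, 2, 3, 4, 5]
```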

I hope that nobody here is seriously considering adaptive mode USB-Audio. That technology is why USB-Audio has a deservedly bad name. It is, indeed, no better than SPDIF.

Your advice about adaptive mode is hardly timely.

All current USB DACs and USB->SPDIF devices (such as the Hiface, a very well regarded audiophile choice), except for those specifically advertised as asynchronous mode devices, are adaptive mode. Asynchronous devices have only appeared in the last year or so; there is currently no Microsoft driver available for Class 2, only proprietary drivers. Even under Linux the driver is not fully functional.

A big problem is that consumers cannot "see" whether any particular USB-Audio product is adaptive or asynchronous, which is a serious problem for audiophiles or anyone interested in quality.

I hold no candle for audiophiles myself, but if you imagine that they do not know the distinction you are sadly mistaken.

Again, it seems you're getting confused about clocks - perhaps because there are so many.

No more so than you. Your understanding of adaptive mode is far from perfect. I'll point out some other confusion on your part, if you like.

Basically, asynchronous USB-Audio changes the number of samples per packet in order to force the media source to match the DAC rate.

Yes, unfortunately it seems that once again a simple and robust playback system has evaded the designers.

If the clock rates aligned such that the DAC might otherwise need to skip a sample (e.g. reclocked SPDIF), then async USB-Audio causes an extra sample to be delivered before it's needed. Conversely, if the clock rates aligned such that the DAC might otherwise need to repeat a sample, then async USB-Audio causes a smaller packet to be delivered at the right time.

I think you’ve got a couple of things backwards (confused) here.

The clock rate of USB, whether Full Speed or High Speed, has nothing to do with the clock rate of the DAC in asynchronous USB.

Yes, that’s what I said.

there is no requirement that the bus bitrate exceed the playback sample rate, other than that the usual USB overhead must be accounted for.

…the USB bit rate must be at least high enough to allow one or two extra audio samples per isochronous buffer, as needed.

So which is it?

w

To be fair, I said this:

I'm talking about a true bursty system where the buffer filling is controlled by the receiver and the download bitrate greatly exceeds the playback bitrate, which I understand is the case with asynch USB, although I haven't dug into it that deeply. If it doesn't break the link between tx and rx clocks though, what's the point?

I was mistaken as to how asynchronous mode operates, to a degree.
 
Isochronous is generally understood in contrast to asynchronous. USB Audio Class 2 is generally characterized as asynchronous, inferring that the data transfer is on an irregular basis as opposed to isochronous which means that data are delivered within time constraints, and synchronous, which means that data must be delivered at a specific time.

In Audio Class 1 adaptive mode the packets are sent willy-nilly and the receiving device has no control; in asynchronous mode the client device controls the transfer and clocks the conversions. Consequently Class 2 is described as asynchronous, and it is petty and misleading to insist that it is isochronous.
What I wrote is not misleading in the least, because it is 100% accurate. There are only 4 types of transfers defined for USB: Control, Bulk, Interrupt, and Isochronous. You cannot have a USB Device without using 1 or more of those 4 - there are no other possibilities. As I said, all USB-Audio Class Devices are Isochronous. On top of the basic Isochronous transfer, USB-Audio implements Asynchronous, Synchronous, and Adaptive protocols. These terms are all defined clearly in the USB Specifications.

By the way, the correct word is 'implying' instead of 'inferring' ... unless USB Audio Class 2 is a sentient life form.

No. You misunderstand how the clock is derived in adaptive mode.
I made no statements at all about adaptive mode other than to say that I hope nobody is excited about discussing it in a topic with 'async' in the title. Adaptive is Off Topic, and I'm sorry that you did not understand my sense of humor.

There is no reason why buffers should not be conventional buffers. While one is being filled at high speed under the control of the transmit clock, the other could be read by the DAC. Yes, this would be equivalent to a FIFO in practice, but would not require the maintenance of pointers as would be the case for a FIFO.
There is no requirement for maintenance of pointers with a FIFO. There have been hardware FIFO chips since the eighties, e.g., in the Mac II audio hardware. Again, you seem to be making blanket statements about technology where your understanding is limited.

All current USB DACs and USB->SPDIF devices (such as the Hiface, a very well regarded audiophile choice), except for those specifically advertised as asynchronous mode devices, are adaptive mode. Asynchronous devices have only appeared in the last year or so; there is currently no Microsoft driver available for Class 2, only proprietary drivers. Even under Linux the driver is not fully functional.
Lack of drivers in those operating systems has no effect on the specifications. Besides, the title of this topic establishes Asynchronous Audio as our subject.

I hold no candle for audiophiles myself, but if you imagine that they do not know the distinction you are sadly mistaken.
Considering the incredible difficulty that you're having with the concepts, why is it that you think audiophiles would have any better understanding?

No more so than you. Your understanding of adaptive mode is far from perfect. I'll point out some other confusion on your part, if you like.
Is that a personal threat? You are hilarious, sir. Either you have not read all of the relevant USB documents, or if you did then you did not understand them. At the very least, it is clear that you have no practical experience with the actual details of USB operation. I have designed 5 commercial USB products, including schematic design, board layout, and firmware development. If you think I am confused, then I would find it highly entertaining to see you try and point out any example.

Yes, unfortunately it seems that once again a simple and robust playback system has evaded the designers.
Exactly what is wrong with asynchronous USB-Audio? It could be simpler if it were designed like FireWire Audio, but it's quite simple enough as it is. Perhaps not simple enough for Microsoft to get it right, but it would not be the first USB Specification that they failed miserably to implement correctly.

So which is it?
You seem to be unaware that audio requires hundreds to thousands of bytes per frame. Adding one or two audio samples in asynchronous mode amounts to much less than a 0.6% change. The normal overhead of USB is enough to cover that without requiring anything additional. Besides, an Isochronous endpoint is not allowed to allocate more than about 68% of the USB bandwidth, so quibbling about overhead semantics is pointless.

To be fair, I said this:

I'm talking about a true bursty system where the buffer filling is controlled by the receiver and the download bitrate greatly exceeds the playback bitrate, which I understand is the case with asynch USB, although I haven't dug into it that deeply. If it doesn't break the link between tx and rx clocks though, what's the point?
I remember exactly what you wrote. I made the specific point that your characterization: the "download bitrate greatly exceeds the playback bitrate" is confused, because that is hardly the case.

What you tried to describe is already available in asynchronous USB-Audio, but without the excessive requirements that you imagined. You had not taken the time to research the topic thoroughly, so I tried to be helpful by explaining how it actually works. I hope you will complete your research before you write another lengthy reply, and perhaps start another topic if you really want to discuss the finer details of adaptive audio.
 
What I wrote is not misleading in the least, because it is 100% accurate. There are only 4 types of transfers defined for USB: Control, Bulk, Interrupt, and Isochronous. You cannot have a USB Device without using 1 or more of those 4 - there are no other possibilities. As I said, all USB-Audio Class Devices are Isochronous. On top of the basic Isochronous transfer, USB-Audio implements Asynchronous, Synchronous, and Adaptive protocols. These terms are all defined clearly in the USB Specifications.

Ralph Waldo Emerson said:
A foolish consistency is the hobgoblin of little minds

You are correct: I should have said "implied". However, it is mean-spirited to draw attention to this in these circumstances.

Your points are trivial. You have demonstrated ignorance of the devices currently in the marketplace. You point out more than one feature as though it were a revelation, when it is a commonplace, e.g.

There is no requirement for maintenance of pointers with a FIFO. There have been hardware FIFO chips since the eighties, e.g., in the Mac II audio hardware.

So what? This is disingenuous. A hardware FIFO must rotate data or rotate addresses or maintain pointers. All these things are equivalent, even if invisible to the user, and they have a cost.


You seem to be unaware that audio requires hundreds to thousands of bytes per frame.

Hundreds to thousands? Well I never!

Do you really imagine that any of the correspondents in this thread are incapable of simple arithmetic?

Again, you seem to be making blanket statements about technology where your understanding is limited.

Please confine yourself to the issues.

Lack of drivers in those operating systems has no effect on the specifications.

Nobody said it affected the specifications. What I said was that you have an incorrect view of the numbers and qualities of systems already deployed.

Considering the incredible difficulty that you're having with the concepts, why is it that you think audiophiles would have any better understanding?

I'm not having incredible difficulties. If your argument were stronger you would not resort to such ad hominem asides. Audiophiles certainly have a better understanding than you credit them with. But then you seem to have a low opinion of everybody except yourself.

I would find it highly entertaining to see you try and point out any example.

I have pointed out some inconsistencies in what you have posted. You have ignored them in your haste to attack me. You should acknowledge your errors, as I have.

If the clock rates aligned such that the DAC might otherwise need to skip a sample (e.g. reclocked SPDIF), then async USB-Audio causes an extra sample to be delivered before it's needed.

If the DAC needs to skip a sample, i.e. miss out a sample, then the last thing the system needs is for an extra sample to be delivered.

Conversely, if the clock rates aligned such that the DAC might otherwise need to repeat a sample, then async USB-Audio causes a smaller packet to be delivered at the right time.

If the DAC needs to repeat a sample, then there is a missing sample, and you are suggesting that the system would respond by delivering fewer samples.

Exactly wrong. Twice. I hope you found that as entertaining as you expected.

I tried to be helpful by explaining how it actually works.

No you didn't. You have gone out of your way to be offensive.

I said:

I'm talking about a true bursty system where the buffer filling is controlled by the receiver and the download bitrate greatly exceeds the playback bitrate, which I understand is the case with asynch USB, although I haven't dug into it that deeply. If it doesn't break the link between tx and rx clocks though, what's the point?

So in the asynchronous system as I have now taken the time to discover:

1 The system is bursty, by your insistence
2 The buffer filling is controlled by the receiver
3 The download rate greatly exceeds the playback rate, otherwise the system could not be described as bursty.

I have acknowledged your point. I made an inaccurate guess as to how the system worked based on what I wanted from it and the cheapest and simplest way of achieving that. You act as though I had committed lèse-majesté. I disagreed with you; I did not offer you any personal slight. I wish I could say the same for you. Evidently to disagree with you is to offend you.

w
 
If the DAC needs to skip a sample, i.e. miss out a sample, then the last thing the system needs is for an extra sample to be delivered.

If the DAC needs to repeat a sample, then there is a missing sample and you are suggesting that the system would respond by delivering less samples.

Exactly wrong. Twice. I hope you found that as entertaining as you expected.
You got me! Yes, that was entertaining, but I think it was I who provided the entertainment by goofing my examples. But you promised to find fault in what I had written earlier, so I don't think it counts.

As to the rest of your points, I will try to summarize. You accuse me of being mean spirited after you threatened to attack me in an earlier message. In several instances you tried to segregate certain technologies as being different, but when I point out your mistakes you try to lump everything into one big general category to avoid being wrong. You ask me to confine myself to the issues, and yet you insist on discussing adaptive protocols in a thread titled 'async' - why can't you just start a new thread to discuss adaptive? My biggest complaint about your responses is not that you keep getting the USB terminology wrong, but that you present your view as if it were authoritative. Call me pedantic, but I have to correct those kinds of mistakes to keep false statements about USB from going unchallenged. I don't even blame you for getting it wrong, since you probably think what you think because you read it somewhere else where such statements were not challenged.

Basically, for a thread about "async reclocking," we sure seem to be off topic.
 
Basically, for a thread about "async reclocking," we sure seem to be off topic.

Yeah, asynchronous reclocking of SPDIF. I think we were working towards the conclusion that the existing schemes are no good.

I've got an idea. It uses 3 or more conventional buffers.

As data starts to arrive at the DAC it is buffered in a conventional memory. The receiver estimates the rate at which the data is arriving as follows.

A VCXO is synched to the incoming clock using a PLL. The receiver then sets the local clock to a rate very slightly slower than the data rate by recording the VCXO control voltage, disabling the PLL, and applying an offset control voltage to the VCXO.

Once data starts to be clocked into the second buffer, the local clock starts to clock data out of the first buffer to the DAC. By the time the first buffer is empty the data is being clocked into the third buffer, and before the third buffer is full the first buffer is empty again and available for more data.

If the clock rate differential and the buffers are suitably dimensioned, an arbitrary play length can be accommodated. The play length can be increased by increasing the size or the number of buffers.
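The sizing arithmetic behind that dimensioning can be sketched like this (hypothetical numbers; because the local clock runs slower by a fixed offset, surplus samples accumulate in the buffers at a known rate):

```python
def max_play_seconds(total_buffer_samples, rate_hz, offset_ppm):
    """How long playback can run before the buffers overflow, given a
    local clock offset_ppm slower than the incoming data rate.

    Surplus accumulates at rate_hz * offset_ppm / 1e6 samples per second."""
    surplus_per_second = rate_hz * offset_ppm / 1e6
    return total_buffer_samples / surplus_per_second

# Three 8192-sample buffers at 44.1 kHz with a 100 ppm offset:
# surplus grows at 4.41 samples/s, so roughly 5570 s (~93 min) of play.
seconds = max_play_seconds(3 * 8192, 44100, 100)
assert seconds > 5000
```

Halving the offset (or doubling the buffering) doubles the playable length, which is the trade-off the post describes.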

w
 
That's a variation on what Dan Lavry suggests in one of his white papers. It's fine assuming we know what a suitable size of offset to apply to the VCXO is, and it's an audio-only application which doesn't care about lip sync. Lavry's solution uses a micro to update the VCXO voltage on a long time scale - I can't recall how long, but the fastest I think was every 10 s. Doing this means smaller buffers and no loss of lip sync in AV applications.
 
Actually I found Dan Lavry.

It occurs to me that you could monitor the buffer state and shift your local clock alternately faster and slower than the incoming clock as it approaches overflow and underflow. This'd be unlikely to be heard, and it wouldn't be at all difficult to arrange.
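A quick simulation of that bang-bang idea (hypothetical thresholds and rates, purely illustrative) showing that the buffer occupancy stays bounded when the local clock alternates between slightly slow and slightly fast:

```python
def simulate_bang_bang(ticks, fill_per_tick, low, high, start):
    """Toggle the drain (DAC) rate between one sample per tick slower and
    one faster than the fill rate, switching near underflow and overflow."""
    occupancy = start
    drain = fill_per_tick - 1          # start with the slow local clock
    history = []
    for _ in range(ticks):
        occupancy += fill_per_tick - drain
        if occupancy >= high:
            drain = fill_per_tick + 1  # clock fast: buffer drains
        elif occupancy <= low:
            drain = fill_per_tick - 1  # clock slow: buffer refills
        history.append(occupancy)
    return history

h = simulate_bang_bang(1000, 48, low=10, high=90, start=50)
assert 10 <= min(h) and max(h) <= 90   # occupancy never under- or overflows
```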

w
 
You don't even need a PLL, in fact. You just run the VCXO control voltage off a DAC; once you know the control signal gain (Hz/volt) you can adjust the local clock frequency pretty close on your first estimate of the incoming clock speed. I can probably build the whole thing in a couple of CPLDs as a pair of interlocking finite state machines, without even a micro.

Does anybody know if you get a clock from an SPDIF receiver when the link is idle?

w
 