Ping: John Curl. CDT/CDP transports

A good DAC receiver will remove transport jitter. I thought we had put that to bed?

Listening experience is based on real world gear. Specific DACs combined with specific transports.
So it's possible to "put" anything "to bed", but does it help with respect to real-life listening experiences?
"Removing jitter" is a very questionable claim for usual DACs (even the good ones); as stated before, it depends on the jitter transfer characteristic, which will have a corner frequency, a slope and an attenuation, and might even show a slight gain somewhere around the corner frequency (which is why the standards set a limit on that).

For practical reasons (or convenience) the best jitter performance (i.e. removing incoming jitter entirely) isn't available, as otherwise listeners might have to wait a couple of seconds after starting a track before the music begins, and/or the DAC would have to react to the same commands as the transport, which is not so hard to achieve nowadays for remote control but is more difficult for direct switching at the transport.

But in any case I usually don't have the luxury to "put" something "to bed" before having confirmed the fact by measuring/testing.
Please keep in mind that older gear doesn't disappear just because "we" know better now.....


Not like you to put forward an untested 'belief'.

Maybe I don't get the phrase "I don't see how..." right, as I thought it means "I don't know of any mechanism by which this could happen", hence I provided another point of view.
Usage of "think" should (as I thought) make clear that there isn't enough experimental evidence up to now to use "know that".....
 
For practical reasons (or convenience) the best jitter performance (i.e. removing incoming jitter entirely) isn't available, as otherwise listeners might have to wait a couple of seconds after starting a track before the music begins, and/or the DAC would have to react to the same commands as the transport, which is not so hard to achieve nowadays for remote control but is more difficult for direct switching at the transport.

A 'couple of seconds'?

Are CD players still that slow?

I haven't used one in anger for over ten years but I do use a transport (computer HD) and a DAC (a 12 channel A-D D-A convertor) connected by a 5m Firewire cable.
I assume that there is no transfer related jitter since the DAC reads into a buffer and then clocks its own data from there when needed.
I can set the size of that buffer in number of samples and I use 256.
Thus I would assume that the maximum time difference between data leaving the HD and the same data being converted to analogue is 256/44,100 of a second.
Sounds pretty instant to me when I press play.
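
As a quick sanity check of that figure, here is a minimal sketch (Python; the 256-sample buffer and the 44.1 kHz rate are taken from the post above):

Code (Python):
# Worst-case extra latency added by a FIFO that is filled with
# buffer_frames sample frames before the DAC clocks them back out.
buffer_frames = 256       # buffer size set on the interface
sample_rate = 44_100      # CD sample rate in Hz

latency_ms = buffer_frames / sample_rate * 1000
print(f"buffer latency: {latency_ms:.2f} ms")   # ~5.80 ms, effectively instant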
 
Hi Charles,
No, there is a time delay. I think they clock 2048 bytes into memory before the data is released from the DSP. When you reclock (and remove jitter), it would be a similar process. Remember that processing time is needed for the error correction process and any math that takes place (interpolation) before anything can hit the D/A converter.

Of course, when you press the stop, skip or pause keys, the process simply terminates. When play is resumed, it follows the original plan and a delay occurs.

There are some machines that use a CD-ROM drive to read in an entire disc and play it from memory. Play begins in a similar time frame to a 1X-speed player, but it's not long before the reading-in of data is far ahead of what you can hear. What's interesting is that those machines are effectively ripping the CD; copying the disc is simply a matter of storing the data on physical media after it hits the memory, whereas these machines just dump the memory once play time is over. That would be when another disc is loaded, or the unit is turned off.

-Chris
 
A 'couple of seconds'?

Are CD players still that slow?

I haven't used one in anger for over ten years but I do use a transport (computer HD) and a DAC (a 12 channel A-D D-A convertor) connected by a 5m Firewire cable.
I assume that there is no transfer related jitter since the DAC reads into a buffer and then clocks its own data from there when needed.
I can set the size of that buffer in number of samples and I use 256.
Thus I would assume that the maximum time difference between data leaving the HD and the same data being converted to analogue is 256/44,100 of a second.
Sounds pretty instant to me when I press play.

The context was using a CD transport with an external DAC; if you assume S/PDIF (or AES/EBU) as the link, then you are bound by the standards. Of course it depends on which revision of the standard you want to fulfil, but for maximum compliance you have to remember the old days, where 50 ppm clock variance was considered high precision and 1000 ppm was considered normal.

billshurv assumed "removing of jitter", which means the DAC with its sampling clock is totally isolated from the source. So for real-time CD replay you have to consider the maximum clock spread between the source (delivering the data to the FIFO) and the internal DAC clock (reading the data from the FIFO).

To guard against overflow or underrun during a complete CD replay you need a minimum buffer size and a minimum fill level, and reaching that fill level takes a couple of seconds before replay starts. And if the DAC does not respond to the stop switch, music will still play for a couple of seconds after the transport has stopped.
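
To put rough numbers on that, here is a minimal sketch (Python); the 1000 ppm clock tolerances and the 74-minute disc length are worst-case assumptions for illustration, not measurements of any particular transport or DAC:

Code (Python):
# Drift between a free-running DAC clock and the transport clock over one
# full disc, and the FIFO pre-fill needed so the buffer never runs dry.
ppm_transport = 1000           # assumed worst-case transport clock error
ppm_dac = 1000                 # assumed worst-case DAC clock error
disc_seconds = 74 * 60         # roughly a maximum-length disc
sample_rate = 44_100

relative_error = (ppm_transport + ppm_dac) * 1e-6    # clocks pulling apart
drift_seconds = disc_seconds * relative_error        # ~8.9 s over the disc
prefill_frames = int(drift_seconds * sample_rate)    # headroom against underrun

print(f"worst-case drift over the disc: {drift_seconds:.1f} s")
print(f"FIFO pre-fill needed: {prefill_frames} frames, plus as much again against overflow")

With 50 ppm parts at both ends the same arithmetic gives roughly 0.4 s; somewhere between those two figures lies the "couple of seconds" of pre-fill before replay starts, and the same amount of music left in the buffer after the transport stops.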
 
But wouldn't the first order of the day be to somehow determine that yes, there is indeed a difference, repeatable and all that?

Repeatable problems were the easy ones. The infrequently occurring intermittent ones were the hardest, and tended to be very frustrating for everyone involved. If something only ever occurred once, we might be able to write it off after some reasonable investigation. Once it happened a second time, there was no end to the pressure to find it. In some lines of work, something only needs to happen once, and the investigation may never end. Think of old murder cases that are solved years later, for example. At least for me, once a system was permanently retired from service, there was no need after that to worry about when it might do something bad again.
 
Just had a look, the minimum buffer size on my DAC is 32 samples.
Apparently that is to reduce latency when recording, but it is not all that reliable when using all 24 input and output channels, and I haven't tried that setting when using 192 kHz SR.
In fact I've never used 192 at all.
As recommended I use the DAC as my masterclock.
 
The context was using a CD transport with an external DAC; if you assume S/PDIF (or AES/EBU) as the link, then you are bound by the standards. Of course it depends on which revision of the standard you want to fulfil, but for maximum compliance you have to remember the old days, where 50 ppm clock variance was considered high precision and 1000 ppm was considered normal.

I've never read the spec, but 50 ppm is a non-precision generic spec where I come from, and was in the 80s too; the buffer needed for 50 ppm is only short. Not that it would matter anyway, surely - quality audio is worth a brief wait! 😀
 
Another thing that I observed many times was that when users would report problems, engineers familiar with the equipment design would be completely sure that the problem was with the users and not the equipment. It just couldn't happen, they would say. In fact, it turned out to be the default response of the engineers whenever a problem was reported only once for serious problems, or a few times for less serious problems. It had to be caused by anything but their design, because their design couldn't possibly do that. Sometimes it turned out to be the design after all, and finding it ended up being a pull-out-all-the-stops engineering challenge. If trying to find the cause failed, the system would have to be redesigned. Something similar to the situation with the lithium batteries in Boeing 787s, in the latter respects.

I learned to take the users seriously.
 
(1) The Red Book spec presented earlier (don't remember who posted it) called for a 2048 x 8-bit buffer. This is what was present in the Sony chip sets I looked up and presented.

At 44,100 Hz, filling the entire buffer would only take 0.186 seconds.

There is probably more delay from the time one pushes the play button until the mechanism has spun up and started tracking than it takes to fill the buffer.

(2) There is no processing time for error correction and interpolation. This has been explained in detail by others.

It is done in a hardware gate array that performs the logic at each clock edge plus propagation delay (measured in ns).
 
Hi TheGimp,
However small, even hardware takes some time to execute and make decisions. I don't have any idea how long that could take, so you might be correct comparing it to the time taken to fill the buffer. Somewhere in the DSP, it has to determine that the original data can't be saved (taking a worst case and best sounding system), then "look" at surrounding values to create a mid-range number to insert. The normal flow of information through the DSP must allow this process to happen in parallel and have time for everything to execute, then modify the data before it heads out to the D/A chips. Like I said, I don't know how long this might take, but the number might surprise even you. The "Red Book" will not have any spec for this as it falls outside of the scope of that document. You're going to have to either measure this directly, or you're going to have to talk directly to an engineer who designed these systems and can remember the details. One thing I am pretty sure of is that it represents a fraction of the total time required assuming the data goes from the DSP into memory and back, then directly to the D/A converter. I just don't know how long, and I'll admit that.

-Chris
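
As a rough illustration of the concealment step Chris describes, i.e. replacing a sample the decoder has flagged as uncorrectable with a "mid-range number" taken from its neighbours, here is a minimal sketch (Python, with made-up sample values; real players do this in dedicated logic, not in software):

Code (Python):
def conceal(samples, bad):
    """Replace flagged samples with a value interpolated from good neighbours."""
    out = list(samples)
    for i, is_bad in enumerate(bad):
        if not is_bad:
            continue
        left = out[i - 1] if i > 0 else None
        right = samples[i + 1] if i + 1 < len(samples) and not bad[i + 1] else None
        if left is not None and right is not None:
            out[i] = (left + right) // 2     # mid-range value from the neighbours
        elif left is not None:
            out[i] = left                    # hold the last good value
        elif right is not None:
            out[i] = right
        else:
            out[i] = 0                       # nothing usable: mute
    return out

# Made-up burst error in the middle of a ramp:
print(conceal([100, 200, 0, 400, 500], [False, False, True, False, False]))
# -> [100, 200, 300, 400, 500]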
 
Chris -- "no processing time for error correction and interpolation" is better interpreted as "no additional processing time", as the pipeline is built such that the resulting PCM stream will arrive on time, every time.

Which means that every bit coming off the CD is subject to the same amount of delay (from all the attendant steps you elucidate), which is predominantly fixed by the buffer size (plus the ever-so-minute amount of time to get it off the buffer).

Semantics perhaps, but hopefully brings everyone into alignment. 🙂
 
Hi Fred,
That's what I figured. But the overall delay must have been designed to take that time into account so it could process in parallel. So while there is no additional processing time due to that process, the main flow probably has some delay built in to accommodate that.

-Chris
 
Chris, it sounds to me like you are thinking in terms of a processor performing tasks where a decision is made when a flag is set.

Consider a system in which the data comes in and feeds a pipeline. The Reed Solomon logic processes the data and generates an output which is clocked into an output latch.

The flag generated by one stage of the pipeline feeds the next stage as an input with the data to determine the behavior of that next stage.

Interpolation is done in the last stage if all the previous stages fail to correct the data error.

Each block of data is shifted serially from the CD and loaded in parallel into the decoder. Once loaded, the data is clocked in. The next block of data is converted from serial to parallel and set up at the input. While this is going on, the first block is propagating through the logic and the output is ready to be clocked into the next stage of the R/S decoder. When the second block of data is clocked into the decoder, the first block's results are latched into the second stage of the decoder. This sequence continues until the first block of data has completed conversion through successive stages and the output is complete, which may include interpolation.

The parallel output is moved by the microprocessor to the buffer to feed the output parallel to serial converter.

The microprocessor is there to move data, but does no decoding of the actual data. That is done by the dedicated logic block which performs the Reed Solomon decoding.
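
A software caricature of that pipeline may help (Python; the stage names and correction thresholds are purely illustrative, not the actual Sony decoder). Each stage hands its successor both the data and an error flag, interpolation only acts in the last stage if the flag is still set, and every block emerges the same fixed number of clocks after it went in, whether it needed correction, concealment or nothing at all:

Code (Python):
def c1_decode(block):
    data, errs = block
    # First Reed-Solomon stage: correct small errors, otherwise flag the block.
    return (data, 0) if errs <= 2 else (data, -1)

def c2_decode(block):
    data, errs = block
    # Second stage sees the flag from C1 along with the data; here it simply
    # passes an uncorrectable block on unchanged.
    return (data, errs)

def interpolate(block):
    data, errs = block
    # Last stage: conceal only if every previous stage failed.
    return data + " (interpolated)" if errs == -1 else data

stages = [c1_decode, c2_decode, interpolate]
regs = [None, None, None]                 # latches between the stages

def clock_edge(new_block):
    """One clock: every stage works on its own block in parallel, then latches."""
    global regs
    output = stages[2](regs[2]) if regs[2] is not None else None
    regs = [new_block,
            stages[0](regs[0]) if regs[0] is not None else None,
            stages[1](regs[1]) if regs[1] is not None else None]
    return output

# (data, raw error count); "blk2" is too damaged for the R-S stages to fix.
for b in [("blk0", 0), ("blk1", 1), ("blk2", 9), ("blk3", 0), None, None, None]:
    out = clock_edge(b)
    if out is not None:
        print(out)
# Prints blk0, blk1, blk2 (interpolated), blk3 - each exactly three clocks
# after it entered the pipeline, so the output timing never changes.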
 
Hi TheGimp,
it sounds to me like you are thinking in terms of a processor performing tasks where a decision is made when a flag is set.
Nope. Hardware still takes time to execute. I am expecting parallel execution through its own pathway. This number of steps will still take longer to execute than the normal "everything is okay" pathway. And yes, everything is clocked through or you would end up with a logic race condition (= garbage at the output). Given the increased number of steps for possible error correction / replacement, the normal path has to be designed with worst-case timing, with data being interpolated. So there will be no-op steps built in to delay the stream so that corrected data meets up and can be inserted (a simple switch off the error flag) just in time. Everything has to run at the time of greatest delay so your output hides any problems. There can't be any extra wait states at the data output to the rest of the world. In this way, flags can arrive sooner than the data so that the rest of the circuitry routes it properly.

Parallel execution at the rate of the slowest path.

-Chris
 
Hi TheGimp,
Probably not so much, but at that time the DSP was commonly broken up between two chips, with a separate memory IC as well. I got in right on the ground floor and did a lot of business. Back then, pretty much every function lived on its own bit of silicon. Even the clocks were each tunable, which made things difficult when working on a Sony, for example. They should have realized that some of the rates were related and could be generated from one single clock - which they eventually did do. Even the slice level was adjustable. Discs had to be within the Red Book standard. Reflectivity of the CD was a big problem back then, and pit shape too. Those affected the slice level.

-Chris
 
I worked on PLCs in the early 80s, including support on the Texas Instruments 5TI which was designed in 1974. I mostly worked on TI-520/530 and later TI500 series I/O.

I think the 5TI sequencer is at about the same level of complexity as the R/S decoder, although the sequencer was a sequential Boolean state machine with lower pipeline complexity.

Texas Instruments 5TI - The Vintage Technology Association

This was a deterministic system, so I don't see why the R/S decoder in a transport wouldn't be.
 