Let's try this one more time.... on the record side of things.... not playing back a CD.
Is there anything which can change the data before it gets SR encoded?
yes or no.
TTHx-RNMarsh
Reality notwithstanding. I see SY and Chris are wrong because they are not in the right posse. Did you ever think to find out how many CU (uncorrectable errors) occur in normal use of a decent CD player?
I gather it is infrequent and interpolations are rarely needed. But that isn't my concern.....
I am concerned about the signal before it gets SR-ified. After it has been encoded, it is tough to get errors. I get that.
-RNM
Then why did you say this? That comment has only to do with error correction after the fact. I give up; you need to do your own research. IMO someone said something that sounded like something you wanted to hear.
Good stuff, stvharr. THx. Makes a lot more sense than "it comes out exactly as the input intended." That sounded like magic.
It does come out exactly as the input was intended. Mucking up the input has nothing to do with the transport medium, and none of this has anything to do with one CD player differing from another; both see the same mucked-up input if that is the case.
Nope. I guess because this forum is something about transports... there are some assumptions about my question. I'll try else where.
THx-RNMarsh
The "bit perfect" folks get this bit wrong every time.
No, not at all; you are mixing things up. Sure, there are uncorrectable errors, and the drive will try to conceal them by interpolation (which is probably why that process is called error concealment).
Uncorrectable errors typically occur much less than once per CD; most discs just don't have them at all. When one happens, it is clearly audible, but it's a transient type of thing.
Correctable errors, those that still give perfect bits, are much more frequent. Many years ago I built a kit that counted them, and I had CDs with scratches that actually showed thousands of errors within a few seconds! And all perfectly corrected and inaudible.
Jan
I am concerned about the signal before it gets SR-ified. After it has been encoded, it is tough to get errors. I get that.
-RNM
Richard, I am no expert on that, but I gather that during the mastering process the music data gets packed within all the frames, titles and preambles, packetized, and the error correction data added, resulting in a file that can be put on a CD.
I guess you are concerned about errors in that process, because they would cause the file on the CD to be in error to begin with.
I assume that similar error correction protocols would be used to ensure the source is correct. How do they do it when generating a software distribution? There, too, faultless sources are required.
But maybe someone else knows about this?
Edit: the sourcing work takes place within a computer environment, of course, which is pretty much perfect. CD errors are a result of the imperfect mechanics of the mechanism and carrier material. The sourcing process is vastly more reliable and correct, it seems.
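The software-distribution analogy can be made concrete: the publisher posts a cryptographic digest next to the file, and the receiver recomputes it over their copy. A minimal sketch using Python's standard `hashlib` (the byte strings here are placeholders, not real master data):

```python
import hashlib

# Hypothetical stand-in for a master image; in practice these would be
# the bytes read from the source and destination files.
master = b"example master data"

# Publisher computes and publishes the digest alongside the file.
published_digest = hashlib.sha256(master).hexdigest()

# Receiver recomputes the digest over the received copy and compares.
received = b"example master data"
ok = hashlib.sha256(received).hexdigest() == published_digest
print(ok)  # True only if the copy is bit-identical
```

A single flipped bit anywhere in the file changes the digest completely, which is why this catches sourcing errors that would otherwise end up pressed onto every disc.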
Jan
lets try this one more time.... on the record side of things.... Not playing back a CD.
Is there anything which can change the data before it gets SR encoded?
yes or no.
TTHx-RNMarsh
No
Software has even better error correction than Red Book.
RNMarsh said: "Re CD player; those data which is being compared for possible errors."
No data comparison takes place. When you first asked the question I assumed you must be talking about some comparison between the original input data and the recovered output data for data integrity checking. I now see that you are falling into the same misunderstanding as others.
RNMarsh said: "Simple question was.... how do you know the actual 'input' data which will be used and compared with the output for errors is in itself accurately represented?"
Completely meaningless question - no comparison takes place.
Waly said: "Not a single CPU cycle is wasted for correcting the encoded data, according to the Reed-Solomon algorithm. The right output is always spit out of the decoder in exactly the same amount of time (or the same number of clock pulses, if you prefer)."
Precisely. People seem unable to grasp this. Perhaps they think there is a little man inside the CD player; every few seconds he says "Dang! Another batch of bad data - now I have to go off and do error recovery. I hate that; it's much easier when the data is correct and I just have to pass it on."
RNMarsh said: "I know that. I am - again - saying that the signal (data) BEFORE it is encoded can be inaccurate in some way before SR is applied."
So you are talking about the original input data. The only way to check that the input data is correct is to compare it with where it came from, but how do we know that is correct? At some point you have to assume that something is working correctly and just make sure that you don't corrupt the output. Strangely, it is possible to transfer data from here to there along a wire without corrupting it, both for audio and for much less demanding applications such as defence, IT, medicine, aviation, metrology etc.
Nothing whatsoever to do with CD players, of course.
stvnharr said: "From Post #32 in the above thread: 'The fact remains, the instant the C2 flag is raised (any of your Ex.2 stages), the original data has been marked as bad; it's over for that packet. The C2 "corrector" now does its best to send data packets out that have a good CRC so the DAC can make an intelligent output level. But the data is no longer what was on the CD. All it has going for it is a data packet with a matching (good) CRC bit. The data is valid, not what was on the disc.'"
If I understand that correctly, what he calls C1 is what everyone else calls error correction, i.e. decoding. His C2 is what everyone else calls interpolation - which happens rarely. Or is he confused and so confusing me?
The "bit perfect" folks get this bit wrong every time. But I can see where the confusion comes from.
-Chris
Strangely, it is possible to transfer data from here to there along a wire without corrupting it for both audio and the much less demanding applications such as defence, IT, medicine, aviation, metrology etc.
It really requires very little contemplation. For instance, do you realize what convoluted pathways the data travels, and how much chopping up and gluing back together it undergoes, before it ends up as posts on this forum on your screen?
Yet, how often have you seen this forum misspell your username? 😎
Jan
http://cloversystems.com/wp-content/uploads/2014/04/DVX-4-Manual-v3.1.pdf
Skip to page 36 for info on C1, C2 and the limits of error correction.
Mark Whitney said: "Skip to page 36 for info on C1, C2 and the limits of error correction."
Interesting that it says some earlier players had less effective error correction than later ones, and that there is still some variation. This would mean that a less effective modern player playing a bad disc might be forced to use interpolation more often than a better player, although both would be equally good on a good disc. Given the low price of silicon, it seems surprising that anyone bothers to use an inferior decoder chip in a modern player.
However, this suggests that a hi-fi magazine should not tell us whether a player sounds 'sweet', 'clinical', 'warm' or 'musical'; instead it should attach a data logger to the internal uncorrected-error flag and play a known marginal disc. 'Interpolations per hour' would be a useful number to know. I imagine that deliberately faulty discs could be made for this purpose.
Well, in the old days they did test on a disc with calibrated errors, and gave up when every drive could read a 1.5 mm scratch with no dropouts.
In the early days CD was a hi-fi medium so it could be assumed that players were getting better, as the state of the art allowed. Once CD became a mass-market medium and really cheap players became needed for some market segments it is possible that things went backwards, and the short cuts needed to reduce prices may have found their way into more expensive players too. We can't assume that the path of all technology is 'onward and upward' - especially now that high price alone can be a selling point for some consumers.
RNMarsh said: "I might as well ask the obvious question for others: How do we know the 'input' data being compared to the output data is itself correct?"
Thanks for asking! I've offered to help anyone who wants to test this. I got one request via PM, got distracted, and now need to follow up. 😱
With a computer and some simple software, all of it free or trial versions, you can test this. It takes a bit of work, but it's an interesting test. With a soundcard that has S/PDIF in, you can confirm whether the audio data from your transport is bit-perfect or not.
NOTE: This does not measure jitter.
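The comparison step itself is trivial once both streams are on disk. A minimal sketch, assuming you have the reference rip and the S/PDIF capture as raw byte strings already aligned to the same starting sample (alignment is the fiddly part in practice and is not shown here):

```python
def first_difference(a: bytes, b: bytes):
    """Return the index of the first differing byte, or None if identical."""
    n = min(len(a), len(b))
    for i in range(n):
        if a[i] != b[i]:
            return i
    # Identical up to the shorter length; a length mismatch still counts.
    return None if len(a) == len(b) else n

# Example: compare a reference rip against a captured stream.
reference = bytes([0x12, 0x34, 0x56, 0x78])
capture = bytes([0x12, 0x34, 0x56, 0x78])
print(first_difference(reference, capture))  # None -> bit perfect
```

If the result is `None`, the transport delivered a bit-perfect stream; any integer tells you exactly where the first discrepancy sits, which helps distinguish a one-off concealment event from a systematic offset.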
There was a paper linked from earlier in this thread which may explain this. Part of CD pressing involves a data conversion (EFM) from 8 bits (music data etc.) to 14 bits (on the disc). A given 8 bit pattern can be coded by several different 14 bit patterns;....
Sort of, but it didn't work quite that way.
The possible 14-bit codes were chosen to meet the run-length requirements for T3 - T11. 267 codes met those requirements; since only 256 were needed to encode the original 8 bits (half of a 16-bit audio sample), 10 still-problematic codes were thrown out and one additional code was dropped at random. The remaining 256 codes were listed in IEC 60908.
..... the appropriate one is chosen to eliminate DC. How this is done depends on the details of the algorithm used in that plant. Hence identical music data can end up as different disc data. At the playing end the music data is recovered exactly, whatever EFM was used.
Merging bits are added between the 14-bit words to adjust the DC level and to meet the timing requirements even better. Although some strict rules exist that must not be violated, there was/is still room for some variation in the actual bits used as merging bits.
This did no harm, as these merging bits did not contain any information and were discarded during read-out. But if you were looking (via an ERM) at all the pits and lands of the same disc content pressed in different plants (with diverging equipment), differences might occur.
Is the checksum based on the 8-bit data or the 14-bit data? Does the checksum include metadata - such as the date of mastering? (I'm guessing here.)
The additional four parity bytes per frame are added before the EFM process and are based only on the audio sample content. The additional subcode information is not included in the parity calculation (and neither is the TOC).
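The 267 figure above can be checked by brute force. This is my own illustration, not production code: it tests each of the 16384 possible 14-bit channel words against the run-length rules (at least two zeros between ones for the T3 minimum, and no run of more than ten zeros anywhere in the word for the T11 maximum):

```python
def valid_efm(word: int) -> bool:
    """Check the EFM run-length rules on a 14-bit channel word."""
    bits = [(word >> i) & 1 for i in range(13, -1, -1)]
    ones = [i for i, b in enumerate(bits) if b]
    # d-constraint: consecutive ones must be at least 3 positions apart,
    # i.e. at least two zeros between them.
    if any(b - a < 3 for a, b in zip(ones, ones[1:])):
        return False
    # k-constraint: no zero run longer than 10, leading/trailing included.
    run = 0
    for b in bits:
        if b:
            run = 0
        else:
            run += 1
            if run > 10:
                return False
    return True

count = sum(valid_efm(w) for w in range(1 << 14))
print(count)  # 267
```

From these 267 words, 10 more were discarded and one dropped at random to arrive at the 256-entry table in IEC 60908, as described above.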
I might as well ask the obvious question for others:
How do we know the 'input' data being compared to the output data is itself correct?
-RNM
It's an interesting question. 😎
Usually you don't know; you can only rely on the producers of the encoding and manufacturing equipment having ensured that the encoding process meets (at least; usually it should be better than) the quite impressive error rates of the decoding part. (It would not make much sense to specify a decoding error rate that is several orders of magnitude better than the encoding process.)
But in the case of measurement CDs, as mentioned earlier, we do know, as most of the signals are artificial signals carefully constructed with software, so we know exactly what the codes are (or should be).
Yes, in a sense, the circuitry has to work harder to track a damaged disc. I say 'in a sense' because I suspect a whiff of anthropomorphism here: we would work harder to track a faulty disc, so we assume a box of electronics would have to too.
Sometimes it really helps to actually probe signals inside a CD transport/player and look at the waveforms on an oscilloscope screen. That way it's much easier to grasp what "circuitry has to work harder" might mean.... 😉
Do they? That is not my understanding, as I have repeatedly said.
Yeah, but how many repetitions of a questionable assertion are needed to transform it into truth? SCR 🙂
I have some doubts, because right from the beginning ICs provided the CIRC error correction/decoding process, including de-interleaving.
A rough description of the two-stage process used in the CD system: the symbols reach the C1 decoder, where some errors are detected and corrected (meaning the actual codes are changed). If the errors are uncorrectable in the C1 decoder, the symbols remain unchanged but are sent down the delay lines flagged as erroneous (the flag information is also sent through the delay lines). They then reach the C2 decoder (now de-interleaved), where detection and correction (when possible) take place, which means the symbols will be changed.
If correction is impossible at the C2 decoder, the codes remain unchanged and are marked as erroneous, which invokes the so-called error concealment.
In the aforementioned case of ideal read-out, no error is detected in the C1 decoder; the symbols remain unchanged and are sent down the delay lines (no flag operation) to the C2 decoder (now de-interleaved), where again no error is detected; the symbols remain unchanged and are passed to the next stage.
All of this is hard-wired in an LSI.
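To illustrate what the concealment stage does with samples the C2 decoder has flagged, here is a toy sketch using linear interpolation between neighbouring good samples. Real players use more elaborate schemes (higher-order interpolation, and muting for long error bursts), so treat this only as a picture of the idea:

```python
def conceal(samples, bad):
    """Replace flagged samples using neighbouring good samples.

    samples: list of PCM sample values
    bad:     list of booleans; True marks a sample flagged as uncorrectable
    """
    out = list(samples)
    for i, flagged in enumerate(bad):
        if not flagged:
            continue
        left = next((samples[j] for j in range(i - 1, -1, -1) if not bad[j]), None)
        right = next((samples[j] for j in range(i + 1, len(samples)) if not bad[j]), None)
        if left is not None and right is not None:
            out[i] = (left + right) // 2   # interpolate between good neighbours
        elif left is not None:
            out[i] = left                  # hold the last good value
        elif right is not None:
            out[i] = right
        # with no good neighbour at all, a real player would mute
    return out

print(conceal([100, 0, 300], [False, True, False]))  # [100, 200, 300]
```

Note that the output is plausible audio but no longer the data that was on the disc, which is exactly the distinction the quoted post is making.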
In the early days CD was a hi-fi medium so it could be assumed that players were getting better, as the state of the art allowed. Once CD became a mass-market medium and really cheap players became needed for some market segments it is possible that things went backwards, and the short cuts needed to reduce prices may have found their way into more expensive players too. We can't assume that the path of all technology is 'onward and upward' - especially now that high price alone can be a selling point for some consumers.
It was the introduction of the CD-ROM that really got things moving, with ROM titles outselling audio.
If you only look at error correction, then the drive in your laptop will outperform any of the old CD-players. Not a very popular opinion around here. 😉
- Ping: John Curl. CDT/CDP transports