CD Error Correction. Is it Audible?

I've had a chance to have more of a look today... it all makes more sense when it's in front of you and you can get a feel for what is happening. However, there is a puzzle.

Looking at the truth table and taking the data at face value, we should see all error flags at logic 1 when a non-correctable error is present. This is the last entry in the table. Using a four-input NAND to detect this and feeding the output to a monostable shows this situation never occurs under any condition.
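In software terms, that four-input NAND is just checking for the all-ones flag value. As a point of reference, here is a minimal sketch of the same test (the flag names and the bit ordering, MNT3 as the most significant bit, are my assumption rather than anything from the datasheet):

Code:
def is_last_table_entry(mnt3, mnt2, mnt1, mnt0):
    """True only when every error flag is at logic 1 -- the last
    entry in the truth table (fully uncorrectable). A hardware
    four-input NAND simply gives the inverted result."""
    flags = (mnt3 << 3) | (mnt2 << 2) | (mnt1 << 1) | mnt0
    return flags == 0xF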

Also, I wondered at the meaning of the phrase 'uncorrectable error'. The first entry for this refers to the C1 data. So does that mean it is simply a non-correctable C1 error that may be corrected by applying the C2 data and thus outputting a correct sample? When we get to the last two entries in the table (both classed as uncorrectable), are we into the realm of substituting previous data samples and then moving toward average values of data for the final, most severe uncorrectable sample/s?

Even with all this doubt there is still much that can be learned from looking at the behaviour of the flags. One useful tool, in addition to the monostable, is a high-brightness LED, a Cree device of the kind that burns your eyes out at 50 paces when fed from a couple of milliamps. The monostable shows the presence of a pulse or change of data but gives no clue as to how often this occurs. The LED comes into its own here. For example, playing a 'perfect' disc it flickers very, very dimly at around (guesstimate) 5 to 10 random pulses a second. This is only present on the MNT0 line for 99% of the time; however, monitoring the MNT2 line shows a very occasional one-off pulse. These must be true random errors from whatever cause.
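If it ever helps to put numbers on what the LED only shows as brightness, a rough pulse-rate counter along these lines could be used. This is only a sketch: read_flag_line() is a hypothetical stand-in for however the MNT line is actually sampled, and polling like this will miss pulses shorter than the loop time, which is exactly why the monostable (pulse stretcher) earns its keep in hardware.

Code:
import time

def read_flag_line():
    # Hypothetical stub: replace with a real read of the MNT line
    # (e.g. via a GPIO pin). Returns the current logic level, 0 or 1.
    return 0

def pulses_per_second(duration=10.0):
    """Count rising edges on the flag line over 'duration' seconds
    and return the average rate in pulses per second."""
    edges = 0
    last = read_flag_line()
    t_end = time.monotonic() + duration
    while time.monotonic() < t_end:
        level = read_flag_line()
        if level and not last:   # rising edge
            edges += 1
        last = level
    return edges / duration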

I'm going to look in more detail at what happens when playing known defects (it all seems to behave as you would expect, with the exception of the MNT3 flag).

The way I am reading this is that if there is an uncorrectable C2 error, one of two things can happen: previous data is inserted to 'cover up' the error, or an average of previous data is inserted.

The data value 0xE would be output on the four lines in the first case, and 0xF in the second case.

The question is when each case would take place. It may be a general description of the two options possible with this chipset, with the actual one chosen depending on the particular implementation.

So if the manufacturer decides to implement 'use previous data', you would never see the 0xF value (use average data), and vice versa.

So to be sure you should decode both cases.
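As a minimal sketch of what 'decode both cases' could look like, treating the four lines as one 4-bit value with MNT3 as the most significant bit (the bit ordering is my assumption; the 0xE/0xF values are as quoted above):

Code:
def classify_concealment(flags):
    """Interpret the 4-bit flag value for the two uncorrectable-C2
    concealment cases discussed above. Anything else is left to the
    rest of the truth table."""
    flags &= 0xF
    if flags == 0xE:
        return "uncorrectable C2: previous data held"
    if flags == 0xF:
        return "uncorrectable C2: average (interpolated) data inserted"
    return "not an uncorrectable-C2 entry; see truth table"

for value in (0x0, 0xE, 0xF):
    print(hex(value), "->", classify_concealment(value))

Catching both values means it does not matter which of the two concealment strategies this particular player actually implements.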

Jan
 
...Also, I wondered at the meaning of the phrase 'uncorrectable error'. The first entry for this refers to the C1 data. So does that mean it is simply a non-correctable C1 error that may be corrected by applying the C2 data and thus outputting a correct sample?

Hello Mooly,
Yes. See the Studer paper (SwissSound) in my previous post.
Sorry for dragging you into this 😉
 
Lol, thanks, I'll have another read.
Yes, it is good. Also, if one reads about the comparative quality of CD replicators/manufacturers, it seems some claim a few C1 errors/sec as a figure of merit, whereas it seems hundreds per second can be considered normal. It's bandied about that C2 errors shouldn't happen on normal media, and CU (unrecoverable) errors shouldn't happen either. I think that should be what you see in principle, Mooly, but it would be fascinating to confirm.

I suppose one issue is how frequent miscorrection of C1 errors is, if they are common enough... of course that isn't flagged, so one would never know.
 
Other credible books seem to consider that C2 error rates of up to 80/sec or so might be found on normal production CDs. But CU (unrecoverable) errors apparently should not.

Maybe you could modify your detector, Mooly, to look for C1 or C2 error conditions, both for interest and to test it?
 
...I suppose one issue is how frequent miscorrection of C1 errors is, if they are common enough... of course that isn't flagged, so one would never know.

I think this is a fundamental misunderstanding. C1 errors are never miscorrected. Two bytes per block can be corrected; otherwise the block is flagged as an erasure. After de-interleaving, each erased block redistributes to one byte per secondary (C2) block, where the code can deal with 4 flagged erasures per block.
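To put rough numbers on that redistribution: in CIRC the 28 bytes leaving each C1 block are delayed by different multiples of four frames before the C2 (secondary) decode, so one erased C1 block contributes at most one erased byte to any given C2 codeword, and C2's four parity bytes can fill up to four flagged erasures. Here is a toy counting sketch of just that de-interleave stage; it is not a real decoder, only bookkeeping with the standard CIRC figures:

Code:
from collections import Counter

D = 4                 # de-interleave delay increment between C1 and C2, in frames
C1_BYTES = 28         # bytes passed from each C1 block into the C2 stage
C2_ERASURE_LIMIT = 4  # a (28,24) Reed-Solomon code can fill 4 flagged erasures

def erasures_per_c2_word(erased_c1_frames):
    """Count how many erased bytes each C2 codeword sees after the
    D*i de-interleave delays, given a set of erased C1 frame numbers."""
    counts = Counter()
    for frame in erased_c1_frames:
        for i in range(C1_BYTES):
            counts[frame + D * i] += 1   # byte i arrives D*i frames later
    return counts

# A burst wiping out 16 consecutive C1 frames lands right at the
# erasure limit in the worst-affected C2 word:
worst = max(erasures_per_c2_word(range(16)).values())
print("worst-case erasures in one C2 word:", worst, "of", C2_ERASURE_LIMIT)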
 
I think this is a fundamental misunderstanding. C1 errors are never miscorrected.......
There is a small chance that random burst data could produce the same check result as a single error in good data. This would cause C1 to miscorrect, with no error flag. However, the C2 decode should then detect the miscorrection, after de-interleaving, and fix it if possible. According to Watkinson, who also writes: "The power of CD correction is such that damage to the disk generally results in mistracking before the correction limit is reached. Thus there is no point in making it more powerful."

So yes, it seems there should be a negligible chance of miscorrection overall, so long as it's not assumed that a C1 OK flag means good data per se, and C2 is always in play.
 
When analysing a disc for uncorrectables it is more helpful to know E22 and E32.
If E32 is zero, then all was corrected. If E22 and E32 are both zero, then there is still some room for additional read errors. A CD manufacturer should produce discs with E22 and E32 of zero.
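A trivial sketch of that rule of thumb as a check one could run against reported disc totals (the wording of the verdicts is mine; the thresholds are simply 'zero or not', as above):

Code:
def assess_disc(e22, e32):
    """Apply the rule of thumb above to total E22/E32 counts for a disc."""
    if e32 > 0:
        return "uncorrectable errors present (E32 > 0)"
    if e22 > 0:
        return "everything corrected, but little margin left for further read errors"
    return "everything corrected, with room to spare for additional read errors"

for e22, e32 in ((0, 0), (12, 0), (3, 1)):
    print(f"E22={e22}, E32={e32}: {assess_disc(e22, e32)}")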
 