Something is eating my CD substrate

I can confirm that ripping is time-consuming, and classical CDs in particular are a PITA to rip. I'm going through my collection and have now arrived at the classical part. It takes far longer than the rest, certainly if you want the labelling to be correct and make sense.
 
EAC in "paranoid mode" can perform daylong rips
Sure, but it is the only mode that will get you what you want. Ripping software doesn't do the error correction a hardware CD player does. Sometimes I have to stop the rip and just play and record that track instead; it plays without a hitch. But you can only do that at 1x speed, and you have to work on the recording afterwards.
 
When I converted my CD collection to FLAC and MP3, I had the occasional CD track that would not read. I used a CD cleaner, and if that didn't work, the last resort was a polishing product that I used for wood. The original product was called Rottenstone, by Behlen: a fine powder made from volcanic rock, I believe, used for polishing lacquer and other wood finishes. When Rottenstone was no longer available, I switched to a product called Polarshine, which comes in different grades from semi-gloss to mirror finish.
 
Ripping software doesn't do the error correction a hardware CD player does.
Error correction or error averaging?
 
Ok 🤣
 

Attachment: 1980s-uk-sony-magazine-advert.jpg
Multiple reading and comparing is more of a gimmick for Audio CD, as you still will not know which of the multiple copies is the right one.
Not quite. Audio CD has error protection and correction before resorting to interpolation (which you don't want it to do for a ripped file, because that's an error that has been baked in). The error detection and correction used on CD is quite clever. Essentially, bursts of errors are expected, so the data is distributed across the disc to convert burst errors (a hole drilled through the CD) into random errors spread over much more data, and crossword-style error detection and correction is then able to correct those random errors. But at fast ripping speeds the read is noisier, which increases the chance of random errors in data that might read correctly (and be known to be correct) at a lower speed. That's why you see such ripping programs gradually speeding up as they read a CD (and find they can get away with a higher speed), then slowing down when they encounter a problem. The crossword error protection means you do know when you've got it right.
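
For anyone who wants to see the interleaving idea concretely, here is a toy Python sketch (my own illustration, not the actual CIRC layout used on CD): a burst of consecutive bad symbols on the disc turns into isolated single errors after de-interleaving, which per-codeword correction can then handle.

```python
# Toy demonstration of interleaving: a burst of damaged symbols on the disc
# becomes isolated single errors after de-interleaving, which per-codeword
# correction can then fix. The real CD scheme (CIRC) is a cross-interleaved
# Reed-Solomon code and far more elaborate; this only shows the idea.

DEPTH = 8   # symbols per codeword in this toy example

def interleave(symbols, depth=DEPTH):
    """Write symbols column-by-column so that consecutive positions on the
    'disc' come from different codewords."""
    rows = [symbols[i:i + depth] for i in range(0, len(symbols), depth)]
    return [rows[r][c] for c in range(depth) for r in range(len(rows))]

def deinterleave(symbols, depth=DEPTH):
    n_codewords = len(symbols) // depth
    return [symbols[c * n_codewords + r]
            for r in range(n_codewords) for c in range(depth)]

# Four codewords of eight symbols each (each row is one correctable frame).
data = [f"w{w}s{s}" for w in range(4) for s in range(DEPTH)]
on_disc = interleave(data)

# Simulate a physical defect: a burst wiping out four consecutive positions.
for i in range(10, 14):
    on_disc[i] = "ERR"

recovered = deinterleave(on_disc)

# After de-interleaving, the four errors land in four *different* codewords,
# so each frame only has to correct a single bad symbol.
for w in range(4):
    frame = recovered[w * DEPTH:(w + 1) * DEPTH]
    print(f"codeword {w}: {frame.count('ERR')} damaged symbol(s)")
```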
 

Ok, I will say it in other words: there are two ways error correction can work. One is built into the data's meta-information, for example a CRC code, RAID (1, 5, 10, 50 and some others), the ZFS file system, or ECC RAM (which needs an extra 1/8 of memory chips), and many more examples. Basically, that is full or partial extra copies (or check data) from which we can be sure whether the information on the medium is correct or not. I have no idea if such info exists on CD, or Audio CD in particular. The other way is, let's say, "interpretation", where we basically make a smart guess at what the correct info is without having any extra meta-info about the data; we just do a little investigation of the bits themselves. That can be smart software algorithms, multiple reads, or maybe some clever signal processing. The difference is that the first way needs additional copies or meta-info, while the second relies only on the data we have.

I was talking about the first way, you about the second.
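
To illustrate the first way (extra check data stored next to the payload), here is a tiny Python sketch using a CRC-32 checksum. This is just my example of the principle, not how CD actually does it; as noted below, Audio CD uses Reed-Solomon parity, which can also correct errors rather than merely detect them.

```python
# Sketch of the "extra meta-information" idea: store a checksum alongside
# the data so the reader can tell whether what came back is intact.
# (A plain CRC like this only *detects* errors; the Reed-Solomon parity
# on Audio CD can also *correct* a limited number of them.)
import zlib

def store(data):
    """Keep the data together with its CRC-32 check value."""
    return data, zlib.crc32(data)

def read_back(data, stored_crc):
    """True only if the data still matches the stored checksum."""
    return zlib.crc32(data) == stored_crc

payload, crc = store(b"16-bit PCM samples from one CD frame")
print(read_back(payload, crc))                   # True: data intact

corrupted = payload.replace(b"PCM", b"PCX")      # one byte altered on readback
print(read_back(corrupted, crc))                 # False: error detected
```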
 
No, I wasn't talking about clever guesses - that's interpolation (which we hope not to need). CD has very strong error protection (Reed-Solomon) built in that can detect and correct random errors. A lot more data comes off the disc than goes out of (say) the S/PDIF port. But it's not infallible, and that's why interpolation is used when uncorrectable errors occur. I tried looking for a simple explanation of how Reed-Solomon works, but although the Reed-Solomon Wikipedia page notes that CD uses it, its explanation is unfathomable, and unfortunately I can't remember exactly how the CD crossword/Reed-Solomon implementation works. But it does.
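
Since the Wikipedia page isn't much help, here is a toy "crossword" example in Python: a block protected by a parity byte per row and per column, where a single bad byte shows up as one failing row and one failing column and can be repaired. The real CIRC code on CD is Reed-Solomon and far stronger; treat this only as an illustration of the row-and-column idea.

```python
# Toy "crossword" error correction: protect a small block of bytes with a
# parity byte per row and per column. A single corrupted byte can then be
# located (its row parity *and* column parity both fail) and repaired.
# Real CDs use cross-interleaved Reed-Solomon (CIRC), which is much stronger.

def parities(block):
    row_par = [0] * len(block)
    col_par = [0] * len(block[0])
    for r, row in enumerate(block):
        for c, byte in enumerate(row):
            row_par[r] ^= byte
            col_par[c] ^= byte
    return row_par, col_par

def correct_single_error(block, row_par, col_par):
    new_row, new_col = parities(block)
    bad_rows = [r for r in range(len(block)) if new_row[r] != row_par[r]]
    bad_cols = [c for c in range(len(block[0])) if new_col[c] != col_par[c]]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        r, c = bad_rows[0], bad_cols[0]
        # XOR-ing out the parity mismatch restores the original byte.
        block[r][c] ^= new_row[r] ^ row_par[r]
        return f"corrected byte at row {r}, column {c}"
    return "no single-byte error found (or too many errors to locate)"

block = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
row_par, col_par = parities(block)

block[1][2] ^= 0xFF                     # simulate one corrupted byte
print(correct_single_error(block, row_par, col_par))
print(block)                            # back to the original values
```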
 
I worked for eight years as an engineer in an optical disc replication plant in the 2000s.
We had high volume injection molding lines for both CDs and DVDs.
There are many variables in the manufacturing process that can make the metalized layer of a CD prone to rot as shown in the previous posts.
But the bottom line is that a CD that rots could have been poorly made to begin with, as well as stored in a humid environment.

Here is a somewhat abbreviated description of the process so you can see where things could go wrong.
First, the disc is made by injecting molten polycarbonate into a mold cavity that has a removable "stamper" mounted on one side. The stamper is a thin metal plate that contains the data.
Second, the disc is sputtered or metalized with aluminum on the top side where the data is.
Third, the metalized side is coated with a protective lacquer that is applied and distributed by spinning the disc at high speed.
Fourth, the lacquer is cured quickly by an intense UV light.
Fifth, the disc is inspected by an optical scanner for physical properties like correct thickness, warpage, bubbles in the polycarbonate or lacquer, and pinholes in the metalized layer. Discs are also statistically sampled and inspected for readability at a separate QC station.
Sixth, the disc is printed on top of the protective lacquer by one of several different technologies. Most common was screen printing with UV curable inks. Most discs are first printed with a white background or "flood" and then have text and images printed on top. Our printer had six print stations with a UV cure station between each.

All steps have numerous adjustable parameters that affect both quality and the consumption rate of the expensive consumables. For example, with respect to rot, if the protective lacquer is applied too thinly and/or the UV curing lamp is weak from use beyond its rated life, then the protective layer will be weak and can wear off, especially if there is no white flood in the print process.
Inspection parameters can also be dumbed down and/or the lines run too fast to increase yields and reduce costs, but usually at the expense of quality or durability.
So like most things, making high quality discs wasn't easy or cheap.
 
CD has very strong error protection (Reed-Solomon) built in that can detect and correct random errors. But it's not infallible, and that's why interpolation is used when uncorrectable errors occur.
Data interleaving is also a big part of the CD error correction scheme.
 
Data interleaving is how interpolation works. If a whole block is lost (uncorrectable), you drop to 1/2 bandwidth using the partner block (I'm oversimplifying here). There is a small chance that errors are not detected or corrected, so a block can occasionally get through garbled. There's no higher-level checksum or error correction (as on CD-ROM) to catch that, so in practice CD-DA reproduction will degrade with increasing error rate, both with more interpolated blocks and more actual undetected errors, but will keep going. CD-ROM will tolerate more errors before failing hard with an unrecoverable read error. CD-ROM is designed so data files that don't generate a read error are extremely likely to be perfect. CD-DA doesn't give any such promise.
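
To make the interpolation/concealment idea concrete, here is a rough Python sketch (my own simplification, not what any particular player actually does): known-bad samples are filled with a straight-line guess between the nearest good neighbours, which sounds acceptable but is no longer bit-accurate.

```python
# Rough sketch of audio error concealment: when samples are uncorrectable,
# the player fills the gap with a guess based on the good neighbours instead
# of outputting garbage or muting. Real players use more refined concealment;
# this only shows the principle. (Assumes the damaged run isn't at the very
# start or end of the buffer.)

def conceal(samples, bad_indices):
    """Replace known-bad samples with a straight-line guess between the
    nearest good neighbours. Sounds fine, but is no longer bit-accurate,
    which is exactly why you don't want it baked into a rip."""
    out = list(samples)
    bad = set(bad_indices)
    for i in sorted(bad):
        left = i - 1
        while left in bad:            # walk out to the nearest good samples
            left -= 1
        right = i + 1
        while right in bad:
            right += 1
        frac = (i - left) / (right - left)
        out[i] = round(samples[left] + frac * (samples[right] - samples[left]))
    return out

pcm = [0, 100, 200, 300, 400, 500, 600]
damaged = [3, 4]                      # two samples the decoder couldn't correct
print(conceal(pcm, damaged))          # the guesses happen to land on 300, 400
```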
 
@MarkTillitson: I understand that data interleaving is part of the error-correction strategy. By interleaving data protected with a Reed-Solomon code, data errors are corrected completely.

By contrast, data interpolation is a form of sophisticated algorithmic guesswork, which works for audio data because the ear/brain is relatively insensitive to very short-term minor errors (i.e., the interpolated values).

Here is a Linn article which makes it clear that interpolation occurs only if error detection and correction is unsuccessful: https://docs.linn.co.uk/wiki/index.php/CD_Ripping_Terminology

As an aside, it always amazes me how many of these technological systems developed prior to the 1990s are so incredibly sophisticated. They are very resilient, yet efficient and elegant in their parsimonious approach to solving issues. I salute those engineers and scientists.