I've been trying to study CD Audio (Red Book) error detection, flagging and correction strategies in audio CD players (and CD-RW drives).
I have already waded through Pohlmann's "Principles of Digital Audio", Watkinson's "Introduction to Digital Audio" and Sony's "Digital Audio Technology" books on the relevant issues.
However, I find the whole C1 / C2 error detection, flagging and correction strategies a very mysterious arena.
It seems that different manufacturers use different strategies for detecting E21, E31, E22 and E32 errors in particular (and, consequently, for handling large numbers of errors per codeword).
According to what I've been told on another forum, some decoders can accurately detect and flag up to 3 errors per codeword and some up to 4, depending on the chosen strategy. Moreover, the C1-level strategy may have a further effect on C2 detection efficiency (and hence on correction accuracy).
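For context, my current understanding (from the books above, so treat the specifics as assumptions): C1 is an RS(32,28) code and C2 an RS(28,24) code, both with minimum distance 5, so each can trade corrections for erasures as long as 2·(errors corrected) + (erasures) ≤ 4. The strategy question is essentially how aggressively C1 corrects before flagging the rest down to C2. A toy sketch of that decision logic (not a real Reed-Solomon decoder; error counts are supplied as inputs, and the strategy names are my own labels):

```python
# Toy model of CIRC C1/C2 decoder strategy choices. Assumption:
# C1 = RS(32,28), C2 = RS(28,24), both minimum distance 5, so each
# can handle 2*errors + erasures <= 4 per codeword.

def c1_decode(error_count, strategy="conservative"):
    """Return (corrected, flags_out): whether C1 corrected the frame,
    and how many erasure flags it passes down toward C2 (0 or 28).

    "conservative": correct only single errors at C1 and flag anything
    worse (an E21/E31 event), giving C2 more reliable erasure flags.
    "aggressive": use C1's full 2-error capability, so only E31 frames
    get flagged.
    """
    limit = 1 if strategy == "conservative" else 2
    if error_count <= limit:
        return True, 0      # frame clean after C1
    return False, 28        # flag all 28 symbols as erasures
    # Note: after deinterleaving, those 28 flagged symbols spread
    # across 28 different C2 codewords, so each C2 word usually sees
    # only a few flags even from several bad C1 frames.

def c2_decode(erasures, errors=0):
    """C2 succeeds while 2*errors + erasures <= 4; with pure erasure
    decoding that means up to 4 flagged symbols per C2 codeword."""
    return 2 * errors + erasures <= 4
```

This is only meant to show why the C1 strategy feeds into C2 accuracy: a C1 stage that flags conservatively produces more erasures, but erasures are twice as cheap for C2 as undetected errors.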
Is there anywhere in available literature a decent explanation of how these things work?
I understand that not all detection/correction circuits are made equal and that, especially in CD-RW drives, some chipsets show superior accuracy in practical tests. I'd just like to get a little more information on the issue (no, I'm not designing/building my own transport, at least not yet).
I'm really curious, but apparently not curious enough to start building my own CIRC implementation.
Anyone?
regards,
Halcyon
Nobody knows?
I think this information could be more useful than diminishing jitter from 1000 picoseconds to 100 picoseconds.
I mean, jitter is often nearly imperceptible noise at 90 dB below the signal, but gaps or badly interpolated pieces of the original audio signal during transients... urgh!
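On the interpolation point: when C2 finally gives up (an E32 event), the player conceals rather than corrects, typically by interpolating across the flagged samples and muting when no good neighbour is available. A minimal first-order sketch of that idea (my own illustration, not any particular chipset's algorithm):

```python
def conceal(samples, bad):
    """Replace unreliable samples (indices in the set `bad`) by
    linearly interpolating between the nearest good neighbours;
    mute (zero) at the edges where one side has no good sample."""
    out = list(samples)
    for i in sorted(bad):
        lo = i - 1
        while lo in bad:        # walk left past other bad samples
            lo -= 1
        hi = i + 1
        while hi in bad:        # walk right past other bad samples
            hi += 1
        if lo < 0 or hi >= len(samples):
            out[i] = 0          # no good neighbour on one side: mute
        else:
            # first-order (linear) interpolation between good samples
            frac = (i - lo) / (hi - lo)
            out[i] = round(samples[lo] + frac * (samples[hi] - samples[lo]))
    return out
```

During a steady tone this works fine, but across a transient the interpolated value can be badly wrong, which is exactly why I'd rather see fewer E32 events in the first place.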
I know there *must* be information on this out there, I just haven't been able to find it.
regards,
Halcyon