The Ultimate Sound Improvement for Compact Discs through the Patent-Pending CD Sound Improver

AX tech editor
Joined 2002
Paid Member
[snip]Thank you sirs. :)
[snip]-Chris

Chris,

There does exist a possibility that errors on the CD (even correctable ones) do impact the sound quality. For instance, 'wild' servo excursions could cause lots of junk on the servo supply lines. If this supply also powers (analog) stages downstream, conceivably that supply junk could impact sound quality. A long shot, and heavily dependent on the player topology; better players, with better (separate) power supplies would be pretty immune to it.

jd
 
Hi cbdb,

Hi Scott,
Thank you.

All data transmission is encapsulated and has error correction built in, so the SPDIF signal is more reliable. To the best of my knowledge though, the data gets spit out and is not repeated.
-Chris

Chris,

The SPDIF data, once recovered, is (in the ideal case) EXACTLY the same data as on the CD. You simply use a recorder that takes SPDIF in and makes an uncompressed .wav file. I noticed elsewhere that some developer has done this, with the expected result.
 
Administrator
Joined 2004
Paid Member
Hi Scott,
The SPDIF data, once recovered, is (in the ideal case) EXACTLY the same data as on the CD.
I'm always reticent to disagree with you as you do know what you are talking about. However, in this case I can't agree with you on this.

This statement has ignored the one point that has been made several times. The fact is, if you have even one C2 flag, the data is no longer the same as what was encoded onto the CD. It may happen to be exact, but that is purely by chance, because the internal DSP has "made the value up" from the best information it had to go on. The C1 flag signals the DSP that a data set is bad. So flagged, the data is discarded and an attempt is made to reconstruct the value from the information spread out across neighboring data sets. A C1 error therefore means that particular value, or data set, is bad and cannot move past the C1 corrector stage as-is; it is replaced with information that was distributed into other local data. This also reduces the ability to correct other bad blocks that depend on the same distributed information.

Once the C2 flag has been set, the data cannot be repaired or figured out from distributed data at all. C2 means the value is gone and recovery is impossible - end of story. This will occur with a hair or a fingerprint, and those are transient errors. So the interpolated value may be correct, or it may not; all it has to be is close enough to disguise that one single error. If the next couple of data sets are also damaged, the C1 correction system has no way to restore the data; that is a C2 flag all the way, and things are getting ugly. In no way will the information be identical to what was encoded on the CD.
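To picture what that interpolation step does, here is a minimal sketch in Python. It assumes simple linear interpolation across flagged samples and muting of longer runs; real DSP chips use their own (and fancier) concealment strategies, so treat this as an illustration of the idea only:

```python
# Minimal sketch of error concealment, assuming simple linear interpolation.
# This is NOT the algorithm of any specific CD player DSP; real chips differ.

def conceal(samples, bad_flags, max_bridge=8):
    """Replace flagged samples by interpolating between good neighbours.

    samples:    list of 16-bit PCM values for one channel
    bad_flags:  list of booleans, True where the C2 stage gave up
    max_bridge: longest run of bad samples we will interpolate across;
                longer runs are muted (set to 0), as players typically do.
    """
    out = list(samples)
    i, n = 0, len(samples)
    while i < n:
        if not bad_flags[i]:
            i += 1
            continue
        # find the end of this run of bad samples
        j = i
        while j < n and bad_flags[j]:
            j += 1
        left = out[i - 1] if i > 0 else 0      # last good sample before the run
        right = out[j] if j < n else 0         # first good sample after the run
        run = j - i
        if run <= max_bridge:
            for k in range(run):               # straight-line bridge
                out[i + k] = int(left + (right - left) * (k + 1) / (run + 1))
        else:
            for k in range(i, j):              # run too long: mute it
                out[k] = 0
        i = j
    return out
```

A single concealed sample is essentially invisible after the reconstruction filter; a long muted run is what you actually hear as a dropout.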

The only possible way to compare what comes out of a digital data port with what is on the CD is to have access to the formatted CD data image. That's the one in the recording studio - the only accurate representation of what is supposed to come off the disc. Don't forget that the very creation of a CD (pressing the blank, then applying the metalization) will not be perfect. There is a metric to indicate production quality, and that is the average BLER (block error rate) figure. Even the master stamper will have errors and a BLER rate.

What you can do is take the data the CD player has come up with and compare that to what goes "down the pipe" from the SPDIF output to your other device. Additional error correction and a more robust delivery system reduce data errors to a tiny fraction of what is normal for the CD as a data delivery system. However, if you want to use the CD as your reference, you will never get a "bit perfect" transfer. That is realistically unattainable as a goal.

The thing that should grab your attention about the state of the art in CD player technology is the general absence of those C1 and C2 error flag test points. Consider that the earlier machines generally had more stable data coming off the disc: less jitter (I had a jitter meter, one of those Leader units that cost me too much at the time) and a less noisy eye pattern (RF pattern) that was stable in its amplitude. Those machines also had the C1 and C2 test points available - or most of them did. Look at today's transports! They are complete junk compared to first and second generation machines.

Test CDs were available then (and expensive - the Philips 5A was about $250). These are not test tones. These CDs were made to a high quality, with known RF amplitude and very low defects elsewhere, and then defects were installed to a standard. Each defect had known properties and was thus suitable for checking and setting up a CD player. These discs were also used to prove to a customer that their CD player was in fact operating correctly. These days, real test discs - the ones with controlled defects and controlled signal quality - are difficult to find. It would be notable if there were access to the C1 and C2 type flags. So we now have transports that generate very poor signals, lower quality CDs as media, and no error indication available to quantify the signal quality. There is one way, though: access the RF (eye pattern) test point and examine that. The waveform at this point will tell you a great deal about the quality of the CD player and/or the quality of the media (the disc).

Sorry for the lack of continuity in this post, it's time to lie down for a bit.

-Chris
 
Administrator
Joined 2004
Paid Member
Hi Jan,
There does exist a possibility that errors on the CD (even correctable ones) do impact the sound quality.
Yes, I agree.

Everything from supply noise to mechanical shock and even large temperature changes (like going from very cold to warm) can play a part. There are many documented cases where high RF levels interfere with everything from servo operation to the microcontroller logic. All this is in addition to the errors already mentioned.

Someone earlier on made the comment that if you were to place your fingers near the cable carrying the signals from the diode detectors in the head to the RF amp on the circuit board, you would cause severe data errors and might even be able to stop the CD from playing. This is true, and I have demonstrated the effect several times over the years. Lead dress inside the CD player can cause no end of problems.

For instance, 'wild' servo excursions could cause lots of junk on the servo supply lines.
That's nothing until you've seen a player running with a defective disc motor. Commonly, the commutator fills with debris and shorts. The motor then has a dead spot in its rotation, and the disc motor servo draws a huge spike of current every time the defective section is energized. Same thing with a feed motor (the one that runs the head mounting assembly, or sled); that motor turns much more slowly, so the CD player may simply skip back at that location. The mechanism you're concerned with can be addressed very effectively through proper PCB layout and proper power supply design. Even using the same transformer winding for servo, digital and analog circuits shouldn't be a problem - it's actually less expensive to use separate windings or transformers (two are common) than to build one really good supply. You know you're in trouble when you have servo noise or clock spikes on your analog power supply. Most CD players use things like 7805 and 7905 regulator ICs, which are known to have very poor high frequency rejection. Digital circuits, anyone? :D The servo supply normally doesn't carry much in the way of higher frequency noise components. Still, the current drawn may cause the voltage at the transformer to droop, letting the regulators drop out. Even if you don't see dropouts, high frequency hash coincident with the line frequency may appear on the regulator outputs. That you will hear as well.

If this supply also powers (analog) stages downstream, conceivably that supply junk could impact sound quality.
The servos are often run off the unregulated portion of the power supply on the analog side. The servo signal processing happens on the digital side, since the error signals are derived from the RF amplifier (focus and tracking) and the DSP chip (disc motor speed). The analog sections will always have voltage regulators of some kind between the servo supplies and the analog supplies. Of course, any high frequency noise from the servos may ride right through to the analog section without much attenuation. That should also be audible, I would think.

A long shot, and heavily dependent on the player topology; better players, with better (separate) power supplies would be pretty immune to it.
That is very true, except perhaps for the immunity to noise if you are talking about toroidal transformers. Noise can be coupled between a pair of toroidal power transformers via the primaries if there is a filter between the transformer primaries and the outside world. Depending on how the mains filter is constructed, it may encourage the noise to enter the other transformer instead of being drained to the AC ground. You can't depend on anything, it seems. The amount of noise coupled should be small, but someone is sure to be able to hear the problem. Low AC mains voltage would be the worst case.

The assumption that an expensive CD player will be less subject to problems like these may not be true. Like many expensive, low-production items, the design and execution could be inferior to a decent mass-produced "mid-fi" make. I dislike the term "mid-fi" as it's often applied to any high volume brand. In CD player land, Denon machines are designed far better than many of the "high end" machines sold for considerably more money. For instance, many machines that use Philips transports are hampered by the performance of the newer mechanisms, and many also use a Philips design re-badged for marketing. After all, how many companies actually design complete CD players these days? There's Sony, Yamaha and now Cyrus; Toshiba, Sharp and Hitachi may also still do this. NEC was one OEM manufacturer, and so are Sony and Philips. I'm not too sure about the others out there. My own preference runs to something like the Denon machines: they normally use a Sony transport and design the rest of the machine themselves. Marantz would be in a similar position and I would expect similar performance between those two. The more expensive machines should be designed better, but one never knows for sure. I have seen $150 Sony machines in a new case that list for over $1K - you just never know. No exaggeration with this example; I was a warranty service shop for this brand.

Good points Jan, Chris
 
Hi Scott,

I'm always reticent to disagree with you as you do know what you are talking about. However, in this case I can't agree with you on this.

This statement has ignored the one point that has been made several times. The fact is, if you have but one C2 flag, the data is no longer the same as what was encoded onto the CD.

-Chris

I thought I made the point that, in the case of no C2 errors, the data is identical. AFAIK the CD player does not use DSP for normalization or equalization. Also, I was not talking about comparing files with checksums; I was talking about lining them up and subtracting. One or two ticks in a 72 min CD does not constitute a "sonic coloration".

Some of us have done the experiment; there are plenty of CDs out there with no C2 errors.
 
Administrator
Joined 2004
Paid Member
Hi Scott,
I thought I made the point that, in the case of no C2 errors, the data is identical.
You are absolutely correct. If there were no C2 flags set inside the DSP section of the CD player's decoding stage, the data coming out has zero errors. The only problem is finding the C2 flag and monitoring it. Once the data exits the DSP chip, it will show no errors. That has nothing to do with whether the data is correct or not, though; it only means that the data output does not violate the encoding structure.

I'm only trying to make the point that from the outside, short of comparing each and every bit with the master file, there is no way for you to know if a major data error existed in the process. Not unless you can monitor the hardware C1 and C2 flags.

AFAIK the CD player does not use DSP for normalization or equalization.
Surprise!
The DSP does in fact process the EQ if the pre-emphasis flag in the sub-code has been set. In early machines this was carried out in the analog domain. Most newer discs do not use pre-emphasis anymore, and there are inexpensive machines out there that simply do not implement it in any way. Now, that's a cost cutting move!
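For anyone wondering what that EQ actually is: Red Book pre-emphasis is a first-order shelf defined by 50 µs and 15 µs time constants (roughly +10.5 dB at the top of the band), and playback applies the inverse. Here is a rough sketch of digital de-emphasis, assuming a plain bilinear-transform first-order IIR with no frequency pre-warping - not the coefficients any particular DSP chip uses:

```python
# Minimal sketch of digital de-emphasis for CD playback.
# Assumptions: Red Book pre-emphasis time constants of 50 us / 15 us, and a
# simple bilinear-transform first-order IIR is "close enough" (no pre-warping).
# Real DSP chips use their own filter designs.

FS = 44100.0        # CD sample rate
T1 = 50e-6          # pre-emphasis zero -> de-emphasis pole (~3.18 kHz)
T2 = 15e-6          # pre-emphasis pole -> de-emphasis zero (~10.6 kHz)

def deemphasis(samples):
    """Apply first-order de-emphasis (high-shelf cut of ~10.5 dB) to one channel."""
    k1, k2 = 2.0 * FS * T1, 2.0 * FS * T2
    a0 = 1.0 + k1
    b0 = (1.0 + k2) / a0
    b1 = (1.0 - k2) / a0
    a1 = (1.0 - k1) / a0
    out, x1, y1 = [], 0.0, 0.0
    for x in samples:
        y = b0 * x + b1 * x1 - a1 * y1   # y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]
        out.append(y)
        x1, y1 = x, y
    return out
```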

I was not talking about comparing files with checksums; I was talking about lining them up and subtracting.
Yes, that I understand. This is about the only way to make the comparison.

All,
One question to ask, though: digital values may differ by a count or two with no effect on the sound quality at all. Does the comparison software allow for this? If so, what are its limits? Some CDs do have pre-emphasis, and once the player has processed it, a comparison against the original file should show a string of differences.
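For what it's worth, the "line them up and subtract" comparison, including a tolerance of a count or two, is easy to script. A rough sketch, assuming both captures have already been decoded to plain 16-bit PCM arrays and that the only misalignment is a constant sample offset (the search range and tolerance below are arbitrary illustrative numbers, and the alignment search is deliberately crude and slow):

```python
import numpy as np

def compare(a, b, max_offset=500, tolerance=2):
    """Align two PCM captures by a constant sample offset, subtract, and report.

    a, b:       1-D numpy integer arrays of 16-bit samples (same channel)
    max_offset: how far to search for the best alignment, in samples
    tolerance:  difference (in counts) still treated as "the same value"
    """
    n = min(len(a), len(b)) - max_offset
    best_off, best_err, best_diff = 0, None, None
    # brute-force alignment: pick the offset with the smallest mean abs difference
    for off in range(-max_offset, max_offset + 1):
        if off >= 0:
            d = a[off:off + n].astype(np.int64) - b[:n].astype(np.int64)
        else:
            d = a[:n].astype(np.int64) - b[-off:-off + n].astype(np.int64)
        err = np.abs(d).mean()
        if best_err is None or err < best_err:
            best_off, best_err, best_diff = off, err, d
    exact = np.count_nonzero(best_diff == 0)
    within = np.count_nonzero(np.abs(best_diff) <= tolerance)
    print(f"offset {best_off}: {exact}/{n} samples identical, "
          f"{within}/{n} within +/-{tolerance} counts, "
          f"max difference {np.abs(best_diff).max()}")
```

Samples that differ by only a count or two show up as "within tolerance" here, which is exactly the distinction the question above is getting at.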

One other thing tells me that something might be amiss here. When ripping a CD, choosing a high quality rip takes far longer than one at a lower quality. If error correction did in fact eliminate bit errors, there should be zero difference in the time to rip. I have to believe that the idea of a single-pass "bit perfect" copy is an illusion.

Another piece of evidence that should really concern those believing in "bit perfect" audio files is the behavior of the actual error flags. If you can get access to the hardware test points for C1 and C2 and a clock, set the clock to reset a counter: gate 1 can count C1 errors and gate 2 can count C2 errors, or just run in totalizing mode if you want. If you only have a single channel counter, do one at a time. What you will find is that the C1 flag is an active little guy, and the C2 flag is not sleeping either. You would also notice that you get a different number each time you play the same track; some errors are transient (CD-ROM drives take advantage of this with audio CDs). Admittedly, these experiments were performed by me over 15 years ago. However, those CD transports were better, with a cleaner data stream than today's garbage. Do consider that it was the same media, using exactly the same error correction coding, as anything stamped out today. I also suspect that early CD manufacturing quality may have been better as well.

If you have a high number of correctable errors (C1), the chances of uncorrectable errors (C2) occurring go way up. Even in a perfect environment - a properly aligned CD player, no vibration, and a perfect CD with no staining, scratches, hair, dust, pinholes or any other defects - a certain number of C1 errors is expected. All of this adds up to the total number of errors seen in real use.

Hi again Scott,
One or two ticks in a 72 min CD does not constitute a "sonic coloration".
Well, it's far greater than one or two ticks (a mute, I guess?). However, I completely agree that unless horrible digital distortion occurs, or a muted section, you are not going to hear anything wrong. Horrible digital distortion will only ever occur with a cheap (probably old) CD player; the current DSP chips will not allow it to happen. A really large area of damage will probably cause a skip. Now, if your CD player skips, I'll guarantee you that there were tons of permanent errors in that vicinity that you didn't notice.

If you compare the finished signal from a CD or DVD, that's far better than what you get from most other sources - assuming a good equivalent system for each medium you want to compare.

Hi cbdb,
I am confused. Anatech says lots of C2 errors, SY says they're very rare.
That is a point-of-view thing, and also an interpretive thing. There are many C1 errors per revolution (for purposes of imagining the situation; "X" number of frames would have been more accurate). You most probably will not hear these. Consider that a bad sample represents 1/44,100 of a second. The reconstruction filter will pretty much eliminate a single-sample error. In fact, do the math and figure out how long an error has to be before you could hear it if it were muted; we're talking about a large number of successive errors here. Interpolation will effectively mask all of this - within reason.
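To put rough numbers on "do the math" (the dropout lengths below are just illustrative, not measured audibility thresholds):

```python
FS = 44100                      # CD sample rate, samples per second per channel

print(f"one sample = {1e6 / FS:.1f} microseconds")        # ~22.7 us

# how many consecutive samples various mute lengths would cover
for dropout_ms in (0.1, 1.0, 5.0, 10.0):
    print(f"{dropout_ms:>5} ms mute = {int(FS * dropout_ms / 1000)} consecutive samples")
```

In other words, a mute has to span hundreds of consecutive samples before it even reaches the millisecond range, which is why isolated concealed samples go unnoticed.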

There are two ways of looking at this: what will you hear, and is it a perfect, error-free reproduction? Most of you are looking at this from an "I haven't heard any problems at all, that anatech guy is full of @#(*&@$&!" point of view. Well, you're both right and wrong. You normally will not hear problems with a properly adjusted CD player and good media. But the playback is certainly not bit perfect or error free. Certainly, the large number of people who have heard an improvement in the sound of their CD player after service proves most of us cannot hear when there are problems and high error rates. Sorry, but that is more than true. Measuring instruments we aren't.

I'd also like to add that any discussion of a digital output from a CD player has no bearing on what is being discussed here as far as digital errors and the correction of data read from the CD. Everything has already happened before the serial data is even prepared to leave. All you can do at this point is hurt it more.

-Chris
 
Posted by Anatech;

One other thing tells me that something might be amiss here. When ripping a CD, choosing a high quality rip takes far longer than one at a lower quality. If error correction did in fact eliminate bit errors, there should be zero difference in the time to rip. I have to believe that the idea of a single-pass "bit perfect" copy is an illusion.

Hello. I will leave it to you all to argue about read errors, in disco era audio technology.

But I wanted to point out that, to the best of my knowledge, "accurate" rippers take so long because they turn off the cache and C2 error correction and substitute brute-force error correction with multiple reads (as I recall, at least 8 consecutive identical passes for EAC). Not really sure about other rippers, but I would guess they are similar.
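In pseudocode, that brute-force approach is roughly the loop below. This is only a sketch of the idea; EAC's real read strategy, block sizes and retry limits differ, and `read_block`, the 8-pass requirement and the 84-read cap are assumptions used purely for illustration:

```python
def secure_read(read_block, lba, required_matches=8, max_attempts=84):
    """Re-read one block until the same data comes back `required_matches`
    times in a row, or give up after `max_attempts` reads.

    read_block: caller-supplied function returning the raw bytes for one block
                (e.g. a thin wrapper around the drive's raw read command) - assumed.
    """
    last, streak = None, 0
    for attempt in range(1, max_attempts + 1):
        data = read_block(lba)
        if data == last:
            streak += 1            # same bytes as the previous read
        else:
            last, streak = data, 1 # mismatch: start counting again
        if streak >= required_matches:
            return data, attempt   # consistent read achieved
    return last, max_attempts      # best effort: data still suspect
```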

Eric
 
Hi Scott,


Surprise!
The DSP does in fact process the EQ if the pre-emphasis flag in the sub-code has been set. In early machines this was carried out in the analog domain. Most newer discs do not use pre-emphasis anymore, and there are inexpensive machines out there that simply do not implement it in any way.
-Chris

By now you could have posted the experiment ;). I guess I'll have to; unfortunately this weekend is shot. BTW, I assumed that no one used pre-emphasis anymore (did they ever? I never saw the bit set). Under Linux you can use built-in functions to byte-wise compare two files. I posted a long time ago that, with a CD in good condition, an EAC rip and an 8X rip with the software that came with a $5 keychain MP3 player produced a full disc of bit-by-bit identical files. The only difference was a different length of lead-in, which certain anal-retentive types carry on about. I have also run CDs with no visible C2 errors; I admit 10 or 15 min was enough for me.
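For the record, the whole-file byte-for-byte check is only a few lines (on Linux the `cmp` utility does the same job; this Python sketch with placeholder file names is just to show how little is involved):

```python
def files_identical(path_a, path_b, chunk=1 << 20):
    """Return True if two files have identical contents, compared in 1 MiB chunks."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            a, b = fa.read(chunk), fb.read(chunk)
            if a != b:          # any mismatch, including unequal lengths
                return False
            if not a:           # both files ended at the same point
                return True

# e.g. files_identical("rip_eac.wav", "rip_cheap.wav")
```

The caveat is that WAV headers (and the lead-in difference mentioned above) have to be skipped or trimmed first, otherwise two rips with identical audio data can still "differ".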

Not really arguing here, just a different experience. Most claims of audibility come with the usual flowery language (my favorite, of course, is "no bass" ;) ). This would require ALL the bits to be different.
 
Administrator
Joined 2004
Paid Member
Hi Eric,
I wanted to point out that, to the best of my knowledge, "accurate" rippers take so long because they turn off the cache & C2 error correction and substitute brute force error correction, with multiple reads (as I recall at least 8 consecutive identical passes for EAC).
Well, yes. That would be my entire point. They do not get an accurate read in one pass. So, if you think about this, what conclusions do you come to?

Hi Scott,
By now you could have posted the experiment
Only if I'm set up to do that and I'm not busy. Unfortunately, neither are true.

I assumed that no one used pre-emphasis anymore
They did, and we had to check for it. The idea was to allow more bits to define the low-level high notes. I think the idea is valid, but as always, the execution was probably poor in many units. Even switching the time constants using a FET or BJT has problems, as you know; relays were used by a very few. This, more than anything, led to the drop in the use of pre-emphasis. However, it exists in the standard, and it was used. Not that it's relevant to a younger music lover who has no old CDs - and never will. It will only bother you if you realize that the reason a CD sounds off (bright, I guess) is that the encoding is no longer supported. The EQ is free in the DSP though.

Under Linux you can use built-in functions to byte-wise compare two files.
DOS as well. Remember those days? I've compared many files way back when; those 360K floppies were far more error-prone than even today's (if you can find them). I must admit that I have never attempted to compare the actual digital data as read off a CD player. It's not really something that is needed. After all, if you are getting read errors and mis-tracking, C2 will be lit up like Las Vegas at night. Even if the C2 flag goes high occasionally, it never mattered as long as it was an occasional thing; no way a customer would ever hear that. For us, making the eye pattern the best it can be for that model is all you can do. Adjusting the 4.3218 MHz clock affected tracking on larger dropouts, so we were looking at a playability issue. Possible distortion is way, way down the list of concerns.

BTW, CD players are not optimized new out of the box, people. However, the newer ones use active servos to "align to the disc"; if anything wears, they compensate until the unit suddenly doesn't play well. Some older machines had better transports that produced a cleaner data stream (fewer errors), and adjusted properly by a good technician they will easily outperform a new unit. Welcome to the world of fewer warranty service calls and mediocre performance for all: the cheap ones have improved and the better ones have devolved.

Did you know that some "walkman" types were only 12 or 14 bit? My Nakamichi OMS-7 is a 14 bit machine, but with one heck of a reconstruction filter. The less expensive early machines used a single D/A converter at 14 bits, and a couple I have seen were less. There are many things that the average person does not really want to know.

Scott, there is an easier and faster test. Just have a look at the eye patterns off various CDs and from a few CD players. That's what I've been looking at for years and years. Set up for 0.5 V/div, AC coupling and 0.5 µs/div; that should give you an eye pattern the first time you try. You may have to play with the trigger level if the signal is noisy. If the scope can't trigger on it, what can?

Most claims of audibility come with the usual flowery language (my favorite of course "no bass"). This would require ALL the bits to be different.
Which drives me crazy! Audio improvements should have run out of veils a long time ago! As for improvements to one specific frequency range - especially the bass - yes, it would take a complete change in the digital values. That entire concept cracks me up; it shows how little is understood by the snake oil people (the green marker guys). :p

I think we are coming at this from two completely different angles, Scott. Aside from a quality issue, I don't think the odd error matters at all. I just hate the phrase "bit perfect", especially when I see the C2 flag going high. ;)

-Chris
 
They do not get an accurate read in one pass. So, if you think about this, what conclusions do you come to?

That depends - is the criterion the delivery of an accurate set of bits to the D/A, or the copying of all the bits (including error correction and all checksums) to make a "mirror" of the original disc?

Like Scott, I used an LED to look at the "uncorrected" error flag in my old Magnavox player. And unless the disc was really in bad shape (with skips and silences), there were rarely any, and for well-used discs, maybe one or two on the entire disc.
 
Administrator
Joined 2004
Paid Member
Hi SY,
Can you see a blink that lasts 1/44,100 sec?

That's why we used counters and storage oscilloscopes. Watch both flags. When C1 gets busy, expect C2 to chime in now and again.

In the end, no one notices. My only point is that "bit perfect" is a fable, a story. Life just isn't that clean, and our equipment ages, as do our discs. So, put in the realistic terms that most of us experience, you are not getting a "bit perfect" performance. And... it doesn't seem to matter much anyhow.

-Chris :)

Edit: I'd be happy if we were talking about the corrected end result; that is the point of view I'm taking here. For ripping, I sincerely hope it's only working to eliminate C2 errors. Otherwise it would never, ever finish - you will never get a read with no C1 flags, ever!
 
Posted by anatech
Well, yes. That would be my entire point. They do not get an accurate read in one pass. So, if you think about this, what conclusions do you come to?

The point is that 8 passes take time. Who says they aren't accurate?

My software tells me about error counts. I have ripped over 400 CDs into my PC. Only twenty to thirty (I did not keep notes) re-read more than 8 times. So I would say most of them not only read correctly the first time, they read right 8 consecutive times.

As to the tracks that read up to 84 times, they probably have suspect data. All of those CDs had visible damage, mostly corrosion/delamination (AFAIK), on CDs from the '80s.

Actually, in my limited experience with a $39 Lite-On drive, the odds of a perfect read the first time are incredibly high. As to the data from the damaged discs, I would think the data derived from so many reads is far more accurate/listenable than what any CD player would produce, even if it is not perfect.

Eric
 
Administrator
Joined 2004
Paid Member
Hi Eric,
Knowing what I know about audio CD players and CDs, I have no illusions about what is really going on. I'm actually very good at setting CD players up.

The one thing I'll agree with you on is: it doesn't matter, because people do not seem to notice. My only goal is to correct what I see as false information.

BTW, the CD-ROM firmware will not re-read a track unless there has been an uncorrectable error. I guess software can override that, but given that rippers are marketed on how fast they are, do you not think they would avoid wasting time at all costs? Just from a logical viewpoint: if it could read in one pass, it would. Whenever I make a CD or DVD, I always have it scan after burning. Good blanks give a one-pass checkout; you can hear it when the drive has to re-read a track.

-Chris
 
....BTW, the CD-ROM firmware will not re-read a track unless there has been an uncorrectable error. I guess software can override that, but given that rippers are marketed on how fast they are, do you not think they would avoid wasting time at all costs?....

Fast rippers, which use the drive's firmware to rip at high speed for encoding to your MP3 player, are a completely different topic than secure rippers.

Again, my point is to eliminate the problems of primitive optical encoding by ripping the data to a properly error-corrected format. How else does one get accurate data from CDs without redundant re-reads? (Who cares how long it takes? It is done once, and done unattended.)

Either way, my experience with actually ripping data is that mis-read data is an occasional aberration (read: inaudible). Your vast knowledge of disc players and their possible flaws notwithstanding, the fact is the errors are rare (with the exception of damaged discs, which probably sound better ripped anyway), and CD players based on obsolete technology are easily bypassed.

The only shame is that music is still distributed in a format that requires extra effort to recover. But again, it is only done once, preferably with a new disc.
 
One more thing. I have an '80s copy of Boston's "Third Stage" album. When you hold it up to a light, it looks like a star map, not a CD. It took EAC over 4 hours to rip it, and it identified about 8-9 C1 errors. The software points the errors out, tells you where they are, and offers to do the C2 correction for you. Even listening really hard with headphones at the exact spots where the errors were reported, I was unable to distinguish any of them. I am sure a CD player would refuse to play the disc at all.

So even my worst ("unplayable") CD has fewer than a dozen C2-corrected errors in 45 min of music. Although they are clustered in tiny sections, they are still not audible (to me).
 