Some are suggesting that this timing would need to be on the order of 10-20 ps to be inaudible.
Suggestion - as in: I think I can but cannot verify my statement?
Suggestion is what a mentalist does. Or Uri Geller. Or the "cold readers" of all sorts. So called "psychics".
And there is this:
http://mixonline.com/recording/mixing/audio_emperors_new_sampling/
The main reason it knocked the wind out of me was its conclusions. It was designed to show whether real people, with good ears, can hear any differences between “high-resolution” audio and the 44.1kHz/16-bit CD standard. And the answer Moran and Meyer came up with, after hundreds of trials with dozens of subjects using four different top-tier systems playing a wide variety of music, is, “No, they can't.”
ionomolo said:I meant the jitter on the records/input themselves, like if one single expensive update on the RAM timing was to solve timing issues on all legible records.
There is nothing we can do about jitter produced by the recording process. The information is lost.
dave
audio-kraut said:
The BAS study... a lot of people are using that as an illustration of just how poor the ABX test is.
Another good example is all the ABX testing that went on to validate one particular audio reduction scheme, and it got blown out of the water by one listener (Bert Locanthi) picking out a very audible artifact that all the ABXing missed.
dave
The information is lost.
Since when does jitter lead to loss of information?
AFAIK jitter leads to "noise" (information-neutral, as noise is usually not considered information) and an increase in THD. But maybe you have different information.
The BAS study... a lot of people are using that as an illustration of just how poor the ABX test is.
Care to name a few? Or are you just "suggesting"?
Something like this:
So how did the audio community respond to this? Meyer tells me that he got a lot of "thank you" and "it's about time" responses. He also says that the article passed through the Journal's rigorous review process without any argument. But some loud screams were heard from various members in the audio-tweak community, and a number of heated and sometimes nasty flame wars erupted on several audio forums within hours of the article's release — many of them started by people who hadn't bothered to read it first.
audio-kraut said:Since when does jitter lead to loss of information?
Jitter is a loss of information. Digital sampling assumes exact spacing between samples; jitter is the loss of information about the timing of the samples.
dave
Sorry, but what is the purpose of this thread? And why involve jitter? And why pay any attention to something as obviously loony as the idea that two identical files reproduced on the same system can consistently sound different, or that converting from uncompressed to lossless and back to uncompressed can consistently change the way a file sounds? I underline "consistently", as disk fragmentation may be a random factor.
And don't forget: not all subjectivists are clueless morons 🙂
analog_sa said:Or that by converting from uncompressed to lossless and back to uncompressed can consistently change the way a file sounds.
That that would be possible I find a real stretch. The data can't tell where it came from if it is the same.
dave
What's the purpose of this thread? Um...there isn't one, as such. It was rather late in my timezone, and I was annoyed at one radical example, and I posted when I probably should have just gone to bed and slept it off.
Hi Planet10: I'm not sure how you figure that jitter comes into it, when we're discussing the playback of two files which have been verified to be identical. Jitter is an entirely separate, very much real, but much overstated IMHO, issue.
Hi analog_sa: I have no problem with subjectivity, provided that there is actually a difference to get subjective about. It's up to the listener to decide whether A is better than B, but when A == B, the debate is ridiculous.
For instance, I personally assert that most amplifiers sound the same, provided they have low distortion, a high input impedance and a low output impedance, and sufficient power to avoid clipping.
However, it's not up to me to say whether or not such amplifiers universally sound better than a single-ended triode which may measure very poorly by the above criteria. That's subjective. That's up to each listener to decide. Where there's a difference, subjectivity is welcome.
TheSeekerr said:
For instance, I personally assert that most amplifiers sound the same, provided they have low distortion, a high input impedance and a low output impedance, and sufficient power to avoid clipping.
You have opened Pandora's box here. There is something in amplifiers that makes them sound different even if they have low distortion. That's why low-end high-feedback designs don't sound good. I believe it has to do with dynamic behaviour, but I won't get into this. BTW it's obvious that you haven't built a lot of low-distortion amplifiers.
For instance, I personally assert that most amplifiers sound the same, provided they have low distortion, a high input impedance and a low output impedance, and sufficient power to avoid clipping.
I think it's more correct to say, "When a difference between the sound of two amplifiers is demonstrated, it is always due to straightforward and easily measurable engineering explanations."
Hi SY,
I guess my only concern with the OP's premise is the method by which the identical files are verified to be "bit perfect". I know enough about data errors from CD playback to be very suspicious of a "bit perfect" rip.
-Chris
TheSeekerr said:I think it's more correct to say, "When a difference between the sound of two amplifiers is demonstrated, it is always due to straightforward and easily measurable engineering explanations."
Now that is a statement that I'm entirely on board with.
MJL21193 said:Gee, I wonder who you're referring to? 😀
I can't possibly imagine. 😉
Netlist said:Now I'm confused:
My references to timing have to do with his assertion that discs burned from different devices sound different despite being "bit-perfect". One of the papers I found while looking for a reference to human time acuity measured a huge range of jitter in differing hardware. From reading that, one would almost expect CD-Rs burned from different devices to sound different... then there are the stories we hear of a set of bits going off to 2 mastering facilities and getting back 2 bit-perfect CDs that sound different.
dave
People don't want the party to end. DIY something has to best Sony or whoever. I recently read a teardown/review by an extremely sophisticated ham radio operator of the Sony HDFM ($99) radio. He concluded that, hands down, it beat every audiophile tuner ever produced on every technical spec that mattered.
Hi Hugo,
I think we are skirting around the issue of how a "bit perfect" rip is reported. I am convinced that this basic concept is flawed, and that the software available to most people is flawed in that it reports "fixed" files as "bit perfect". That is not what a valid data block means, though. A valid data block only means that no illegal values exist. It says nothing about what was done to correct a data error that corrupted that block.
It seems to be a little-known fact that reading an audio CD has an expected error rate. In fact, CD DSP chips at one time had error flags available to monitor, called the C1 and C2 error flags. I know you know some, if not all, of this, but I want to put everyone on the same page.
The C1 flag is set for any data error read from the CD after demodulation. The information comes off as a modulated current from the photodiodes. After I-V conversion, we have an analog RF waveform that is encoded for error recovery and to constrain the run lengths of "0"s and "1"s, which sets the minimum and maximum frequencies received. The exact coding is not of great importance except to say that it is different from what is used with computer data CDs.
The C2 flag is set whenever there is an unrecoverable error in the data stream. This actually indicates a string of invalid data. What this means is that the distributed data cannot be read, so that data block (or more) has experienced damage that is not repairable, or is simply gone. Never to return; history; impossible to be "bit perfect" - ever. Period.
The C2 flag is actually set more often than you would want to know about, and I think that is why it's no longer with us. Now, this flag is also set in the data stream to mark the bad blocks. At this point, further data processing can be done. The purpose of this is to conceal the bad data block. We now have some choices to make, some of which depend on the history and future of the data. Right now you should begin to feel uneasy. This is also why the older practice of buying a cheap transport and a good D/A was so silly.
So, what do we do and what choices do we have now? Well, we can attempt to make an educated guess as to what the data should have been by looking at the surrounding good data. This is the nicest way of handling lost information and may actually be very close. This is called interpolation, and the better DSP chips do this when they can.

The next attempt would be to reinsert the previous good data. Okay, not perfect, but we are now attempting to avoid nasty noises. Say "bye bye" to the concept of "bit perfect"; we are getting heavier into error concealment.

What if we can't do that? Yuck. The best thing we can do right now is to insert a digital word for zero output. You might hear it, maybe not. Remember we are talking about slices at 44.1 kHz (for the non-oversampling crowd), or higher if it's oversampled. Yes, we will now have more zero output, or maybe a nice ramp down to, and back up from, the next valid value. I hope. Again, it all depends on the DSP complexity.

Okay, so what do those cheap and nasty DSP chips do? Simple: they do nothing. Yup, they pass the corrupted data straight on to the D/A converter stage. The converter then tries to output whatever value the data is. Either that, or the output is muted (drastic) when errors continue for some time.
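The concealment ladder above (interpolate, hold the previous good sample, then mute) can be sketched in a few lines. This is a simplified view with a hypothetical per-sample bad flag; real CD DSPs operate on interleaved CIRC frames, not individual samples like this:

```python
import numpy as np

def conceal(samples, bad):
    """Replace samples flagged as bad: interpolate if both neighbours
    are good, else hold the previous good sample, else insert silence."""
    out = samples.astype(np.float64).copy()
    for i in np.flatnonzero(bad):
        if 0 < i < len(out) - 1 and not bad[i - 1] and not bad[i + 1]:
            out[i] = 0.5 * (out[i - 1] + out[i + 1])  # 1) interpolation
        elif i > 0 and not bad[i - 1]:
            out[i] = out[i - 1]                       # 2) reinsert previous good data
        else:
            out[i] = 0.0                              # 3) zero-output word (mute)
    return out.astype(np.int16)

pcm = np.array([100, 200, 32000, 400, 500], dtype=np.int16)
bad = np.array([False, False, True, False, False])
print(conceal(pcm, bad))  # the flagged sample becomes (200 + 400) / 2 = 300
```

The "cheap DSP" case Chris describes is simply skipping this function entirely and passing `pcm` straight through, corrupted value and all.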
We do have some good luck on our side, though. If the C2 errors are of short duration, the filter should bring whatever value the bad data holds closer to the analog values on either side, through the wonderful integration characteristics of a low-pass filter. Wait! What happens if our plucky DIYers have modified their CD player and removed this filter? I guess the muting circuit then operates when C2 errors go on for a while. What, you say you removed the muting transistors and did nothing to replace that function? I guess those people get to listen to screeching and other rather unpleasant noises. All you can now hope for is that the system isn't turned up loud and that the noise doesn't contain much high-frequency content. Problem is, the random values will produce noise rich in high-frequency content.
Sorry for the aside there, but I wanted to cover that aspect.
Now wait. We are not processing the signal to its final analog destination; we are sending the signal out in a digital format. What does that mean to us? It means that the C2 error flag does not appear at all. All further attempts at repairing the damage will not occur (to the best of my knowledge, anyway). The bad data is put into a "legal" form so that the CRC value agrees with the (now corrected?) data. I think the better DSP chips will go down the list: interpolate, then reinsert the previous good data, and finally mute digitally (insert a zero-output value, which is not an all-zeros word; that would be a negative peak). Of course, the cheap DSP would simply flag and output the rotten data. Yahoo!
Now, are computer CD-ROM drives equipped with good DSP chips for audio, or really cheap ones? I'm thinking they run with the cheap DSPs, since their intended use is computer data CD-ROMs, where the data correction is rather more important.
Knowing this, what exactly does a program mean by "bit perfect"? I am at a loss to see how the lost information can reappear.
-Chris
The discussion wasn't about playback from a CD. It was about playback of a track ripped with substantial software error correction (through re-reads, among other things), compared to a database of other users' rips of the same track (the odds of two sets of errors from two different systems resulting in the same checksum are vanishingly small), then played back against another rip of the same track from a different type of optical disc, also resulting in exactly the same data with exactly the same checksum.
Most certainly not about perfect realtime playback from the CD spinning at 1x.
The two rips truly are bit-perfect to the intended data on the CD, but even if they weren't, it wouldn't change anything. The checksums are identical: whatever errors may have occurred in ripping, they beat the stupendously improbable odds and matched exactly. Even if they are both imperfect, they are identically imperfect.
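The checksum argument in miniature; a hypothetical sketch using CRC-32 (AccurateRip uses its own offset-aware checksum, so this is only illustrative of the principle):

```python
import zlib

# Two hypothetical rips of the same track, made on different drives,
# that came out byte-identical.
rip_a = bytes(range(256)) * 4000
rip_b = bytes(rip_a)

# Identical data gives identical checksums, regardless of where it came from.
crc_a = zlib.crc32(rip_a)
crc_b = zlib.crc32(rip_b)
print(hex(crc_a), hex(crc_b))

# A single flipped bit is virtually certain to change a CRC-32, so two
# independently-damaged rips matching by accident is roughly a
# 1-in-2^32 event per comparison, and cross-checking against many
# users' rips drives the odds down further.
damaged = bytearray(rip_a)
damaged[12345] ^= 0x01
print(zlib.crc32(bytes(damaged)) == crc_a)  # False
```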
Well thanks.
If mako and Tom__x (among others) have it right, it would put an end to my confusion.
Sure Chris has posted the most detailed and accurate explanation but it didn't solve the problem:
“I am at a loss to see how the lost information can reappear.”
Indeed, I keep believing that the checksum algorithms are the most solid tools we have at hand.
If hundreds, sometimes thousands, of rips worldwide, created under mostly different conditions, generate an identical checksum, I cannot believe that all these files would be different, given that the drives are well calibrated and in good working condition.
Otherwise, the whole effort of ‘checksumming’ would be useless. (Which it perhaps is)
I’m referring to the AccurateRip plugin here:
http://www.exactaudiocopy.de/en/index.php/overview/basic-technology/accurate-rip/
Also, if I subtract two identical files in Audition and the net result is zero, what is there to hear?
/Hugo
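Hugo's null test, sketched on raw sample arrays (assuming both files decode to the same PCM; the data here is made up for illustration):

```python
import numpy as np

# Stand-ins for the decoded samples of the two "identical" files.
rng = np.random.default_rng(1)
file_a = rng.integers(-32768, 32767, size=44100, dtype=np.int32)
file_b = file_a.copy()

# Subtract sample-for-sample, as Audition's mix-paste-inverted does.
residual = file_a - file_b
peak = int(np.abs(residual).max())
print("residual peak:", peak)  # 0 means there is literally nothing left to hear
```

If the residual is identically zero, any remaining "difference" between the two files must arise downstream of the data, not in the data itself.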
Hi Tim,
Consumers rarely get the truth about anything. Computer people tend to really be sure about just about everything with no real knowledge. I questioned the basic truth of a "bit perfect" music file. This would seem to be confirmed by the act of ripping a CD. The more perfect you want the rip to be, the longer the process can take.
When you stop to consider how people invoke the "Red Book" CD standard, be aware that the first CD players designed to adhere to it rejected most CDs when play was attempted. I know because we had to deal with the customers on this. The Nakamichi OMS-5 and OMS-7 had a major servo amp redesign and modification to deal with this reality. I performed these mods on several machines with complete success. The point I'm making is that most CDs manufactured do not comply with the Red Book standard. Yet it is this very standard that people hold up when confronted with player problems.
Truth, when it relates to consumer products and computer operating systems, is a complete joke. Understand that the C2 hardware error flag is no longer accessible, to the best of my knowledge. This is true for CD-ROM drives as well as consumer CD players. Re-reading tracks is about the only way to recover from soft errors, the temporary ones caused by mis-tracking or dust. A hard error from an information-layer defect is permanent, no matter how many times that poor CD-ROM drive reads over the affected area.
Last thing to think about: a defect in the master will appear on all copies until a new metal master is created. I'm sure all those copies should agree, but there are also defects unique to individual CDs, and those should not all agree. This should bother you - a lot.
-Chris
"The discussion wasn't about playback from a CD."
Then, what is your source for the data? The only correct representation of any song on CD is the master file created at a CD mastering suite, and any digital data type copies of same.
"It was about playback of a track ripped with substantial software error-correction (through re-reads among other things)..."
I stated my deep suspicions about this.