Is the 44.1 kHz in WAV quality actually supposed to translate directly into 44.1 kHz of sound frequency? I always thought it was something else. If that is so, I can't explain why I can hear a difference between 44.1 kHz/48 kHz and 96 kHz, not to mention the huge difference between 22 kHz and 44 kHz.
I haven't much experience of audio oversampling, but I have with video and still images, being a bit of a home cinema and photography buff. In those two cases oversampling does result in a much more analog rendition.
The practice of recording studios compressing the used dynamic range to make recordings "hot" has become the norm over the last three decades. It has two causes:
1) The general public make the mistake of thinking that something sounds better because it is louder, even if they do not recognise it as such. So no record company wants to have their songs sounding worse than their competitors'.
2) Songs are tailored to the lowest common denominator in playback equipment; studios will not release songs that sound worse than their competitors' on cheap boomboxes and car stereos.
It would be great for studios to release recordings that are remixed for us audiophiles, preferably in a higher quality than Red Book. But for studios the risk is too great. That is ultimately why SACD and DVD-A failed. The market is too small for these products, in an age where 128 kbps MP3s are perfectly fine for most people, and the risk of no return or a loss is too great.
The solution is not my idea at all; I've seen it proposed on other forums in relation to movie production. The internet can break down the traditional supply chains between producers and consumers. If interested consumers were able to "pre-order" a wanted item and put a deposit in a holding fund, then once the total reached a preset amount that made it worthwhile for the studio, production could be started on the said product. This would eliminate the risk to the studio, many middlemen, and holding costs on all fronts of the supply chain.
PS I'm one of those freaks that can see beyond 72fps 😀 returns start diminishing for me rapidly around 90fps
zBuff said: Is the 44.1 kHz in WAV quality actually supposed to translate directly into 44.1 kHz of sound frequency? I always thought it was something else. If that is so, I can't explain why I can hear a difference between 44.1 kHz/48 kHz and 96 kHz, not to mention the huge difference between 22 kHz and 44 kHz.
The CD standard prescribes a sampling frequency of 44.1 kHz and a data width of 16 bits for each channel. Note that this is the sampling frequency, i.e. the rate at which sample data is taken. According to the Nyquist theorem, the maximum usable frequency, in theory, is half that frequency, i.e. 22.05 kHz. This means that we cannot record higher frequencies than that on a CD.
A WAV file does not have any particular sampling frequency or data length. First, it can contain several different subformats, of which one (which is called RIFF, if I remember correctly -- it has been a while since I wrote any program working on wav files) is used to store PCM (pulse code modulated) data, which is what is also used on CDs. However, in the WAV file it is possible to specify different data lengths and sampling frequencies, and also different numbers of channels (more than two channels too, if desired). Hence a WAV file need not correspond to a CD at all. But, if we rip a CD to a WAV file it will normally be stored in 2 channel, 16 bits per channel, 44.1 kHz sampling frequency format, just as on a CD. There will thus be a true one-to-one correspondence on the data level between the original CD and the ripped version in the WAV file (if no uncorrectable errors occurred leading to bit errors).
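To make that easy to check, here is a rough Python sketch using the standard-library wave module; the file name is just a placeholder for whatever you ripped, and it assumes a plain PCM WAV.

```python
# Rough sketch: print the format parameters of a (PCM) WAV file with
# Python's standard-library wave module. "ripped_track.wav" is only a
# placeholder name; a straight CD rip should report 2 channels,
# 16 bits per sample and 44100 frames per second.
import wave

with wave.open("ripped_track.wav", "rb") as w:
    print("channels:       ", w.getnchannels())
    print("bits per sample:", w.getsampwidth() * 8)
    print("sampling rate:  ", w.getframerate())
    print("duration (s):   ", w.getnframes() / w.getframerate())
```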
When you say you hear differences between 44.1, 48 and 96 kHz sampling frequency, I suppose you are listening through a computer sound card. I think there could be several reasons for this (if we assume there is a real audible difference).
First, most soundcards work only with 48 and/or 96 kHz natively, so any source with different sampling frequency (like 44.1 kHz) is upsampled by the soundcard. It could be the case that you get a better result if you instead use a good program to convert the file itself to 48 kHz, doing the upsampling in software without real-time constraints. Soundcards might use simple solutions for the upsampling to save money.
Second, if you hear a difference between 48 and 96 kHz it could be because the soundcard, especially the digital filters, can do a better job with a 96 kHz data stream than with a 48 kHz one, even though there isn't really more information there than was in the original data stream.
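Coming back to the first point, here is a rough sketch of doing the 44.1 kHz to 48 kHz conversion offline in software, using SciPy's polyphase resampler; the 1 kHz test tone and the default filter settings are just assumptions for illustration, not a recommendation of any particular converter.

```python
# Sketch: offline sample-rate conversion from 44.1 kHz to 48 kHz using
# SciPy's polyphase resampler, instead of letting the soundcard do it
# in real time. The input here is only a synthetic 1 kHz test tone.
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 44100, 48000
g = np.gcd(fs_in, fs_out)            # greatest common divisor: 300
up, down = fs_out // g, fs_in // g   # 160 and 147

t = np.arange(fs_in) / fs_in                   # one second of audio
samples = np.sin(2 * np.pi * 1000 * t)         # 1 kHz sine at 44.1 kHz

resampled = resample_poly(samples, up, down)   # now at 48 kHz
print(len(samples), "->", len(resampled))      # 44100 -> 48000
```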
Thanks for explaining that, makes perfect sense. Funny, I'd never realised WAV was an encapsulation format, much as AVI is.
When I was in high school, in physics class the teacher one day brought in a tone generator and successively played higher and higher pitched tones, asking which students could still hear the sine wave. I got to 23 or 24 kHz myself, but one guy reckoned he could hear up to 30 kHz!!!
As a result I've never had much belief in the theoretical limit of human hearing being only 20 Hz to 20 kHz.
Even now, at my much older age, I still get bothered by TVs, lights, mosquitos and sometimes electrical wiring, despite many years of abuse 😀 I know my sensitivity at those higher frequencies has diminished quite a bit, but I still wouldn't consider myself a golden ear of any sort. Need more training perhaps, but I'd rather not; I might spend too much time analysing the music rather than just lying back and enjoying it 🙂
Yes, and I adhere to the school of thought that believes the hifi component that results in the greatest improvement in sound is a good bottle of wine, or two 😀
....or three?
Re: Re: thanks for clarification
Christer said:
there is no technical difference between oversampling and upsampling ... the terms tend to be used for two different purposes also in audio playback. Correct me if I am wrong.
There are no firm definitions of the terms and trying to pigeonhole them is futile.
Oversampling usually refers to increasing the sampling frequency by an integer factor (usually of the form 2^k for some k).
Only because in the late 90s marketing people adopted 'upsampling' to try to distinguish fractional oversampling (e.g. 44.1k->96k) from till-then traditional oversampling, suggesting the advent of something new (which it wasn't, except for the money in their pocket).
This can either be because the DAC has fewer than 16 bits and the
Syntax error: 'either' not followed by a matching 'or' 😉
For instance, most early CDPs had only 14 bit DACs, since 16 bit DACs were still very expensive. However, fast 14 bit DACs were still reasonably priced, so it was common to oversample by a factor of 4 in an attempt to move the precision of the two least significant bits into the time domain.
Not 'most early CDPs' but just all early CDPs that were built on Philips basics.
Japan Inc employed slow 16-bit DAC chips with analogue-domain reconstruction (anti-imaging) filtering. Philips opted for 4x oversampling with digital-domain reconstruction filtering, and once oversampled decided they could get away with 14 bit DAC silicon, adding noise shaping to the formula to retain 16 bit performance. (Or the other way around). This was their TDA1540 DAC partnered with whatever the digital filter chip was called.
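For the curious, a toy Python sketch of the basic error-feedback idea (my own illustration of first-order noise shaping, not Philips' actual circuit or coefficients): requantise 16 bit words to 14 bits while feeding the quantisation error back into the next sample, so the error ends up at high frequencies where the oversampled system can filter it away.

```python
# Toy illustration (NOT the actual Philips implementation): first-order
# error-feedback noise shaping that requantises 16-bit samples to 14 bits.
# Each sample's quantisation error is added to the next sample, pushing
# the error energy towards high frequencies.
import numpy as np

def requantise_to_14_bits(samples_16bit):
    out = np.empty_like(samples_16bit)
    error = 0
    for i, x in enumerate(samples_16bit):
        v = x + error        # add the previous sample's quantisation error
        q = (v >> 2) << 2    # drop the two least significant bits
        error = v - q        # remember the new error for the next sample
        out[i] = q
    return out

# A small 16-bit ramp, just to show the requantised output.
ramp = np.arange(0, 64, dtype=np.int32)
print(requantise_to_14_bits(ramp)[:8])   # [0 0 0 4 4 4 8 8]
```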
BTW, the very earliest literature describing the Philips oversampling DAC system used the term 'upsampling', so there you go ...
(That shortly afterwards everyone named it 'oversampling' probably has its origins in the Dutch language having only this one term, while English has both 'up' and 'over'; the system was developed in Belgium and Holland, so ...)
Nowadays oversampling is either used because 1-bit DACs are used or to move the signal up to a higher frequency band to allow for more effective digital filters.
Nowadays it is ALWAYS used for digital-domain reconstruction filtering, and this is its most relevant application to us.
It is also used when a low-bit DAC architecture is employed, for decimating long source data words to short words, or just one bit, while pushing the information into a faster time domain (crappy explanation). This is not very relevant to us as it ties in with particular DAC silicon architectures and this is all fairly tedious and boring stuff.
Upsampling (and correspondingly downsampling) seems to be used when there is simply a need to convert a signal from one sampling frequency to another.
True enough, these are the generic definitions.
For instance, most computer soundcards work with 48 kHz only (or a
I don't agree, unless you mean 'most computers have a Soundblaster and SBs only work at 48kHz'. Now an SB is a poor example of a sound card ...
multiple of that, like 96 kHz). Hence, all 44.1 kHz signals, like wave files with ripped CDs, must be converted to 48 kHz, which is done automatically by the soundcard and/or the soundcard drivers.
... or by Windows, and then with its quality depending on some arcane system settings you have (not) done.
up-/downsampling is not commonly seen in CD players, although I
That just depends on your definition of the moment, which carries no weight whatsoever.
And anyway, we don't even need such definitions. All one has to remember is that any operation that modifies the effective data rate or sampling rate of a digital audio stream, be it named 'over', 'up', or 'down', has to obey certain laws and thus has to implement certain mathematical operations. These laws and operations are immutable, and while their implementations may differ in details (blown up by marketeers!), their fundamentals are carved in stone.
pesky said:
This does however raise an interesting point, since you can oversample any digital audio signal (in order to perform some sort of function, whatever it is) but oversampling does not necessarily change anything, it just means that you are reading the signal at a faster rate than it was recorded at.
While correct to the letter this definition of oversampling is quite useless in our context of audio replay, and thus it can be considered wrong.
If you 'oversample' an audio signal contained below the original half sampling frequency Fn1 (the Nyquist frequency is half the sampling frequency, Fn1 = Fs1 / 2), let's say by a factor of 2, either by replicating data samples or by zero padding (i.e. 'doing nothing'), then the baseband signal changes (i.e. 'not nothing done'), as its first image in the frequency range Fn1-Fs1 now falls within the post-oversampling baseband frequency space Fn2 = 2 x Fn1.
In order to make this into a valid operation (as always in the context of digital audio replay) the dry act of oversampling has to be followed with a filter action, in the new frequency space, at the old Nyquist frequency. That filtering is a massive mathematical operation. Since these go together as Fred and Ginger, as Tom and Jerry, as me and a good bottle of wine, 'oversampling' often means the two together, 'upsampling' means just the same, and 'downsampling' means the same with the sequence of operations swapped (filter first, then decimate).
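As a rough Python sketch of that two-step dance (zero-stuff, then filter at the old Nyquist in the new frequency space), under the assumption of a simple windowed-sinc FIR rather than any particular player's filter:

```python
# Sketch of 2x oversampling: zero-stuff to double the data rate, then
# low-pass at the OLD Nyquist frequency (fs1/2) designed for the NEW
# rate (fs2). The 255-tap windowed-sinc FIR is only illustrative.
import numpy as np
from scipy.signal import firwin, lfilter

fs1 = 44100
x = np.sin(2 * np.pi * 1000 * np.arange(fs1) / fs1)   # 1 kHz test tone

# 1) zero padding: every second sample of the new stream is zero
y = np.zeros(2 * len(x))
y[::2] = x

# 2) reconstruction filter at the old Nyquist, in the new sample-rate
#    domain; the factor 2 restores the amplitude lost by zero-stuffing
fs2 = 2 * fs1
h = firwin(255, cutoff=fs1 / 2, fs=fs2)
oversampled = 2 * lfilter(h, 1.0, y)
print(len(x), "->", len(oversampled))                 # 44100 -> 88200
```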
upsampling implies a change in one of the parameters of the signal, whether it's bitdepth or frequency, so there is some sort of mathematical operation performed as well.
Let me eliminate a misconception.
Changing wordlength (I hate the term 'bitdepth', especially when there exists such a clear term as '(sample) word length') per se has NOTHING to do with over-/up-/downsampling. *****sampling is an operation that changes data RATE (i.e. speed, tied to temporal resolution), while wordlength is a data FORMAT (tied to amplitude resolution).
One could imagine taking an original 44.1kHz/16 bit audio data stream and reducing it to 44.1kHz/12bit. No-one would call this downsampling. The correct term is requantisation.
One could imagine taking an original 44.1kHz/16 bit audio data stream and enhancing it to 44.1kHz/24bit. No-one would call this upsampling. The correct term would be requantisation, if not for the operation being quite useless on its own, despite clever (?) schemes like Denon's Alpha/AL24 processing or Wadia's tacking-on of lower-level dither (pretty useless procedures on their own, read on).
---
Now let's go back to audio replay *****sampling and the required low-pass filter action. Filtering is equivalent to convolving two time series of data samples: one series holding the audio, the other holding the filter's impulse response (a series of 'samples' that 100% characterises the filter's frequency and phase response).
Convolving two streams, each with 16 bit sample words, yields one stream with up to 32 significant bits per sample. Presto, wordlength enhancement for free!
(Example, if you convolve the series [3 10 7] with [1 2 1] and then normalise back to a peak level of 10 you'll get [1.0 5.3 10.0 8.0 2.3], so suddenly there's life beyond the decimal point.)
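The same toy numbers run through NumPy, showing the wordlength growth; this is just the parenthetical example above, nothing more:

```python
# Convolve the toy "audio" [3 10 7] with the toy "filter" [1 2 1] and
# normalise the peak back to 10: the result has fractional parts, i.e.
# it needs more bits than either input to be stored exactly.
import numpy as np

audio = np.array([3, 10, 7])
impulse = np.array([1, 2, 1])

convolved = np.convolve(audio, impulse)          # [ 3 16 30 24  7]
normalised = 10 * convolved / convolved.max()    # scale peak back to 10
print(np.round(normalised, 1))                   # [ 1.   5.3 10.   8.   2.3]
```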
And while the convolution/filter action cannot create new information, truncating the result back to 16 bits would destroy information, which cannot be allowed (at least not beyond a certain extent).
That's why a 16 bit signal, once oversampled (with filtering), has to be converted back to analogue with a DAC chip that can handle more than 16 bits of resolution, to preserve the original 16 bit signal accuracy.
any time there is a change in the signal, errors are present, and these add up to artifacts at some point (or not) depending on the strategy and method of conversion.
Yes, it is all in the details of the implementation. That holds for any domain where competence is required.
Remember that first Philips digital oversampling filter that read 16 bit data and outputted 14 bit data to the DAC?
Christer said:
No, in both cases you must add new samples, since you are going up to a higher sampling frequency. One way to do this is to compute new samples by some form of interpolation, which I think is necessary if the new sampling frequency is not an integer multiple of the source frequency. In the case it is an integer multiple, you can either compute the new samples by interpolation or let all the new samples have the value zero. In neither case are you adding any new information. The difference, I think, is that if you insert zeroes the digital filter has to do also the work corresponding to the interpolation you do in the other case.
The digital filter has in ALL cases to do a lot of work as it has to implement the prescribed brickwall low-pass at Fn1. Low-pass filtering is an interpolating action, and vice versa.
However, with non-integer *****sampling an initial round of interpolation is required in order to map the source time frame of the data onto the target time frame, simply because the two frames are not nicely aligned onto each other.
Zero One said: need for some serious threads just devoted to the conversion of LPs to digital and post processing.
When given some time I intend to write a series on this for www.tnt-audio.com
the maths involved in audio ... as a digital imaging industry guy I can see the parallels.
And the parallels are mostly valid, with the proviso that near-ideal filtering (like 1025-tap Sinc filtering) is possible in audio, and utterly impossible in imaging/video.
Hi Werner
I am looking forward to your articles; this is an exciting area. I am currently assembling a range of gear/software specifically to get the most out of this process. Like many others, I wish to do something considerably better than straight transfers (which I have done many, many times, with basic editing).
There really is a lack of heavy-duty info out there for those of us with more serious intent in computer-based sound, and like many I lack your depth of background knowledge in the area of digital processing of sound... and I always tell my students that if you want mastery you need knowledge of what is going on under the surface.
Thanks Werner for scraping away a little of the surface for me.
Many thanks
Zero One.
simplification or not
It seems like the necessity for upsampling, and the attendant chance of artifacts, is greatly reduced when the source material and the bit depth and frequency (of the A/D) are the same. (PLEASE correct me if I'm wrong.) At this time, for CD reproduction, if you had a D/A that operated at 44.1 kHz / 16 bits, there would be no conversion necessary, and so no error. If the signal were oversampled so that it fell precisely on the clock pulse, then that would be the best setup.
I realize that in the past DACs have not been available to achieve such a result, but with the state of chip foundries such as they are, creating a chipset that would achieve that should be easy, and it would seem to be the best option when buying a CD player, unless there is some sort of companding or pre-emphasis going on to create more dynamic range (or whatever) as a REQUIREMENT to be able to listen to CD information, like the RIAA curve that is necessary to hear an album properly.
simpler is better, unless you're an idiot.
further reading
I read about the Nyquist theorem, and it seems that the reason for additional processing is lost data due to imperfections in the source material, i.e. bad disc sectors due to dirt or incomplete "burning", and that the reconstruction of the lost data is accomplished by averaging the two signals and inserting the "lost" data into the stream rather than silence. So the better your source (clean, well rendered discs), the less need there would be for "guessing" at the missing word(s).
I have to wonder if there are any programs that would tell you the percentage of "lost" data on a disk, so that you would know how much was being "reconstructed" and how much actual source material is there.
am I still missing something?
pesky
Ripping comparison
I needed to recover a CD anyway, so I ran the experiment. My copy of John Fahey's 'Fare Forward Voyager' fell on the floor of my car and got scratched till it would not play in any CD player I own. I used the ripper that came with a cheap MP3 player, and EAC, to losslessly rip the title track, a single 23' 37" long cut. To my surprise, EAC reported no errors. The two tracks lined up exactly, but EAC had .63 seconds of silence tacked on the end(?). As data there were 8 single-bit differences between the two files (out of 250 Meg). As music this resulted in 8 inaudible 'ticks' at -40 to -60 dB, spaced several minutes apart.
This took a while, so I ripped Bach's 1st Brandenburg (~6.5 min) from a well-worn but playable CD, and the two results were identical, exactly the same data at every sample.
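For anyone who wants to repeat the comparison, a rough Python sketch that counts differing samples between two rips; the file names are placeholders, and it assumes both rips are plain 16 bit PCM WAV files.

```python
# Sketch: count how many 16-bit samples differ between two rips of the
# same track. File names are placeholders; any trailing padding (like
# EAC's appended silence) is ignored by comparing only the overlap.
import wave
import numpy as np

def read_samples(path):
    with wave.open(path, "rb") as w:
        raw = w.readframes(w.getnframes())
    return np.frombuffer(raw, dtype=np.int16)

a = read_samples("rip_cheap_player.wav")
b = read_samples("rip_eac.wav")

n = min(len(a), len(b))
differing = np.count_nonzero(a[:n] != b[:n])
print(differing, "differing samples out of", n)
```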
So at least I, for one, don't get it. The 6moons site (IMHO) contains a mix of technical jargon, hubris, and conclusions that don't follow logically.
The whole jitter issue has little or nothing to do with the moving of static data from one storage medium to another. The means of reading data usually has a decision window or 'eye' with a well defined probability of a HARD error (usually vanishingly small).
Taking two exact 10 Meg files and claiming that information (jitter) was removed from one and not the other would violate first principles.
And BTW EAC has lots of neat (for some) little features and fiddly bits and anyone who does this much work and gives it away is a great guy.
>Taking two exact 10 Meg files and claiming that information (jitter) was removed from one and not the other would violate first principles.<
Before anyone comments, yes, the quality of the pits on a CD and the magnetic domains on your hard drive are potential jitter, and for that matter so is the exact voltage on a RAM cell. A priori one has no (or little) control of these. Certainly two identical files written by two different programs to a hard drive are indistinguishable to a third program.
Re: Re: Re: thanks for clarification
Werner said:
There are no firm definitions of the terms and trying to pigeonhole them is futile.
True. Just note that I didn't mean to make any definitions, and I think I made it clear that I agreed with you that there is no technical difference. My point was merely to mention some different cases of over-/upsampling. My attempt at classifying these was of course not meant as definitions, but only to discuss how the terms seem (to me) to be used in practice. I am sorry if I gave the impression that there was any fundamental difference between the cases.
Syntax error: 'either' not followed by a matching 'or' 😉
I wish compilers would output a smiley now and then. 🙂
Not 'most early CDPs' but just all early CDPs that were built on Philips basics.
Japan Inc employed slow 16-bit DAC chips with analogue-domain reconstruction (anti-imaging) filtering. Philips opted for 4x oversampling with digital-domain reconstruction filtering, and once oversampled decided they could get away with 14 bit DAC silicon, adding noise shaping to the formula to retain 16 bit performance. (Or the other way around). This was their TDA1540 DAC partnered with whatever the digital filter chip was called.
I didn't know that, but it makes sense, assuming that several Japanese companies also used Philips technology.
(That shortly afterwards everyone named it 'oversampling' probably has its origins in the Dutch language having only this one term, while English has both 'up' and 'over'; the system was developed in Belgium and Holland, so ...)
Interesting. It makes sense too. Many strange words and terms are based on mistranslations between languages.
I don't agree, unless you mean 'most computers have a Soundblaster and SBs only work at 48kHz'. Now an SB is a poor example of a sound card ...
At least here in Sweden it seems hard to find soundcards that do not resample to 48 kHz without going up to the more expensive brands like M-Audio. Maybe it is different elsewhere.
That just depends on your definition of the moment, which carries no weight whatsoever.
And anyway, we don't even need such definitions. All one has to remember is that any operation that modifies the effective data rate or sampling rate of a digital audio stream, be it named 'over', 'up', or 'down', has to obey certain laws and thus has to implement certain mathematical operations. These laws and operations are immutable, and while their implementations may differ in details (blown up by marketeers!), their fundamentals are carved in stone.
Once again, please note that my intention was absolutely not to make any separate definitions for oversampling and upsampling. My only intention was to discuss different situations where the techniques are used. I do, however, of course appreciate your clarifications. I wrote the post mostly because it seemed that the experts, like you, didn't have the time to, so I tried to do my best as a stand-in, with the hope that I might make things a bit clearer.
Re: further reading
pesky said: I read about the Nyquist theorem, and it seems that the reason for additional processing is lost data due to imperfections in the source material, i.e. bad disc sectors due to dirt or incomplete "burning", and that the reconstruction of the lost data is accomplished by averaging the two signals and inserting the "lost" data into the stream rather than silence. So the better your source (clean, well rendered discs), the less need there would be for "guessing" at the missing word(s).
I am afraid you are confusing two different things here. The Nyquist frequency is half the sampling frequency. The sampling theorem says that in order to be able to recreate an analog signal that has been sampled, the bandwidth of this analog signal must not be greater than half the sampling frequency, i.e. the Nyquist frequency.
And that is the theoretical limit, which assumes perfect filters which cannot be implemented in reality etc. In the simplest case, you just DA convert the sampled data stream and use analog low pass filters. However, nowadays most or all CD players first do some processing in the digital domain, pre filtering the signal using digital filters. Often the signal is upsampled to a higher frequency to make this processing simpler or more effective. All this has to do with the basic theory of sampling, and assumes the samples are not in any way corrupted or lost.
Lost data from the CD (due to reading errors or errors on the disc itself) is an entirely separate problem, which is handled in the digital domain and before all of the above. First, there is a lot of redundant data on a CD to make it possible to correct data errors perfectly up to a certain degree (there are even two levels of such redundancy). Furthermore, data is cleverly distributed on the disc to minimize the damage of non-recoverable errors. For instance, samples are not stored contiguously, but odd and even samples are separated into blocks stored some distance apart from each other. That minimizes the risk of losing both odd and even samples. Hence, even when errors occur, you usually have access to data at half the sampling frequency, which isn't too bad when compared to losing all data for a short period. Then there is, as you say, also often some interpolation mechanism that tries to fill in the lost data. All this takes place before entering the digital filters and the DAC, and is an entirely separate process, aimed at solving an entirely different problem.
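To make the interpolation idea concrete, a toy Python sketch of concealment by averaging the neighbouring samples; real players use more elaborate schemes, this only shows the principle.

```python
# Toy concealment sketch: replace a sample flagged as unrecoverable with
# the average of its two neighbours, instead of leaving a gap or a zero.
import numpy as np

def conceal(samples, bad_index):
    out = samples.astype(float).copy()
    out[bad_index] = 0.5 * (out[bad_index - 1] + out[bad_index + 1])
    return out

x = np.array([100, 120, 0, 160, 180])   # sample 2 was lost (read back as 0)
print(conceal(x, 2))                    # [100. 120. 140. 160. 180.]
```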
I have to wonder if there are any programs that would tell you the percentage of "lost" data on a disk, so that you would know how much was being "reconstructed" and how much actual source material is there.
There has been discussion about this in earlier threads. It seems there have been a lot of experiments showing that irrecoverable read errors in CDPs are very rare. Errors of the type that can be perfectly repaired may be quite common, but that doesn't matter, simply because they can be repaired thanks to the redundant data on the disc.
Also note that errors may be due either to errors on the disc or to reading errors of correct disc data, although in practice it is often not so clear cut. The digital data is after all stored in analog form (as the length of the pits), and different CDPs may have more or less difficulty reading the data correctly when parameters stray towards the border of what is allowed. Unfortunately, a CDP cannot go back and reread a block when read errors occur, as a computer CD drive can with data CDs. You do have that option when ripping on your computer though, and EAC uses that technique and tries to reread the data until no serious errors occur. The audio CD format is however not well suited for doing that.
One thing you can do is to rip a CD with some ordinary program and then with EAC and compare the results (assuming EAC reported no errors). However, that doesn't tell you much of interest, and it says nothing about how your ordinary CDP manages to read the disc.
CD v LP
Hi
I have most of my LP collection bought on CD now (easy to take into the car etc.) but I still find I will listen all day to my Rega turntable, and yet probably only to 2-3 CDs at a time.
John
The problem with error correction is possibly at the heart of bad CD reproduction. From feedback here it seems that hard disc playback, or even EAC-burned CDs, sound better.
I found it interesting that practically all the staff at stereotimes.com went head over heels for, and bought, the new Memory Player, which addresses some of these error correction problems.
http://www.stereotimes.com/
vinyl is better NATURALLY
It's an eternal question, just like to be or not to be. Maybe I overlooked something, but it seems to me that vinyl is better just because of the construction of the human ear. It is ANALOG, and that's why it experiences the sound of a TT or a cassette, no matter how "poppy", "scratchy" or "noisy", as something natural. However we strive for anything "out of this world", hence the never-ending search for a perfect digital sound which can be immaculately clean, beautiful, hair-raising etc. - but it's destined to be ALIEN to human physiology.
I listen to both digital and analog, getting the best from both of these universes.
Sometimes wish I had a pair of digital ears🙂 !
Hi
Yes, plenty of things to make digital yet... just imagine digital tyres on my car! My arm joints, I would say, need to be replaced by digital parts just so I can say it's modern and I'm cool.
The modern digital TV... either my Sky or digital set-top box now just produces little blocks because of the compression, and so no, I won't be buying a 50-inch LCD or plasma in the near future.
John
digital CD reproduction: product of the '80s
It seems that the system for creating CDs and playing them back was mostly a product of the '80s, which was limited by parts availability, storage limitations, and a lack of the computers and high-tech foundries that are commonplace today. The standard, while robust enough to be almost bullet-proof as far as damaged/poor quality CDs go, and which has managed to remain backward compatible for some time now, has reached its limitations as a "replacement" for the LP.
Given that record technology developed over almost a century, that is not surprising, and the CDs that we are listening to would be akin to the 78 as far as reaching the quality possible from the medium (digital reproduction of audio).
I believe that much of the problem is due to phase shift (jitter) resulting from an overworked decoding system, so the shift to hard-drive-stored media, ripped with some other system (EAC for example), should produce better results. The real question is where the whole thing will go in the near to distant future. It seems like there should be a simpler decoding method, even if it means creating "data" CDs with less error correction.
That assumes of course that the DRM issue is resolved by the record companies, and that there is a greater appreciation for "real" audio by the general public.
With the available capabilities of home computers, increasingly cheap, reliable, accurate storage devices, and cheaper bandwidth increasing the tendency towards streamed media sources as opposed to physical media, I would predict a bright future for sound quality overall.
A couple of quotes from the "Memory Recorder" site....
>However, the RS (Reed-Solomon) code cannot determine any information about the bit. It can only replace it.<
>RS cannot reconstruct errors shorter than approximately 25 microseconds.<
There is no information other than 1 or 0 in a "data" bit, and when sampling at 44.1 kHz (any frequency actually) there are no "errors" between samples (I think the 25 µs was an approximation of the sample period). Both of these statements again show a misunderstanding (or deliberate obfuscation) of the difference between the analog and digital worlds. Yes, many cheap CD players can be demonstrated to have jitter issues, and probably error issues too, but the elaborate discussions of data blocks, Red Book, whatever, etc. only serve as a smoke screen.
I propose another experiment: play the CD normally on a decent player, record the SPDIF data stream out to a file, and compare it to an EAC extraction. If the data still matches well enough to make you happy, clock it into a really big buffer, build a "perfect" clock, and clock it out without any timing feedback to the source player. You need to buffer enough that the difference in clock speeds over a whole CD would use up, say, less than 1/2 the buffer. You can make a "wrist"-sized atomic clock now too. AFAIK this has been done several times before, maybe sans the atomic clock. Makes a great DIY project too.
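A quick back-of-the-envelope for sizing that buffer, with assumed numbers: a 74-minute disc and a 100 ppm worst-case clock mismatch are my own guesses, not figures from the post.

```python
# Rough buffer sizing for the re-clocking experiment above. Assumptions:
# a 74-minute disc and a 100 ppm frequency difference between the source
# player's clock and the "perfect" output clock.
fs = 44100                   # samples per second per channel
minutes = 74
ppm_mismatch = 100e-6

total_samples = fs * 60 * minutes
drift_samples = total_samples * ppm_mismatch   # ~19,600 samples over the disc
buffer_samples = 2 * drift_samples             # so drift uses < 1/2 the buffer

print("drift over the disc: %.0f samples (%.2f s)" % (drift_samples, drift_samples / fs))
print("buffer needed:       %.0f samples (%.2f s)" % (buffer_samples, buffer_samples / fs))
```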
Re: vinyl is better NATURALLY
jazzigor said: ...but it seems to me that vinyl is better just because of the construction of the human ear. It is ANALOG and that's why it experiences the sound of a TT or a cassette, no matter how "poppy", "scratchy" or "noisy", as something natural.
But the LP has to be one of the worst ways to experience analog... the speed of the stylus in the groove changes continuously, as does the angle of the stylus, giving you weird treble distortions. (And prompting such 'correction methods' as "Dynagroove" to be invented.)
Cassette had its problems, as does R-to-R, and all are subject to continuous degradation of the input media. The only good analog is live music (and that is ephemeral).
But, my opinion aside, what is the general consensus on albums such as Ry Cooder's "Bop 'til you Drop"?