Ripping CDs in Safe Mode sounds much better...

Status
Not open for further replies.
OK, Simon, use whatever you like (as long as it is reliable). My point is that we should know whether the two ripping strategies produced two different files or not. I'm under the impression that the OP assumes they are identical (bitwise), but actually he said "the same checksum". The checksum tool used is unknown, so I suggested a known good one.
 
SHA-256 will provide go/no-go, but using Audacity to attempt to null one by summing it with the inverse of the other will provide a measure of the degree of corruption (if any).

Sorry, I don't understand what you mean by "go/no-go". If two files have the same SHA-256 sum they are identical to the last bit. (Or you have found a collision in the algorithm, which is unlikely to happen in the time remaining until the Earth crashes into the Sun...)
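If anyone wants to run that check themselves, here is a quick Python sketch of the idea. The file names are just placeholders for the two rips; use whatever your ripper produced.

```python
# Compare two rips bit-for-bit via SHA-256.
# "rip_normal.wav" and "rip_safemode.wav" are placeholder names.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

a = sha256_of("rip_normal.wav")
b = sha256_of("rip_safemode.wav")
print(a)
print(b)
print("identical" if a == b else "different")
```

Note that this hashes the whole file, so two rips with identical audio but different WAV header metadata would still show up as "different".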
 
What I mean by go/no-go is that it provides a binary (yes/no) answer to the question 'Is the file corrupted?'. If you get a 'corrupt' answer, there's no measure available of how significant that corruption is - it could be just a single LSB in the lead-in to the track. Whereas I reckon it's more useful to know something about what kind of corruption is present (if any at all).
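For anyone without Audacity to hand, here is a rough Python/numpy sketch of the same null-test idea: subtract the two rips sample by sample and look at how big the residual is. It assumes both rips are 16-bit PCM WAV files of the same track; the file names are placeholders.

```python
# Null test in code: subtract the two rips sample-by-sample and
# report the size of the residual. Assumes 16-bit PCM WAV input;
# file names are placeholders.
import wave
import numpy as np

def read_pcm16(path):
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == 2, "expected 16-bit PCM"
        frames = w.readframes(w.getnframes())
    # Widen to int32 so the subtraction below cannot overflow.
    return np.frombuffer(frames, dtype=np.int16).astype(np.int32)

a = read_pcm16("rip_normal.wav")
b = read_pcm16("rip_safemode.wav")
n = min(len(a), len(b))          # guard against a length mismatch
diff = a[:n] - b[:n]

print("samples compared :", n)
print("samples differing:", int(np.count_nonzero(diff)))
print("peak difference  :", int(np.abs(diff).max()), "LSBs")
print("RMS difference   :", float(np.sqrt(np.mean(diff.astype(np.float64) ** 2))), "LSBs")
```

A residual of all zeros is a perfect null; a handful of 1-LSB differences is a very different situation from whole blocks of corrupted samples.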
 
What I mean by go/no-go is that it provides a binary (yes/no) answer to the question 'Is the file corrupted?'. If you get a 'corrupt' answer, there's no measure available of how significant that corruption is - it could be just a single LSB in the lead-in to the track. Whereas I reckon it's more useful to know something about what kind of corruption is present (if any at all).

Exactly.

I am almost positive that we can't get two CRCs the same. The problem, I feel, lies in the drives themselves, with all their error correction, dithering, dithering rejection, jitter/anti-jitter and laser errors. This all happens at the bitstream level, before we even convert to a number. Like I said earlier, Yellow Book fixes a lot of these problems by having multiple levels of error correction. Red Book is hit and miss and really doesn't care if it misses the odd 1 or 0, because you won't hear it.

BUT, do we care? As long as the error level is acceptable and we can't hear it, it wouldn't bother me. Until today I knew the errors existed but didn't think they were audible.

Now Erin has made me question whether or not these errors are audible. It seems, with the software solutions available now, that these errors are measurable and can be minimised.

BTW... two people have done the test, and BOTH have reported results saying there is a difference:

100% - Safe mode is better
0% - Safe mode makes no difference

** Don't get me wrong... we still need to do a bit-for-bit test to be sure... that would make this thread obsolete then... Now if only I had an audio CD
 
Dave
With EAC you have the facility to calibrate your optical drives using the EAC server. This ensures repeatable results, provided that the CDs are not in poor condition and the optical drives have no problems reading the contents.
This results in all drives producing .wav files with identical checksums for the ripped track(s). I have three different optical drives calibrated with EAC.
SandyK
 
If your level of understanding of how oversampling works is based on that Wikipedia article, I suggest you find a somewhat more authoritative text. Try this one and keep your eyes peeled for the phrase 'zero stuffing'.

tutorials: oversampling [SynthMaker]


I'm sorry, but a "more authoritative text" didn't seem appropriate - if you read the article you refer to, you'll see (surprise, surprise) that zero stuffing is relevant to upsampling, i.e. increasing the sampling rate. Oversampling is something altogether different.

I'll leave it at that, this thread already has more tangents than a trigonometry class.

 
... if you read the article you refer to, you'll see (surprise, surprise) that zero stuffing is relevant to upsampling, i.e. increasing the sampling rate. Oversampling is something altogether different.

I did read the article and I don't see it supporting your contention that 'oversampling is something altogether different'.

Perhaps you could explain what the difference actually is between those two terms, then, because I've always tended to treat them as pointing to the same activity - increasing the sample rate. If you use the word 'upsampling' where I use 'oversampling', then what I originally said applies to upsampling just as it does to oversampling - DAC manufacturers pretty much all use it nowadays.
 
Is this the scientific way to confirm, or not, a new finding?

The peers see the claim and the method and try to reproduce it.

I tend to agree with Simon B on this; what erin has said sounds more like opinion than evidence. What we currently have from him is:

When I played it back, I found that the music had incredible detail, and tone, depth, and clarity, much more so than any rip I had done before.

Would it be possible to have more detail, erin - which CD was it, which music did it have on it, and could you describe the changes better? That would make it easier for others to replicate the initial experiment. Experimental replication is a very important aspect of science, and so far the experimental details are somewhat sketchy about what the changes are and which music you've found to be the best way to showcase those changes.
 
AccurateRip (what a gem, thanks Steve). It states that there are in fact differences between audio rips, which stands to reason, otherwise the program would not exist. It also states that no rip is perfect and that the quality is dependent on the drive. Further, it compares a CRC with an internet database of CRCs. This sounds perfect, except they don't tell us how they calculate the CRC, or whether they're looking for an exact match, an approximation, or anything else. The only things they say they're looking for are silence, spikes, clicks and pops. So I don't know what to make of this program, other than that it'll tell you if you've got dropouts or those horrible squelchy spikes.

The Hydrogen Audio forum has better info on AccurateRip than AR's own page. Try the knowledgebase article for starters:
AccurateRip - Hydrogenaudio Knowledgebase
 
...because I've always tended to treat them as pointing to the same activity - increasing the sample rate. If you use the word 'upsampling' where I use 'oversampling' then what I originally said applies to upsampling just as it does to oversampling - DAC manufacturers pretty much all use it nowadays.

I use upsampling to refer to any increase of sample rate, like 44.1kHz->96kHz, and oversampling to refer to an increase by an integer multiple of the sample rate (like 2x, 4x, 8x, 16x...).
Neither is directly related to an increase in bit depth.
The transition from 16 bit to 20 or 24 bit is done by interpolation (using various functions). It creates samples that differ in level as well, not just in time. It should be applied as part of a correct upsampling/oversampling process in order to achieve good results, but it is not the same thing as those two terms.

PS: "Bit stuffing" is everywhere - it is exacly what SPDIF signal does for a 16 bit signal.
 
I did read the article and I don't see it supporting your contention that 'oversampling is something altogether different'.

Perhaps you could explain what the difference actually is between those two terms, then, because I've always tended to treat them as pointing to the same activity - increasing the sample rate. If you use the word 'upsampling' where I use 'oversampling', then what I originally said applies to upsampling just as it does to oversampling - DAC manufacturers pretty much all use it nowadays.


Abraxalito, you're essentially quite right, and I was essentially wrong. Zero stuffing is used in so-called oversampling DACs. I'd say that oversampling is actually something that is done during ADC, not DAC - the common usage among CD player manufacturers is to call DACs "oversampling".

Foot squarely shot, guilty as charged
 
I use upsampling to refer to any increase of sample rate, like 44.1kHz->96kHz, and oversampling to refer to an increase by an integer multiple of the sample rate (like 2x, 4x, 8x, 16x...)...
Neither is directly related to an increase in bit depth.
The transition from 16 bit to 20 or 24 bit is done by interpolation (using various functions)...

I agree with the above explanation. One has to do with how often in time you take a sample of the signal. The other has to do with how many discrete "slots" you cut the signal intensity up into once you have acquired a sample. The more "slots" you can cut the signal into (i.e. the greater the bit depth), the more accurate your digital representation of the original signal will be (all other things being equal).
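As a toy illustration of the "slots" point (nothing to do with the ripping question, just to make the bit-depth term concrete), assuming a signal scaled to the range -1..+1:

```python
# Quantise the same signal to 16-bit and 24-bit "slots" and compare
# the worst-case rounding error. Toy example only.
import numpy as np

x = np.sin(2 * np.pi * np.linspace(0, 1, 1000))    # signal in [-1, 1]

def quantise(signal, bits):
    levels = 2 ** (bits - 1)                        # e.g. 32768 for 16-bit
    return np.round(signal * levels) / levels

for bits in (16, 24):
    err = np.abs(quantise(x, bits) - x).max()
    print(f"{bits}-bit: {2 ** bits} slots, worst-case error ~ {err:.1e}")
```

More slots means a smaller worst-case error per sample, which is the "all other things being equal" accuracy gain mentioned above.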
 
This thread is wack... from safe-mode ripping it has slid to the definition of upsampling.

My PC does not produce a difference between any of the ripping methods - tested ABX in foobar. Maybe if someone has malware on their PC (which usually does not run in safe mode), or a bad driver (not loaded in safe mode), the resulting files might be different and those would sound different. In no way will two identical files (by CRC) sound different.

I will try to keep away from this thread; it does not deserve my time :D
 
This thread is wack... from safe-mode ripping it has slid to the definition of upsampling.

Yeah but look how it got to upsampling - because homeopathy was interjected. Can't have a decent subjectivist thread these days without some joker bringing up alternative medicine.

I will try to keep away from this thread; it does not deserve my time :D

There is no try.... :p
 