Would a 32-bit floating point DAC/ADC system remove the need for dither?

CD is dead, and DVD-A and SACD are not going to supplant it; in fact, all audio going forward will be digitally transferred. MP3 is far from dead, but the end is in sight: it has had a good 10 years, but it will not have 10 more. FLAC or ALAC will be the next standard, or something similarly lossless. I just think that the next format needs to be designed for the future, not for planned obsolescence. As the internet has shown, planned obsolescence works great for the analog world; nothing really ever leaves the digital world.

Hi,

CD is not dead, far from it. It was a natural archive standard to store all the music that preceded it, and now we are stuck with it. If you have a CD-standard archive of an analogue recording, there is no point re-archiving from the original to a higher standard; it's pointless.

It will never be supplanted as an audio standard by alternatives, or become obsolescent due to "better" digital standards. It is what it is: a system capable of archiving analogue recordings to a quality standard well beyond the best analogue tape.

Which is of course better than vinyl. The best pure analogue is tape.

rgds, sreten.

Many CDs sound poor. That has nothing to do with the CD standard.
 

That may be a nice sentiment, but it doesn't take much more than this graph to see where CD sales have trended since the middle of the last decade. Let me tell you, the trend line does not rise after 2004.

[Linked image of the CD sales graph is no longer available.]



CD is a dead format. Like sreten says "it was a natural archive standard to store all music that preceded it". Standards grow and change with time and technology.

A good example of all this is that the movies being rescanned and put on Blu-ray look better than the DVD and VHS versions that preceded them. VHS was the standard, then DVD, and now Blu-ray. 4K TV is now being released, and 8K UHDTV will be the standard we move to in the next few years. Audio of some sort will do the same. I will never understand why people think there will be an end to the move for better, or that we will achieve an unsurpassable level in anything.
 
I feel like you guys are missing the forest for the trees. Bit depth has more than just an effect on S/N ratio; it is also a measure of the smallest step along a waveform we can resolve.
[Linked image is no longer available.]

It is this finer measurement and representation that makes 24-bit sound more "analog". Has there been a measurement of what the human ear's bit-resolution perception is? I agree that 24-bit may be the point of diminishing returns, but storage and processing are so cheap there is no reason not to push the envelope here.
 
But step size **IS** SNR. The equation is SNR = 6.02*N + 1.76 dB, where N is the number of bits.

Step size determines your quantization noise. And quantization noise looks a lot like analog noise at higher values of N.
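
As a quick check of that (a minimal numpy sketch, my own illustration rather than anything from the thread), quantizing a full-scale sine with a plain uniform quantizer and measuring the noise lands right on the formula:

```python
# Quantize a full-scale sine to N bits and compare the measured SNR
# against the rule of thumb SNR = 6.02*N + 1.76 dB.
import numpy as np

def quantization_snr_db(n_bits, n_samples=1_000_000):
    t = np.arange(n_samples)
    x = np.sin(2 * np.pi * 0.1234567 * t)   # full-scale sine, amplitude 1.0
    step = 2.0 / (2 ** n_bits)              # one LSB over the +/-1.0 range
    xq = np.round(x / step) * step          # plain mid-tread quantizer
    noise = xq - x
    return 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))

for n in (8, 16, 24):
    print(f"{n:2d} bits: measured {quantization_snr_db(n):6.1f} dB,"
          f" formula {6.02 * n + 1.76:6.1f} dB")
```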

There has been lots of work to determine what humans can resolve, both analog and digital. Get an ABX plugin for your favorite music player and you can try yourself. You'll probably be surprised just how good even 16 bits can be at normal listening levels.
 
There is one advantage to having a higher-resolution format: no oversampling or filtering of any kind would need to be applied to the output of the DAC, which could go straight into the amplifier. A lot of the reason digital often doesn't cut it is that the electronic fiddling required to produce a clean analogue waveform is not well executed currently, and ultimately corrupts the sound. An absolutely minimum-cost way of getting the analogue back at the time of playback should hopefully mean that even the most incompetent companies get it reasonably right ...

Frank
 
fas42 said:
There is one advantage to having a higher-resolution format: no oversampling or filtering of any kind would need to be applied to the output of the DAC, which could go straight into the amplifier.
With a sampled system you always need filters. Fortunately, you always have filters. With sufficiently fast sampling the filters are already built in to your system, so you might not notice them, but they are there: microphone bandwidth, preamp bandwidth, loudspeaker bandwidth.

A basic point overlooked in this thread is that if we eventually have the capability to produce virtually noiseless electronics etc. then we might all go back to using analogue storage mechanisms. We only need digital because of analogue noise.
 
The high resolution I was referring to included fast sampling: a data density high enough that 1-bit steps could "perfectly" track the analogue waveform, even for a full-level 20kHz sine. Like having DSD run at such a high frequency that absolutely no noise shaping is required. Then the remaining digital "noise" is exactly that, analogue in quality in every sense, at least, say, 100dB down.

We "need" digital, because analogue deteriorates, and degrades with multiple generation copies. Or would you want all recordings masters in 200 years say to be a tape copy of a tape copy of a tape copy of a tape copy of a ....

Frank
 
Steps can never perfectly track an analogue waveform. As I said, fortunately they don't need to because of inherent filtering.

The big snag with digital is that formats (both physical, e.g. disc, and logical, e.g. data structures) keep changing. That means we either have to maintain old systems or keep transferring all the stuff we want to keep to a new system. Eventually the volume of stuff we want to keep will mean that we can't copy it all before the format changes again. Or high-quality audio will just disappear: the X Factor-watching, MP3-downloading majority will win.

I suspect that in 100 years time it will still be easier to read an analogue wax cylinder than a CD. Both will be obsolete, but one will always be simpler than the other.
 
Steps can never perfectly track an analogue waveform. As I said, fortunately they don't need to because of inherent filtering.
I would have to beg to differ there ... if the steps are small enough and occur fast enough, then the waveform can be perfectly tracked. Take a maximum-level 20kHz sine wave at its zero crossing; that's the fastest-changing analogue waveform you can get -- we'll forget for the moment, for the sake of the argument, all the talk of higher frequencies being audible. The slope of the signal at that point gives a certain amplitude change per time division; translate that into 1 bit per some time interval. That time interval then corresponds to the sampling rate needed to perfectly capture the waveform we're looking at: the replay data would be 1,1,1,1,1,1, ....1,0,... -- the first 0 appears when the slope just begins to ease off ...
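
As a rough sanity check on those numbers (a back-of-the-envelope sketch, assuming a full-scale range of +/-1), here is the sampling rate such a scheme would need so that the waveform never moves by more than one LSB between samples:

```python
# How fast must we sample so a full-scale 20 kHz sine changes by at most
# one LSB per sample? The worst case is the zero crossing, where the slope
# of sin(2*pi*f*t) is largest.
import math

def rate_for_one_lsb_per_sample(n_bits, f_hz=20_000.0, amplitude=1.0):
    lsb = 2 * amplitude / (2 ** n_bits)         # step size over a +/-amplitude range
    max_slope = 2 * math.pi * f_hz * amplitude  # steepest slope, at the zero crossing
    return max_slope / lsb                      # samples per second

for n in (16, 24):
    print(f"{n} bits: about {rate_for_one_lsb_per_sample(n) / 1e9:.1f} GHz")
# roughly 4.1 GHz for 16 bits and 1055 GHz for 24 bits
```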

Remember all of nature is really digital, when you look closely enough at it: that's the good thing about digital, you can just keep adding bits until you have enough resolution ...

I suspect that in 100 years time it will still be easier to read an analogue wax cylinder than a CD. Both will be obsolete, but one will always be simpler than the other.
I reckon all the data will be in that infamous Internet "cloud", or equivalent ... :D

Frank
 
You may be using the word 'perfectly' in some private sense which means 'less than perfectly'. Alternatively, perhaps you believe that technology will one day provide infinite sampling rates and/or infinite resolution, which is what a perfect slope would need. The d in dV/dt really is infinitesimally small, so only infinite digital technology can implement it perfectly.

Is nature really digital? I don't think so, unless you are thinking of speculation about foam vacuums etc. Sound pressure, microphone signals etc. are not digital as an air molecule's position is an analogue quantity.
 
You may be using the word 'perfectly' in some private sense which means 'less than perfectly'.
Only in the sense that it's capable of tracking the waveforms of any currently available analogue recording mechanism precisely, including any noise content. To consider an "extreme" example, take a virgin, highest quality 1" magnetic tape, and digitally record its hiss, the inherent noise signal. I would suggest that high resolution digital would "perfectly" capture that sound signature so no test could differentiate which was which on playback.

Frank
 
That sounds like a private definition. To me, 'perfect' (in this context) does not mean 'indistinguishable in any reasonable test' but 'identical'. For that you need infinite resolution, not high resolution. You may be confusing the finite sampling which can fully capture a band-limited signal with the infinite resolution which is needed for almost any signal. Of course, we can always add dither to a finite-resolution system, but then we get infinite resolution at the expense of added noise.
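
To put a number on that trade-off (a minimal sketch, assuming non-subtractive TPDF dither of +/-1 LSB): a sine whose amplitude is well below one quantization step is erased by plain rounding, but survives once dither is added before the quantizer, at the cost of extra noise.

```python
# A sine of amplitude 0.4 LSB: plain quantization returns all zeros,
# TPDF-dithered quantization preserves it (buried in noise).
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
t = np.arange(n)
ref = np.sin(2 * np.pi * 0.01 * t)
x = 0.4 * ref                                    # amplitude is 0.4 of one LSB

plain = np.round(x)                              # quantize with no dither
tpdf = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)  # +/-1 LSB triangular dither
dithered = np.round(x + tpdf)                    # add dither, then quantize

for name, y in (("no dither  ", plain), ("TPDF dither", dithered)):
    amp = np.dot(y, ref) / np.dot(ref, ref)      # least-squares estimate of the sine amplitude
    rms = np.sqrt(np.mean((y - x) ** 2))
    print(f"{name}: recovered amplitude {amp:.3f} LSB, rms error {rms:.3f} LSB")
# Expect ~0.000 LSB without dither and ~0.400 LSB with it.
```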
 
I suggest that the first thing to realize is that there is no such thing as perfect digital sampling or reconstruction. No digital system can meet the requirements for perfection as defined by Shannon; that is possible only in mathematical theory, not in practice. What we are then dealing with are various degrees of practical imperfection, with assumptions regarding what constitutes perceptually perfect performance: what's good enough. (Although Bob Stuart has done excellent work in researching what our perceptual limits might be.)

I suggest that the second thing to realize is that even if mathematical perfection were possible in practice, Shannon's sampling theorem does not account for signals which change over time, i.e. music. Signals are assumed to be constant, carrying only frequency-domain information and no time-domain-sensitive information.
 
Ken Newton said:
I suggest that the second thing to realize is that even if mathematical perfection were possible in practice, Shannon's sampling theorem does not account for signals which change over time, i.e. music. Signals are assumed to be constant, carrying only frequency-domain information and no time-domain-sensitive information.
Why does this old chestnut keep getting dragged out every month or two? Is there some influential audio journalist or website somewhere who doesn't understand Fourier and Shannon and so keeps asserting that digital audio does not work? Then other people repeat it.

Put your CD player on permanent track repeat. Then the music signal is strictly periodic, so digital audio can be shown to work. Nothing weird happens when you stop repeating the track, so digital audio works for that too.
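
To make the periodic case concrete (a small sketch, assuming a test signal whose partials all sit on analysis bins, i.e. an exactly repeating "track"): sample a band-limited periodic signal, reconstruct it on a finer time grid by zero-padding its spectrum (the discrete equivalent of ideal sinc reconstruction), and the result matches the analytic waveform to machine precision.

```python
# Sample one period of a band-limited signal, then reconstruct it at 8x
# the rate by zero-padding the spectrum, and compare against the analytic
# waveform evaluated on the finer grid.
import numpy as np

fs = 48_000                    # sample rate
n = 4_800                      # 0.1 s, so 440/1320/9870 Hz all repeat exactly

def signal(time):
    return (np.sin(2 * np.pi * 440 * time)
            + 0.5 * np.sin(2 * np.pi * 1320 * time + 0.3)
            + 0.1 * np.sin(2 * np.pi * 9870 * time + 1.1))

x = signal(np.arange(n) / fs)

up = 8
X = np.fft.rfft(x)
Xup = np.zeros(up * n // 2 + 1, dtype=complex)
Xup[:X.size] = X                                  # zero-pad the spectrum
x_up = np.fft.irfft(Xup, n=up * n) * up           # rescale for the longer transform

t_up = np.arange(up * n) / (up * fs)
print(f"worst-case reconstruction error: {np.max(np.abs(x_up - signal(t_up))):.1e}")
# prints something around 1e-12, i.e. machine precision
```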

Why do people learn just enough maths/physics/information theory to confuse themselves?
 