Why is 32 bit the "way to go"? Do you mean 32 bit fixed point? Or 32 bit floating point? 32 bit float is what most "native" applications use today, and it is functionally equivalent to 24 bit fixed point (aka linear), which is what traditional DSP systems use.
Major downsides to 32 bit linear audio are the increased resource consumption, including disk space and compute, the need to dither back down to sensible bit depths for everyone else, and the fact that you're dealing with a market that has all but rejected high quality audio in favor of convenience. I'm looking at you, Apple...
Even more importantly, THERE'S ZERO AUDIBLE BENEFIT for playback.*
In return, what does 32 bit linear get you? Unless you're the bionic man there's no point having anything greater than 16 bit linear audio anywhere near the consumer space. Certainly there are applications in the pro sphere: Pro Tools HD, for example, uses a 48 bit mix bus so multiple 24 bit streams can be summed with headroom, and some plug-ins run double or even triple precision calculations internally. But examples like that are exceptions, not the norm, and for good reason. They also need to be well understood to be used correctly, or they cause more harm than good.
More bits and higher sampling rates are more about marketing, and an excuse for sloppy converter design, than they are a requirement for improving audio fidelity.
Here's a controversial, yet absolutely defensible position:-*
There are many other places to spend resources that will yield audible results, rather than capturing signal at resolutions that provide no benefit.
If a human can't perceive it, there's no point wasting resources capturing it and playing it back. Movies wouldn't look better in the cinema if they captured the ultraviolet and/or infrared spectrums as well as visible light, so why do people insist that the equivalent is necessary when dealing with audio?
Unless I'm missing something?
Nyquist's theorem is very explicit about the sampling rate required to capture full-bandwidth audio. 44.1kHz contains enough data to reconstruct the entire audible range; 192kHz contains exactly the same data plus roughly three times as much again that is ultrasonic and of seriously questionable benefit to bother with.*
A great number of folks can, or at least claim to, hear an appreciable improvement in quality with over-the-top 192kHz recordings vs 44.1kHz, despite there being no apparent justification for a difference. When those recordings do sound better, it is not because of the sample rate itself, but because the A/D converter's anti-aliasing filter can have a much gentler cutoff slope at 192kHz, and as a result fewer ripple and aliasing artifacts fold back into the audible band. It is not because of some inherent limitation that makes 44.1kHz unable to capture the part you can hear faithfully.
Sampling rate snake oil is a topic for another thread though.
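The Nyquist claim above is easy to sanity-check numerically. Here's a minimal sketch (plain NumPy, all names my own): sample a 1kHz tone at 44.1kHz, then use Whittaker-Shannon sinc interpolation to reconstruct the waveform at a point *between* two samples. The reconstruction matches the original signal, limited only by the truncation of the finite sample window.

```python
import numpy as np

fs = 44100            # CD sample rate, Hz
f = 1000.0            # test tone well inside the audible band, Hz

# capture a few thousand samples of the tone
n = np.arange(4096)
x = np.sin(2 * np.pi * f * n / fs)

# reconstruct the waveform at a point exactly between two samples,
# using Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc(fs*t - n)
t = 2000.5 / fs
reconstructed = np.sum(x * np.sinc(fs * t - n))
actual = np.sin(2 * np.pi * f * t)

print(abs(reconstructed - actual))  # tiny; bounded by the finite window, not the sample rate
```

Sampling faster would add more samples per cycle, but the reconstruction of the audible-band content is already exact, which is the whole point.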
Coming back to the original question... Yes, turning down the level in the preamp DESTROYS dynamic range. The next question should be: does it matter?*
Sorry, you didn't read what i wrote, or maybe i didn't express myself clearly.
But thank you for writing such a long post, even before reading my whole post. Maybe you liked me at first sight 😀

Maybe Qusp has done a better job than me, in explaining what i meant.
sorry but parts of the above post are kinda meaningless, even if a recording has only 30-40dB (or whatever) of dynamic range, you still get a better representation of those particular 30-40dB used by the recording by having more bits to describe them.
No offense intended here:-
Although I understand your reasoning, put bluntly, your statement is totally incorrect, and it's a common pitfall when it comes to understanding sampling theory.
Each additional bit doubles the amplitude range that can be expressed, adding about 6dB of dynamic range, and that is all.
There's no "use more bits to express a smaller range with more detail" possible here. Admittedly it is one of the more counterintuitive things about digital audio. Nyquist's theorem is pretty odd too until you get a grip on the maths behind it, since it seems strange that only two sample points per cycle are needed to reconstruct a sine wave and any more are unnecessary.*
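The "6dB per bit" rule is easy to demonstrate. A quick sketch (plain NumPy, names my own): quantize a full-scale sine at several bit depths and measure the signal-to-quantization-noise ratio, which lands near the textbook 6.02N + 1.76 dB every time. The step spacing is fixed relative to full scale, so extra bits only lower the noise floor; they cannot add "detail" to a particular 30-40dB slice of the range.

```python
import numpy as np

def quantize(x, bits):
    """Round to the nearest step of a mid-tread quantizer with `bits` of resolution."""
    steps = 2 ** (bits - 1)
    return np.round(x * steps) / steps

fs = 48000
n = np.arange(fs)                         # one second of samples
x = np.sin(2 * np.pi * 997 * n / fs)      # full-scale sine (997 Hz avoids lining up with fs)

snr_db = {}
for bits in (8, 16, 24):
    err = quantize(x, bits) - x
    snr_db[bits] = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
    print(bits, round(snr_db[bits], 1))   # tracks 6.02 * bits + 1.76 dB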
The developer of a digital volume control should dither the output of any wordlength reduction (i.e. when you reduce level), otherwise there are truncation errors, which are undesirable. This comes at the expense of adding a little noise, but it increases the perceived bit depth of the recording.
Dithering in this way is not as common as you'd hope.
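For illustration, here is a minimal sketch of that dithered wordlength reduction (plain NumPy, TPDF dither, all names my own). A -100dBFS tone sits below half an LSB at 16 bit: plain truncation erases it completely, while dither trades it for tone-plus-noise, and the tone survives.

```python
import numpy as np

rng = np.random.default_rng(0)

def reduce_wordlength(x, bits=16, dither=True):
    steps = 2 ** (bits - 1)       # one LSB is 1/steps of full scale
    y = x * steps
    if dither:
        # TPDF dither: sum of two uniform variables, triangular PDF, +/-1 LSB peak
        y = y + rng.uniform(-0.5, 0.5, y.shape) + rng.uniform(-0.5, 0.5, y.shape)
    return np.round(y) / steps

fs = 48000
n = np.arange(fs)
x = 10 ** (-100 / 20) * np.sin(2 * np.pi * 997 * n / fs)   # -100 dBFS tone

undithered = reduce_wordlength(x, dither=False)
dithered = reduce_wordlength(x, dither=True)

# project each result onto the original tone: ~1.0 means the tone is still there
gain = np.dot(dithered, x) / np.dot(x, x)
print(np.any(undithered != 0), round(gain, 2))
```

The undithered version is pure silence, since every sample rounds to zero; the dithered version carries the tone at close to unity gain, buried in a slightly raised noise floor. That is the sense in which dither "increases perceived bit depth".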
* Subject to the sampling rate being >= 2x the frequency of the sine wave being recorded.
Sorry, you didn't read what i wrote, or maybe i didn't express myself clearly.
But thank you for writing such a long post, even before reading my whole post. Maybe you liked me at first sight 😀.
Maybe Qusp has done a better job than me, in explaining what i meant.
Not sure what you're getting at, because I reread your post and still disagree with you about 32 bit audio, unless you can point to something that explains why it works in spite of sampling theory saying the same thing as I am?
Qusp got a shorter response (after the big one already posted to you) saying the same thing. There's no point to a 32 bit DAC (which, afaik, are all linear 32 bit, I've never come across a floating point DAC, though that doesn't mean they don't exist) because of the same inherent limitations that already apply to 16/24 bit systems.
Could you clarify what you mean? Curious that I may have misunderstood you, but I don't see how.
Rob