Analog vs Digital volume test

Modern DAW software mixing busses are mostly implemented with 64-bit floating-point math. Digital volume controls at that stage effectively do not reduce resolution. However, eventually the data must be truncated to a lower bit depth; for CD it is reduced to 16 bits. After that, turning down the volume will cause some loss of resolution.
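To make that concrete, here is a minimal sketch (Python with NumPy; the array names are mine and the random data just stands in for audio) contrasting a gain change on a 64-bit float bus with the same -6 dB step applied after reduction to 16-bit PCM:

import numpy as np

rng = np.random.default_rng(0)
bus = rng.uniform(-1.0, 1.0, 1000)               # 64-bit float mix bus, full scale = 1.0

half = bus * 0.5                                 # digital volume control on the float bus
print(np.array_equal(half * 2.0, bus))           # True: nothing was lost

pcm16 = np.round(bus * 32767).astype(np.int16)   # reduce the bus to 16-bit PCM (CD)
half16 = pcm16 // 2                              # the same -6 dB applied after the reduction
print(np.array_equal(half16 * 2, pcm16))         # False: odd samples have lost their LSB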

The claim above is only partially correct. Any sane modern audio middleware (Linux: PulseAudio, Apple: CoreAudio, Windows: WASAPI) will use a configurable internal bit depth that is much higher than 16 bits. PulseAudio can work with integers up to 32 bits and floats up to 32 bits, WASAPI with integers up to 32 bits, and for CoreAudio I could not quickly find which internal bit depths are actually supported or whether they are configurable at all.

Now if you play 16-bit material, it is simply padded with zeros in the LSBs (when working with integer samples) or converted to float. If you then apply digital volume control, you have much more digital headroom/resolution to work with.
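As a rough illustration (Python/NumPy; the sample values are arbitrary), the conversion to float is exact, and a volume step on the float bus then keeps detail well below the original 16-bit LSB:

import numpy as np

pcm16 = np.array([12345, -27001, 3], dtype=np.int16)

# Every 16-bit integer maps exactly onto a float mantissa, so this conversion
# itself loses nothing.
bus = pcm16.astype(np.float64) / 32768.0

# A volume step on the float bus now has plenty of headroom/resolution:
# even an "awkward" gain like -10 dB keeps sub-16-bit detail around.
gain = 10 ** (-10 / 20)
print(bus * gain * 32768.0)   # non-integer values: detail finer than one 16-bit LSB survives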

The digital signal is then requantized (truncated or dithered down) to the soundcard's advertised resolution, for example 24-bit integer, right before being sent out via USB, PCI or PCIe.
 
Suppose you want to turn down the volume, and are starting with 16 bits. If you bit-shift one place to the right (i.e. divide by two), where does the right-most bit (the LSB) go? The answer is that it is lost, and the MSB is replaced with a zero. That is a loss of one bit of resolution.
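A one-value illustration of that in Python (a positive sample is used so the shift behaviour is unambiguous):

x = 0b0111_0000_0000_0001        # a 16-bit sample with its LSB set
y = x >> 1                       # divide by two = shift right by one place

print(format(x, '016b'))         # 0111000000000001
print(format(y, '016b'))         # 0011100000000000  <- MSB is now 0, the LSB is gone
print(y << 1 == x)               # False: shifting back cannot recover the lost bit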

Next question: if you turn down the volume by dividing by two, what is that attenuation expressed in dB? -6 dB? Close: 20·log10(1/2) ≈ -6.02 dB.

However, if you dither before dividing, then some of the resolution will be retained. It will just sit below the slightly worsened noise floor.
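Here is a minimal sketch of that effect (Python/NumPy; the constant input and the TPDF dither amplitude are just illustrative choices): a value that lands exactly between two steps after halving is lost to plain truncation, but survives as an average once dither is added.

import numpy as np

rng = np.random.default_rng(0)

# A constant 16-bit sample that lands exactly between two steps once halved:
# 101 / 2 = 50.5.
x = np.full(100_000, 101.0)

# Plain truncation after -6 dB: the half-LSB of information is simply gone.
plain = np.floor(x / 2)

# TPDF dither (sum of two +/-0.5 LSB uniforms) added before re-rounding.
tpdf = rng.uniform(-0.5, 0.5, x.size) + rng.uniform(-0.5, 0.5, x.size)
dithered = np.floor(x / 2 + tpdf + 0.5)

print(plain.mean())      # 50.0  -> stuck on one output step
print(dithered.mean())   # ~50.5 -> the in-between value is still there on average,
                         #          traded for a slightly higher noise floor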
 
As noted above, no modern OS/audio backend will do that, because it will not be running at 16-bit resolution internally.

Suppose the backend runs at 24-bit resolution: then 8 zeros are appended to the incoming 16-bit number. If you then bit-shift right by 1 (divide by two, MSB-first notation), you are left with a zero in the MSB, the original 16 bits, and then 7 zeros in the LSBs.
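In Python, with format() padding the display to 24 bits (the bit pattern is an arbitrary unsigned example):

x16 = 0b1011_0010_1100_0101                 # an arbitrary 16-bit pattern
x24 = x16 << 8                              # append 8 zero LSBs -> 24-bit word

print(format(x24, '024b'))      # 101100101100010100000000
print(format(x24 >> 1, '024b')) # 010110010110001010000000
                                # ^ zero MSB, the original 16 bits, then 7 zero LSBs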


EDIT: of course that means you need a DAC with more than 16 bits of resolution. As good modern DACs provide roughly 22 effective bits, you can halve the volume 6 times for a 16-bit signal with no apparent loss in digital resolution. That's an attenuation of about 36 dB.
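The arithmetic behind that, as a quick Python check (the 22-bit figure is a ballpark assumption for a good modern DAC, not a measurement):

import math

signal_bits = 16     # source material
dac_bits = 22        # ballpark effective resolution of a good modern DAC

headroom = dac_bits - signal_bits                  # spare bits below the source's LSB
print(headroom, "halvings =", round(headroom * 20 * math.log10(2), 1), "dB")
# -> 6 halvings = 36.1 dB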
 
Whatever bit-depth math was used, the result still has to be truncated if it is being sent to a 16-bit DAC; at -6 dB it has lost a bit of resolution and/or raised the noise floor. However, if it is sent to a 24-bit DAC (I2S is wider than 16 bits), then resolution can be preserved. It still starts to become audible eventually, at least it does for me when using my best DAC. If it is my laptop DAC, it doesn't matter because that DAC is junk anyway: although it accepts 24 bits, it is really only accurate to about 16 bits, so the attenuated lower bits sit deeper in the DAC's hardware noise floor.
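A back-of-the-envelope sketch of that comparison (Python; the helper function and the effective-bit figures are my own assumptions, with 1 bit taken as about 6.02 dB and a flat DAC noise floor):

import math

def effective_bits_left(source_bits, dac_effective_bits, attenuation_db):
    """How many of the source's bits still sit above the DAC's own noise
    floor after digital attenuation (crude model, flat noise floor assumed)."""
    bits_of_attenuation = attenuation_db / (20 * math.log10(2))
    return round(min(source_bits, dac_effective_bits - bits_of_attenuation), 1)

print(effective_bits_left(16, 22, 36))   # 16   -> good DAC: nothing pushed under its noise
print(effective_bits_left(16, 16, 36))   # 10.0 -> "16-bit-accurate" DAC: bottom bits drown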

Another complication is that on Windows, if you are not using ASIO very carefully, or are not using WASAPI Exclusive Mode, then Windows is likely to resample the audio and do it poorly.
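For completeness, a minimal sketch of asking for WASAPI Exclusive Mode from Python, assuming the python-sounddevice package (PortAudio underneath); it is only meaningful on Windows, and the test tone here is just a placeholder signal:

import numpy as np
import sounddevice as sd

fs = 44100
t = np.arange(fs) / fs
tone = 0.1 * np.sin(2 * np.pi * 440 * t)                # 1 s test tone at -20 dBFS

# Exclusive mode bypasses the shared-mode Windows mixer and its resampler.
wasapi_exclusive = sd.WasapiSettings(exclusive=True)
sd.play(tone, fs, extra_settings=wasapi_exclusive, blocking=True)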
 