John Curl's Blowtorch preamplifier part II

Disabled Account
Joined 2012
Dither is only added at the last step... converting the master down to 16/44. If you dithered every time you mixed with a DAW, the total effect of all those dithered signals would become a problem. So files are kept and massaged etc. undithered until the final cut.

That others may have done it more often doesn't change a thing, IMM. The best way is to do it only once.
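To put a rough number on the "only once" point, here is a minimal numpy sketch (a generic illustration, not any particular DAW's pipeline): each non-subtractive TPDF-dithered requantization to 16 bits adds an independent error floor of about half an LSB RMS, so stacking passes raises the floor by roughly 3 dB per doubling.

[CODE]
import numpy as np

rng = np.random.default_rng(0)
fs = 96_000
lsb = 1.0 / 2**15                            # one 16-bit step for a +/-1.0 full-scale signal
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000.0 * t)     # 1 kHz tone at -6 dBFS

def requantize_16bit(sig):
    """One word-length reduction to 16 bits with +/-1 LSB TPDF dither."""
    tpdf = (rng.random(sig.size) - rng.random(sig.size)) * lsb
    return np.round((sig + tpdf) / lsb) * lsb

y = x
for n in range(1, 9):
    y = requantize_16bit(y)
    floor_db = 20 * np.log10(np.std(y - x))
    print(f"after {n} dithered reduction(s): error floor ~ {floor_db:6.1f} dBFS")
[/CODE]

The exact numbers are not the point; the trend (the floor creeping up with every dithered pass) is why it pays to defer the reduction to the final cut.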




I still have most of the day and then the night before I start partying. But, have a great, happy and Merry New Year !!!


THx-RNMarsh
 
Bill,
That seems to go against what I think I have been understanding. I think you are supposed to dither each time you transfer, change bit depths (word lengths), or take any other step in the digital domain, if I'm understanding this correctly. I assume that when you are recording on a digital mixing console you would do the dithering when you sum the multiple channels, but that you wouldn't need to dither each individual channel while recording, just the summation. Upsampling or downsampling, you would also want to dither, it seems.
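For the summing case, a hedged sketch of what that might look like (a toy mix in numpy, not any particular console's architecture): the individual channels are accumulated at high precision, and the word-length reduction with dither happens once, on the summed bus.

[CODE]
import numpy as np

rng = np.random.default_rng(1)
fs = 48_000
t = np.arange(fs) / fs
lsb_24 = 1.0 / 2**23                            # output word length: 24-bit

channels = [0.1 * np.sin(2 * np.pi * f * t)     # a few test tones standing in for tracks
            for f in (220.0, 440.0, 880.0, 1760.0)]

mix = np.sum(channels, axis=0, dtype=np.float64)    # wide accumulator: no dither per channel
mix *= 0.5                                          # mix-bus gain, still at full precision

tpdf = (rng.random(mix.size) - rng.random(mix.size)) * lsb_24
mix_out = np.round((mix + tpdf) / lsb_24) * lsb_24  # dither applied once, to the summation
[/CODE]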
 
Member
Joined 2004
Paid Member
BTW I've heard so much about digital audio and out-of-band signals that I've obtained a little test equipment to do some real-world investigations with:

[Attachment 522196: photo of the test equipment]

Good enough?

If you're testing analog FM tuners. However, last I checked, those had digital intruding as well.

Better to define what you are looking to prove or disprove and then devise a test, than to get test hardware and test for whatever the hardware can test.
 
Disabled Account
Joined 2012
Any references to that, or is it just a feeling? Or at least a source?

Yes, of course. Check what experienced mastering/recording engineers say. I wouldn't know personally, as I don't do that kind of work. Also, #77386 fits what I have read.
Keep all work in one data format like 24/96 or greater until you are ready to get it down to 2 channels of 16/44.
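A rough sketch of that order of operations (illustrative only, using numpy/scipy rather than any mastering tool): keep the work in floating point at the higher rate, do the 96 kHz to 44.1 kHz rate conversion there, and only then dither down to 16 bits.

[CODE]
import numpy as np
from scipy.signal import resample_poly

rng = np.random.default_rng(2)
fs_src = 96_000
t = np.arange(fs_src) / fs_src
master = 0.25 * np.sin(2 * np.pi * 1000.0 * t)   # stand-in for the high-resolution master

# 44100 / 96000 reduces to 147 / 320, so the rate change is exact.
cd_rate = resample_poly(master, up=147, down=320)

lsb_16 = 1.0 / 2**15
tpdf = (rng.random(cd_rate.size) - rng.random(cd_rate.size)) * lsb_16
final_16_44 = np.round((cd_rate + tpdf) / lsb_16) * lsb_16   # the one and only dither step
[/CODE]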





THx-RNMarsh
 
Member
Joined 2004
Paid Member
The input I have had from the DSP guys is that 24-bit audio is enough (the last two bits, at a minimum, are meaningless) and that the operations should be done at least at 48-bit precision. Current computers make 64-bit arithmetic easy, so why not. In dedicated DSP engines (e.g. Sigma Studio) 26-bit internal data should be enough to prevent any compromises in the resulting audio. Fixed point vs. floating point etc. is stuff that I would leave to the DSP guys. Sometimes you need one or the other for things like processing speed for real-time tasks or minimizing power consumption. Like all this stuff, a really good DSP guy can get great results with very limited resources, e.g. Sonic Solutions on an early Mac.
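For rough context on those word lengths, the usual full-scale-sine estimate of quantization SNR is 6.02 x N + 1.76 dB, which puts 24-bit audio well beyond any converter or analog chain:

[CODE]
# Back-of-the-envelope quantization SNR for the word lengths mentioned above.
for bits in (16, 24, 26, 32, 48, 64):
    print(f"{bits:2d}-bit: ~{6.02 * bits + 1.76:5.1f} dB")
[/CODE]

That 24-bit figure of roughly 146 dB is why the extra internal bits are really about headroom for intermediate arithmetic rather than audible resolution.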
 
There used to be more discussion in some circles about Pro Tools and its (proprietary) internals than there is here about dithering. Lots of wondering and worrying about when and where dither was or wasn't applied, and as time went by it seemed that folks agreed that PT improved. Every operation that involves a truncation results in some added granularity, but of course only those granularities within spitting distance of the next operation matter. It all happens inside the black (software) box, so:
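A generic numpy illustration of what that granularity costs at low level (nothing to do with Pro Tools' actual internals): requantizing a tone that sits below one 16-bit step without dither erases it entirely, while TPDF dither keeps it, buried in a benign noise floor.

[CODE]
import numpy as np

rng = np.random.default_rng(3)
fs = 48_000
t = np.arange(fs) / fs
lsb = 1.0 / 2**15
x = 0.4 * lsb * np.sin(2 * np.pi * 1000.0 * t)   # a tone smaller than one 16-bit step

plain = np.round(x / lsb) * lsb                  # no dither: every sample rounds to zero
tpdf = (rng.random(x.size) - rng.random(x.size)) * lsb
dithered = np.round((x + tpdf) / lsb) * lsb      # with dither: the tone survives

probe = np.exp(-2j * np.pi * 1000.0 * t)         # measure what is left at 1 kHz
for name, y in (("no dither  ", plain), ("TPDF dither", dithered)):
    level = 2 * abs(np.mean(y * probe))
    print(f"{name}: 1 kHz component ~ {20 * np.log10(level + 1e-12):7.1f} dBFS")
[/CODE]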

"Life is short, Art is long, Experiment treacherous, Judgment difficult" - Hippocrates

Happy New Year to All,
Chris
 
As far as 44.1k goes, the need for it is diminishing every day. The only thing that *needs* 44.1k is the CD. All other digital audio is basically a multiple of 48k, i.e. everything with an image associated with it. And the aggregator/distributors are busy building streams that will handle 96/24, or at least use that as the source for lossy compression. So good riddance to 44.1k. I think we can all agree that it forces filter complications that are easily avoided at 50k or greater, making 96k the most viable existing alternative.
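Some quick numbers behind the filter-complication point, assuming a 20 kHz audio passband (the exact passband edge is a design choice, not gospel): how much room the anti-alias/reconstruction filter has to roll off before Nyquist.

[CODE]
# Transition band available above a 20 kHz passband at common sample rates.
for fs in (44_100, 48_000, 96_000):
    nyquist = fs / 2
    transition = nyquist - 20_000
    print(f"fs = {fs/1000:5.1f} kHz: 20 kHz -> {nyquist/1000:5.2f} kHz, "
          f"transition {transition/1000:5.2f} kHz ({100 * transition / 20_000:3.0f}% of the passband)")
[/CODE]

That roughly 2 kHz of room at 44.1 kHz is what forces the steep brickwall designs; at 96 kHz the filter can be far gentler.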

Pro Tools has changed its internal bus structure to 32-bit, with default dithering when exporting from that structure (which may happen more often than you think, in and out of many plugins for example). There are a lot of gotchas in digital audio processing, and many of them have been ignored for many years as "too difficult". It is a lot better than it used to be; the tools today are starting to adhere to what we have known for many years (such as proper dithering regimes) but which has been "too difficult" to implement.

On some types of audio, many of these imperfections don't matter much. What is there at -60 dBFS on a pop piece, compared to -60 dBFS on a classical quartet recording?

PS: that SOS article was published in 2001; in the real world of that time, it made more (musical) sense to stay at 44.1k in some instances.

Cheers and happy new year,

Alan
 
So good riddance to 44.1k. I think we can all agree that it forces filter complications that are easily avoided at 50k or greater, making 96k the most viable existing alternative.

The difference between 44.1k and 48k (or 50k) is insignificant, sonically.
How significant can the difference between a Nyquist frequency of 22.05k and one of 24k really be? It is roughly an eighth of an octave, about a semitone and a half, at what is, for most listeners, an ultrasonic frequency.
Real improvements are attainable at 88.2 or 96k sampling, and even more at double that.
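The interval arithmetic behind the paragraph above, for anyone who wants to check it (pure math, no audio libraries):

[CODE]
import math

ratio = 24_000 / 22_050          # 48 kHz Nyquist vs 44.1 kHz Nyquist
octaves = math.log2(ratio)
print(f"{ratio:.4f}x = {octaves:.3f} octave = {12 * octaves:.1f} semitones")
[/CODE]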
 