John Curl's Blowtorch preamplifier part II

Status
Not open for further replies.
Hi,

How do you know Philips didn't dither the test waveform?

I read the data sheets. Trust me, it works.

You may have to go back to earlier parts and longer datasheets (which may be available only in print), but the processes and setups for the measurements are extensively described in several.

Ciao T
 
Hi,

For another, I'd say that -60 dB of tape noise is more objectionable than -90 dB of dither noise, regardless of distribution.

Sure, but we do not have to step down to the level of a generic iron-oxide cassette tape without Dolby.

Try a dual half-track mastering-grade machine and include the headroom... You will find that tape noise can and should be WAY, WAY lower than -60dB.

Ciao T
 
Many misunderstandings may result from the fact that A/D recordings are not captured in 16bit/44.1kHz. More likely 24bit/192kHz (even 24bit/384kHz). "CD quality" is then achieved by maths - downsampling. Right here there is room for "improvements" by noise shaping etc.

My original question regarding "resolution" was: how to get better resolution than 1/2 LSB on the A/D side, across the whole frequency band (up to Fs/2)? The answer is that there is no way to do it. No dithering can make it happen.
 
Ha! What you say is literally true, but SY never suggested that you add them together. He stated that you should 'take' them, as in look at them, or maybe convert them to analog and listen to them. There is information in those 10 samples to be gleaned in ways other than summing them together.

It was too much to hope that I'd get quoted accurately twice in a row! :D
 
Ha! What you say is literally true, but SY never suggested that you add them together.

So then he was moving the goal posts. The context of the discussion at that point was whether averaging samples together reduced the bandwidth.

He stated that you should 'take' them, as in look at them, or maybe convert them to analog and listen to them.

10 samples converted and listened to would last just over 200 µs - not quite long enough for me to hear anything useful. :D

There is information in those 10 samples to be gleaned in ways other than summing them together.

If that's what SY was saying, who am I to disagree? Sounds perfectly reasonable but entirely irrelevant to the discussion about averaging and bandwidth.
 
But if you take (for example) 10 samples, you know amplitude and frequency of the signals to a greater precision, right up to the Nyquist limit. I think you're defining bandwidth as "reciprocal of number of times per second I can take a group of samples and decode what signals are in there" rather than "highest frequency that can be recorded and played back," the sense in which I understand it.

Just to save you from having to go back a few pages: spectral information right up to the Nyquist limit is there, so the bandwidth (in the usual sense of the term) is not reduced.
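That point is easy to check numerically. A sketch (pure Python; the tone frequency is chosen to land exactly on a DFT bin for clarity): a tone well above fs/4 but below Nyquist is still identifiable from just 10 samples.

```python
import cmath, math

fs, n = 44100.0, 10
f_tone = fs * 4 / n                      # 17.64 kHz, below Nyquist (22.05 kHz)
x = [math.sin(2 * math.pi * f_tone * t / fs) for t in range(n)]

# 10-point DFT: bin k corresponds to frequency fs*k/n
dft = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
       for k in range(n)]
mags = [abs(c) for c in dft]
peak_bin = max(range(n // 2 + 1), key=lambda k: mags[k])
# the 17.64 kHz tone is recovered from only 10 samples (peak at bin 4)
```

Averaging those 10 samples together would of course destroy this; looking at them (here, via a DFT) does not.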
 
I read the data sheets. Trust me, it works.

Ah, did you think I was somehow averse to reading them?:p

You may have to go back to earlier parts and longer datasheets (which may be available only in print), but the processes and setups for the measurements are extensively described in several.

ISTM it's not particularly relevant to those figures whether they dithered the waveform or not; I doubt it would make much difference to the THD+N figure they quote, which is 3.2% typical. That would be dominated by other shortcomings (INL, DNL) - compare and contrast the corresponding figure for the TDA1541A. It is, after all, an economy model.
 
1 LSB = 2 Levels (as in using the single LSB)
2 LSB = 4 Levels (as in using the two lower Bits)

PS: it may be that my take is archaic; it goes back to the usage we had in industrial systems in the '80s...
Sorry, but we say 'LSB' to distinguish from 'bit.' 1 LSB is not 1 bit, and 2 LSB is not 2 bits. You would be correct if you said that 1 bit offers 2 levels and 2 bits offer 4 levels, but the nomenclature in use is that 1 LSB refers to the smallest step size in the quantized value. Thus 1 LSB is a variance of 1 code value, or 1 level, and 2 LSBs is a variance of 2 code values, or 2 levels. For a given bit depth, an LSB always has a constant size.

In this nomenclature, the largest undithered error is 1/2 LSB, meaning +/-0.5 LSB is (ideally) the largest difference between the actual value and the quantized value. With proper dither that increases to +/-1 LSB, but what you gain is complete elimination of nonlinear quantization distortion, provided you use at least TPDF dither.
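A quick numerical illustration of what that linearization buys (a sketch, assuming +/-1 LSB TPDF dither and simple rounding; values in LSB units): undithered, a value between codes is stuck at the nearest code, while dithered, its sub-LSB part survives as a duty cycle.

```python
import random

random.seed(1)
v = 0.3   # true value, in LSBs, sitting between codes 0 and 1

def tpdf():
    # triangular PDF over +/-1 LSB: sum of two +/-0.5 LSB uniforms
    return random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)

n = 200000
undithered = round(v)   # always 0: the 0.3 LSB is simply lost
dithered_mean = sum(round(v + tpdf()) for _ in range(n)) / n
# dithered_mean converges on 0.3: the sub-LSB value is preserved on average
```

The per-sample error is larger with dither, but the error is no longer a function of the signal, which is the whole point.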
 
To give you a simple and practical example.

Take an LP, record at 88.2K (or 176.4KHz if you like) and 24 Bit.

Then post process to make best use of 16 Bit dynamic range (eg set highest peaks to 0dBfs possibly even compress some if you feel like it).

Then convert to 44.1KHz/16Bit.

Do one conversion using ZOH & Truncation (essentially discard every 2nd sample and the lower 8 bits of the 24-bit signal) and another with the usual ASRC and Dither.
What is ASRC? Is that an outboard-hardware-only solution?

For me, the usual is regular SRC (would that be called SSRC?) and MBIT+ or POW-r (despite some folk's dislike of the latter).

As for the ZOH & Truncated version, I've certainly been disappointed to find that certain 'HD' purchases were accomplished just this way, if the ultrasonic aliasing is any indication.
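For a concrete (and much simplified) picture of what truncating away low-level detail does versus dithering - a toy sketch, not Thorsten's exact LP procedure: a tone at 0.4 of one 16-bit LSB, requantized with plain rounding versus TPDF dither.

```python
import math, random

random.seed(2)
fs, f, n = 44100, 1000.0, 44100
# a sine at 0.4 of one 16-bit LSB, i.e. below the undithered step size
x = [0.4 * math.sin(2 * math.pi * f * t / fs) for t in range(n)]

def tpdf():
    return random.random() + random.random() - 1.0   # +/-1 LSB triangular

plain = [round(s) for s in x]           # undithered: every sample rounds to 0
dith  = [round(s + tpdf()) for s in x]  # dithered requantization

# cross-correlate each result with the original tone
def corr(a, b):
    return sum(p * q for p, q in zip(a, b))
# plain is pure silence; dith still correlates with the tone
```

The undithered version is digital silence, while the dithered version retains the tone (buried in noise, but recoverable) - which is the audible difference the comparison above is probing.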
 
Hi,

What is ASRC? Is that an outboard-hardware-only solution?

ASRC = Asynchronous Sample Rate Conversion

It is available in software or hardware and is used whenever we want to convert one sample rate into another where the ratio is non-integer (e.g. 192KHz to 44.1KHz).
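As a toy illustration of the non-integer-ratio part only (linear interpolation between neighbours; a real ASRC uses band-limited polyphase interpolation filters, so treat this purely as a sketch of the stepping logic):

```python
def resample_linear(x, ratio):
    """Step through the input at a non-integer stride of 1/ratio,
    linearly interpolating between neighbouring samples."""
    out, pos, step = [], 0.0, 1.0 / ratio
    while pos <= len(x) - 1:
        i = int(pos)
        frac = pos - i
        nxt = x[i + 1] if i + 1 < len(x) else x[i]
        out.append(x[i] * (1 - frac) + frac * nxt)
        pos += step
    return out

# 192 kHz -> 44.1 kHz: the ratio 44100/192000 is non-integer,
# so the read position falls between input samples almost every time
y = resample_linear([float(t) for t in range(192)], 44100 / 192000)
```

Because the output grid almost never lands on an input sample, some form of interpolation filter is unavoidable; the quality of that filter is what separates ASRC implementations.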

For me, the usual is regular SRC (would that be called SSRC?) and MBIT+ or POW-r (despite some folk's dislike of the latter).

For me it is precisely nothing.

As for the ZOH & Truncated version, I've certainly been disappointed to find that certain 'HD' purchases were accomplished just this way, if the ultrasonic aliasing is any indication.

I doubt you can achieve HD releases that way, I was referring to going from "High Rez" to CD standard.

Ciao T
 
<snip>
And, technically, it's (6.0206N - 1.76) ... you have to be the first person I've heard suggest that 16-bit CD has over 98 dB of S/N rather than the typically-quoted 96 dB. At the moment, I can't cite a reference, but I've never seen any claim that the S/N or dynamic range is better than the bit depth. This is all referring to maximum signal amplitude versus quantization noise.
<snip>

No, it is indeed .... + 1.76 dB.

But it depends on the waveform that is quantized. For example, if a sine is given, then the error waveform after ideal quantization can be expressed (mainly) as a sawtooth (under the assumption of a zero-order hold) with a peak amplitude of +/-1/2 LSB.

Calculation leads to the formula
6.02N + X dB (with N = number of bits in the quantizer)

with X being a number that depends on the waveform quantized; for the sine it is the quoted 1.76.
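The sine case is quick to check directly: a full-scale sine has RMS 2^(N-1)/sqrt(2) in LSBs, and the uniform +/-1/2 LSB quantization error has RMS 1/sqrt(12).

```python
import math

N = 16
signal_rms = 2**(N - 1) / math.sqrt(2)   # full-scale sine, in LSBs
noise_rms = 1 / math.sqrt(12)            # RMS of a uniform +/-1/2 LSB error
snr_db = 20 * math.log10(signal_rms / noise_rms)
# snr_db comes out at about 98.09 dB, i.e. 6.02*16 + 1.76
```

So for 16 bits the figure is indeed a shade over 98 dB for a sine, not 96 dB; the often-quoted "6 dB per bit" simply drops the waveform-dependent X term.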

More generally, maybe we should separate DC and AC level quantization; for AC, I think we will all agree that dither is needed during A/D conversion, because even an ideal quantizer will always produce error signals correlated with the signal if used undithered.

Further, while for DC no dither will raise the resolution or the accuracy during A/D, it will do so for AC, under the assumption that the results are analyzed with some processing, as our hearing does.

And I think it can be stated that processing with dither is better in all circumstances than undithered processing (under the assumption that signal levels approach the region in which problems arise and/or truncation occurs).

What remains questionable is whether processed D/A conversion (using high amounts of dither _and_ noise shaping combined with a low-bit DAC in the loop) will in all cases produce the _same_ results as a precise multibit D/A converter.
 
What is the peak level of the dither? What is the peak level of the signal? What is the difference rendered as %?

It will have that level of "fuzzy" distortion (as opposed to harmonic distortion) with dither.

But how is this different from just plain old Gaussian noise (assuming I believe reasonably that the dither is Gaussian)? Surely you're not saying that plain noise is this fuzzy distortion.

This matter should be of concern even to analog diehards because essentially *no* modern recordings are not digitally processed. Even record mastering uses a digital delay for groove control.

Thanks,
Chris
 
Hi,


Do one conversion using ZOH & Truncation (essentially discard every 2nd sample and the lower 8 bits of the 24-bit signal) and another with the usual ASRC and Dither.

Listen to the resulting files in direct comparison to the original LP.

Done it; I can't hear any difference with my 61-year-old ears. In fact the tape hiss coming and going between cuts is very obvious, so the dither could be less important. BTW, the designer of the ESS DAC is an old friend; they actually built the whole thing on a giant FPGA and tweaked it by listening. Some of the tweaks were not the mathematically correct thing to do. Of course, no details were revealed.
 
Cedar sucks! At least the stuff I heard that was processed with it did. It 'sucked' so badly that my friends came to me with some Cedared material, and I took it to a recording engineer to find out what was wrong with it. He explained it to me.



Mark's work for Naxos is highly respected in the classical music community and by people who love music and could care less about the audiophile concerns. Calm down.
 
What did one Deadhead say to the other when they ran out of weed?

Scott must be away from the keyboard to leave that straight line hanging so long... what can you expect applying noise-reduction processing to noise?

but for a simplified look at TPDF dither:

a triangular probability distribution is used, not Gaussian, for the added dither when requantizing "24 bit" studio ADC (or DAW word size, now can be 64 bit) material down to the Audio CD's 16-bit word size

we seem to hear just the RMS value of the noise - additionally, TPDF has a definite peak amplitude, whereas if you watch a Gaussian distribution "forever" you can expect to see an "infinite" peak

the TPDF noise is made by adding (rather, subtracting, for zero mean) 2 independent uniform probability distributions over the [0, 1] interval (uniform PDF noise is commonly available from a random() function in many math packages)

while the dither is added at higher resolution before the quantizing/rounding step, the result of 2 LSB p-p TPDF dither after quantization is to add:

0 to 3/4 of the samples,
+1 count to 1/8,
-1 count to 1/8 of the samples (for a value sitting exactly on a quantization level)

and if the value we are dithering is at +½ LSB, then
½ the dithered samples are 0,
½ are +1


in both cases the RMS value of the added +/-1 "count" TPDF noise (after quantization) is 1/2 LSB - an engineer should be able to do those RMS sums in their head
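Those fractions are easy to verify with a quick simulation (a sketch: +/-1 LSB TPDF built from the difference of two uniforms as described above, dithering a value that sits exactly on a quantization level):

```python
import random

random.seed(3)
n = 300000

def tpdf():
    # 2 LSB peak-to-peak triangular: difference of two uniform [0,1) draws
    return random.random() - random.random()

# value exactly on a code level: the dither alone decides the output code
outs = [round(tpdf()) for _ in range(n)]
p0  = outs.count(0)  / n    # ~ 3/4
pp1 = outs.count(1)  / n    # ~ 1/8
pm1 = outs.count(-1) / n    # ~ 1/8
rms = (sum(o * o for o in outs) / n) ** 0.5  # ~ 1/2 LSB
```

The 1/8 tails come from integrating the triangle beyond +/-1/2 LSB, and the RMS works out to sqrt(1/8 + 1/8) = 1/2 LSB, as stated.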


the quantization noise is decorrelated from the signal after dithering but still adds to the TPDF noise; the RMS noise calc above is taken after quantization, so it includes both

I believe we can use 93 dB as a good estimate of the "flat", unweighted, sinusoidal S/N of a 16-bit digital audio stream with +/-1 count TPDF dither

where "sinusoidal" refers to the ratio of the RMS value of the largest-amplitude sine we can represent to the RMS value of the noise
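The ~93 dB figure follows from adding the dither power (two uniforms, 1/12 LSB² each) to the quantization noise power (1/12 LSB²); a quick check of the arithmetic:

```python
import math

signal_rms = 2**15 / math.sqrt(2)      # full-scale 16-bit sine, in LSBs
noise_pow = 1/12 + 2 * (1/12)          # quantization + +/-1 LSB TPDF dither
noise_rms = math.sqrt(noise_pow)       # = 0.5 LSB, matching the figure above
snr_db = 20 * math.log10(signal_rms / noise_rms)
# snr_db comes out around 93.3 dB
```

In other words, properly applied TPDF dither costs about 4.8 dB against the undithered 98.1 dB theoretical figure, landing at roughly 93 dB.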
 