• WARNING: Tube/Valve amplifiers use potentially LETHAL HIGH VOLTAGES.
    Building, troubleshooting and testing of these amplifiers should only be
    performed by someone who is thoroughly familiar with
    the safety precautions around high voltages.

How does a tube's need for bias change with age?

Nyquist is not quite everything that it is supposed to be.

1. Sample a 22.05 kHz sine wave at a 44.1 kHz rate.
Sample at 0 degrees, 180 degrees, 360 degrees. What do you get? Zero, zero, zero.

2. Sample a 22.05 kHz sine wave at a 44.1 kHz rate.
Sample at 90 degrees, 270 degrees, 450 degrees. What do you get? +1, -1, +1.

3. Sample a 22.05 kHz sine wave at a 44.1 kHz rate.
Sample at 30 degrees, 210 degrees, 390 degrees. What do you get? +0.5, -0.5, +0.5.
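
For anyone who wants to check those three cases, here is a minimal Python sketch (standard library only, my own illustration) that reproduces them:

```python
import math

fs = 44100.0           # sampling rate, Hz
f = 22050.0            # signal frequency: exactly fs / 2
for start_deg in (0.0, 90.0, 30.0):     # the three cases above
    samples = [math.sin(math.radians(start_deg) + 2.0 * math.pi * f * n / fs)
               for n in range(3)]       # three successive samples
    print(start_deg, [round(s, 3) for s in samples])
```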

I admit, these are special cases, because the sampling rate and signal frequency are exactly 2:1.
So what happens when the sampling rate is 2.02 times the signal frequency?
(not an exact integer ratio).
Perhaps you can see that there will be times when we nearly capture the crests (peaks) of the sine wave, times when we nearly capture the mid amplitudes, and times when we nearly capture the zero crossings.
It changes as we roll through time.
And for the upper harmonics of a note, such as the 2.096 kHz flute note below, the effect is even worse.
Real nice, right?

You can use the same 44.1 kHz sampling rate on a sustained 2.096 kHz flute note. There will still be some variance in the amplitude of the samples as we roll through time.

Higher sampling rates can more accurately capture the correct amplitudes as we roll through many, many cycles of a note.
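
As a rough illustration of that claim, here is a Python sketch (my own, not from any reference) that takes the raw sample nearest each positive crest of a 2.096 kHz sine and reports the worst shortfall from the true crest, at a few sampling rates:

```python
import math

def crest_capture_error(fs, f=2096.0, n_crests=1000):
    """For each positive crest of sin(2*pi*f*t), take the raw sample
    nearest in time and track how far it falls short of the true 1.0."""
    worst = 0.0
    for k in range(n_crests):
        t_crest = (k + 0.25) / f        # time of the k-th positive crest
        n = round(t_crest * fs)         # index of the nearest raw sample
        worst = max(worst, 1.0 - math.sin(2.0 * math.pi * f * n / fs))
    return worst

for fs in (44100.0, 96000.0, 192000.0):
    print(f"{fs:>8.0f} Hz: worst crest shortfall ~ {crest_capture_error(fs):.3%}")
```

Note this looks only at the raw samples, before any reconstruction.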

How good or bad is it?
Only you can answer.
 
Nyquist is not quite everything that it is supposed to be.


Nyquist is exactly what it's supposed to be. Your example violates Nyquist, which requires sampling at more than twice the highest frequency of interest. Sampling at 44.1 kHz works perfectly for 22.0 kHz and fails for 22.05 kHz, exactly as promised.


I'm sorry this is off topic, but errors in fundamental conceptions need correcting before they spread. Many very smart and capable people have been misinformed about sampling and quantization via the Internet. Gotta stick that thumb in the dike.


All good fortune,
Chris
 
Chris Hornbeck,

I admit, you are right.
Sampling exactly at the Nyquist rate is where it falls apart. All signal frequencies below the Nyquist frequency are captured closer to their actual amplitudes versus time.

Yes, let's nip it in the bud.

But I was hoping to see what kind of answer I could get to a practical situation:
Given a 44.1 kHz sampling rate, what is the probability of capturing the exact full amplitudes of both the positive and negative crests of a 2.096 kHz note?
Can it be done in 10 cycles?
That is only 4.77 milliseconds, not exactly a sustained note.

I am not into writing software to calculate that, nor do I have the energy to do that manually on my HP 11.
So in how many of those 10 cycles do we get a sample within 3% of the full amplitude?
In how many of those 10 cycles do we get a sample within 3% of full scale at the zero crossings?
. . . within 1%?
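
(For anyone who is into writing software, here is a rough Python sketch of the counting I have in mind; the script and its tolerances are just my reading of the question, not an authoritative answer:)

```python
import math

fs, f = 44100.0, 2096.0
crest_hits = {0.01: 0, 0.03: 0}   # cycles with a sample this close to a crest
zero_hits  = {0.01: 0, 0.03: 0}   # cycles with a sample this close to a zero crossing

for k in range(10):               # the 10 cycles in question
    t0, t1 = k / f, (k + 1) / f   # time span of cycle k
    vals = [math.sin(2.0 * math.pi * f * n / fs)
            for n in range(math.ceil(t0 * fs), math.floor(t1 * fs) + 1)]
    best_crest = max(abs(v) for v in vals)   # sample nearest full amplitude
    best_zero  = min(abs(v) for v in vals)   # sample nearest a zero crossing
    for tol in (0.01, 0.03):
        crest_hits[tol] += (1.0 - best_crest) <= tol
        zero_hits[tol]  += best_zero <= tol

print("within 1% / 3% of a crest:        ", crest_hits[0.01], "/", crest_hits[0.03], "cycles")
print("within 1% / 3% of a zero crossing:", zero_hits[0.01], "/", zero_hits[0.03], "cycles")
```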

I'm just sayin'
I'm just asking because I have doubts.
I know nothing in sound capture and reproduction is perfect, but how good or bad is it?
 
But I was hoping to see what kind of answer I could get to a practical situation:
Given a 44.1 kHz sampling rate, what is the probability of capturing the exact full amplitudes of both the positive and negative crests of a 2.096 kHz note?
Can it be done in 10 cycles?
That is only 4.77 milliseconds, not exactly a sustained note.


Let's take the simplified case of sampling alone, without quantization: just a signal below the Nyquist limit, sampled and fed directly to reconstruction. And let's pretend perfect sampling and perfect reconstruction. In this (purely analog) case, the signal amplitude is perfectly reconstructed, with no noise added and no variations in timing (group delay variation, etc.).
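
As an illustration of that ideal case, here is a Python sketch of Whittaker-Shannon (sinc) reconstruction, evaluated exactly at a crest that no raw sample lands on. Truncating the sum to a finite window is a practical compromise of mine, so the result is only approximately 1.0:

```python
import math

fs, f = 44100.0, 2096.0
N = 44100                                    # one second of raw samples
samples = [math.sin(2.0 * math.pi * f * n / fs) for n in range(N)]

def reconstruct(t, window=500):
    """Whittaker-Shannon interpolation, x(t) = sum_n x[n] * sinc(fs*t - n),
    truncated to a finite window of samples around t for practicality."""
    n0 = int(t * fs)
    total = 0.0
    for n in range(max(0, n0 - window), min(N, n0 + window)):
        u = fs * t - n
        total += samples[n] * (1.0 if u == 0.0 else math.sin(math.pi * u) / (math.pi * u))
    return total

t_crest = 1000.25 / f                        # a positive crest, mid-stream
print(reconstruct(t_crest))                  # ~1.0, though no raw sample lands there
```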


Now let's add quantization with dither. Each sample is quantized with dither to randomize the quantization error, so noise is added; then A/D and D/A conversion are performed and the samples reconstituted. The signal is then reconstructed from the samples as in the first case. Note that no timing errors have been added by quantization.
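
A sketch of that quantize-with-dither step might look like this in Python; the 16-bit depth and the +/-1 LSB triangular (TPDF) dither shape are my assumptions, not anything specified above:

```python
import math
import random

def quantize_with_dither(x, bits=16):
    """Quantize x (in -1..+1) to 'bits' bits, after adding +/-1 LSB of
    triangular-PDF dither so the quantization error is randomized."""
    lsb = 2.0 / (2 ** bits)
    dither = (random.random() - random.random()) * lsb    # TPDF, spans +/-1 LSB
    q = round((x + dither) / lsb) * lsb
    return max(-1.0, min(1.0, q))                         # clip to full scale

fs, f = 44100.0, 2096.0
errs = [abs(quantize_with_dither(math.sin(2.0 * math.pi * f * n / fs))
            - math.sin(2.0 * math.pi * f * n / fs)) for n in range(10000)]
print(max(errs))    # a few LSBs at most: noise was added, but no timing error
```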


So, sampling alone can (ideally, of course) be perfectly reversed for samples of any amplitude, provided Nyquist isn't violated. Adding quantization sets a lower limit on sample amplitude but makes no change to the timing of the samples.


When I first worked with PCM, for military comms in 1970, it was considered so secure that radio transmissions weren't even encrypted in a war zone. Nobody had even heard of dither AFAIK. We've come a long way since then, but some misconceptions from the late 1980s survive. Hope this example helps.


All good fortune,
Chris
 
I see I have a lot to learn about quantization and dither.

For the 2.096 kHz flute note sampled at a 44.1 kHz rate, that is about 17 degrees from sample to sample of the 360 degree cycle.

Is the sample effectively an average amplitude over those 17 degrees of the flute note's cycle?
Or is it merely an instantaneous sample, i.e. 1 degree or less of the 17 degrees in time?

The way I (incorrectly?) see it is:

At 90 degrees +/- 8 degrees, an instantaneous sample amplitude is within 1% of the crest voltage at 90 degrees (0.99 versus 1.00).

At 90 degrees +/- 14 degrees, an instantaneous sample amplitude is within 3% of the crest voltage at 90 degrees (0.97 versus 1.00).

At 0 degrees +/- 0.58 degrees, an instantaneous sample amplitude is within 1% of full scale of the true zero-crossing value (0.01 versus 0.00, on a crest of 1.00).

At 0 degrees +/- 1.7 degrees, an instantaneous sample amplitude is within 3% of full scale of the true zero-crossing value (0.03 versus 0.00, on a crest of 1.00).
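
Those four numbers are easy to check with a couple of lines of Python:

```python
import math

# verify the 1% and 3% windows quoted above
for deg, pct in ((82.0, "1%"), (76.0, "3%"), (0.58, "1%"), (1.70, "3%")):
    print(f"sin({deg:5.2f} deg) = {math.sin(math.radians(deg)):.4f}  ({pct} case)")
```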

It seems to me (incorrectly?) that those 1% and 3% errors look like amplifier distortion, unless quantization and dither take care of them.

Is there a simple way to explain how quantization fixes that?
 
Is there a simple way to explain how quantization fixes that?


Quantization doesn't really "fix" anything; in fact it adds noise. But there isn't anything to fix. I've looked briefly for a good tutorial on the Interweb, no luck yet; maybe someone else has a good reference. Wiki has its usual non-intuitive explanation:


Nyquist–Shannon sampling theorem - Wikipedia


Maybe thinking about how only two samples are needed for perfect reconstruction would be a useful path. And, that reconstruction is the other, and equal, half of sampling.


If anyone still reading this hijacked thread (I'm very sorry.) knows of a good tutorial, please jump in.


All the best fortune,
Chris
 
If I may return to the topic for a bit, I have some data points that may be of interest: I bought a McIntosh MC240 fresh out of Mc Clinic with new tubes in 1978. I run the tubes hard at about 60 mA instead of the original 20 mA or so. After 34 years of normal use (evenings and weekends), I retubed in 2012. The original RCA 6L6GCs still have some life left (variation in Ia/Gm vs. nominal):

  1. 6.9%/8.3%
  2. 5.6%/8.3%
  3. -7.6%/-8.3%
  4. -31.3%/-25%
 