John Curl's Blowtorch preamplifier part III

From a theoretical perspective Scott Wurcer and syn08 have been correct all along. What they didn't clearly explain is why DACs and ADCs seem to perform imperfectly in practice, which is where all the problems are (not in the mathematical theory). Unfortunately, by this point everyone is so tired of arguing that there may be no interest left in trying to understand as much as possible about the practical problems.

RNM, if we band-limit transients to 20kHz with an ideal filter before digitizing, then we can reconstruct the band-limited signal back into analog quite well, provided we go to enough trouble to do it as well as we possibly can.
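
A minimal sketch of what "reconstruct quite well" means, assuming numpy and ignoring quantization and noise (this is just the textbook Whittaker-Shannon interpolation, truncated to a finite window):

```python
# Sketch: reconstruct a band-limited signal from its samples by sinc
# interpolation (Whittaker-Shannon), truncated to a finite sample window.
import numpy as np

fs = 44100.0                        # sample rate, Hz
n = np.arange(-200, 200)            # sample indices around t = 0
f = 5000.0                          # a 5 kHz test tone, well inside the band
x = np.sin(2 * np.pi * f * n / fs)  # the samples

# Evaluate the reconstruction on a much finer time grid
t = np.linspace(-0.002, 0.002, 4001)
x_rec = np.array([np.sum(x * np.sinc(fs * ti - n)) for ti in t])

err = np.max(np.abs(x_rec - np.sin(2 * np.pi * f * t)))
print(f"max reconstruction error: {err:.2e}")  # limited by truncating the sinc sum
```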

Most of the practical problems are with filters, accuracy of data converters (amplitude and time), time delays, etc. If we increase the sample rate, some practical problems may get worse and some may get better. There are always trade-offs.

Now, if your point is that audible transients require frequencies above 20kHz to be reproduced the way humans hear them, that would be another issue entirely. That would be to say that humans may not be able to hear above ~20kHz when measured with a single tone, yet in transient cases there is some hearing sensitivity that does not apply to single tones. That would imply some nonlinearity in how hearing works. In other words, theory pertaining only to linear systems may not perfectly apply to hearing. That gets back into questions requiring more perceptual research. It's not really an issue of the correctness of linear theory itself as applied to linear systems.

JNeutron, we could talk about the issues you raised and the practical limitations. Again, the problem is not with the theory; it is practical.
 
No, you still don't get it. Sample after every 179.9 degrees. Now how many periods do you need to get the same approximation as for 179 degrees? And after a period of 179.99999999999999999........ degrees?
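
To put a number on that (my sketch, not anything from the post above): sampling every 179.9 degrees, the sample phase slips 0.1 degree per sample relative to exact half-period sampling, so it takes 180/0.1 = 1800 samples before the sampled values have swept the full amplitude range.

```python
# Sketch: sample a sine every 179.9 degrees and watch how slowly the
# running peak of the sampled values approaches the true amplitude of 1.
import numpy as np

step = np.deg2rad(179.9)
k = np.arange(4000)
samples = np.sin(k * step)

for n in (10, 100, 500, 1800):
    print(n, np.max(np.abs(samples[:n])))  # 0.016, 0.17, 0.77, ~1.0
```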

I can't explain it any better than to quote: http://www.wescottdesign.com/articles/Sampling/sampling.pdf Please pay attention to the "What Nyquist didn't say" section; it addresses all of your questions and confusions.

I do recommend you actually read and understand my posts.
Jn
 
First, a unit impulse has infinite bandwidth, even if the repetition rate is 100 times a second.

Second, a drum hit is not a unit impulse, but it does have a rich harmonic structure.

Third, both Rupert Neve and Manfred Schroeder, by independent means, arrived at the conclusion that a 5 degree phase shift at 20,000 hertz was perceptible. This would indicate that an upper band limit of 20,000 hertz is not adequate to reproduce music accurately.
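
(A rough worked number, my arithmetic rather than theirs: 5 degrees at 20,000 hertz is (5/360) × 50 µs ≈ 0.7 µs, so the claim amounts to sub-microsecond timing shifts being perceptible.)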

If you have a subject with good hearing and the ability to accurately reproduce a test signal, you just might find that human hearing does not have a brick-wall filter. Although many folks can detect a loud 30,000 hertz tone being switched on or off, it does not come across as a sine tone.

If you produce a 20,000 hertz tone with an "ADSR" envelope, a Red Book CD cannot reproduce it accurately.
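
One way to see why (my sketch, with a crude attack/decay shape standing in for "ADSR"): the envelope spreads the tone's energy into sidebands, and whatever lands above 22.05kHz simply cannot be carried by 44.1kHz sampling.

```python
# Sketch: a 20 kHz tone with a fast attack/decay envelope has spectral
# energy above the 22.05 kHz Red Book Nyquist limit.
import numpy as np

fs = 192000
t = np.arange(int(0.01 * fs)) / fs                       # 10 ms
env = np.minimum(t / 0.0005, 1.0) * np.exp(-t / 0.002)   # 0.5 ms attack, 2 ms decay
x = env * np.sin(2 * np.pi * 20000 * t)

X = np.abs(np.fft.rfft(x))
f_axis = np.fft.rfftfreq(len(x), 1 / fs)
above = np.sum(X[f_axis > 22050] ** 2) / np.sum(X ** 2)
print(f"fraction of energy above 22.05 kHz: {above:.2%}")
```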

The Red Book CD sampling rate was not based on the Nyquist criterion plus practical filter limits. It was picked so you could record on an existing, modified Sony video recorder.

The original recordings were based on Thomas Greenway Stockham's work. His A/D converter design used Allen-Bradley resistors, including some of the red mil-spec ones. It was nine-bit linear with an additional 7-bit section. He did not want to do 8 and 8, as that would put the transition at the signal's zero crossing and be far more perceptible. Sony then made a converter chip based on the 9-and-7-bit design. It was used for virtually all the original CDs.

Finally, one cannot claim all CDs behave the same. Current practice is to record at higher rates and resolutions and then downsample and add dither. That is great for almost everyone, but obviously not all. Nor can one claim the reproduction of a studio-recorded piece is flawless unless they are listening in the original control room.
 
I'm really not following. For the last couple of days I have been playing with measuring the phase difference between the left and right channels of a test LP at 1kHz. There is no problem resolving 0.1 degree at 16/48, which is ~280nsec. Do you want me to post the plots?
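
For anyone who wants to try that kind of measurement, a stripped-down sketch (mine, not the actual setup; the LP, the soundcard and quantization are all left out): the phase of the 1kHz component of each channel, taken over a one-second window, resolves 0.1 degree easily.

```python
# Sketch: measure the L/R phase difference of a 1 kHz tone at 48 kHz from
# the phase of a single-bin DFT over a one-second window.
import numpy as np

fs, f0 = 48000, 1000.0
n = np.arange(fs)                       # one second of samples
true_shift = np.deg2rad(0.1)            # 0.1 degree = ~278 ns at 1 kHz
rng = np.random.default_rng(0)
left  = np.sin(2*np.pi*f0*n/fs) + 1e-4*rng.standard_normal(fs)
right = np.sin(2*np.pi*f0*n/fs + true_shift) + 1e-4*rng.standard_normal(fs)

ref = np.exp(-2j*np.pi*f0*n/fs)         # single-bin DFT at exactly 1 kHz
dphi = np.angle(np.sum(right*ref)) - np.angle(np.sum(left*ref))
print(f"measured: {np.degrees(dphi):.4f} deg")  # ~0.1000
```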

No need, I know you are doing it right.

However, you are relying on the strength of a large sampling window.

Can you achieve that temporal accuracy with a single triangle hit? IOW, transient, constantly changing content, where each channel already has a relative temporal shift.
Remember, we are not trying to measure a mono signal, and a real stereo signal has ITD built in.

Jn
 
One very important thing before digitizing is the bandwidth restriction to avoid aliasing, so as a reaction to the article's "What Nyquist didn't say" section, the following:
When sampling at 44.1kHz it is almost impossible to create an analogue filter that goes from 0dB at 20kHz to ca. -100dB at 22.05kHz.
That is why some are fighting against 44.1/16 as a decent CD format.

But the solution is simple (a rough code sketch of steps 2-4 follows the list):
1) use a moderate-order analogue filter going from 0dB at 20kHz to ca. -100dB at 192kHz,
2) sample at 384/24, the 24 bits keeping computation and rounding errors well below the 16-bit level,
3) digitally filter all content from 20kHz to 192kHz down to below -120dB, and
4) decimate to 44.1/16 while adding dither and noise shaping.
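
Something along these lines (my sketch, using scipy; the filter length, window and dither details are illustrative choices, not a studio recipe):

```python
# Sketch of steps 2-4: capture at a high rate, low-pass digitally,
# decimate to 44.1 kHz, then dither and quantize to 16 bits.
import numpy as np
from scipy import signal

fs_hi, fs_lo = 352800, 44100            # 8x oversampled capture -> Red Book
t = np.arange(fs_hi) / fs_hi
x = 0.5 * np.sin(2 * np.pi * 1000 * t)  # stand-in for the high-rate capture

# Step 3: long linear-phase FIR, flat to ~20 kHz, deep stopband above ~22 kHz
h = signal.firwin(2047, 21000, window=("kaiser", 12.0), fs=fs_hi)
y = signal.lfilter(h, 1.0, x)

# Step 4: decimate 8:1, add ~1 LSB TPDF dither, quantize to 16 bits
y = y[::8]
lsb = 2.0 / 2**16
rng = np.random.default_rng(1)
dither = (rng.random(len(y)) - rng.random(len(y))) * lsb
y16 = np.round((y + dither) / lsb) * lsb
```

(Noise shaping is omitted here; plain TPDF dither already decorrelates the quantization error from the signal.)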

As far as I know, 384/24 is what most studios are using nowadays when creating their masters, while there also seems to be a tendency to use DXD (352.8/24) for easier conversion to (almost obsolete) DSD.
So producing a decent 44.1/16 file seems to be within reach.
But DSD, once called "the best invention since sliced bread" that would make all other formats obsolete, could not convince in a large test against 176.4/24.
See "Perceptual Discrimination of Digital Audio Coding Formats", performed at the University of Music Detmold, Germany, and also published as AES paper 6086.

The other point, mentioned a few times in this thread, is that Nyquist/Shannon only holds for continuous signals and not for short events.
I have tried to debunk this myth in LA Vol 8, page 19, as absolutely not true.

As a last point: FIR filters, as often used in the digital domain, produce pre-ringing, which can be perceived as unnatural.
The length and type of the digital filters do seem to have some effect on the reproduced sound, which is why some prefer NOS DACs without any upsampling or digital filtering.
But modern DACs have huge processing power and much improved digital filters, some with selectable filters without pre-ringing.
Yet most tests that I have read could not tell the difference between the filters, although the differences could be measured very well.
Or, when testers heard slight differences between filters, they could not tell for sure which they preferred.
If a minimum-phase filter without pre-ringing were an obvious step forward, all D/A chip producers would most likely have switched to minimum phase by now.
But that does not seem to be the case, ergo ???
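
To make the pre-ringing point concrete (my sketch, assuming scipy; note scipy's homomorphic conversion halves the filter length and its magnitude response approximates the square root of the prototype's, so this only illustrates the ringing behaviour, not a production filter):

```python
# Sketch: a linear-phase low-pass FIR rings before its main tap, while its
# minimum-phase counterpart rings only after it.
import numpy as np
from scipy import signal

fs = 44100
h_lin = signal.firwin(255, 20000, window=("kaiser", 9.0), fs=fs)  # linear phase
h_min = signal.minimum_phase(h_lin)                               # minimum phase

for name, h in (("linear phase", h_lin), ("minimum phase", h_min)):
    peak = np.argmax(np.abs(h))
    pre = np.sum(np.abs(h[:peak]) > 1e-4)
    print(f"{name}: {pre} significant taps before the peak")
```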

To conclude, there is nothing wrong with the 192/24 high-res format, apart from consuming more memory.
The only possible advantage that I see is that, when using a D/A that always converts to analogue at 192kHz, the upsampling and digital filtering steps can be omitted, preventing any sonic character from being added by the digital filter.
But as mentioned, I was unable to hear the difference from a decimated 44.1/16 file once it was upsampled, filtered and converted to analogue at 192kHz.

Hans
 
Sample a 10kHz sine using only 3 samples at Red Book rate, triggered by pushing a button (IOW, temporally random). Calculate the location of the positive-going zero crossing. Edit: of course this does require a second channel with time-base-only information to compare to.

Repeat multiple times and graph the statistics.

Then 4, 5, 6 samples, etc. You will find a trade-off between temporal accuracy and window length. It is important to consider that temporal accuracy (uncertainty) when two partially correlated channels are being converted simultaneously, especially if that temporal uncertainty rises above human discrimination thresholds.
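
A sketch of that experiment as I read it (my code: a least-squares sine fit at the known frequency stands in for the estimator, a little added noise stands in for quantization and the rest, and the known trigger time plays the role of the time-base channel):

```python
# Sketch: estimate the timing of a 10 kHz sine from N Red Book samples
# taken at a random trigger instant; watch the scatter shrink with N.
import numpy as np

fs, f0 = 44100.0, 10000.0
rng = np.random.default_rng(0)

def timing_scatter_ns(n, noise=1e-3, trials=2000):
    k = np.arange(n) / fs
    A = np.column_stack([np.sin(2*np.pi*f0*k), np.cos(2*np.pi*f0*k)])
    errs = []
    for _ in range(trials):
        t0 = rng.random() / f0                      # random trigger instant
        x = np.sin(2*np.pi*f0*(k + t0)) + noise*rng.standard_normal(n)
        a, b = np.linalg.lstsq(A, x, rcond=None)[0]
        est = np.arctan2(b, a) / (2*np.pi*f0)       # recovered time offset
        errs.append((est - t0 + 0.5/f0) % (1/f0) - 0.5/f0)  # wrap to one period
    return np.std(errs) * 1e9

for n in (3, 4, 5, 6, 12, 24):
    print(n, f"{timing_scatter_ns(n):.1f} ns rms")
```

With no noise at all, two or more samples pin a known-frequency sine exactly; the window-versus-accuracy trade-off appears as soon as there is any noise or quantization to average down.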

What is the equivalent "window" for human localization?

Jn

PS: It is far more stimulating to consider analysis of waveforms when one harmonic is gating another; simple convolution falls apart. But I digress... back to audio.
 
Remember, we are not trying to measure a mono signal, and a real stereo signal has ITD built in.

Which does create some confusion. AFAIK a lot of the allegedly audible DAC differences are supposed to be audible in mono reproduction.
So it's perhaps not ITD related - which doesn't mean it's irrelevant, but that it's a different issue.

On the other hand, the misunderstandings of sampling shown by some above do go a long way to explaining things...

It certainly doesn't help to assert 20kHz as a limit to hearing - most of the members here would love to have hearing worth s**t at half that frequency! 🙂

Seems to me a lot of effort goes into trying to provide technical, DAC- or other system-based explanations for differences that actually lie in individual ear / brain processing modulated by subjective preferences!

For the audio impulse discussion - perhaps someone could mic up a cymbal and record the spectrum of a strike - I'd offer but I don't have one...
 
Sample a 10kHz sine using only 3 samples at Red Book rate, triggered by pushing a button (IOW, temporally random). Calculate the location of the positive-going zero crossing.

... exactly one of the problems I used to work on when writing high-speed analog modems, V.32 - V.92... Fortunately, in that case you know a lot about what the signal should look like, so polynomial interpolation and the right tracking loops work.
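
The interpolation idea in miniature (my sketch, linear rather than a higher-order polynomial, and nothing like real modem timing recovery): with a clean, oversampled waveform, interpolating between the two samples that bracket a zero crossing already resolves timing far below one sample period.

```python
# Sketch: locate a zero crossing to sub-sample precision by linear
# interpolation between the two samples that bracket it.
import numpy as np

fs, f0, t_true = 48000.0, 1000.0, 200e-6
t = np.arange(48) / fs
x = np.sin(2 * np.pi * f0 * (t - t_true))     # true upward crossing at 200 us

i = np.where((x[:-1] < 0) & (x[1:] >= 0))[0][0]
frac = -x[i] / (x[i + 1] - x[i])
t_cross = (i + frac) / fs
print(f"error: {(t_cross - t_true) * 1e9:.1f} ns")  # a few ns
```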
 
Am I failing so far? So, for those with ADD like myself, can you state your point again? Do you still maintain that two samples per period cannot uniquely reconstruct a sine?

Again, if you sample exactly at the zero crossing, you get zero data.

At any other phase, the amplitude will depend on the location between zero crossings and peak:
0, 180, 360... : zero.
90, 270, 450 (90 + 360): peak.
45, 225, 405 (45 + 360): an intermediate value.

Sampling just off twice the rate, the amplitude will modulate over time.
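
A quick numeric check of those cases (mine):

```python
# Sampling a sine at exactly twice its frequency: the two values per period
# depend entirely on where the samples land between zero crossing and peak.
import numpy as np

for start_deg in (0.0, 90.0, 45.0):
    phases = np.deg2rad(start_deg + 180.0 * np.arange(4))
    print(start_deg, np.round(np.sin(phases), 3))
# 0.0:  all ~0 - no information
# 90.0: +/-1 - full amplitude
# 45.0: +/-0.707 - an intermediate value
```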

Jn
 
1 degree at 1kHz is 2.8 µs; 5.6 µs at 500Hz; 13.9 µs at 200Hz.

We try to keep timing jitter very low, down in the low-ps range (depending on how it is to be measured).

Harder to say about the amplitude accuracy of a sample point (it may be different for audio and instrumentation data converters). Generally, we truncate analog values between quantization levels rather than employ rounding.

For very small timing errors, they may be taken as equivalent to amplitude errors.
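
(As a rough worked number, my arithmetic: the worst-case amplitude error a timing error Δt produces on a full-scale sine at frequency f is about 2·π·f·Δt of full scale, so 10 ps on a 20kHz tone gives 2·π·20000·10e-12 ≈ 1.3e-6, on the order of -118 dB.)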

There are various sources of noise, so one thing you would need to specify is how much noise or uncertainty is allowable for the physical phenomenon you wish to measure.

All the above are intended to suggest practical considerations that might be encountered in some data acquisition situations. Don't know what your requirements might be as a practical matter.
 
Sampling just off twice the rate, the amplitude will modulate over time.

True, but it doesn't matter in theory. The analog values of the points (taken to infinite decimal places) correspond to exactly one solution, assuming an ideally band-limited system.

Practically speaking, ambient noise, quantizing noise, data converter nonlinearity, timing jitter, etc., all contribute noise and/or uncertainty.
 