John Curl's Blowtorch preamplifier part III

Status
Not open for further replies.
You don’t need that many to make a halfway decent filter.

Here is a 1990 example implementation from BB:

http://tech.juaneda.com/download/DF1700.pdf

It's a total of 199 taps, if I read the datasheet correctly.

While the overall stopband attenuation of the digital lowpass filter (removing the images between the old Fs and the new Fs) already looks quite good, the attenuation at 0.5 × Fs is only 6 dB, which isn't sufficient.
If I may again point to Goedhart et al., who argued that at least 50 dB of attenuation at 22.05 kHz (in the case of "Redbook") is needed, and realised that with their first 4-times oversampling filter (SAA 7030); of course, as shown in a recent post, the stopband attenuation was only 50 dB at the beginning and got worse at higher frequencies.
 
Hi, vacuphile,
for example SEGOR-electronics GmbH: search for VKL 350 (Eurocard, 100 × 160 mm).
There were also "powerplane boards" from Vero; I don't know if they
still exist.

There are prettier ones, such as (from reichelt elektronik) the
RE01-LFDS with tin plating and the
RE334-LF with tin plating and solder mask. I could not find them
right now on the web, but you can see one at

< https://www.diyaudio.com/forums/equ...zer-ak5397-ak5394a-ak4490-95.html#post5986973 >

#2832
 
Not a million points, but my first FIR filter from 1984.
We used it to filter ultrasonic test signals on the inner
containment of nuclear reactors, not important things
like audio. The bottleneck was Z80 processor I/O.

It is also an example that electronics tends to use up
all the available space.

Wow, I have a vague recollection of Z80 code from back in the day. I didn't do enough of it for it to stick.

All,
My concern all along has been how not to violate Nyquist. Having the signal frequency too close to Nyquist produces beat patterns, which are sine-envelope modulation. In my 22 kHz example, the rise from zero to peak takes 220 samples, so for a filter to flatten the envelope requires at least double that depth, maybe more. That process is great for pure sines. However, if that depth is used, what would it do to a signal with a fast envelope rise time? Cymbals, I suspect, are a great example.
As I pointed out, the beat-pattern length in samples is f/(Fs/2 − f): sampling a 22 kHz sine at 44.1 kHz gives 22000/(22050 − 22000) = 22000/50 = 440. Note that gives only one lobe; the second lobe completes the full 360° sine-modulation envelope.
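The beat arithmetic above can be checked in a few lines. This is a minimal sketch, assuming an ideally sampled unit-amplitude sine; the sample magnitudes of a 22 kHz sine at 44.1 kHz trace a |sin| envelope at the 50 Hz beat frequency:

```python
import math

FS = 44100.0     # sample rate (Hz)
F_SIG = 22000.0  # signal frequency, just below Nyquist (22050 Hz)

# One beat lobe of a 22 kHz sine sampled at 44.1 kHz.
x = [math.sin(2 * math.pi * F_SIG * n / FS) for n in range(442)]

# Envelope: zero at n = 0, peak roughly 220 samples in,
# back to ~zero one lobe later.
beat_len = F_SIG / (FS / 2 - F_SIG)  # the formula above
print(beat_len)      # 440.0
print(abs(x[220]))   # ~1.0  (envelope peak)
print(abs(x[441]))   # ~0.0  (end of the first lobe)
```

Running it confirms the numbers in the post: a lobe of ~440 samples, with the rise from zero to peak taking ~220 samples.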

Dropping f to 20 kHz gives a beat of about 20 samples, so a filter depth needs to be roughly that length for a flat-envelope sine. But again, you are looking at transient issues in the 10-to-20-sample range.
And all of this starts because of the analog sampling process at the front end.
Per Lavry, oversampling to relieve the output-filter concerns is a valid thing to do, but if the resulting filter has insufficient temporal depth, sines will modulate at the beat frequency. The alternative is dulling the transient response a bit.
Summary: there is a flatness/transient trade-off when you get too close to Nyquist on the A/D side. It is independent of the oversampling rate at the D/A. If you want better transient response at the upper end, you have to get more data from the analog stream. I have provided rudimentary math showing the level required (roughly).

It is possible this math relationship has already been established, as I said, I have two historical data points.
Jn
 
Hi, vacuphile,
for example SEGOR-electronics GmbH: search for VKL 350 (Eurocard, 100 × 160 mm).
There were also "powerplane boards" from Vero; I don't know if they
still exist.

There are prettier ones, such as (from reichelt elektronik) the
RE01-LFDS with tin plating and the
RE334-LF with tin plating and solder mask. I could not find them
right now on the web, but you can see one at

< https://www.diyaudio.com/forums/equ...zer-ak5397-ak5394a-ak4490-95.html#post5986973 >

#2832
I get boards from SparkFun. The boards have isolated plated-through pads. They also have pogo pins; I used both to make a bed-of-nails fixture for testing a laser-kerfed 3-inch-diameter diode in liquid helium, so I know the boards will survive the lowest temperatures normal people would want (or survive). I think the largest they have is somewhere in the 2 × 4 inch range, though.
Jn
 
My concern all along has been how not to violate Nyquist. Having the signal frequency too close to Nyquist produces beat patterns, which are sine-envelope modulation. In my 22 kHz example, the rise from zero to peak takes 220 samples, so for a filter to flatten the envelope requires at least double that depth, maybe more. That process is great for pure sines. However, if that depth is used, what would it do to a signal with a fast envelope rise time? Cymbals, I suspect, are a great example.
Can you analyse these samples? Poll: Cymbals of different sampling rates listening test

Please don't post any results until UT has revealed the differences
 
Can you analyse these samples? Poll: Cymbals of different sampling rates listening test

Please don't post any results until UT has revealed the differences

I am sorry, I cannot. What I have access to is quite different and entirely unsuitable for the task. The system I recently worked on has been packaged and sent back to the Far East; it was theirs. It was reassuring to see FFTs with one-pixel-wide spikes protruding out of an almost nonexistent noise floor; that's what you get when you sample for two minutes at high rates. This seismology stuff is really cool.

However, I hope my analysis has informed somewhat, as I had not seen even a "semi-rigorous" analysis* to explain the trade-off. I would really hate to see a huge effort spent on reconstruction-oversampling comparisons if the original data stream were 44.1k material.

*Several have pointed out that there is a settling time for limited-length samples due to the transition HF content, but I never saw any attempt to quantify the effect. Math is indeed our friend. Luckily, my temporally based analysis was simple math, so I didn't have to take my shoes off to do it..😱

Jn
 
Summary: there is a flatness/transient trade off when you get too close to nyquist at the A/D side. It is independent of the oversampling rate at the D/A. If you want better transient response at the upper end, you have to get more data from the analog stream. I have provided rudimentary math showing the level required (roughly).

It is possible this math relationship has already been established, as I said, I have two historical data points.

The time issues are no different for filters than for A/D and D/A. A filter with −120 dB attenuation and a 10 Hz transition band would be very long and would be processing a large time window. Sine waves 10 Hz apart do the same thing no matter what the sample rate is, and separating them takes a long filter.
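For a sense of scale: Kaiser's empirical tap-count estimate, applied to the −120 dB / 10 Hz transition band mentioned above (a back-of-envelope sketch, not a finished filter design), shows just how long such a filter gets:

```python
import math

def kaiser_fir_length(atten_db, transition_hz, fs_hz):
    """Kaiser's empirical estimate of the FIR tap count needed
    for a given stopband attenuation and transition bandwidth."""
    delta_omega = 2 * math.pi * transition_hz / fs_hz  # rad/sample
    return math.ceil((atten_db - 7.95) / (2.285 * delta_omega))

taps = kaiser_fir_length(120, 10, 44100)
print(taps)           # roughly 34,000 taps
print(taps / 44100)   # a time window of roughly 0.8 s
```

Tens of thousands of taps and a nearly one-second window, regardless of sample rate in absolute-frequency terms, which is the point being made.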

As for these transients, I have looked at the waveforms of Tibetan bells, chimes, and some other similar things, and they are typically less demanding than most think. The claim of audible dulling from allowing some extra room to get, say, −100 dB at 22050 Hz has not been demonstrated.
 
The time issues are no different for filters as for A/D and D/A.
Agreed.
That said, sampling a 20 kHz signal at 44.1 kHz loses transient information, a consequence of Nyquist.

The real question of audibility I leave to others.

I truly get tired of seeing large-window FFTs being used to prove the fidelity of CD while ignoring the transients. That is why I introduced the concept of harmonic-subtraction analysis.

I personally do not try to listen for any transient issues, so am certainly happy with the format.

The claim of audible dulling from allowing some extra room to get, say, −100 dB at 22050 Hz has not been demonstrated.
I have not discussed headroom. I was pointing out a violation of Nyquist.

Jn

TNT....bingo.
 
The issue not being mentioned here is that what we listen to are real recordings, not data taken with measurement microphones. I could make a list of the mics recording engineers love: many 1" condensers, some ribbons. You will not find many with bandwidths approaching 20 kHz. For instance, Royer's best ribbon: frequency response 30 Hz – 15,000 Hz, ±3 dB.

Another matter: the market is full of Nyquist-violating DAC filters, and many prefer them.
 
The issue not being mentioned here is that what we listen to are real recordings, not data taken with measurement microphones. I could make a list of the mics recording engineers love: many 1" condensers, some ribbons. You will not find many with bandwidths approaching 20 kHz. For instance, Royer's best ribbon: frequency response 30 Hz – 15,000 Hz, ±3 dB.

Another matter: the market is full of Nyquist-violating DAC filters, and many prefer them.

While I agree on both counts, they are outside the wheelhouse of my analysis.
My only concern is that the initial sampling rate be sufficient to capture transient information. Excessive filtering to produce a wonderful FFT on a 44.1 original stream works against transient capability.
Note my analysis is not limited to 20 kHz; it covers any frequency. Clearly 1 kHz suffers far, far less, but the math is there.

So, mic quality is an entirely different (yes, important) issue.

To Richard's initial premise that the sampling rate should be high enough that all waveforms are captured instantly: while I will do that for my high-speed stuff, the counter-argument that all the information is within 44.1 does indeed require Nyquist violation for transient content.
Also remember, as others posted, group-delay effects are millisecond-level events; localization is at the 10 µs level, band-specific of course. Seeing group-delay levels clearly covered by 44.1, as I presume PMA's previous graph showed, is not the same as the temporal fidelity of the transients by which humans localize.

Jn
 
The issue not being mentioned here is that what we listen to are real recordings, not data taken with measurement microphones. I could make a list of the mics recording engineers love: many 1" condensers, some ribbons. You will not find many with bandwidths approaching 20 kHz. For instance, Royer's best ribbon: frequency response 30 Hz – 15,000 Hz, ±3 dB.

Another matter: the market is full of Nyquist-violating DAC filters, and many prefer them.

I'm not sure this kind of argument counts, as obviously there are a lot of ways to do something that will not bring out the best possible result. And of course it is a matter of what people consider important; better midrange vs. extended bandwidth, for example.

What helps, IMO, is the known spectral distribution of typical music genres, which shows in general that the magnitude of signals above 10 kHz is ~40 dB lower than in the bass region.
Btw, comparison of data over the decades shows that the magnitude above 10 kHz is getting higher.

But does all that help if someone thinks cymbals don't sound right when recorded to the "Redbook" standard, but more like the real event when the sampling frequency is higher?
Or if people feel better when the recorded and reproduced bandwidth extends above 20 kHz?
 
Sampling a 20 kHz signal at 44.1 kHz loses transient information, a consequence of Nyquist.
The real question of audibility I leave to others.
Any complex signal is composed of many single-frequency components, and so is a transient.
When properly brickwall filtered, in case the end result should be 44.1/16, only those frequencies above 20 kHz will be removed; there is no chance that Nyquist will be violated or that information below 20 kHz gets lost because of Nyquist.
Since the removed frequencies are all above the capabilities of your hearing system, you won't notice the difference, not even for a cymbal.

As already mentioned, master recordings nowadays are made at much higher sampling frequencies, needing only a simple analogue filter.
After digitizing, a digital brickwall filter removes all content from slightly above 20 kHz up to Fs/2.
After this step the digital data can be decimated to 44.1/16, after applying dithering and noise shaping, and all information below 20 kHz will still be intact.
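That capture chain can be sketched in miniature. This is a toy model under stated assumptions (a 4× rate of 176.4 kHz, a 21 kHz cutoff, and a 401-tap Blackman-windowed-sinc lowpass are all illustrative choices; dither and noise shaping are omitted), not anyone's production pipeline:

```python
import math

FS_HI = 176400   # 4x oversampled capture rate (Hz)
FS_LO = 44100    # target "Redbook" rate (Hz)
CUTOFF = 21000   # lowpass cutoff, just above the audio band (Hz)
NTAPS = 401      # odd tap count -> linear phase

# Blackman-windowed sinc lowpass, normalized to unity DC gain.
M = (NTAPS - 1) // 2
def sinc(t):
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)
h = [sinc(2 * CUTOFF / FS_HI * (i - M)) *
     (0.42 - 0.5 * math.cos(2 * math.pi * i / (NTAPS - 1))
           + 0.08 * math.cos(4 * math.pi * i / (NTAPS - 1)))
     for i in range(NTAPS)]
s = sum(h)
h = [c / s for c in h]

def brickwall_then_decimate(x):
    """Filter at the high rate, then keep every 4th sample."""
    y = [sum(h[k] * x[n - k] for k in range(NTAPS))
         for n in range(NTAPS, len(x))]
    return y[::FS_HI // FS_LO]
```

With this sketch, a 1 kHz tone passes essentially untouched, while a 30 kHz tone (which would alias at the low rate) is knocked down by the window's ~74 dB stopband before decimation.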

Apart from all this, harmonic energy near 20 kHz is very low compared to the midrange, and adding in the A-weighting curve of your hearing system, it's rather an academic discussion.

Hans
My analysis shows that transient content is lost by nyquist violation. But I've been discussing merely the envelope.

For a 20 kHz sine, the effect extends about ten cycles per beat. Ten cycles is what, 500 µs? Increase the beat frequency by a factor of ten and the expectation would be a 50 µs effect; by a factor of 40, 12.5 µs.

As the beat frequency speeds up when the signal frequency moves further from Nyquist, this defines the sampling rate required to maintain the ITDs humans can discern. It also defines the frequency above which ITD concerns become negligible.
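The scaling being claimed here can be tabulated directly. A minimal sketch, taking "beat period" to mean 1/(Fs/2 − f) for a 20 kHz signal, and the higher sample rates as illustrative examples:

```python
def beat_period_us(f_sig_hz, fs_hz):
    """Period of the envelope beat between a sampled sine at f_sig
    and the Nyquist frequency fs/2, in microseconds."""
    return 1e6 / (fs_hz / 2 - f_sig_hz)

for fs in (44100, 88200, 176400):
    print(fs, round(beat_period_us(20000, fs), 1))
# 44100  487.8   (~ten cycles of 20 kHz, i.e. ~500 us)
# 88200   41.5
# 176400  14.7
```

Each doubling of the sample rate pushes the Nyquist frequency further from the signal, so the beat period collapses quickly toward the ~10 µs ITD region mentioned earlier in the thread.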

Jn
 
...all the information is within 44.1 does indeed require Nyquist violation for transient content...

Of course, even at higher sample rates transients can be badly smeared by ADC and/or DAC clock jitter. Easy enough to demonstrate the effect, as I recently did here in good old Auburn. The point is that there are plenty of imperfections in digital audio one could focus attention on. It's always a question of how good is good enough?
 