John Curl's Blowtorch preamplifier part III

Status
Not open for further replies.
I wasn't aware the discussion had moved from sampling theory to interchannel localization requirements.

I have no dog in your ITD fight - I haven't investigated this in detail myself. That said, it seems totally acceptable to me to assume Redbook is sufficient. My system has no problem producing a stable and reproducible soundstage with 44.1 or 48 kHz files, including binaural recordings. The mass adoption and market penetration of 44.1/48 over the past 40 years should not be ignored. All proposed successors have, to this day, failed to gain any significant traction. DSD and high-res PCM haven't even consistently proven audibly superior in any well-constructed study I am aware of. If this is a significant problem, shouldn't it be possible to generate files that demonstrate it and compare them with higher-sample-rate versions?

In terms of understanding, I agree, and that goes both ways. I normally find your posts enlightening and your logic sound... but we did just have a two-page discussion because you introduced an example of why Nyquist is inadequate which actually violated it (zero crossings @ Fs/2).

I did not say Nyquist was inadequate.
I said that at exactly 2x, i.e. sampling at exactly twice the sine frequency, the sampled stream cannot be used to reconstruct the exact signal amplitude and phase, as there are an infinite number of sines at that frequency that can produce the exact same sampled set. The simplest example is sampling every zero crossing.
I also said that once you begin to sample incrementally faster than that, the window needed for good accuracy is large. The faster you sample, the smaller the window that is needed. In the limit, at high sample rates, no window is needed at all.
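The ambiguity at exactly 2x is easy to demonstrate numerically. Here is a quick NumPy sketch (the 1 kHz tone, phases, and sample count are arbitrary illustrative choices, not anything from the thread):

```python
import numpy as np

f = 1000.0             # sine frequency, Hz (arbitrary choice)
fs = 2 * f             # sampling at exactly twice the sine frequency
t = np.arange(8) / fs  # eight consecutive sample instants

# Three sines at the same frequency but different phases: each sampled
# stream is just a constant +/- value, and at phase 0 (sampling every
# zero crossing) the stream is all zeros.
for phase in (0.0, np.pi / 6, np.pi / 2):
    x = np.sin(2 * np.pi * f * t + phase)
    print(f"phase={phase:.3f}  samples={np.round(x, 3)}")
```

The samples work out to (-1)^n * A*sin(phase), so any amplitude/phase pair with the same A*sin(phase) product produces the identical stream - e.g. amplitude 2 at pi/6 matches amplitude 1 at pi/2 - which is why neither amplitude nor phase can be recovered at exactly 2x.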

As I said, some who are here merely to put others down will take sentences out of context and attempt to strawman them.

Jn
 
Do you write and simulate it it in Matlab + Simulink and then generate the C, or do you write your own implementation?.

The Delta Tau has blocks of code to do the PID, written in some Motorola code. But they let the user point to and access working registers. In this way, I could have the second PID structure use the output of the first as an input.

It also allows me to use multiple encoders for one PID loop. A great example is a rotary encoder on the motor and a linear encoder on the load. The rotary can be used for high-speed motion with low accuracy; the linear can be used for end-of-motion integration. In the example I just worked, the rotary was 70 microns out by end of travel due to mechanical forces compressing the 2-inch-diameter leadscrew. The user requires 1 micron accuracy at end of travel, and the second loop integrates position down to 5 nanometers accuracy, with no need for the user to adjust.
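The structure can be sketched in a few lines of plain Python. All gains and names here are invented for illustration; the Delta Tau's actual register-level implementation is nothing like this:

```python
class PI:
    """Minimal PI term; gains are invented for illustration."""
    def __init__(self, kp, ki):
        self.kp, self.ki, self.acc = kp, ki, 0.0

    def step(self, err, dt):
        self.acc += err * dt
        return self.kp * err + self.ki * self.acc

# Outer loop closes on the linear encoder at the load: integral-only,
# so it slowly trims out leadscrew compression at end of travel.
outer = PI(kp=0.0, ki=5.0)
# Inner loop closes on the rotary encoder at the motor for fast motion.
inner = PI(kp=50.0, ki=200.0)

def control(target, linear_pos, rotary_pos, dt):
    trim = outer.step(target - linear_pos, dt)         # load-side correction
    return inner.step(target + trim - rotary_pos, dt)  # motor command
```

The point of the cascade is that the fast rotary loop does all the hard motion work, while the slow load-side integrator quietly absorbs whatever the mechanics lose between motor and load.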

Many of the techniques I design are outside the scope of the controls engineers' training... but certainly not beyond their abilities. It just takes time, but they are very smart and learn fast.

Jn
 
I did not say Nyquist was inadequate.
I said that at exactly 2x, i.e. sampling at exactly twice the sine frequency, the sampled stream cannot be used to reconstruct the exact signal amplitude and phase, as there are an infinite number of sines at that frequency that can produce the exact same sampled set. The simplest example is sampling every zero crossing.
I also said that once you begin to sample incrementally faster than that, the window needed for good accuracy is large. The faster you sample, the smaller the window that is needed. In the limit, at high sample rates, no window is needed at all.

As I said, some who are here merely to put others down will take sentences out of context and attempt to strawman them.

Jn

And? Exactly 2x violates Nyquist. There is no reason to bring it up. I understand your point, though.
 
Jn, I think you are mixing up real facts into an untrue conclusion...that Redbook is inadequate to support interchannel localization requirements.

It is true that recognizable ITD cues are shorter than the time between samples in Redbook.

However, it is also true that localization through ITD is limited to stimuli below 3.5 kHz. It is also true that the ear codes for zero crossings of tones, and that these zero crossings are all the brain has to determine localization.

Now, back to Redbook. It can reproduce a 3.5 kHz tone with uncanny precision (ppm error). The zero crossings of the reproduced tone are totally unrelated to the sampling frequency. The zero crossings of the reproduced tone are related to, and fully determined by, only the zero crossings of the tone before digitizing. There is no 'graininess' in this, besides the possible graininess of time itself.

Since it is the temporal precision of the reproduced zero crossings that determine the potential quality of ITD, and since this temporal precision is perfect under Redbook, there is no issue here.
My goodness, I thought I explained it quite well.

It is not the zero crossings that are the focus. I used zero crossings in an example to show how 2x sampling can produce a flat, zero-output signal if the sampling occurs at the zero crossings. I also said that merely shifting the relative phase of the 2x sampling against the sine can generate a sample stream that can be anywhere from zero to full scale.

When discussing what I stated, please read my posts and understand what I said, not some poster who has been reducing this forum to a pig pile.
As to Redbook being adequate for ITD accuracy with actual music, you assume it is adequate based on what, distortion? Or have you seen any measurements that actually show ITD is preserved given 6 or 7, even 15, sources in the mix?

As I said, I was quite happy with vinyl, quite happy with CD.
Jn
 
And? Exactly 2x violates Nyquist. There is no reason to bring it up. I understand your point, though.
You missed my point, as there was a very specific reason to bring it up.

I pointed out exactly why 2x violates Nyquist in terms of actual samples.
I then pointed out that there is a relationship between the sample rate, as you rise above 2x, and the window width required for accuracy.

Scott provided a great example, a 512-sample FFT on a pure 1 kHz sine pair, to show how good the technique is, but that level of calculation is not what Redbook does.

My point being: how wide a window does Redbook have with respect to the accuracy needed for ITD control?
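One way to put a number on the window question: reconstruct a tone deliberately close to Nyquist from truncated sinc sums of growing half-width and watch the error fall. A NumPy sketch with invented parameters (20 kHz at 44.1 kHz; this probes the reconstruction window itself, not any specific player's filter):

```python
import numpy as np

fs, f = 44100.0, 20000.0   # tone deliberately close to Nyquist

def sinc_interp(t, half_width):
    """Truncated Whittaker-Shannon reconstruction using +/- half_width samples."""
    c = int(round(t * fs))
    n = np.arange(c - half_width, c + half_width + 1)
    return np.sum(np.sin(2 * np.pi * f * n / fs) * np.sinc(fs * t - n))

def max_err(half_width):
    """Worst reconstruction error at a few between-sample instants."""
    offs = (1000.5, 1250.25, 1777.75)
    return max(abs(sinc_interp(o / fs, half_width) - np.sin(2 * np.pi * f * o / fs))
               for o in offs)

for hw in (8, 64, 512):
    print(hw, max_err(hw))   # error shrinks as the window widens
```

The closer the tone sits to Fs/2, the more samples the reconstruction needs for a given accuracy; well below Nyquist the same accuracy comes from a much narrower window, which is the sample-rate/window trade described above.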

So far, the response has been, "Jn you are an idiot"....pig pile!!
 
Hmm... I may be misreading this, but I think you are misreading vacuphile? I didn't read his post as referring to your 2x non-Nyquist example:
"It is also true that the ear codes for zero crossings of tones, and that these zero crossings are all the brain has to determine localization."
 
And renders ITD accurately enough?

I do not know. My listening is always about background music while I do something else. Dinner with friends, martinis in the backyard, while I ride my bike, make chips in the basement..

I do not sit in a sweet spot looking for proper image placement, that is not me. I have heard it on good systems, but have no desire to setup that kind of system in my house.

But I understand the concepts quite well.

Jn

P.S. Do not get me wrong... I do not drink martinis while riding my bike, nor while making chips... I enjoy being able to count to ten.
 
Hmm... I may be misreading this, but I think you are misreading vacuphile? I didn't read his post as referring to your 2x non-Nyquist example:
"It is also true that the ear codes for zero crossings of tones, and that these zero crossings are all the brain has to determine localization."

I am not misreading him. However, with an iPad I cannot easily direct specific responses to specific statements. Vac and I see absolutely eye to eye.

The "ear codes for zero crossings" however, may not be accurate. Over the years the researchers have been changing the model of our ear response, I've seen zero crossing, rectification, SAW, envelope modulation,
all manner of models.
Zero crossing is troublesome in that we localize hf signals despite them riding on larger LF signals.. Signals which have few if any zero crossings at hf.
The implication of zero crossing as the sole determinant means that the cochlea is doing a low pass along it's length**, something I did not find during my research. There may be research on that, I have not looked in over 15 years, but can certainly see it as a possibility.

Jn

**or perhaps the brain is doing the low pass..

All, please note: I stated that a specific theory is troublesome, explained why, and then provided a possible reason why my assertion may be incorrect. The person who holds a theory is often the one most capable of questioning it. That is how it should be. That is how I roll.
 
And? Exactly 2x violates Nyquist. There is no reason to bring it up. I understand your point, though.

You'll find in many textbooks and introductory course materials a so-called Nyquist formula:

Fs >= 2 x Fmax

as a general expression without any further preconditions that must be fulfilled.

Even Shannon's description in his 1949 paper was ambiguous on that point, but I'm quite confident that he trusted in the reader's ability to figure it out.

I remember attending a practice lecture at my university back in the 80s where exactly this formula was introduced and used. When I objected with the zero-crossing example, it wasn't accepted at first because "the energy of the signal can't disappear" (which is obviously correct, but doesn't matter).

I have to reread Nyquist's publication because I can't remember which exact expression he gave, but the communication guys might have been more interested in channel bandwidths starting above DC, due to the technical realisations.
 
JN,

What makes you think people misquote you? Remember "Never ascribe to malice...."

Now about your comments on mushroom soup....

(Humor is the proper way to deal with the very serious folks around here mis-applying things.)

Perhaps we could talk about sampling limits, modeling samples as a series of unit impulses, and even sampling techniques.

For example, some sampling methods record the peak signal level during the sample time, others the average. (Hint: not a good thing.)
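The peak-versus-average point is easy to show. A NumPy sketch (invented 950 Hz tone, 64x oversampled to stand in for the analog waveform inside each sample interval):

```python
import numpy as np

fs, f, oversamp = 8000, 950, 64
t_fine = np.arange(fs * oversamp) / (fs * oversamp)     # one second, fine grid
frames = np.sin(2 * np.pi * f * t_fine).reshape(-1, oversamp)

point = frames[:, 0]         # ideal instantaneous sampling
peak = frames.max(axis=1)    # "record the peak level in the sample time"
avg = frames.mean(axis=1)    # "record the average over the sample time"

print(point.mean(), peak.mean())   # peak-hold adds a large rectification offset
print(point.max(), avg.max())      # averaging attenuates the tone (sinc rolloff)
```

Peak-hold is nonlinear, so it doesn't just shift or scale the tone - it adds a DC offset and harmonic distortion; averaging is linear but rolls the amplitude off toward Nyquist. That is the "not a good thing."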
 
The next logical step would be to make sure the channels are identical in every way. Wouldn't this eliminate any time differences?

We do make sure they are identical. The real question is: do both channels preserve all ITD relationships despite differing signal mixes, especially with the sampling rate so close to Nyquist?

To your question: I don't know. I do know it is not good engineering to assume that it is, without verification by test.
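Such a verification test is not hard to sketch. A hypothetical NumPy version (invented 10 µs ITD, twenty random tones below 3.5 kHz, cross-correlation with a parabolic peak fit; this only probes the sampling math, not converters or filters):

```python
import numpy as np

fs, itd = 44100.0, 10e-6    # 10 us interchannel delay, under half a sample
rng = np.random.default_rng(0)
freqs = rng.uniform(200.0, 3400.0, 20)    # tones in the ITD-relevant band
phases = rng.uniform(0.0, 2 * np.pi, 20)

def sig(t):
    return sum(np.sin(2 * np.pi * fq * t + ph) for fq, ph in zip(freqs, phases))

n = np.arange(-3000, 3000)
left, right = sig(n / fs), sig(n / fs - itd)   # right lags left by itd

# Cross-correlate, then refine the integer-lag peak with a parabolic fit
xc = np.correlate(right, left, mode="full")
k = int(np.argmax(xc))
delta = 0.5 * (xc[k - 1] - xc[k + 1]) / (xc[k - 1] - 2 * xc[k] + xc[k + 1])
est = ((k - (len(left) - 1)) + delta) / fs
print(est)   # recovered delay, despite the 22.7 us sample period
```

The recovered delay comes back within a couple of microseconds of the 10 µs that was injected, which is at least the kind of measurement one would want before declaring the question settled either way.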

I believe ITD was not considered in the development of Redbook.

Jn
 
No, anti-aliasing applies to the A/D process. Clock CD data to your DAC at 44.1 kHz and send the staircase signal right to your PA, and there are lots of images at all the odd harmonics of the sampling frequency.

In fact, there are images located at every multiple k of the sampling frequency (odd and even), each spanning from (k·Fs - Fmax) to (k·Fs + Fmax).

The zero-order hold (the staircase) overlays an amplitude weighting function that, without correction, attenuates the highest in-band frequencies and, up to a certain degree, the mirror images.

(I know that you know that, but......)
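For anyone following along, here is a NumPy sketch of both effects (a hypothetical 3 kHz tone, with an 8x-oversampled staircase standing in for the analog ZOH output):

```python
import numpy as np

fs, f, rep = 44100, 3000, 8
x = np.sin(2 * np.pi * f * np.arange(fs) / fs)   # one second at Redbook rate
zoh = np.repeat(x, rep)                          # staircase: hold each sample

spec = 2 * np.abs(np.fft.rfft(zoh)) / len(zoh)   # 1 Hz bins: spec[g] = amplitude at g Hz
for g in (f, fs - f, fs + f, 2 * fs - f):
    print(g, round(spec[g], 4))                  # baseband tone and its images
```

The spectrum shows the tone near full amplitude at 3 kHz and images at Fs ± f, 2·Fs ± f, and so on, each progressively attenuated by the hold's sinc-shaped weighting (which also slightly rolls off the baseband tone itself).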
 