John Curl's Blowtorch preamplifier part III

😀 Of course, I have not.


"In signal processing, group delay is the time delay of the amplitude envelopes of the various sinusoidal components of a signal through a device under test, and is a function of frequency for each component. Phase delay, in contrast, is the time delay of the phase as opposed to the time delay of the amplitude envelope."

Is everyone listening to a CD player which produces flat group delay from 20 Hz to 20 kHz?

[Does your whole system have flat GD?]

What does non-flat GD sound like compared to flat GD?

Relaxing the LP filter requirements via oversampling might help, but it often exchanges one problem for another HF problem.

What effect would a standard bandwidth with a roll-off at about 40 kHz have in making GD and HF noise easier to deal with?



THx-RNMarsh

Two things.

The first is that if you had read the whole Wiki article you quoted from, you would have found a short piece on 'group delay' and 'audio'. From it, you could have concluded that group delay cannot be an issue at the high end of the frequency scale. See below.

The second is that talking about group delay in the context of audio obfuscates the issue. Just call it phase shift. The reason is that 1 ms of group delay @ 1 kHz is one cycle, or a 360-degree phase shift, while at 10 kHz that same 1 ms of group delay is 10 cycles, or 3,600 degrees of phase shift. If one and the same thing can also be two very different things, that is not conducive to furthering understanding.
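The conversion above is just phase (degrees) = 360 × frequency × group delay. A minimal sketch checking only that arithmetic (it says nothing about audibility; the function name is my own):

```python
def phase_shift_deg(group_delay_s: float, freq_hz: float) -> float:
    """Phase shift (in degrees) that a constant group delay produces at a given frequency."""
    return 360.0 * freq_hz * group_delay_s

# 1 ms of group delay, as in the example above:
print(phase_shift_deg(1e-3, 1_000))   # one cycle at 1 kHz  -> 360 degrees
print(phase_shift_deg(1e-3, 10_000))  # ten cycles at 10 kHz -> 3600 degrees
```

The same arithmetic gives the 16-cycles-at-8-kHz figure: 16 / 8000 Hz = 2 ms of group delay, i.e. 5,760 degrees.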

Blauert did not go as high as 20 kHz, but he provided data up to 8 kHz. At this frequency, group delay only became audible at 16 cycles. That is a phase shift of 5,760 degrees, or, otherwise said, what a 64th-order filter would do. In other words, the phase shift caused by a brick-wall filter at 8 kHz might just be audible, but only under lab conditions with earphones. At 20 kHz, no way.
 
Same question as to scottjoplin: do you use a Behringer DEQX or 2496 or other similar device, and are you reporting a personal experience, or is it something you read somewhere?
Tip: have you ever noticed how differently the sound is affected by an analog equalizer and a digital one applying the same response-curve modification?
 
Good speakers are always better than headphones for serious critical listening, including for DAC evaluation. I have to say that using cables designed by Jam (not for sale at this time; don't know about the future) helped get the very best out of the system. Some people think special cables are a fraud, period. I used to think that too, and understand the point of view quite well. It's one reason I invite people out to visit, so they can swap cables if they want to see what effect (if any) they think it has. Or not, if they don't want. Up to them 🙂

Really not correct. It is well known from auditory discrimination experiments that headphones allow differences to be heard that cannot be discriminated on loudspeakers.
 
Question: Do you use a Behringer DEQX or 2496 or other similar device, and are you reporting a personal experience, or is it something you read somewhere?
I have a DCX2496 which I use primarily as a crossover at ~200 Hz between U-frame woofers and OB Jordan Eikonas. I don't use any EQ on the Jordans; I use some on the woofers, which are separate and positioned to fight room modes, so I use some delay to time-align them. When I first time-aligned them I thought I heard an improvement.
 
Same question as to scottjoplin: do you use a Behringer DEQX or 2496 or other similar device, and are you reporting a personal experience, or is it something you read somewhere?
Tip: have you ever noticed how differently the sound is affected by an analog equalizer and a digital one applying the same response-curve modification?

In my old workflow developing loudspeakers, I would first design crossovers and compensations in a DSP environment, which would subsequently be transformed into an analog homologue.

They measure and sound the same if done right. So I do have extensive personal experience, on top of reading and discourse with peers.
 
<snip>

Can I assume 8x oversampling is waveform prediction, a least squares data fit?

You were not snarky, period. My apologies.
By zero stuffing, do you mean NRZ with a 1/8th-width hold? If so, that was exactly my question regarding a 1/3rd or 1/4 hold to get the images out there. It was a question begging an answer.

Actually, I was hoping for a wide-window polynomial fit, or least squares; that would have been far more sexy than thin pulses...😉

At 8x, I would almost think an RC roll-off would work...
Jn

The images are a direct result of the sampling process at the beginning.
Even if we used an ideal Dirac pulse to probe at the A/D conversion and a D/A conversion that spat out this series of amplitude-weighted Dirac pulses, the images at/around each multiple of the sampling frequency would nevertheless be there.

The NRZ (ZOH) in fact reduces the high-frequency energy compared to a Dirac pulse train, so we get an overlaid frequency-dependent attenuation with the first null at the original sampling frequency (as Hans Polak already wrote recently).

Stuffing in null samples does nothing with respect to the images (it is just a better approximation of a Dirac pulse train) but provides lower attenuation (as it is no longer an NRZ).
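The ZOH attenuation described above is the familiar sinc roll-off, with its first null at the sampling frequency. A small numpy sketch of that standard textbook result (nothing here is specific to any particular DAC; the 48 kHz rate is just an example):

```python
import numpy as np

def zoh_gain(f_hz: float, fs_hz: float) -> float:
    """Magnitude response of a zero-order hold (NRZ) at sample rate fs.
    numpy's sinc is the normalized sin(pi x)/(pi x), so the first null
    lands exactly at f = fs."""
    return float(np.abs(np.sinc(f_hz / fs_hz)))

fs = 48_000
print(zoh_gain(fs, fs))                     # ~0: first null at the sampling frequency
print(20 * np.log10(zoh_gain(fs / 2, fs)))  # about -3.9 dB droop at Nyquist
```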

The removal of the images is done by the lowpass (brickwall) filter used in the oversampling process.
The so-called bandlimited interpolation is, in theory, a perfect fit (still an interpolation in the mathematical sense, although no error exists), not only at the original sampling points but at all points in between.

In our reality it is not perfect, as we are using a limited set of numbers (right at the beginning, during A/D conversion, and later on when doing the calculations for the filter), so ....

The Wadia guys used so-called French-curve interpolation, which was (AFAIR) a Bézier-spline interpolation that by definition (again AFAIR) allows some residual error but provides a smooth curve.
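The point that zero stuffing leaves the images untouched is easy to demonstrate numerically. A hedged sketch (the 3 kHz tone, 48 kHz rate and 8x factor are arbitrary choices of mine, not from any paper discussed here):

```python
import numpy as np

fs, n, f0, L = 48_000, 512, 3_000, 8          # original rate, length, test tone, 8x factor
x = np.sin(2 * np.pi * f0 * np.arange(n) / fs)

# Zero stuffing: insert L-1 zero samples between originals; the new rate is L*fs.
up = np.zeros(n * L)
up[::L] = x

spec = np.abs(np.fft.rfft(up))
freqs = np.fft.rfftfreq(n * L, d=1 / (L * fs))
peaks = freqs[spec > 0.5 * spec.max()]
print(peaks)  # 3 kHz tone plus images at 48k ± 3k, 96k ± 3k, 144k ± 3k, 192k - 3k
```

All the images come through at full amplitude; it is the brick-wall lowpass in the oversampling filter, not the stuffing itself, that removes the images between the old and new rates.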
 
I guess I could. The flattest high-quality mic I have is a hyper-cardioid condenser. It probably won't sound great on a cymbal, since the sound from one comes from all over the surface, not just one spot. Well... unless maybe I experiment with distance and see what I can do with that. It might cause some HF loss...

Also, the cleanest preamps I have are not beautiful sounding. I suspect the files might be good for data, but not so much for triggering for use in making music.

What sample rate are you interested in?

As this would be for measurement purposes and use as a test signal, the highest SR would make sense. I'm with JN that when measuring, you need as many samples as possible... and you can always downsample or decimate if needed.
But the main thing is that it would be a known source - i.e., you would know exactly how it was done, unlike some of the other data online. Info and reproducibility are the thing!
 
Thanks.

The only reference that deals with the audibility relies on the same endlessly-debunked-as-flawed Oohashi paper, whose results/data nobody was ever able to reproduce and confirm.

<snip>

One funny thing about Oohashi et al.'s publication is that experimenters who tried to reproduce it (and even tried to test additional hypotheses) were often able to find corroboration for Oohashi et al.'s results.

Others, who did not even try to reproduce their experiments, found other results, and these were often used as arguments that allegedly "debunk" Oohashi.

Btw, there was a promise to link to SY's discussion (debunking?) of Oohashi, but it hasn't been fulfilled yet....

Wrt Oohashi, it is nevertheless still in question whether, apart from gamelan-music connoisseurs, an extended frequency range would be beneficial for other music styles/listeners.
 
Jacob2,

Lavry doubled the output rate by essentially halving the NRZ; I assume this is why it is called "stuffing zeros". The process, when extended further, approaches simple impulses, but the effect is to push the resultant images further away from what we want to keep. That was why I asked about 3x and 4x rate "stuffing zeros" in the first paper. Apparently industry alleviated it by going to 8 times.

So instead of trying to address my issue with NRZ added/delayed energy, they simply switched to "stuffing", an obvious extension of what Lavry described.

Jn


PS. For lack of terminology, I use "half NRZ" when I mean halving the hold-level time, then dropping to zero, and "1/4 NRZ" to mean the hold is 1/4 of the interval and the last 3/4 is zero.
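For what it's worth, a "half NRZ" / "1/4 NRZ" pulse in the sense above has a closed-form response: a hold of width T/k (T = 1/Fs) gives (1/k)·|sinc(f/(k·Fs))|, so the first null moves out to k·Fs and the in-band droop shrinks, at the price of 1/k output level. A sketch of that standard rectangular-pulse result (the function name and the 48 kHz rate are my own choices):

```python
import numpy as np

def rz_gain(f_hz: float, fs_hz: float, k: int) -> float:
    """Magnitude response of a hold of width T/k followed by zero for the rest
    of the sample interval ("half NRZ" is k=2, "1/4 NRZ" is k=4, plain NRZ/ZOH is k=1)."""
    return float(np.abs(np.sinc(f_hz / (k * fs_hz))) / k)

fs = 48_000
for k in (1, 2, 4):
    droop_db = 20 * np.log10(rz_gain(fs / 2, fs, k) / rz_gain(0, fs, k))
    print(k, round(droop_db, 2))  # Nyquist droop: about -3.92, -0.91, -0.22 dB
```

Note that the images around multiples of Fs are still present in every case; the shorter hold only changes how much they (and the passband) are attenuated.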
 
One funny thing about Oohashi et al.'s publication is that experimenters who tried to reproduce it (and even tried to test additional hypotheses) were often able to find corroboration for Oohashi et al.'s results.

Others, who did not even try to reproduce their experiments, found other results, and these were often used as arguments that allegedly "debunk" Oohashi.

Btw, there was a promise to link to SY's discussion (debunking?) of Oohashi, but it hasn't been fulfilled yet....

Wrt Oohashi, it is nevertheless still in question whether, apart from gamelan-music connoisseurs, an extended frequency range would be beneficial for other music styles/listeners.
Oohashi's measurements are accurate; it is trivial to prove that the human brain reacts to U/S energy.
I personally have heard it in the workplace; somebody behind me was using an ultrasonic insulation-blanket welder.

The mechanism is the muscles of the inner ear, which react to the sound energy and constrict: our hearing's auto-ranging mechanism.

The heard response is not "I can hear ultrasonic sounds", but a lowering of background noise due to the inner ear muscle reaction.

Whether we notice it or not depends on the ratio of background content to the U/S energy. For recorded content, I would surmise the energy is insufficient for that mechanism to be noticed.

Jn
 
Every thesis brought forward has no general value if others can't repeat it.
And since mathematics doesn't seem to convince everybody in this very case, we should listen to a hi-res file that's recorded at the same hi-res frequency.
The best person to provide this file would be the one who hears most differences.

Richard, you seem to be the ideal candidate for making this file available with content of your own choice.
If not, we may continue the rest of the year without making any progress. 😀😀

Hans
 
Really not correct. It is well known from auditory discrimination experiments that headphones allow differences to be heard that cannot be discriminated on loudspeakers.

I thought the same but did not comment. I was really surprised by Markw4’s assertion.

I use headphones to assess my amps during development. No way any speaker can compete, and if you hunt around on the web there's a lot of anecdotal evidence in support of this.
 
Jacob2,

Lavry doubled the output rate by essentially halving the NRZ; I assume this is why it is called "stuffing zeros".

Lavry only showed the result _after_ the oversampling process as the "new" (it is new, but obviously contains no new information) output signal of the D/A converter prior to any further analog filter. But he still shows an NRZ signal with the step size halved in time.

The process, when extended further, approaches simple impulses, but the effect is to push the resultant images further away from what we want to keep. That was why I asked about 3x and 4x rate "stuffing zeros" in the first paper. Apparently industry alleviated it by going to 8 times.

But no, the "stuffing zeros" process does nothing wrt the images. As said before, even if we did the sampling with ideal Dirac pulses and output a series of amplitude-weighted Dirac pulses at the D/A, we would still have images around multiples of the sampling frequency.
These images are a direct result of the sampling process and are there regardless of NRZ or non-NRZ.

If you look at shortening the pulse width as an approximation, and at infinitesimally short pulses as a transition to the ideal case of Dirac pulses, it becomes obvious that the images are still there and were there right at the beginning.
Lavry started with a sampling frequency of 48 kHz, so the images are located around 48 kHz, the next at 96 kHz, the next at 144 kHz, and so on (N x Fs; N = 1, 2, 3, ....)

Oversampling filters out only some of the images (for example, those from the old sampling frequency up to the new sampling frequency); images remain at multiples of the new sampling frequency.

So instead of trying to address my issue with NRZ added/delayed energy, they simply switched to "stuffing", an obvious extension of what Lavry described.

Jn


PS. For lack of terminology, I use "half NRZ" when I mean halving the hold-level time, then dropping to zero, and "1/4 NRZ" to mean the hold is 1/4 of the interval and the last 3/4 is zero.

Presumably it is one only for nerds, but an NRZ (non-return-to-zero) signal dropping to zero is funny. 🙂

While the whole process is fascinating (and elegant) in theory, in the real world the devil is in the details, which is simply where the fun starts......
 
Really not correct...

Guilty. I overstated the role of speakers a bit.

Headphones can work like a microscope sometimes. Problems discovered with equipment using headphones should be fixed as much as possible.

The judgement of overall sound quality is probably more important on really good speakers though.

I don't really know how much listening research has been done using SOA reproduction equipment, even at the time the research was done. It could be prohibitively expensive to do so. Experimenters may judge that this or that piece of equipment should do well enough based on measurements alone, without expert-listener judgement as to the equipment's reproduction accuracy. Qualifying a panel of expert listeners would add cost and complexity, unfortunately. I don't know how much of the research conducted on possibly lacking equipment can be trusted in relation to some of the things people tend to argue about here.
 