John Curl's Blowtorch preamplifier part III

Status
Not open for further replies.
ALL reactive circuits - and that includes every amplifier under the sun - have some form of
phase shift between input and output (basic law of physics), and delay, which is an entirely
different thing. I think the 'delay' around an amp is about 15 nanoseconds or something.

If a first order, low pass amplifier has significantly greater bandwidth than the signal,
then the amplifier's effective time delay is approximately 1/(2*pi*BW), where the
bandwidth BW is in Hz. This was at one time termed "integrator delay".

This is because the amplifier's phase shift is -arctan (f/BW) radians, and is approximately linear
for f much less than the BW, where -arctan (f/BW) ~ -f/BW. A linear phase of the form
phase(f) = -2*pi*f*T is equivalent to a pure time delay T, so here T = -(1/(2*pi)) * d(phase)/df = 1/(2*pi*BW).

This is distinct from propagation delay, and is unrelated to it.
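For anyone who wants to check the numbers, here is a quick sketch in Python (the 1 MHz bandwidth is just an assumed example; the 2*pi appears because BW is in Hz while the arctan phase is in radians):

```python
import cmath, math

BW = 1_000_000.0  # assumed first-order -3 dB bandwidth in Hz (illustrative)

def phase(f):
    # Phase of a one-pole low-pass H(f) = 1 / (1 + j*f/BW), in radians
    return cmath.phase(1 / (1 + 1j * f / BW))

# Low-frequency group delay: tau = -(1/(2*pi)) * d(phase)/df, by central difference
f, df = 10.0, 1.0
tau_est = -(phase(f + df) - phase(f - df)) / (2 * df) / (2 * math.pi)

tau_formula = 1 / (2 * math.pi * BW)  # about 159 ns for BW = 1 MHz

print(tau_est, tau_formula)  # the two agree closely
```

The finite-difference estimate and the closed-form value match to many digits, which is just the statement that the phase is effectively linear well below BW.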
 
OK, I would never have thought a top SOTA playback system would not buffer this out.

It probably takes a long delay, on the order of a second or more, to do that. That can get costly, and the delay may not be compatible with some applications like video playback, video editing, DAW editing, etc. At some price point for a really good dac there could be a high quality, long delay mode and a lower sound quality, short delay mode. IIRC, the Chord Dave has that, but it also costs maybe 6 times as much as the DAC-3. Stereophile is probably correct in classifying the DAC-3's price as a bargain.

Also, IMHO, what constitutes a 'top SOTA dac' changes maybe every decade or so. DAC-1 was considered that at one time, but not anymore. The era of DAC-3 may be about to change with the arrival of AKM's new chip and with more dac designs making better use of DSP. I am watching with interest to see what happens. :)
 
Mind you, in any rational system there is no need for real time sample rate conversion.

That rules out SPDIF, TOSLINK, and AES. Only leaves USB, and RPi music players. What if someone wants to bi-amp a surround system? Are they going to have a USB interface to every dac in the system and have the whole thing stay in sync? How about upsample the whole music library to DSD512, and pre-electronic crossover and apply room correction in advance into files ready to play for each of the dacs?
 
Member
Joined 2011
Paid Member
General purpose microprocessors that run desktop operating systems have had full 64 bit internal registers, internal datapaths, and external datapaths since the early 1990s. In the years since, lithographic feature size has shrunk from 800 nanometers to 10 nanometers. FPGAs with millions of gate equivalents and hundreds of I/Os have become commonplace ... clocked at more than a gigahertz.

Anybody who wants a DSP with a wide wide datapath to ensure big big overload margin, merely has to sit down and type it up in Verilog. Whoosh, it goes straight to FPGA where it runs at hardware (not software) speeds, all day long. Want 120 bit datapaths? You got it.
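As a toy illustration of why the wide datapath buys overload margin (Python standing in for the Verilog, with purely illustrative widths): summing 1024 full-scale 24-bit samples wraps a 32-bit accumulator, while a 120-bit accumulator keeps the exact value.

```python
# Toy model of fixed-point accumulator headroom (illustrative widths only).
# Python ints are arbitrary precision, so wrap-around is emulated by hand.

def wrap(value, bits):
    """Wrap a two's-complement value into a signed `bits`-wide register."""
    mask = (1 << bits) - 1
    value &= mask
    if value >= 1 << (bits - 1):
        value -= 1 << bits
    return value

FULL_SCALE_24 = (1 << 23) - 1  # largest positive 24-bit sample

# Sum 1024 full-scale samples, as a long FIR with unity taps might.
acc32 = acc120 = 0
for _ in range(1024):
    acc32 = wrap(acc32 + FULL_SCALE_24, 32)
    acc120 = wrap(acc120 + FULL_SCALE_24, 120)

exact = 1024 * FULL_SCALE_24
print(acc32 == exact)    # False: the 32-bit register has wrapped
print(acc120 == exact)   # True: 120 bits leaves enormous margin
```

In real hardware the wrap is free (the adder just truncates); the point is that a 120-bit accumulator cannot overflow for any plausible filter length at audio word widths.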
 
That rules out SPDIF, TOSLINK, and AES. Only leaves USB, and RPi music players. What if someone wants to bi-amp a surround system? Are they going to have a USB interface to every dac in the system and have the whole thing stay in sync? How about upsample the whole music library to DSD512, and pre-electronic crossover and apply room correction in advance into files ready to play for each of the dacs?

You don't need sample rate conversion to use SPDIF or other interfaces that carry the master clock. You can just use the recovered master clock - the new receivers like DIX4192, WM880x, etc. have relatively low amounts of jitter - or you can use your own PLL or FIFO + VCO.
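The FIFO + VCO option can be pictured as a small control loop: the FIFO fill error trims the local clock until it tracks the incoming rate. A minimal discrete-time sketch in Python (rates, gains, and fill target are all invented for illustration, not taken from any real design):

```python
# Minimal sketch of FIFO + VCO clock recovery as a control loop.
# Samples arrive at the source rate; a local VCO drains the FIFO; the
# fill error and its slope steer the VCO until the two rates converge.

source_rate = 48_000.0 * 1.0001   # incoming rate, 100 ppm fast (assumed)
vco_rate = 48_000.0               # local VCO starts at nominal
target_fill = 512.0               # desired FIFO occupancy, in samples
fill = target_fill

dt = 0.001                        # control-loop interval, seconds
kp, kd = 1.0, 2.0                 # proportional / damping gains (critically damped)

for _ in range(20_000):           # simulate 20 seconds
    rate_error = source_rate - vco_rate
    fill += rate_error * dt               # FIFO fills at the rate difference
    vco_rate += (kp * (fill - target_fill) + kd * rate_error) * dt

print(abs(vco_rate - source_rate))   # tiny: the rates have locked
print(abs(fill - target_fill))       # tiny: FIFO back near its target fill
```

The damping term matters: with fill error alone the loop is an undamped oscillator, which is the same reason real VCXO loops need a proper second-order loop filter.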

I will admit that an ASRC is attractive for SPDIF inputs.

AES67 sounds like a better solution to a multichannel issue.


Upsampling a library to DSD512 is not a good thing either, but since you like it, continue I guess.
 
Member
Joined 2016
Paid Member
That rules out SPDIF, TOSLINK, and AES. Only leaves USB, and RPi music players. What if someone wants to bi-amp a surround system? Are they going to have a USB interface to every dac in the system and have the whole thing stay in sync? How about upsample the whole music library to DSD512, and pre-electronic crossover and apply room correction in advance into files ready to play for each of the dacs?

Well, you can complicate things needlessly if you want.
I'm only interested in playing my ripped music. It's all at the same SR; the system is biamped, dsp controlled, optically linked, no problems.

Still wonder why you have no measurements of problems with reverb?
 
Disabled Account
Joined 2012
I don’t need to ask, since I already know the answer and gave it to you. I was just hoping you’d do a quick search so you could figure out why rather than parroting their narrative which isn’t based in fact.

DSD is essentially a dead or nearly dead format that offers no real advantage over 24/192 PCM. It looks even more pointless when you consider that 99% of recordings will have to be converted to PCM for any sort of processing and then back for pressing onto the SACD or however you want to distribute it as DSD.

Well, I don't really care so wouldn't look further.

Since I don't know you, I would have to keep an open mind about what you said .... would not accept it, necessarily. Only if I researched it, as you thought I might, would I take a side.


THx-RNMarsh
 
Still wonder why you have no measurements of problems with reverb?

To save anyone from having to wonder, it's not high on my priority list. Doing it won't particularly help with any end purpose I would like to get accomplished. I already have sufficient reason to continue trying to reduce jitter a little bit more if I can, and to continue learning about FPGA programming and programming the newer Sharc chips. There are trade-offs with different features, chip prices, software licensing, etc. Lots to think about.
 
Oh, you could probably measure anything that can be heard. I don't see why not in principle. The question is more to the effect of whether or not you actually do measure everything that can be audible.

To save anyone from having to wonder, it's not high on my priority list. Doing it won't particularly help with any end purpose I would like to get accomplished. I already have sufficient reason to continue trying to reduce jitter a little bit more if I can, and to continue learning about FPGA programming and programming the newer Sharc chips. There are trade-offs with different features, chip prices, software licensing, etc. Lots to think about.
OK, so everyone has been wasting their time, that is what this thread seems to be for....ho-hum
 
www.hifisonix.com
Joined 2003
Paid Member
If a first order, low pass amplifier has significantly greater bandwidth than the signal,
then the amplifier's effective time delay is approximately 1/(2*pi*BW), where the
bandwidth BW is in Hz. This was at one time termed "integrator delay".

This is because the amplifier's phase shift is -arctan (f/BW) radians, and is approximately linear for f much less than the BW, where -arctan (f/BW) ~ -f/BW. A linear phase of the form phase(f) = -2*pi*f*T is equivalent to a pure time delay T, so here T = -(1/(2*pi)) * d(phase)/df = 1/(2*pi*BW).

This is distinct from propagation delay, and is unrelated to it.


If you apply a signal to one input and look at the associated feedback node, you will see that it shows a non-zero value immediately, aside from any relativistic delays. The feedback signal will peak later than the input signal due to the phase shift through the system, but there is no point anywhere at which the feedback signal does not relate directly to the input signal when operating in its linear region.

If it's a 'delay' (and not phase shift) related to 1/BW, then on a 100kHz input stimulus there would be a 10 us 'delay' around the loop and the amplifier would be running 'open loop'. An amplifier running open loop for 10us does not work - it's the old chestnut rolled out by Colloms et al.

But, better than that, it can be tried out on LTspice.
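The same experiment can be sketched outside LTspice too. A crude Euler-method simulation of a one-pole amplifier with unity feedback (100 dB open-loop gain and a 100 Hz pole are assumed, illustrative values) shows the feedback node moving on the very first nanosecond step after an input step, with no dead time:

```python
# Euler simulation of a one-pole amplifier closed with unity feedback.
# Gain and pole frequency are assumed, illustrative values.
import math

A0 = 1e5                      # open-loop DC gain (100 dB)
f_pole = 100.0                # open-loop pole, Hz
tau = 1 / (2 * math.pi * f_pole)

dt = 1e-9                     # 1 ns time step
x = 1.0                       # input step applied at t = 0
y = 0.0                       # output node (= feedback node, unity feedback)

history = []
for _ in range(1000):         # simulate the first microsecond
    error = x - y                      # differential input
    y += (A0 * error - y) / tau * dt   # one-pole open-loop response
    history.append(y)

print(history[0])     # non-zero on the very first step: no dead time
print(history[-1])    # settled close to 1.0 within a microsecond
```

The output responds immediately and settles in roughly 1/(2*pi*A0*f_pole), i.e. tens of nanoseconds; there is no interval where the loop runs "open".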
 
It probably takes a long delay, on the order of a second or more, to do that. That can get costly, and the delay may not be compatible with some applications like video playback, video editing, DAW editing, etc. At some price point for a really good dac there could be a high quality, long delay mode and a lower sound quality, short delay mode. IIRC, the Chord Dave has that, but it also costs maybe 6 times as much as the DAC-3. Stereophile is probably correct in classifying the DAC-3's price as a bargain...

The Auralic Vega has four user-adjustable clock PLL settings - Auto/Coarse/Fine/Exact - which have the same characteristic you state. In the Exact mode, if there is any input jitter or uneven latency (long T jitter...) it will insert pauses, letting you know the source is unstable. I feed the Vega from a high-end Asus motherboard's USB3 and can usually keep it on Exact, but with some sources, especially streaming, I have to back it down to Fine, or if I am switching sources a lot I leave it at Auto.

Cheers,
Howie
 
To save anyone from having to wonder, it's not high on my priority list. Doing it won't particularly help with any end purpose I would like to get accomplished. I already have sufficient reason to continue trying to reduce jitter a little bit more if I can, and to continue learning about FPGA programming and programming the newer Sharc chips. There are trade-offs with different features, chip prices, software licensing, etc. Lots to think about.

If you're designing an 8x interpolation filter or whatever, you can certainly do the math in a DSP like the SHARC, but most DSPs are limited to 192 kHz on the I2S peripherals. That's one reason why you see most external filters done in FPGAs.
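For anyone curious about the structure being moved into the FPGA, the skeleton of an 8x interpolator is short. A deliberately crude Python sketch (zero-stuffing followed by a Hamming-windowed-sinc low-pass; the tap count and everything else here are illustrative, not a production design):

```python
import math

L = 8  # interpolation factor

def windowed_sinc(num_taps, cutoff):
    """Crude low-pass FIR: Hamming-windowed sinc, normalized cutoff in (0, 0.5)."""
    mid = (num_taps - 1) / 2
    taps = []
    for n in range(num_taps):
        t = n - mid
        h = 2 * cutoff if t == 0 else math.sin(2 * math.pi * cutoff * t) / (math.pi * t)
        h *= 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))  # Hamming window
        taps.append(h)
    scale = sum(taps)                      # normalize for unity DC gain
    return [t / scale for t in taps]

def interpolate(samples, factor):
    # 1) zero-stuff: insert factor-1 zeros between input samples
    stuffed = []
    for s in samples:
        stuffed.append(s * factor)         # scale to preserve amplitude
        stuffed.extend([0.0] * (factor - 1))
    # 2) low-pass at the original Nyquist (normalized cutoff = 0.5 / factor)
    taps = windowed_sinc(127, 0.5 / factor)
    out = []
    for i in range(len(stuffed)):
        acc = 0.0
        for k, h in enumerate(taps):
            if 0 <= i - k < len(stuffed):
                acc += h * stuffed[i - k]
        out.append(acc)
    return out

out = interpolate([1.0] * 64, L)   # a DC input should interpolate to ~1.0
print(len(out))                    # 512 samples out for 64 in
print(out[300])                    # mid-stream sample, close to 1.0 once settled
```

In practice this is restructured as a polyphase filter so the multipliers never touch the stuffed zeros, which is exactly the kind of regular arithmetic that maps well onto FPGA DSP blocks.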
 