John Curl's Blowtorch preamplifier

Status
Not open for further replies.
john curl said:

<snip>
Capacitors with high DA will change the path as you might see it on an oscilloscope, typically 1-3% for each cap. What happens then, when you put 5 such caps in series on each channel?
<snip>
What we really need is someone to put 5 isolated RC time constants in a SPICE simulator
You need two 'identical' branches, EXCEPT that C is ideal on one branch, and the other branch uses less than ideal caps composed of the linear model put forth by Pease, or Dow before him.
Please understand that we need a high DA model for mylar, aluminum, or ceramic, and NOT the model of polystyrene, used by Dow. Then we subtract the two models and see what is left.

Before anybody jumps in, would you do everybody a favor and define what exactly "zero linear distortion" means?
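John's five-cap experiment can be sketched in plain Python before anyone reaches for SPICE. The model below is the standard lumped DA approximation he mentions (a main capacitance paralleled by several series R-C "soak" branches, in the style of Dow and Pease); all branch values are illustrative guesses, not measurements of mylar, aluminum, or any other real dielectric:

```python
# Sketch of the classic dielectric-absorption "soak" test, using the lumped
# model (main C in parallel with several series R-C branches). Component
# values below are illustrative assumptions, not data for a real capacitor.

def simulate_da(c0=1e-6,
                branches=((1e6, 20e-9), (1e7, 20e-9), (1e8, 20e-9)),
                t_soak=10.0, t_short=0.01, t_open=1.0, dt=1e-4):
    """Charge the cap, short it briefly, then open-circuit it, and return
    the terminal voltage that 'creeps back' -- the signature of DA."""
    vb = [0.0] * len(branches)   # voltage on each hidden branch cap

    def step_branches(v_term):
        for i, (r, c) in enumerate(branches):
            vb[i] += (v_term - vb[i]) / (r * c) * dt

    # 1) soak at 1 V: the hidden branches slowly charge toward 1 V
    for _ in range(int(t_soak / dt)):
        step_branches(1.0)
    # 2) brief short to 0 V: the main cap empties, branches barely discharge
    for _ in range(int(t_short / dt)):
        step_branches(0.0)
    # 3) open circuit: charge flows back from the branches into the main cap
    v = 0.0
    for _ in range(int(t_open / dt)):
        i_back = sum((vbi - v) / r for vbi, (r, c) in zip(vb, branches))
        v += i_back / c0 * dt
        step_branches(v)
    return v

recovered = simulate_da()
print(f"voltage creep-back after shorting: {recovered * 100:.2f}% of original")
```

With these made-up values the creep-back lands in the low single-digit percent range, the same ballpark John quotes; running two such models (one with the branches, one without) and subtracting the outputs is exactly the differencing experiment he describes.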
 
Re: Calling Ernst Mach!

SY said:
When excited by an AC signal, golf balls are popping in one end, then out the other, then the process reverses. The golf balls in the hose don't move much.

Yes, but my question of which comes first, the current or the field, is to figure out what causes the electrons to move. The electrons on the other side of the cable seem to pop out at the same speed as the field propagation (which is influenced by the dielectric and the conductor resistance?), which makes me think that the field controls the electron flow, not necessarily the electrons 'pushing' each other. But then again, why does the conductor resistance influence the propagation speed?

Perhaps it is easier just to listen than to try and understand as well. 😀

André
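For what it's worth, André's intuition can be checked with textbook numbers: the drift of the electrons themselves is glacial, while the field guided along the conductors does the actual signalling. A back-of-envelope sketch, using standard figures for copper and a typical (assumed) velocity factor for insulated cable:

```python
# Electron drift speed vs. field propagation speed in a copper interconnect.
# n is the textbook free-electron density of copper; the 0.66 velocity
# factor is a typical assumption for insulated cable, not a measurement.

e = 1.602e-19          # electron charge, C
n = 8.5e28             # free-electron density of copper, per m^3
area = 1.0e-6          # 1 mm^2 wire cross-section, m^2
current = 1.0          # 1 A of signal current

v_drift = current / (n * e * area)   # mean electron drift velocity, m/s
c = 3.0e8
v_signal = 0.66 * c                  # field propagation along the cable

print(f"electron drift: {v_drift * 1000:.3f} mm/s")
print(f"signal (field) propagation: {v_signal / 1000:.0f} km/s")
```

The drift works out to a fraction of a millimetre per second, some twelve orders of magnitude slower than the field, which supports the "golf balls barely move" picture.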
 
I'd suggest not reading Mr. Self

His Audio Power Amplifier Design book is certainly some of the best material you can read on amps, even for people for whom low THD is not important. It would be a shame to miss it.

Have fun, Hannes

EDIT: By the way, Drude's theory was slightly reworked fairly soon afterwards (by Sommerfeld), and in that form it remains very successful even today.
 
Re: Ad797

Edmond Stuart said:


Hi Scott,

Are you sure that this is the real schematic (AES preprint 3231, fig. 3a)?
There are a few things that puzzle me and/or make no sense:

1. Q61 seems superfluous, unless the bases of Q3 and Q4 are tied to the collector of Q61 instead of to the collector of Q7.

2. Under normal conditions Q50 is conducting, which spoils the whole thing. If I break the connection between the base of Q15 and the emitter of Q50, the circuit (almost) behaves as expected.
BTW, is Q50 meant to ease recovery from overload?

3. Bootstrapping the collector of Q19 instead of Q18. The latter makes more sense, as it avoids non-linear Cob loading (by Q18) of the input of the output stage.

Please, see below for my suggested mods and, of course, correct me if I'm wrong.

Regards,
Edmond.


OK, back at my schematic:

1) Q61 is superfluous, or it is there for some other reason not shown. The correct bias for Q5/Q6 is 0 V Vce.

2) No; if you check, the collectors of Q16 and Q17 are at the same potential, two Vbe's up from the top of I3. Q50 and Q61 are b-to-b diodes clamping the current mirror during slew. They are drawn as transistors because they are transistors; the excess current during the clamping action gets dumped into the supply, which is often more benign.

3) No, Q15's and Q18's Cob are cancelled by symmetry, the same way this single stage gets an Aol of 5,000,000.


I guess the schematic was pretty close after all 🙂 Also, to clarify again, the term "neutralization" was probably inappropriate. The cap I call Cn is actually bootstrapped EXCEPT for the voltage signal across the output stage. So it amounts to a 1-A ~ 0 bootstrapping, or positive feedback of the error.
 
SY said:
Andre, as far as audio signal is concerned, it's all instantaneous.

Yes, I knew you would say that 😀 but it's not answering my question: do you agree with my thinking that the field controls the electron flow? I'm just trying to understand something.

We all know that cables are not perfect; just use a longer length and measure. The question is only when the differences become audible. Surely that point will differ depending on the equipment used and also the listeners' ability to discern them.
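As a rough illustration of the "longer length" point, the simplest first-order cable effect is its capacitance loading the source impedance into a low-pass pole. The per-metre capacitance and source impedances below are assumed, typical values, not measurements of any particular cable:

```python
# First-order low-pass formed by source resistance and cable capacitance.
# 100 pF/m is a common ballpark for an unbalanced interconnect (assumed).
import math

cap_per_m = 100e-12            # cable capacitance, F per metre (assumption)

def corner_hz(source_ohms, length_m):
    """-3 dB frequency of the source-resistance / cable-capacitance pole."""
    return 1.0 / (2 * math.pi * source_ohms * cap_per_m * length_m)

for r_src, length in [(100, 1), (100, 100), (10_000, 10)]:
    f3 = corner_hz(r_src, length)
    print(f"Rsrc = {r_src:>6} ohm, {length:>3} m  ->  f-3dB = {f3 / 1e3:.0f} kHz")
```

A metre of cable from a low-impedance source puts the pole in the tens of MHz; it takes an unusually long run or a high source impedance to drag it anywhere near the audio band, which is consistent with "use a longer length and measure".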
 
Wavebourn said:


Yes, we were speaking about non-linearities when you chimed in with resonances and interference/diffraction, claiming that at higher SPLs they are more audible. Since auditory sensitivity is logarithmic, at higher SPLs so-called linear distortions should be less audible, but non-linearities caused by compression of the air and by deformation of the surfaces that transfer and reflect the sound cause additional colorations. Why do you think planar speakers (long thin membranes on heavy stiff frames) and phased arrays (many point sources with light cones, mounted on a heavy stiff non-resonant panel) sound more natural than single powerful point sources with much bigger displacement? However, some line arrays have too large a distance between drivers, so "linear" distortions are quite audible, but they are still more linear than point sources.


This does not seem correct.

Keep in mind that I am a fan of line sources and planars (especially ESLs).

Planar speakers do not automatically sound any better than a so-called point source. Most real-world speakers are not point sources in the first place, just better and worse approximations; therein lies a big issue.

Planar speakers approximate the expansion of a wave coming from a point source at some distance from the point source. Albeit usually only in one axis...

Witness the Quad 63 design - a planar that emulates the expansion of a wavefront from a single point source at some distance from the surface of the planar diaphragm.

The audible differences between excellent speakers of the sort you allude to are due to distortions that relate back to the specific mechanical means used to produce the sound, and to the SPL-vs.-distortion behaviour (especially non-linear) that results from these differences.

Not to mention that deadly "7th" harmonic that just might be produced as the result of some of these distortion and IM producing mechanisms.

So, other than the ear's perception of the sound source's apparent size and location, the only other difference that is actually audible is going to be the spectrum and level of distortion vs. frequency and level.

That is going to be different for speakers that are physically different, no matter what. Physically different is going to include everything right down to the cone/diaphragm material and past that point.

Planars and line sources are not magic solutions nor do they have automatically beneficial characteristics.

_-_-bear
 
FrankWW said:
So you never read the paper, did you?

If you did, what's the mistake?

I did not comment on his mistake; I commented on your statement. Instead of making arguments, you try to enforce your point by linking to some authority.


bear said:



This does not seem correct.


From your explanation, I don't see what does not seem correct.

Keep in mind that I am a fan of line sources and planars (especially ESLs).

I am not, because they have their own drawbacks.

Planars and line sources are not magic solutions nor do they have automatically beneficial characteristics.

They are not, but cylindrical waves, relatively light diaphragms, frames that are heavy and non-resonant in the band, and tiny excursions are all factors that help to reduce the non-linear distortions caused by air pressure and by deformation of surfaces. At high SPLs mechanical resonances are more audible not because of uneven frequency response (which is present at any level), but because of the non-linearities caused by deformations. This is my point.

Some manufacturers cut costs on speaker construction and materials and try to compensate for the resulting mechanical resonances with DSP equalizers, but that does not work as intended: by making the frequency response flatter they cannot "equalize" away the distortions produced by the speakers and their enclosures at resonant frequencies. Working in this direction, I had to make very heavy speakers with complex shapes. It is costly and inconvenient, but so far it is the only way to go that I see.

Edit: let me explain something. A long time ago, working on musical synthesis, I tried to simulate acoustic instruments using analog electronics. The idea seemed simple: generate certain waveforms that change dynamically, then apply formants using analog filters. I had to give up because I found that filtering was not enough: the filters had to add their own harmonics, depending on the signal levels. It was too complex, and I gave up as digital sampling technology overtook real synthesis. What is the difference between speakers and musical instruments in terms of physics? Nothing. The same laws work, the same principles sound. But what is desirable in a musical instrument is bad in a speaker.
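The claim that a linear DSP equalizer cannot undo level-dependent distortion can be shown with a toy model. The cubic nonlinearity and its coefficient below are arbitrary stand-ins for a deforming cone, not a speaker measurement; the point is only that the third-harmonic ratio grows with drive level, while a linear filter's response does not depend on level at all:

```python
# Toy level-dependent distortion: y = x + k3*x^3 (k3 is an arbitrary
# assumption). The H3/H1 ratio rises with drive level, so no fixed linear
# filter (a DSP EQ) can cancel it at every level.
import math

N = 4096  # samples per fundamental period

def third_harmonic_ratio(amplitude, k3=0.1):
    """Drive the cubic nonlinearity with a sine; return |H3| / |H1|
    computed by projecting onto sin(t) and sin(3t)."""
    h1 = h3 = 0.0
    for n in range(N):
        t = 2 * math.pi * n / N
        x = amplitude * math.sin(t)
        y = x + k3 * x ** 3
        h1 += y * math.sin(t)
        h3 += y * math.sin(3 * t)
    return abs(h3) / abs(h1)

for a in (0.1, 0.5, 1.0):
    print(f"drive {a:>3}: H3/H1 = {third_harmonic_ratio(a) * 100:.3f}%")
```

The harmonic ratio scales roughly with the square of the drive level here, so flattening the frequency response at one level leaves the distortion at other levels untouched, which is the argument against compensating mechanical resonances purely with EQ.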
 
So you didn't read it. Too bad.

It's a short article and wouldn't tax your intelligence. One take-away is that the perception of linear distortion and of non-linear distortion is not the same. Which, if you're in audio, is likely a useful thing to know.

I did not comment on his mistake; I commented on your statement. Instead of making arguments, you try to enforce your point by linking to some authority.
 