John Curl's Blowtorch preamplifier part II

Each time my GFI has tripped (washing machine, toaster or coffee machine), about once a year, I have thanked the ground wire.
Writing this, I realize it is mostly due to liquids inside electrical appliances. Or to singers adjusting their electric lamps from their baths (Claude Francois) :)

Oh, by the way, the US is known as one of the countries which use the most electrical power per citizen: how do they manage such power with only 110 V?
It's "which", not "witch". You are correct: US power, which ends up at about 120 to 125 V in most of the country, is unbalanced; where you deal with 220 V or so, that is balanced with reference to ground, and those branch circuits carry about twice the wattage (amps times volts). Power is sent into homes at 240 to 250 V with two phases. Dryers, A/C and heating run on 30 A, 50 A and larger 240 V circuits. Most houses have 200 A service with a 30- to 40-space circuit breaker panel. That is how so much electricity gets used even with 120 V branches. Industrial service is far bigger, with three-phase 480 V and up. Hope that helped.
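A quick back-of-the-envelope comparison (the breaker ratings below are just typical illustrative values, not taken from any code) shows how the 120 V branches and the 240 V circuits add up:

Code:
# Rough comparison of typical North American circuit capacities.
# The breaker ratings below are illustrative examples, not code requirements.

def watts(volts, amps):
    """Power available from a branch circuit (VA, roughly W for resistive loads)."""
    return volts * amps

print(watts(120, 15))   # typical 120 V lighting/outlet branch:  1800 W
print(watts(120, 20))   # 120 V kitchen/appliance branch:        2400 W
print(watts(240, 30))   # 240 V dryer circuit:                   7200 W
print(watts(240, 50))   # 240 V range circuit:                  12000 W
print(watts(240, 200))  # whole-house 200 A service:            48000 W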
 
I think semantics is rearing its ugly head (phase vs polarity)
from wikipedia :
>
North America
In the U.S. and parts of Canada and other countries, split phase service is the most common. Split phase provides both 120 V and 240 V service with only three wires. The house voltages are provided by local transformers. The neutral is directly connected to the three-phase neutral. Socket voltages are only 120 V, but 240 V is available for heavy appliances because the two halves of a phase oppose each other.
>
 
I sympathize with Chris's attempt to correct here, as so many people use "phase" where "polarity" is appropriate. One of the prevalent misnomers, for example, is the "phase splitter", the name that came to be attached to the transitional stage in a tube amp that converts an unbalanced signal into direct and inverted polarities for push-pull output tube drive.
 
I sympathize with Chris's attempt to correct here, as so many people use "phase" where "polarity" is appropriate.
Why? Referenced to the neutral (if I understood correctly), the two other signals are in phase opposition and have no fixed polarity, since they are alternating voltage/current, no?
I also wonder why countries chose different mains frequencies (50 or 60 Hz), with all the consequences for TV programs (25 fps, 30 fps, 29.97 drop frame).
Note: in Europe, films shot at 24 fps were played at 25 fps for TV broadcast. So they were shorter :)
 
In fact, in the US, GFCI receptacles or circuit breakers are required in almost all residential rework or new construction. Now Arc Fault Circuit Interrupter (AFCI) breakers are required on more residential circuits. The big problem with AFCIs is that they often have a different opinion on what constitutes an arc than your equipment and appliances do.
 
Back to balanced analog cables.
These three Jim Brown (& Bill Whitlock) AES papers cover the topic:

Common-Mode to Differential-Mode Conversion in Shielded Twisted-pair Cables (Shield-Current-Induced Noise)
Neil Muncy has shown that audio frequency current flowing on the shield of balanced audio wiring will be converted to differential mode voltage by any imbalance in the transfer impedance of cables, and hypothesized that the effect increases linearly with frequency. Whitlock has shown that conversion also occurs with capacitive imbalance. This paper confirms Muncy's hypothesis, and shows that shield current induced noise can be significant in the MHz range. Preprint Number: 5747 Convention: 114 (February 2003) Authors: Jim Brown; Bill Whitlock
http://www.audiosystemsgroup.com/AES-SCIN-ASGWeb.pdf

Testing for Radio-Frequency Common Impedance Coupling (the "Pin 1 Problem") in Microphones and Other Audio Equipment
The author has shown that a primary cause of VHF and UHF interference to professional condenser microphones is inadequate termination within the microphone of the shield of the microphone's output wiring, a fault commonly known as the pin 1 problem. Tests using only audio frequency test signals generally fail to expose susceptibility to radio frequency (RF) interference. Simple RF tests for pin 1 problems in microphones and other audio equipment are described that correlate well with EMI observed in the field. Preprint Number: 5897 Convention: 115 (September 2003) Author: Jim Brown
http://www.audiosystemsgroup.com/AESPaperNYPin1-ASGWeb.pdf

A Novel Method of Testing for Susceptibility of Audio Equipment to Interference from Medium and High Frequency Radio Transmitters
The author has shown that radio frequency (RF) current flowing on the shield of balanced audio wiring will be converted to a differential signal on the balanced pair by a cable-related mechanism commonly known as Shield-Current-Induced Noise. This paper investigates the susceptibility of audio input and output circuits to differential signals in the 200 kHz - 2 MHz range, with some work extending to 300 MHz. Simple laboratory test methods are described, equipment is tested, and results are presented. Laboratory data are correlated with EMI observed in the field. Preprint Number: 5898 Convention: 115 (September 2003) Author: Jim Brown
http://www.audiosystemsgroup.com/AESPaperNY-SCIN-ASGWeb.pdf


Interestingly, the star-quad cable did not do as well as expected.
He points out that US fire rules limit which cables can be used for permanent in-wall installations.
 
Why? Referenced to the neutral (if I understood correctly), the two other signals are in phase opposition and have no fixed polarity, since they are alternating voltage/current, no?

As hitsware said, it's mostly a semantics thing. Usually it makes no difference, but AC power is the one case where it matters (somewhat...). Power in North America, at least, is strung around on three-phase lines and then fed single-phase, with two polarities, to residences. Phase refers to timing and polarity to voltage.

Really big motors get fed three-phase power because the motors run better on it. That's also the way it comes out of Hoover Dam (and Nuclear One here). Folks who work really big traveling sound systems can tell us the details about hooking up *between* phases for their equipment's power. That's where the 208 volt primary taps on transformers are used. I think they use those lines instead of regular wall outlets to get enough current rating within the building.
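For anyone wondering where the 208 V figure comes from: it is the line-to-line voltage between two 120 V phases that are 120° apart. A quick sketch (assuming an ideal, balanced wye system):

Code:
import math

v_phase = 120.0  # line-to-neutral voltage of one phase in a three-phase wye system

# Two phases 120 degrees apart: line-to-line voltage = sqrt(3) x line-to-neutral
print(round(math.sqrt(3) * v_phase, 1))  # ~207.8 V, the "208 V" building supply

# Residential split-phase for contrast: the two legs are 180 degrees apart
print(2 * v_phase)                       # 240 V between the legs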

Thanks,
Chris
 
For audio purposes ---- common-mode (CM) interference signals are usually HF or RF and are field-coupled into a shield and/or the two signal wires. The polarity is therefore in-phase, or common (-mode), in all wires. When the currents induced in the wires flow through different impedances, as in a differential input amplifier that does not have EXACTLY the same Z (R and X) at each input (+ and -), a difference potential is created with respect to common/ground.... the CM to differential-mode (DM) conversion. That differential HF/RF signal then gets amplified by the input stage.
It is why the amp's CMRR is so important... to reject and minimize those imbalance effects. I am sure many here can compute the CMR effects of such imbalances by degree of imbalance. [PS: star-quad works best at low frequencies] Thx-RNMarsh
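As a rough illustration of that mechanism (the impedances below are invented for the example, not taken from any real input stage), here is a minimal sketch of how a source-impedance mismatch turns a common-mode voltage into a differential one and limits the effective CMRR:

Code:
import math

# Sketch of common-mode to differential-mode conversion caused by
# impedance imbalance (illustrative values, not a real circuit).

def cm_to_dm(v_cm, z_src_pos, z_src_neg, z_in_pos, z_in_neg):
    """Differential voltage produced at a differential input when a common-mode
    source drives both legs through (mismatched) source impedances."""
    v_pos = v_cm * z_in_pos / (z_in_pos + z_src_pos)   # voltage reaching + input
    v_neg = v_cm * z_in_neg / (z_in_neg + z_src_neg)   # voltage reaching - input
    return v_pos - v_neg

v_cm = 1.0                                         # 1 V of common-mode HF/RF on both conductors
v_dm = cm_to_dm(v_cm, 100.0, 110.0, 10e3, 10e3)    # 10 % source-impedance mismatch
print(v_dm)                                        # ~1 mV of differential "signal" from 1 V CM

cmrr_db = 20 * math.log10(abs(v_cm / v_dm))
print(round(cmrr_db, 1))                           # effective CMRR limited to ~60 dB by the imbalance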
 
Yes, Richard. That is why we prefer good transformers for mics and lines in professional work, despite the price of devices like Lundahl's.
It is also why power amps often work better with a low-pass filter at their inputs.

The input signal to any product should not have a bandwidth greater than the BW of the amp. Notice that the frequency-response tests on Mark Levinson products showed the high frequencies rolled off just above the audio range / 20 kHz. This was done at the input... limiting the distorting effects of EMI/RFI and keeping unwanted HF/RF out of the amps/preamps. Thx -Richard
 
The input signal to any product should not have a bandwidth greater than the BW of the amp. Notice that the frequency-response tests on Mark Levinson products showed the high frequencies rolled off just above the audio range / 20 kHz. This was done at the input...

..... at the cost of transparency, and resulting in a "special" sound. Especially with hi-res sound sources, this approach is a bit unfortunate. Much better to work on a high-BW product without any need to limit the input below 100 kHz.
 
..... at the cost of transparency, and resulting in a "special" sound. Especially with hi-res sound sources, this approach is a bit unfortunate. Much better to work on a high-BW product without any need to limit the input below 100 kHz.
No need to limit the amplifier's internal BW. A simple RC roll-off at the input will help. Just design the best amp possible; then a simple filter at the input won't be audible and the amp's characteristics are still all there. I am not saying ML didn't limit the BW in the best way, or at too low a frequency; it's just what they did to avoid HF/RF issues.
Personally, I would probably agree with 100 kHz as a more reasonable number. Internally it can be very, very fast. As fast as needed.... most designs end up with very wide BW in the process of making a very linear amp with the best transistors. But that invites HF/RF into them, with unintended side-effects that are not pleasant. -Thx RNMarsh
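As a sketch of what such an input RC amounts to (the component values are just plausible examples, not anything ML actually used):

Code:
import math

def cutoff_hz(r_ohms, c_farads):
    """-3 dB corner of a first-order RC low-pass: fc = 1 / (2 * pi * R * C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Example values only: a 1 kohm series resistor with different shunt capacitors
print(round(cutoff_hz(1e3, 8.2e-9)))    # ~19 kHz  (a ~20 kHz style roll-off)
print(round(cutoff_hz(1e3, 1.5e-9)))    # ~106 kHz (the ~100 kHz compromise)
print(round(cutoff_hz(1e3, 820e-12)))   # ~194 kHz (closer to a 200 kHz corner)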
 
I know you are speaking about an input RC - it was easy to understand what you meant. Anyway, I am saying that a 20 kHz input roll-off is too low (in frequency) and it is audible as a lack of transparency. So, IMO, it is much better to design the power amplifier properly, and then there is no need to set the input RC below 100 kHz, with no increase in RFI sensitivity. They should just do it right and not use the supporting crutch of a 20 kHz input RC.
 
You know I like very fast amps, current feedback, for this reason: so as not to create intermodulation products with fast transients (including HF). I usually compensate the amp internally for flat bandwidth (no ringing near 5-10 MHz), then add an input filter for no overshoot on square waves. That is the minimum input filter. From that point, I try to lower the filter's corner, just by listening.
It seems that the best sound comes when the phase is still flat at 20 kHz, i.e. a low-pass at ~200 kHz.
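For reference, the residual phase shift at 20 kHz from a single-pole low-pass is easy to estimate (a first-order sketch that ignores the amplifier's own poles):

Code:
import math

def first_order_phase_deg(f_hz, fc_hz):
    """Phase lag of a first-order low-pass at frequency f: phi = atan(f / fc)."""
    return math.degrees(math.atan(f_hz / fc_hz))

print(round(first_order_phase_deg(20e3, 200e3), 1))   # ~5.7 deg with a 200 kHz pole
print(round(first_order_phase_deg(20e3, 100e3), 1))   # ~11.3 deg with a 100 kHz pole
print(round(first_order_phase_deg(20e3, 20e3), 1))    # 45 deg with a 20 kHz pole

Keeping the pole a decade above the audio band keeps the phase error at 20 kHz under about 6 degrees.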
 
..... at the cost of transparency, and resulting in a "special" sound. … Much better to work on a high-BW product without any need to limit the input below 100 kHz.

This gentleman attributes the audible effects of band limiting to the steep phase change and not to the amplitude roll-off. Phase starts changing early on (at about fc/10).

The importance of the phase response


For audio purposes ---- common-mode (CM) interference signals are usually HF or RF and are field-coupled into a shield and/or the two signal wires. The polarity is therefore in-phase, or common (-mode), in all wires. When the currents induced in the wires flow through different impedances, as in a differential input amplifier that does not have EXACTLY the same Z (R and X) at each input (+ and -), a difference potential is created with respect to common/ground.... the CM to differential-mode (DM) conversion. That differential HF/RF signal then gets amplified by the input stage.
It is why the amp's CMRR is so important... to reject and minimize those imbalance effects. I am sure many here can compute the CMR effects of such imbalances by degree of imbalance. [PS: star-quad works best at low frequencies] Thx-RNMarsh

This seems to be the main driver behind using instrumentation amps.
http://phobos.iet.unipi.it/~barilla/pdf/InAmpDesignerGuide.pdf


George
 
This gentleman attributes the audible effects of band limiting to the steep phase change and not to the amplitude roll-off.
That is what I said. And I can correlate it with recording equalization. When you use an analog equalizer, you introduce phase shifts. The changes you make to the instruments are obvious, with little or even no change in frequency response (equalization outside the band). When you use digital equalizers, you can change the frequency response a lot with no change in the 'texture' of the source.
Result: digital equalization is a must to clean up a sound (remove parasitic low frequencies, etc...), while analog is irreplaceable to sculpt a sound.

For example, we often have, in movie dialogue, some parasitic frequencies due to lighting equipment. When you use an analog notch to remove them, the voices are deeply degraded. Using the same notch as a digital filter, the parasitic frequency is gone and the voices are not affected. But there is no way to create a wah-wah effect with digital filters.
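One way to see the difference being described (just an illustrative sketch using scipy, with arbitrary filter parameters) is to compare an IIR biquad notch, whose phase rotates around the notch much as an analog notch does, with a linear-phase FIR notch whose phase is a pure delay:

Code:
import numpy as np
from scipy import signal

fs = 48_000          # sample rate in Hz (arbitrary for this sketch)
f0, Q = 100.0, 10.0  # notch a parasitic 100 Hz component, e.g. from lighting gear

# IIR biquad notch: like an analog notch, its phase rotates around the notch frequency.
b_iir, a_iir = signal.iirnotch(f0, Q, fs=fs)

# Linear-phase FIR aimed at a similar notch (1001 taps is only a crude approximation
# at 48 kHz, but its phase is exactly linear: a pure delay of (numtaps - 1) / 2 samples).
b_fir = signal.firwin2(1001, [0.0, f0 - 5, f0, f0 + 5, fs / 2],
                       [1.0, 1.0, 0.0, 1.0, 1.0], fs=fs)

# Phase of the IIR notch at a few frequencies around the notch (degrees):
for f in (80.0, 95.0, 105.0, 120.0, 1000.0):
    _, h = signal.freqz(b_iir, a_iir, worN=[f], fs=fs)
    print(f, round(float(np.degrees(np.angle(h[0]))), 2))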

Another example is the use of brick-wall analog filters compared to digital oversampling.
 