John Curl's Blowtorch preamplifier part II

Hi Pavel!
Don't let me put words into your mouth, but I suspect you were making an overall generalization, and one which would not be inaccurate. In my experience, here in the US 95% of all FM broadcasts (virtually all commercial stations) are processed to achieve maximum loudness by multiband compressors and other 'processors' which massively distort and spectrally skew the sound into an unrecognizable mess :headbash:. To be the loudest on the radio dial is to attract the most advertising revenue, or so the theory goes.

HOWEVER...

The best non-commercial stations have high-end audio signal chains which can achieve over 100dB s/n at the input to the stereo generator or STL. These stations employ minimal if any compression, and use only an overall long-term time constant AGC to control deviation. With the best tuners this approach can result in an overall s/n of better than 85dB at the receive end. This can be subjectively 6-10dB quieter than a signal with the same single-figure s/n rating but a flat noise shape, due to the HF de-emphasis in the receiver.
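
As a rough sanity check on that de-emphasis point, here is a small Python sketch (my own arithmetic, not Howard's figures) that estimates how much a standard 75 us de-emphasis network attenuates flat noise over the 20 Hz to 15 kHz band:

Code:
import numpy as np

# |H(f)|^2 of a first-order RC de-emphasis network with time constant tau
tau = 75e-6                       # US standard; Europe uses 50 us
f = np.linspace(20, 15e3, 20000)  # audio band, Hz
h2 = 1.0 / (1.0 + (2 * np.pi * f * tau) ** 2)

# Flat noise has uniform power across the band, so the reduction is just
# the (negative) dB value of the mean of |H(f)|^2 over that band.
print("de-emphasis noise reduction: %.1f dB" % (-10 * np.log10(h2.mean())))

This prints roughly 7 dB, which is in the same ballpark as the 6-10 dB subjective figure above; the exact audible benefit also depends on the triangular FM noise spectrum and on ear weighting, which this sketch ignores.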

Hi Howard,

thank you for the explanation. This is very good news for me, as here I have always experienced some kind of dynamic compression and short-term AGC. For this reason, I have never been satisfied listening to classical music concerts over FM. If you have 5% of FM broadcasting stations that use no (or only minimal) dynamic compression and add only long-term AGC, then the sophisticated listeners who can choose such stations are lucky.

Best regards,
 
John,

Maybe it is just that you are listening with the wrong speakers. These are what I am using.

ES
 

Attachments

  • Louisville Arena.jpg
Best filtering of FM chip output

I have the original RadioXTuners modifications to the Sony XDR-F1HD. The stock rolled-off sound was not acceptable, nor was the transition to the inferior compressed HD subcarrier. Now that I have solved those problems and others, this is definitely in the top rank of tuners, sonics included. Now I'm thinking of even better solutions.

Sony has a two-capacitor passive filter between the FM chip output and a two-transistor buffer. These capacitors have time constants equivalent to cutoff points at 150 kHz and 75 kHz. It is my guess that Sony is executing a low quality (and low Q) passive 50 kHz filter. In my modified version, the filter is executed with "a high quality chip" and a typical lowpass feedback network, and the built-in filters and buffers are bypassed.
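
For reference, the cutoff-vs-time-constant arithmetic is just f_c = 1/(2*pi*R*C); the R and C values in this little Python sketch are placeholders chosen to land near the quoted corners, not Sony's actual parts:

Code:
import math

def rc_cutoff_hz(r_ohms, c_farads):
    """First-order RC low-pass corner frequency."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# placeholder values, picked only to reproduce the corners mentioned above
print("%.0f kHz" % (rc_cutoff_hz(1e3, 1.06e-9) / 1e3))  # ~150 kHz
print("%.0f kHz" % (rc_cutoff_hz(1e3, 2.12e-9) / 1e3))  # ~75 kHz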

But is a two-pole 50 kHz filter the best solution here? What is the chip actually putting out here? My guess is that it's highly oversampled, if not a sigma-delta source. I figure the main reason there is rolloff at the top of the passband in the stock circuit is that the Q of the cheap filter is so low, something like 0.3 (though I leave the calculation to circuit modelers). Raising the Q to 0.5 by having two identical buffered filters might be just enough to keep the passband rolloff below 0.1dB. So that's the modification I'm thinking of now. I plan to run the FM chip output through a resistor/capacitor lowpass filter with a 100 kHz rolloff and no buffer (cable capacitance merely parallels the small 0.03uF cap in the filter), then implement a second 100 kHz rolloff in a following tube buffer stage.
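
To put some numbers on the Q guesses above, here is a short Python sketch that evaluates the standard second-order low-pass magnitude at 15 kHz, taking the 100 kHz corner and the Q values in this post at face value:

Code:
import math

def droop_db(f_hz, f0_hz, q):
    """Gain in dB of a 2-pole low-pass with corner f0 and quality factor Q."""
    x = f_hz / f0_hz
    mag2 = 1.0 / ((1.0 - x * x) ** 2 + (x / q) ** 2)
    return 10.0 * math.log10(mag2)

for q in (0.3, 0.5, 0.707):
    print("Q = %.3f -> %.2f dB at 15 kHz" % (q, droop_db(15e3, 100e3, q)))

With a 100 kHz corner this comes out to roughly -0.8 dB at Q = 0.3, about -0.2 dB at Q = 0.5, and essentially flat at a Butterworth Q of 0.707, so whether Q = 0.5 clears the 0.1 dB target depends on exactly where the corner ends up.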

Now in typical SACD filters, don't they use more than 2 poles for a 50 kHz lowpass? That's what I was trying to find when I found this thread.

But tapping directly into the digital output would be even better in my system, which runs on DSP for crossover and EQ. Is it high-rate PCM or some kind of delta-sigma?
 
FM rarely has content above 15 kHz. Analog FM has a 19 kHz pilot tone at 10% modulation or so. The 38 kHz subcarrier also limits HF extension in FM: too much HF into the 38 kHz subcarrier folds down below 19 kHz and messes up the stereo decoder. A well designed low pass filter at 20 kHz would be more than enough.
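
The folding mentioned here is just the sideband arithmetic of the stereo multiplex: the L-R signal is DSB-SC modulated onto the 38 kHz subcarrier, so an audio tone at f lands at 38 kHz +/- f. A minimal Python sketch of that bookkeeping:

Code:
PILOT_HZ = 19_000
SUBCARRIER_HZ = 38_000

def sidebands_hz(f_audio_hz):
    """Where a single L-R audio tone appears in the FM multiplex."""
    return SUBCARRIER_HZ - f_audio_hz, SUBCARRIER_HZ + f_audio_hz

for f in (10_000, 15_000, 20_000):
    lower, upper = sidebands_hz(f)
    note = "  <- falls below the 19 kHz pilot" if lower < PILOT_HZ else ""
    print(f"{f / 1e3:4.0f} kHz audio -> {lower / 1e3:.0f} / {upper / 1e3:.0f} kHz{note}")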

HD Radio runs at a 44.1 kHz sample rate (so at least most CDs won't suffer the abuse of bad sample rate conversion), but that again sets a maximum audio frequency of about 20 kHz. HD Radio is presented as essentially CD-quality audio, which is still more reason to look at adding an S/PDIF output.
 
Demian, please stop spreading tales about CD quality

from HD Radio - Wikipedia, the free encyclopedia
Digital information is transmitted using COFDM with an audio compression algorithm called Lucent's PAC (Perceptual Audio Coder). However, audio compressed with this method led to complaints about poor sound quality, so in 2003 iBiquity combined it with an enhancement called SBR (spectral band replication), which improves audio quality at very low bit rates, and rebranded the codec as HDC (High-Definition Coding). (HDC is a proprietary codec based upon, but incompatible with, the MPEG-4 standard HE-AAC.)

Usually we are listening to 25 kbit/s MP3-like garbage on HD1, HD2, HD3.
 
In principle, there could be baseband mono audio in FM all the way to 22 kHz, which is the lower limit of the 38 kHz L-R subcarrier. People often think that tuners which implement 19 kHz cancellation sound better than tuners which implement a 19 kHz filter. Some tuners didn't even bother to include filtering of all the stuff above 22 kHz.

Anyway, the Sony XDR-F1HD tuner already filters out the 19 kHz pilot and processes the 38 kHz modulation. What comes from the chip output does not need a steep filter, which would have been required for either of those. All that is required is a gradual filter to cut the high frequency digital sampling noise, similar to what is done for CD (where there is digital filtering) or for DSD.

Just because the filter *can* be gradual doesn't mean it needs to be. A steep filter at 15 kHz would probably give the best technical specs, if not sound quality, because it would increase S/N, and tuners are only spec'd to 15 kHz anyway.

But since the tuner is already a bit challenged in the sense of high frequency openness, a steeper and lower frequency cutoff is not likely to improve sound quality. Instead, what is needed is a better gradual filter of some kind.
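
To make the 'steep versus gradual' trade-off concrete, here is a rough Python/scipy comparison; the specific filter types and corners are my own illustrative choices, not anything taken from the Sony circuit:

Code:
import numpy as np
from scipy import signal

def response_db(b, a, f_hz):
    """Magnitude in dB of an analog filter (b, a) at frequency f_hz."""
    _, h = signal.freqs(b, a, worN=[2 * np.pi * f_hz])
    return 20 * np.log10(abs(h[0]))

# gentle: 2nd-order Bessel, -3 dB at 100 kHz
b1, a1 = signal.bessel(2, 2 * np.pi * 100e3, btype='low', analog=True, norm='mag')
# steep: 8th-order elliptic at 15 kHz, 0.1 dB ripple, 80 dB stopband
b2, a2 = signal.ellip(8, 0.1, 80, 2 * np.pi * 15e3, btype='low', analog=True)

for name, (b, a) in (("gentle 100 kHz Bessel", (b1, a1)),
                     ("steep 15 kHz elliptic", (b2, a2))):
    print("%-22s  15 kHz: %6.2f dB   300 kHz: %7.1f dB"
          % (name, response_db(b, a, 15e3), response_db(b, a, 300e3)))

The gentle filter barely touches 15 kHz but only modestly attenuates ultrasonic hash, while the steep filter buries everything above the audio band at the cost of much more phase shift near 15 kHz, which is the trade-off being discussed.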
 
Thanks for the input, charlesp210. I remember the good old days of mono FM, and I was even able to make quality 15 ips analog tapes from live events. Stereo was a bring-down, but MONO could still sound pretty good.
Stereo is a REAL compromise, and most of us listened in MONO, at least until recently. This little Sony is a REAL find, especially if we can FINISH it to hi-fi standards. Apparently, serious FM manufacturers have tried, to no avail, to get the chips themselves from Sony.
 
One of the biggest headaches in audio design today is getting the technology developed by the major manufacturers and being able to optimize it with better power supplies and output stages.
We can often buy 'one off' components from the major manufacturers, but then we have to bypass some of their 'lowest common denominator' circuitry that is just added on to complete the unit, and this is where the audio quality suffers, if you believe in such a thing. When we audio manufacturers try to make a product, we are faced with BIG licensing fees from Dolby, Sony, etc., above and beyond their technology itself, which is relatively cheap to buy. It limits our ability to manufacture, and we are stuck with making amps, preamps, and loudspeakers. Sources such as CD, DVD, SACD, and even FM are largely out of reach for us.
 
Thanks EUVL for the input. We are at a crossroads in jfet design. The 2SK170-J74 combination is continually more difficult to implement in these modern times. We are finding EVEN MORE fake devices sold from offshore sources, creating a dilemma.
It just so happens that the 2SK246-J103 combination is adequate for line level preamp and power amp input stages. They are not really very quiet, setting us back about 35 years, but they will work.
 
I guess if you parallel enough of them, the noise will go down and the transconductance will go up. But I am bothered that they are not really complementary, even though they are used in many applications where they are assumed to be. In that sense, it is easier with bipolars.
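
The paralleling arithmetic mentioned here is simple: transconductance adds directly while the uncorrelated voltage noise combines as 1/sqrt(N). A small Python sketch, using placeholder single-device numbers rather than 2SK246 datasheet values:

Code:
import math

gm_single_mS = 2.0   # hypothetical transconductance of one device, in mS
en_single_nV = 6.0   # hypothetical input noise density, in nV/sqrt(Hz)

for n in (1, 2, 4, 8):
    gm = n * gm_single_mS                 # transconductances add
    en = en_single_nV / math.sqrt(n)      # uncorrelated noise sources average down
    print(f"{n} devices: gm = {gm:4.1f} mS, en = {en:4.2f} nV/sqrt(Hz) "
          f"({20 * math.log10(en_single_nV / en):4.1f} dB quieter)")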

The real challenge is to design pure single-ended (balanced) circuits that have the same low distortion as, e.g., your complementary input stage. I have an idea how, but I want to prove it first before publishing, so that I would not look totally stupid if I got it wrong.


Patrick
 
Today, I would like to discuss the history of audio measurement, at least as far as I go back, to see its achievements, its 'pitfalls' and its insights with regard to audio design.
I would like to start about 40 years ago, when I had about 10 years previous experience with making audio measurements, both with my own and with the latest factory purchased equipment. The favorite measurement instrument at the time was the CROWN IMA IM distortion analyzer. It measured as low as 0.001% distortion, had auto tracking, and was THE test instrument on almost every audio engineer's test bench.
I didn't have an IMA at first; I first used, and ultimately built and modified, my own Heathkit IM analyzer in 1965, to measure down to about 0.005% distortion. The Levinson JC-1 and JC-2 and many other designs were built with just the Heathkit IM analyzer, but in 1974 I got my own IMA to use.
Given what was known at the time, this was just about all that was needed in audio distortion testing, with the exception of analog tape recording. You could not easily use the IMA with tape, as you got inconsistent results, due to dropouts and modulation noise. Here an old time or new wave analyzer with a single tone at a time was best.
Still, who needed anything more? IM was shown to be related to harmonic distortion, and usually IM was the most sensitive test, so why worry? Also, there was a 'distortion output' jack to which you could attach a scope, a wave analyzer, or a spectrum analyzer (if you could afford one at the time) to further analyze the distortion components. Usually, just looking at them on a scope from the 'distortion output' was a clear indication of the order of the distortion and how much higher order garbage was being generated. Also, it was easy to separate 2nd from 3rd, even with just a scope, and to balance the 2nd out, if you wished.
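
For readers who have never used one of these instruments, here is a bare-bones Python sketch of an SMPTE-style two-tone IM measurement of the general kind described above (60 Hz and 7 kHz mixed 4:1, IM read as sidebands around 7 kHz); whether the Crown IMA used exactly these frequencies internally is my assumption, and the 'device under test' here is just a toy polynomial nonlinearity:

Code:
import numpy as np

fs = 192_000
t = np.arange(fs) / fs                         # 1 second -> 1 Hz FFT bins
x = 0.8 * np.sin(2 * np.pi * 60 * t) + 0.2 * np.sin(2 * np.pi * 7000 * t)

y = x + 0.002 * x**2 + 0.001 * x**3            # toy nonlinear device under test

spec = np.abs(np.fft.rfft(y * np.hanning(len(y)))) / len(y)

def bin_amp(f_hz):
    """Amplitude at an exact-integer frequency (1 Hz bins)."""
    return spec[int(round(f_hz))]

carrier = bin_amp(7000)
sidebands = sum(bin_amp(7000 + d) + bin_amp(7000 - d) for d in (60, 120))
print("SMPTE IM: %.4f %%" % (100 * sidebands / carrier))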
For most practical purposes, this appeared to be all that we needed, and we were satisfied. Today, that is not the case. More later.
 
However, a degree of mysticism had appeared to 'creep' into audio design. Back in 1966, an article in the IEEE audio transactions by Daugherty and Greiner had implied that open loop bandwidth was significant to get minimum distortion contribution from the input stage. Then, in 1970, some guy from Finland published a similar article in the IEEE audio transactions, citing the earlier text but going further, pointing to a potential distortion mechanism. These were real 'head scratchers' that meant little in the world of high-feedback IC design, which the latest test equipment was also composed of, by the way. About this time another guy pointed out to me that what we were measuring with was built with the same components that we were trying to find distortion in. In other words, IF we used the same or similar IC to measure an IC, is it any wonder that we might miss something really wrong with the IC under test? The test equipment would produce the same 'something wrong', but we just were not set up to detect it. Another head scratcher. Well, we muddled on, but some of us, sometimes in desperation to make a good sounding design, tried a few ideas learned from these earlier inputs. More later.
 
please use your ears

John, how I wish you used your ears instead of fancy measurement equipment.
Stradivarius, various clavichord builders, and wind organ builders, even in modern times making copies of ancient instruments, do not use measurement equipment...
Only the end result matters: the sound, not the specs!
 
Late in 1972, I was put back into consulting with the Grateful Dead, after working on the sound track of a rock film for about a year. Here I had two main tasks: first, to measure the existing speakers available in order to build a super sound system; second, to make electronics, especially the microphone and preamp electronics, that would satisfy the GD, who had personally found that only tubes seemed to do the job. In 1971, I had worked on the design of an IC based board made of sophisticated ICs for the time, and it had been rejected by the GD, and they went back to tubes (open loop, by the way).
In desperation, I thought that maybe, just maybe, the best discrete design with feedback, BUT with an open loop bandwidth of about 20 kHz, might be something reasonable to try. I used a complementary differential jfet input stage with a bipolar transistor second stage, and NO output follower or buffer stage. This is what separates it from a good op amp with modest open loop bandwidth: it turns it into a transconductance or TA amp and moves its open loop bandwidth to about 20 kHz. Of course the second stage had to be 'beefed up' to about 40-50 mA idling current, in order to be linear enough to drive even a 600 ohm load, if necessary. This type of design had the added feature of NOT RINGING very easily or oscillating with difficult loads.
This design had to have ACCEPTABLE IM, but not 0.001% or so. Still, it was difficult to just 'throw together', because the feedback was so limited. A great deal of fussing with different output devices was necessary, and operating them near their peak BETA was mandatory. I found that a certain pair of complementary TO-5, 2W RCA driver transistors had the lowest distortion. This is where simple Spice models fall down, especially in those early days, 37 years ago, and it was (and maybe still is) best to 'cut and try'.
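
As a back-of-the-envelope illustration of just how limited that feedback is, here is a tiny Python sketch of the loop-gain bookkeeping for a single-pole stage with a 20 kHz open loop corner; the DC gain and closed-loop gain figures are hypothetical, not taken from the actual module:

Code:
import math

a0 = 300.0     # hypothetical open loop DC gain (about 50 dB)
f_ol = 20e3    # open loop -3 dB corner, Hz
g_cl = 10.0    # hypothetical closed loop gain

def open_loop_gain(f_hz):
    """Single-pole open loop gain magnitude at f_hz."""
    return a0 / math.sqrt(1.0 + (f_hz / f_ol) ** 2)

for f in (1e3, 10e3, 20e3):
    fb_db = 20 * math.log10(open_loop_gain(f) / g_cl)
    print(f"{f / 1e3:4.0f} kHz: about {fb_db:4.1f} dB of feedback")

# note: with the open loop corner at the top of the audio band, the feedback
# is nearly constant across it (here ~29 dB at 1 kHz vs ~26 dB at 20 kHz),
# rather than falling steadily the way it does in a dominant-pole op amp.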
Now, ears are not YET getting into this discussion. Just another design technique, hoping to avoid some mysterious distortion factors not yet MEASURED with our IM test equipment. More later.
 
In the summer of 1973, I came up with the Grateful Dead module that could be configured much like a typical op amp, running on +/- 24V supplies and drawing about 60 mA per module. I happened to meet up with Mark Levinson, and we contracted him to make the modules for us, using a conventional hybrid IC pin-out. These modules 'worked' and passed the listening tests the GD subjected them to. At the same time, Mark Levinson became interested in making a 'simple' preamplifier using these modules, reduced down to +/-15V. This became the JC-2 line stage.
Later, in 1974, my associates and I made a 10W tweeter amp that passed our listening tests and proved to have a slew rate of about 100 V/µs, at a time when most amps were much slower, except for the Lohstroh-Otala design and perhaps the Electro-Research. This later became the JC-3.
The breakthrough that put these products on the map was their high open loop bandwidth and high slew rate. They were also class A, with complementary FET inputs, etc., which made them very linear. This was necessary because, as I said before, the amount of negative feedback was limited. This led to more research on TIM, which we all thought was the 'hidden' distortion that the IMA could not detect. That turned out to be partially true; it wasn't the whole story, but it did revolutionize the audio industry at the time.
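
To put the 100 V/µs figure in perspective, the textbook slew-rate requirement for a full-level sinusoid is SR = 2*pi*f*Vpeak; the output levels in this little Python sketch are my own illustrative choices:

Code:
import math

def required_slew_v_per_us(f_hz, v_peak):
    """Minimum slew rate, in V/us, to reproduce a sinusoid of f_hz at v_peak."""
    return 2 * math.pi * f_hz * v_peak / 1e6

for watts, v_peak in ((10, 12.6), (100, 40.0)):   # into 8 ohms, peak volts
    print("%3d W / 8 ohm at 20 kHz needs about %.1f V/us"
          % (watts, required_slew_v_per_us(20e3, v_peak)))

Even a 100 W/8 ohm amplifier only needs about 5 V/µs for a full-power 20 kHz sine, so 100 V/µs represents a very large margin over the steady-state requirement.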
More later.
 
In 1976, after being satisfied that high slew rate, and probably high open loop bandwidth was necessary, I was invited to Finland for 1 month to work on a paper on TIM distortion.
This gave me access to MUCH more test equipment, and paid techs to do the tests that I could think up. Matti Otala and his associates had already tried a variety of test procedures, including IM, harmonic, noise loading, etc., and found that a sine wave tone added to, but not harmonically related to, a square wave was a good choice to bring out the effects of TIM, better than normal IM or the standard harmonic distortion tests at 1 kHz or 10 kHz that were standard at the time. The square wave was to be rise-time limited, so there was TIM 30, which had an approximately 10 µs rise time, and TIM 100, which had a 3.5 µs rise time. We then proceeded to make hundreds of tests with different op amps and power amps, noting the rise in TIM with peak-to-peak output level. More later.
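
For anyone who wants to see what that composite signal looks like, here is a short Python sketch that generates it; the 3.15 kHz square wave, 15 kHz sine and 4:1 ratio are taken from the later standardized DIM test and used here as reasonable stand-ins, with a first-order band limit setting the rise time (a 30 kHz corner gives roughly the 10 µs rise of TIM 30, 100 kHz roughly the 3.5 µs of TIM 100):

Code:
import numpy as np
from scipy import signal

fs = 1_000_000                           # 1 MHz sample rate
t = np.arange(int(0.01 * fs)) / fs       # 10 ms of test signal

square = signal.square(2 * np.pi * 3150 * t)              # 3.15 kHz square wave
b, a = signal.butter(1, 30e3 / (fs / 2), btype='low')     # 1st-order 30 kHz limit
square_limited = signal.lfilter(b, a, square)             # sets the ~10 us rise time

sine = 0.25 * np.sin(2 * np.pi * 15e3 * t)                # 15 kHz tone, 4:1 ratio
tim_signal = square_limited + sine

print("composite peak-to-peak: %.2f" % (tim_signal.max() - tim_signal.min()))

Roughly speaking, the device under test's output is then examined for new spectral components that are neither harmonics of the square wave nor the 15 kHz tone, and their level relative to the tone is the TIM figure.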
 