Reducing gain of audio output stage

I didn't say that gain needs to be reduced in the player. I said in "one of the stages". If his preamplifier has too much input sensitivity (gain) then that's what needs to be fixed, not the player!
That's usually the rule - old preamps, designed for LP-level sources (316mV), will be driven "hot" by CD outputs (1-1.6V).
 
Of course, but James1 wants to keep his line and phono preamps as they are: he just wants to reduce his CD player's output voltage.
If you read my other replies, I've already talked about reducing his line preamp's gain by changing its NFB resistor values, and about increasing his phono section's gain by adding a step-up transformer.
 
10K into 200pF gives an HF rolloff from 80kHz - unlikely to affect CD sound? My suggestion was for 22K series and 10K to ground (source impedance 6.9K), and put it at the preamp end of the cable so avoiding even that inconsequential HF limit. No change to the CD player. No change to the line stage. Trivial change to the wiring between them; you might squeeze the resistors into the RCA plug.
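
For anyone who wants to check the figures, here is a quick back-of-envelope check (a Python sketch, assuming the 22K series / 10K shunt values above and a nominal 200pF of cable capacitance):

import math

r_series = 22e3    # series resistor (ohms)
r_shunt = 10e3     # shunt resistor to ground (ohms)
c_cable = 200e-12  # assumed cable capacitance (farads)

z_source = (r_series * r_shunt) / (r_series + r_shunt)  # impedance seen by the cable
f_corner = 1 / (2 * math.pi * z_source * c_cable)       # first-order -3dB point

print(f"source impedance: {z_source / 1e3:.1f} kohm")   # about 6.9 kohm
print(f"-3dB corner: {f_corner / 1e3:.0f} kHz")          # about 116 kHz, well above 80 kHz

And with the resistors at the preamp end of the cable, as suggested, the cable is driven from the CD player's own low output impedance, so even that corner largely disappears.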

Wild ramblings about this disturbing op-amp DC offsets show ignorance: either in the person making the claims, or in the person who designed the equipment with the alleged DC volume controls. The CD player output will already be DC-referenced to ground; my attenuator merely changes the DC impedance a little (it probably reduces it). Any competently-designed line stage will not be sensitive to the exact value of the DC resistance to ground at its input - in fact it should not notice DC at all.

Talk about accelerating and braking at the same time merely confirms what I suggested as the correct long-term solution: ditch the line stage, as CD does not need it. Adding a line stage to amplify a signal and then adding attenuation to reduce it again is daft - whether you do it between items of equipment (as I suggest) or within the equipment (as others suggest). There are only two possible explanations: fashion, or a preference for added noise and distortion.
 
10K into 200pF gives an HF rolloff from 80kHz - unlikely to affect CD sound? My suggestion was for 22K series and 10K to ground (source impedance 6.9K)

And mine was 10k to 2k, for a source impedance of 1k7, giving a rolloff at half a megahertz! A few dB more attenuation! Devastating to the sound! But I suppose that if one can't be bothered to calculate orders of magnitude and would rather spin tales, construct inapt analogies, and use lots of exclamation marks, that's a better way of answering simple questions!

:D

(There is at least one other person as cranky as you)
 
But your -15.563dB is bound to reduce the musicality more than my modest -10.103dB. Unless, of course, the line stage has a low and non-linear input impedance in which case yours will be better.
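
For the record, the two attenuation figures are easy to verify (a sketch assuming unloaded dividers):

import math

def divider_db(r_series, r_shunt):
    # Unloaded voltage-divider attenuation in dB
    return 20 * math.log10(r_shunt / (r_series + r_shunt))

print(f"22k series, 10k to ground: {divider_db(22e3, 10e3):.3f} dB")  # -10.103 dB
print(f"10k series, 2k to ground:  {divider_db(10e3, 2e3):.3f} dB")   # -15.563 dB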

Perhaps we should now discuss the type of resistors to use. Of course, the one through which the signal flows must be much better than the one which merely connects to ground.
 
Hi Guys,
Hi-Fi is not a question of frequency roll-off! It's a question of instantaneous energy transfer, which belongs to the TIME domain. When you calculate your 80 kHz (or lower) roll-off frequency, you forget:
-that most loudspeakers' highest cut-off frequency is 25 kHz or less
-that even if (which is not certain at all) James1's preamp and amp have bandwidths greater than 100 kHz, all the elements in his Hi-Fi chain are connected in series via interconnect cables from sources to loads
-that CD players will never deliver more than 20 kHz
So, under these conditions, how do you explain big sonic differences (even in the lower frequency range) between two identical amplifying stages (preamps or amps) where only the NFB resistor values have been changed, giving only different gains and bandwidths? Normally, with 100 kHz or more of bandwidth, we're far beyond the maximum 20 kHz, but the differences are there and they are big!
Have you ever tried super tweeters with bandwidths above 100 kHz (like the PT-R100) with your digital sources? It's as if the whole system, in every frequency range, has been replaced, even though the maximum frequency is still 20 kHz!
Bringing everything back to the frequency domain, as you do with your divider networks, reminds me of audiophiles who have been searching for music for half a century without success!
The best thing would be to ask James1 to try both of your suggestions and let him be the final judge of the sonic results, that's all.
Have fun!
 
So, under these conditions, how do you explain big sonic differences (even in the lower frequency range) between two identical amplifying stages (preamps or amps) where only the NFB resistor values have been changed, giving only different gains and bandwidths?

Changing the feedback factor alters a lot more than the gain and bandwidth. It alters the whole character of any given design, modifying the distortion spectrum for one thing. If you think some mod or other will sound better, then that is half the battle won: you believe it will, so therefore it does :)

I think all this will have frightened James off...
 
I didn't forget anything. On the contrary, my point was that an 80kHz roll off is unlikely to damage a sound source already hard limited to 20kHz, heard through speakers with little output above, say, 25kHz, using ears with little sensitivity above 15-20kHz (depending on age).

Waffle about "instantaneous" energy transfer doesn't cut much ice with me. It is amazing how many of these "big" changes seem to vanish like blue smoke when the listeners are not told what, if any, changes have actually been made to the equipment.

I suspect you may have forgotten about the placebo effect. You may even trust your own ears! Unwise - ears and brains are too easily fooled. Mooly makes a good point too.
 
We're talking about op-amps here: just look at the datasheets of a few well-known parts and you'll notice that the manufacturers' application hints show different NFB resistor values: generally Rf is fixed and Rg varies with the desired gain. The bandwidth also varies, as indeed does the harmonic distortion, but only by a small amount, which is nothing compared to loudspeakers' harmonic distortion, which generally runs from 5 to 20%! Again, it's a question of proportion.
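
For readers following along, the gain/bandwidth trade-off being described looks roughly like this (a sketch for a non-inverting op-amp stage; the 1 MHz gain-bandwidth product and the resistor values are illustrative assumptions, not figures from this thread):

GBW = 1e6  # assumed gain-bandwidth product in Hz (illustrative only)

def noninverting_stage(r_f, r_g):
    gain = 1 + r_f / r_g    # closed-loop voltage gain
    bandwidth = GBW / gain  # approximate -3dB bandwidth for a single-pole op-amp
    return gain, bandwidth

for r_g in (10e3, 4.7e3, 1e3):  # Rf held fixed, Rg varied to set the gain
    g, bw = noninverting_stage(r_f=10e3, r_g=r_g)
    print(f"Rg = {r_g / 1e3:g}k: gain = {g:.1f}x, bandwidth ~ {bw / 1e3:.0f} kHz")
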
Your ear is much more sensitive to intermodulation distortion (IMD) than to total harmonic distortion (THD).
By the way, in James1's case, he has to decrease the gain so he'll reduce THD.
Anyhow, I don't think he is afraid, as you wrote; he's just comparing the different posts to choose a solution: after all, he'll certainly test the resistive network and listen to the difference.
 
I didn't forget anything. On the contrary, my point was that an 80kHz roll off is unlikely to damage a sound source already hard limited to 20kHz, heard through speakers with little output above, say, 25kHz, using ears with little sensitivity above 15-20kHz (depending on age).

Waffle about "instantaneous" energy transfer doesn't cut much ice with me. It is amazing how many of these "big" changes seem to vanish like blue smoke when the listeners are not told what, if any, changes have actually been made to the equipment.

I suspect you may have forgotten about the placebo effect. You may even trust your own ears! Unwise - ears and brains are too easily fooled. Mooly makes a good point too.

You forgot that increasing the source impedance (to, say, an equivalent 7 kOhms) with a 200pF capacitive cable leads to a 1.4 µs time constant, and as you know a capacitor needs 5xRC to be fully charged and discharged: so, with your divider, each signal pulse needs more than 7 µs to appear and disappear completely, leading to a smeared signal with phase lag, because an audio signal is not a single static sine wave but a complex, dynamic combination of pulses and sine waves with harmonics, all present at the same time. This well-known phenomenon is called the "memory effect" and it's comparable to jitter in the digital domain.
Concerning the "placebo effect", that's what young listeners who "enjoy" MP3 say about Hi-Fi!
Instantaneous energy transfer doesn't cut much ice with you, but when this complex signal is feeding your power amp, its output voltage has to drive a speaker's low impedance, which demands a large amount of instantaneous current in a short time: this energy transfer goes from the amp's PSU via the speakers' X-Over to the various voice coils.
You seem to consider each interaction in a limited portion of the signal loop, but every mod will affect the whole audio system's result: it's a chain reaction!
 
A simple 10k:2k resistive divider will cause none of those issues and will be satisfactory for driving a reasonable (2 m) length of cable as well as the preamp's input Z. Cost: about 5 cents. Performance: essentially perfect. Why make a simple problem horribly complex?

So why do ALL audio equipment designs include low-impedance output stages, assuming their designers are not chimpanzees?
Every voltage source must have a low impedance in order to correctly drive a load whose impedance must be higher: that's a basic rule if you've ever studied electronics. An output stage is nothing other than a voltage regulator with a variable (AC) input, and a voltage regulator must always have a LOW output impedance. Have you ever heard of damping factor? ;)
 
a capacitor needs 5xRC to be fully charged and discharged
Not true, although this myth is often trotted out by people who don't understand maths. It is an exponential decay process, so it never ends! That means, on your basis, that any low pass filter of any kind, anywhere, would smear the music. This is clearly nonsense - which is what you get when you erect a superstructure on false foundations. Also, this is not a "memory effect" but an example of energy storage. Dielectric absorption or thermal funnies in BJTs are memory effects.
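
To put numbers on it, using the roughly 6.9k source impedance and 200pF cable discussed earlier (RC of about 1.4 µs), the step response creeps toward its final value and never formally "finishes" (a quick sketch):

import math

R = 6.9e3    # source impedance of the divider discussed earlier (ohms)
C = 200e-12  # cable capacitance (farads)
tau = R * C  # time constant, about 1.4 microseconds

for n in range(1, 6):
    settled = 1 - math.exp(-n)  # fraction of the final value after n time constants
    print(f"t = {n} x RC = {n * tau * 1e6:.1f} us: {settled * 100:.2f}% settled")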

As others have said, this has nothing whatsoever to do with jitter. Jitter is random. CR response is completely reproducible and linear (given good capacitors). Turning your head a few degrees causes more time lag and response variation than an 80kHz filter!

A pure voltage source, by definition, has zero output impedance. A pure current source, by definition, has infinite output impedance. So what? All that is required to get a signal reliably from here to there is that both output and input have linear impedances. A low output impedance helps cope with cable capacitance, and input impedance non-linearities. A high input impedance helps cope with output impedance non-linearities. There is certainly no requirement as a basic rule that low Z must feed high Z - that is just defensive design for voltage drive in case your partnering equipment is poorly designed. The European DIN standard used current drive - high Z to low Z! Damping factor is completely irrelevant here, as that relates to electrical damping of mechanical resonances in a loudspeaker.
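
As a rough illustration of the "linear impedances are all that matter" point, here is the 10k:2k divider loaded by a couple of plausible preamp input impedances (the 47k and 100k figures are assumptions for illustration, not values from this thread):

import math

def loaded_divider_db(r_series, r_shunt, r_load=None):
    # Divider attenuation in dB; r_load models the preamp input impedance (None = unloaded)
    r_bottom = r_shunt if r_load is None else (r_shunt * r_load) / (r_shunt + r_load)
    return 20 * math.log10(r_bottom / (r_series + r_bottom))

print(f"unloaded:        {loaded_divider_db(10e3, 2e3):.2f} dB")         # -15.56 dB
print(f"into 100k input: {loaded_divider_db(10e3, 2e3, 100e3):.2f} dB")  # -15.71 dB
print(f"into 47k input:  {loaded_divider_db(10e3, 2e3, 47e3):.2f} dB")   # -15.87 dB

A fraction of a dB of level shift, and no frequency dependence, provided the load impedance is linear.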

I suggest you study electronics a bit more, as you seem not to have understood what you have studied thus far.
 