A precision LED/LDR-based Attenuator

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Wapo, I totally agree. I purchased a bunch of LDRs a few years back, after reading about the Lightspeed on the DIY Audio Projects site, and have been playing with them on and off ever since. The minute I 'heard' them I was sold, even though the pots I could build would only match at a few points on their sweeps.

I soon realised that some finer 'feedback' control of current was needed, but I had no idea how that could be done. I congratulate you, and I cannot wait to try your elegant solution.

I must admit, and this is probably not the thread to discuss it, but I have no idea why LDRs make such an 'improvement' to the sound. They seem to take away layers of fog that I am not sure analog pots actually add. LDRs seem to manipulate the sound in a very favourable manner. Removing an analog pot from an 'integrated power' amp and running it from a D/A converter with built-in preamp and volume control does not give anywhere near the same improvement, or signature to the sound, as adding an LDR-based pot into the system. Black magic?

I read that people are now using LDRs as replacements for other resistors elsewhere in audio circuits. I do know that a voltage-regulator-based CCS makes a big difference to the sound of valve amps when it replaces the cathode resistor on output tubes. A favourable difference to my ears, anyway, and quite similar in sound to adding an LDR to the system. Maybe it has something to do with the sound of 'chip-based' resistors versus through-hole or SMD fixed resistors?

Only 60 more sleeps, so keep up the great work!
 
LOL! I think it has arrived at a good place. I would not want a soldering beginner to attempt it, but it would be straightforward for an experienced audio amateur.

I reduced the board area by more than 15% by removing one component that my experience and James' had shown to be redundant (frequent calibration turned out to be unnecessary), and that one change allowed eliminating quite a lot of additional blank space.

Actually, this fits in with my design objectives -- I have never been primarily interested in achieving 0.1 dB precision in the log curve. My goals have always been superb sound, great operator convenience, small size, simple circuitry, low cost and, finally, tracking and log-curve accuracies so good that any errors are inaudible. In other words, all the real-world qualities that really matter.

I have no proof one way or the other, but I'm beginning to wonder whether it will make any audible difference if this board is in a room-temperature box or in a 100 degree F amplifier. I don't think the shape of the curve will change even if the whole curve shifts somewhat one way or the other, so I don't think the shift will be noticeable.

James had previously questioned the feature that I removed, and at least from a theoretical perspective he'll be pleased to see that it's gone. I needed the evidence we acquired by actually burning a few in and playing them for many hours on end to convince me. I don't think the difference will be audible, it just makes the board smaller.
 
Possibly a large temperature difference will show up as a mismatch between the channels towards the extremes of very loud or very soft. I don't think a difference of 1 dB would be particularly noticeable at those volumes, despite possibly significant variations in device impedances. Potting the four LDRs together in a single heat-transfer compound (beeswax, etc.) will keep the devices at a more constant temperature relative to each other, but still won't totally avoid individual higher/lower temperature variations.

I've had this unit operating from a miserable 18 °C up to a scorching 38 °C (64 to 100 °F) and it hasn't shown any variations in the sound (same with the other Lightspeed devices), yet the individual LDR devices, when tested, were quite sensitive to temperature. Is it serendipity, or something else?

I had another stray thought (I heard that groan!): could you perhaps add/extend the pins of the in-line 6-way header so it is possible to connect these points directly to the input and output phono sockets (three wires per channel), while still retaining the self-calibration facility (with nothing else plugged in, naturally) and avoiding the risk of overheating the LDRs on the removable header PCB? I'm not sure it'll make any difference to the sound (although maybe for headamps ...), but it does simplify the signal path, and it would be an easy option to include.
 
I don't think it's serendipity -- I think the bottom line is that four LDRs co-located on a board are close enough, temperature-wise, to change consistently given a similar current level. Since both the L & R series LDRs are at similar current and resistance, and the shunt LDRs are likewise, any change in LDR resistance will be closely matched across channels, leaving channel balance unaffected and volume affected only negligibly.
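That cancellation can be sketched numerically. Assuming a simple series/shunt divider feeding a high-impedance load (the resistor values below are illustrative, not measured from these boards), a uniform temperature drift in all the LDRs leaves the divider ratio, and hence the attenuation, unchanged:

```python
import math

def atten_db(r_series, r_shunt):
    """Attenuation (dB) of a series/shunt resistive divider into a high-Z load."""
    return 20 * math.log10(r_shunt / (r_series + r_shunt))

# Hypothetical nominal settings, in ohms
nominal = atten_db(10_000, 1_000)

# Suppose temperature shifts every LDR's resistance by the same +5%:
drifted = atten_db(10_000 * 1.05, 1_000 * 1.05)

# The ratio is unchanged, so attenuation (and channel balance) is preserved
print(round(nominal, 2), round(drifted, 2))
```

Only a *differential* drift between devices would upset balance, which is why co-locating the four LDRs matters.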

The way to measure the temperature effect in a real-world situation would be to first match the resistances of two LDRs at one temperature by adjusting their currents, and then change the temperature of both simultaneously to see whether one changes resistance in a significantly different way than the other. With balance centered, my system shows both shunt LDR currents tracking within 0.1 mA of each other across the entire resistance range until they exceed 5.0 mA, where they begin to vary by up to 0.4 mA.
 
@ammel68, I was only describing the soldering skills to indirectly describe the physical nature of the boards. There will be no soldering involved at all for the end user, because the boards will be professionally assembled and all connections to off-board components will be through screw terminals. All you'll need is a fine-tipped flat-blade screwdriver, which you can buy at practically any hardware store.
 
Yesterday I was listening to my system in the usual way, using a laptop at my listening position to remotely control the laptop connected to my stereo, which was running Foobar to source the audio going through the outboard DAC, then the LDRs, then the power amp and speakers. I was using a separate IR remote control to adjust the LDR volume control.

So, I had this "brilliant" thought -- if the LDR control is in the system and making the sound lovely, why don't I just set it to a fairly high level and then simply use the volume control in Foobar to actually adjust the level via the laptop? I could do away with the IR remote control entirely and just use the laptop to control everything.

So, I tried it -- set Foobar to minimum, set the LDR control very high, and then ran the Foobar control up to normal listening level. I almost instantly knew that there was something wrong -- the transparency was gone, the music sounded somehow "dead." So, I went back to full volume on Foobar and attenuation via the LDRs, and the life returned to the music. At the time, I didn't realize the implications.

So, I woke up early this AM thinking about this, realizing that attenuation in the digital realm seems to have seriously affected the music in a way that attenuation in the analog realm does not. As I thought about it, I remembered reading somewhere that when you reduce volume in the digital realm, you reduce the number of bits available to characterize the sound and you lose detail. I'm guessing that's what happened to me.

So, can I conclude that the LDR not only provides a certain clarity, but it also allows you to avoid the destructive act of attenuating the sound digitally? I think I've got that right.

I know that James has remarked in the past that my control seems equally transparent at all volume levels and at the time I didn't understand what he was talking about or why he would say that. Now, I get it.

If you've got a system where the volume is normally controlled with an LDR and the source is digital with a digital volume control capability built in, you can prove/disprove this for yourself. And if you're using a digital source it appears that you should have your digital volume control set at zero attenuation.

That's the way it appears to me.
 
It's called "bit stripping", wappo. One should try never to use a digital-domain volume control below 75% of its full output, or you run the risk of bit stripping, e.g. instead of getting full 16-bit resolution you'll get 14-bit, then 12-bit, the lower you go.
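A common rule of thumb behind this (my framing, not George's exact figures) is that each halving of digital level, about 6.02 dB, discards roughly one bit of resolution:

```python
import math

def effective_bits(source_bits, atten_db):
    """Rule of thumb: each ~6.02 dB of digital attenuation discards one bit."""
    return max(0.0, source_bits - atten_db / 6.02)

# 75% of full scale is only about -2.5 dB: essentially no resolution lost.
atten_75pct = -20 * math.log10(0.75)
print(round(effective_bits(16, atten_75pct), 1))  # about 15.6 effective bits

# Heavy digital attenuation, e.g. -24 dB, costs roughly 4 bits:
print(round(effective_bits(16, 24.0), 1))
```

By this estimate, running the digital control near full scale and doing the real attenuation in the analog domain (the LDRs) keeps nearly all of the source's resolution.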

But that said, if your gain is right so you can use the digital volume at or above 75%, then your source will sound better direct to the amp(s), and you get rid of another set of interconnects and your LDR VC.

Cheers George
 
Well, I do need a volume control since I run my LDR control at around 12~1 o'clock = 24~20 dB attenuation. I'll just keep using it as I am very happy with the sound that way. :)

Should have realized I want to hit my outboard DAC with the digital signal entirely unchanged . . .
 
Maybe it's time to look at your system gain and try finding ways to preserve performance with a lower gain.

Reducing the power amplifier's gain from a typical 26 to 30 dB down to around 20 dB will generally work very well with most sources.

But you MUST check the stability margins of the power amplifier at the lower gain; some will require big changes to the compensation, and most will require at least some change.
 