Reading the work of some of the contributors here, I realise that I will never know a fraction of what they do, and I am in awe of the way they turn a few basic components into a high-powered amplifier with low distortion.
But as an ignoramus in these matters, a subject that fascinates me is the idea of 'automatic' design: using computers to compensate for my ignorance, and/or for arbitrary real-world imperfections that the conventional maths doesn't allow for.
Since the moment I designed a digital filter with my own cobbled-together 'simulated evolution' algorithm (something I didn't have the knowledge or intelligence to do by conventional means), I have been hooked on the idea of using the incredibly fast but dumb power of a computer to home in on solutions to real-world problems. Could it be applied to DIY audio? I haven't been able to find any examples in the diyaudio archives, though that doesn't mean they're not there.
Two possible examples:
(1) I read that designing a realistic speaker dummy load, in hardware or SPICE, is "very difficult". This doesn't surprise me, as it involves a crossover filter followed by multiple drive units interacting with the air in the cabinet and the outside world, and each drive unit has its own mass, resonances etc. Personally, I wouldn't even know where to start simulating such a load. I imagine it might include some inductors and resistors. Capacitors? Dunno. Could it be achieved entirely with simple linear components in a passive network? Dunno.

What I do have at my disposal is SPICE, where I can test a configuration of components with any signal I choose and see what I get, without building it. I can generate a circuit automatically simply by writing out a text file, run SPICE as a command-line call, and read the results back from another text file, so I don't need any special tools or skills beyond my favourite compiler and a free download of standard SPICE.

But how do I know what I want? I suggest that an accurate-ish speaker simulator could be 'evolved'. Start with a set of real measurements: feed known signals into the real speaker I'm interested in, via a known small impedance, at a full spread of amplitudes, frequencies etc., and measure the voltage across that impedance (or some other method?). Then, with a small amount of a priori knowledge, a basic SPICE circuit could be assembled and exhaustively tested with every permutation of component values to find the one that fits the real response most closely, simply by chugging through a set of nested loops and calculating the overall error by some measure.

Much more interestingly, without any previous knowledge (i.e. in my case!), the software could populate an arbitrary circuit 'matrix' with a mix of inductors, capacitors and resistors, say, and measure the results. It might take longer than the age of the universe to find an acceptable solution exhaustively, but this is where 'simulated evolution', or whatever you care to call it, comes in. It could work like this: start randomly, but every time a solution is found that beats the previous best in terms of accuracy, stick with it for a while, applying only small random variations. It could be stuck in a 'local minimum', however, so gradually ramp up the size of the variations in the hope of jumping out of it. Eventually start making large changes to the circuit configuration itself, the equivalent of 'mutations' in nature. Leave it running for as long as you've got (I've sketched roughly what this might look like in code, after the list below). The system should eventually home in on a good solution, with the following caveats:
(a) The 'training data' must cover the 'problem space' adequately.
(b) At whatever point you stop it, the solution may contain unnecessary complexity, i.e. redundant components that contribute nothing, or networks that could be reduced to a single 'lumped' component. Maybe this simplification could be done automatically, or manually as a separate stage. Unless you're making a million of these things, maybe it's not worth worrying about.
(c) The solution may be too finely tuned to the training data. To counter this, some of the measurements could be withheld as 'test data' so that the solution's robustness to signals it hasn't seen before can be checked. Also, the simpler the circuit, the less likely this is to happen, so the 'scoring' mechanism used to evaluate results might include a penalty for complexity – it's something of a black art, but that's what makes it fun.
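To make this concrete, here is roughly what the harness might look like. I've sketched it in Python purely because it's quick to read; the same thing would work from any language that can write a text file and shell out to a program. I'm assuming ngspice as the free command-line SPICE, and everything else – the file names, the four-node pool, the sweep limits – is an illustrative guess, not a recipe. The netlist drives the candidate network with a 1 A AC current source, so the voltage at the driven node is numerically the impedance. (An AC sweep only captures small-signal behaviour, but that's all a passive linear RLC network can exhibit anyway.)

```python
# SPICE-in-the-loop scoring harness (sketch, assuming ngspice on the PATH).
# measured_f / measured_z are assumed to hold the frequency points and
# impedance magnitudes measured on the real speaker, as described above.
import subprocess
import numpy as np

def write_netlist(path, components):
    """components: list of (kind, node1, node2, value) tuples,
    e.g. ('R', 1, 0, 6.8). Node 1 is the driven terminal, node 0 is ground."""
    lines = ["* candidate speaker dummy load",
             "I1 0 1 DC 0 AC 1"]              # 1 A AC in: V(1) = Z in ohms
    for i, (kind, n1, n2, val) in enumerate(components):
        lines.append(f"{kind}{i} {n1} {n2} {val:g}")
    lines += [".control",
              "ac dec 50 10 40k",             # 10 Hz to 40 kHz sweep
              "let zmag = vm(1)",
              "wrdata zout.txt zmag",         # two columns: freq, |Z|
              "quit",
              ".endc",
              ".end"]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

def score(components, measured_f, measured_z):
    """Mean-squared error between simulated and measured |Z|.
    Degenerate circuits that ngspice can't solve score infinitely badly."""
    write_netlist("candidate.cir", components)
    try:
        subprocess.run(["ngspice", "-b", "candidate.cir"],
                       check=True, capture_output=True, timeout=10)
        data = np.loadtxt("zout.txt")
    except Exception:
        return float("inf")
    sim = np.interp(measured_f, data[:, 0], data[:, 1])
    return float(np.mean((sim - measured_z) ** 2))
```

And the search loop itself – closer to a crude simulated annealing run in reverse than to a textbook genetic algorithm, but it follows the description above: keep the best circuit so far, apply small random value changes, ramp up the variation size when progress stalls, and occasionally 'mutate' the structure by adding or removing a component. The complexity penalty is the anti-overfitting knob from caveat (c); its weight here is one of those black-art numbers you'd tune by eye.

```python
# Crude 'simulated evolution' over the scoring harness above (sketch).
import random

def random_component():
    kind = random.choice("RLC")
    value = 10 ** random.uniform(-9, 3)       # deliberately absurd value range
    n1, n2 = random.sample(range(4), 2)       # nodes 0..3; 1 is the driven one
    return (kind, n1, n2, value)

def mutate(components, scale):
    new = list(components)
    if random.random() < 0.05:                # occasional structural mutation
        if len(new) > 2 and random.random() < 0.5:
            new.pop(random.randrange(len(new)))
        else:
            new.append(random_component())
    else:                                     # usually just nudge one value
        i = random.randrange(len(new))
        kind, n1, n2, val = new[i]
        new[i] = (kind, n1, n2, val * 10 ** random.gauss(0, scale))
    return new

def evolve(measured_f, measured_z, iters=20000, penalty=0.01):
    best = [random_component() for _ in range(5)]
    best_err = score(best, measured_f, measured_z) + penalty * len(best)
    scale, stale = 0.1, 0
    for _ in range(iters):
        cand = mutate(best, scale)
        err = score(cand, measured_f, measured_z) + penalty * len(cand)
        if err < best_err:                    # keep any improvement
            best, best_err = cand, err
            scale, stale = 0.1, 0
        else:
            stale += 1
            if stale > 200:                   # stuck: ramp up the variations
                scale, stale = min(scale * 2, 3.0), 0
    return best, best_err
```

Apart from ngspice being on the PATH, evolve() is self-contained: hand it the measured frequency and impedance arrays from the real speaker and leave it running for as long as you've got.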
(2) Ultra-simple amplifiers may have higher distortion than more complex ones, but perhaps they have other advantages – like they glow in the dark. 24-bit DACs are available for pennies. Could the audio source signal be pre-distorted to compensate for an amplifier's shortcomings? Could this be a static correction, or would it have to be adaptive to compensate for thermal effects, ageing etc.? Could the correction be a simple lookup table, or would previous (or even future) samples have an effect, e.g. compensating for a dip in PSU voltage caused by a recent transient? Something like an FIR filter could be trained to do this pre-distortion to any level of complexity, and could adapt over time if the amplifier's output were fed back. It could be an interesting blend of digital high tech and 1920s electronics.
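For what it's worth, the very simplest version of this – the static lookup table – is almost trivial to sketch (Python again, same caveats). Measure the amplifier's transfer curve once, then interpolate backwards through it to find the input that produces the output you actually wanted. The tanh 'amplifier' below is just a stand-in for a real measurement, and a static table only corrects memoryless distortion; supply sag, thermal drift and other history effects are exactly where the adaptive, FIR-style correction would have to take over.

```python
# Static lookup-table pre-distortion (sketch; the tanh amp is a stand-in).
import numpy as np

def measure_transfer_curve(amp, levels):
    """Drive the amp over a sweep of input levels, record what comes out."""
    return np.array([amp(v) for v in levels])

def predistort(signal, levels, outputs, gain=1.0):
    """Look up the input level that produces the *desired* output level."""
    desired = gain * signal
    # np.interp needs a monotonic x-axis, so interpolate output -> input
    return np.interp(desired, outputs, levels)

# Toy demonstration with a soft-clipping stand-in 'amplifier'
amp = lambda v: np.tanh(1.5 * v)              # compresses the peaks
levels = np.linspace(-1.0, 1.0, 1024)         # calibration sweep
outputs = measure_transfer_curve(amp, levels)

x = 0.8 * np.sin(np.linspace(0.0, 2.0 * np.pi, 256))    # test tone
print("peak error, raw:      ", np.max(np.abs(amp(x) - x)))
print("peak error, corrected:", np.max(np.abs(amp(predistort(x, levels, outputs)) - x)))
```

In real life the table would be built from measurements at the DAC's resolution, and the interesting question is how quickly it goes stale as the amp warms up – which is where feeding the output back and re-training would come in.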
Does anybody else agree with me that this sort of thing could be fascinating, and that it might allow those of us without years of design experience and razor-sharp brains to do something useful?