Design by 'simulated evolution'

Reading the work of some of the contributors here, I realise that I will never know a fraction of what they do, and I am in awe of the way they turn a few basic components into a high powered amplifier with low distortion.

But as an ignoramus in these matters, a subject that fascinates me is the idea of 'automatic' design using computers to compensate for my ignorance, and/or to compensate for arbitrary real world imperfections that the conventional maths doesn't allow for.

Since the moment I designed a digital filter by my own cobbled-together 'simulated evolution' algorithm (something I didn't have the knowledge or intelligence to do by conventional means), I have been hooked on the idea of using the incredibly fast but dumb power of a computer to home in on the perfect solution to real world problems. Could it be applied to DIY audio? I haven't been able to find any examples of this in the diyaudio archives, however (which doesn't mean they're not there).

Two possible examples:

(1) I read that designing a realistic speaker dummy load, in hardware or SPICE, is "very difficult". This doesn't surprise me, as it involves a crossover filter, followed by multiple drive units interacting with the air in the cabinet and the outside world. Each drive unit has its own mass, resonances etc. Personally, I wouldn't even know where to start simulating such a load. I imagine it might include some inductors and resistors. Capacitors? Dunno. Could it be achieved entirely using simple linear components in a passive system? Dunno.

What I do have at my disposal is SPICE, where I can test a configuration of components with any signal I choose and see what I get, without building it. I can generate a circuit automatically by simply writing out a text file, run SPICE as a command-line 'call', and read the results back from a text file, so I don't even need any special tools or skills beyond my favourite compiler and a free download of standard SPICE.

But how do I know what I want? I suggest that an accurate-ish speaker simulator could be 'evolved'. Start with a number of real measurements: feed known signals into the real speaker I'm interested in, via a known small impedance, at a full spread of amplitudes, frequencies etc., measuring the voltage across the impedance (or some other method?). Then, with a small amount of a priori knowledge, a basic SPICE circuit could be assembled and exhaustively tested with every permutation of component values to find the one that fits the real response most closely, simply by chugging through a number of nested loops and calculating the overall error by some measure.

Much more interestingly, without any previous knowledge (i.e. in my case!), the software could populate an arbitrary circuit 'matrix' with a mix of inductors, capacitors and resistors, say, and measure the results. It might take longer than the age of the universe to find an acceptable solution exhaustively, but this is where 'simulated evolution', or whatever you care to call it, comes in. It could work like this: start randomly, but every time a solution is found that beats the previous best in terms of accuracy, stick with it for a while, applying only small random variations. It could be stuck in a 'local minimum', however, so gradually ramp up the size of the variations in the hope of jumping out of it. Eventually start making large changes to the circuit configuration itself, the equivalent of 'mutations' in nature. Leave it running this algorithm for as long as you've got (a rough sketch in code follows the caveats below). The system must eventually find the optimum solution to the problem, with the following caveats:
(a) the 'training data' must cover the 'problem space' adequately
(b) at whatever point you stop it, the solution may contain unnecessary complexity, i.e. redundant components that contribute nothing, or networks that can be reduced to a single 'lumped' component. Maybe this simplification can be done automatically, or manually as a separate stage. Unless you're making a million of these things, maybe it's not worth worrying about.
(c) The solution may be too finely tuned to the training data. To counter this, some of the training data may be retained as 'test data' so that the solution's robustness to signals it hasn't seen before can be measured. Also, the simpler the circuit is, the less likely this is to happen, so the 'scoring' mechanism used for evaluating results might include a penalty for complexity – it's something of a black art, but that's what makes it fun.
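
Here's the rough sketch I promised, in Python. Everything in it is invented for illustration: the 'measured' target impedances, the fixed toy topology, the mutation sizes and the schedule for ramping them up. To keep it self-contained it computes the candidate's impedance directly rather than shelling out to SPICE - in the real thing, the scoring step is where the netlist would be written out, SPICE run from the command line, and the results read back, exactly as described above.

[code]
import math
import random

# Invented 'training data': measured impedance magnitude (ohms) of the
# real speaker at a handful of spot frequencies (Hz). A real run would
# use a dense sweep taken as described above.
TARGET = [(20.0, 7.5), (48.0, 31.0), (120.0, 6.8),
          (1000.0, 7.2), (10000.0, 15.4)]

def candidate_z(params, f):
    """|Z| of one fixed toy topology: Re in series with Le, in series
    with a parallel R-L-C tank standing in for the bass resonance.
    (The full idea would mutate the topology too; this sketch doesn't.)"""
    re_, le, rp, lp, cp = params
    w = 2.0 * math.pi * f
    tank = 1.0 / (1.0 / rp + 1.0 / complex(0.0, w * lp) + complex(0.0, w * cp))
    return abs(complex(re_, w * le) + tank)

def score(params):
    """Overall error: sum of squared |Z| misses over the training data.
    In the real thing this is where the SPICE run would happen."""
    return sum((candidate_z(params, f) - z) ** 2 for f, z in TARGET)

def evolve(steps=50000):
    best = [6.0, 1e-4, 30.0, 1e-2, 1e-4]      # arbitrary starting guess
    best_s = score(best)
    spread = 0.05                             # relative mutation size
    stale = 0
    for _ in range(steps):
        trial = [max(1e-9, p * (1.0 + random.gauss(0.0, spread))) for p in best]
        s = score(trial)
        if s < best_s:
            best, best_s = trial, s           # keep the improvement...
            spread, stale = 0.05, 0           # ...and go back to small steps
        else:
            stale += 1
            if stale % 1000 == 0:             # stuck: ramp up the variation
                spread = min(spread * 2.0, 1.0)
    return best, best_s

if __name__ == "__main__":
    params, err = evolve()
    print("best Re, Le, Rp, Lp, Cp:", params)
    print("error:", err)
[/code]

The topology here never mutates, only the values do; growing and pruning the circuit 'matrix' itself is the interesting (and much harder) part.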

(2) Ultra-simple amplifiers may have higher distortion than more complex ones, but perhaps they have other advantages - like they glow in the dark. 24-bit DACs are available for pennies. Could the audio source signal be pre-distorted to compensate for an amplifier's shortcomings? Could this be a static correction, or would it have to be adaptive to compensate for thermal effects, ageing etc.? Could the correction be a simple lookup table, or would previous (or even future) samples have an effect, e.g. compensating for a dip in PSU voltage caused by a recent transient? Something like an FIR filter could be trained to do this pre-distortion to any level of complexity, and could adapt over time if the amplifier's output were fed back. It could be an interesting blend of digital high tech and 1920s electronics.
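
To show how little machinery the lookup-table version needs, here's a toy in Python. The 'amplifier' is a made-up gentle compressor standing in for real measurements; in reality you'd build the table by measuring the actual amp, and refresh it occasionally from the fed-back output to track drift:

[code]
import bisect
import math

def amp(x):
    """Stand-in for the 'ultra-simple' amplifier: unity gain at small
    signals but gently compressive - entirely made up for this sketch."""
    return math.tanh(1.2 * x) / 1.2

# Build the lookup table by 'measuring' the amp over its input range.
N = 1024
ins = [-1.0 + 2.0 * i / (N - 1) for i in range(N)]
outs = [amp(x) for x in ins]          # monotonic, so we can search it

def predistort(desired):
    """Static lookup-table correction: find, by interpolation between
    measured points, the input that yields the desired output."""
    j = bisect.bisect_left(outs, desired)
    if j <= 0:
        return ins[0]
    if j >= N:
        return ins[-1]                # beyond what the amp can do: clip
    t = (desired - outs[j - 1]) / (outs[j] - outs[j - 1])
    return ins[j - 1] + t * (ins[j] - ins[j - 1])

# Without correction the amp sags at high level; with it, the chain
# amp(predistort(x)) tracks x almost exactly.
for x in (-0.6, -0.2, 0.0, 0.3, 0.6):
    print(f"want {x:+.3f}  raw {amp(x):+.3f}  corrected {amp(predistort(x)):+.3f}")
[/code]

Handling the PSU-dip case would need memory, though - something FIR-like trained on previous samples, rather than a plain table.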

Does anybody else agree with me that this sort of thing could be fascinating, and might allow those of us without the years of design experience and razor-sharp brains to do something useful?
 
Hi Sawreyrw

I don't think traditional methods are capable of some things. I don't think that we could sit down and create a simulation of a multi-way speaker dummy load even if we had access to every internal detail of every part of it - which we don't. And once we had created our 'first cut' simulation, we would then spend hours simply tweaking it, with traditional theory probably not even coming into it. I am just suggesting that we bypass any pretence that we know how the thing works, and set the brute force of a computer onto the task.

If no one else is up to the challenge, I'll have to do it myself! I would love a reasonably accurate dummy load representing my speakers for amplifier distortion testing...
 
I'm primarily talking about item 1 above. If you try to develop an algorithm for this, one of the first questions you need to answer is what the performance criteria are and how you evaluate them. In other words, what do you want to measure, and how do you determine what is best? If you can do this, then traditional methods will work. If you can't, then you can't solve the problem. Try it and see what you come up with.
 
Genetic algorithms can be useful when a number of parameters interact in complex ways. Essentially you are just doing an optimisation problem. The big snag is that you have to specify the 'goodness' algorithm - this requires the ability to know exactly what you are looking for. Make it too tight and the algorithm may never find a solution. Make it too loose and it will find lots of poor 'solutions'.

Unfortunately there is not really an alternative to actually understanding what you are doing! Computers can help, but in the end the designer has to say what he wants to find - the computer may then be able to tell him where it is. Even the fairly simple task of specifying a 'good' amplifier is still a research problem.
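
To make that concrete, here is a toy 'goodness' function in Python. Every weight and threshold in it is invented, and choosing them is exactly the judgement I mean - the hard constraint makes it 'tight', while the soft weighted penalties keep it 'loose':

[code]
def goodness(m):
    """Hypothetical scalar 'goodness' (lower is better) for an amplifier
    whose measured figures arrive in a dict. Every weight and limit here
    is invented - choosing them is design knowledge the computer cannot
    supply on its own."""
    s = 0.0
    s += 10.0 * m["thd_pct"]                           # distortion: soft penalty
    s += 0.5 * max(0.0, 45.0 - m["phase_margin_deg"])  # penalise skimpy margin
    s += 0.02 * m["part_count"]                        # mild complexity penalty
    if m["phase_margin_deg"] < 20.0:                   # hard constraint: reject
        return float("inf")
    return s

# Low distortion does not win if it comes with marginal stability:
print(goodness({"thd_pct": 0.05,  "phase_margin_deg": 60.0, "part_count": 12}))
print(goodness({"thd_pct": 0.002, "phase_margin_deg": 15.0, "part_count": 40}))
[/code]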
 
Dave Jones and DF96

Thanks for the replies.

In my mind I see genetic algorithms, adaptive filters, simulated annealing, neural networks etc. as all inter-related. Yes, it's "just" an optimisation problem, but reading the work of audio designers you get the feeling that a lot of effort goes into tweaking an existing system to get the 'optimum' result, and that some basic criteria are used to determine this such as distortion vs. stability etc. So there is some measure of goodness being used, but the tweaking is manual and very laborious.
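
Since simulated annealing is one of that family, here is the whole idea in a few lines of Python - the cooling schedule, the step size and the toy test function are all arbitrary choices:

[code]
import math
import random

def anneal(score, start, steps=50000, t0=1.0):
    """Bare-bones simulated annealing. Unlike plain hill-climbing, a
    worse trial is sometimes accepted (the Metropolis rule), with a
    probability that shrinks as the 'temperature' cools - another way
    of escaping local minima."""
    x, s = list(start), score(start)
    best, best_s = list(x), s
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9            # linear cooling
        trial = [p + random.gauss(0.0, 0.1) for p in x]
        st = score(trial)
        # always accept improvements; accept worse with prob exp(-delta/t)
        if st < s or random.random() < math.exp(-(st - s) / t):
            x, s = trial, st
            if s < best_s:
                best, best_s = list(x), s
    return best, best_s

# Toy use: a bumpy 1-D function riddled with local minima.
bumpy = lambda v: (v[0] - 3.0) ** 2 + 2.0 * math.sin(5.0 * v[0])
print(anneal(bumpy, [0.0]))
[/code]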

I disagree slightly that "you need to understand what you are doing", except in the sense that you must know what you are trying to achieve at a higher level. I think "understanding" is often confused with 'formal training'.

Maybe it's just me. I often see people who can do calculations ten times as complex as I can, trying to solve real world problems that are, perhaps, another ten times too complex. In the end, they often just produce a 'kludge'. My instinctive 'trial-and-error' approach often produces a better result, I think - but with less impressive maths!

As an imaginary example, controlling a central heating system in an existing building 'optimally' could be tackled by calculation, considering the basic physics of flow, heat etc. but would soon balloon into calculations so complex that a software simulation was needed. This would be essential in designing the system from scratch (how big should the radiators be and how big the air conditioners). But manually testing the various permutations against measures of goodness would still take forever. The next logical step would be to automatically home in on the 'optimum' control solution against some specified criteria. However, this couldn't take into account all the real world usage of the building and a dogmatic approach to implementing what the computer said would surely run into practical problems.

I don't know much about the physics of heating and cooling, but "understanding" the system intuitively would tell me that exterior temperature (and sunlight and humidity and wind?) monitors would help in making the system 'pro-active' rather than simply reactive (but that they could break). Or I could access weather forecasts over the internet and feed those in, no doubt. Trying to work out exactly how they should influence the system in advance would simply be a guess, however.

Being a less well-trained person, I might tackle the problem without knowing anything about the physics, except in the most basic 'first order' sense (bricks retain heat, areas with glass roofs get hot in sunlight etc.). I might consider that I couldn't possibly attempt to home in on the optimum solution except by starting with a very basic algorithm running on the actual hardware in the actual building, perhaps with some extra temperature monitors if not already fitted, and slowly adapting the algorithm over time. I should understand that the system should not become too 'brittle' and fall over if the unexpected happened - fail-safes might be built in to ensure that major problems were bypassed. That sort of thing.

Sorry for prattling on about this but I just have the feeling that too much training can leave the brain so pulverised that it becomes impossible to think beyond the 'formal methods'...
 
Dave Jones and DF96

I disagree slightly that "you need to understand what you are doing", except in the sense that you must know what you are trying to achieve at a higher level. I think "understanding" is often confused with 'formal training'. ...

I second that! :)
There are many people in all walks of life who do just that!
[E.g. someone works on, then owns, a small farm all their life, goes to college and gets a two-year degree in landscaping, works the same farm for another three years, then sells the farm and applies to a toaster manufacturer for a low-level management position (team leader). They get hired knowing nothing about making toasters. They don't need to know how the toasters work or how they are made (that can be learned on the job in the short term). The important thing to remember is that they understand problems and are willing to work through the troubleshooting process.]


As an imaginary example, controlling a central heating system in an existing building 'optimally' could be tackled by calculation, considering the basic physics of flow, heat etc. but would soon balloon into calculations so complex that a software simulation was needed. This would be essential in designing the system from scratch (how big should the radiators be and how big the air conditioners). But manually testing the various permutations against measures of goodness would still take forever. The next logical step would be to automatically home in on the 'optimum' control solution against some specified criteria. However, this couldn't take into account all the real world usage of the building and a dogmatic approach to implementing what the computer said would surely run into practical problems.

I don't know much about the physics of heating and cooling, but "understanding" the system intuitively would tell me that exterior temperature (and sunlight and humidity and wind?) monitors would help in making the system 'pro-active' rather than simply reactive (but that they could break). Or I could access weather forecasts over the internet and feed those in, no doubt. Trying to work out exactly how they should influence the system in advance would simply be a guess, however. ...

While I was in the USN BERT/RMA/RMC schools, learning electronics was great, but when we started to delve into the 'how does the electron know' realm, we called that "nuking the theory".
All we had to do when that sort of thing came up for debate was to push the "I believe" button (I believe it works because someone smarter than I designed that piece to do exactly what they wanted it to do). But it definitely does not take a well-trained (schooled), well-seasoned person to understand the whole principle of all of those pieces working in unison.
It does, however, take a person willing to learn how each of those pieces affects, and is used in, the system as a whole. This type of person will most likely "think outside the box" and not get trapped in the usual "it will never work because that piece was not designed for that" (textbook) mentality.

If that mentality were right, then why is it that I have seen a post somewhere here on DIYA about an amp design using mosfets that were designed for high A/C phase control? It was a few years back, but from what I read it worked. Maybe not to the best of "audiophile" standards, but IT WORKED!!
Now that is what I call "Thinking outside the box".

I will jump off my soap box now and give the:smash: to someone more qualified than myself. (As I am prone to breaking rather than fixing with hammers).
 
I don't claim to be more qualified (even if I'm moderately well-read), but I'll take that hammer from you. You know what they say about the hammer being your only tool - every problem looks like a nail. :)

Now on to the OP ...

As far as a "reactive" dummy load that simulates a speaker goes, I actually don't see it as such a huge problem. The first, yet rather crude, approximation is to take a crossover and replace each driver with a resistor of its nominal impedance. This should work pretty well for the midrange and tweeter frequency ranges, but the woofer has one or two big resonances around its lower acoustic cutoff frequency, depending on the design of the cabinet (one for acoustic suspension, two for bass reflex, respectively). The woofer resistor can then be augmented with the appropriate tuned L-C circuits. For a slightly better approximation, most drivers become rather inductive at higher frequencies, so you can place some driver-specific value of inductance in series with each driver.

A more accurate simulation could be made by actually measuring the impedance (resistive and reactive components, or alternatively, described in polar coordinates, the magnitude and phase angle) at each frequency and plotting these values. From these, a circuit that generates approximately the same impedances could be made. This is slightly beyond my own circuit analysis and design capabilities, but I have no doubt it can be done analytically.
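
For anyone who wants to see the shape such a network produces, here's a tiny Python sketch of that textbook single-driver approximation - voice-coil resistance and inductance in series with a parallel R-L-C tank for the bass resonance. All the values are invented, ballpark woofer numbers:

[code]
import math

# Voice-coil resistance Re and inductance Le in series with a parallel
# R-L-C tank standing in for the mechanical resonance. Invented values.
Re, Le = 6.0, 0.5e-3              # ohms, henries
Rp, Lp, Cp = 30.0, 25e-3, 400e-6  # tank: peak height, resonance ~50 Hz

def z(f):
    w = 2.0 * math.pi * f
    tank = 1.0 / (1.0 / Rp + 1.0 / complex(0.0, w * Lp) + complex(0.0, w * Cp))
    return complex(Re, w * Le) + tank

for f in (20, 50, 100, 400, 1000, 5000, 20000):
    zf = z(f)
    phase = math.degrees(math.atan2(zf.imag, zf.real))
    print(f"{f:>6} Hz   |Z| = {abs(zf):6.1f} ohm   phase = {phase:+6.1f} deg")
[/code]

Run it and you see the resonance peak around 50 Hz and the inductive rise at the top - roughly the load an amplifier actually has to cope with.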

I think the biggest problem with electrically simulating "a speaker" is that there are so many models, and they all have different electrical characteristics. Sometimes a speaker with a "nominal" 8 ohm rating can have an impedance of half that or lower at some specific frequency within its operating range. In some frequency ranges the impedance may be capacitive, and at others inductive. These are of course all challenges for low distortion and stability of the amplifier. An amplifier may run fine with most models of speakers, but choke on some specific model.

As far as amplifier design goes, yeah, there are a lot of "small" tradeoffs, such as output bias current (more causes more heat and wasted power that can't drive the speaker) versus crossover distortion (more bias generally makes for lower crossover distortion). As for there being multiple such parameters affecting distortion that could be tossed around with "simulated annealing" to find some "best fit" combination, there surely are some, but I can't think of any offhand. Yet I think the best way to get ahead of the game is to know as much as you reasonably can about what you're studying, AS WELL AS "thinking outside the box." And you can just go through the threads here, as well as look on Amazon, to discover there are many books on power amplifier design.

One thing to take into consideration - a practical point that too often doesn't appear on schematics - is that at some point an electrical connection is NOT an electrical connection: it's a resistance, and/or an inductance, and/or it has capacitance to another "connection." It can be any or all of these depending on current, voltage, frequency, and how sensitive the circuit is. This can cause what many people think of as a "ground loop", where a large ground current causes a small but significant voltage drop across a ground conductor, putting a "signal" into a low-level circuit that shouldn't be getting that signal. This often happens in relation to power supplies. Back in the '80s I worked for a company that made a device to record and play cassette-based messages over a phone line. The company changed from a perfectly fine, in-house-designed linear power supply to a cheaper off-the-shelf switching supply. This supply of course gave substantial buzz in the tape playback, and it was up to me to fix it, which I did by figuring out which two "ground" points on the PCB to connect together with a 12-gauge wire.

Here's another specific example of this (I like this whole page, I came across it many years ago), at "5.7 DISTORTION 7: NFB Takeoff point distortion"
Distortion In Power Amplifiers
He draws these different points in the schematics as if the wiring layout were going to be exactly like the schematic - I think that could be misleading (as far as the schematic shows, these are all "electrically" the same point anyway, right???), and I'd prefer to show printed circuit board layouts as examples, but maybe that's just a minor personal quibble. Still, it shows how a "raw schematic" often does not tell you everything you need to know about a circuit to get it performing as well as it can, or to avoid problems.

You mention predistortion - there was some model of studio multitrack tape recorder back in the 1970s that did just this: a circuit added "predistortion" to the signal going to the tape head (this was in addition to the ultrasonic AC bias that all but the cheapest home recorders added to the recorded signal at the tape head) to improve the linearity of the waveform that gets played back. No doubt this has been done in other areas - perhaps in audio, perhaps elsewhere - where linearity and signal fidelity are vital, yet something in the signal chain doesn't work as well as hoped.

Does anybody else agree with me that this sort of thing could be fascinating, and might allow those of us without the years of design experience and razor-sharp brains to do something useful?
Yes, it could be interesting, yet I'd have to wonder: traditionally, when designers try something different, they have some good idea of why they're doing it - why they think it should work better than other things. If you used this method and came up with some combination of components and values that "works great" (for whatever definition of "works"), we would still be left with the question of HOW and WHY it works.
 
Yes, it could be interesting, yet I'd have to wonder: traditionally, when designers try something different, they have some good idea of why they're doing it - why they think it should work better than other things. If you used this method and came up with some combination of components and values that "works great" (for whatever definition of "works"), we would still be left with the question of HOW and WHY it works.

I've recently become fascinated by the Quad 'current dumping' amplifier, particularly the controversy surrounding how it works, or doesn't. To me it's wonderful to see how such a simple, intuitive(?) configuration can bamboozle the experts: is it feed-forward error correction or feedback, and is the name even an accurate description? It is mentioned and summarily dismissed in books on audio design, and described as though it cannot quite be real, even though its performance, without any alignment or drift issues, appears to be near perfect (well, if modern components are used).

The consensus seems to be that its functioning is not completely understood yet, but that even using real world components "it works". The use of reactive components in the bridge to achieve near-perfect efficiency is a further master stroke which, apparently, complicates the analysis and makes it even more controversial. Some people seem to suggest that Peter Walker and his colleagues didn't really understand what they were doing and stumbled upon the perfect design almost by luck.
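
From what I've read, the analyses boil down to a single bridge-balance condition. With the four bridge arms called Z1 to Z4 (I stress this labelling is my own - the various papers letter the arms differently), the dumpers' error is cancelled in the load when

Z1 * Z4 = Z2 * Z3

and since two of the arms are resistors, one an inductor and one a capacitor, substituting them reduces this to something of the form

R1 * R2 = L / C

i.e. pure passive component matching, with the dumpers' nonlinearity dropping out to first order.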

It occurs to me that such a configuration could have been found by 'evolution' in a computer, particularly the choice and values of the components in 'the bridge' - not that the designers needed it, of course.
 
Some years ago I was trying to design an antenna, for an MSc assignment. I was using a simulation package (I forget which one), and playing around with the dimensions to get the frequency response and impedance right. I was trying to work out which physical changes gave rise to which electrical changes. The professor saw me at work and said that he wished his PhD students would do the same, as they usually fiddled randomly until they got a result but never really understood how antennas work.

It is foolish to set "formal training" in opposition to "experience" - the best engineers need both. Poor engineers generally have just one of these, and use either snobbery or inverted snobbery to convince themselves that they can manage without the other. In the extreme this can even lead to people being proud of their own ignorance! The history of engineering has many examples of "practical" people rejecting correct ideas because they seemed counter-intuitive and could only be understood by using more maths or science than the "practical" people had.

Genetic algorithms and the like are useful tools, but they are a poor substitute for knowledge and experience.
 