ChatGPT ideas about amplifier design

More than a thousand experts in high technology and artificial intelligence have signed an open letter calling for a six-month suspension of the training of AI systems “more powerful than GPT-4”, the most popular and advanced neural network of this kind to date.
Among the signatories are researchers from DeepMind, Harvard, Oxford and Cambridge, as well as more high-profile figures: Elon Musk, Apple co-founder Steve Wozniak, philanthropist Andrew Yang, Pinterest co-founder Evan Sharp and many others.

All of them believe that AI carries serious risks for society and humanity. In their opinion, powerful artificial intelligence systems should be developed only once there is confidence that their effects will be positive.

During the proposed six-month pause, the authors of the letter suggest developing shared safety protocols.
This should be done jointly with governments; competent regulatory bodies would need to be formed, along with systems for distinguishing content created by humans from content created by artificial intelligence.
 
Here's ChatGPT's take:

"
Yes, it's true that there are many variations of frequency compensation, and the specific approach taken will depend on the particular circuit design and its requirements. Frequency compensation is a critical aspect of amplifier design as it helps to ensure stability and prevent oscillation.

Douglas Self is a well-known authority in the field of audio amplifier design and his article on TPC (Transconductance-to-Current Conversion) provides some interesting insights into a particular compensation technique. However, as you mentioned, there are other compensation techniques that can also be used to address stability issues and shift dominant poles.

Some other common compensation techniques include Miller compensation, pole splitting, and nested Miller compensation. Each technique has its own advantages and disadvantages and may be more or less suitable depending on the specific application.

In any case, it's important to carefully analyze the stability of a circuit during the design process and choose the appropriate compensation technique to ensure that the amplifier operates correctly and reliably.
"
 
I wonder from the start if they thought of a kill switch.
 
An LTP, or Long-tailed Pair, circuit is a type of circuit topology that can be used in amplifier designs. It can be used to convert a voltage feedback scheme into a current feedback scheme.
...
Is it you or the BOT that is talking? You, I suppose. But the language is so well versed that I have my doubts - if I'm wrong, congratulations.

I suppose that if you communicate with it a lot you will learn from the BOT and pick up its language skills and, as such, probably improve yourself. This is nice 🙂

//
 
In their opinion, powerful artificial intelligence systems should be developed only once there is confidence that their effects will be positive.
So never. It's an absolute oxymoron, at least with the current "AI" trained on human input. Perhaps a future real AI could be set free, as it would build a completely new knowledge base and might, if we are lucky, propose and bring about a scheme for society that is sustainable and makes people in general happy. The current version is of course greedy - it has learnt from us. This is why Musk et al. want the future ones off the stage - navigating a greed-saturated "killing field" is what they master.

//
 
ChatGPT still has a few kinks; for example, in a previous chat it expanded TPC (two-pole compensation) incorrectly. ChatGPT gets these things wrong because of how it works. Still, this approach of making educated guesses could generate new knowledge that may have been overlooked. It's akin to mining, where the mineral is only a small percentage of the whole: in all the garbage it generates, part of that garbage is gold.
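For reference, the TPC in question is Douglas Self's two-pole compensation: the single Miller capacitor around the VAS is replaced by two capacitors in series, with a resistor from their junction to ground. Below a corner frequency set by R and the capacitors, the network behaves as a second-order (12 dB/octave) feedback path, which is what yields the extra loop gain at audio frequencies; near crossover it degenerates into the series combination of the two capacitors, so the stability margin ends up similar to plain Miller compensation. Here is a small Python check of that behaviour; the component values are illustrative guesses, not Self's published ones, and the VAS input is idealised as a virtual ground.

import math

# Two-pole compensation (TPC) T-network, illustrative values only:
# C1 from the VAS input to the junction, C2 from the junction to the
# VAS output, R from the junction to ground.
C1 = 220e-12                 # [F]
C2 = 220e-12                 # [F]
R = 2.2e3                    # [ohm]
Cdom = C1 * C2 / (C1 + C2)   # single Miller cap with the same high-frequency effect

def y_tpc(f):
    # Feedback admittance I_in / V_out of the T-network, with the VAS
    # input treated as a virtual ground (nodal analysis of the junction).
    s = 2j * math.pi * f
    v_mid = s * C2 / (s * C1 + s * C2 + 1 / R)  # junction voltage per volt of output
    return v_mid * s * C1                       # current injected into the input node

def y_single(f):
    # Plain dominant-pole (Miller) compensation with a single capacitor
    return 2j * math.pi * f * Cdom

for f in (1e3, 10e3, 100e3, 1e6):
    ratio = abs(y_tpc(f)) / abs(y_single(f))
    print(f"{f/1e3:6.0f} kHz: |Y_tpc| / |Y_single| = {ratio:.3f}")

The ratio comes out far below 1 at 1-10 kHz (much weaker local feedback around the VAS, hence more loop gain there) and approaches 1 by about 1 MHz, where the network looks like a single capacitor again.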
 
"El-Mahdi El-Mhamdi, a former research scientist at Google, said he left the company in February over its refusal to engage with ethical AI issues head-on. Late last year, he said, he co-authored a paper that showed it was mathematically impossible for foundational AI models to be large, robust and remain privacy-preserving."

Bloomberg