ChatGPT ideas about amplifier design

An LTP, or Long-tailed Pair, circuit is a type of circuit topology that can be used in amplifier designs. It can be used to convert a voltage feedback scheme into a current feedback scheme.

In a typical voltage feedback circuit, a small portion of the output voltage is fed back to the input and compared to the original input voltage. The difference between the two voltages is used to adjust the gain of the amplifier.

An LTP circuit uses a pair of transistors, with one transistor connected in a common-base configuration and the other in a common-emitter configuration. This circuit topology allows for a high input impedance and a low output impedance, which can be used to convert the voltage feedback to a current feedback.

The current flowing through the common-emitter transistor is proportional to the input voltage. By using the current flowing through the common-emitter transistor as a feedback signal, the circuit becomes a current feedback amplifier, which allows for a faster response time and better bandwidth than a traditional voltage feedback amplifier.
Convincing, but not correct.

Jan
 
Ok, I will use quotation marks for ChatGPT content. ChatGPT is a collection of systems that mimic the early stages of building parts of a brain; once they get it right, the landscape will change. At the beginning it will just be a dictionary queried through natural language processing, but add a bit more scaffolding and it will be limitless.
You have already bought the marketing and drunk the Kool-Aid, eh? This statement personifies this chrome calculator way too much. No breakthrough here; a step of progress, but not nearly a seismic advance. Its potential to manipulate simple minds is nearly limitless, however; greater than the TV or the cell phone, maybe? Dangerous.
 
An LTP, or Long-tailed Pair, circuit is a type of circuit topology that can be used in amplifier designs. It can be used to convert a voltage feedback scheme into a current feedback scheme.

In a typical voltage feedback circuit, a small portion of the output voltage is fed back to the input and compared to the original input voltage. The difference between the two voltages is used to adjust the gain of the amplifier.

An LTP circuit uses a pair of transistors, with one transistor connected in a common-base configuration and the other in a common-emitter configuration. This circuit topology allows for a high input impedance and a low output impedance, which can be used to convert the voltage feedback to a current feedback.

The current flowing through the common-emitter transistor is proportional to the input voltage. By using the current flowing through the common-emitter transistor as a feedback signal, the circuit becomes a current feedback amplifier, which allows for a faster response time and better bandwidth than a traditional voltage feedback amplifier.
I always thought the difference between voltage feedback and current feedback is this: in a VFB design the signal that is fed back is proportional to the output voltage, while in CFB it is proportional to the output current. What does that have to do with the current through an LTP's common emitter resistor? Anyway, of course you can feed a CFB signal back into the negative inputs.
Complementary Feedback Pair (CFP): Uses a combination of NPN and PNP transistors in a cascode configuration. The CFP output stage is known for its low distortion and high power output. It is more complex and costly than the other output stages, but it offers the best overall performance.
Why? In a Sziklai/CFP output stage, aren't the power devices just swapped compared with a 2EF? The overall parts count and complexity remain the same??

Best regards!
 
ChatGPT is quite interesting in how it relates things; you can imagine its power in the coming years if they don't hold back and freak out. Imagine being able to negotiate with a non-biological entity that knows you like the back of its hand.
 
Well, it has stirred people up enough that they're afraid, and not just in this thread: various bans and counter-measures to its usage are already being drafted, even implemented.

The thread itself shows people wanting you to indicate via quotation marks that it wasn't you who posted - not sure whether you intended this as a fun experiment... Others reply to the AI as if in a real conversation. Others hide behind words, etc. Others focus on little mistakes here and there, missing the forest for the trees (while, incidentally, human posts often contain mistakes as well), as if the current state means it will never get better. So it makes for a fascinating mini-study of it all.

What would be interesting to me is having it offline and working off pre-curated corpora of knowledge in specific domains.
 
At the end of the day, they are just words on a screen, but many people will be easily fooled by how authoritative it can sound. In actuality, the AI is just "trying its best", as it were, with only the knowledge it was trained on. There is no way to check at a glance whether its output is correct or useful - the same as with any other "person" posting online.

Also, this AI will never be able to do original research, so, in a manner of speaking, it can only blend available colours, not discover new ones.
 
Just my 2ct on the software side of things:

When I wrote the LADSPA plugins used in my first attempt at DSP software - the pulseaudio parametric equalizer - I did not understand a thing about IIR filter implementation (and I still don't fully grasp how biquad coefficients are calculated), so I copied formulae from the web and implemented them in C code. This is VERY common in the daily life of a software developer; we don't reinvent wheels all the time. What's more important is to package all the copied stuff into an easily usable program with a fine UI. All IMHO.

So ChatGPT might be of help, but it will - at least for now - not be able to replace a software developer. It's just another tool that you have to learn how to use, just as using "conventional" search engines (giving them the right input, parsing the output for relevance/validity) is a skill that you have to learn.
 
Actually, I did play around with it quite a bit, but TBH I haven't asked it to produce code yet. Might be old habits; up to now I just use Google (leading mostly to Stack Overflow hits). I will try to incorporate it into my workflow a little and report back.