But is it not more FUN to come to these conclusions organically??
Why do I need to outsource this to AI?
What else would I be doing? Breeding more useless organics?
Like most current tech, "solutions in search of the problem".
OS
Utter nonsense. Please don't go there, and correct that fuckery.

Quoting ChatGPT: "Signal Integrity and Noise: TH components generally offer better signal integrity due to their larger lead structures, which can provide better electrical connections and reduce noise and interference. In some cases, the larger size of TH components can lead to improved performance in terms of minimizing signal degradation."
What about for the voltage coefficient of resistors? IIRC, isn't it to some extent dependent on the voltage gradient, which is more or less volts/length?
You can make the voltage gradient even smaller than that of a TH resistor by using a few SMT resistors in series, and still manage to save PCB space or the cost of an additional assembly process, if that's relevant.
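For what it's worth, a back-of-envelope sketch of why the series trick lowers the effective voltage coefficient. The first-order model R(V) = R0(1 + kv·V) and the assumption that each small part has the same per-part coefficient kv are mine, not something from the posts above:

Code:
% First-order VCR model (assumed): R(V) = R_0 (1 + k_v V)
\begin{aligned}
R_{\text{single}}(V) &= R_0 \, (1 + k_v V) \\
R_{\text{series}}(V) &= n \cdot \frac{R_0}{n} \Bigl(1 + k_v \frac{V}{n}\Bigr)
                      = R_0 \Bigl(1 + \frac{k_v}{n} V\Bigr)
\end{aligned}

So a string of n SMT parts behaves like one resistor with an effective coefficient kv/n, and each part sees only V/n across its own body length, which is also what shrinks the volts-per-length gradient. In practice the per-part kv of a small chip resistor may be worse than that of a large TH part, so treat the factor of n as an upper bound.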
Agreed.
However, the question at hand seems to be whether or not the particular statement made by ChatGPT was false. It appears that it was not necessarily false.
It is false, in that GPT made a hidden presumption: that a resistor of small dimensions was used somewhere the dimensions of a resistor matter to its performance. Using an undersized resistor where size matters would be a design error, TH or SMT being irrelevant.
Not so sure. When I tried ChatGPT, if it said something like that I would ask it for some examples of what it meant. Sometimes it would give a plausible answer; other times it would sort of avoid a direct answer. In the latter case I tried asking it whether it was true that it didn't know of any specific examples. Then it would tend to be more honest and say its training was not that complete and it didn't have specific examples.
IOW, the only way to really find out its limits is to ask it more specific questions when it says something that seems a bit suspicious.
Regarding resistors in particular, if it got into details about end-cap connections, specific metallization technology, etc., then I might find some truth in relation to resistor characteristics other than the voltage coefficient. I might also find some truth when it comes to some characteristics of some types of SMD or THT film caps.
Again, the only point is that unless you dig deeper with questions, you don't really know what it knows and what it doesn't.
Answers are also dependent on the quality of the prompt.
Of course, it's always the users' fault, according to software guys.
You're wrong. It's always the fault of the software.
Don't know. Sometimes there are threads in the forum with zero replies. Usually they are posed in a way that is too ambiguous, too open-ended, or maybe a little confused, something more or less like that.
When I come across one of those I may ask some questions of the OP to find out what it is they really want to know, to see if they need to provide additional information, etc.
It may take one or more questions and responses to get more info about what the problem is, what the OP wants to know about, etc. Once the issues start to become clearer, it's not unusual for one, a few, or even several other members to start participating, trying to help, offering advice, etc.
So my question would be: is it always the fault of the more knowledgeable forum members for not eliciting more information from the OP in those cases? If not, is it always the fault of the OP?
Not so sure it's always so cut and dried as to place blame on anyone in particular.
Also not sure if AI programs are developed to the point of being trained to help in such cases.
When it comes to offering an answer to a question, an AI such as GPT differs from a human in a fundamental way: it has no knowledge, no understanding, and no comprehension of the question and its background. In fact, it is not at all interested in, nor does it care about, what it spits out. People who play with GPT should keep that difference in mind and set their expectations for the quality of its answers accordingly. Roger Penrose once hinted (not a word-for-word quote) that comprehension is a process that is not computational. I think he pinpointed the difference between AI and humans.
When hardware fails, you can often see the results: smoke, flames, pieces of stuff shooting out. When software fails, it's just a "glitch", no biggie.

I've seen software guys start trying to build hardware, and it's hilarious. The first time they try to carry over their bad habits to the hardware realm, they get spooked and put off by the results mentioned above, and sheepishly go back to the safe realm of "glitches" instead.

It's this mentality that shows in the misnamed "AI" discussed here. Except that many of the problems are actually deliberately made part of the design. On purpose. How do these guys sleep at night? And how do they call themselves engineers? Software that lies to you? Software that makes things up, out of thin air? How many more lives will be lost in airplane crashes or car accidents? How much more electronic theft will occur?
Quoting "Of course, it's always the users' fault, according to software guys": Nah, here is how it works. At the core it just predicts the next token (a word, to simplify). That's all it does. It turns out that if you start your prompt by framing it with the context of an 'expert' in a certain domain, it will find chunks of documents of 'authoritative' origin and then complete the rest with better results.
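To make "it just predicts the next token" concrete, here is a toy sketch, entirely my own illustration. The hand-written bigram table stands in for the neural net a real LLM uses to score next tokens:

Code:
import random

# Toy "language model": for each token, a probability distribution over
# possible next tokens. A real LLM computes this with a neural net.
NEXT = {
    "<s>":       {"the": 0.6, "a": 0.4},
    "the":       {"amplifier": 0.5, "resistor": 0.5},
    "a":         {"resistor": 1.0},
    "amplifier": {"clips": 0.7, "</s>": 0.3},
    "resistor":  {"drifts": 0.6, "</s>": 0.4},
    "clips":     {"</s>": 1.0},
    "drifts":    {"</s>": 1.0},
}

def generate(max_tokens=10):
    token, out = "<s>", []
    for _ in range(max_tokens):
        dist = NEXT[token]
        # Sample the next token in proportion to its probability, then
        # feed it back in; generation is just this loop repeated.
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        if token == "</s>":
            break
        out.append(token)
    return " ".join(out)

print(generate())  # e.g. "the amplifier clips"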
Another thing you can prompt with is 'Let's think about this step by step to reach the correct results'.
Thus the subject of 'Prompt Engineering', as there are emergent behaviours that were never explicitly specified but were found out experimentally. It's not real engineering, but that's what it's called, and the field can command big salaries from what I've seen.
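The two tricks mentioned above amount to nothing more exotic than prepending and appending strings to the question. A minimal sketch; the function name and the exact framing phrases are my own invention, not any standard API:

Code:
def build_prompt(question, expert_role=None, step_by_step=False):
    """Assemble a prompt; the framing phrases are illustrative only."""
    parts = []
    if expert_role:
        # 'Expert' framing: steers the model toward the authoritative
        # chunks of its training data, as described above.
        parts.append(f"You are an expert {expert_role}.")
    parts.append(question)
    if step_by_step:
        # The step-by-step nudge from the post above.
        parts.append("Let's think about this step by step "
                     "to reach the correct result.")
    return "\n".join(parts)

print(build_prompt("Why might a TH resistor have a lower voltage coefficient?",
                   expert_role="analog circuit designer",
                   step_by_step=True))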
Without software guys, you would have nothing: no internet, no keyboard, no OS, no browser, nothing to type in this thread, nothing to look at for DIY in a Google search or on this forum, no online ordering of parts, etc.
When you're on your deathbed in a hospital, you will be glad the software guys made no mistakes there, too.
Your current life depends far more on software guys than you think, including but not limited to your bank account, so now what?
Every little keystroke you make to communicate online is a celebration of the software guys. So, poor you...
Quoting "Roger Penrose once hinted (not a word-for-word quote) that comprehension is a process that is not computational. I think he pinpointed the difference between AI and humans": It was fun for me to philosophise about AI with the books by Penrose, Dennett, and many others, plus research papers and so on, last century.
It's here, and it's here to stay and grow.
Whatever evaluation you want to tack onto it doesn't matter anymore.
Quoting "...comprehension is a process that is not computational": Maybe not computational if, say, comprehension is believed to occur in the soul. If instead comprehension occurs in the brain, then there has to be some kind of biological mechanism or mechanisms that make it happen. We might think of it as biological computation.
Of course someone wouldn't be able to see all the details of how it works from the inside view, since most of what a brain does is not directly observable by conscious awareness.
The question then would seem to be whether the presumptive biological mechanism can eventually be understood well enough to model it computationally, or whether someone will create or discover some new technology that is more or less equivalent to the biological one. Maybe quantum computing could do it in some way; that remains to be seen.
BTW, I am not advocating for or against more and better AI. However, its proliferation is probably going to continue indefinitely. We will have to deal with it, is all.
Quoting "Answers are also dependent on the quality of the prompt": Not if the responder is intelligent enough.