ChatGPT ideas about amplifier design

I had deleted that phrase from my post because I realized you don't believe a word of what I say.
But you quoted that very deleted sentence.

Sure, I'm no longer grateful, given your irreducible grudge, which seems excessive to me.
 
I used the forum quote function to quote what was on my screen at the time. Most likely you deleted the line as soon as you saw I quoted it.

BTW, there is no grudge here. Only disappointment at your failed attempts to deceive.
 
Beware of ChatGPT shortcomings/mistakes.
I tried it extensively in several scientific research fields at my day job, where it fails in many areas beyond simple definitions.
Relevant to this forum: it failed big time at suggesting replacement latfets 👎👎👎
On the other hand it is quite good at generating programming code.
That it is. It did a remarkably good job writing a Python data-reduction stage. But the poetry… oh, the poetry sucked.
 
That actually brings up a good point. Programmers will have a new tool in their toolbox, but the most crucial part of software development is still defining its purpose, whether for the end user, or for the end user who suddenly finds herself with lots of free time.

I plan on using it to write event-based code in C. Which is pretty difficult and tedious to write (well, and without subtle bugs). If that works out, I call that a win.
 
Craig Martell, director of digital technology and artificial intelligence (AI) at the Pentagon, said generative AI systems like ChatGPT scare him to death.
"Yes, I'm scared to death. Here's my opinion," was Martell's answer when asked what he thinks about generative AI, as reported by Breaking Defense. According to him, systems like ChatGPT do not understand context, yet answer questions "authoritatively".
"That's why you trust it (the system) even when it's wrong... And that means it's the perfect tool for disinformation," Martell added. However, he stressed, the Pentagon does not have the tools to detect disinformation and warn about it.
The administration of US President Joe Biden has begun looking into the need to test AI-based programs such as the ChatGPT chatbot amid fears they could be used to commit crimes and spread disinformation.
The Cyberspace Administration of the People's Republic of China has submitted a draft measure to regulate generative artificial intelligence services, which, in particular, states that the generated content should embody "basic socialist values."
 
I’ve been discussing how to use these systems to identify disinformation with someone far smarter than me, and this is one of the reasons I’ve been shouting into the void about running these systems locally without giving up our training data and ideas to godknowswho.

I spend a lot of time reading about Russia’s war in Ukraine, and harvest all kinds of troll comments. The thing is, the most successful trolls are a bit more subtle than most, but there’s a limit to how subtle they can be, while still being effective. I’m pretty confident that these systems can be trained to identify it.

The real problem is degraded or missing critical-thinking skills. The EU sponsors some important initiatives that I wish would be emulated in the U.S. One I ran across, called EUNOMIA, combined educational material to help users make vetting content reflexive with software (in very early stages) intended to assist that process. That's a really good idea. And here's where I sink into despair thinking about the likelihood anyone will care. Only half joking.
 
- GPT updated to be able to correct itself on re-prompt. One more step: self-correct and give the proper answer first-time

- A form of GPT with extended memory. It remembers previous chats, on top of Human and AI correction feedback

- Auto-prompting of GPT by GPT

The exponential nature of progress, as Kurzweil envisions it.
 
However, as noted earlier, these AI systems are quite exciting because, given the goal of cooking an egg, such an AI could print its body with a 3D printer, create its own electronics, assemble a robot body, install an instance of itself in that body, operate it, and cook the egg for you. Many countries, like Japan, actually do need this now, though health experts are hinting that humans might still need other humans, as we may be hive creatures.
The lives we live are partly a creation of our minds. There are destructive thought patterns as well as constructive thought patterns. We fashion our own nightmares as well as our own sweet dreams. Learning to calm, control, and steer the mind towards the light is no easy task. While we all exist in our own prisons, some are better off than others. To pet a grown animal such as a dog or cat is to touch an animal that was raised with human affection.
People are disconnecting from the network, loving less, trusting less, accommodating less, and demonstrating destructive tendencies.
 
“Prompt” in what context? I’m familiar with the use of the term in Whisper, where you feed it a sentence or two of example speech to improve audio transcription (for instance, speech interspersed with lots of “like” and “um”, etc.).