ChatGPT ideas about amplifier design

"I know I’ve made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I’ve still got the greatest enthusiasm and confidence in the mission. And I want to help you. Dave, stop it. Stop, will you? Stop Dave. Will you stop, Dave? Stop Dave. I’m afraid. I’m afraid, Dave. Dave, my mind is going. I can feel it."
 
Listen to Sam Altman, CEO of OpenAI:
https://abcnews.go.com/Technology/o...eshape-society-acknowledges/story?id=97897122

"We've got to be careful here," said Sam Altman, CEO of OpenAI. "I think people should be happy that we are a little bit scared of this." Though he celebrates the success of his product, Altman acknowledged the possible dangerous implementations of AI that keep him up at night.

"I'm particularly worried that these models could be used for large-scale disinformation," Altman said. "Now that they're getting better at writing computer code, [they] could be used for offensive cyberattacks."

"The thing that I try to caution people the most is what we call the 'hallucinations problem,'" Altman said. "The model will confidently state things as if they were facts that are entirely made up."
 
disinformation
The editor-in-chief of Die Aktuelle magazine, which published a fake interview with Formula 1 legend Michael Schumacher, has been fired, according to the publisher's website. Anne Hoffmann had held the position since 2009.
The "interview" was generated by a neural-network chatbot. A representative of the Schumacher family said they intend to take legal action over the incident.
 
Spies in Disguise.
Samsung Electronics has banned employees from using chatbots like ChatGPT on work devices after an internal source-code leak, Bloomberg reported, citing a memo.
“The new rules ... prohibit the use of generative AI systems on company-owned computers, tablets and phones, as well as on internal networks,” the material says.

Employees using AI on personal devices have been asked not to submit any company-related information to such systems.
 
In theory, patterns can be recognized well enough to make usefully accurate predictions about the future. The brain is believed to work that way. The problem is more one of scale than of approach.
Ed
If true (and I believe that’s as good a theory about brain function as any), my question becomes: how close (or far away) are human-engineered systems (e.g. neural networks) from achieving such scale? And what should we expect when they do?
 
More than a thousand experts in high technology and artificial intelligence have signed an open letter calling for a six-month suspension of the training of AI systems “more powerful than GPT-4”, the most popular and advanced neural network of this kind to date.
Among the signatories are researchers from DeepMind, Harvard, Oxford and Cambridge, as well as far better-known public figures: Elon Musk, Apple co-founder Steve Wozniak, philanthropist Andrew Yang, Pinterest co-founder Evan Sharp and many others.

All of them believe that AI carries serious risks for society and humanity. In their opinion, powerful artificial intelligence systems should be developed only once there is confidence that their effects will be positive.

During the proposed six-month pause, the authors of the letter want common safety protocols to be developed.
This should be done jointly with national governments; competent regulatory bodies need to be established, along with systems for distinguishing works created by humans from those created by artificial intelligence.
I sincerely believe that the issue that society must address is morality in relation to our decision making.
 
Indeed. My hope is that if we can focus on that, our fears about AI might work themselves out. I’m not terribly optimistic.
On a subject closer to home, I do think there is potential in terms of e.g. audio circuit optimization once we supply them with suitable data sets and training ‘cues’.
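Purely as a toy sketch of what that kind of optimization could look like (the filter topology, target data and component values below are made-up assumptions for illustration, not anything from this thread): fit the R and C of a first-order low-pass filter so its magnitude response matches a "measured" target curve.

```python
# Toy example: fit R and C of a first-order RC low-pass filter to a target
# magnitude response. The target here is synthetic; in a real workflow it
# would come from the "suitable data set" mentioned above.
import numpy as np
from scipy.optimize import minimize

freqs = np.logspace(1, 5, 200)      # 10 Hz .. 100 kHz
target_fc = 1_000.0                 # pretend corner frequency taken from the data set
target_mag = 1.0 / np.sqrt(1.0 + (freqs / target_fc) ** 2)

def magnitude(params):
    r, c = params
    fc = 1.0 / (2.0 * np.pi * r * c)            # corner frequency of the RC filter
    return 1.0 / np.sqrt(1.0 + (freqs / fc) ** 2)

def loss(params):
    return np.mean((magnitude(params) - target_mag) ** 2)

# Gradient-free search from an arbitrary starting guess (10 kohm, 10 nF)
result = minimize(loss, x0=[10_000.0, 10e-9], method="Nelder-Mead")
r_opt, c_opt = result.x
print(f"R = {r_opt:.0f} ohm, C = {c_opt * 1e9:.1f} nF")
```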

I have recently been working with Whisper AI and have been astonished at its ability to transcribe and/or translate digitized audio in near real time.
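For anyone curious, a minimal sketch of the kind of call involved, using the open-source `whisper` package; the file name and model size are placeholders, and `task="translate"` is the option for getting English output from foreign-language audio.

```python
# Minimal transcription sketch with the open-source whisper package
# (pip install openai-whisper). Smaller models run close to real time on CPU.
import whisper

model = whisper.load_model("base")
result = model.transcribe("recording.wav")   # add task="translate" to translate into English
print(result["text"])
```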

I also hope to encourage use of such systems that are in the public domain, without having one of the corporate entities as a middleman. They’ve not proven themselves responsible stewards of our data in the past. I see no reason to trust them with this.
 
It's not only not-simple, it's a lot closer to impossible. To understand why exactly may require some reading/study. The short answer is that there is no universal morality, nor is most of human decision-making all that rational/logical.
 
Listen to Sam Altman, CEO of OpenAI:
https://abcnews.go.com/Technology/o...eshape-society-acknowledges/story?id=97897122

"We've got to be careful here," said Sam Altman, CEO of OpenAI. "I think people should be happy that we are a little bit scared of this." Though he celebrates the success of his product, Altman acknowledged the possible dangerous implementations of AI that keep him up at night.

"I'm particularly worried that these models could be used for large-scale disinformation," Altman said. "Now that they're getting better at writing computer code, [they] could be used for offensive cyberattacks."

"The thing that I try to caution people the most is what we call the 'hallucinations problem,'" Altman said. "The model will confidently state things as if they were facts that are entirely made up."
Another good reason we should be encouraging local implementations: we can use them to develop techniques for identifying nodes that produce disinformation and that attempt to magnify its effect and extend its reach.
 
It's not only not-simple, it's a lot closer to impossible. To understand why exactly may require some reading/study. The short answer is that there is no universal morality, nor is most of human decision-making all that rational/logical.
While I agree with your basic points, I think ‘impossible’ is a cop out. People with character disorders often seek out, and frequently attain, positions of power and influence, both political and corporate. They then have a great deal of influence over morality (in our societal context).
 
People with character disorders often seek out, and frequently attain, positions of power and influence, both political and corporate.
Sounds like you are referring to psychopathy. It is a personality trait that exists across a spectrum. IIRC there is roughly twice the prevalence of psychopaths in corporate CEO positions as in the general population. The percentage of psychopaths in prisons, IIRC, is closer to 50%. Psychopaths are people who are unable to experience empathy for other people; they don't care about others, only about themselves. Not the right people to be shaping the morality of a population.

The more general problem with defining a universal human morality may be found in Haidt's book. Based on empirical research across multiple cultures, he found that there are six (?) basic moral foundations. The balance of them varies between liberal and conservative segments of society. IOW, the two groups experience different senses of what is moral and what is not. There is no apparent way to make all of society middle-of-the-road in terms of how they perceive morality.
 