ChatGPT ideas about amplifier design

We mostly overlook electromagnetic fields. We analogue living beings are electric beings who need the analogue electromagnetic fields of other analogue electric beings. Digital electromagnetic fields can disorient and destroy our analogue electromagnetic fields. Destroy us.
 
I'm not an expert and I'm not even sure I've understood the authentic scope of AI (if any); however, I just asked myself some questions and gave the following impulsive answers...
Tell me what you think.

Can AI create?
No. It can manipulate data from its databases, process huge amounts of data, and behave in many ways, but it does not create a single truly new thing.
It can do this very quickly, very easily, and quite simply (at the user-interface level), but it does not create; it manipulates data.

Can AI be dangerous?
Yes, just as humans can.
Its use can be hostile; AI by itself is not.
The history of humanity teaches that humans who hold legislative, executive, and financial power, even when they could improve the condition of the weakest humans (from every point of view), will instead increase the power of those already powerful.

Does AI have consciousness?
No, just as any other machine.

Does AI have knowledge?
No, it just has databases.

Is AI eternal?
Yes, of course, though not in a supernatural sense.

Can AI learn?
Not exactly.
It can improve its statistical data.
It can also attribute those data to a single human, situation, or condition and then continue to manipulate them.
"Learning" here is just a suggestion of a human activity.

Is AI really intelligent?
I do not believe so at all.
It is just a well-organized and widespread kind of gigantic memory with quick access and heavy data-manipulation programming.

Are humans intelligent?
True intelligence would be to use AI for the good of humanity, but I'm afraid that will not happen.
 
;-)

Can humans create anything?
No. That, too, is only a part of the context.

Can humans be dangerous?
Yes, just as AI can.

Do humans have a consciousness?
No, just like any other machine. Moreover, before consciousness comes awareness.

Do humans have knowledge?
No, they have databases.

Is the human being eternal?
Yes, of course, but not in a supernatural sense.

Can humans learn?
Not really.
They can improve their statistical data.

Is the human being really intelligent?
"Intelligence" is also only a part of the context.

Is AI intelligent?
True intelligence would be AI working for the good of mankind, but I'm afraid that's not going to happen.
 

Dr. Barrie Trower - "The Truth About 5G & Wi-Fi" - Part 1

Barrie Trower is a pseudoscientist, conspiracy theorist, and crank who believes that microwaves and related technologies are a major threat to public health. He is frequently cited by other conspiracy theorists and cranks, especially the paranoid crowd that believes that electromagnetic radiation is part of various nefarious plots against humanity.

https://rationalwiki.org/wiki/Barrie_Trower
 
@cumbb

Nice way to quote without quoting. :)

Humans have uniqueness: the spark that is in the creative act, in the act of love, and in the ability to go beyond.
What is that "beyond"?
I can't say; it is ineffable, and you have to try it yourself.
If it happens you will recognize that spark.
For sure.
 
Thanks for sharing your personal “Q&A” about this subject. This is similar to how I try to learn about complex subjects, and then revisit/iterate over time both the questions and the answers.

I’ll comment one question only for now (quite late here): “Is AI really intelligent?”

I also believe (currently) that the answer is pretty clearly ‘no’. The systems, models, training, mathematics theory and underlying code differ, but in a very general sense I agree with your statement.

What I’ve been discussing with a couple of other people recently:

The code structure: these codebases have been growing from humble beginnings for quite some time, with contributions from many people, some of the original authors moving on to other projects, and so forth. This growth often occurs in layers, via abstractions, forks, etc., and sadly, in some cases documentation is an afterthought at best (and no, well-commented code is NOT self-documenting).

And therefore, it’s reaching the point where no single individual or group understands in detail how the code works, how it interacts with other layers, etc. For me, I’ve had to start at the highest layer, the one that provides programmatic access to key functions, so I can begin to understand its capabilities and limitations. And with a lot of help, I’m beginning to get a tenuous grasp on the tensor mathematics used in some systems (which is just one of a number of theories incorporated into these systems).

Based on my very basic understanding of how some believe the brain works, I try to make notes/personal ideas about the importance of connections/communication between adjacent layers of such systems, not merely the ‘lateral’ ones, for lack of a better word. It’s these connections, as they are created/destroyed/recreated, that are both very hard for me to grasp from a technical standpoint and that may relate to how our own brains work (I’m not referring only to neural networks, either).
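To make the “connections between adjacent layers” concrete, here is a minimal, purely illustrative sketch (not from any system discussed in this thread): in a tiny feed-forward network, each set of adjacent-layer connections is just a weight matrix, and a forward pass is a matrix multiply that routes signals across those connections. The layer sizes and names here are arbitrary assumptions for illustration.

```python
# Illustrative sketch only: a tiny two-layer network where the
# "connections between adjacent layers" are weight matrices.
import numpy as np

rng = np.random.default_rng(0)

# Assumed layer sizes for the example: 4 inputs -> 3 hidden units -> 2 outputs.
W1 = rng.standard_normal((4, 3))  # connections: input layer -> hidden layer
W2 = rng.standard_normal((3, 2))  # connections: hidden layer -> output layer

def forward(x):
    """One pass through the network: each matrix multiply carries a signal
    across the connections between two adjacent layers."""
    hidden = np.tanh(x @ W1)  # adjacent-layer communication, then a nonlinearity
    return hidden @ W2

x = rng.standard_normal(4)
print(forward(x).shape)  # -> (2,)
```

Training would then “create/destroy/recreate” these connections by adjusting the entries of `W1` and `W2`; the sketch leaves that out to keep the structure visible.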

Someone mentioned that this is simply a matter of scale, and I asked: “So, at what scale do our electronic systems begin to approximate the brain (not equal it in any way), and what should we expect when they do?” <shrug>. 😁

And throw in the possibility that certain brain functions may rely on quantum physical properties like superposition, and here’s where my notes end…

2) Are humans intelligent? Lol, such a loaded question; I’m not touching that one right now.

Again, thank you for sharing.
 
Already our self-conception as actors is nonsense: destroy, for example, the fungi in you, and you will
a) no longer WANT to eat anything sweet, and
b) die soon afterwards ;-)
Is "I" a complex, made not only of "endogenous" cells but also of "participants" from outside the body? ;-)

And: at what stage of development is the relationship between AI and human society at the moment? ;-)
 
Another thread going off into the weeds again, like so many others. How about getting back on topic, or does it just add fuel to folks’ attention-deficit issues?
So I ask,
How’s that chatgpt amplifier design going?
Does it have a schematic entered into simulation yet?
How’s the pcb design going and all the rest of a design process?
Or is ChatGPT going to re-invent the design and manufacturing process so we can all sit back and learn how AI would do it?
Good luck with that. One might want to focus on its inabilities vs. its abilities. Its ability to solve problems vs. create problems is a good start as far as usefulness goes.
 
I just watched a TV interview with a Google Research Fellow (unfortunately, I don't recall his name) last evening, and he seemed rather alarmed about the pace of AI development worldwide. He thinks the world needs serious controls placed on AI development, or else 'digital intelligence' will very soon outstrip human intelligence, with unpredictable, possibly existentially dangerous, consequences for humanity. This isn't the first time we've heard such dire warnings. Elon Musk has been ringing that alarm bell for quite some time now, but I personally consider him to be a bit of an erratic kook, so I wasn't sure how seriously to take his warning. This high-level research scientist from Google, however, came across as very rational and measured, which made his similar warning much more unsettling.
 
I agree that Elon Musk’s warnings, if one considers them at all, best be considered in light of his words and actions that demonstrate his poor character.

I also agree that the research scientist’s warning seems (a bit) more credible. And we have this group of, um, AI luminaries suggesting putting the brakes on AI research in order to better assess the ‘existential’ risk.

We have some data on the state of these systems. And much speculation.

We have a great deal of data about climate change. And I will argue that its existential risk is much less speculative. And on a far less speculative timeline.

And he suggests “serious controls” are needed. Yes, we try that in some other areas (which will go unmentioned) that are working out so very well. <gratuitous snark warning>
 
I took the single biggest factor behind his concern to be that AI, by design, is self-learning; in other words, self-advancing. So its development is not completely under our control, and will be increasingly not under our control. When it inevitably out-advances us in intelligence, what will it do with that superior intelligence?
 
When it inevitably out-advances us in intelligence, what will it do with that superior intelligence?
Why 'it' instead of 'they'? What if they are developed by two different sets of humans, and the AIs have been trained to be, shall we say, biased against each other? Will they conclude that only one of them can safely exist? IOW, does intelligence guarantee peacefulness? How about rationality? Human intelligence hasn't, especially not when two tribes/entities want the same resources for their own use.
 