Chatbot hallucination

Not surprising at all if you consider how they work, and the garbage in factor, just amusing
Yes, I agree; that was exactly my point of view before I learned how badly a so-called "AI hallucination" can go wrong.

Since I believe a chatbot like Gemini can be regarded as a machine/robot, Isaac Asimov's Three Laws of Robotics came to mind.
www.britannica.com/topic/Three-Laws-of-Robotics

"(1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law." Asimov later added another rule, known as the fourth or zeroth law, that superseded the others. It stated that "a robot may not harm humanity, or, by inaction, allow humanity to come to harm."

The goal is that a machine/robot should not harm a human being.

Since Gemini has been announced with great fanfare as one of the most advanced AI systems (as is also clear in the video in the article I linked in the first post), and since these machines are also equipped with filters that should prevent certain errors, I believe that a message like the following

[Attachment: chatbot.jpg]

is not free from "risks" if read by users who are underage, or incapable/unaware of having a correct view of what is happening.
 
I think it is more likely an example of the billionaire class driving the AI revolution. I believe it was recently announced that Elon ordered 100K Nvidia chips for AI. The top class would like nothing better than to eliminate the pesky humans who think, replacing them with obedient robots that do what they are told. I'm always reminded of the Simpsons episode where Homer is invited onto Bill's blimp to watch a football game. Homer says people look like ants from up here. Bill responds, "They are ants."
 
Your comments are really interesting, especially the one about the ants, but I still can't understand how it could happen that a student chatting with Gemini about the challenges and solutions of adult aging gets the response that humans are **** and must die.
I just don't understand it.
The student seemed to be very upset about it, for days.

"The student, who received the message, told CBS News he was deeply shaken by the experience. "This seemed very direct. So it definitely scared me, for more than a day, I would say."
The 29-year-old student was seeking homework help from the AI chatbot while next to his sister, who said they were both "thoroughly freaked out.""
 
This demonstrates the real danger in LLMs: that people take them too seriously. These "hallucinations" are just part of what these things put out. They don't know right from wrong, truth from falsehood, good from bad, etc. They're trained by feeding them English text, from every book written in the last two centuries to every online message board, including the Voluntary Human Extinction Movement and 4chan. They're NOT some magic oracle.
 
This demonstrates the real danger
Yes, I talked about this some time ago: the danger is not in AI itself, but in what people believe AI is, and in how they consequently use it and form their expectations of it.

Of course, the human terminology used serves that purpose.
Just as an example, a machine cannot learn, but that is how the process of feeding it data is described.
Nor can a machine think.

I wonder what will happen when such errors are no longer as blatantly recognizable as in the example above, but instead appear plausible.
That's the real danger.
 
When people don’t know right from wrong, fact from fiction, and believe everything they read/see on the internet. Don’t worry, in 25 years Google, Microsoft, and the educational system will have everyone trained properly so they don’t - and they’ll just take whatever garbage the AI engine spits out as gospel. Very easy to lead people around by the nose, and right over a cliff. It won’t even matter who is in office.
 
Just as an example, a machine cannot learn, but that is how the data entry process is defined.
Nor can a machine think.

An artificial neural network can change its synaptic weights such that the response to the training data gets closer to the desired response. If you don't want to call that learning, why do you call it learning when essentially the same happens in a natural neural network?
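For what it's worth, that weight-update idea can be sketched in a few lines of Python. This is a toy illustration only (a single artificial neuron trained with the classic delta rule on the logical AND function), not taken from any real chatbot; real networks have billions of weights, but the principle is the same.

```python
# One artificial neuron nudging its "synaptic weights" so that its
# response to the training data gets closer to the desired response.
from math import exp

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

# training data: inputs and desired responses for logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # synaptic weights
b = 0.0         # bias
lr = 0.5        # learning rate

def total_error():
    return sum((target - sigmoid(w[0] * x1 + w[1] * x2 + b)) ** 2
               for (x1, x2), target in data)

before = total_error()
for _ in range(5000):  # repeated exposure to the training data
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        delta = (target - out) * out * (1 - out)  # gradient of squared error
        w[0] += lr * delta * x1  # move each weight a little
        w[1] += lr * delta * x2  # toward the desired response
        b += lr * delta
after = total_error()

print(before, after)  # the error shrinks as the weights adapt
```

Whether you call that "learning" or just iterative curve-fitting is exactly the terminological question under discussion.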

Whether a machine can think is a difficult question to answer because of the lack of a sufficiently accurate definition of the word think. Alan Turing tried to answer it anyway in 1950, see https://www.csee.umbc.edu/courses/471/papers/turing.pdf
 
why do you call it learning when essentially the same happens in a natural neural network?
Simply because a machine lacks, and always will lack, a human psychology and a human lived experience, that is, the experience of a human life.
IMHO

By the way, please note that the above is just my first knee-jerk response, but thanks for the pdf and for your appreciated comment, which I'll return to soon after thinking about it a bit more... 😉
 
An artificial neural network can change its synaptic weights such that the response to the training data gets closer to the desired response. If you don't want to call that learning, why do you call it learning when essentially the same happens in a natural neural network?
Because I believe that everything non-biological should have names that clearly highlight its different origin.
If we really have to call it learning, it is only because we lack a nomenclature dedicated to the purpose, and it is not clear why.
Or maybe it is.

Then we should call it artificial learning, not just learning.
It is misleading, not for those who are sufficiently informed, but for those who are not.

I do not like the allure that this artfully incomplete nomenclature lends to so-called artificial intelligence.
Which probably should not even be called artificial, but imitative or mimicking.

Whether a machine can think is a difficult question to answer because of the lack of a sufficiently accurate definition of the word think.
Perhaps thinking is asking questions, while an imitation of intelligence only gives answers.
Often inappropriate ones.