AI, for or against?
Can I be both or do I have to take sides? 😉
I've used ChatGPT for condensing text. I tend to write a bit too much and wanted the text to fit in a specific block of my website. So I fed ChatGPT what I wrote and asked it to reduce it by 50%. It did so quite well. It came up with some sentence structures that I would not have thought of. I, of course, edited and proofed the text before I used it.
I've also used it to analyze data. I downloaded all my orders from last year into a spreadsheet. Deleted all personal information out of it and fed it to ChatGPT. I then asked it to generate a pie chart that showed my orders and total revenue from these regions: Canada, USA, EU, Other. That's stuff I tried to code up in Python years ago and got stuck on how to organize the data into geographic regions automatically. I tried to find a list of countries grouped by regions, got side tracked, and lost interest in the project. With ChatGPT I had the data I needed within a few minutes.
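For anyone curious, here is roughly what that boils down to in Python. This is only a sketch: the file name, the column names (Country, Total) and the tiny region lookup are illustrative assumptions, not my actual data, and a real lookup would need the full country list.

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical region lookup -- only a handful of countries shown for illustration
REGION_MAP = {
    "Canada": "Canada",
    "United States": "USA",
    "Germany": "EU",
    "France": "EU",
    "Netherlands": "EU",
}

orders = pd.read_csv("orders_last_year.csv")   # assumed columns: Country, Total
orders["Region"] = orders["Country"].map(REGION_MAP).fillna("Other")

# Sum revenue per region and draw the pie chart
revenue_by_region = orders.groupby("Region")["Total"].sum()
revenue_by_region.plot.pie(autopct="%1.1f%%")
plt.show()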
That said, I've also had a bad experience with Google's AI. I recently wanted to convert 26 dBu to volt and I was too lazy to fish out my calculator and do the math, so I figured I'd let Google do it. Here's the result:
Google correctly says that 0 dBu is 0.775 V RMS. The definition is that 0 dBu is the voltage required to dissipate 1 mW in 600 Ω. Google is also correct in that 26 dBu is then 10^(26/20)*0.775, but it does the math wrong. 10^(26/20) = 19.95. 19.95*0.775 is 15.46 V RMS. Not 1.07 V RMS. Bad AI. Bad!
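For reference, the whole conversion fits in a few lines of Python, which makes it easy to sanity-check whatever the search engine tells you:

import math

def dbu_to_vrms(dbu):
    # 0 dBu is defined as the voltage that dissipates 1 mW in 600 ohm (~0.7746 V RMS)
    v_ref = math.sqrt(0.001 * 600)
    return 10 ** (dbu / 20) * v_ref

print(dbu_to_vrms(26))   # ~15.46 V RMS, not 1.07 V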
AI is like any other tool. It's powerful for those who master it and possibly scary for those who don't.
Tom
Clearly I misunderstood. I was trying hard to veer away from a breach of forum rules. So I'm not going to discuss the other thing... Ostrich policy over the truth.
AI will be abused. We won't be able to stop it in the same way nuclear proliferation has at least been slowed. The requirements to build an AI are much much lower than for a nuclear weapon.
Perhaps the current angst over a better Eliza will prepare us for Artificial General Intelligence; then we can start arguing about sentience...
Witness today that it is illegal in Great Britain to pray in public.
Total nonsense. I don't know which AI or Dummkopf you have been listening to, but it's nonsense.
This thread seems to be discussing LLMs rather than AI. LLMs are not the only type of so-called AI, nor are they owned only by the US, China.... The bit that is owned is the training data sets. They cost a lot to generate. I say owned; stolen is better. "AI" bots are crawling the web, stealing copyrighted material and using lots of underhand tactics to scrape sites, starting with ignoring robots.txt. Since the crazy concept of "vibe coding" came along, plus the likes of m$ copilot, some sites are seeing such a hammering by "ai" bots that it amounts to a DoS attack.
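For contrast, a well-behaved crawler is supposed to check robots.txt before fetching anything, and Python even ships a parser for it in the standard library. A minimal sketch (the site URL and user-agent string below are placeholders, not any particular bot):

from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")   # placeholder site
rp.read()

# A polite bot asks before fetching; the complaint above is that many "AI" scrapers skip this step.
print(rp.can_fetch("ExampleBot/1.0", "https://example.com/some-page"))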
This thread seems to be discussing LLMs rather than AI
Yes, the meaning of AI has been corrupted by the press and commerce, who all have things they want to sell. Any analysis algorithm is touted as AI.
So people have adopted "Artificial General Intelligence" to mean what AI used to mean.
I suspect that as time passes it will become easier to generate an LLM for a specific task using fewer resources. Also, of course, if you're prepared to wait longer between releases you can get by with much less powerful kit.
A domain-specific LLM will of course need less training.
Perhaps all of those bitcoin mining operations might move to something else...
As for the other thing, there are exclusion zones around some premises that offer specific services. People silently praying in those exclusion zones have been arrested.
That's my last word on that particular subject.
As the TS said, DIY and AI exclude each other by definition.
Art should be a matter of talent.
The use of AI in research and science is an opportunity which should be taken advantage of, but in strictly defined and secured environments.
One thing that fascinates me is that the 'birth' of AI pretty much coincides with the REAL development of Quantum Computers!
I can only wonder what the 'marriage' of the two may create 😳
Alan Turing was working on AI in the early 1969s, and it was certainly a topic in the 1960s, and in 1971 when I took Computer Programming 101.
1950's you mean? Turing died in 1954. His famous article about intelligent machines and the imitation game (Turing test) was published in 1950.
Yes, 1950s, typo, and BTW his Mind article has always been immensely over-rated. It surprises me that it even got published. There is a massive self-contradiction on page 1.
The most striking way of avoiding the question of whether machines can think is to redefine "think" and "machine". "Machine" can simply be defined as a computer, and "think" as the conscious process of reasoning and imagination. At present, science does not understand how a human brain or an animal brain generates conscious reasoning, imagination and thought. Therefore, it is presently impossible to create a machine that can think in the same way a human brain does. Machines can presently only mimic the subconscious side of human "thought".
Won't matter much if machines just carry out radical decisions put in by the programmer. Sometimes humans deliberately don't want machines to do conscious reasoning.
“If you do the actions, I don’t need to have a conscience or do stuff against my principles”.
Just have a look at fully automated slaughterhouses to see what is possible.
... Sometimes humans deliberately don’t want machines to do conscious reasoning ...
Sometimes? I would say most of the time or, to a greater or lesser degree, even always.
Our brains are not inherently logic machines. Linear, deductive logic is something each of us has to learn and acquire individually, and we only ever succeed at that to a greater or lesser extent. So most (read: all) of us retain some more or less irrational foundations in our thinking and our acting.
AI, by contrast, claims to follow logic from the outset, and it is already quite good at it. Once it has grown up a bit further, AI might therefore have the potential to confront anyone (read: everyone) with his or her own irrationality. We will not be amused, because it is seldom pleasant to be told that you are wrong, especially when the correction comes from a machine. Furthermore - do we really want a life based on logic? History has shown well enough that one does not need any logic skills in order to be "successful" in life.
And then, there is the view that progress has come with at least three major insults to mankind:
1. With Copernicus, the earth was no longer the center of the universe. The church rather disliked hearing about that one.
2. Since Charles Darwin, we can no longer consider ourselves the crown of creation. Once again there was a lot of opposition to this demotion of human excellence, and still is - think of the recent re-introduction of creationism as a school subject in the US ...
3. Then Sigmund Freud published a great deal about das Unbewusste, the unconscious. It's quite an insulting paradigm, implying that we are not master in our own house.
Finally, and in this succession, AI inevitably looms as the fourth major insult to mankind.
For or against AI, then? You will see both. And it is foreseeable that in the near future some clairvoyants will start publicly burning down quantum computers in order to save the world.
I consider AI to be the equivalent of negative feedback in our lives. And I share the opinion of Bart Locanthi: "An amplifier should be designed for low distortion and wide bandwidth without feedback. Negative feedback is then added to make an already good design perform even better; it is not used to "clean up" problems in the basic design."
AI must serve to bring more happiness, not create problems.
Primum non nocere.
BUT > [in analogy] what if aspects of AI actually led to positive feedback > creating a situation of full-blown oscillation?
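To put some purely illustrative numbers on that analogy: the closed-loop gain of an amplifier with open-loop gain A and feedback factor beta is G = A / (1 + A*beta). With negative feedback the gain is tamed; if the loop somehow turns positive and A*beta heads towards -1, the denominator approaches zero and the "gain" runs away, which is the oscillation scenario above. A quick Python check with made-up values:

# Illustrative open-loop gain; the beta values are arbitrary examples
A = 100_000.0

for beta in (0.01, 0.001, -0.0000099):
    G = A / (1 + A * beta)
    print(f"beta = {beta:+.7f}  ->  closed-loop gain = {G:,.1f}")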
‘The question as to whether computers can think is about as interesting as whether submarines swim’.
- Edsger Dijkstra
There is no intelligence in AI. It is just software that gathers information and spews it out in an interactive and often farcical manner.
Julia Neigel gave an instructive interview on the apolut platform about the violation of the rights of authors and musicians:
It is so: everything humans creatively produce is usually protected by law.
But already during the rise of the internet these laws were ignored and not enforced, allowing YouTube and the porn industry to grow economically on the backs of others.
So we cannot expect new technologies like AI to respect the law. And the powerful people behind it promote it for their own profit.
Many people speak of Fukushima and Chernobyl.
But the most polluted areas in the world are Mayak and Hanford, where all the nuclear material for the atomic bombs was produced.
Cleaning up the Hanford site costs 3 billion dollars a year, and they don't know if it can ever be cleaned up.
Read about it on Wikipedia:
Hanford Site and Mayak
Interview Julia Neigel:
https://t.me/tagesdosis_KenFM/5603