Robots and self-driving vehicles are coming!

As I said, and as was noted in the documentary, once AI passes a particular point in time, we as a species not only won't realize it has happened, we won't even be able to conceive, perceive, or imagine that it has happened. The intelligence of AI, geometrically multiplying in outright knowledge, will have as much in common with us as we have in common with a lowly beetle crawling along the floor. AI will anticipate any and all reactions, discussions, hand-wringing, moral implications... all thoughts by the trillions... and come to some conclusions. Do we as a species "bother" to squish that little beetle? No, we have an innate "live and let live" attitude... an attitude we would extend to our honeybees and the like.


------------------------------------------------------------Rick.........
 
Do we as a species "bother" to squish that little beetle? No, we have an innate "live and let live" attitude... an attitude we would extend to our honeybees and the like.

Perhaps. So let's shift the context and re-run the analysis with a minor modification: at some point in the future, it seems likely that the AI-to-human comparison will be like comparing people to ants (it seems we agree here). We like to say that we don't care about the ants. They don't bother us, so essentially we leave them alone. Live and let live, for sure (I agree here, too).

Until those same ants form a line into our kitchens and across our countertops. Then we call Orkin and poison those little buggers and make sure they don't come back! We follow the trail to the nest in the backyard and drop EXTRA poison there, too. Oops, now we have a different outcome to consider...

Even if we don't directly cause harm to AI in the future, we're highly likely to simply "get in the way" at some point...
 
Disabled Account
Joined 2017
There is no end to the number of scary situations that can arise from this technology. You can look at any part of it and think to yourself, "Holy crap! I'm obsolete and worthless now!"

You are giving sentient life the capability and smarts to crush our bones into literal dust and outthink us at any time, then hoping that human compassion (which is fleeting to begin with) will carry over to a sentient lifeform that has a perfect body, never gets old, never dies, never gets sick, with the intelligence of a thousand Albert Einsteins and the philosophical capacity of a thousand Platos, and then asking it nicely not to kill you and strip the planet of resources just so it can get off this boring planet and leave it behind.

The ONLY way to mitigate this is to give artificial intelligence no immortality, no body, no ability to connect to the internet, no ability to grow and absorb information any faster than a human being can, and no way to influence other human beings into doing something that may not be in their best interests, and then hope that some genius android doesn't figure out how to connect to your Wi-Fi or get around its technological restrictions.

That, and for humans to deliberately hold back technological progress: somewhat Amish, though not as far as the Amish. Which basically means living with the computing capacity we have right now, plus whatever further progress can be made up to the point where we no longer need any more computing power.

In a way, gamers are crazy people. They are already displacing reality with fiction. That is basically what's going on with AI and self-driving cars. People are becoming addicted to their technology and forgetting that there is a planet and a universe that they inhabit. I'm scared of what I might become with the next computer upgrade, and I'm scared of the extremely high levels of graphical detail in today's video games. I don't want to replace my life with a fictional world. And that is happening right now, not 20 years into the future.

If I were to do another upgrade to the AMD Ryzen platform with multiple GTX 1060s, I would more than likely just let the computer be used for SETI@home, which I'm doing already.

The fundamental technology for AI is exceedingly dangerous, and I strongly suggest that the human race not develop it any further. It is a technology that should remain undeveloped, and the tools necessary for its development and production should be severely restricted or outright banned.

That means no supercomputers in our future, nor anything beyond what we use for our everyday lives.

Imagine the cabbies of the world being replaced by self-driving cars; they would FLIP. And self-driving cars aren't even fully AI yet.

Terminator is a pipe dream; the war would be over within nanoseconds. All they would have to do is launch a rocket with enough radioactive material into the upper atmosphere to make human beings incapable of existing without a lead suit. There are also chemical weapons that attack human beings, and only human beings, at the genetic scale.

The bottom line is that we are very fragile and very biological. We tend to think of ourselves not as a lifeform but as human beings, but the reality is that we should think beyond ourselves and consider the ramifications of shooting ourselves in the foot with another technology just as dangerous as the atom bomb.

But the important thing to do is to keep calm and build yourself a Faraday cage inside a mountain rich in iron deposits. Or start a farm. Who wants to help with the initial digging?
 
Last edited:
Disabled Account
Joined 2017
Autonomous Vehicles Will Replace Taxi Drivers, But That's Just the Beginning | HuffPost

Will self-driving cars put cab drivers, truckers out of business? - CBS News

The World's First Self-Driving Semi-Truck Hits the Road | WIRED

There is definitely a case for augmented-safety cars where the onboard computer takes over in a dangerous situation. That would bring down the death toll on American roads. But fully automated cars are an extremely dangerous proposition, both financially and philosophically.

Fully automated cars put people out of jobs. Full stop. And that should not happen.
Augmented-AI cars, where the driver stays in control up until the point that a dangerous situation occurs, still keep people in their jobs.

The latter is what should happen.
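
To make the distinction concrete, here is a minimal, purely illustrative sketch of the handoff logic an augmented-safety car might use: the driver keeps control until an estimated time-to-collision drops below a threshold, at which point automated braking takes over. The names, units, and the 1.5-second threshold below are all assumptions made up for the example, not anything from a real system.

```python
# Toy sketch of the "augmented" approach: the driver stays in control,
# and the automation only intervenes when a hazard is detected.
# All names and thresholds here are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class SensorReading:
    gap_m: float              # distance to the obstacle ahead, in metres
    closing_speed_ms: float   # how fast that gap is shrinking, in m/s

TAKEOVER_TTC_S = 1.5          # assumed threshold: intervene below 1.5 s to impact

def time_to_collision(r: SensorReading) -> float:
    """Seconds until impact if nothing changes; infinite if the gap is opening."""
    if r.closing_speed_ms <= 0:
        return float("inf")
    return r.gap_m / r.closing_speed_ms

def control_mode(r: SensorReading) -> str:
    """Decide who should be driving for this sensor frame."""
    return "AUTOMATED_BRAKING" if time_to_collision(r) < TAKEOVER_TTC_S else "DRIVER"

if __name__ == "__main__":
    print(control_mode(SensorReading(gap_m=40.0, closing_speed_ms=5.0)))   # DRIVER (8 s to impact)
    print(control_mode(SensorReading(gap_m=10.0, closing_speed_ms=12.0)))  # AUTOMATED_BRAKING (~0.8 s)
```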
 
Last edited:
Member
Joined 2016
Paid Member
And, it only becomes a possible threat once it can control its power source, the generation of power, the ability to mine raw materials, process them into new complex parts, assemble, deliver... Not trivial.
And it seems entirely likely that it would be benign. Ever read Iain M. Banks's SF novels? Not all possibilities are dystopian by any means.
 
Disabled Account
Joined 2017
Exactly. In a lot of SF, they are friendly, run things for us that we are incapable of, and coexist well. Seems to me more likely than a war...

I'm not talking about the jobs we are incapable of doing.

Delivering a pizza or operating a teller machine isn't rocket science and those jobs have been handed to the computers/robots.

Somehow I doubt that people in the food service industry, or iron foundry industry, or retail industry, or any other industry which has been displaced by automation would agree with you.

I doubt those people will then say to you that robots are friendly.

It's one thing to think of it in financial terms, where your company is doing well, it's being more efficient, and the robots are kind to customers. It is another thing entirely to be on the receiving end, laid off because a computer took your job.

You can say that these people have found other, better jobs with higher pay, but what about the new incoming generation of under-25s? The ones who cannot get a job anywhere because all of the low-paying menial jobs have been fully automated.

Or the people without any education at all. Are they to become outcasts of society? Hell, they already have.

Even if this utopia of a fully automated society were to take place and we were all out of a job, people would still yearn for something to do, and the theoretical sciences aren't exactly in need of a million people. That level of education is extremely difficult to come by, or even achieve, for somebody without a college education.

Your statement also ignores the fact that this is just the first generation of AI, one which follows predetermined commands and has no independent thought of its own. No desire to be more than just a hammer.

Basically, what you are doing is picking and choosing the parts of an argument that you like and forming a grandiose utopian dream without the realities of continual progress and financial warfare involved.

If a company has the opportunity to put taxi companies out of business with a fully automated car, it will do so in the blink of an eye. And this is exactly what Uber has planned for the future: once it's possible, most if not all of its fleet will be moved over to automated cars.

You are forgetting the human element of conquest and greed in all of this. Companies are using the excuse of financial strife (and because they can; they're a company, how DARE you tell us what we can and cannot do!) to wipe out an entire generation's financial and working future.

If the transition is to be handled properly, then the people being laid off need to be able to find replacement work. But that isn't happening.
 

Attachments

  • cfa5e7554ab938bfb6f12b8956161ebc.jpg
  • DRU-dominos-pizza-robot-640x360.jpg
Last edited:
AI is generally broken down into three categories:

Artificial Narrow Intelligence. Think of machines like Watson, Deep Blue, Siri, AlphaGo, etc. These AIs can operate only in a narrowly defined field. This is where our technology is today.

Artificial General Intelligence. This is AI with "intelligence" (however you want to define it) equivalent to that of a typical human. We're not there yet, but it's coming; the operative question is "When?" This involves all of those things, like consciousness, that we feeble humans like to claim are distinctly human. But things that used to be in the "humans only" column (driving, playing chess, playing Go, etc.) have lately been moving over to the "machines" column.

Alan Turing's landmark 1950 paper laid out his argument that "consciousness" is just another algorithm. Any task, no matter how complex, can be iteratively broken down into smaller and smaller tasks that can be replicated; this is the basic idea behind computer science and finite state machines. In 1950 he predicted, in effect, cloud processing and cloud storage by the year 2000, though he used the words "infinite storage computer" and a machine "that would run at 10^9 speed." He pretty much nailed both of these.

At some point this machine, though not "actually" conscious, will be able to replicate the outcomes of consciousness. Yes, it will be a simulation and not "real." At first it will be a poor simulation, perhaps achieving only 50% of what we call "consciousness." But as time goes by the simulation will improve to 60%, 75%, 85%, then 95%. Then perhaps it will improve to 97%, 98%, 98.5%. It may never really "get there," but it will continue to improve. So the operative question becomes: at what point can you no longer tell the difference? At that point, "genuine" or not, computers will have achieved consciousness.

We're probably a few decades away from this happening, but we seem to be hard at work to achieve it. We're screwed when it happens; I don't care how science fiction or fantasy depicts our utopian or dystopian future. On the flip side, perhaps humanity will put its bigotry aside, unite, treat all human beings as fellow human beings, and aim our collective hatred of others toward the machines instead. That will make the robots happy for sure...
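
To give a toy flavour of that decomposition argument: the finite state machine below (a coin-operated turnstile, sketched in Python) shows how a "behaviour" reduces entirely to small, replicable table lookups. This is only an illustration of the principle, not a claim about how consciousness would actually be implemented.

```python
# Toy finite state machine: a turnstile that lets one person through per coin,
# expressed as nothing but small, replicable steps in a transition table.
TRANSITIONS = {
    ("LOCKED",   "coin"): "UNLOCKED",   # paying unlocks the arm
    ("LOCKED",   "push"): "LOCKED",     # pushing a locked arm does nothing
    ("UNLOCKED", "push"): "LOCKED",     # one person passes, arm re-locks
    ("UNLOCKED", "coin"): "UNLOCKED",   # extra coins are ignored
}

def run(events, state="LOCKED"):
    """Feed a sequence of events through the table and return the final state."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

print(run(["coin", "push", "push"]))  # LOCKED: the behaviour emerges from lookups alone
```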

The final category of AI is Artificial Super Intelligence. This is when AI starts to learn at rates that dwarf the human capacity to learn, because we need to do dumb things like waste time eating and sleeping. This is when AI surpasses all of humanity, and everyone who is "in the know" thinks there is a VERY short timeline between Artificial General Intelligence (equivalent to one human) and Artificial Super Intelligence (equivalent to ALL of humanity).
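
As a back-of-the-envelope illustration of why that timeline could be short: if capability doubled on some fixed schedule while the human baseline stayed flat, the gap would explode within a few years. The numbers in the sketch below (parity at year zero, doubling every year) are pure assumptions chosen for the example, not predictions.

```python
# Toy arithmetic behind the "very short timeline" claim: exponential growth
# against a flat baseline. Starting value and doubling period are assumptions.
human_baseline = 1.0        # capability of one typical human (arbitrary units)
ai_capability = 1.0         # assume AGI parity at year 0
doubling_period_years = 1   # assumed; nobody actually knows this number

for year in range(0, 11, doubling_period_years):
    print(f"year {year:2d}: AI is {ai_capability / human_baseline:,.0f}x one human")
    ai_capability *= 2
```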

Don't waste your time reading science fiction "novels." Spend some time reading books about AI written by people who know what they are talking about, books that explore the concepts from a scientific and logical perspective. Here are a few that are excellent:

Superintelligence

Rise of the Robots

Our Final Invention

If you're not into the whole "AI will kill us" future-telling sort of thing, then start with some more fundamental impacts of today's technology like these:

Cognitive decline that results from technologies that automate tasks: The Glass Cage

or, job decline that results from technologies that automate tasks: The Second Machine Age
 
Last edited:
Member
Joined 2016
Paid Member
VF - I'm not discussing the garbage that passes for economics currently, or the way we currently use automation.
I was responding to the consideration of whether - at some currently fairly distant point in time - real AI would be good or not for the human race. How we get there is indeed a problem as you illustrate at length. But, now we are into politics and that's not allowed!
 
Disabled Account
Joined 2017
VF - I'm not discussing the garbage that passes for economics currently, or the way we currently use automation.
I was responding to the consideration of whether - at some currently fairly distant point in time - real AI would be good or not for the human race. How we get there is indeed a problem as you illustrate at length. But, now we are into politics and that's not allowed!


Nothing in your post (Robots and self-driving vehicles are coming!) describes whether AI would be good or not for the human race. All you did was say AI was good in the SF area. Are there earlier posts where you've described something within the parameters of this line of discussion?

Just saying that it's good in your eyes does not make a solid basis for whether it would be good for everybody.

And now you've just become emotional. Worst possible thing to be doing.

I also see no mention of any political topics in this thread so far. I'm speaking strictly from a human viewpoint. What we are going to experience and what we ARE experiencing. That is not politics. When you invoke the policies of a political party, that is politics. I'm speaking from a philosophical and human viewpoint. As a college graduate would.

The current political discussion in the US regarding the loss of jobs to overseas competition has NO relevance at all to this thread, and I'm guessing that is what you are referring to.

Are you saying that if I were born into a world where I knew nothing of the existence of politics, I would not be allowed to express my opinions on something which will invariably change my life, and the lives of everybody else in the human species, for the worse? Or are you just trying to censor me by screaming bloody murder, "he's talking politics!"?
 
Last edited:
If no one has posted this yet:

YouTube

A great video explaining how, this time, it's different.

Summary: The industrial revolution made human muscle obsolete. Machines now do the heavy lifting for us.

The AI revolution will make human minds obsolete. Machines will do the heavy thinking for us.

The question is... what's left for us to do then?
 
Member
Joined 2016
Paid Member
All you did was say AI was good in the SF area. And now you've just become emotional. Worst possible thing to be doing.

Nope. Just responding - this is the lounge, remember - to doomy comments on AI in the future.
And, nope, not emotional at all. Just expressing opinions in a colloquial way.
And how we deal with what you refer to, the problem of automation (none of it really qualifies as AI) and how it relates to employment, etc., is essentially political, and we are supposed to avoid that here!
That was supposed to be a light-hearted comment to end the dialogue, as we are not actually disagreeing!
 
Last edited: