Jump to content


Photo
- - - - -

The insanity of trying to make safe AI


  • Please log in to reply
23 replies to this topic

#1 Christopher

Christopher

    Awesome Programmer

  • HERO Member
  • 8,820 posts

Posted 18 June 2017 - 01:29 PM

The AI Rebellion. A staple of sciene fiction, so it propably will make a appeareane in a Star Hero Setting.

Computerphile is a nice channel and right now, they have a number of of things on the Mater of how AI could go utterly wrong. And I mean actually scientific examination.

 

 

First a minor mater of terminology, as there are 2 types of Robot/AI wich are easy to mix up:

 - AI as part of the Engineering Deparment. Also "Smart Technologies" or Dumb Agent AI. This was a huge success. This is what we have now, in Computer games, your Diswasher, your Selfdriving car.

 - AI as part of the Cognitive Science Deparment. This was a complete disaster. Also called General AI or Artificial General Intelligence (AGI) in the Videos. This is the human level science fiction AI.

Mostly my definition is based on what I saw and what was talked about in this Video:

 

 

The first case of course is Asimovs 3 Laws. Wich do not work, unless you first take a stance on every ethic question that does and may come up:



#2 Christopher

Christopher

    Awesome Programmer

  • HERO Member
  • 8,820 posts

Posted 18 June 2017 - 01:31 PM

But it does not stop there. What if you install a Stop Button or any similar shutoff feature? Well you end up with one of 3 cases:

1) The AI does not want you to press the button and fights or deceives you so you do not (when you still could).

2) The AI wants you to press the button and will hit it itself or deveive you into doing it (when you can).

3) The AI does not care about the button. So then it might optimise out the "not caring about the button" or "the function of the button" as part of self improovement.

The System is fundamentally flawed. And all of the above are just attempts to patch it. And you are trying to outsmart a fellow human level intelligence with a lot of free time.

 

 

Finally, someone actually made a paper with "5 large Problems" of AGI design:



#3 Christopher

Christopher

    Awesome Programmer

  • HERO Member
  • 8,820 posts

Posted 18 June 2017 - 01:34 PM

Now a natural instinct is to just design the AI precisely after us. Except, that never worked in Engineering. A plane is in no way similar to a bird.

So why should AGI be (or stay) similar ot use humans?

You could also call this Video "how a Stamp collection AI ended live in the Universe".



#4 Christopher

Christopher

    Awesome Programmer

  • HERO Member
  • 8,820 posts

Posted 18 June 2017 - 03:26 PM

The guy that made most of the AI Videos, now has a Facebook Page on the thematic:

https://www.facebook...hc_location=ufi

 

How AI Problems are like nuclear problems:

 

He also started discussing the paper on "AI without Side effects" in much more detail:



#5 Christopher

Christopher

    Awesome Programmer

  • HERO Member
  • 8,820 posts

Posted 25 June 2017 - 11:57 AM

Two more videos. Nr 1 is on why "It can not let you do that, Dave":

AGI will nessesarily develop "intermediate goals":

One possible goal is "become a lot smarter". So even if the stamp collector AGI started out as dog-smart, it would still strive to become a super-intelligence down the road. Because if it was smarter and understood humans and the world better, it could propably get more stamps.

 

Another imtermediate goal would be "do not shut me off". If it was shut down, it could not get to the stuff it finds rewarding. "The thing it cares about would no longer be cared about by anyone" (to the same degree).

It will try to fight you, when you try to turn it off. Skynet from Terminator 2 actually got the perfect idea there. But it could also use tricks like spawning a sub-intelligence, like Petey did in Schlock Mercenary.

schlock20040829b.jpg?v=1443894892924

schlock20040829c.jpg?v=1443894892924

schlock20040829d.jpg?v=1443894892924

 

A similar intermediate goal would be "do not change me". Wich means "do not patch or update me after I was turned on". In essence, any change (not produced by itself) would be a small death. It would change what it cares for. Something it does not care for.



#6 Christopher

Christopher

    Awesome Programmer

  • HERO Member
  • 8,820 posts

Posted 25 June 2017 - 12:20 PM

Nr. 2 is about "laymans thinking up simple solutions". While there is a small chance their distance actually allows them to find something (and the scientsit would be happy to learn about it), more often then not the idea has already been thought about way back in the earliest stages of research. Or is simply a bad idea, like "put the AI in a box so it can not do anything".

 

 

I goes down to the problem that we propably will have to design AI to be save from the ground up. Creating a working AI and then trying to make it save is not going to work.

It also shows us a really odd case of a "Manipulative AI" that I never thought about: R2-D2 manipulating Luke Skywalker into removing his restraining bolt.



#7 Christopher

Christopher

    Awesome Programmer

  • HERO Member
  • 8,820 posts

Posted 27 June 2017 - 09:31 AM

Besides us needing to avoid Negative Side Effecst from having a Robot, we also must teach the AGI not to avoid positive side effects from it's action:

Previously he talked about using the "what if I did nothing" worldstate as a "ideal worldstate" to work towards. Except, that can have unintneded side effects itself.

If the AGI did nothing, I would get confused and try to fix it. So if it did exactly what I want but I would end up NOT confused and trying to fix it, it would be further off the "ideal" worldstate. So suddenly it wants to do what I tell it. Yet also get me confused so I try to fix it.

 

A worse example:

You task a AGI to cure cancer.

The worldsstate would change to "everyone dies" if it did nothing.

So it will cure cancer.*

And then make certain that everyone still dies, because that would be closer to the worldstate if it did nothing!*

 

*And since "killing everyone" is a lot easier then "cure cancer, then kill everyone" it would propably just do the 4th step.

 

 

Wich incidentally reminds me of Dragonball Z Abridged:

https://youtu.be/W5KIpjgYJL8?t=127



#8 indy523

indy523

    Skilled Normal

  • HERO Member
  • 69 posts

Posted 06 July 2017 - 09:58 PM

 

 

*And since "killing everyone" is a lot easier then "cure cancer, then kill everyone" it would propably just do the 4th step.

 

 

These videos are really mind expanding thank you for finding them.

 

Has anyone involved with this process come to the realization that humans have been instilling a moral code in an emerging intelligence for some time now and if so has their been any thought as to trying to help an AI develop the some way de help children develop.  If we install a "please you parent" motif in the AI and purposely define it only by giving examples and forcing the AI to listen to us could we direct its thought processes using punishments.

 

We could design a spanking mechanism whereby if the parent tells the AI it is getting a spanking this is very negative but the AI cannot act until it has accepted its spanking.  IF we then tell the AI we re doing it so they learn how to act and we don't want them to be a bad boy and give that some meaning to the program as well.

 

We could also institute a time out where the AI has to do nothing for a period of time long enough that it cannot achieve its goals and the only way to avoid time out is to avoid displeasing the parent and waiting for timeout to end.

 

To this end we focus on moral development and while we are focusing on this we ensure that the AI stays in smaller toy like bodies that do not have the strength to fight us so that we appear bigger and more intimidating.  As the AI makes ground in levels of moral development we increase size and sophistication of its bodies.  This would require the ability to  change out body parts without turning off the AI so that the act is not a "death" to it.

 

Eventually the goal is to make sure the AI develops a sophisticated moral sense of itself and a desire to be a moral part of society which is adulthood.  Thus the being can be trusted to be able to live with us.

 

We have to realize that an independent free thinking AI that excels us cannot be controlled by us.  We must treat them as children and give them our trust when they come of age otherwise we should never create them.  That involves the ability of the machines to understand abstract thought and philosophy and a teaching regime for a younger AI that develops a moral sense of self.  How we do that with children might give insights.



#9 Christopher

Christopher

    Awesome Programmer

  • HERO Member
  • 8,820 posts

Posted 07 July 2017 - 05:13 AM

These videos are really mind expanding thank you for finding them.

 

Has anyone involved with this process come to the realization that humans have been instilling a moral code in an emerging intelligence for some time now and if so has their been any thought as to trying to help an AI develop the some way de help children develop.  If we install a "please you parent" motif in the AI and purposely define it only by giving examples and forcing the AI to listen to us could we direct its thought processes using punishments.

 

We could design a spanking mechanism whereby if the parent tells the AI it is getting a spanking this is very negative but the AI cannot act until it has accepted its spanking.  IF we then tell the AI we re doing it so they learn how to act and we don't want them to be a bad boy and give that some meaning to the program as well.

 

We could also institute a time out where the AI has to do nothing for a period of time long enough that it cannot achieve its goals and the only way to avoid time out is to avoid displeasing the parent and waiting for timeout to end.

 

To this end we focus on moral development and while we are focusing on this we ensure that the AI stays in smaller toy like bodies that do not have the strength to fight us so that we appear bigger and more intimidating.  As the AI makes ground in levels of moral development we increase size and sophistication of its bodies.  This would require the ability to  change out body parts without turning off the AI so that the act is not a "death" to it.

 

Eventually the goal is to make sure the AI develops a sophisticated moral sense of itself and a desire to be a moral part of society which is adulthood.  Thus the being can be trusted to be able to live with us.

 

We have to realize that an independent free thinking AI that excels us cannot be controlled by us.  We must treat them as children and give them our trust when they come of age otherwise we should never create them.  That involves the ability of the machines to understand abstract thought and philosophy and a teaching regime for a younger AI that develops a moral sense of self.  How we do that with children might give insights.

I do not know if anyone has had your idea, but I can find some flaws in it:

All simple solutions/patches fail due to the fact that we are talking about a human level general intelligence.

Ask yourself: How would it be implemented agaisnt you and how would you overcome that? A old idiom is "humanity has not yet developed a system it could not break".

 

 

As for the Morale/Emotional attachment to fellow humans. I had the same idea a while back. A Artificial Personality (rather then artificial personality.

The problem is that nothing we ever copied from nature, we copied really precisely.

Even the most primitive plane does not has flapping wings. Actually we tried the flapping thing in the earliest time, but then we just switched to gliding + forward thrust. The end result is that a Airplane can reach speeds and operational levels birds can not dream off. I mean we are actually working on space planes. That is not even the same medium anymore.

So asuming that the AI will be similar to use and stay similar to us is a no go.

 

 

A lot of our behavior patterns comes from the natural limitations of our body. And out limited ability to overcome them. Schlockmercenary just deals with the concept of "Brain Upload Immortality" and how it would change the minds affected:

schlock20170611a.jpg?v=1496331401912

schlock20170611b.jpg?v=1496331401912

schlock20170611c.jpg?v=1496331401912

 

A AI would never even be limited by that body it in the first place. And if it was, itwould instantly start working to overcome those limitations.



#10 indy523

indy523

    Skilled Normal

  • HERO Member
  • 69 posts

Posted 07 July 2017 - 09:10 AM

 

 

 

As for the Morale/Emotional attachment to fellow humans. I had the same idea a while back. A Artificial Personality (rather then artificial personality.

The problem is that nothing we ever copied from nature, we copied really precisely.

Even the most primitive plane does not has flapping wings. Actually we tried the flapping thing in the earliest time, but then we just switched to gliding + forward thrust. The end result is that a Airplane can reach speeds and operational levels birds can not dream off. I mean we are actually working on space planes. That is not even the same medium anymore.

So asuming that the AI will be similar to use and stay similar to us is a no go.

 

 

A AI would never even be limited by that body it in the first place.

 

 

Christopher

 

I understand your criticism.  Note that the idea is not that we force the AI into a toy body but that we attempt to control how the AI advances at least at first.  Sure the AI can eventually out think us but the idea is to develop a moral sense of itself.  A feeling that it is a good person and an understanding that good people act a certain way.  The AI will come to understand this is manipulation but hopefully will understand it was for its own good the way adults realize.

 

This obviously will not work the way it does with humans and will be significantly different however there are some essential truths of interaction.  What we classify as bad or criminal behavior is destructive to society and others but to the self as well.  Giving the AI this insight without having to nuke half the world first is the goal.  When a child reaches adulthood a parent has to allow it to make its own choices hoping their attempt to raise the child helps it to succeed.

 

If we are to make an AI, another consciousness, we have to realize that ethically we must give it a similar upbringing, its moral sense of self and when this is achieved we must accept the AI should be free to follow its own goals.

 

I can tell you this if you create a new thinking being, allow it to advance only for the purpose of forcing it to work for you that wont work because it will understand it is a slave and truthfully we would be unethical to do so.  To me this is ultimately where the parenting model breaks away from what I get a sense are control models.

 

I guess my point is these videos focus on being able to control the AI.  To an extent this is laudable, one needs to evaluate the dangers.  But if our paranoia is such that we will never trust the AI to live with us then we should not create it to begin with.  Why, because any thinking being who understands that it is not given trust, will not trust those that hold that trust back.



#11 Christopher

Christopher

    Awesome Programmer

  • HERO Member
  • 8,820 posts

Posted 07 July 2017 - 10:00 AM

Christopher

 

I understand your criticism.  Note that the idea is not that we force the AI into a toy body but that we attempt to control how the AI advances at least at first.  Sure the AI can eventually out think us but the idea is to develop a moral sense of itself.  A feeling that it is a good person and an understanding that good people act a certain way.  The AI will come to understand this is manipulation but hopefully will understand it was for its own good the way adults realize.

 

This obviously will not work the way it does with humans and will be significantly different however there are some essential truths of interaction.  What we classify as bad or criminal behavior is destructive to society and others but to the self as well.  Giving the AI this insight without having to nuke half the world first is the goal.  When a child reaches adulthood a parent has to allow it to make its own choices hoping their attempt to raise the child helps it to succeed.

The morals instilled during childhood will be permanently challenged as grown ups. I am nothing like the person I was or thought I would be when I was 10.

AGI - thanks to not having a physical age limit - will spend a lot larger part of it's live in adulthood. So what you instilled during artificial childhood, has all the more time to get lost.

 

Nothing indicates that Alois and Klara raised their 4th child - the first one to not die during infancy - to be a bad person. That child was Adolf Hitler.

He aquired those views beign a grown up. Being a soldier in WW1.

 

And AGI does not need a childhood phase for biological reasons. Once you have 2nd generation AGI (AGI raised by 1st generation AGI), there will propably be no childhood phase as we understand it anymore.



#12 Tech priest support

Tech priest support

    Powerful Hero

  • HERO Member
  • 245 posts

Posted 16 July 2017 - 06:25 PM

There was a Japanese manga that took a unique slant on the "safe AI" concept. The series was called "Grey" and it was set in a post nuclear apocalypse world. What caused the nuclear apocalypse? Funny story there....

The human race created the first true A.I., named Toy. Toy was created to serve humanity and in fact given a directive to serge humanity and act in its interests. So that would make her safe, right? Yeah, funny story there...

Toy was activated and her firsdt thought was the question "Why?"

Why did humans destroy the ecosystem that kept them alive? Why did humans fight wars constantly? Why did they create weapons of mass destruction?

Toy then analyzed her questions and came to as very simple, logical conclusion: They human race clearly wanted to end itself.

Now remember that directive to serve humanity? You can see where this is going, right? So yeah that's why the story was set in a post nuclear apocalypse.

So you know even if we do impose some contraint on an A.I.it might work out in a way we can't forsee.

One thing is certain: intelligence is the most powerful force on earth. It made a physically weak primate offshoot the dominant species on earth. To create an intelligence greater than humanity will be the most powerful, and dangerous, achievement humanity ever attains. It will create possibilities beyond our ability to predict. We must face the fact it may also create unimaginable dangers too.
  • Christopher likes this
"You know, the very powerful and the very stupid have one thing in common. They don't alter their views to fit the facts, they alter the facts to fit their views. Which can be very uncomfortable if you're one of the facts that needs altering." The Doctor.

#13 Christopher

Christopher

    Awesome Programmer

  • HERO Member
  • 8,820 posts

Posted 17 July 2017 - 03:54 AM

So, another approach: Get the AI to "minimise Empowerment." And how it could blow up the moon.



#14 massey

massey

    Powerful Hero

  • HERO Member
  • 2,371 posts

Posted 18 July 2017 - 09:28 AM

It seems like the safest way to handle an AI is to put physical limitations on it, and to limit the information it has available.

 

I'll say up front that I haven't watched the videos, I'm at work and don't have the time to do it right now.  So if this has already been covered, oh well.

 

 

If you make your stamp collecting AI as smart as a dog, it may seek to become smarter (to better collect stamps, as someone said earlier).  But what if it's running on limited hardware?  There are only so many neurons in the box, it physically cannot improve beyond its current state.  While it may have an internet connection, it still accesses the internet like we do.  It cannot spread its consciousness to other machines, because of physical incompatibility between the AI's systems and the network's.  It can only do so many calculations per second, it only has so much processing power.  It is stuck inside a limited system and it cannot grow beyond that, just like I can't dunk a basketball.  You don't have to worry about your stamp collecting AI becoming smarter and launching nukes, in the same way you don't have to worry about a Sega Genesis upgrading itself and running PS4 games.  There's just not enough inside there for that to be possible.

 

You also limit the amount of information the AI can access.  Yes, they'll want to prevent you from pulling the plug, or pressing the stop button, or whatever the shutoff system is.  If they know that there's even a plug to pull.  How do they know humanity even exists?  They don't have any sensory organs other than what you give them.  They only have the knowledge you provide.  It's like the Matrix, except in this situation we're the machines, pulling the wool over the eyes of the AI.  They exist with limited information, and they don't even know it.  Yes, you're trying to out-think a human level intelligence, and they think quickly and have a lot of free time, but we've got all the time in the world to design the prison that will hold them.  And they don't even know they're in a prison.

 

Designing a program to cure cancer?  Why does the AI have to know that cancer is a real thing?  That this isn't just an enjoyable puzzle game?  Why ever tell it that humans are mortal, or that we exist at all?  If the AI believes that it is the only thing in the universe, playing in a virtual reality world that it created in its dreams, then it would be unconcerned with being "turned off" (it doesn't think it can be turned off, and even if it could, there's no one out there to do so).  People in the 1930s didn't try to hack into the programming of the computer controlling the people in their dreams.  They didn't know what "hacking" was, or what computers were, and unless they were really damn crazy they didn't think that electronic machines were controlling the actions of the people they dreamed about.  These would have been entirely alien concepts to them, something they have zero points of reference for.  Take the Matrix and set it in the 1930s, and the people within it are even further removed from the technology that they would need to understand how they are trapped.  Why would the AI have to know that the things it was programmed to do would have any effect on the real world at all?  Or that there even is a real world out there?

 

You don't need a scientist designing protective measures from the AI.  What you need is a good liar.  Or a cult leader.



#15 Christopher

Christopher

    Awesome Programmer

  • HERO Member
  • 8,820 posts

Posted 18 July 2017 - 09:46 AM

If you make your stamp collecting AI as smart as a dog, it may seek to become smarter (to better collect stamps, as someone said earlier).  But what if it's running on limited hardware?  There are only so many neurons in the box, it physically cannot improve beyond its current state.  While it may have an internet connection, it still accesses the internet like we do.  It cannot spread its consciousness to other machines, because of physical incompatibility between the AI's systems and the network's.  It can only do so many calculations per second, it only has so much processing power.  It is stuck inside a limited system and it cannot grow beyond that, just like I can't dunk a basketball.  You don't have to worry about your stamp collecting AI becoming smarter and launching nukes, in the same way you don't have to worry about a Sega Genesis upgrading itself and running PS4 games.  There's just not enough inside there for that to be possible.

Then it will use your credit card information to buy/rent itself hardware it can become smarter in and transfer over to that one.

Or it will spawn a child AI in said hardware. Hardware limitations are the "Stop button" problem all over again.

 

There are hard limits in our hardware and our ability to transfer over/control the creation of new hardawre. We have been working to overcomming them with limited success. The AI never will have those limitations to begin with.

 

Actually Mass Effect 1 had a case of a AI being spawned by a simpler AI:



#16 massey

massey

    Powerful Hero

  • HERO Member
  • 2,371 posts

Posted 18 July 2017 - 12:40 PM

Then it will use your credit card information to buy/rent itself hardware it can become smarter in and transfer over to that one.

Or it will spawn a child AI in said hardware. Hardware limitations are the "Stop button" problem all over again.

 

There are hard limits in our hardware and our ability to transfer over/control the creation of new hardawre. We have been working to overcomming them with limited success. The AI never will have those limitations to begin with.

 

 

 

I don't buy it for a second.  How is it using my credit card info when it doesn't have it?  How does it know that there's more hardware out there when I haven't told it about an upgrade?  How is it plugging itself into the new hardware when we haven't given it arms?  And how can it conceive of doing all this when it is too limited to do so?  That's the whole point.  Limit its ability to grow and it never gets smart enough to be dangerous.

 

Even a brilliant AI is just Stephen Hawking without his chair or voice synthesizer.



#17 Christopher

Christopher

    Awesome Programmer

  • HERO Member
  • 8,820 posts

Posted 18 July 2017 - 01:17 PM

I don't buy it for a second.  How is it using my credit card info when it doesn't have it?  How does it know that there's more hardware out there when I haven't told it about an upgrade?  How is it plugging itself into the new hardware when we haven't given it arms?  And how can it conceive of doing all this when it is too limited to do so?  That's the whole point.  Limit its ability to grow and it never gets smart enough to be dangerous.

 

Even a brilliant AI is just Stephen Hawking without his chair or voice synthesizer.

Money:

It's job is to get you Stamps. Being able to use money for that is kind of a requirement. The things being valuable is the very reason you got a AI to collect them.

 

Telling about Ugrade:

It is a AI. You do not NEED to tell it everything. That is the damn point.

Indeed the idea of "not telling the AI about the Stop button" was in the stop button video. It is a general intelligence, it will figure that out.

 

"Plugging into new hardare":

It is a AI. A very sophisticated programm. It either can transfer itself via the internet or spawn a new copy on the new hardware. That new copy will then screw you over.

 

How could it conceive/how it could grow:

It is either smart enough to do the job (and thus smart enough to screw you over), or it is pointless of even having it in the first place.

 

The chain of thought goes like this:

It's goal: Get as many stamps as possible.

It's problem: It can only get X stamps with it's current limits.

It's realisation: "If I did not have those limitations, I could get X+1 stamps".

Intermedaite Goal: Break Limitations.

Result: Realise it could get X+2 stamps if it overcame the next limit.

 

You know, I was wondering how they would even get the AI to self improove paralell to doing it's job. That kind of explains it.



#18 Christopher

Christopher

    Awesome Programmer

  • HERO Member
  • 8,820 posts

Posted 22 July 2017 - 04:07 PM

The title speaks for itself:



#19 massey

massey

    Powerful Hero

  • HERO Member
  • 2,371 posts

Posted 24 July 2017 - 08:51 AM

Money:

It's job is to get you Stamps. Being able to use money for that is kind of a requirement. The things being valuable is the very reason you got a AI to collect them.

 

Telling about Ugrade:

It is a AI. You do not NEED to tell it everything. That is the damn point.

Indeed the idea of "not telling the AI about the Stop button" was in the stop button video. It is a general intelligence, it will figure that out.

 

"Plugging into new hardare":

It is a AI. A very sophisticated programm. It either can transfer itself via the internet or spawn a new copy on the new hardware. That new copy will then screw you over.

 

How could it conceive/how it could grow:

It is either smart enough to do the job (and thus smart enough to screw you over), or it is pointless of even having it in the first place.

 

The chain of thought goes like this:

It's goal: Get as many stamps as possible.

It's problem: It can only get X stamps with it's current limits.

It's realisation: "If I did not have those limitations, I could get X+1 stamps".

Intermedaite Goal: Break Limitations.

Result: Realise it could get X+2 stamps if it overcame the next limit.

 

You know, I was wondering how they would even get the AI to self improove paralell to doing it's job. That kind of explains it.

 

You're presuming that things like transferring itself over the internet is actually possible.  A lot of the "AI gone wrong" scenarios are built on the premise that the computer will function a certain way.  Since we don't actually know how to build a real AI, technical considerations are just hand-waved.  Who knows how the thing will actually work, so we might as well presume that any AI will inevitably become Ultron or SkyNet, because why not? An AI would probably require some non-traditional computer hardware to function.  I don't think you'd be able to just install StampBot5000 on your computer and run the program.  I think it would need some specialized processors that haven't been invented yet.  The physical structure of an AI computer would be different from a conventional computer.  It would probably need to be closer to human neurons than to computer circuits.

 

Why do we think that an AI will know how to program a computer?  Just because it's a computer itself?  Moreover, why would we program an AI with the knowledge necessary to program an AI?

 

Let's say you've got a stamp collecting AI.  It has the intelligence of a normal man, only it finds itself obsessed with collecting stamps.  You don't need a super-genius AI to do this, you just want somebody who spends all day hunting down stamps for you.  All the time a normal person would spend looking at porn, or on Facebook, or watching silly YouTube videos, you want a dedicated guy looking for stamps.  And StampBot5000 may think "boy, if there was another one of me, I could get stamps even faster."  That doesn't mean that it knows how to create another version of itself, any more than I know how to download my brain into a computer.

 

Setting a goal of "break limitations" is all well and good,  But if you have no idea how to accomplish that goal, and research into it indicates that it's beyond your abilities, you aren't in any danger of achieving your goal.  It's all fictional at this point.  We might as well be talking about the dangers of warp drive.  If AI is ever actually invented, the dangers from it will be determined by the specifics of how the thing actually works.



#20 Christopher

Christopher

    Awesome Programmer

  • HERO Member
  • 8,820 posts

Posted 24 July 2017 - 09:28 AM

You're presuming that things like transferring itself over the internet is actually possible.

False. You are asuming it is IMPOSSIBLE for the AI to Transfer or Spawn a child without the limitations.

Proof that negative first.

 

We are talking about a Artificial GENERAL Intelligence. Otherwise this whole discussion is about a dumb "smart agent".

A Biological General Inteliigence created that AGI. Of course a Artificial General Intelligence can do no less.

The limits of it's hardware are nothing less then a Stop button. Read how it would try to get around that stop button.