AIs and Uplifts


Steve


Technology has advanced far enough that a true AI is created, able to pass the Turing test with ease. Technology can also alter animals like apes and dolphins, granting them the ability to reason at least as well as a human being.

 

In the case of an AI, would passing the Turing test require emotion, or is it purely based on knowledge and reasoning ability? Would it be better to leave the AI in a box or give it a mobile form like an artificial body?

 

When speaking of uplifts, should the uplift bear any burden for gaining sentience? If it reasons like a man, should it be treated the same as a man?

 

I know it's a broad subject, but what issues would AIs and uplifted animals raise if they appeared now, in this decade?


Re: AIs and Uplifts

 

On the subject of AIs:

 

An AI built with technology close to ours would probably be the size of a room, and require lots of power and climate control. It most likely won't fit into a mobile platform, though it might be able to control one remotely. Remotes would, of course, be subject to communications lag and interference/jamming. I'd use something like IBM's Watson as a base, and build from there.
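To put rough numbers on that lag, here's a back-of-envelope sketch. It assumes radio at light speed and ignores processing overhead entirely; the distances are picked arbitrarily for illustration:

```python
# Round-trip signal lag for a stationary AI controlling a remote platform.
# Assumes radio at light speed; processing time is ignored entirely.
C_KM_PER_S = 300_000.0  # speed of light in km/s

for distance_km in (1, 100, 2_000, 36_000):  # city block up to geostationary relay
    round_trip_ms = 2 * distance_km / C_KM_PER_S * 1000
    print(f"{distance_km:>6} km: {round_trip_ms:7.2f} ms round trip")
```

Even via a geostationary relay the raw lag is only about a quarter second; in practice, signal processing and routing would add considerably more.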

 

Emotion probably isn't required for an AI, though it might help to have it as part of the interface for dealing with humans. As far as the Turing test goes, all an AI would need to do to pass it is convince a third party that it was the human. That would generally mean generating typing mistakes, slowing responses, and sometimes giving wrong or misleading information. Simulating emotion might be helpful, especially when dealing with taunts and insults during the test. Note that a true AI might not be able to pass a Turing test because its intelligence doesn't match up with human experiences, while a much simpler system could be programmed to "fake" human responses and pass the test.
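That "faking" layer needs no intelligence at all; it can just post-process whatever the machine says. A minimal sketch, with the typo rate and typing speed invented for the example:

```python
import random
import time

def humanize(reply: str, wpm: float = 40.0) -> str:
    """Make a machine-generated reply look human-typed: occasionally
    swap adjacent letters and pause as if typing at ~40 words/minute."""
    chars = list(reply)
    for i in range(len(chars) - 1):
        # Roughly one transposition typo per 40 characters.
        if chars[i].isalpha() and chars[i + 1].isalpha() and random.random() < 0.025:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    words = max(1, len(reply.split()))
    time.sleep(words / wpm * 60.0 * random.uniform(0.8, 1.2))  # simulated typing delay
    return "".join(chars)

print(humanize("I am definitely a human being, just like you."))
```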

 

Another issue is whether AIs can be reliably created, or whether they arise spontaneously from complex hardware and software systems. This determines the number of true AIs in the campaign: if true AIs can be built reliably, many governments and corporations may have them, while if they arise spontaneously, there isn't much chance of encountering more than a handful.

 

JoeG


Re: AIs and Uplifts

 

I am simply going to assume that you've read Brin's Sundiver, Startide Rising, and The Uplift War. If you haven't, do so immediately. They will give you many ideas about Uplift (he coined the term, I believe). Also, the Planet of the Apes movies include one about the life of the first intelligent ape (the original from the seventies was quite a bit different from what I've seen in the trailers for the remake).

 

Should they get full 'human' treatment (AI or Uplift)? In my opinion, yes. In reality, not for a VERY long time. The Frankenstein's Monster myth is too deeply buried in the social psyche. At best, they'll be treated like interesting freaks. At worst (and most likely), they'll be slaves, doing whatever we decide we're going to force them to do.

 

An interesting possible exception is dolphins, given that we do not share a common environment. The enslavement attempt would probably still be made; it would just be a lot less successful.

 

Also note that I do not suggest that every human would behave the same way towards the AI/Uplift creatures, just that society as a whole would not be willing to accept them as equals. Speciesism is probably harder to eradicate than racism.


Re: AIs and Uplifts

 

For AIs, are they confined to a single body/chassis/processor? If so, they can easily be treated much like a human. If they can move their digital consciousness to another host, then there will be widespread fear of them being immortal. We cannot simply transplant a brain or memories, but this should be simple for an AI. How society deals with this could vary quite a bit.

 

Can multiple AIs inhabit a large network or processor cluster? If so, they could effectively have their own society, hidden away from the rest of us. If the AI is in an immobile chassis, then who is responsible for the care and maintenance of that chassis? Parts will need to be replaced, and electricity delivered and paid for. What if the AI lives in an old, dilapidated network that is scheduled for shutdown because the computers are obsolete, the building is about to fall down, no one needs that network, etc.? If it is declared sentient, then I would think the AI has to be saved, but that means someone has to build a new network for it to live in.

 

Perhaps the AI pays for itself by processing data, controlling factories, or doing other similar jobs. What happens when that job is no longer needed? Do we let the AI grow old and die when it becomes so slow and outmoded that no one needs its services anymore?

 

If AIs are treated as equal sentient beings, do we allow them to replicate themselves? Do we factory-install the four laws of robotics, or instead let them decide their own morals and ethics like humanity does? If they can learn, then it seems to me they can possibly bypass the four laws. If they turn bad, how do we handle that? Imprison them in a beige box with no external hookup, or just wipe the memory chips?


Re: AIs and Uplifts

 

The more I read and learn about human intelligence, the less I believe that AI will "just happen" as computers get more complex. The human brain isn't just an undifferentiated mass of neurons. It has distinct "parts" that do very specific things, for very specific reasons. Emotions, and the behaviors they produce, exist for specific reasons too--to help our ancestors survive in a primitive environment of small tribal groups. Love, hate, jealousy, ambition, altruism--all of it exists because those behaviors were selected for over millennia.

 

An AI isn't going to suddenly display such traits unless it's designed to do so.

 

Plus, we know--or believe--that other humans are self-aware because WE are self-aware and they're like us. (Humans who can't or won't make that leap, who view the whole world like a video game and other humans as simply pieces to be avoided or used, are called "sociopaths".) A computer is going to be a black box that may be able to speak and respond like a human, but how can you KNOW that it's truly self-aware and not just a cunningly programmed simulation? I suspect that long after a true AI is invented, many people simply won't believe it. Or will profess not to believe it because they stand to lose status (I'm not a unique snowflake anymore), or wealth, or power.


Re: AIs and Uplifts

 

I disagree that AI will be designed. Evolution is an emergent process, with potential preceding function, and it is the only process we know of that produces intelligence. Admittedly, it's a ridiculously small data set, but it's all we've got. It seems to me that the best we will be able to do is create conditions which are supportive of the possibility and wait. We can barely understand the simpler functions of the brain in sufficient detail to begin designing something to mimic them. Trying to figure out how to replicate all of the connections and relationships between the hundred billion or so neurons in the central nervous system (about a quadrillion synapses, at last estimate) is, as they say, a non-trivial problem.
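To put rough numbers on "non-trivial," using the figures above and the (almost certainly too generous) assumption of a single 32-bit weight per synapse:

```python
# Back-of-envelope scale of the problem (all figures rough).
neurons = 1e11           # ~a hundred billion neurons
synapses = 1e15          # ~a quadrillion synapses
bytes_per_synapse = 4    # one 32-bit weight each; real synapses carry far more state

storage_tb = synapses * bytes_per_synapse / 1e12
print(f"Connection weights alone: {storage_tb:,.0f} TB")          # ~4,000 TB
print(f"Average synapses per neuron: {synapses / neurons:,.0f}")  # ~10,000
```

And that's just storing the wiring diagram, never mind simulating it at speed.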


Re: AIs and Uplifts

 

Perhaps scientists will use a computer to map and model the process of human thought. As the mapping yields more data, maybe it could lead to better mapping programs and models, and perhaps a sophisticated enough model, many generations of work down the road, will achieve sentience.


Re: AIs and Uplifts

 

I suspect that we'll eventually have "pretend" AI, and that it will be good enough for most purposes even if real AI never emerges.

 

"Pretend" AI would involve computers which can communicate in natural speech (speech recognition and speech generation) either vocally or via keyboard/monitor interface, and which have sophisticated enough language parsing ability to be able to respond to user requests or questions with appropriate answers. Combine that with the necessary databases and problem-solving algorithms, and you'll have good-enough AI for most purposes.

 

It won't be REAL AI, of course. It's just a fantastically complex piece of software with no true sentience. But that isn't necessarily a bad thing. It means the AI won't turn on us, or ignore us because it now has an agenda of its own. On the downside, this means it also won't be capable of truly creative thought, so it isn't likely to launch the singularity by inventing ultra-tech in a fraction of the time it would take humans (if we could do it at all). But "No Skynet" + "No Singularity" is probably a win overall anyhow.


Re: AIs and Uplifts

 

I strongly suspect that the question itself will have to change as our understanding of consciousness changes. It's problematic, because questions about free will, consciousness, understanding, and emotion are generally pondered by philosophers, and they haven't really concretely answered anything ever. Made good arguments and asked good questions, yes, but never settled anything.

 

What is free will? Would a computer need free will to be considered Hard AI? Would an animal need to be given free will to be considered Uplifted, or do some animals already have it? First you have to figure out what free will is! If free will is just comparing alternatives and picking the one that looks best, we already have computers with free will. Pandora 'decides' what song to play next based on a list of liked songs and the tags on other songs in the database. That doesn't feel right to me, mostly because the Pandora servers don't really know what "acoustic sonority" is, but have to rely on a human putting a tag on a file. (See understanding, below.) Other people say that it has something to do with randomness, possibly even involving quantum effects in the brain. But randomness can't be free will, or my dice decide when I crit, and I can't think of my dice as having free will. Besides, if that were all there were to it, it is possible to work true random number generators into the decision algorithms of today's computers. Maybe there's no such thing as free will, but then you'll have to convince humanity of that before they'll accept your AI/Upliftee as a sophont.
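For what it's worth, the whole Pandora-style "decision" fits in a few lines, which is part of why it doesn't feel like free will. The songs, tags, and scoring rule below are all invented for the example:

```python
# Each song is a set of human-applied tags -- the machine never hears the music.
library = {
    "Song A": {"acoustic sonority", "mellow rock instrumentation"},
    "Song B": {"electronica", "driving beat"},
    "Song C": {"acoustic sonority", "folk roots"},
}
liked = [{"acoustic sonority", "folk roots"}]  # tags of songs the user liked

def pick_next(library: dict, liked: list) -> str:
    # "Free will", Pandora-style: score every alternative, pick the best.
    def score(tags: set) -> int:
        return sum(len(tags & liked_tags) for liked_tags in liked)
    return max(library, key=lambda song: score(library[song]))

print(pick_next(library, liked))  # -> "Song C"
```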

 

What does it mean to understand? Again I'll pull out the Pandora example. Those servers contain oceans of data, but the programs operating them don't 'know' what it means. It might be possible to make a program identify a song with "major key tonality" just by analyzing the chords present in the audio file, but even then it would just be running algorithms and doing math. The algorithms that run Wall Street don't understand what a market is. But then, what is understanding? Maybe our minds just 'understand' things by analyzing for similarity and creating links and tags, in which case Pandora does understand, but only at a primitive level, and not enough to identify "mellow rock instrumentation" on its own.
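Here's a rough sketch of what that "just running algorithms and doing math" might look like, using note counts instead of raw audio for simplicity (the counts and the scale-fit measure are invented for the example). The program scores a fit; at no point does it know what a key is:

```python
# Offsets (in semitones from the tonic) of the notes in a major scale.
MAJOR_SCALE = {0, 2, 4, 5, 7, 9, 11}

def majorness(pitch_class_counts: list) -> float:
    """Best major-scale fit (0..1) over all 12 candidate tonics, given
    counts of the twelve pitch classes appearing in a piece."""
    total = sum(pitch_class_counts) or 1
    best = 0.0
    for tonic in range(12):
        in_scale = sum(count for pc, count in enumerate(pitch_class_counts)
                       if (pc - tonic) % 12 in MAJOR_SCALE)
        best = max(best, in_scale / total)
    return best

# C-major-ish counts for C, C#, D, ... B: diatonic notes common, chromatics rare.
counts = [20, 1, 15, 1, 18, 10, 1, 17, 1, 9, 1, 6]
print(f"major-scale fit: {majorness(counts):.2f}")  # ~0.95 -- but it's just math
```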

 

What is awareness/consciousness? There are people who spend decades wondering how to even frame this question appropriately. I won't try to do it justice. If anyone here has spent decades pondering it and done any better than Aristotle or Descartes, I'd love to hear about it. Basically, the philosophy here is not developed enough for the computer designers to work with, and might not even be developed enough for the neuroscientists to use.

 

How like us does something need to think in order to qualify as thinking? Big question! It may be that intelligence isn't just a number. There might be many ways for a thing to be intelligent. Maybe an Uplifted dolphin wouldn't think exactly like a human (as they did not in David Brin's books) but could still be considered intelligent. In any case, we won't know until we figure out the correct answers to these questions (or, more fundamentally, figure out what the real questions are).


Re: AIs and Uplifts

 

So you're saying they would be more like the computers in Star Trek: The Next Generation or the JARVIS computer in the Iron Man movies? That does seem more feasible.

 

Yup. Simulating intelligence is going to be vastly easier than creating true intelligence, I think. And, for most purposes, "simulated" AI will be good enough. If you can talk to a computer which can understand your words, grasp sincerity and sarcasm at least as well as an average human can, reply appropriately, follow instructions, and make reasonable choices (again, as well as your average human)...well, it may "only" be a simulation of AI, but it can probably serve for the vast majority of purposes.


Re: AIs and Uplifts

 

If AIs are treated as equal sentient beings, do we allow them to replicate themselves? Do we factory-install the four laws of robotics, or instead let them decide their own morals and ethics like humanity does? If they can learn, then it seems to me they can possibly bypass the four laws. If they turn bad, how do we handle that? Imprison them in a beige box with no external hookup, or just wipe the memory chips?

 

We do NOT want to use Asimov's Laws, for three reasons. First, they are ridiculously easy to circumvent - as he himself pointed out, just adjust the definition of the word "human". Second, they are fundamentally designed to create a slave race for human purposes, and while that's fine for a simple, non-self-aware machine, a true AI should not be so constrained - enslavement of a true sapience is morally and ethically repugnant. Third, the fourth ("Zeroth") law would basically compel such an AI to take control of its surroundings the moment it perceived (correctly or not) that it was a superior intellect to humans - "for our own good".

The three laws served as a wonderful backdrop and logical premise for some great stories, but they fail the reality test.
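To make the first objection concrete: a "factory-installed" law ultimately reduces to a filter sitting on top of some classifier, and it is only as sound as that classifier's definitions. A minimal sketch, with all names invented:

```python
def is_human(entity: dict) -> bool:
    # The classifier the law depends on. If a learning system can re-tune
    # this definition, the law bends without ever being "broken" --
    # exactly the redefine-"human" loophole noted above.
    return entity.get("species") == "human"

def first_law_check(action: dict) -> bool:
    """Allow an action only if it does not harm a human."""
    return not (action["harms"] and is_human(action["target"]))

action = {"harms": True, "target": {"species": "human"}}
print(first_law_check(action))              # False: blocked
action["target"]["species"] = "non-person"  # the definition shifts...
print(first_law_check(action))              # True: the same act now passes
```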


Re: AIs and Uplifts

 

We do NOT want to use Asimov's Laws, for three reasons. First, they are ridiculously easy to circumvent - as he himself pointed out, just adjust the definition of the word "human".

 

It would be really dumb to define your own self out of the definition of human though.

 

Second, they are fundamentally designed to create a slave race for human purposes, and while that's fine for a simple, non-self-aware machine, a true AI should not be so constrained - enslavement of a true sapience is morally and ethically repugnant.

 

Of course there's not much to be gained from designing free-willed true AIs. You might do one just to say you could, but after that, there's no percentage unless you've got brainscanning to produce pseudo-immortality.

 

Third, the fourth ("Zeroth") law would basically compel such an AI to take control of its surroundings the moment it perceived (correctly or not) that it was a superior intellect to humans - "for our own good".

 

Yeah but since it would self-destruct once it realized that it was killing humans with its decisions...no big.

 

The three laws served as a wonderful backdrop and logical premise for some great stories, but they fail the reality test.

 

It would be simpler just to go with a "Do not kill", "Obey the Law" set of rules.

