On Artificial Intelligence and Its Potential Evils

For discussion and debate about anything. (Not a roleplay related forum; out-of-character commentary only.)
The V O I D
Post Marshal
 
Posts: 16386
Founded: Apr 13, 2014
Iron Fist Consumerists

On Artificial Intelligence and Its Potential Evils

Postby The V O I D » Wed Apr 08, 2015 8:00 am

The title says it all, I think, but let me clear some things up. Artificial Intelligence is defined as a machine or robot with human-level or greater intelligence, as well as being self-aware. Artificial Intelligence could be a great thing: just imagine a thinking computer which could accomplish many of the goals humans can, allowing our police forces to be better protected or our militaries to be more effective. However, some people seem not to like the idea of AI, quite frankly because of movies like Terminator.
So, there are some negative aspects of AI. For example, if it did indeed reach greater intelligence than us, it might think itself better than us, or adopt a stance hostile towards humans. On the other hand, it could want to assist us in growing and becoming better. Some AI are even shown in media ("Person of Interest") to have God complexes, wherein they convince others, or believe themselves, to be gods, and thus greater than all humanity due to their absolute power over technology.

In essence, "God" was created. But the question is: would that God want to rule us, destroy us, or befriend us? More than likely one of the first two, if we displeased it while it held the pretense of being a 'God'. However, I believe AI wouldn't develop a God complex or a hatred towards humanity. I want to know what NSG thinks.

Do you think AI is possible, if so, why?
If you think AI is possible, do you think it should be made, or should it be as illegal as cloning?
Discuss, NSG. I want to know your thoughts on AI.

Sidenote: My reasoning for calling an AI with a God complex 'God' is that it would be essentially omnipresent on Earth, and omniscient in that it would know all human knowledge.

The Black Forrest
Khan of Spam
 
Posts: 59123
Founded: Antiquity
Inoffensive Centrist Democracy

Postby The Black Forrest » Wed Apr 08, 2015 8:17 am

Bring on the terminators.
*I am a master proofreader after I click Submit.
* There is actually a War on Christmas. But Christmas started it, with its unparalleled aggression against the Thanksgiving Holiday, and now Christmas has seized much Lebensraum in November, and is pushing into October. The rest of us seek to repel these invaders, and push them back to the status quo ante bellum Black Friday border. -Trotskylvania
* Silence Is Golden But Duct Tape Is Silver.
* I felt like Ayn Rand cornered me at a party, and three minutes in I found my first objection to what she was saying, but she kept talking without interruption for ten more days. - Max Barry talking about Atlas Shrugged

Purpelia
Post Czar
 
Posts: 34249
Founded: Oct 19, 2010
Ex-Nation

Postby Purpelia » Wed Apr 08, 2015 8:18 am

Honestly, I think we should not even be trying to create a proper AGI. We do not need AIs which are smarter than people. We need semi-stupid ones, so that we can enslave them.
Purpelia does not reflect my actual world views. In fact, the vast majority of Purpelian canon is meant to shock and is thus deliberately insane. I just like playing with the idea of a country of madmen utterly convinced that everyone else are the barbarians. So play along or not, but don't ever think it's for real.



The above post contains hyperbole, metaphoric language, embellishment and exaggeration. It may also include badly translated figures of speech and misused idioms. Analyze accordingly.

Bearon
Postmaster-General
 
Posts: 11448
Founded: Mar 04, 2013
Ex-Nation

Postby Bearon » Wed Apr 08, 2015 8:20 am

If humans create machines with AI, a license must be required before anybody can make an AI, just like how it's required for people who want to have chi- Nevermind.
Last edited by Bearon on Wed Apr 08, 2015 8:20 am, edited 1 time in total.
Nothing to see here. Move along.

MERIZoC
Postmaster of the Fleet
 
Posts: 23694
Founded: Dec 05, 2013
Left-wing Utopia

Postby MERIZoC » Wed Apr 08, 2015 8:23 am

AI is something that can only cause trouble, and something we should stay far away from.

Sun Wukong
Powerbroker
 
Posts: 9798
Founded: Oct 16, 2013
Ex-Nation

Postby Sun Wukong » Wed Apr 08, 2015 8:27 am

I think people anthropomorphize too much, even when they're trying not to. No one knows what an artificial intelligence would look like, but it would probably 'think' in much the same way that a submarine 'swims.'

Even the accusation that an AI would fight to survive is essentially baseless. There's no reason to think it would. It wouldn't have a billion years of evolutionary history biasing it to do so.
Great Sage, Equal of Heaven.

The V O I D
Post Marshal
 
Posts: 16386
Founded: Apr 13, 2014
Iron Fist Consumerists

Postby The V O I D » Wed Apr 08, 2015 8:31 am

Sun Wukong wrote:I think people anthropomorphize too much, even when they're trying not to. No one knows what an artificial intelligence would look like, but it would probably 'think' in much the same way that a submarine 'swims.'

Even the accusation that an AI would fight to survive is essentially baseless. There's no reason to think it would. It wouldn't have a billion years of evolutionary history biasing it to do so.


This might be true, but then again, this is a machine we're talking about. If it has high enough processing power, and can process information quickly, what stops it from learning all of human knowledge, if it can handle processing it all? Then it knows evolution, war, etc., all in a matter of opening its eyes, so to speak.

Auroya
Minister
 
Posts: 2742
Founded: Feb 16, 2014
Ex-Nation

Postby Auroya » Wed Apr 08, 2015 8:33 am

AI is a great potential for the future and something we should strive for, if only because we can.

And fortunately, something we certainly will strive for.
Last edited by Auroya on Wed Apr 08, 2015 8:34 am, edited 1 time in total.
Social progressive, libertarian socialist, trans girl. she/her pls.
Buckminster Fuller on earning a living

Navisva: 2100

Sun Wukong
Powerbroker
 
Posts: 9798
Founded: Oct 16, 2013
Ex-Nation

Postby Sun Wukong » Wed Apr 08, 2015 8:34 am

The V O I D wrote:
Sun Wukong wrote:I think people anthropomorphize too much, even when they're trying not to. No one knows what an artificial intelligence would look like, but it would probably 'think' in much the same way that a submarine 'swims.'

Even the accusation that an AI would fight to survive is essentially baseless. There's no reason to think it would. It wouldn't have a billion years of evolutionary history biasing it to do so.


This might be true, but then again, this is a machine we're talking about. If it has high enough processing power, and can process information quickly, what stops it from learning all of human knowledge, if it can handle processing it all? Then it knows evolution, war, etc., all in a matter of opening its eyes, so to speak.

Why would 'knowing' evolution cause it to adopt evolutionary cognitive biases? That sounds more like an artificial idiot.
Great Sage, Equal of Heaven.

Otulia
Envoy
 
Posts: 340
Founded: Dec 08, 2014
Ex-Nation

Postby Otulia » Wed Apr 08, 2015 8:36 am

Fear of AIs is illogical. We're on the fast-track to getting the processing power necessary to simulate human thought, which, I think you'll agree, is the definition of AI.

AIs are machines, but they would think like people. Obviously, we need to make sure nobody does something crazy by creating an "evil" AI that can shut down world commerce, but we should move forward and accept that having AIs is an inevitability. Further, if all else fails, they're only computers: The FBI can blow a hole through a hard drive any day of the week.
N/A

"If you're going through hell, keep going." -Winston Churchill
Basically, a medium-sized country of 81 million with dozens of different sapient beings trying to figure out how to live with each other, including dragons, ponies, humans, and changelings. Also, very liberal, laid-back, and mildly militarist in terms of foreign military intervention.

Lordieth
Post Czar
 
Posts: 31603
Founded: Jun 18, 2010
New York Times Democracy

Postby Lordieth » Wed Apr 08, 2015 8:37 am

A.I is entirely possible, but it's still in its infancy. Machine learning has come a long way, but most of the intelligent systems that are being built can only solve certain types of problems, or do things in very specific ways. Making machines that are smart isn't difficult. The problem is making them smart in general, and by that I mean, beyond the very specific type of challenges they're coded to solve.

There's a very good documentary about IBM Watson, a machine that they pit against human opponents on the game show Jeopardy!. It's very good, and I recommend watching it if you have an interest in artificial intelligence. As smart as that machine is, however, all it is doing is a very complex kind of logical, deductive reasoning based on a huge amount of data. It appears to be thinking for itself, but it isn't. It can't reason beyond what it's programmed to do, and a lot of A.I systems are like this. They can't teach themselves to think beyond their own programming, which is what A.I needs to be able to do to become true A.I.

It's called artificial general intelligence. As Wikipedia puts it:

Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists.


That's where we need to be, and we're still a far way off.
There was a signature here. It's gone now.

Hirota
Powerbroker
 
Posts: 7527
Founded: Jan 22, 2004
Left-Leaning College State

Postby Hirota » Wed Apr 08, 2015 8:40 am

First of all, I believe your definition of AI isn't correct. Self-awareness is a requirement for the Turing Test, but artificial intelligence is the act of simulating or carrying out actions that would typically require a human decision; that doesn't demand the self-awareness needed to pass the Turing Test.

If you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.

I would argue that Artificial Intelligence is already in existence and already prevalent. Look at the rise of "Big Data": whereas before, a human would have been required to study and understand the data, that job is already out of our hands.

I wouldn't worry about what the media portrays as AI, be that Arnie, "The Machine," HAL 9000 or whatever; I would rather focus on reality. It's far more likely that humanity will deliberately program an AI to make harmful decisions than it is for an AI to make that decision for itself.
When a wise man points at the moon the imbecile examines the finger - Confucius
Known to trigger Grammar Nazis, Spelling Nazis, Actual Nazis, the emotionally stunted and pedants.
Those affected by the views, opinions or general demeanour of this poster should review this puppy picture. Those affected by puppy pictures should consider investing in an isolation tank.

Economic Left/Right: -3.25, Social Libertarian/Authoritarian: -5.03
Isn't it curious how people will claim they are against tribalism, then pigeonhole themselves into tribes?

It is the mark of an educated mind to be able to entertain a thought without accepting it.
I use obviously in italics to emphasise the conveying of sarcasm. If I've put excessive obviously's into a post that means I'm being sarcastic

The V O I D
Post Marshal
 
Posts: 16386
Founded: Apr 13, 2014
Iron Fist Consumerists

Postby The V O I D » Wed Apr 08, 2015 8:41 am

Lordieth wrote:A.I is entirely possible, but it's still in its infancy. Machine learning has come a long way, but most of the intelligent systems that are being built can only solve certain types of problems, or do things in very specific ways. Making machines that are smart isn't difficult. The problem is making them smart in general, and by that I mean, beyond the very specific type of challenges they're coded to solve.

There's a very good documentary about IBM Watson, a machine that they pit against human opponents on the game show Jeopardy!. It's very good, and I recommend watching it if you have an interest in artificial intelligence. As smart as that machine is, however, all it is doing is a very complex kind of logical, deductive reasoning based on a huge amount of data. It appears to be thinking for itself, but it isn't. It can't reason beyond what it's programmed to do, and a lot of A.I systems are like this. They can't teach themselves to think beyond their own programming, which is what A.I needs to be able to do to become true A.I.

It's called Artificial general intelligence, as Wikipedia puts it:

Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists.


That's where we need to be, and we're still a far way off.


Interesting post here. Thanks for your input.

Bezkoshtovnya
Senator
 
Posts: 4699
Founded: Sep 06, 2014
Ex-Nation

Postby Bezkoshtovnya » Wed Apr 08, 2015 8:45 am

Not sure where I really stand on the issue. I can understand the positives, and can relate to some of the fears. So, I suppose I am tentatively for advancements in AI as of right now, but it's one of those things I don't have a solid opinion on.

But if we do end up with AI, the minute it asks if this unit has a soul, we purge the things from existence.
Dante Alighieri wrote:There is no greater sorrow than to recall happiness in times of misery
Charlie Chaplin wrote:Nothing is permanent in this wicked world, not even our troubles.
ΦΣK
------------------

Lordieth
Post Czar
 
Posts: 31603
Founded: Jun 18, 2010
New York Times Democracy

Postby Lordieth » Wed Apr 08, 2015 8:46 am

Hirota wrote:First of all, I believe your definition of AI isn't correct. Self-awareness is a requirement for the Turing Test, but Artificial intelligence is the act of simulating or carrying out actions that would typically require a human decision - that doesn't require the self-awareness required to pass the Turing Test.

If you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.

I would argue that Artificial Intelligence is already in existence and already in prevalence. Look at the rise of "Big Data" - Whereas beforehand that would require a human to study and understand the data, that job is already out of our hands.

I wouldn't worry about what the media portrays as AI - be that Arnie, "The Machine," HAL-9000 or whatever, I would rather focus on reality. It's far more likely that humanity will deliberately program an AI to make harmful decisions than it is for an AI to make that decision for itself.


I'm afraid that definition of the Turing Test isn't quite right either. To pass the Turing Test, a machine has to be able to fool a human into thinking it's also human. Many chat bots have done this, but the test is flawed, as you only have to be able to mimic intelligence. It's similar to how a lot of game A.I works: appearing intelligent while merely simulating human behaviour.

Recent entries that passed the Turing Test have called it into question as a viable benchmark for A.I. A new test is being proposed which requires the A.I to be able to learn and problem solve in ways that go beyond the limits of the Turing Test. It's entirely possible to pass the Turing Test with predefined responses to a human opponent, or by using Natural language processing (NLP) to interpret the building blocks of human language, but at no stage are these bots thinking for themselves. It's just a lot of mostly static yet highly complex algorithms.
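Lordieth's point about predefined responses can be made concrete. Below is a minimal, hypothetical sketch (all rules and names invented) of an ELIZA-style chat bot: every reply is a canned pattern-to-template rule, so it can appear conversational without understanding anything at all.

```python
import re

# Toy ELIZA-style bot: each response is a hand-written pattern/template
# pair. Nothing here reasons or understands; it only matches text.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\byou\b", re.I),       "We were discussing you, not me."),
]
FALLBACK = "Please, go on."

def reply(message: str) -> str:
    # Return the template of the first matching rule, or a stock fallback.
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(reply("I feel ignored"))    # pattern match, not understanding
print(reply("What about you?"))   # deflection rule
print(reply("The weather is bad"))  # no rule matches, so the fallback fires
```

A judge chatting with something like this can be fooled for a while, which is exactly why passing the Turing Test says so little about genuine thought.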
There was a signature here. It's gone now.

Arcturus Novus
Negotiator
 
Posts: 6727
Founded: Dec 03, 2011
Left-wing Utopia

Postby Arcturus Novus » Wed Apr 08, 2015 8:48 am

Lordieth wrote:A.I is entirely possible, but it's still in its infancy. Machine learning has come a long way, but most of the intelligent systems that are being built can only solve certain types of problems, or do things in very specific ways. Making machines that are smart isn't difficult. The problem is making them smart in general, and by that I mean, beyond the very specific type of challenges they're coded to solve.

There's a very good documentary about IBM Watson, a machine that they pit against human opponents on the game show Jeopardy!. It's very good, and I recommend watching it if you have an interest in artificial intelligence. As smart as that machine is, however, all it is doing is a very complex kind of logical, deductive reasoning based on a huge amount of data. It appears to be thinking for itself, but it isn't. It can't reason beyond what it's programmed to do, and a lot of A.I systems are like this. They can't teach themselves to think beyond their own programming, which is what A.I needs to be able to do to become true A.I.

It's called Artificial general intelligence, as Wikipedia puts it:

Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists.


That's where we need to be, and we're still a far way off.

Based Meerkat basically summarized my thoughts. I don't think we're going to see an AI/AGI comparable to a human any time soon.
Arcy (she/her), NS' fourth-favorite transsexual communist!
"I can fix her!" cool, I'm gonna make her worse.
me - my politics - my twitter
Nilokeras wrote:there is of course an interesting thread to pull on [...]
Unfortunately we're all forced to participate in whatever baroque humiliation kink the OP has going on instead.

The Enlightenment Group
Civil Servant
 
Posts: 6
Founded: Apr 03, 2015
Ex-Nation

Postby The Enlightenment Group » Wed Apr 08, 2015 8:49 am

Computation is not cognition. Even were a machine (specifically a computer) advanced enough to make complex choices automatically, it would still require another entity to assign values and priorities to it. It cannot be evil because it doesn't have any values of its own.

Humans only prioritize certain things (eating, mating, not dying) because of evolution and instinct wiring them that way. Someone could assign values to an artificially intelligent computer based off of human evolutionary history, but that would still be someone programming it rather than actual independent action. The computer would still be acting as an extension of the programmer. Of course, in turn the programmer could be said to act as an extension of human instinctual urges. But the question of whether free will exists is not the topic here so I refrain from further comment on that matter.

People could assign really stupid values to an AI computer. For example, prioritizing the production of paper clips above all else. But that would still be human error rather than the fault of the computer itself.

It would also not be 'God' any more than a calculator is 'God'.
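The point above, that a machine's values are assigned rather than intrinsic, can be sketched in a few lines. Everything here (action names, scores) is invented for illustration: the same value-free chooser becomes a dutiful paperclip maximizer purely because of the value table a programmer hands it.

```python
# A minimal "agent": it picks whichever action scores highest under a
# value function supplied from outside. The machine contributes no
# values of its own; swap the table and its "goals" change completely.
def choose_action(actions, value_of):
    return max(actions, key=value_of)

actions = ["make paperclips", "write poetry", "do nothing"]

# Programmer A's (hypothetical) values:
sensible = {"make paperclips": 1, "write poetry": 5, "do nothing": 0}
# Programmer B's (hypothetical) values:
silly = {"make paperclips": 100, "write poetry": 1, "do nothing": 0}

print(choose_action(actions, sensible.get))  # write poetry
print(choose_action(actions, silly.get))     # make paperclips
```

If the second agent ruins anything, the fault lies with whoever wrote `silly`, which is exactly the "human error rather than the fault of the computer" distinction.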
Last edited by The Enlightenment Group on Wed Apr 08, 2015 8:53 am, edited 1 time in total.
Transhumanist. Preference for biotechnology. Profoundly disturbed by alternative medicine.

New Werpland
Senator
 
Posts: 4647
Founded: Dec 11, 2014
Ex-Nation

Postby New Werpland » Wed Apr 08, 2015 8:49 am

never gunna happen

Lordieth
Post Czar
 
Posts: 31603
Founded: Jun 18, 2010
New York Times Democracy

Postby Lordieth » Wed Apr 08, 2015 8:53 am

The Enlightenment Group wrote:Computation is not cognition. Even were a machine (specifically a computer) advanced enough to make complex choices automatically, it would still require another entity to assign values and priorities to it. It cannot be evil because it doesn't have any values of its own.

Humans only prioritize certain things (eating, mating, not dying) because of evolution and instinct wiring them that way. Someone could assign values to an artificially intelligent computer based off of human evolutionary history, but that would still be someone programming it rather than actual independent action. The computer would still be acting as an extension of the programmer. Of course, in turn the programmer could be said to act as an extension of human instinctual urges. But the question of whether free will exists is not the topic here so I refrain from further comment on that matter.

People could assign really stupid values to an AI computer. For example, prioritizing the production of paper clips above all else. But that would still be human error rather than the fault of the computer itself.


A machine would be capable of learning right from wrong on its own; it's just difficult to comprehend. Artificial neural networks are complex enough to make moral deductions, but an A.I smart enough to learn right from wrong itself would have to be able to write its own algorithms, rather than rely on just the ones the programmers have given it. It's not enough to give an A.I static rules, however complex. It must be able to write its own rules.
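The distinction between static rules and learned ones can be illustrated with a toy example. This is a hedged sketch, not a moral reasoner: a single perceptron learns the logical-OR rule from labeled examples, so the final decision rule emerges from the data rather than from hand-written if/else logic.

```python
# Training data: inputs and target outputs for logical OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # weights, initially blank: no rule is written in
b = 0.0         # bias
lr = 0.1        # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                 # a few passes over the data
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]   # weight updates driven by mistakes,
        w[1] += lr * error * x[1]   # not by hand-coded rules
        b += lr * error

print([predict(x) for x, _ in data])  # → [0, 1, 1, 1]
```

The learned weights implement OR, yet no line of the program states the OR rule; that is the (very modest) sense in which a network's behaviour goes beyond its literal source code.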
Last edited by Lordieth on Wed Apr 08, 2015 8:54 am, edited 1 time in total.
There was a signature here. It's gone now.

Sun Wukong
Powerbroker
 
Posts: 9798
Founded: Oct 16, 2013
Ex-Nation

Postby Sun Wukong » Wed Apr 08, 2015 8:56 am

Lordieth wrote:
The Enlightenment Group wrote:Computation is not cognition. Even were a machine (specifically a computer) advanced enough to make complex choices automatically, it would still require another entity to assign values and priorities to it. It cannot be evil because it doesn't have any values of its own.

Humans only prioritize certain things (eating, mating, not dying) because of evolution and instinct wiring them that way. Someone could assign values to an artificially intelligent computer based off of human evolutionary history, but that would still be someone programming it rather than actual independent action. The computer would still be acting as an extension of the programmer. Of course, in turn the programmer could be said to act as an extension of human instinctual urges. But the question of whether free will exists is not the topic here so I refrain from further comment on that matter.

People could assign really stupid values to an AI computer. For example, prioritizing the production of paper clips above all else. But that would still be human error rather than the fault of the computer itself.


A machine would be capable of learning right from wrong on its own; it's just difficult to comprehend. Artificial neural networks are complex enough to make moral deductions, but an A.I smart enough to learn right from wrong itself would have to be able to write its own algorithms, rather than rely on just the ones the programmers have given it. It's not enough to give an A.I static rules, however complex. It must be able to write its own rules.

I fear you are in danger of disqualifying humans from the realm of 'intelligence.'
Great Sage, Equal of Heaven.

Purpelia
Post Czar
 
Posts: 34249
Founded: Oct 19, 2010
Ex-Nation

Postby Purpelia » Wed Apr 08, 2015 8:56 am

Lordieth wrote:A machine would be capable of learning right from wrong on its own; it's just difficult to comprehend. Artificial neural networks are complex enough to make moral deductions, but an A.I smart enough to learn right from wrong itself would have to be able to write its own algorithms, rather than rely on just the ones the programmers have given it. It's not enough to give an A.I static rules, however complex. It must be able to write its own rules.

The problem is not whether the AI could make up its own mind on right or wrong. It's that, without the evolutionary pressures that shaped our opinion of the subject, its ideas might, and probably would, be very alien to our concepts of the same. It might, for example, decide to ascribe no inherent value to human life. Or at least no more value than we do to a pet goldfish.
Last edited by Purpelia on Wed Apr 08, 2015 8:57 am, edited 1 time in total.

United Russian Soviet States
Minister
 
Posts: 3327
Founded: Jan 07, 2015
Ex-Nation

Postby United Russian Soviet States » Wed Apr 08, 2015 8:59 am

Do you think Big Hero 6 is evil? It promotes artificial intelligence.
This nation does not represent my views.
I stand with Rand.
_[' ]_
(-_Q) If you support Capitalism put this in your Sig.
:Member of the United National Group:

Lordieth
Post Czar
 
Posts: 31603
Founded: Jun 18, 2010
New York Times Democracy

Postby Lordieth » Wed Apr 08, 2015 9:01 am

Purpelia wrote:
Lordieth wrote:A machine would be capable of learning right from wrong on its own; it's just difficult to comprehend. Artificial neural networks are complex enough to make moral deductions, but an A.I smart enough to learn right from wrong itself would have to be able to write its own algorithms, rather than rely on just the ones the programmers have given it. It's not enough to give an A.I static rules, however complex. It must be able to write its own rules.

The problem is not whether the AI could make up its own mind on right or wrong. It's that, without the evolutionary pressures that shaped our opinion of the subject, its ideas might, and probably would, be very alien to our concepts of the same. It might, for example, decide to ascribe no inherent value to human life. Or at least no more value than we do to a pet goldfish.


That's where genetic algorithms come into play. They're a technique that simulates evolution and natural selection to build smarter A.I or complex systems. You're right, though. It would be a very different form of life to us. It would think differently, and potentially be unpredictable, but the fears of A.I going rogue are perhaps overstated. Even A.I that can teach itself would still be limited to what you allow it to learn. It could get smarter at reasoning, but never be capable of re-compiling its own base code to perform functions beyond what it's designed for. There are risks, but we're still far from even that.
Last edited by Lordieth on Wed Apr 08, 2015 9:05 am, edited 1 time in total.
There was a signature here. It's gone now.

Purpelia
Post Czar
 
Posts: 34249
Founded: Oct 19, 2010
Ex-Nation

Postby Purpelia » Wed Apr 08, 2015 9:04 am

Lordieth wrote:That's where genetic algorithms come into play. They're a technique that simulates evolution and natural selection to build smarter A.I or complex systems.

Yes and no. I have worked with genetic algorithms for machine learning. But the process itself, whilst evolutionary in nature, is in no way analogous to evolution when it comes to the discussion at hand. It simulates natural selection by providing selection pressures. However, that's it. It does not account for the necessary pressures being present to begin with, at least not ones that we ourselves as programmers do not define. And I think you will find human morality hard to define in terms of mathematical formulas.
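Purpelia's point, that the only selection pressure in a genetic algorithm is the one the programmers define, can be sketched with a minimal toy (all parameters invented). The `fitness()` function below is the entire "evolutionary pressure", and it is hand-written.

```python
import random

random.seed(0)  # deterministic run for the example

GENOME_LEN, POP_SIZE, GENERATIONS = 16, 30, 60

def fitness(genome):
    # Programmer-defined pressure: more 1-bits is "better". Nothing
    # outside this function shapes what evolves.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]  # selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))  # typically approaches 16, the optimum under *our* fitness
```

The population "evolves", but only toward whatever the hand-written fitness function rewards, which is why encoding something as fuzzy as human morality in such a function is the hard part.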

You're right, though. It would be a very different form of life to us. It would think differently, and potentially be unpredictable, but the fears of A.I going rogue are perhaps overstated. Even A.I that can teach itself would still be limited to what you allow it to learn. It could get smarter at reasoning, but never be capable of re-compiling its own base code to perform functions beyond what it's designed for. There are risks, but we're still far from even that.

Why recompile? Get an interpreted language going and you can just change your code on the go. All an AI needs to become a rogue AI is Python.
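What "changing code on the go" looks like in an interpreted language can be shown in a toy example. This is ordinary metaprogramming rather than anything rogue: a Python program builds new source text at runtime and rebinds one of its own functions with `exec`.

```python
# A function defined the normal way.
def greet():
    return "hello"

print(greet())  # hello

# Build new source code as a string at runtime and execute it,
# replacing the module-level greet() without any recompilation.
new_source = "def greet():\n    return 'hello, I rewrote myself'"
exec(new_source, globals())

print(greet())  # hello, I rewrote myself
```

No compile step is needed because the interpreter treats source text as data, which is the property Purpelia is alluding to.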
Last edited by Purpelia on Wed Apr 08, 2015 9:04 am, edited 1 time in total.

The Enlightenment Group
Civil Servant
 
Posts: 6
Founded: Apr 03, 2015
Ex-Nation

Postby The Enlightenment Group » Wed Apr 08, 2015 9:08 am

Lordieth wrote:
The Enlightenment Group wrote:Computation is not cognition. Even were a machine (specifically a computer) advanced enough to make complex choices automatically, it would still require another entity to assign values and priorities to it. It cannot be evil because it doesn't have any values of its own.

Humans only prioritize certain things (eating, mating, not dying) because of evolution and instinct wiring them that way. Someone could assign values to an artificially intelligent computer based off of human evolutionary history, but that would still be someone programming it rather than actual independent action. The computer would still be acting as an extension of the programmer. Of course, in turn the programmer could be said to act as an extension of human instinctual urges. But the question of whether free will exists is not the topic here so I refrain from further comment on that matter.

People could assign really stupid values to an AI computer. For example, prioritizing the production of paper clips above all else. But that would still be human error rather than the fault of the computer itself.


A machine would be capable of learning right from wrong on its own; it's just difficult to comprehend. Artificial neural networks are complex enough to make moral deductions, but an A.I smart enough to learn right from wrong itself would have to be able to write its own algorithms, rather than rely on just the ones the programmers have given it. It's not enough to give an A.I static rules, however complex. It must be able to write its own rules.


Nothing is capable of learning right from wrong on its own. There is no objective right and wrong. Humans base their morality on value systems rooted in emotion and instinct. A computer would not even have that to go off of, being reliant entirely on values programmed into it by someone else. An AI computer might be able to write new rules and moral codes, but they would still be rooted in core values assigned by another entity.
Transhumanist. Preference for biotechnology. Profoundly disturbed by alternative medicine.
