On the Topic of Artificial Intelligence

For discussion and debate about anything. (Not a roleplay-related forum; out-of-character commentary only.)

What do you think about AI?

It's dangerous, we should keep developing it. - 25 votes (44%)
It's dangerous, we should stop all research. - 5 votes (9%)
It's not dangerous, we should keep developing it. - 27 votes (47%)
It's not dangerous, we should stop all research. - 0 votes
Total votes: 57

Shaggai
Powerbroker
 
Posts: 9342
Founded: Mar 27, 2013
Ex-Nation

Postby Shaggai » Tue Oct 28, 2014 5:29 pm

Utceforp wrote:
Shaggai wrote:An AI has a weapon inherently built into it: its intelligence. It may not have physical weapons to start out with, but it's still dangerous as hell.

How would an intelligent AI be more dangerous than an intelligent human?

A sufficiently intelligent AI would recursively self-improve, until it reached the limits of its hardware. It would most likely be vastly more intelligent than a human.
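To make that dynamic concrete, here's a toy sketch (every number is invented; this is not a real model of intelligence, just the shape of the argument): growth that feeds on itself explodes quickly, and stops only where the hardware says it stops.

```python
# Toy model of recursive self-improvement (all numbers made up).
# Each step, the AI improves itself in proportion to how smart it already is,
# until its hardware can no longer support further gains.

HARDWARE_LIMIT = 1_000_000   # arbitrary ceiling imposed by the machine
HUMAN_BASELINE = 100.0       # arbitrary reference level

intelligence = HUMAN_BASELINE   # assume it starts roughly human-level
steps = 0
while intelligence < HARDWARE_LIMIT:
    # Smarter systems find better improvements: growth scales with current level.
    intelligence = min(intelligence * 1.5, HARDWARE_LIMIT)
    steps += 1

print(f"Ceiling reached after {steps} steps, "
      f"at {intelligence / HUMAN_BASELINE:,.0f}x the human baseline.")
```

Under these made-up numbers the process saturates in a couple of dozen steps at 10,000x the starting point; the only thing restraining it in the argument is the ceiling.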
piss

Valica
Ambassador
 
Posts: 1527
Founded: Feb 08, 2014
Ex-Nation

Postby Valica » Wed Oct 29, 2014 6:13 am

Utceforp wrote:(1a) Sentient AI is a long, long way away.
(1b) It's not a threat to us simply because it won't exist in our lifetimes.

(2) If it did exist, however, it wouldn't be a threat.
(3) An AI has as much potential to be a threat as a human does, it depends on what you give them.
(4) An AI with access to nuclear weapons is a potential threat, but so is a human.

(5) Anyway, by the time we have true AI, trans-humanism will have made us so similar to them it would be hard to tell the difference.


1a. Not really.
1b. We will almost certainly have full AI within 100 years. An optimist would say by 2040.
2. That's iffy.
3. If they were a full AI, the goal would probably be to make them as humanoid as possible at some point. So everything a human has.
4. An AI with a knife is a threat. Same as a human. Why are you trying to make them out to be separate entities?
5. Doubtful. Trans-humanism doesn't directly include removing the blood, brain, etc. AI humanoids wouldn't exactly need these.

The only threat is if we create an AI meant to function through (a) computer(s).
If we create a standalone AI that functions on a single "brain", then we are almost certainly safe.
I'm a cis-het male. Ask me about my privilege.
Valica is like America with a very conservative economy and a liberal social policy.
Population - 750,500,000
Army - 3,250,500 | Navy - 2,000,000 | Special Forces - 300,000
5 districts; 20 members per district in the House of Representatives; 10 members per district in the Senate
Political affiliation - Centrist / Humanist
Religion - Druid
For: Privacy, LGBT Equality, Cryptocurrencies, Free Web, The Middle Class, One-World Government
Against: Nationalism, Creationism, Right to Segregate, Fundamentalism, ISIS, Communism
( -4.38 | -4.31 )
"If you don't use Linux, you're doing it wrong."

Shaggai
Powerbroker
 
Posts: 9342
Founded: Mar 27, 2013
Ex-Nation

Postby Shaggai » Wed Oct 29, 2014 4:52 pm

Valica wrote:
Utceforp wrote:(1a) Sentient AI is a long, long way away.
(1b) It's not a threat to us simply because it won't exist in our lifetimes.

(2) If it did exist, however, it wouldn't be a threat.
(3) An AI has as much potential to be a threat as a human does, it depends on what you give them.
(4) An AI with access to nuclear weapons is a potential threat, but so is a human.

(5) Anyway, by the time we have true AI, trans-humanism will have made us so similar to them it would be hard to tell the difference.


1a. Not really.
1b. We will almost certainly have full AI within 100 years. An optimist would say by 2040.
2. That's iffy.
3. If they were a full AI, the goal would probably be to make them as humanoid as possible at some point. So everything a human has.
4. An AI with a knife is a threat. Same as a human. Why are you trying to make them out to be separate entities?
5. Doubtful. Trans-humanism doesn't directly include removing the blood, brain, etc. AI humanoids wouldn't exactly need these.

The only threat is if we create an AI meant to function through (a) computer(s).
If we create a standalone AI that functions on a single "brain", then we are almost certainly safe.

I think that the 100-year estimate is rather quick. I would be willing to bet a nonzero amount of money that it would happen in the next two or three hundred years, certainly, but we're far enough away from AI that 100 years is rather unlikely. I mean, I would only be a little bit surprised, but it's nowhere near "almost certain".
piss

Valica
Ambassador
 
Posts: 1527
Founded: Feb 08, 2014
Ex-Nation

Postby Valica » Thu Oct 30, 2014 5:27 am

Shaggai wrote:I think that the 100-year estimate is rather quick. I would be willing to bet a nonzero amount of money that it would happen in the next two or three hundred years, certainly, but we're far enough away from AI that 100 years is rather unlikely. I mean, I would only be a little bit surprised, but it's nowhere near "almost certain".


Technology evolves at exponential rates.
What we do now was thought impossible 20 years ago.
What we do in 20 years will probably be called impossible now.

Some people say the singularity of technology will occur in 2040.
I think that estimate is possible but not likely.
I'm betting on 2050-2070.
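Just to show what an exponential assumption buys (this is arithmetic, not a forecast; the two-year doubling is borrowed from Moore's law purely for illustration):

```python
# Back-of-the-envelope: if capability doubled every 2 years (an assumption,
# not a fact about AI), how far ahead would each target year be from 2014?

DOUBLING_PERIOD_YEARS = 2
BASE_YEAR = 2014   # when this thread was written

for target_year in (2040, 2050, 2070):
    doublings = (target_year - BASE_YEAR) / DOUBLING_PERIOD_YEARS
    print(f"{target_year}: ~{2 ** doublings:,.0f}x 2014 capability")
```

That prints roughly 8,192x for 2040, 262,144x for 2050, and about 268 million-x for 2070. Whether raw capability translates into "full AI" is, of course, the whole debate.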

Lordieth
Post Czar
 
Posts: 31603
Founded: Jun 18, 2010
New York Times Democracy

Postby Lordieth » Thu Oct 30, 2014 6:01 am

Shaggai wrote:
Utceforp wrote:How would an intelligent AI be more dangerous than an intelligent human?

A sufficiently intelligent AI would recursively self-improve, until it reached the limits of its hardware. It would most likely be vastly more intelligent than a human.


Also the ability to reproduce beyond human means. If released into the wild, with modern connectivity it could control a vast array of electronic devices. A single AI could bring about a technological dark age.
There was a signature here. It's gone now.

Shaggai
Powerbroker
 
Posts: 9342
Founded: Mar 27, 2013
Ex-Nation

Postby Shaggai » Thu Oct 30, 2014 5:36 pm

Valica wrote:
Shaggai wrote:I think that the 100-year estimate is rather quick. I would be willing to bet a nonzero amount of money that it would happen in the next two or three hundred years, certainly, but we're far enough away from AI that 100 years is rather unlikely. I mean, I would only be a little bit surprised, but it's nowhere near "almost certain".


Technology evolves at exponential rates.
What we do now was thought impossible 20 years ago.
What we do in 20 years will probably be called impossible now.

Some people say the singularity of technology will occur in 2040.
I think that estimate is possible but not likely.
I'm betting on 2050-2070.

We're far enough away from AI that strong AI is unlikely to happen in the next hundred years, even given exponential growth.
Lordieth wrote:
Shaggai wrote:A sufficiently intelligent AI would recursively self-improve, until it reached the limits of its hardware. It would most likely be vastly more intelligent than a human.


Also the ability to reproduce beyond human means. If released into the wild, with modern connectivity it could control a vast array of electronic devices. A single AI could bring about a technological dark age.

Yup. Anything which potentially could contain a copy of the AI becomes an existential threat.
piss

Sociobiology
Post Marshal
 
Posts: 18396
Founded: Aug 18, 2010
Ex-Nation

Postby Sociobiology » Thu Oct 30, 2014 6:22 pm

Shaggai wrote:
Valica wrote:
Technology evolves at exponential rates.
What we do now was thought impossible 20 years ago.
What we do in 20 years will probably be called impossible now.

Some people say the singularity of technology will occur in 2040.
I think that estimate is possible but not likely.
I'm betting on 2050-2070.

We're far enough away from AI that strong AI is unlikely to happen in the next hundred years, even given exponential growth.


With the advent of memristors, AI became much more likely, because they behave like synapses: their state changes with the signals passing through them, much as the connections between neurons do.
Our real limitation is how well we can image the brain, since the brain already shows how to do most of the things we want an AI to do.
Shaggai wrote:
Lordieth wrote:Also the ability to reproduce beyond human means. If released into the wild, with modern connectivity it could control a vast array of electronic devices. A single AI could bring about a technological dark age.

Yup. Anything which potentially could contain a copy of the AI becomes an existential threat.

Not as much as you think. A mind capable of creative thought must, by definition, be equally capable of error, so don't expect too much from an AI. The more intelligent we make them, the more error-prone they become.
I think we risk becoming the best informed society that has ever died of ignorance. ~Reuben Blades

I got quite annoyed after the Haiti earthquake. A baby was taken from the wreckage and people said it was a miracle. It would have been a miracle had God stopped the earthquake. More wonderful was that a load of evolved monkeys got together to save the life of a child that wasn't theirs. ~Terry Pratchett

Salandriagado
Postmaster of the Fleet
 
Posts: 22831
Founded: Apr 03, 2008
Ex-Nation

Postby Salandriagado » Thu Oct 30, 2014 6:30 pm

Shaggai wrote:
Utceforp wrote:How would an intelligent AI be more dangerous than an intelligent human?

A sufficiently intelligent AI would recursively self-improve, until it reached the limits of its hardware. It would most likely be vastly more intelligent than a human.


That depends entirely on how much hardware you give it. Additionally, "more intelligent" does not imply "more dangerous". Quite the opposite, in my experience.
Cosara wrote:
Anachronous Rex wrote:Good thing most a majority of people aren't so small-minded, and frightened of other's sexuality.

Over 40% (including me), are, so I fixed the post for accuracy.

Vilatania wrote:
Salandriagado wrote:Notice that the link is to the notes from a university course on probability. You clearly have nothing beyond the most absurdly simplistic understanding of the subject.

By choosing 1, you no longer have 0 probability of choosing 1. End of subject.

(read up the quote stack)

Deal. £3000 do?

Of course.

Wisconsin9
Post Czar
 
Posts: 35753
Founded: May 18, 2012
Ex-Nation

Postby Wisconsin9 » Thu Oct 30, 2014 6:33 pm

Solution to an AI threat: pull the plug. Literally. How is this not ludicrously obvious? Keep 'em on a leash until we make sure that they're as safe as we can make them.
~~~~~~~~
We are currently 33% through the Trump administration.

Shaggai
Powerbroker
 
Posts: 9342
Founded: Mar 27, 2013
Ex-Nation

Postby Shaggai » Thu Oct 30, 2014 6:34 pm

Salandriagado wrote:
Shaggai wrote:A sufficiently intelligent AI would recursively self-improve, until it reached the limits of its hardware. It would most likely be vastly more intelligent than a human.


That depends entirely on how much hardware you give it. Additionally, "more intelligent" does not imply "more dangerous". Quite the opposite, in my experience.

If it's got enough hardware to be smart enough to improve itself, it probably has at least as much hardware as a human. Since it can recursively self-improve, it can make much better use of said hardware than a human.

That's because you were dealing with humans, presumably humans with relatively similar goals. An AI would not necessarily share the same goals as a human, and if you think intelligent people aren't dangerous then you have only been interacting with a specific subset of intelligent people.
piss

Shaggai
Powerbroker
 
Posts: 9342
Founded: Mar 27, 2013
Ex-Nation

Postby Shaggai » Thu Oct 30, 2014 6:41 pm

Wisconsin9 wrote:Solution to an AI threat: pull the plug. Literally. How is this not ludicrously obvious? Keep 'em on a leash until we make sure that they're as safe as we can make them.

That works if and only if you can be sure they're safe.
piss

Wisconsin9
Post Czar
 
Posts: 35753
Founded: May 18, 2012
Ex-Nation

Postby Wisconsin9 » Thu Oct 30, 2014 6:45 pm

Shaggai wrote:
Wisconsin9 wrote:Solution to an AI threat: pull the plug. Literally. How is this not ludicrously obvious? Keep 'em on a leash until we make sure that they're as safe as we can make them.

That works if and only if you can be sure they're safe.

You can never be 100% certain that something's safe.
~~~~~~~~
We are currently 33% through the Trump administration.

Vladislavija
Chargé d'Affaires
 
Posts: 469
Founded: Mar 14, 2012
Ex-Nation

Postby Vladislavija » Thu Oct 30, 2014 7:06 pm

Valica wrote:Technology evolves at exponential rates.
What we do now was thought impossible 20 years ago.
What we do in 20 years will probably be called impossible now.


I tend to disagree on all three parts.

Aleckandor REDUX
Attaché
 
Posts: 84
Founded: Sep 09, 2014
New York Times Democracy

Postby Aleckandor REDUX » Fri Oct 31, 2014 8:46 am

AI would change the face of global human society, culture, economics, and perhaps biology forever. If the applied sciences can deliver it to the world correctly, there would be some major historical shift (whether this'll be bloody is uncertain) that I guess would see Earth connected to some post-scarcity grid of sustainable efficiency, automating virtually all labor and industry. Human endeavors, since all immediate physical necessities and luxuries could now be met, would probably focus on areas of cultural creativity or wonderfully AI-assisted scientific research. What I'm hoping for is some sort of Iain M. Banks-esque, Culture-like pan-human anarcho-socialist stellar civilization to arise from this; that is, if we can program these predominant AIs to develop humanocentric morals and the ability to empathize on a meaningful level.
FORMERLY ALECKANDOR ~ FOUNDED 05/30/2011; + 2767 POSTS
• Demonym: Aleckandorean(s) | Government: Democratic Multinationalist Confederation
• Global Population: 19.6 Bill. (Not NS Stat)| Tech: MT/PMT
• Military: 6% From Pop. (11% In Total War)
• Special Links: {All W.I.P.}
Unless I am participating in some huge war thread that is multi-theater and protracted, I usually limit my population use to be fair in each set-piece RP and to keep some realism. But I don't just do wars and geopolitics, I can do character-based content and world-building as well. Just send a TG my way if you're interested in something or bored.

17. Centrist Authoritarian [Indep./Swing]. Catholic. Chinese-Filipino. SoCal, USA.

Valica
Ambassador
 
Posts: 1527
Founded: Feb 08, 2014
Ex-Nation

Postby Valica » Fri Oct 31, 2014 8:58 am

Vladislavija wrote:
Valica wrote:Technology evolves at exponential rates.
What we do now was thought impossible 20 years ago.
What we do in 20 years will probably be called impossible now.


I tend to disagree on all three parts.


And? That simply makes you wrong.

Technology does grow at exponential rates. It's a fact.

We went thousands of years with slow innovation and inventions.
Then came the 19th and 20th century.

After the industrial age, machines were booming.

Fast forward to WW2; we were pumping out inventions like it was nobody's business.

Even further to the Cold War; we went to fucking Luna.

Then you get the 90s and the dawn of serious home computing.
Internet speeds were atrocious and all we had was dial-up, but people loved the internet.

Then you come upon the last 10 years and what do you find?
Fiber internet, 3D printing, virtual reality technology, cloning advancements, the ability to render the Earth in 3D in a browser, etc.
I could keep listing shit, but you get the idea.

Technological advancements allow for more technological advancements. It's literally exponential.

Secondly, imagine going back to 1994 and telling people we would have gigabit internet or that we could play games at 4K resolution...
Or show them a picture of the newest iPhone.
They would have called you insane.

Don't even get me started on quantum computing.
The technology hadn't even begun to emerge 20 years ago.

The third point might be the most flimsy because we understand now that tech evolves rapidly, so we can better gauge what will exist.
But even then, there's no telling what kinds of advancements might be made.

Vladislavija
Chargé d'Affaires
 
Posts: 469
Founded: Mar 14, 2012
Ex-Nation

Postby Vladislavija » Fri Oct 31, 2014 10:27 am

Valica wrote:
Vladislavija wrote:
I tend to disagree on all three parts.


And? That simply makes you wrong.

Technology does grow at exponential rates. It's a fact.

We went thousands of years with slow innovation and inventions.
Then came the 19th and 20th century.

After the industrial age, machines were booming.

Fast forward to WW2; we were pumping out inventions like it was nobody's business.

Even further to the Cold War; we went to fucking Luna.

Then you get the 90s and the dawn of serious home computing.
Internet speeds were atrocious and all we had was dial-up, but people loved the internet.

Then you come upon the last 10 years and what do you find?
Fiber internet, 3D printing, virtual reality technology, cloning advancements, the ability to render the Earth in 3D in a browser, etc.
I could keep listing shit, but you get the idea.

Technological advancements allow for more technological advancements. It's literally exponential.

Secondly, imagine going back to 1994 and telling people we would have gigabit internet or that we could play games at 4K resolution...
Or show them a picture of the newest iPhone.
They would have called you insane.

Don't even get me started on quantum computing.
The technology hadn't even begun to emerge 20 years ago.

The third point might be the most flimsy because we understand now that tech evolves rapidly, so we can better gauge what will exist.
But even then, there's no telling what kinds of advancements might be made.


I don't see how any of the milestones you mention mean anything. Here, let me show you some additional "milestones".

[image: a chart of technological trend lines that have gone flat]

Now I wonder what those flat lines mean. Could it be that we're reaching a plateau in our exponential technological growth? If you want, for every milestone you name I can match it with one limit we have reached or are approaching.

Extrapolation is very dangerous. ;)
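To see why extrapolation misleads, here is a minimal sketch with arbitrary numbers: a logistic S-curve is indistinguishable from an exponential early on, then hits its plateau.

```python
import math

# Arbitrary numbers throughout. Early on, the logistic curve tracks the
# exponential almost exactly; then it runs into its ceiling and flattens.

CAPACITY = 1000.0    # the plateau: the "limit we are approaching"
RATE = 0.5

def exponential(t, x0=1.0):
    return x0 * math.exp(RATE * t)

def logistic(t, x0=1.0):
    return CAPACITY / (1.0 + (CAPACITY / x0 - 1.0) * math.exp(-RATE * t))

for t in (0, 5, 10, 15, 20, 25):
    print(f"t={t:2d}   exponential={exponential(t):10.1f}   logistic={logistic(t):7.1f}")
```

By t=15 the exponential extrapolation says roughly 1,800 while the real curve sits near 644, closing on its ceiling of 1,000; by t=25 the extrapolation is off by a factor of more than 250.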

Lordieth
Post Czar
 
Posts: 31603
Founded: Jun 18, 2010
New York Times Democracy

Postby Lordieth » Fri Oct 31, 2014 10:30 am

Sociobiology wrote:
Shaggai wrote:We're far enough away from AI that strong AI is unlikely to happen in the next hundred years, even given exponential growth.

With the advent of memristors, AI became much more likely, because they behave like synapses: their state changes with the signals passing through them, much as the connections between neurons do. Our real limitation is how well we can image the brain, since the brain already shows how to do most of the things we want an AI to do.

Shaggai wrote:Yup. Anything which potentially could contain a copy of the AI becomes an existential threat.

Not as much as you think. A mind capable of creative thought must, by definition, be equally capable of error, so don't expect too much from an AI. The more intelligent we make them, the more error-prone they become.


I'm not sure I agree. Computers are capable of a great degree of accuracy, and creativity doesn't necessarily mean error. A lot of what causes human error wouldn't apply to an artificial intelligence. At the very least, its capacity to learn will be beyond that of any human, so while it may make mistakes, it will almost certainly learn from them.

It could make errors in judgement, yes. Morality is subject to such things.
There was a signature here. It's gone now.

Vladislavija
Chargé d'Affaires
 
Posts: 469
Founded: Mar 14, 2012
Ex-Nation

Postby Vladislavija » Fri Oct 31, 2014 10:32 am

Lordieth wrote:I'm not sure I agree. Computers are capable of a great degree of accuracy, and creativity doesn't necessarily mean error. A lot of what causes human error wouldn't apply to an artificial intelligence. At the very least, its capacity to learn will be beyond that of any human, so while it may make mistakes, it will almost certainly learn from them.

It could make errors in judgement, yes. Morality is subject to such things.


I say computers would be as prone to errors as humans. Software bugs can be a bitch to weed out, especially when using data mining. Just remember the AI that was supposed to distinguish tanks from humans and failed miserably.
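(The tank story is usually told as an anecdote, so treat it as folklore, but the failure mode it describes is real. A minimal, entirely made-up sketch: the training photos are biased, so the learner picks up brightness instead of tanks.)

```python
# Entirely made-up illustration of dataset bias (the failure mode behind the
# tank anecdote): all "tank" training photos happen to be dark, so the
# learned rule keys on brightness, not on tanks, and fails in the field.

# Each "photo" is (average_brightness, actually_contains_tank).
training_set = [
    (0.20, True),  (0.30, True),  (0.25, True),   # tanks shot on overcast days
    (0.80, False), (0.70, False), (0.90, False),  # tank-free shots in sunshine
]

# "Training": place a threshold halfway between the two classes' brightness.
darkest_no_tank = min(b for b, tank in training_set if not tank)
brightest_tank = max(b for b, tank in training_set if tank)
threshold = (brightest_tank + darkest_no_tank) / 2   # 0.5 here

def predict_tank(brightness: float) -> bool:
    return brightness < threshold   # learned rule: "dark photo means tank"

print(predict_tank(0.85))   # False: misses a tank photographed on a sunny day
print(predict_tank(0.30))   # True: false alarm on a dark but empty field
```

It scores 100% on the biased training set and is useless in practice, which is exactly the kind of bug that ordinary debugging won't surface until you audit the data.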

Utceforp
Postmaster-General
 
Posts: 10328
Founded: Apr 10, 2012
Left-wing Utopia

Postby Utceforp » Fri Oct 31, 2014 12:47 pm

Vladislavija wrote:
Lordieth wrote:I'm not sure I agree. Computers are capable of a great degree of accuracy, and creativity doesn't necessarily mean error. A lot of what causes human error wouldn't apply to an artificial intelligence. At the very least, its capacity to learn will be beyond that of any human, so while it may make mistakes, it will almost certainly learn from them.

It could make errors in judgement, yes. Morality is subject to such things.


I say computers would be as prone to errors as humans. Software bugs can be a bitch to weed out, especially when using data mining. Just remember the AI that was supposed to distinguish tanks from humans and failed miserably.

The difference being that you can fix bugs in AI, but bugs in humans are hard/impossible to identify or fix.
Signatures are so 2014.

Vladislavija
Chargé d'Affaires
 
Posts: 469
Founded: Mar 14, 2012
Ex-Nation

Postby Vladislavija » Fri Oct 31, 2014 1:53 pm

Utceforp wrote:
Vladislavija wrote:
I say computers would be as prone to errors as humans. Software bugs can be a bitch to weed out, especially when using data mining. Just remember the AI that was supposed to distinguish tanks from humans and failed miserably.

The difference being that you can fix bugs in AI, but bugs in humans are hard/impossible to identify or fix.


Uhhhhh, I'm really not convinced about that. Have you ever worked with AI?

The Black Forrest
Khan of Spam
 
Posts: 59144
Founded: Antiquity
Inoffensive Centrist Democracy

Postby The Black Forrest » Fri Oct 31, 2014 1:57 pm

Utceforp wrote:
Shaggai wrote:An AI has a weapon inherently built into it: its intelligence. It may not have physical weapons to start out with, but it's still dangerous as hell.

How would an intelligent AI be more dangerous than an intelligent human?


How would it understand compassion?
*I am a master proofreader after I click Submit.
* There is actually a War on Christmas. But Christmas started it, with its unparalleled aggression against the Thanksgiving Holiday, and now Christmas has seized much Lebensraum in November, and is pushing into October. The rest of us seek to repel these invaders, and push them back to the status quo ante bellum Black Friday border. -Trotskylvania
* Silence Is Golden But Duct Tape Is Silver.
* I felt like Ayn Rand cornered me at a party, and three minutes in I found my first objection to what she was saying, but she kept talking without interruption for ten more days. - Max Barry talking about Atlas Shrugged

Norstal
Post Czar
 
Posts: 41465
Founded: Mar 07, 2008
Ex-Nation

Postby Norstal » Fri Oct 31, 2014 2:02 pm

Vladislavija wrote:
Utceforp wrote:The difference being that you can fix bugs in AI, but bugs in humans are hard/impossible to identify or fix.


Uhhhhh, I'm really not convinced about that. Have you ever worked with AI?

I'm not sure a lot of people here have worked with computers.

AI mainly involves these things:
  • Getting to the correct state
  • Optimization of state traversal

The danger in AI is not that it becomes sentient. The danger of AI is that it is too optimized. An AI that staples paper can become dangerous, as it may end up stapling everything and anything.
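A deliberately silly sketch of that stapler (everything here is invented): an optimizer pursues exactly the objective it is given, and "only staple paper" is a constraint nobody wrote down.

```python
# Deliberately silly sketch: the agent maximizes "things stapled". Nothing in
# that objective says "paper only", so the unconstrained version staples the cat.

world = ["paper", "paper", "report", "curtains", "cat", "paper"]

def naive_stapler(items):
    # Objective only: maximize the number of things stapled.
    return list(items)

def constrained_stapler(items):
    # Same objective, plus the constraint the designers actually intended.
    return [item for item in items if item in ("paper", "report")]

print("naive:      ", naive_stapler(world))        # includes the cat
print("constrained:", constrained_stapler(world))  # paper products only
```

The point isn't that anyone would ship this; it's that the dangerous part is the gap between the objective you wrote and the one you meant.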
Toronto Sun wrote:Best poster ever. ★★★★★


New York Times wrote:No one can beat him in debates. 5/5.


IGN wrote:Literally the best game I've ever played. 10/10


NSG Public wrote:What a fucking douchebag.



Supreme Chairman for Life of the Itty Bitty Kitty Committee

Mefpan
Negotiator
 
Posts: 5872
Founded: Oct 23, 2012
Ex-Nation

Postby Mefpan » Fri Oct 31, 2014 2:06 pm

The Black Forrest wrote:
Utceforp wrote:How would an intelligent AI be more dangerous than an intelligent human?


How would it understand compassion?

By programming the AI in a way that includes giving actual values to formless things like trust, loyalty, and reliability, which could very well be seen as valuable resources, given that they're non-negligible factors in causing a preferred reaction in a person?

I mean, we write numbers on fancy slips of paper, call it money and suddenly it's extremely important. Shouldn't be too difficult to make it so that repeated positive interaction with us fleshbags is given some value as well.
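A sketch of how that might look (all names, weights, and update rules are invented here): give "trust" a number, update it from interactions, and fold it into what the machine values alongside its task reward.

```python
# Invented sketch: "trust" as a tracked quantity the agent values, updated by
# positive/negative interactions and weighed against raw task reward.

TRUST_WEIGHT = 5.0   # how much the agent values accumulated trust (arbitrary)

trust = 0.5          # starts neutral; clamped to the range 0..1

def record_interaction(positive: bool) -> None:
    """Nudge trust up after positive interactions, down (harder) after negative."""
    global trust
    delta = 0.1 if positive else -0.2   # trust is easier to lose than to gain
    trust = max(0.0, min(1.0, trust + delta))

def total_value(task_reward: float) -> float:
    # The agent maximizes task reward AND trust, not task reward alone.
    return task_reward + TRUST_WEIGHT * trust

record_interaction(True)
record_interaction(True)
print(total_value(task_reward=10.0))   # 10 + 5.0 * 0.7 = 13.5
```

Of course, whether numbers like these capture what we actually mean by trust is another question.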
I support thermonuclear warfare. Do you want to play a game of chess?
NationStates' umpteenth dirty ex-leftist class traitor.
I left the Left when it turned Right. Now I'm going back to the Right because it's all that's Left.
Yeah, Screw Realism!
Loyal Planet of Mankind

Shaggai
Powerbroker
 
Posts: 9342
Founded: Mar 27, 2013
Ex-Nation

Postby Shaggai » Sun Nov 02, 2014 2:37 pm

Mefpan wrote:
The Black Forrest wrote:
How would it understand compassion?

By programming the AI in a way that includes giving actual values to formless things like trust, loyalty, and reliability, which could very well be seen as valuable resources, given that they're non-negligible factors in causing a preferred reaction in a person?

I mean, we write numbers on fancy slips of paper, call it money and suddenly it's extremely important. Shouldn't be too difficult to make it so that repeated positive interaction with us fleshbags is given some value as well.

Define trust, loyalty, reliability, compassion, and positive interaction.
piss

Mefpan
Negotiator
 
Posts: 5872
Founded: Oct 23, 2012
Ex-Nation

Postby Mefpan » Sun Nov 02, 2014 2:46 pm

Shaggai wrote:
Mefpan wrote:By programming the AI in a way that includes giving actual values to formless things like trust, loyalty, and reliability, which could very well be seen as valuable resources, given that they're non-negligible factors in causing a preferred reaction in a person?

I mean, we write numbers on fancy slips of paper, call it money and suddenly it's extremely important. Shouldn't be too difficult to make it so that repeated positive interaction with us fleshbags is given some value as well.

Define trust, loyalty, reliability, compassion, and positive interaction.

Well, that teaches me to keep my mouth shut, because I can't answer that request in any satisfactory way.

Still, I'm keeping my fingers crossed that people smarter than me can figure out a system for recognizing those things based on an evaluation of past man-machine interaction records. I mean, facial recognition's a thing to some extent now, and I think companies are already looking into ways to decipher the exact mood a person is in based on their facial expressions. Can't find a reputable source on that though, so take that with a mountain of salt.
I support thermonuclear warfare. Do you want to play a game of chess?
NationStates' umpteenth dirty ex-leftist class traitor.
I left the Left when it turned Right. Now I'm going back to the Right because it's all that's Left.
Yeah, Screw Realism!
Loyal Planet of Mankind
