by The V O I D » Wed Apr 08, 2015 8:00 am
by The Black Forrest » Wed Apr 08, 2015 8:17 am
by Purpelia » Wed Apr 08, 2015 8:18 am
by Bearon » Wed Apr 08, 2015 8:20 am
by Sun Wukong » Wed Apr 08, 2015 8:27 am
by The V O I D » Wed Apr 08, 2015 8:31 am
Sun Wukong wrote:I think people anthropomorphize too much, even when they're trying not to. No one knows what an artificial intelligence would look like, but it would probably 'think' in much the same way that a submarine 'swims.'
Even the accusation that an AI would fight to survive is essentially baseless. There's no reason to think it would. It wouldn't have a billion years of evolutionary history biasing it to do so.
by Auroya » Wed Apr 08, 2015 8:33 am
by Sun Wukong » Wed Apr 08, 2015 8:34 am
The V O I D wrote:Sun Wukong wrote:I think people anthropomorphize too much, even when they're trying not to. No one knows what an artificial intelligence would look like, but it would probably 'think' in much the same way that a submarine 'swims.'
Even the accusation that an AI would fight to survive is essentially baseless. There's no reason to think it would. It wouldn't have a billion years of evolutionary history biasing it to do so.
This might be true, but then again, this is a machine we're talking about. If it has high enough processing power and can process information quickly, what stops it from learning all of human knowledge, if it can handle processing it all? Then it knows evolution, war, etc., all in a matter of opening its eyes, so to speak.
by Otulia » Wed Apr 08, 2015 8:36 am
by Lordieth » Wed Apr 08, 2015 8:37 am
Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists.
by Hirota » Wed Apr 08, 2015 8:40 am
by The V O I D » Wed Apr 08, 2015 8:41 am
Lordieth wrote:A.I is entirely possible, but it's still in its infancy. Machine learning has come a long way, but most of the intelligent systems being built can only solve certain types of problems, or do things in very specific ways. Making machines that are smart isn't difficult. The problem is making them smart in general, and by that I mean beyond the very specific type of challenges they're coded to solve.
There's a very good documentary about IBM Watson, a machine that was pitted against human opponents on the game show Jeopardy. It's very good, and I recommend watching it if you have an interest in artificial intelligence. As smart as that machine is, however, all it is doing is a very complex kind of logical, deductive reasoning based on a huge amount of data. It appears to be thinking for itself, but it isn't. It can't reason beyond what it's programmed to do, and a lot of A.I systems are like this. They can't teach themselves to think beyond their own programming, which is what A.I needs to be able to do to become true A.I.
It's called artificial general intelligence. As Wikipedia puts it: "Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists."
That's where we need to be, and we're still a long way off.
by Bezkoshtovnya » Wed Apr 08, 2015 8:45 am
Dante Alighieri wrote:There is no greater sorrow than to recall happiness in times of misery
Charlie Chaplin wrote:Nothing is permanent in this wicked world, not even our troubles.
by Lordieth » Wed Apr 08, 2015 8:46 am
Hirota wrote:First of all, I believe your definition of AI isn't correct. Self-awareness is a requirement for the Turing Test, but Artificial intelligence is the act of simulating or carrying out actions that would typically require a human decision - that doesn't require the self-awareness required to pass the Turing Test.
If you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.
I would argue that Artificial Intelligence is already in existence and already in prevalence. Look at the rise of "Big Data" - Whereas beforehand that would require a human to study and understand the data, that job is already out of our hands.
I wouldn't worry about what the media portrays as AI - be that Arnie, "The Machine," HAL-9000 or whatever, I would rather focus on reality. It's far more likely that humanity will deliberately program an AI to make harmful decisions than it is for an AI to make that decision for itself.
by Arcturus Novus » Wed Apr 08, 2015 8:48 am
Lordieth wrote:A.I is entirely possible, but it's still in its infancy. Machine learning has come a long way, but most of the intelligent systems being built can only solve certain types of problems, or do things in very specific ways. Making machines that are smart isn't difficult. The problem is making them smart in general, and by that I mean beyond the very specific type of challenges they're coded to solve.
There's a very good documentary about IBM Watson, a machine that was pitted against human opponents on the game show Jeopardy. It's very good, and I recommend watching it if you have an interest in artificial intelligence. As smart as that machine is, however, all it is doing is a very complex kind of logical, deductive reasoning based on a huge amount of data. It appears to be thinking for itself, but it isn't. It can't reason beyond what it's programmed to do, and a lot of A.I systems are like this. They can't teach themselves to think beyond their own programming, which is what A.I needs to be able to do to become true A.I.
It's called artificial general intelligence. As Wikipedia puts it: "Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists."
That's where we need to be, and we're still a long way off.
Nilokeras wrote:there is of course an interesting thread to pull on [...]
Unfortunately we're all forced to participate in whatever baroque humiliation kink the OP has going on instead.
by The Enlightenment Group » Wed Apr 08, 2015 8:49 am
by Lordieth » Wed Apr 08, 2015 8:53 am
The Enlightenment Group wrote:Computation is not cognition. Even were a machine (specifically a computer) advanced enough to make complex choices automatically, it would still require another entity to assign values and priorities to it. It cannot be evil because it doesn't have any values of its own.
Humans only prioritize certain things (eating, mating, not dying) because of evolution and instinct wiring them that way. Someone could assign values to an artificially intelligent computer based off of human evolutionary history, but that would still be someone programming it rather than actual independent action. The computer would still be acting as an extension of the programmer. Of course, in turn the programmer could be said to act as an extension of human instinctual urges. But the question of whether free will exists is not the topic here so I refrain from further comment on that matter.
People could assign really stupid values to an AI computer. For example, prioritizing the production of paper clips above all else. But that would still be human error rather than the fault of the computer itself.
by Sun Wukong » Wed Apr 08, 2015 8:56 am
Lordieth wrote:The Enlightenment Group wrote:Computation is not cognition. Even were a machine (specifically a computer) advanced enough to make complex choices automatically, it would still require another entity to assign values and priorities to it. It cannot be evil because it doesn't have any values of its own.
Humans only prioritize certain things (eating, mating, not dying) because of evolution and instinct wiring them that way. Someone could assign values to an artificially intelligent computer based off of human evolutionary history, but that would still be someone programming it rather than actual independent action. The computer would still be acting as an extension of the programmer. Of course, in turn the programmer could be said to act as an extension of human instinctual urges. But the question of whether free will exists is not the topic here so I refrain from further comment on that matter.
People could assign really stupid values to an AI computer. For example, prioritizing the production of paper clips above all else. But that would still be human error rather than the fault of the computer itself.
A machine would be capable of learning right from wrong on its own; it's just difficult to comprehend. Artificial neural networks are complex enough to make moral deductions, but an A.I smart enough to learn right from wrong itself would have to be able to write its own algorithms, rather than rely on just the ones the programmers have given it. It's not enough to give an A.I static rules, however complex. It must be able to write its own rules.
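Lordieth's distinction between rules a programmer hands down and rules a system derives for itself shows up even in the simplest neural network. The sketch below is purely illustrative (nothing in it comes from the thread): a one-neuron perceptron is never told the logical-AND rule; it adjusts its own weights from labelled examples until its decision boundary encodes that rule on its own.

```python
# Illustrative sketch: a perceptron "writes its own rule" (logical AND)
# from labelled examples instead of having the rule hard-coded.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]  # weights, initially encoding no rule at all
b = 0.0         # bias
lr = 0.1        # learning rate

def predict(x):
    # The "rule" is whatever decision boundary the weights currently encode.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):  # a few passes over the examples suffice for AND
    for x, target in examples:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])  # → [0, 0, 0, 1]
```

The programmer supplies only examples and an update procedure; the boundary separating "right" from "wrong" answers is something the network converges to itself, which is the (much simpler) analogue of the self-written rules described above.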
by Purpelia » Wed Apr 08, 2015 8:56 am
Lordieth wrote:A machine would be capable of learning right from wrong on its own; it's just difficult to comprehend. Artificial neural networks are complex enough to make moral deductions, but an A.I smart enough to learn right from wrong itself would have to be able to write its own algorithms, rather than rely on just the ones the programmers have given it. It's not enough to give an A.I static rules, however complex. It must be able to write its own rules.
by United Russian Soviet States » Wed Apr 08, 2015 8:59 am
by Lordieth » Wed Apr 08, 2015 9:01 am
Purpelia wrote:Lordieth wrote:A machine would be capable of learning right from wrong on its own; it's just difficult to comprehend. Artificial neural networks are complex enough to make moral deductions, but an A.I smart enough to learn right from wrong itself would have to be able to write its own algorithms, rather than rely on just the ones the programmers have given it. It's not enough to give an A.I static rules, however complex. It must be able to write its own rules.
The problem is not whether the AI could make up its own mind on right or wrong. It's that, without the evolutionary pressures that shaped our opinion of the subject, its ideas might, and probably would, be very alien to our concepts of the same. It might, for example, decide to ascribe no inherent value to human life. Or at least no more value than we do to a pet goldfish.
by Purpelia » Wed Apr 08, 2015 9:04 am
Lordieth wrote:That's where genetic algorithms come into play. They're a technique that simulates evolution and natural selection to build smarter A.I or complex systems.
You're right, though. It would be a very different form of life to us. It would think differently, and potentially unpredictably, but the fears of A.I going rogue are perhaps overstated. Even A.I that can teach itself would still be limited to what you allow it to learn. It could get smarter at reasoning, but never be capable of re-compiling its own base code to perform functions beyond what it's designed for. There are risks, but we're still far from even that.
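The genetic-algorithm idea Lordieth mentions — simulated evolution with selection, crossover, and mutation — fits in a few lines of code. The toy below is a minimal illustrative sketch (the classic "OneMax" problem, not anything from the thread): bit-string "genomes" are scored by fitness, the fitter half survives each generation, and children are bred from survivors with random mutation.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 60, 0.02

def fitness(genome):
    return sum(genome)  # OneMax: more 1-bits = fitter

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Single-point crossover: splice two parent genomes at a random cut.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

# Start from a random population of bit-strings.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: the fitter half survives and breeds the replacements.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))  # converges at or near GENOME_LEN (all ones)
```

No genome is ever told that "all ones" is the goal; the population drifts there purely under selection pressure, which is the sense in which evolution is being simulated rather than a solution being programmed in.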
by The Enlightenment Group » Wed Apr 08, 2015 9:08 am
Lordieth wrote:The Enlightenment Group wrote:Computation is not cognition. Even were a machine (specifically a computer) advanced enough to make complex choices automatically, it would still require another entity to assign values and priorities to it. It cannot be evil because it doesn't have any values of its own.
Humans only prioritize certain things (eating, mating, not dying) because of evolution and instinct wiring them that way. Someone could assign values to an artificially intelligent computer based off of human evolutionary history, but that would still be someone programming it rather than actual independent action. The computer would still be acting as an extension of the programmer. Of course, in turn the programmer could be said to act as an extension of human instinctual urges. But the question of whether free will exists is not the topic here so I refrain from further comment on that matter.
People could assign really stupid values to an AI computer. For example, prioritizing the production of paper clips above all else. But that would still be human error rather than the fault of the computer itself.
A machine would be capable of learning right from wrong on its own; it's just difficult to comprehend. Artificial neural networks are complex enough to make moral deductions, but an A.I smart enough to learn right from wrong itself would have to be able to write its own algorithms, rather than rely on just the ones the programmers have given it. It's not enough to give an A.I static rules, however complex. It must be able to write its own rules.