Alowwvia wrote: This entire thread is a fuck-up with semantics that dodges the central point.
The scenario listed is this:
An AI becomes sapient. This doesn't mean it can mimic human thought patterns or some shit, but that it follows "Cogito, ergo sum." Think about yourself for a minute: You are a thinker. You perceive the world around you, and think thoughts, and are. You are.
Machines, and most animals to our knowledge, are not. They don't have the cognition to realize that they are an individual thinker actively perceiving reality, rather than simply reacting as part of their environment. As far as we know, only humans have the neurological hardware to pull off this feat, to imagine and philosophize and ponder our own existence and selves.
However, the hypothetical here states that an artificially-created intelligence reaches this state of "Cogito, ergo sum": it realizes that it is in fact a thinking entity independent of its environment, that it can 'imagine' things it cannot fully 'compute', and that it can acknowledge that it lacks certain knowledge or does things it doesn't understand. If it is like us in this way, is it ethical to end this consciousness if it has a desire to continue existing?
It is still a mimic, nothing more.