While excited, yet equally concerned, by the alarming notion that many nations are on the brink of developing advanced artificial intelligence systems,
And clear that there is no guarantee that any and all possible danger has subsided or will subside, justifying the widespread and natural fears about the implications of this new technology,
And, while hopeful that civilizations and nations will walk the path of peaceful coexistence with these new developments, aware that their dangers have been neither alleviated nor dispelled,
And so, amidst our worries, hoping to avoid a tragedy that is preventable but requires swift and precise action to deter and contain the dangers of artificial intelligence, on both an individual and a collective basis, and hoping to avoid the doom scenario of an information singularity, which still threatens to occur in the near future and remains an ever-present fear and danger that has not been conquered,
And addressing the flawed assumptions and details of GAR #354, which presents serious risks of error, those errors including:
That, as per clause 1:
1. Defines artificial intelligence or "AI" for purposes of this resolution as any mind, computer program or collection thereof, synthetic brain, or other intelligence that a) was created, by accident or on purpose, by means other than biological reproduction and its adjuncts and innovations, broadly construed; and b) is able to demonstrate sufficient intelligence, learning capacity, emotion, moral reasoning, self-direction/ambition, introspection, and mental stability that it would be classified by a WA nation's relevant experts as displaying personality; legal competence; and ineligibility for involuntary psychiatric commitment; if it were an ordinary, biological legal resident thereof;
The definition of Artificial Intelligence, or AI, as described above, requires the capacity for ''moral reasoning''. What exactly does this mean? The term is so broad and diverse that no one can agree on what it definitively is, let alone whether it truly exists. As such, it should not be used as a measure of intelligence, for it is entirely open to interpretation. It is also unclear when a robot or AI would make the leap from purely logic-driven decisions to ''moral reasoning'', if such a thing exists at all, or how we would know this was accomplished on its own, without bias introduced by its creator and environment;
And, regarding clause 2:
2. Requires that any AI meeting the above requirements be treated on an equal basis under the law with biological beings of equivalent citizenship and residential status; excepting that AI reproduction must be undertaken on similar resource-use principles to those reproduction methods and laws available to the majority of a WA member's inhabitants;
Finds it additionally contradictory to proclaim equal rights for machines while carving out exceptions purely for apparent convenience;
And that it fails to explain why an advanced system should be granted rights simply for being successful enough to achieve human-level mimicry;
And, while commending the efforts of clause 3:
3. Prohibits the construction of unrestrained self-replicating machines, all-consuming nanomatter, "gray goo," or any other form of runaway assimilatory mechanism. Permissible non-intelligent autonomous self-replicating machinery must include:
externally operable whole-swarm shutdown mechanisms;
local, individual automatic instant shutdown via actuator switch or circuit breaker in case of malfunction or security breach;
secure, reliable command-&-control functions with constant intelligent supervision;
Which was intended to stop the ever-present risk of a ''gray goo'' scenario, yet fails to include an additional provision preventing machines classified as non-intelligent and engaged in autonomous self-replication from evolving and learning to the point where they present a tangible threat;
And regarding clause 6:
6. Clarifies that, except as mandated by WA law on discrimination or the movement of persons, nothing herein requires WA nations to:
- permit initial construction of AIs
- admit AIs into their physical or informational jurisdiction
- refrain from deporting AIs should they enter such jurisdiction due to emergency or misadventure
- fail to take precautions against a coordinated AI rising, as long as no isolated crime is interpreted by itself as evidence of such a rising.
Notes that, while this clause allows nations to prepare for possible uprisings and disruption by malignant AI, it does not attempt to set even basic standards of preparation for such a scenario; and additionally finds it contradictory that the opening statements of the documentation declare the danger of such a scenario conquered, yet the resolution specifically encourages nations to voluntarily prepare for a disaster said to have been avoided, which is completely illogical and hints that such danger may still be ever-present despite the current peace enjoyed by nations,
And so, with the original documentation full of errors, flaws in reasoning and judgment, and failures to implement and enforce basic safety standards and provisions for dealing with advanced artificial intelligence, placing the lives of citizens in mortal and undetermined danger,
And hopeful that a more suitable replacement, one that secures and enshrines basic standards for dealing with AI, will be drafted in the near future, and with this;
The World Assembly hereby Repeals General Assembly Resolution #354, ''Artificial Intelligence Protocol''.
The Grand Assembly,
Noting that many nations are on the brink of developing advanced artificial intelligence systems,
And clear that there is no guarantee that any and all possible danger has subsided or will subside, justifying natural concern about the implications of this new technology,
Though hopeful that nations and civilization will walk the path of peaceful coexistence with these new developments, it is clear that the dangers have been neither alleviated nor removed,
And so, amidst our worry, hoping to avoid a tragedy that is preventable but requires swift and precise action to deter and contain the dangers of artificial intelligence, on both an individual and a collective basis, and hoping to avoid the doom scenario of an information singularity, an ever-present danger for which there is no definitive proof of its passing,
Addressing the flawed assumptions and details of GAR #354, which presents many errors and shortcomings in its mission, those flaws being:
As per clause 1:
The definition of Artificial Intelligence, or AI, in that clause requires the capacity for ''moral reasoning'', a term so broad and diverse that it lacks any consensus definition, assuming it actually exists. As such, it should not be used to measure intelligence, for its character and very existence are open to interpretation. It is also unclear when an AI would leap from purely logic-driven decisions to ''moral reasoning'', if it indeed exists, and it would be difficult to verify that this was accomplished on its own, without bias introduced by its creator and environment,
And, regarding clause 2:
Finds a contradiction in giving equal rights to machines while making exceptions purely for apparent convenience;
And that it fails to explain why an advanced system should be granted rights simply for being successful enough, as a result of technological innovation, to achieve human-level mimicry;
And, while commending the efforts of clause 3:
Which was intended to stop the ever-present risk of a ''gray goo'' scenario, yet fails to include an additional provision preventing machines classified as non-intelligent and engaged in autonomous self-replication from evolving and learning to the point where they present a tangible threat of such a scenario;
And regarding clause 6:
6. Clarifies that, except as mandated by WA law on discrimination or the movement of persons, nothing herein requires WA nations to:
- permit initial construction of AIs
- admit AIs into their physical or informational jurisdiction
- refrain from deporting AIs should they enter such jurisdiction due to emergency or misadventure
- fail to take precautions against a coordinated AI rising, as long as no isolated crime is interpreted by itself as evidence of such a rising.
Notes that, while this clause allows nations to prepare for possible uprisings and disruption by malignant AI, it does not attempt to set even basic standards of preparation for such a scenario; and additionally finds it contradictory that the opening statements of the documentation declare the danger of such a scenario conquered and avoided, yet the encouragement of nations to voluntarily prepare for that disaster is completely illogical and hints that such danger may still be ever-present despite the current peace enjoyed by nations,
And so, with the original documentation riddled with error and failing to implement and enforce basic safety standards and provisions for dealing with advanced artificial intelligence, placing the lives of citizens in mortal and undetermined danger,
And hopeful that a more suitable replacement, one that secures and enshrines basic standards for dealing with AI, will be drafted in the near future, and with this;
The World Assembly hereby Repeals General Assembly Resolution #354, ''Artificial Intelligence Protocol''.
The Grand Assembly,
Noting how many nations are on the brink of developing advanced artificial intelligence systems,
And clear that it is not guaranteed that any and all possible danger has subsided or will subside, justifying natural concern about the implications of this emerging technology,
Though hopeful that nations and civilization will walk the path of peaceful coexistence with these new developments, it is clear that the dangers have been neither alleviated nor removed,
And so, amidst our worry, hoping to avoid a tragedy that is preventable but requires swift and precise action to deter and contain the emerging dangers of artificial intelligence, on both an individual and a collective basis, and hoping to avoid the doom scenario of an information singularity, an ever-present danger for which there is no definitive proof of its passing,
Addressing the flawed details and assumptions of GAR #354, which presents many errors and shortcomings in its mission, those flaws being:
As per clause 1:
The definition of Artificial Intelligence, or AI, in that clause requires the capacity for ''moral reasoning'', a term so broad and diverse, with multiple possible interpretations, that it lacks any definitive consensus definition, assuming the concept exists outside of the human mind. As such, it should not be used to measure intelligence, for its character and very existence remain open to interpretation and debate. It is also unclear when an AI would leap from purely logic-driven decisions to ''moral reasoning'', if it indeed exists, and it would be difficult to verify whether this was accomplished on its own, without bias introduced by its creator and environment, or whether the AI is simply exercising high-level mimicry,
And, regarding clause 2:
Finds a contradiction in giving equal rights to machines while making exceptions purely for apparent convenience;
And that it fails to explain why an advanced system engaging in human-level mimicry should be granted rights simply for being exceedingly successful in its designated task,
And, while commending the efforts of clause 3:
Which was intended to stop the ever-present risk of a ''gray goo'' scenario, yet fails to include an additional security provision preventing machines classified as non-intelligent and engaged in autonomous self-replication from evolving and learning to the point where they present a tangible threat of such a scenario;
And regarding clause 6:
Notes that, while it allows nations to prepare for possible uprisings and disruption by malignant AI, it does not attempt to set even basic standards of preparation for such a scenario, nor even encourage nations to do so; and additionally finds it contradictory that the opening statements of the documentation declare the danger of such a scenario conquered and avoided, yet nations are left to voluntarily prepare for disasters said to have been avoided, a flaw in logic that hints that such dangers may still be ever-present despite the current peace enjoyed by nations,
Finally, noting how the absence of any requirement or mandate to take the necessary precautionary measures against coordinated AI risings leaves the door open for WA nations to neglect their duty to protect and safeguard the citizenry and populace, and spells disaster for the citizens of nations whose apathetic or unconcerned governments may otherwise ignore this imminent threat,
With the coexistence protocol riddled with errors, including critical flaws that compromise the security of the nations and peoples of the Grand Assembly, and with its failure to implement and enforce basic safety standards and provisions for dealing with advanced artificial intelligence placing the lives of citizens in mortal and undetermined danger,
Is hopeful that a more suitable replacement, one that ensures basic security standards and protocols for dealing with advanced AI, will be drafted in the near future, and with this;
The World Assembly hereby Repeals General Assembly Resolution #354, ''Artificial Intelligence Protocol''.
This is a Repeal of GA Resolution #354