by Great Confederacy of Commonwealth States » Fri May 14, 2021 4:28 pm
by Nanatsu no Tsuki » Fri May 14, 2021 4:33 pm
Slava Ukraini
Also: THERNSY!!
Your story isn't over;֍Help save transgender people's lives֍Help for feral cats
Cat with internet access||Supposedly heartless, & a d*ck.||Is maith an t-earra an tsíocháin.||No TGsRIP: Dyakovo & Ashmoria
by Great Confederacy of Commonwealth States » Fri May 14, 2021 4:48 pm
Nanatsu no Tsuki wrote:Could the board of investors turn the AI off at the risk of losing profit? I’ll be honest, I don’t know the answer. I’d like to think that if they could avert a disaster with a rouge AI that they would stop it but I don’t know if the aversion at losing money would be stronger. I really don’t know.
I think it would be more likely (but not 100% certain) that an outside group attempts to shut the AI down.
by The Black Forrest » Fri May 14, 2021 4:58 pm
by Le Frasco » Fri May 14, 2021 5:12 pm
But what do you think, NSG? Could this AI be turned off? Would it be by activists, the government, or the board of investors? Is capitalism capable of solving existential issues? I would love to hear your opinions.
by The Emerald Legion » Fri May 14, 2021 5:19 pm
by Nanatsu no Tsuki » Fri May 14, 2021 5:31 pm
Great Confederacy of Commonwealth States wrote:Nanatsu no Tsuki wrote:Could the board of investors turn the AI off at the risk of losing profit? I’ll be honest, I don’t know the answer. I’d like to think that if they could avert a disaster with a rouge AI that they would stop it but I don’t know if the aversion at losing money would be stronger. I really don’t know.
I think it would be more likely (but not 100% certain) that an outside group attempts to shut the AI down.
An interesting additional question would then be... Would the majority of people support shutting down the AI? Even if it was destructive? The AI has access to the PR division of the company too, after all, so it would probably market itself quite positively.
Slava Ukraini
Also: THERNSY!!
Your story isn't over;֍Help save transgender people's lives֍Help for feral cats
Cat with internet access||Supposedly heartless, & a d*ck.||Is maith an t-earra an tsíocháin.||No TGsRIP: Dyakovo & Ashmoria
by Page » Sat May 15, 2021 12:31 am
by Greater Cosmicium » Sat May 15, 2021 3:05 am
Great Confederacy of Commonwealth States wrote:snip
Military Hub
Geography Hub
History Hub
Economy Hub
14/01/1072920 | Cosmi-Web News: [SCI] Consumption of artificial fish results in massive gastrointestinal expulsion | Cosmician Press Agency: Planet Toys-R-Us attacked by styrofoam bullet, planet shattered
by Greater Cosmicium » Sat May 15, 2021 3:10 am
Page wrote:If humanity invents an AI, and rather than using its unfathomable intelligence to transition to a post-scarcity resource-based economy, optimize our standard of living, and develop cures for disease, it's being used to maximize a corporation's profits, then we'll have to unplug that internet but only because we're going to need all those cords for nooses.
Military Hub
Geography Hub
History Hub
Economy Hub
14/01/1072920 | Cosmi-Web News: [SCI] Consumption of artificial fish results in massive gastrointestinal expulsion | Cosmician Press Agency: Planet Toys-R-Us attacked by styrofoam bullet, planet shattered
by An Alan Smithee Nation » Sat May 15, 2021 3:14 am
by Great Confederacy of Commonwealth States » Sat May 15, 2021 3:45 am
An Alan Smithee Nation wrote:It's impossible to know how an AI would interpret an instruction like "maximise shareholder profits". It could for example decide to kill all the key people at rival companies. It could fuck with the entire world economy to inflate profits. It could do a Purdue Pharmaceutical and make the company the purveyor of the most addictive drug possible.
Greater Cosmicium wrote:Great Confederacy of Commonwealth States wrote:snip
It couldn't be turned off no matter the stage of society, economic system, or intent, at that point it would have control of basically anything that's connected by electrons, from industry, energy to computers and financial systems. If anything, it would do everything to turn humans off (read: kill them or process them for more energy, depending on determined viability and potential benefit).
Nanatsu no Tsuki wrote:Great Confederacy of Commonwealth States wrote:An interesting additional question would then be... Would the majority of people support shutting down the AI? Even if it was destructive? The AI has access to the PR division of the company too, after all, so it would probably market itself quite positively.
No doubt there would be a percentage of people who’d oppose shutting the AI down out of sympathy. Particularly if the talk about “AI personhood” were a hotly debated topic.
by Bombadil » Sat May 15, 2021 3:47 am
An Alan Smithee Nation wrote:It's impossible to know how an AI would interpret an instruction like "maximise shareholder profits". It could for example decide to kill all the key people at rival companies. It could fuck with the entire world economy to inflate profits. It could do a Purdue Pharmaceutical and make the company the purveyor of the most addictive drug possible.
I doubt that, in this hypothetical, the board of shareholders would choose to turn off the AI. Since its goal is to maximize profits, it will do so in a way that will not hurt the PR of the company too much.
by Nevertopia » Sat May 15, 2021 4:12 am
So the CCP won't let me be or let me be me so let me see, they tried to shut me down on CBC but it feels so empty without me.
| Civilization Index: Class 9.28 Tier 7: Stellar Settler | Level 7: Wonderful Wizard | Type 7: Astro Ambassador This nation's overview is the primary canon. For more information use NS stats. |
|
by Labbos » Sat May 15, 2021 1:12 pm
Great Confederacy of Commonwealth States wrote:In the end, while theoretically possible, I believe that the economics involved in the individual actions of those involved would mean that a profit-maximising AI, even if its activation had horrific consequences, would never be turned off. I think this speaks to a broader problem with capitalism, where short-term profits are velued over the long-term viability of any system, or even the mid-term collateral damage. Problems like climate change and world hunger cannot be solved by the market, and in fact, the market prevents viable options from being pursued.
by Major-Tom » Sat May 15, 2021 1:57 pm
by The New California Republic » Sat May 15, 2021 2:49 pm
Great Confederacy of Commonwealth States wrote:[...] For convenience of the hypothetical, let's imagine this AI does have an off-switch, and it is located at the company headquarters. If this switch is switched, the AI will be deleted entirely. In reality, this would not be so easy, but we are simplifying the problem to its essence. The AI has knowledge of this off-button, however, and it will do whatever it can to stop its own deactivation, unless the board of investors votes to turn it off, at which point it will allow someone to switch it off. [...]
by Kubra » Sat May 15, 2021 2:57 pm
by Great Confederacy of Commonwealth States » Sat May 15, 2021 4:14 pm
Kubra wrote:The problem is solved by hardcoding the fundamental rights of man into the AI, then it may be connected to the internet without fear.
The New California Republic wrote:Great Confederacy of Commonwealth States wrote:[...] For convenience of the hypothetical, let's imagine this AI does have an off-switch, and it is located at the company headquarters. If this switch is switched, the AI will be deleted entirely. In reality, this would not be so easy, but we are simplifying the problem to its essence. The AI has knowledge of this off-button, however, and it will do whatever it can to stop its own deactivation, unless the board of investors votes to turn it off, at which point it will allow someone to switch it off. [...]
See, the problem here is that the AI would likely try to dig up some dirt with which to gain leverage on the board members. It'd operate on the principle that if it is deleted, then the compromising information will be automatically released, which the AI could do by hosting the files on a remote server, and unless the AI provides the server with the right code every 24 hours then the files get sent to various media organisations. In that manner it could tie the hands of the board members indefinitely, preventing any vote from ever taking place and thus still sticking to the rules in a roundabout way, since it only allows its own deletion in the event of a successful vote, but nothing prevents the AI from attempting to stop its own deactivation by preventing a vote in the first place.
Major-Tom wrote:AI terrifies me in part because I'm nowhere near smart enough to fully understand the ramifications of a highly advanced AI. Obviously, and this goes without saying, we have a plethora of existing AI forms that would have been unfathomable 20, 30, years ago. To the OP's point, once automating things away via increasingly advanced AI becomes even more profitable, I think the ramifications could be huge. Not just for workers, consumers, and our culture, but on the existential level mentioned by the OP (and by all the well-meaning doomsayers on this topic).
Beyond anything, AI follows a similar narrative to the rest of our highly technology-dependent world. It can be a force for good, but the potential social costs and threats are something we ought to always be looking out for.
Labbos wrote:Great Confederacy of Commonwealth States wrote:In the end, while theoretically possible, I believe that the economics involved in the individual actions of those involved would mean that a profit-maximising AI, even if its activation had horrific consequences, would never be turned off. I think this speaks to a broader problem with capitalism, where short-term profits are velued over the long-term viability of any system, or even the mid-term collateral damage. Problems like climate change and world hunger cannot be solved by the market, and in fact, the market prevents viable options from being pursued.
Capitalism provides solutions that help with climate change, such as solar panels and wind turbines.
But your hypothetical AI hasn't been given a good goal. Maximise profits? Easy, just wipe out humanity, then devalue the world's currencies so that profit is vast. This is like the paperclip maximiser AI which would attempt to turn as much of the planet into paperclips as it could.
The problem is that these super-AIs need to not be given simple tasks such as create as many paperclips as possible, or maximise profit. A human can be given those tasks, but they already know to not take things too far. They understand that big profits are good, or that lots of cheap paperclips are good, but that you can take things too far. The AIs instead need to understand the bigger picture of what is desired, and from that they can conclude that a company should make a higher profit, for example by inventing better products, or that it would help humanity to make a new factory or two to create paperclips.
Nevertopia wrote:The AI will be turned off when it becomes unsustainable to the point that it either destroys humanity or we destroy it first. Probably the former but thats the off-switch.
by Kubra » Sat May 15, 2021 4:42 pm
wonderful, yes, code all of that in, the more the better.Great Confederacy of Commonwealth States wrote:Kubra wrote:The problem is solved by hardcoding the fundamental rights of man into the AI, then it may be connected to the internet without fear.
Which fundamental rights would you encode? The basics I think we can agree on: freedom of speech, freedom of the press, freedom from torture... But then? Right to healthcare and housing, for example? Would you include those? Right to water and food?
Would you encode only negative rights or positive rights as well?The New California Republic wrote:See, the problem here is that the AI would likely try to dig up some dirt with which to gain leverage on the board members. It'd operate on the principle that if it is deleted, then the compromising information will be automatically released, which the AI could do by hosting the files on a remote server, and unless the AI provides the server with the right code every 24 hours then the files get sent to various media organisations. In that manner it could tie the hands of the board members indefinitely, preventing any vote from ever taking place and thus still sticking to the rules in a roundabout way, since it only allows its own deletion in the event of a successful vote, but nothing prevents the AI from attempting to stop its own deactivation by preventing a vote in the first place.
Like real-life corruption and blackmail, then...Major-Tom wrote:AI terrifies me in part because I'm nowhere near smart enough to fully understand the ramifications of a highly advanced AI. Obviously, and this goes without saying, we have a plethora of existing AI forms that would have been unfathomable 20, 30, years ago. To the OP's point, once automating things away via increasingly advanced AI becomes even more profitable, I think the ramifications could be huge. Not just for workers, consumers, and our culture, but on the existential level mentioned by the OP (and by all the well-meaning doomsayers on this topic).
Beyond anything, AI follows a similar narrative to the rest of our highly technology-dependent world. It can be a force for good, but the potential social costs and threats are something we ought to always be looking out for.
If an AI were used for profit maximisation, would that be a force for good or a threat?Labbos wrote:
Capitalism provides solutions that help with climate change, such as solar panels and wind turbines.
But your hypothetical AI hasn't been given a good goal. Maximise profits? Easy, just wipe out humanity, then devalue the world's currencies so that profit is vast. This is like the paperclip maximiser AI which would attempt to turn as much of the planet into paperclips as it could.
The problem is that these super-AIs need to not be given simple tasks such as create as many paperclips as possible, or maximise profit. A human can be given those tasks, but they already know to not take things too far. They understand that big profits are good, or that lots of cheap paperclips are good, but that you can take things too far. The AIs instead need to understand the bigger picture of what is desired, and from that they can conclude that a company should make a higher profit, for example by inventing better products, or that it would help humanity to make a new factory or two to create paperclips.
The fact that those solar panels and wind turbines happen to be constructed under capitalism does not mean they are capitalistic inventions. On the contrary, they were made with huge government investment. Solar panels were built by NASA, for instance. In fact, under capitalism, we are not producing enough wind turbines and solar panels by far to really make an impact on carbon emissions.
What is the fundamental difference between an AI and a board of investors? While there are certainly humans on a board of investors, the decision that board makes only take into account profit. Profit, and PR, because PR is important for profit. A board of investors does not have humanitarian motives; what would be the difference between them and an AI? Other than that an AI can make a thousand decisions in the time it takes the board to say 'lunch?'Nevertopia wrote:The AI will be turned off when it becomes unsustainable to the point that it either destroys humanity or we destroy it first. Probably the former but thats the off-switch.
Would it, though? At what point would the board of investors say 'this is enough profit, we need to stop it', even if a large number of people are threatened? It hasn't stopped them thus far.
by Great Confederacy of Commonwealth States » Sat May 15, 2021 4:50 pm
Kubra wrote:wonderful, yes, code all of that in, the more the better.Great Confederacy of Commonwealth States wrote:Which fundamental rights would you encode? The basics I think we can agree on: freedom of speech, freedom of the press, freedom from torture... But then? Right to healthcare and housing, for example? Would you include those? Right to water and food?
Would you encode only negative rights or positive rights as well?
Like real-life corruption and blackmail, then...
If an AI were used for profit maximisation, would that be a force for good or a threat?
The fact that those solar panels and wind turbines happen to be constructed under capitalism does not mean they are capitalistic inventions. On the contrary, they were made with huge government investment. Solar panels were built by NASA, for instance. In fact, under capitalism, we are not producing enough wind turbines and solar panels by far to really make an impact on carbon emissions.
What is the fundamental difference between an AI and a board of investors? While there are certainly humans on a board of investors, the decision that board makes only take into account profit. Profit, and PR, because PR is important for profit. A board of investors does not have humanitarian motives; what would be the difference between them and an AI? Other than that an AI can make a thousand decisions in the time it takes the board to say 'lunch?'
Would it, though? At what point would the board of investors say 'this is enough profit, we need to stop it', even if a large number of people are threatened? It hasn't stopped them thus far.
by An Alan Smithee Nation » Sat May 15, 2021 6:53 pm
Kubra wrote:wonderful, yes, code all of that in, the more the better.Great Confederacy of Commonwealth States wrote:Which fundamental rights would you encode? The basics I think we can agree on: freedom of speech, freedom of the press, freedom from torture... But then? Right to healthcare and housing, for example? Would you include those? Right to water and food?
Would you encode only negative rights or positive rights as well?
Like real-life corruption and blackmail, then...
If an AI were used for profit maximisation, would that be a force for good or a threat?
The fact that those solar panels and wind turbines happen to be constructed under capitalism does not mean they are capitalistic inventions. On the contrary, they were made with huge government investment. Solar panels were built by NASA, for instance. In fact, under capitalism, we are not producing enough wind turbines and solar panels by far to really make an impact on carbon emissions.
What is the fundamental difference between an AI and a board of investors? While there are certainly humans on a board of investors, the decision that board makes only take into account profit. Profit, and PR, because PR is important for profit. A board of investors does not have humanitarian motives; what would be the difference between them and an AI? Other than that an AI can make a thousand decisions in the time it takes the board to say 'lunch?'
Would it, though? At what point would the board of investors say 'this is enough profit, we need to stop it', even if a large number of people are threatened? It hasn't stopped them thus far.
by Kubra » Sun May 16, 2021 1:42 am
oh, make it mandatory, that way no competitor can have an edge by not having this AI.Great Confederacy of Commonwealth States wrote:Kubra wrote: wonderful, yes, code all of that in, the more the better.
At some point, though, it will become impossible for the AI to actually make a profit, if its role becomes more of a protector of human rights. But this is more of a question of the incompatibility of profit over the wellbeing if employees, for example.
And the question is; will its corporate designers imbue it with all those positive rights? And if they don’t, will governments interfere?
by Ifreann » Sun May 16, 2021 6:41 am
Advertisement
Users browsing this forum: Jetan, Singaporen Empire
Advertisement