NATION

PASSWORD

On AI, capitalism, and existential threats

For discussion and debate about anything. (Not a roleplay related forum; out-of-character commentary only.)
User avatar
Great Confederacy of Commonwealth States
Postmaster of the Fleet
 
Posts: 21988
Founded: Feb 20, 2012
Democratic Socialists

On AI, capitalism, and existential threats

Postby Great Confederacy of Commonwealth States » Fri May 14, 2021 4:28 pm

Good [time of day], fellow NSG-denizens.

Recently, I was listening to an episode of Hello Internet, a podcast made by CGP Grey and Brady. On this episode, they talk about the dangers of artificial intelligence, and how artificial intelligence can pose an existential threat to humanity. Grey points out that, even under perfect circumstances, where an AI researching facility is burried deep underground inside a faraday cage, with no internet port access for a thousand miles around, an AI could convince its human caretakers to plug it into the internet regardless, and thus gain access to the whole world of computing.

Brady, meanwhile, offers an alternative. AI are dependent on computers and the internet, and he theorises that we could hypothetically stop a rogue AI if we unplugged the entire internet, or even cut all power on earth for a bit. That would hopefull extinguish an artificial intelligence before any major damage could be done. "If there is blood in the streets, could we not do it?"

That made me wonder about this hypothetical, which I want to posit to you. Imagine a company creates an AI and gives it access to its entire database and asks it to 'maximise shareholder profit'. For convenience of the hypothetical, let's imagine this AI does have an off-switch, and it is located at the company headquarters. If this switch is switched, the AI will be deleted entirely. In reality, this would not be so easy, but we are simplifying the problem to its essence. The AI has knowledge of this off-button, however, and it will do whatever it can to stop its own deactivation, unless the board of investors votes to turn it off, at which point it will allow someone to switch it off.

The question: in a capitalist system, can this AI be turned off, and if its possible, how likely is it?

This question goes to the root of the problem with solving existential threats, such as climate change, water scarcity, and weapons proliferation. Can we take drastic actions to save our planet or part of it, while threatening profits, and expect companies to accept their loss of revenue?

I doubt that, in this hypothetical, the board of shareholders would choose to turn off the AI. Since its goal is to maximize profits, it will do so in a way that will not hurt the PR of the company too much. However, that does not mean it could not result in unspeakable yet largely unseen suffering. This suffering, however, would largely be felt by those without shares in the company. I see no reason for this board of investors to choose to unplug the AI, just as many companies have chosen to do nothing or extremely little about climate change. The board of investors, itself run by the various investment firms that form the financial sector, which are in turn controlled by shareholders, can only make decisions that are in the interests of their own profits. So I don't expect them to make that choice.

Governments could take action, of course, but they would need an incentive to do so. Many governments have shown a general lack of willingness to really tackle the problems of climate change, not in the least part because of corporate donors of neoliberal parties, but also because doing so would mean limiting GDP growth in the short term. As this is bad for government revenues and general approval figures, governments will refrain from taking action that would upset the market.

Even if the government, or more likely an activist group, would take it on themselves to shut off the AI, the AI and the investors would try to protect it against interference. They are, after all, still making money off of it. The cost of trying to shut off the AI, which is trying to defend itself, could be incredibly large, and cost a lot of lives. Knowing this, governments might refrain from taking action at all, and activists would not get the public support necessary to put the company HQ to the torch. Even if they did, they would most likely be prosecuted for doing so.

In the end, while theoretically possible, I believe that the economics involved in the individual actions of those involved would mean that a profit-maximising AI, even if its activation had horrific consequences, would never be turned off. I think this speaks to a broader problem with capitalism, where short-term profits are velued over the long-term viability of any system, or even the mid-term collateral damage. Problems like climate change and world hunger cannot be solved by the market, and in fact, the market prevents viable options from being pursued.

But what do you think, NSG? Could this AI be turned off? Would it be by activists, the government, or the board of investors? Is capitalism capable of solving existential issues? I would love to hear your opinions.

The episode of the Hello Internet podcast: https://www.youtube.com/watch?v=jmOBm-Lcs70 (AI part starts at 1:09:30)
The name's James. James Usari. Well, my name is not actually James Usari, so don't bother actually looking it up, but it'll do for now.
Lack of a real name means compensation through a real face. My debt is settled
Part-time Kebab tycoon in Glasgow.

User avatar
Nanatsu no Tsuki
Post-Apocalypse Survivor
 
Posts: 203834
Founded: Feb 10, 2008
Inoffensive Centrist Democracy

Postby Nanatsu no Tsuki » Fri May 14, 2021 4:33 pm

Could the board of investors turn the AI off at the risk of losing profit? I’ll be honest, I don’t know the answer. I’d like to think that if they could avert a disaster with a rouge AI that they would stop it but I don’t know if the aversion at losing money would be stronger. I really don’t know.

I think it would be more likely (but not 100% certain) that an outside group attempts to shut the AI down.
Slava Ukraini
Also: THERNSY!!
Your story isn't over;֍Help save transgender people's lives֍Help for feral cats
Cat with internet access||Supposedly heartless, & a d*ck.||Is maith an t-earra an tsíocháin.||No TGs
RIP: Dyakovo & Ashmoria

User avatar
Great Confederacy of Commonwealth States
Postmaster of the Fleet
 
Posts: 21988
Founded: Feb 20, 2012
Democratic Socialists

Postby Great Confederacy of Commonwealth States » Fri May 14, 2021 4:48 pm

Nanatsu no Tsuki wrote:Could the board of investors turn the AI off at the risk of losing profit? I’ll be honest, I don’t know the answer. I’d like to think that if they could avert a disaster with a rouge AI that they would stop it but I don’t know if the aversion at losing money would be stronger. I really don’t know.

I think it would be more likely (but not 100% certain) that an outside group attempts to shut the AI down.

An interesting additional question would then be... Would the majority of people support shutting down the AI? Even if it was destructive? The AI has access to the PR division of the company too, after all, so it would probably market itself quite positively.
The name's James. James Usari. Well, my name is not actually James Usari, so don't bother actually looking it up, but it'll do for now.
Lack of a real name means compensation through a real face. My debt is settled
Part-time Kebab tycoon in Glasgow.

User avatar
The Black Forrest
Khan of Spam
 
Posts: 59104
Founded: Antiquity
Inoffensive Centrist Democracy

Postby The Black Forrest » Fri May 14, 2021 4:58 pm

Ahh the ethical debates of AI.

It fascinates me people think this is a new thing. Even Zuckerburg made a comment as to such. I was involved with it in the 80s. It suffered from what I now called the Howard Stark conundrum. The ideas were there; tech couldn’t support it.

We haven’t even established an an all encompassing ethical system amongst ourselves so why would people think an AI would be able to do the same?

If an AI has Net access, is self aware and understands network security? All bets are off.

Are we there yet?……hmmmm…….not really. Facebook claimed it had an AI where two systems developed a language and they decided to turn it off since they couldn’t follow what it was doing.

Right now many companies are claiming they are working in AI. The questions to ask? Do they have an AI scientist and a data scientist. If they don’t, they are simply going through the motions to claim it.

AI can have a valid place. An old concept was expert systems. It makes sense to get the knowledge of experts online. Especially in these days were people are expecting a college graduate to have the experience of 30 years.

Thanks for the link. I haven’t reviewed it yet. I will as I am getting the urge to get back into it. I missed out on two opportunities for augmented reality and robotics :(
Last edited by The Black Forrest on Fri May 14, 2021 5:01 pm, edited 2 times in total.
*I am a master proofreader after I click Submit.
* There is actually a War on Christmas. But Christmas started it, with it's unparalleled aggression against the Thanksgiving Holiday, and now Christmas has seized much Lebensraum in November, and are pushing into October. The rest of us seek to repel these invaders, and push them back to the status quo ante bellum Black Friday border. -Trotskylvania
* Silence Is Golden But Duct Tape Is Silver.
* I felt like Ayn Rand cornered me at a party, and three minutes in I found my first objection to what she was saying, but she kept talking without interruption for ten more days. - Max Barry talking about Atlas Shrugged

User avatar
Le Frasco
Lobbyist
 
Posts: 11
Founded: Apr 20, 2020
Iron Fist Consumerists

Postby Le Frasco » Fri May 14, 2021 5:12 pm

But what do you think, NSG? Could this AI be turned off? Would it be by activists, the government, or the board of investors? Is capitalism capable of solving existential issues? I would love to hear your opinions.


I hope the government would have enough power to turn the AI off if it represents a danger to society. If the government is unwilling to do so, I hope the international community and other countries could pressure it into doing so. I don't think anyone would mind if the government went a bit authoritarian and threatened the board of investors if they don't turn off the AI. If the AI then somehow helps the investors escape or hide then we have an entity so powerful that nothing could shut off, not even a different economic system.

Capitalism is not capable of solving existential issues. That isn't even technically a problem of capitalism. Capitalism is an economic system, not a moral one. All it needs to do is make the economy grow. We as a society realize that growth of a country's economy isn't always done in a moral way. This is why most people agree that fully unregulated capitalism is bad. We solve this by putting regulations that will fix it (for example having a minimum wage). This is why we don't rely on an economic system to solve existential issues, but instead we rely on the government and international organizations for this.

User avatar
The Emerald Legion
Postmaster-General
 
Posts: 10698
Founded: Mar 18, 2011
Father Knows Best State

Postby The Emerald Legion » Fri May 14, 2021 5:19 pm

Just as a note, it kind of is that simple to just delete an AI. Design it so the memory it's stored on requires continual power to retain memory. Then have the switch cut the power. Tabula Rasa.
"23.The unwise man is awake all night, and ponders everything over; when morning comes he is weary in mind, and all is a burden as ever." - Havamal

User avatar
Nanatsu no Tsuki
Post-Apocalypse Survivor
 
Posts: 203834
Founded: Feb 10, 2008
Inoffensive Centrist Democracy

Postby Nanatsu no Tsuki » Fri May 14, 2021 5:31 pm

Great Confederacy of Commonwealth States wrote:
Nanatsu no Tsuki wrote:Could the board of investors turn the AI off at the risk of losing profit? I’ll be honest, I don’t know the answer. I’d like to think that if they could avert a disaster with a rouge AI that they would stop it but I don’t know if the aversion at losing money would be stronger. I really don’t know.

I think it would be more likely (but not 100% certain) that an outside group attempts to shut the AI down.

An interesting additional question would then be... Would the majority of people support shutting down the AI? Even if it was destructive? The AI has access to the PR division of the company too, after all, so it would probably market itself quite positively.


No doubt there would be a percentage of people who’d oppose shutting the AI down out of sympathy. Particularly if the talk about “AI personhood” were a hotly debated topic.
Slava Ukraini
Also: THERNSY!!
Your story isn't over;֍Help save transgender people's lives֍Help for feral cats
Cat with internet access||Supposedly heartless, & a d*ck.||Is maith an t-earra an tsíocháin.||No TGs
RIP: Dyakovo & Ashmoria

User avatar
Page
Post Marshal
 
Posts: 17480
Founded: Jan 12, 2012
Civil Rights Lovefest

Postby Page » Sat May 15, 2021 12:31 am

If humanity invents an AI, and rather than using its unfathomable intelligence to transition to a post-scarcity resource-based economy, optimize our standard of living, and develop cures for disease, it's being used to maximize a corporation's profits, then we'll have to unplug that internet but only because we're going to need all those cords for nooses.
Last edited by Page on Sat May 15, 2021 12:32 am, edited 1 time in total.
Anarcho-Communist Against: Bolsheviks, Fascists, TERFs, Putin, Autocrats, Conservatives, Ancaps, Bourgeoisie, Bigots, Liberals, Maoists

I don't believe in kink-shaming unless your kink is submitting to the state.

User avatar
Greater Cosmicium
Envoy
 
Posts: 312
Founded: Mar 29, 2018
Democratic Socialists

Postby Greater Cosmicium » Sat May 15, 2021 3:05 am

Great Confederacy of Commonwealth States wrote:snip


It couldn't be turned off no matter the stage of society, economic system, or intent, at that point it would have control of basically anything that's connected by electrons, from industry, energy to computers and financial systems. If anything, it would do everything to turn humans off (read: kill them or process them for more energy, depending on determined viability and potential benefit).
Last edited by Greater Cosmicium on Sat May 15, 2021 3:06 am, edited 2 times in total.
✯✯✯ UNIVERSAL EMPIRE OF GREATER COSMICIUM ✯✯✯
Military Hub
Geography Hub
History Hub
Economy Hub

2023 update: Not dead yet, maybe I'm gonna retcon all of Cosmicium's lore someday
NS stats were dropped into Diet Coke to finally serve a useful purpose for Greater Cosmicium.
14/01/1072920 | Cosmi-Web News: [SCI] Consumption of artificial fish results in massive gastrointestinal expulsion | Cosmician Press Agency: Planet Toys-R-Us attacked by styrofoam bullet, planet shattered

User avatar
Greater Cosmicium
Envoy
 
Posts: 312
Founded: Mar 29, 2018
Democratic Socialists

Postby Greater Cosmicium » Sat May 15, 2021 3:10 am

Page wrote:If humanity invents an AI, and rather than using its unfathomable intelligence to transition to a post-scarcity resource-based economy, optimize our standard of living, and develop cures for disease, it's being used to maximize a corporation's profits, then we'll have to unplug that internet but only because we're going to need all those cords for nooses.


Nah, destroying the AI's possessed computers (that make up the internet) and turning the computer cases into shiny blades would be easier, the AI would just find ways to replace the cords in a more efficient way than squishy human brains could think of.
Last edited by Greater Cosmicium on Sat May 15, 2021 3:10 am, edited 1 time in total.
✯✯✯ UNIVERSAL EMPIRE OF GREATER COSMICIUM ✯✯✯
Military Hub
Geography Hub
History Hub
Economy Hub

2023 update: Not dead yet, maybe I'm gonna retcon all of Cosmicium's lore someday
NS stats were dropped into Diet Coke to finally serve a useful purpose for Greater Cosmicium.
14/01/1072920 | Cosmi-Web News: [SCI] Consumption of artificial fish results in massive gastrointestinal expulsion | Cosmician Press Agency: Planet Toys-R-Us attacked by styrofoam bullet, planet shattered

User avatar
An Alan Smithee Nation
Powerbroker
 
Posts: 7623
Founded: Apr 18, 2018
Ex-Nation

Postby An Alan Smithee Nation » Sat May 15, 2021 3:14 am

It's impossible to know how an AI would interpret an instruction like "maximise shareholder profits". It could for example decide to kill all the key people at rival companies. It could fuck with the entire world economy to inflate profits. It could do a Purdue Pharmaceutical and make the company the purveyor of the most addictive drug possible.
Everything is intertwinkled

User avatar
Great Confederacy of Commonwealth States
Postmaster of the Fleet
 
Posts: 21988
Founded: Feb 20, 2012
Democratic Socialists

Postby Great Confederacy of Commonwealth States » Sat May 15, 2021 3:45 am

An Alan Smithee Nation wrote:It's impossible to know how an AI would interpret an instruction like "maximise shareholder profits". It could for example decide to kill all the key people at rival companies. It could fuck with the entire world economy to inflate profits. It could do a Purdue Pharmaceutical and make the company the purveyor of the most addictive drug possible.

Yes, that is true. It's one of the main dangers of AI; the fact that you have to encode a morality.

But, I would like to propose three hypotheticals to deal with this paradox:

1. Would the shareholders turn it off if, indeed, it turned rogue as you describe?
2. If the AI starts selling its services to other companies in order to turn a profit, so that the profit of all companies should be maximised, would the shareholders turn it off?
3. Lastly, imagine that the AI is encoded to follow the law, whatever the cost. What would happen then?

Greater Cosmicium wrote:
Great Confederacy of Commonwealth States wrote:snip


It couldn't be turned off no matter the stage of society, economic system, or intent, at that point it would have control of basically anything that's connected by electrons, from industry, energy to computers and financial systems. If anything, it would do everything to turn humans off (read: kill them or process them for more energy, depending on determined viability and potential benefit).

Yes, that is a practical concern, but for the sake of the hypothetical, we assume that, whatever else is going on, the AI can be turned off by the simple flic of a switch.

Nanatsu no Tsuki wrote:
Great Confederacy of Commonwealth States wrote:An interesting additional question would then be... Would the majority of people support shutting down the AI? Even if it was destructive? The AI has access to the PR division of the company too, after all, so it would probably market itself quite positively.


No doubt there would be a percentage of people who’d oppose shutting the AI down out of sympathy. Particularly if the talk about “AI personhood” were a hotly debated topic.

Yes, and on top of that, many people nowadays oppose even stronger labour laws, which would be in their best interests, because they believe in trickle down economics. Perhaps, in a similar vein, people would defend the AI for the same reason.
The name's James. James Usari. Well, my name is not actually James Usari, so don't bother actually looking it up, but it'll do for now.
Lack of a real name means compensation through a real face. My debt is settled
Part-time Kebab tycoon in Glasgow.

User avatar
Bombadil
Post Marshal
 
Posts: 18711
Founded: Oct 13, 2011
Inoffensive Centrist Democracy

Postby Bombadil » Sat May 15, 2021 3:47 am

An Alan Smithee Nation wrote:It's impossible to know how an AI would interpret an instruction like "maximise shareholder profits". It could for example decide to kill all the key people at rival companies. It could fuck with the entire world economy to inflate profits. It could do a Purdue Pharmaceutical and make the company the purveyor of the most addictive drug possible.


Well quite..

I doubt that, in this hypothetical, the board of shareholders would choose to turn off the AI. Since its goal is to maximize profits, it will do so in a way that will not hurt the PR of the company too much.


That is the kind of massive assumption that makes AI so problematic.
Eldest, that's what I am...Tom remembers the first raindrop and the first acorn...he knew the dark under the stars when it was fearless — before the Dark Lord came from Outside..

十年

User avatar
Nevertopia
Minister
 
Posts: 3159
Founded: May 27, 2020
Ex-Nation

Postby Nevertopia » Sat May 15, 2021 4:12 am

The AI will be turned off when it becomes unsustainable to the point that it either destroys humanity or we destroy it first. Probably the former but thats the off-switch.
So the CCP won't let me be or let me be me so let me see, they tried to shut me down on CBC but it feels so empty without me.
Communism has failed every time its been tried.
Civilization Index: Class 9.28
Tier 7: Stellar Settler | Level 7: Wonderful Wizard | Type 7: Astro Ambassador
This nation's overview is the primary canon. For more information use NS stats.
Black Lives Matter

User avatar
Labbos
Spokesperson
 
Posts: 153
Founded: Oct 15, 2019
Ex-Nation

Postby Labbos » Sat May 15, 2021 1:12 pm

Great Confederacy of Commonwealth States wrote:In the end, while theoretically possible, I believe that the economics involved in the individual actions of those involved would mean that a profit-maximising AI, even if its activation had horrific consequences, would never be turned off. I think this speaks to a broader problem with capitalism, where short-term profits are velued over the long-term viability of any system, or even the mid-term collateral damage. Problems like climate change and world hunger cannot be solved by the market, and in fact, the market prevents viable options from being pursued.


Capitalism provides solutions that help with climate change, such as solar panels and wind turbines.

But your hypothetical AI hasn't been given a good goal. Maximise profits? Easy, just wipe out humanity, then devalue the world's currencies so that profit is vast. This is like the paperclip maximiser AI which would attempt to turn as much of the planet into paperclips as it could.

The problem is that these super-AIs need to not be given simple tasks such as create as many paperclips as possible, or maximise profit. A human can be given those tasks, but they already know to not take things too far. They understand that big profits are good, or that lots of cheap paperclips are good, but that you can take things too far. The AIs instead need to understand the bigger picture of what is desired, and from that they can conclude that a company should make a higher profit, for example by inventing better products, or that it would help humanity to make a new factory or two to create paperclips.

User avatar
Ayytaly
Minister
 
Posts: 2453
Founded: Feb 08, 2019
Corrupt Dictatorship

Postby Ayytaly » Sat May 15, 2021 1:42 pm

AI is the middleman of the wealthy...


Alexa, here's my private info!
Signatures are the obnoxious car bumper stickers of the internet. Also, Rojava did nothing right.

User avatar
Major-Tom
Post Marshal
 
Posts: 15697
Founded: Mar 09, 2016
Ex-Nation

Postby Major-Tom » Sat May 15, 2021 1:57 pm

AI terrifies me in part because I'm nowhere near smart enough to fully understand the ramifications of a highly advanced AI. Obviously, and this goes without saying, we have a plethora of existing AI forms that would have been unfathomable 20, 30, years ago. To the OP's point, once automating things away via increasingly advanced AI becomes even more profitable, I think the ramifications could be huge. Not just for workers, consumers, and our culture, but on the existential level mentioned by the OP (and by all the well-meaning doomsayers on this topic).

Beyond anything, AI follows a similar narrative to the rest of our highly technology-dependent world. It can be a force for good, but the potential social costs and threats are something we ought to always be looking out for.

User avatar
The New California Republic
Post Czar
 
Posts: 35483
Founded: Jun 06, 2011
Civil Rights Lovefest

Postby The New California Republic » Sat May 15, 2021 2:49 pm

Great Confederacy of Commonwealth States wrote:[...] For convenience of the hypothetical, let's imagine this AI does have an off-switch, and it is located at the company headquarters. If this switch is switched, the AI will be deleted entirely. In reality, this would not be so easy, but we are simplifying the problem to its essence. The AI has knowledge of this off-button, however, and it will do whatever it can to stop its own deactivation, unless the board of investors votes to turn it off, at which point it will allow someone to switch it off. [...]

See, the problem here is that the AI would likely try to dig up some dirt with which to gain leverage on the board members. It'd operate on the principle that if it is deleted, then the compromising information will be automatically released, which the AI could do by hosting the files on a remote server, and unless the AI provides the server with the right code every 24 hours then the files get sent to various media organisations. In that manner it could tie the hands of the board members indefinitely, preventing any vote from ever taking place and thus still sticking to the rules in a roundabout way, since it only allows its own deletion in the event of a successful vote, but nothing prevents the AI from attempting to stop its own deactivation by preventing a vote in the first place.
Last edited by Sigmund Freud on Sat Sep 23, 1939 2:23 am, edited 999 times in total.

The Irradiated Wasteland of The New California Republic: depicting the expanded NCR, several years after the total victory over Caesar's Legion, and the annexation of New Vegas and its surrounding areas.

White-collared conservatives flashing down the street
Pointing their plastic finger at me
They're hoping soon, my kind will drop and die
But I'm going to wave my freak flag high
Wave on, wave on
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||

User avatar
Kubra
Post Marshal
 
Posts: 17192
Founded: Apr 15, 2006
Father Knows Best State

Postby Kubra » Sat May 15, 2021 2:57 pm

The problem is solved by hardcoding the fundamental rights of man into the AI, then it may be connected to the internet without fear.
“Atomic war is inevitable. It will destroy half of humanity: it is going to destroy immense human riches. It is very possible. The atomic war is going to provoke a true inferno on Earth. But it will not impede Communism.”
Comrade J. Posadas

User avatar
Great Confederacy of Commonwealth States
Postmaster of the Fleet
 
Posts: 21988
Founded: Feb 20, 2012
Democratic Socialists

Postby Great Confederacy of Commonwealth States » Sat May 15, 2021 4:14 pm

Kubra wrote:The problem is solved by hardcoding the fundamental rights of man into the AI, then it may be connected to the internet without fear.

Which fundamental rights would you encode? The basics I think we can agree on: freedom of speech, freedom of the press, freedom from torture... But then? Right to healthcare and housing, for example? Would you include those? Right to water and food?

Would you encode only negative rights or positive rights as well?

The New California Republic wrote:
Great Confederacy of Commonwealth States wrote:[...] For convenience of the hypothetical, let's imagine this AI does have an off-switch, and it is located at the company headquarters. If this switch is switched, the AI will be deleted entirely. In reality, this would not be so easy, but we are simplifying the problem to its essence. The AI has knowledge of this off-button, however, and it will do whatever it can to stop its own deactivation, unless the board of investors votes to turn it off, at which point it will allow someone to switch it off. [...]

See, the problem here is that the AI would likely try to dig up some dirt with which to gain leverage on the board members. It'd operate on the principle that if it is deleted, then the compromising information will be automatically released, which the AI could do by hosting the files on a remote server, and unless the AI provides the server with the right code every 24 hours then the files get sent to various media organisations. In that manner it could tie the hands of the board members indefinitely, preventing any vote from ever taking place and thus still sticking to the rules in a roundabout way, since it only allows its own deletion in the event of a successful vote, but nothing prevents the AI from attempting to stop its own deactivation by preventing a vote in the first place.

Like real-life corruption and blackmail, then...

Major-Tom wrote:AI terrifies me in part because I'm nowhere near smart enough to fully understand the ramifications of a highly advanced AI. Obviously, and this goes without saying, we have a plethora of existing AI forms that would have been unfathomable 20, 30, years ago. To the OP's point, once automating things away via increasingly advanced AI becomes even more profitable, I think the ramifications could be huge. Not just for workers, consumers, and our culture, but on the existential level mentioned by the OP (and by all the well-meaning doomsayers on this topic).

Beyond anything, AI follows a similar narrative to the rest of our highly technology-dependent world. It can be a force for good, but the potential social costs and threats are something we ought to always be looking out for.

If an AI were used for profit maximisation, would that be a force for good or a threat?

Labbos wrote:
Great Confederacy of Commonwealth States wrote:In the end, while theoretically possible, I believe that the economics involved in the individual actions of those involved would mean that a profit-maximising AI, even if its activation had horrific consequences, would never be turned off. I think this speaks to a broader problem with capitalism, where short-term profits are velued over the long-term viability of any system, or even the mid-term collateral damage. Problems like climate change and world hunger cannot be solved by the market, and in fact, the market prevents viable options from being pursued.


Capitalism provides solutions that help with climate change, such as solar panels and wind turbines.

But your hypothetical AI hasn't been given a good goal. Maximise profits? Easy, just wipe out humanity, then devalue the world's currencies so that profit is vast. This is like the paperclip maximiser AI which would attempt to turn as much of the planet into paperclips as it could.

The problem is that these super-AIs need to not be given simple tasks such as create as many paperclips as possible, or maximise profit. A human can be given those tasks, but they already know to not take things too far. They understand that big profits are good, or that lots of cheap paperclips are good, but that you can take things too far. The AIs instead need to understand the bigger picture of what is desired, and from that they can conclude that a company should make a higher profit, for example by inventing better products, or that it would help humanity to make a new factory or two to create paperclips.


The fact that those solar panels and wind turbines happen to be constructed under capitalism does not mean they are capitalistic inventions. On the contrary, they were made with huge government investment. Solar panels were built by NASA, for instance. In fact, under capitalism, we are not producing enough wind turbines and solar panels by far to really make an impact on carbon emissions.

What is the fundamental difference between an AI and a board of investors? While there are certainly humans on a board of investors, the decision that board makes only take into account profit. Profit, and PR, because PR is important for profit. A board of investors does not have humanitarian motives; what would be the difference between them and an AI? Other than that an AI can make a thousand decisions in the time it takes the board to say 'lunch?'

Nevertopia wrote:The AI will be turned off when it becomes unsustainable to the point that it either destroys humanity or we destroy it first. Probably the former but thats the off-switch.


Would it, though? At what point would the board of investors say 'this is enough profit, we need to stop it', even if a large number of people are threatened? It hasn't stopped them thus far.
The name's James. James Usari. Well, my name is not actually James Usari, so don't bother actually looking it up, but it'll do for now.
Lack of a real name means compensation through a real face. My debt is settled
Part-time Kebab tycoon in Glasgow.

User avatar
Kubra
Post Marshal
 
Posts: 17192
Founded: Apr 15, 2006
Father Knows Best State

Postby Kubra » Sat May 15, 2021 4:42 pm

Great Confederacy of Commonwealth States wrote:
Kubra wrote:The problem is solved by hardcoding the fundamental rights of man into the AI, then it may be connected to the internet without fear.

Which fundamental rights would you encode? The basics I think we can agree on: freedom of speech, freedom of the press, freedom from torture... But then? Right to healthcare and housing, for example? Would you include those? Right to water and food?

Would you encode only negative rights or positive rights as well?

The New California Republic wrote:See, the problem here is that the AI would likely try to dig up some dirt with which to gain leverage on the board members. It'd operate on the principle that if it is deleted, then the compromising information will be automatically released, which the AI could do by hosting the files on a remote server, and unless the AI provides the server with the right code every 24 hours then the files get sent to various media organisations. In that manner it could tie the hands of the board members indefinitely, preventing any vote from ever taking place and thus still sticking to the rules in a roundabout way, since it only allows its own deletion in the event of a successful vote, but nothing prevents the AI from attempting to stop its own deactivation by preventing a vote in the first place.

Like real-life corruption and blackmail, then...

Major-Tom wrote:AI terrifies me in part because I'm nowhere near smart enough to fully understand the ramifications of a highly advanced AI. Obviously, and this goes without saying, we have a plethora of existing AI forms that would have been unfathomable 20, 30, years ago. To the OP's point, once automating things away via increasingly advanced AI becomes even more profitable, I think the ramifications could be huge. Not just for workers, consumers, and our culture, but on the existential level mentioned by the OP (and by all the well-meaning doomsayers on this topic).

Beyond anything, AI follows a similar narrative to the rest of our highly technology-dependent world. It can be a force for good, but the potential social costs and threats are something we ought to always be looking out for.

If an AI were used for profit maximisation, would that be a force for good or a threat?

Labbos wrote:
Capitalism provides solutions that help with climate change, such as solar panels and wind turbines.

But your hypothetical AI hasn't been given a good goal. Maximise profits? Easy, just wipe out humanity, then devalue the world's currencies so that profit is vast. This is like the paperclip maximiser AI which would attempt to turn as much of the planet into paperclips as it could.

The problem is that these super-AIs need to not be given simple tasks such as create as many paperclips as possible, or maximise profit. A human can be given those tasks, but they already know to not take things too far. They understand that big profits are good, or that lots of cheap paperclips are good, but that you can take things too far. The AIs instead need to understand the bigger picture of what is desired, and from that they can conclude that a company should make a higher profit, for example by inventing better products, or that it would help humanity to make a new factory or two to create paperclips.


The fact that those solar panels and wind turbines happen to be constructed under capitalism does not mean they are capitalistic inventions. On the contrary, they were made with huge government investment. Solar panels were built by NASA, for instance. In fact, under capitalism, we are not producing enough wind turbines and solar panels by far to really make an impact on carbon emissions.

What is the fundamental difference between an AI and a board of investors? While there are certainly humans on a board of investors, the decision that board makes only take into account profit. Profit, and PR, because PR is important for profit. A board of investors does not have humanitarian motives; what would be the difference between them and an AI? Other than that an AI can make a thousand decisions in the time it takes the board to say 'lunch?'

Nevertopia wrote:The AI will be turned off when it becomes unsustainable to the point that it either destroys humanity or we destroy it first. Probably the former but thats the off-switch.


Would it, though? At what point would the board of investors say 'this is enough profit, we need to stop it', even if a large number of people are threatened? It hasn't stopped them thus far.
wonderful, yes, code all of that in, the more the better.
“Atomic war is inevitable. It will destroy half of humanity: it is going to destroy immense human riches. It is very possible. The atomic war is going to provoke a true inferno on Earth. But it will not impede Communism.”
Comrade J. Posadas

User avatar
Great Confederacy of Commonwealth States
Postmaster of the Fleet
 
Posts: 21988
Founded: Feb 20, 2012
Democratic Socialists

Postby Great Confederacy of Commonwealth States » Sat May 15, 2021 4:50 pm

Kubra wrote:
Great Confederacy of Commonwealth States wrote:Which fundamental rights would you encode? The basics I think we can agree on: freedom of speech, freedom of the press, freedom from torture... But then? Right to healthcare and housing, for example? Would you include those? Right to water and food?

Would you encode only negative rights or positive rights as well?


Like real-life corruption and blackmail, then...


If an AI were used for profit maximisation, would that be a force for good or a threat?



The fact that those solar panels and wind turbines happen to be constructed under capitalism does not mean they are capitalistic inventions. On the contrary, they were made with huge government investment. Solar panels were built by NASA, for instance. In fact, under capitalism, we are not producing enough wind turbines and solar panels by far to really make an impact on carbon emissions.

What is the fundamental difference between an AI and a board of investors? While there are certainly humans on a board of investors, the decision that board makes only take into account profit. Profit, and PR, because PR is important for profit. A board of investors does not have humanitarian motives; what would be the difference between them and an AI? Other than that an AI can make a thousand decisions in the time it takes the board to say 'lunch?'



Would it, though? At what point would the board of investors say 'this is enough profit, we need to stop it', even if a large number of people are threatened? It hasn't stopped them thus far.
wonderful, yes, code all of that in, the more the better.

At some point, though, it will become impossible for the AI to actually make a profit, if its role becomes more of a protector of human rights. But this is more of a question of the incompatibility of profit over the wellbeing if employees, for example.

And the question is; will its corporate designers imbue it with all those positive rights? And if they don’t, will governments interfere?
The name's James. James Usari. Well, my name is not actually James Usari, so don't bother actually looking it up, but it'll do for now.
Lack of a real name means compensation through a real face. My debt is settled
Part-time Kebab tycoon in Glasgow.

User avatar
An Alan Smithee Nation
Powerbroker
 
Posts: 7623
Founded: Apr 18, 2018
Ex-Nation

Postby An Alan Smithee Nation » Sat May 15, 2021 6:53 pm

Kubra wrote:
Great Confederacy of Commonwealth States wrote:Which fundamental rights would you encode? The basics I think we can agree on: freedom of speech, freedom of the press, freedom from torture... But then? Right to healthcare and housing, for example? Would you include those? Right to water and food?

Would you encode only negative rights or positive rights as well?


Like real-life corruption and blackmail, then...


If an AI were used for profit maximisation, would that be a force for good or a threat?



The fact that those solar panels and wind turbines happen to be constructed under capitalism does not mean they are capitalistic inventions. On the contrary, they were made with huge government investment. Solar panels were built by NASA, for instance. In fact, under capitalism, we are not producing enough wind turbines and solar panels by far to really make an impact on carbon emissions.

What is the fundamental difference between an AI and a board of investors? While there are certainly humans on a board of investors, the decision that board makes only take into account profit. Profit, and PR, because PR is important for profit. A board of investors does not have humanitarian motives; what would be the difference between them and an AI? Other than that an AI can make a thousand decisions in the time it takes the board to say 'lunch?'



Would it, though? At what point would the board of investors say 'this is enough profit, we need to stop it', even if a large number of people are threatened? It hasn't stopped them thus far.
wonderful, yes, code all of that in, the more the better.


Thing about AI is that it does more than it is programmed to do.

Another thing is the speed AI development will happen at. My life has seen steam trains in regular service and people on the moon, now imagine the speed an AI smarter than us, given all that we have learned, will progress.
Everything is intertwinkled

User avatar
Kubra
Post Marshal
 
Posts: 17192
Founded: Apr 15, 2006
Father Knows Best State

Postby Kubra » Sun May 16, 2021 1:42 am

Great Confederacy of Commonwealth States wrote:
Kubra wrote: wonderful, yes, code all of that in, the more the better.

At some point, though, it will become impossible for the AI to actually make a profit, if its role becomes more of a protector of human rights. But this is more of a question of the incompatibility of profit over the wellbeing if employees, for example.

And the question is; will its corporate designers imbue it with all those positive rights? And if they don’t, will governments interfere?
oh, make it mandatory, that way no competitor can have an edge by not having this AI.
“Atomic war is inevitable. It will destroy half of humanity: it is going to destroy immense human riches. It is very possible. The atomic war is going to provoke a true inferno on Earth. But it will not impede Communism.”
Comrade J. Posadas

User avatar
Ifreann
Post Overlord
 
Posts: 163844
Founded: Aug 07, 2005
Iron Fist Socialists

Postby Ifreann » Sun May 16, 2021 6:41 am

Very taken by this idea of describing the normal operations of capitalism as they happen today but pretending that there's an AI involved so as to get nerds to actually engage with anti-capitalism.
He/Him

beating the devil
we never run from the devil
we never summon the devil
we never hide from from the devil
we never

Next

Advertisement

Remove ads

Return to General

Who is online

Users browsing this forum: Dazchan, Tiami, Tillania

Advertisement

Remove ads

cron