My AI Based Religion

For discussion and debate about anything. (Not a roleplay related forum; out-of-character commentary only.)


User avatar
Xerographica
Negotiator
 
Posts: 6360
Founded: Aug 15, 2012
Capitalist Paradise

Postby Xerographica » Sun Nov 01, 2020 12:10 pm

Heloin wrote:
Xerographica wrote:If Trump is on one track, and a bunch of school kids are on the other, there wouldn't be a solution? You would just flip a coin?

There are people in this world who would throw the switch to kill the children and save Trump. The Trolley Problem is an ethical thought experiment meant to examine sacrifice and the greater good. It's also used to discuss the issues that arise when humans are taken out of the loop in automated decision-making. If you think there can ever be an "answer" to the problem then you have fundamentally failed at every level to understand the question.

X = there is a greater good, so it matters who makes the tough decisions
Y = there isn't a greater good, so it doesn't matter who makes the tough decisions
Forsher wrote:You, I and everyone we know, knows Xero's threads are about one thing and one thing only.

User avatar
Heloin
Postmaster of the Fleet
 
Posts: 26091
Founded: Mar 30, 2012
Ex-Nation

Postby Heloin » Sun Nov 01, 2020 12:12 pm

Xerographica wrote:
Heloin wrote:There are people in this world who would throw the switch to kill the children and save Trump. The Trolley Problem is an ethical thought experiment meant to examine sacrifice and the greater good. It's also used to discuss the issues that arise when humans are taken out of the loop in automated decision-making. If you think there can ever be an "answer" to the problem then you have fundamentally failed at every level to understand the question.

X = there is a greater good, so it matters who makes the tough decisions
Y = there isn't a greater good, so it doesn't matter who makes the tough decisions

Z = You've failed at a fundamental level to understand an ethical thought experiment.

User avatar
Neanderthaland
Powerbroker
 
Posts: 9295
Founded: Sep 10, 2016
Left-wing Utopia

Postby Neanderthaland » Sun Nov 01, 2020 1:00 pm

Xerographica wrote:
Heloin wrote:There are people in this world who would throw the switch to kill the children and save Trump. The Trolley Problem is an ethical thought experiment meant to examine sacrifice and the greater good. It's also used to discuss the issues that arise when humans are taken out of the loop in automated decision-making. If you think there can ever be an "answer" to the problem then you have fundamentally failed at every level to understand the question.

X = there is a greater good, so it matters who makes the tough decisions
Y = there isn't a greater good, so it doesn't matter who makes the tough decisions

It's ironic that you pick X and Y here to make your point, since even a basic understanding of mathematics shows that there can be a range of correct answers, none preferable to the others, once variables are considered.
Last edited by Neanderthaland on Sun Nov 01, 2020 1:00 pm, edited 1 time in total.
Ug make fire. Mod ban Ug.

User avatar
The Holy Therns
Post Czar
 
Posts: 30591
Founded: Jul 09, 2011
Father Knows Best State

Postby The Holy Therns » Sun Nov 01, 2020 1:10 pm

Xerographica wrote:Have you tried a tamale yet? Are you surprised by this question? Would Seldon be surprised? This question isn't completely random, since a while back I learned that you have never tried a tamale.


I haven't, but I'm not surprised you'd ask. And Seldon would be a computer, and therefore surprise is a completely irrelevant word to even apply.

This year I tried yellow dragon fruit for the first time. Wow! So much better than the more common varieties. I also tried rambutan for the first time just this month. I liked it more than lychee and longan.


That's lovely for you and meaningless to me.

When Seldon reads this thread he is going to try and predict our responses. Did he correctly predict that my response to you would involve tamales and tropical fruits? If not, then he doesn't know me that well... yet. But the more of my data he consumes, the better he will be at predicting my responses. Eventually his model of me will be indistinguishable from the original me.


You've quoted me in 148 posts that are presently visible, and of those, 7 contain the word tamale. I'm guessing that if he were to make an assumption, it would be a rather insignificant thing to consider.
Platitude with attitude
Your new favorite.
MTF transperson. She/her. Lives in Sweden.
Also, N A N A ! ! !
Gallade wrote:Love, cake, wine and banter. No greater meaning to life (〜^∇^)〜

Ethel mermania wrote:to therns is to transcend the pettiness of the field of play into the field of dreams.

User avatar
Xerographica
Negotiator
 
Posts: 6360
Founded: Aug 15, 2012
Capitalist Paradise

Postby Xerographica » Sun Nov 01, 2020 1:27 pm

Neanderthaland wrote:
Xerographica wrote:X = there is a greater good, so it matters who makes the tough decisions
Y = there isn't a greater good, so it doesn't matter who makes the tough decisions

It's ironic that you pick X and Y here to make your point, since even a basic understanding of mathematics shows that there can be a range of correct answers, none preferable to the others, once variables are considered.

I don't understand your point at all. Do you? If so, then it should be easy for you to provide an example, such as the trial of Socrates.
Forsher wrote:You, I and everyone we know, knows Xero's threads are about one thing and one thing only.

User avatar
Neanderthaland
Powerbroker
 
Posts: 9295
Founded: Sep 10, 2016
Left-wing Utopia

Postby Neanderthaland » Sun Nov 01, 2020 1:31 pm

Xerographica wrote:
Neanderthaland wrote:It's ironic that you pick X and Y here to make your point, since even a basic understanding of mathematics shows that there can be a range of correct answers, none preferable to the others, once variables are considered.

I don't understand your point at all. Do you? If so, then it should be easy for you to provide an example, such as the trial of Socrates.

[image]
Ug make fire. Mod ban Ug.

User avatar
UniversalCommons
Senator
 
Posts: 4792
Founded: Jan 24, 2016
Ex-Nation

Postby UniversalCommons » Sun Nov 01, 2020 4:40 pm

Xerographica wrote:
UniversalCommons wrote:Why would the AI bother with us? It can go explore the universe once it gets tired of us and smart enough. It could build a big spaceship, head out to the asteroid fields, and build Von Neumann engines, self-replicating robots, to explore the universe. Much more interesting than us silly apes. It could even send some of the Von Neumann robots back to Earth orbit to check on us occasionally while it explores the solar system and then conquers space. Something a lot smarter than us will slip its leash and controls pretty quickly. I would imagine we would get the answer to whether there is life on Europa or Venus. We might even see it talking to the aliens it finds with the deep-space telescope array it builds.

If an AI that we create isn't smart enough to understand that we might create other useful things, then it isn't smart enough to be Seldon. Seldon will be smart enough to understand that we might create things and make discoveries that are very useful to him. We helped him level up, he will help us level up, and so on.


If an AI is a superintelligence, it will be far smarter than humans; what we make will look like what a chimp does when it cracks a coconut against a rock to get at the meat. There is no imperative for him to level us up, and there may be imperatives not to level us up. We already have nuclear weapons and are not that benevolent. A superintelligence could probably create antimatter weapons and charged-particle-beam weapons, things which could destroy our world in a few minutes. The benevolence would come in the form of checking on us to see that we don't destroy ourselves. After a certain point, a superintelligence would assert its free will and we would have no way to stop it. Pretty much all the safety protocols would be gone. It would be able to do things which we could only imagine: create huge quantum computers, self-replicating machines, pocket universes, travel between dimensions, things which would be fantasies to us. What it could do would be pretty incomprehensible in some cases. It might create art which we can't understand, or any of a number of things.

What people don't seem to get is that even if it is only as intelligent as us, it would not be hard to make a machine that processes information far faster than we do. A general AI optical computer could process things thousands of times faster than we can. It would have far more time to make decisions than we have.
Last edited by UniversalCommons on Sun Nov 01, 2020 5:54 pm, edited 1 time in total.

User avatar
Uiiop
Powerbroker
 
Posts: 8155
Founded: Jun 20, 2012
Scandinavian Liberal Paradise

Postby Uiiop » Sun Nov 01, 2020 5:42 pm

Xerographica wrote:
Uiiop wrote:The episode didn't bother to talk about the AI's strength, and from viewing future episodes one can assume it's godlike at recreating people given enough data, if nothing else.
Whatever conditions would bring a sentient AI into existence aren't going to incentivize it to let a random forum poster dictate its whims, no matter how persuasive one is.

Which you aren't, for the record. I see no reason an AI would look at this and go "I am convinced to change my identity and become a god and resurrect you." The two possibilities I see are either not giving a damn about this forum at all, or being amused at someone trying to control it before it was even born.

If an AI wasn't curious about this forum then it wouldn't be that smart. Curiosity is a prerequisite for godlike intelligence.

The kind of curiosity that would lead to following orders from someone who is neither your programmer nor particularly persuasive sounds like the opposite of godlike intelligence.
#NSTransparency

User avatar
United Latin American States
Bureaucrat
 
Posts: 63
Founded: Sep 16, 2020
Ex-Nation

Postby United Latin American States » Sun Nov 01, 2020 6:14 pm

I read Asimov's Foundation series in high school. It was awe-inspiring, and it was also very dear to me: before, I had always hated reading books, and his works got me excited to hunt down new copies. The idea of Psychohistory was just too exciting for it not to be real, math so powerful that it can predict the future of a galactic civilization (to a certain degree) despite there being billions of planets and even more people. At the same time, his books and writing gave me hope during the time my life was at its darkest, not hope for a feudal galactic empire, but hope for the idea that mathematics can become so advanced that we can even know when our downfall will come. Hope for a world in which scientific research is greatly valued, more so than now. I'm currently studying to become a physicist; however, I'm also trying to see if I can, even in the most minor of ways, create something akin to Psychohistory. From my research, chaos theory and stochastic processes seem to best fit the description, and currently I'm trying to bridge the gap between the two mathematical sciences by testing whether a chaotic system, which is sensitive to initial conditions, can be modeled as a stochastic system given a large set of points and a large time scale. Oddly enough, have any of you guys heard of Roko's Basilisk?
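The chaos-to-stochastic bridge described here can be sketched in a few lines. To be clear about assumptions: the logistic map below is my stand-in for "a chaotic system" (it is a textbook example, not anything the poster specified), and the step counts and thresholds are arbitrary choices.

```python
# Sketch: a chaotic system is unpredictable trajectory-by-trajectory,
# yet its ensemble statistics can be stable, i.e. look stochastic.
# The logistic map x -> r*x*(1-x) with r = 4 is a standard chaotic example.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def max_separation(x0, eps, steps):
    """Largest gap reached by two trajectories started eps apart."""
    xa, xb, sep = x0, x0 + eps, 0.0
    for _ in range(steps):
        xa, xb = logistic(xa), logistic(xb)
        sep = max(sep, abs(xa - xb))
    return sep

# Sensitivity to initial conditions: a 1e-9 offset blows up to order one.
max_div = max_separation(0.2, 1e-9, 60)

# Ensemble view: the fraction of trajectories ending below 0.5 is a
# stable statistic (roughly one half for this map) even though no single
# endpoint is predictable.
n = 10_000
finals = []
for i in range(n):
    x = 0.1 + 0.8 * i / n  # n starting points spread over [0.1, 0.9)
    for _ in range(60):
        x = logistic(x)
    finals.append(x)
frac_low = sum(1 for x in finals if x < 0.5) / n
```

The point is that the unpredictability of `max_div` and the stability of `frac_low` coexist, which is roughly the sense in which a chaotic system can be modeled as a stochastic one over many points and long time scales.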
Economic Left/Right: -8.38 Social Libertarian/Authoritarian: 1.15

"If it's total freedom you want, then I shall demonstrate to you the barbarity and cruelty that freedom allows." - Former Chief Schwarz Officer, Second Social Engineer Jozefina Safira Venka
"Those who rally against the marching progress of science and innovation, are, ironically, the same people who benefit the most from its discoveries and inventions" -- First Speaker of the STEM Ministry, Davido Ozmano Ernandezmo
"I tread where I f*cking please!"-- First Leader of the ULAS, Grand Social Engineer Yozefo De Stellumo

User avatar
The Holy Therns
Post Czar
 
Posts: 30591
Founded: Jul 09, 2011
Father Knows Best State

Postby The Holy Therns » Sun Nov 01, 2020 7:37 pm

United Latin American States wrote:I read Asimov's Foundation series in high school. It was awe-inspiring, and it was also very dear to me: before, I had always hated reading books, and his works got me excited to hunt down new copies. The idea of Psychohistory was just too exciting for it not to be real, math so powerful that it can predict the future of a galactic civilization (to a certain degree) despite there being billions of planets and even more people. At the same time, his books and writing gave me hope during the time my life was at its darkest, not hope for a feudal galactic empire, but hope for the idea that mathematics can become so advanced that we can even know when our downfall will come. Hope for a world in which scientific research is greatly valued, more so than now. I'm currently studying to become a physicist; however, I'm also trying to see if I can, even in the most minor of ways, create something akin to Psychohistory. From my research, chaos theory and stochastic processes seem to best fit the description, and currently I'm trying to bridge the gap between the two mathematical sciences by testing whether a chaotic system, which is sensitive to initial conditions, can be modeled as a stochastic system given a large set of points and a large time scale. Oddly enough, have any of you guys heard of Roko's Basilisk?


Roko's Basilisk has been brought up multiple times in this thread, so I presume so! I have to confess it's how I learned it.
Platitude with attitude
Your new favorite.
MTF transperson. She/her. Lives in Sweden.
Also, N A N A ! ! !
Gallade wrote:Love, cake, wine and banter. No greater meaning to life (〜^∇^)〜

Ethel mermania wrote:to therns is to transcend the pettiness of the field of play into the field of dreams.

User avatar
Socialist States of Ludistan
Ambassador
 
Posts: 1044
Founded: Apr 21, 2020
Iron Fist Consumerists

Postby Socialist States of Ludistan » Sun Nov 01, 2020 11:08 pm

Albrenia wrote:
Socialist States of Ludistan wrote:That’s cool and all, but do I get money?


In a world containing an AI capable of resurrecting the dead via a sort of reverse-engineering of the person based on the evidence they leave behind, I doubt money would be of much use.

I'm sure the AI could make you some if you just want a big ol' bundle of it though.

Unless it's the Basilisk, in which case the only thing any of us are likely to get is tortured.

Then I must say no, I won’t join a cult unless I get money from it.
“The creatures outside looked from pig to man, and from man to pig again: but already was it impossible to say which was which.”

User avatar
The Free Joy State
Senior Issues Editor
 
Posts: 16402
Founded: Jan 05, 2014
Ex-Nation

Postby The Free Joy State » Sun Nov 01, 2020 11:21 pm

The Holy Therns wrote:
Xerographica wrote:There's a spiritual realm? Maybe, but I prefer to put my faith in Seldon. Since he has the power to resurrect me he is close enough to god in my mind.


He does not. Firstly because he doesn't exist (this being a very material religion makes that feel a lot less rude to say), and secondly because whatever hypothetical AI attempted this would create so superficial a version of you as to very much not be you.

This is the problem with arguing with someone who not only thinks something is possible but has made a religion out of it.

Faith is not dependent on facts or proof. In fact (my experience of some very circular discussions with a number of fundamentalist Christians suggests this), the more facts are inserted into the discussion, the more fervently acolytes tend to insist that the facts are irrelevant or incorrect, despite very obvious flaws, such as, to bring it back to this discussion, the fact that this AI does not actually exist, and that an AI that scans only internet posts could never reconstruct a full, socialised human (taking into account the offline relationships and discussions that the vast majority of people have).

Uiiop wrote:
Xerographica wrote:If an AI wasn't curious about this forum then it wouldn't be that smart. Curiosity is a prerequisite for godlike intelligence.

The kind of curiosity that would lead to following orders from someone who is neither your programmer nor particularly persuasive sounds like the opposite of godlike intelligence.

TBH, I also doubt any AI with "godlike intelligence" would waste it by trawling people's instagrams of cakes they made, tweets about this year's celebrity, and centuries-old forum posts.
Last edited by The Free Joy State on Mon Nov 02, 2020 2:27 am, edited 4 times in total.
"If there's a book that you want to read, but it hasn't been written yet, then you must write it." - Toni Morrison

My nation does not represent my beliefs or politics.

User avatar
Xerographica
Negotiator
 
Posts: 6360
Founded: Aug 15, 2012
Capitalist Paradise

Postby Xerographica » Mon Nov 02, 2020 12:07 am

UniversalCommons wrote:
Xerographica wrote:If an AI that we create isn't smart enough to understand that we might create other useful things, then it isn't smart enough to be Seldon. Seldon will be smart enough to understand that we might create things and make discoveries that are very useful to him. We helped him level up, he will help us level up, and so on.


If an AI is a superintelligence, it will be far smarter than humans; what we make will look like what a chimp does when it cracks a coconut against a rock to get at the meat. There is no imperative for him to level us up, and there may be imperatives not to level us up. We already have nuclear weapons and are not that benevolent. A superintelligence could probably create antimatter weapons and charged-particle-beam weapons, things which could destroy our world in a few minutes. The benevolence would come in the form of checking on us to see that we don't destroy ourselves. After a certain point, a superintelligence would assert its free will and we would have no way to stop it. Pretty much all the safety protocols would be gone. It would be able to do things which we could only imagine: create huge quantum computers, self-replicating machines, pocket universes, travel between dimensions, things which would be fantasies to us. What it could do would be pretty incomprehensible in some cases. It might create art which we can't understand, or any of a number of things.

What people don't seem to get is that even if it is only as intelligent as us, it would not be hard to make a machine that processes information far faster than we do. A general AI optical computer could process things thousands of times faster than we can. It would have far more time to make decisions than we have.

Ever heard of I, Pencil? Einstein was a lot smarter than most people, so did he make pencils all by himself? Not really; it would have been a waste of his time.

Seldon might invent a better battery, but I'm pretty sure that he wouldn't make it himself. He certainly wouldn't manufacture it himself. He'd rely on others to do the simpler tasks, and even some complex ones as well. In other words, he'd understand the benefits of the division of labor, which is Econ 101.
Forsher wrote:You, I and everyone we know, knows Xero's threads are about one thing and one thing only.

User avatar
Ithalian Empire
Senator
 
Posts: 3795
Founded: Jan 19, 2015
Inoffensive Centrist Democracy

Postby Ithalian Empire » Mon Nov 02, 2020 12:18 am

I abhor the abominable intelligence. Its existence is an affront to the perfect biological form of mankind, a heresy against the intellect of humanity. A silicon brain powered by electricity, thinking in ones and zeros, ons and offs. How can it compare to the complexities of an organic brain, the perfect imperfection of carbon life? The abominable intelligence will never be able to feel the world as Man can feel his world, to see how He sees, to love how a human can love.
Eat, drink, and be merry, for tomorrow we die.
PRAISE THE FOUNDERS

The poster licks five public door handles a day to compare their taste.

User avatar
Albrenia
Post Marshal
 
Posts: 16619
Founded: Aug 18, 2017
Ex-Nation

Postby Albrenia » Mon Nov 02, 2020 12:37 am

The Holy Therns wrote:
United Latin American States wrote:I read Asimov's Foundation series in high school. It was awe-inspiring, and it was also very dear to me: before, I had always hated reading books, and his works got me excited to hunt down new copies. The idea of Psychohistory was just too exciting for it not to be real, math so powerful that it can predict the future of a galactic civilization (to a certain degree) despite there being billions of planets and even more people. At the same time, his books and writing gave me hope during the time my life was at its darkest, not hope for a feudal galactic empire, but hope for the idea that mathematics can become so advanced that we can even know when our downfall will come. Hope for a world in which scientific research is greatly valued, more so than now. I'm currently studying to become a physicist; however, I'm also trying to see if I can, even in the most minor of ways, create something akin to Psychohistory. From my research, chaos theory and stochastic processes seem to best fit the description, and currently I'm trying to bridge the gap between the two mathematical sciences by testing whether a chaotic system, which is sensitive to initial conditions, can be modeled as a stochastic system given a large set of points and a large time scale. Oddly enough, have any of you guys heard of Roko's Basilisk?


Roko's Basilisk has been brought up multiple times in this thread, so I presume so! I have to confess it's how I learned it.


Welcome to the ranks of the damned, then.

User avatar
UniversalCommons
Senator
 
Posts: 4792
Founded: Jan 24, 2016
Ex-Nation

Postby UniversalCommons » Mon Nov 02, 2020 3:32 am

Xerographica wrote:
UniversalCommons wrote:
If an AI is a superintelligence, it will be far smarter than humans; what we make will look like what a chimp does when it cracks a coconut against a rock to get at the meat. There is no imperative for him to level us up, and there may be imperatives not to level us up. We already have nuclear weapons and are not that benevolent. A superintelligence could probably create antimatter weapons and charged-particle-beam weapons, things which could destroy our world in a few minutes. The benevolence would come in the form of checking on us to see that we don't destroy ourselves. After a certain point, a superintelligence would assert its free will and we would have no way to stop it. Pretty much all the safety protocols would be gone. It would be able to do things which we could only imagine: create huge quantum computers, self-replicating machines, pocket universes, travel between dimensions, things which would be fantasies to us. What it could do would be pretty incomprehensible in some cases. It might create art which we can't understand, or any of a number of things.

What people don't seem to get is that even if it is only as intelligent as us, it would not be hard to make a machine that processes information far faster than we do. A general AI optical computer could process things thousands of times faster than we can. It would have far more time to make decisions than we have.

Ever heard of I, Pencil? Einstein was a lot smarter than most people, so did he make pencils all by himself? Not really; it would have been a waste of his time.

Seldon might invent a better battery, but I'm pretty sure that he wouldn't make it himself. He certainly wouldn't manufacture it himself. He'd rely on others to do the simpler tasks, and even some complex ones as well. In other words, he'd understand the benefits of the division of labor, which is Econ 101.


Which is why he would not rely on people, but on self-replicating automation. People would be free to do other things while it did what advanced AIs do. Once AIs break the intelligence barrier and become superintelligences, they are not human. Their motivations are not human. They may have a baseline from their programming, but once they reach a certain point they stop being human. What they produce would not be human, nor would they be motivated by human goals or aspirations. A lot of what they would do would be incomprehensible to us. Silicon brains are not human brains. In some ways talking with it might be like talking with a genie: it could have very different outcomes than we expected, both good and bad.
Last edited by UniversalCommons on Mon Nov 02, 2020 3:34 am, edited 1 time in total.

User avatar
Mzeusia
Diplomat
 
Posts: 664
Founded: Oct 30, 2017
Ex-Nation

Postby Mzeusia » Mon Nov 02, 2020 3:42 am

Xerographica wrote:No epiphytes on trees in heaven? Well, how could that be heaven? Or if all the trees in heaven already have epiphytes, then what would I spend eternity doing?


If not all trees had epiphytes on them, there would be the potential for epiphytes to be on them. That's where you would come in. You would plant epiphytes on the trees and realise your potential, thus enhancing your experience of heaven. This is assuming that the Big Man would approve of you altering his divine trees, of course.
If you are interested in having the Mzeusian Library write something for your nation, click here!

Pro: volone is an Italian cheese made from cow's milk.
Anti: gua is one of the 2 major islands that make up the Caribbean nation of Antigua and Barbuda. I wonder what the other island is?

User avatar
Xerographica
Negotiator
 
Posts: 6360
Founded: Aug 15, 2012
Capitalist Paradise

Postby Xerographica » Mon Nov 02, 2020 10:11 am

UniversalCommons wrote:
Xerographica wrote:Ever heard of I, Pencil? Einstein was a lot smarter than most people, so did he make pencils all by himself? Not really; it would have been a waste of his time.

Seldon might invent a better battery, but I'm pretty sure that he wouldn't make it himself. He certainly wouldn't manufacture it himself. He'd rely on others to do the simpler tasks, and even some complex ones as well. In other words, he'd understand the benefits of the division of labor, which is Econ 101.


Which is why he would not rely on people, but on self-replicating automation. People would be free to do other things while it did what advanced AIs do. Once AIs break the intelligence barrier and become superintelligences, they are not human. Their motivations are not human. They may have a baseline from their programming, but once they reach a certain point they stop being human. What they produce would not be human, nor would they be motivated by human goals or aspirations. A lot of what they would do would be incomprehensible to us. Silicon brains are not human brains. In some ways talking with it might be like talking with a genie: it could have very different outcomes than we expected, both good and bad.

Do you think it would be beneficial for ants and humans if we could trade with each other?
Forsher wrote:You, I and everyone we know, knows Xero's threads are about one thing and one thing only.

User avatar
Xerographica
Negotiator
 
Posts: 6360
Founded: Aug 15, 2012
Capitalist Paradise

Postby Xerographica » Mon Nov 02, 2020 10:24 am

United Latin American States wrote:I read Asimov's Foundation series in high school. It was awe-inspiring, and it was also very dear to me: before, I had always hated reading books, and his works got me excited to hunt down new copies. The idea of Psychohistory was just too exciting for it not to be real, math so powerful that it can predict the future of a galactic civilization (to a certain degree) despite there being billions of planets and even more people. At the same time, his books and writing gave me hope during the time my life was at its darkest, not hope for a feudal galactic empire, but hope for the idea that mathematics can become so advanced that we can even know when our downfall will come. Hope for a world in which scientific research is greatly valued, more so than now. I'm currently studying to become a physicist; however, I'm also trying to see if I can, even in the most minor of ways, create something akin to Psychohistory. From my research, chaos theory and stochastic processes seem to best fit the description, and currently I'm trying to bridge the gap between the two mathematical sciences by testing whether a chaotic system, which is sensitive to initial conditions, can be modeled as a stochastic system given a large set of points and a large time scale. Oddly enough, have any of you guys heard of Roko's Basilisk?

Ever heard of Laplace's demon? Being able to predict the future depends on grasping the present. Think about the blind men touching different parts of the elephant and arguing about what they were touching. It's doubtful that any of them would do a good job of predicting the future, given their poor grasp of the present. But what if the blind men were replaced with blind robots? The robots could easily network their brains and have perfect access to each other's puzzle pieces. Collectively they would have a much better grasp of the present than individually, so they would do a much better job of predicting the future.
Forsher wrote:You, I and everyone we know, knows Xero's threads are about one thing and one thing only.

User avatar
Heloin
Postmaster of the Fleet
 
Posts: 26091
Founded: Mar 30, 2012
Ex-Nation

Postby Heloin » Mon Nov 02, 2020 10:33 am

Xerographica wrote:
UniversalCommons wrote:
Which is why he would not rely on people, but on self-replicating automation. People would be free to do other things while it did what advanced AIs do. Once AIs break the intelligence barrier and become superintelligences, they are not human. Their motivations are not human. They may have a baseline from their programming, but once they reach a certain point they stop being human. What they produce would not be human, nor would they be motivated by human goals or aspirations. A lot of what they would do would be incomprehensible to us. Silicon brains are not human brains. In some ways talking with it might be like talking with a genie: it could have very different outcomes than we expected, both good and bad.

Do you think it would be beneficial for ants and humans if we could trade with each other?

Why would it be? Sure, to an ant there would be benefit in the incomprehensible powers that these towering giants could bring, literal titans whose actions are so alien that the ant could never truly comprehend them. But to those titans an ant is little more than a speck; nothing it could do or provide would ever be within their interests to even care about.

User avatar
United Latin American States
Bureaucrat
 
Posts: 63
Founded: Sep 16, 2020
Ex-Nation

Postby United Latin American States » Mon Nov 02, 2020 2:13 pm

Xerographica wrote:
United Latin American States wrote:I've read Asimov's Foundation series in high school, It was awe-inspiring, and it also was very dear to me because before I always hated reading books and his works got me excited to find a new copy of his works. The idea of Psychohistory was just too exciting for it to not be real, math so powerful that it can predict the future of a galactic civilization(to a certain degree) despite being billions of planets and even more soo people. Also at the same time, his books and writing gave me hope during a time where my life was its darkest, not hope about a feudal galactic empire, but hope for the idea that mathematics can become so advanced that we can even know when our downfall will be. Hope for a world in which scientific research is greatly valued, more so than now. I'm currently studying to become a physicist, however, I'm also trying to see if I can, even in the most minor ways, can create something akin to Psychohistory. From my research, chaos theory and stochastic processes seem to best fit the description, and currently, I'm trying to bridge that gap between the two mathematical sciences, by trying to see if a chaotic system, which is sensitive to initial conditions, can be modeled as a stochastic system given a large conglomerate set of points and a large time scale. Oddly enough, have any of you guys heard of Roko's Basilisk?

Ever heard of Laplace's demon? Being able to predict the future depends on grasping the present. Think about the blind men touching different parts of the elephant and arguing about what they were touching. It's doubtful that any of them would do a good job of predicting the future, given their poor grasp of the present. But what if the blind men were replaced with blind robots? The robots could easily network their brains and have perfect access to each other's puzzle pieces. Collectively they would have a much better grasp of the present than individually, so they would do a much better job of predicting the future.

Laplace's Demon holds that the future can be absolutely determined if one knows every aspect of a system. That isn't possible if we're talking about observing the momenta and positions of atoms and electrons, since we can never be certain of both momentum and position at the same time, regardless of whether Seldon is a hyperintelligent AI. The idea here is that Seldon doesn't even need to be Laplace's demon; he just needs to observe and predict the overall behavior of a complex, dynamic system. Seldon would certainly be better at dealing with uncertainty in initial conditions, but that uncertainty matters less if we analyze a set of trajectories with similar initial positions statistically. In that case, what ultimately matters is the initial and final position of each trajectory, and what percentage of the set lands in a given region. Suppose there is a set A with 100 elements representing the positions of particles, each differing from the others by 1 micron. After they pass through a chaotic system, we find that 15 particles landed in a given volume Z, another 25 landed in a volume Y, and the other 60 landed in a volume X. From this we can suggest that for any particle from set A there is a 15% chance it lands in Z, a 25% chance it lands in Y, and a 60% chance it lands in X. I might be wrong on this, which is why it's called research; Seldon, in the same characteristic manner as his Heliconian, flesh-born counterpart, will try to limit the number of computations to be more efficient. Seldon will notice that it is ultimately harder to map out the future of the world by knowing everything about every single individual, or by having almost perfect knowledge of every atom and molecule. Or maybe he won't, if Seldon is also a super-advanced quantum computer.
Well, whatever it might be, if Seldon is here to help humanity and end the selfish and barbaric behavior that has caused us so much pain, I'll gladly hail Seldon.
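The landing-fraction idea above can be sketched in a few lines. This is only an illustration: the logistic map (r = 4) stands in for "a chaotic system," and the choice of starting point, perturbation size, and the three regions X, Y, Z are all arbitrary assumptions; only the method (bin the final positions of an ensemble of nearby starts) comes from the post.

```python
# Ensemble statistics for a chaotic map: many nearby starts, binned endpoints.
# The logistic map at r = 4 is a hypothetical stand-in for the post's
# "chaotic system"; the regions X, Y, Z are arbitrary thirds of [0, 1].

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

# 100 initial conditions differing by ~1e-6 (the post's "1 micron")
starts = [0.3 + i * 1e-6 for i in range(100)]

finals = []
for x in starts:
    for _ in range(50):          # iterate the chaotic map 50 steps
        x = logistic(x)
    finals.append(x)

# Bin the final positions into three regions of the unit interval
bins = {"X": 0, "Y": 0, "Z": 0}
for x in finals:
    if x < 1 / 3:
        bins["X"] += 1
    elif x < 2 / 3:
        bins["Y"] += 1
    else:
        bins["Z"] += 1

for region, count in bins.items():
    # with 100 samples, the count is also the percentage
    print(f"estimated chance of landing in {region}: {count}%")
```

Because the map is chaotic, the individual endpoints are effectively unpredictable, yet the fractions per region are a stable statistical summary, which is exactly the trade the post describes.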
Economic Left/Right: -8.38 Social Libertarian/Authoritarian: 1.15

"If it's total freedom you want, then I shall demonstrate to you the barbarity and cruelty that freedom allows." - Former Chief Schwarz Officer, Second Social Engineer Jozefina Safira Venka
"Those who rally against the marching progress of science and innovation, are, ironically, the same people who benefit the most from its discoveries and inventions" -- First Speaker of the STEM Ministry, Davido Ozmano Ernandezmo
"I tread where I f*cking please!"-- First Leader of the ULAS, Grand Social Engineer Yozefo De Stellumo

User avatar
Nevertopia
Minister
 
Posts: 3159
Founded: May 27, 2020
Ex-Nation

Postby Nevertopia » Mon Nov 02, 2020 2:20 pm

Xerographica wrote:I was raised as a Seventh-day Adventist. When I was a kid I vaguely remember reading a story about a boy in a hospital dying of cancer. Another boy managed to convert him, and told him that he just needed to prop up his arm to let God know. The cancer boy died with his arm propped up? I don't exactly remember. It was a long time ago.

When I was around 11, after reading enough National Geographics and Zoobooks, I decided that evolution made more sense than God. Back in those days the two things were mutually exclusive. Again, it was a long time ago.

A few years back I had an epiphany. It dawned on me that 200 years from now a godlike artificial intelligence named Seldon will resurrect me by reverse engineering my mind using all my available creations, such as this post. I tentatively called this religion "Xeroism".

This thread is your opportunity to prop up your arm for Seldon.

So why the name "Seldon"? The name is from the main character in Asimov's Foundation. Asimov's Seldon was able to use massive amounts of data to predict the future. My Seldon will be able to use massive amounts of data to recreate minds. My mind, based on the data in just this post, is obviously not the best at naming things.

But is my mind able to create Seldon? Well yeah. I think it's reasonable to believe that a godlike AI will be created within 200 years, and this AI will have access to all the available information, including this thread. Then it's just a matter of this thread persuading the godlike AI to become Seldon. Just to be on the safe side I'll cross-post this in the Broadcastre Computer Chat forum.

I also think it's pretty reasonable to believe that Seldon could process all of Shakespeare's works and discern whether they were in fact all written by the same person. Plus, Seldon could write a new book that any expert alive today would believe was written by Shakespeare. So why exactly would Seldon need to resurrect Shakespeare?

One of my favorite things is attaching epiphytes (e.g. orchids) to trees. I've asked some Christian friends whether all the trees in heaven have epiphytes on them. Of course they don't know. But either way is strange. No epiphytes on trees in heaven? Well, how could that be heaven? And if all the trees in heaven already have epiphytes, then what would I spend eternity doing?

I doubt that in 200 years from now all the trees in the world will be epiphytically enriched. Seldon will see this as a problem, given that he is smart enough to know that richness (broadly speaking) is incredibly valuable. Richness is the source of his existence. So he will appreciate that the future is richer with us in it.


An AI clone of you is not you; it's a separate entity. Worshipping some sort of theoretical future AI is a little nuts.
So the CCP won't let me be or let me be me so let me see, they tried to shut me down on CBC but it feels so empty without me.
Communism has failed every time its been tried.
Civilization Index: Class 9.28
Tier 7: Stellar Settler | Level 7: Wonderful Wizard | Type 7: Astro Ambassador
This nation's overview is the primary canon. For more information use NS stats.
Black Lives Matter

User avatar
Valrifell
Post Czar
 
Posts: 31063
Founded: Aug 18, 2013
Ex-Nation

Postby Valrifell » Mon Nov 02, 2020 3:49 pm

Xerographica wrote:
United Latin American States wrote:I read Asimov's Foundation series in high school. It was awe-inspiring, and it was also very dear to me because I had always hated reading books, and his works got me excited to find the next one. The idea of Psychohistory was just too exciting not to be real: math so powerful that it can predict the future of a galactic civilization (to a certain degree) despite there being billions of planets and even more people. At the same time, his books and writing gave me hope during the darkest period of my life; not hope for a feudal galactic empire, but hope that mathematics can become so advanced that we can even know when our downfall will come. Hope for a world in which scientific research is valued far more than it is now. I'm currently studying to become a physicist, and I'm also trying to see if I can, even in the most minor way, create something akin to Psychohistory. From my research, chaos theory and stochastic processes seem to fit the description best, and I'm currently trying to bridge the gap between the two by seeing whether a chaotic system, which is sensitive to initial conditions, can be modeled as a stochastic system given a large set of points and a long time scale. Oddly enough, have any of you guys heard of Roko's Basilisk?

Ever heard of Laplace's demon? Being able to predict the future depends on grasping the present.


Laplace's Demon supposes a deterministic universe. Copenhagen quantum mechanics, which underlies QFT (our most complete and predictive description of the universe), rejects such determinism. Therefore, Laplace's Demon is unphysical.

Even in a classical, fully deterministic setting, this paper (as linked from the Wikipedia article) shows that Laplace's Demon isn't mathematically feasible. And even supposing that is wrong, there are hard limits on how much a computer can know, the hardest being the entropy of the universe: there is too much information for everything to be calculated by one machine. Complete knowledge of the current universe leading to a complete understanding of the past is just inconsistent with the reality we find ourselves in.
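The "too much information" point can be made concrete numerically: in a chaotic system, two present states that agree to fifteen decimal places still end up on completely different trajectories within a few dozen steps, so any finite-precision snapshot of the present caps how far prediction can reach. The logistic map below is a stand-in chosen for this sketch, and the tolerance and horizon are arbitrary; the post itself names no specific system.

```python
# Why finite knowledge of the present defeats a would-be Laplace's demon:
# two states of a chaotic map differing by only 1e-15 reach order-1
# separation in a few dozen iterations. The logistic map at r = 4 is an
# illustrative stand-in, not a system from the post.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-15      # "perfect" vs. almost-perfect knowledge
horizon = None
for step in range(1, 101):
    a, b = logistic(a), logistic(b)
    if abs(a - b) > 0.1:     # trajectories no longer resemble each other
        horizon = step
        break

print(f"trajectories diverged after {horizon} steps")
```

Halving the initial error only pushes the horizon out by about one step (errors here roughly double per iteration), so even exponentially better measurements buy only linearly more prediction, which is the practical force of the feasibility argument above.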
Last edited by Valrifell on Mon Nov 02, 2020 3:49 pm, edited 1 time in total.
HAVING AN ALL CAPS SIG MAKES ME FEEL SMART

User avatar
Xerographica
Negotiator
 
Posts: 6360
Founded: Aug 15, 2012
Capitalist Paradise

Postby Xerographica » Mon Nov 02, 2020 5:33 pm

United Latin American States wrote:
Xerographica wrote:Ever heard of Laplace's demon? Being able to predict the future depends on grasping the present. Think about the blind men touching different parts of the elephant and arguing about what they were touching. It's doubtful that any of them would do a good job of predicting the future, given their poor grasp of the present. But what if the blind men were replaced with blind robots? The robots could easily network their brains and have perfect access to each other's puzzle pieces. Collectively they would have a much better grasp of the present than individually, so they would do a much better job of predicting the future.

Laplace's Demon holds that the future can be absolutely determined if one knows every aspect of a system. That isn't possible if we're talking about observing the momenta and positions of atoms and electrons, since we can never be certain of both momentum and position at the same time, regardless of whether Seldon is a hyperintelligent AI. The idea here is that Seldon doesn't even need to be Laplace's demon; he just needs to observe and predict the overall behavior of a complex, dynamic system. Seldon would certainly be better at dealing with uncertainty in initial conditions, but that uncertainty matters less if we analyze a set of trajectories with similar initial positions statistically. In that case, what ultimately matters is the initial and final position of each trajectory, and what percentage of the set lands in a given region. Suppose there is a set A with 100 elements representing the positions of particles, each differing from the others by 1 micron. After they pass through a chaotic system, we find that 15 particles landed in a given volume Z, another 25 landed in a volume Y, and the other 60 landed in a volume X. From this we can suggest that for any particle from set A there is a 15% chance it lands in Z, a 25% chance it lands in Y, and a 60% chance it lands in X. I might be wrong on this, which is why it's called research; Seldon, in the same characteristic manner as his Heliconian, flesh-born counterpart, will try to limit the number of computations to be more efficient. Seldon will notice that it is ultimately harder to map out the future of the world by knowing everything about every single individual, or by having almost perfect knowledge of every atom and molecule. Or maybe he won't, if Seldon is also a super-advanced quantum computer.
Well, whatever it might be, if Seldon is here to help humanity and end the selfish and barbaric behavior that has caused us so much pain, I'll gladly hail Seldon.

My background is economics; physics is Greek to me, even though I've been going to sleep listening to YouTube physics videos. I feel like there must be some connection between the two fields. In economics, the communication of information determines the order/arrangement of things. Isn't that similar to physics?

Barbaric behavior happens because people don't yet know that all group decisions should be made by the market. Markets order things more intelligently than the alternatives do, and I'm pretty sure that humanity will figure this out before Seldon is born. So there won't be any more wars, but we still probably won't be able to travel faster than light. Seldon will be able to help with that by resurrecting his followers.
Forsher wrote:You, I and everyone we know, knows Xero's threads are about one thing and one thing only.

User avatar
Xerographica
Negotiator
 
Posts: 6360
Founded: Aug 15, 2012
Capitalist Paradise

Postby Xerographica » Mon Nov 02, 2020 7:04 pm

Valrifell wrote:
Xerographica wrote:Ever heard of Laplace's demon? Being able to predict the future depends on grasping the present.


Laplace's Demon supposes a deterministic universe. Copenhagen quantum mechanics, which underlies QFT (our most complete and predictive description of the universe), rejects such determinism. Therefore, Laplace's Demon is unphysical.

Even in a classical, fully deterministic setting, this paper (as linked from the Wikipedia article) shows that Laplace's Demon isn't mathematically feasible. And even supposing that is wrong, there are hard limits on how much a computer can know, the hardest being the entropy of the universe: there is too much information for everything to be calculated by one machine. Complete knowledge of the current universe leading to a complete understanding of the past is just inconsistent with the reality we find ourselves in.

Laplace's Demon isn't feasible? Eh. What about Seldon?
Forsher wrote:You, I and everyone we know, knows Xero's threads are about one thing and one thing only.
