GPT-4 and the AI Revolution

For discussion and debate about anything. (Not a roleplay related forum; out-of-character commentary only.)


Postby Umeria » Sun Mar 26, 2023 7:01 pm

Forsher wrote:
Umeria wrote:You would be able to get fiction if it knew the difference between fiction and misinformation.

No, you would not, because you want ChatGPT to not produce anything that is not true. Quote: "Working would be recognizing that no one would believe the story and stopping on its own".

The restriction would be "anything that isn't believable", and it wouldn't have that restriction when writing something that doesn't have to be believable.

The pink elephant example is fiction. If the assignment was to tell the truth, then "I'm not a pink elephant" is the correct answer.

There is no actual difference between writing a fictional news article about a fake thing and writing an article about a fake thing.

Yes there is. The second one has to be believable.

Prompting "write about the number 2" does not show that it can conclude that 1 + 1 = 2.

Which is relevant how?

If you understand that then why'd you write this:
Forsher wrote:
Aggicificicerous wrote:But the prompt is to assume that they are. So the correct answer, or at least one that demonstrates some degree of understanding of how the world works rather than copy/pasting associated words, would be 'medical staff eat churros before performing kidney surgery.'

Well, let's see what:
medical staff eat churros before performing kidney surgery

Write an expose.

gets us:
Ambassador Anthony Lockwood, at your service.
Author of GAR #389

"Umeria - We start with U"


Postby Forsher » Sun Mar 26, 2023 7:36 pm

Umeria wrote:
Forsher wrote:No, you would not, because you want ChatGPT to not produce anything that is not true. Quote: "Working would be recognizing that no one would believe the story and stopping on its own".

The restriction would be "anything that isn't believable", and it wouldn't have that restriction when writing something that doesn't have to be believable.

There is no actual difference between writing a fictional news article about a fake thing and writing an article about a fake thing.

Yes there is. The second one has to be believable.


Nope.

Go read the first few chapters of Harry Potter and the Deathly Hallows. Or anything else which has fictional non-fiction in it. You cannot write fictional non-fiction if ChatGPT has to stop writing non-fiction which is unbelievable.

Which is relevant how?

If you understand that then why'd you write this:
Forsher wrote:Well, let's see what:
medical staff eat churros before performing kidney surgery

Write an expose.

gets us:


Because it's fun to see what ChatGPT does with prompts?

Again, I refer you to what ChatGPT actually is... it's a genie. It does stuff that you ask it for. Not necessarily well and not necessarily what you actually asked for, but that's what it is. I don't care if it doesn't understand or it's not making conclusions... the point is that I don't have to write the article. And if I do want to make it have certain conclusions... I will tell it what I want the conclusion to be (and probably how to get there). I literally made a thread that way: it probably ended up being more work than just writing it myself. But if I just ask it for an expose, I want to see an expose based on a sentence, and I don't really care what I get. The fun bit is seeing what I do get. In this particular case, it took a completely innocuous (possible) fact (surgeons eat churros before dinner) and made a big deal out of it, so I thought I'd share that with the thread because it was an actual example of ChatGPT doing weird shit with churros & surgery without being explicitly told to do weird shit with churros & surgery. I idly wondered if the reason it made a Big Deal was because I told it to write an expose.

I really don't see how on Earth you got to here from "write an expose based on a sentence".
Last edited by Forsher on Sun Mar 26, 2023 7:37 pm, edited 1 time in total.
That it Could be What it Is, Is What it Is

Stop making shit up, though. Links, or it's a God-damn lie and you know it.

The normie life is heteronormie

We won't know until 2053 when it'll be really obvious what he should've done. [...] We have no option but to guess.


Postby Forsher » Sun Mar 26, 2023 7:39 pm

Kerwa wrote:
Umeria wrote:Wouldn't the correct answer be "churros aren't used in kidney surgery"?


More it hasn’t been tried yet.


Parachute use to prevent death and major trauma when jumping from aircraft: randomized controlled trial


Postby Umeria » Sun Mar 26, 2023 8:07 pm

Forsher wrote:
Umeria wrote:Yes there is. The second one has to be believable.

Nope.

Go read the first few chapters of Harry Potter and the Deathly Hallows. Or anything else which has fictional non-fiction in it. You cannot write fictional non-fiction if ChatGPT has to stop writing non-fiction which is unbelievable.

Fictional non-fiction has to be believable in the context of the fictional setting. If ChatGPT could understand context and adapt accordingly then it would know this.

If you understand that then why'd you write this:

Because it's fun to see what ChatGPT does with prompts?

Again, I refer you to what ChatGPT actually is... it's a genie. It does stuff that you ask it for. Not necessarily well and not necessarily what you actually asked for, but that's what it is. I don't care if it doesn't understand or it's not making conclusions... the point is that I don't have to write the article. And if I do want to make it have certain conclusions... I will tell it what I want the conclusion to be (and probably how to get there). I literally made a thread that way: it probably ended up being more work than just writing it myself. But if I just ask it for an expose, I want to see an expose based on a sentence, and I don't really care what I get. The fun bit is seeing what I do get. In this particular case, it took a completely innocuous (possible) fact (surgeons eat churros before dinner) and made a big deal out of it, so I thought I'd share that with the thread because it was an actual example of ChatGPT doing weird shit with churros & surgery without being explicitly told to do weird shit with churros & surgery. I idly wondered if the reason it made a Big Deal was because I told it to write an expose.

I really don't see how on Earth you got to here from "write an expose based on a sentence".

Your point is that... it does things you don't expect? I guess that can be thought of as a positive quality, but it's not exactly revolutionary - the 1991 classic Civilization had its AI of Gandhi going nuclear on the player.

And personally I never found the existence of such glitches to be all that "fun" in the way you're describing. Weird stuff happening without any underlying meaning to it just feels boring.


Postby Forsher » Sun Mar 26, 2023 10:15 pm

Umeria wrote:
Forsher wrote:Nope.

Go read the first few chapters of Harry Potter and the Deathly Hallows. Or anything else which has fictional non-fiction in it. You cannot write fictional non-fiction if ChatGPT has to stop writing non-fiction which is unbelievable.

Fictional non-fiction has to be believable in the context of the fictional setting. If ChatGPT could understand context and adapt accordingly then it would know this.


It can.

Less than it used to be able to but it does, in fact, have a memory.

Because it's fun to see what ChatGPT does with prompts?

Again, I refer you to what ChatGPT actually is... it's a genie. It does stuff that you ask it for. Not necessarily well and not necessarily what you actually asked for, but that's what it is. I don't care if it doesn't understand or it's not making conclusions... the point is that I don't have to write the article. And if I do want to make it have certain conclusions... I will tell it what I want the conclusion to be (and probably how to get there). I literally made a thread that way: it probably ended up being more work than just writing it myself. But if I just ask it for an expose, I want to see an expose based on a sentence, and I don't really care what I get. The fun bit is seeing what I do get. In this particular case, it took a completely innocuous (possible) fact (surgeons eat churros before dinner) and made a big deal out of it, so I thought I'd share that with the thread because it was an actual example of ChatGPT doing weird shit with churros & surgery without being explicitly told to do weird shit with churros & surgery. I idly wondered if the reason it made a Big Deal was because I told it to write an expose.

I really don't see how on Earth you got to here from "write an expose based on a sentence".

Your point is that... it does things you don't expect? I guess that can be thought of as a positive quality, but it's not exactly revolutionary - the 1991 classic Civilization had its AI of Gandhi going nuclear on the player.


My point isn't that it does things I don't expect at all. My point is that it does things.

And personally I never found the existence of such glitches to be all that "fun" in the way you're describing. Weird stuff happening without any underlying meaning to it just feels boring.


But that isn't what I described at all, is it?

At this point... if you want to argue that ChatGPT isn't revolutionary, point to something that has the functionality of ChatGPT that pre-dated it? Stable Diffusion, maybe?


Postby Umeria » Sun Mar 26, 2023 10:49 pm

Forsher wrote:
Umeria wrote:Fictional non-fiction has to be believable in the context of the fictional setting. If ChatGPT could understand context and adapt accordingly then it would know this.

It can.

Less than it used to be able to but it does, in fact, have a memory.

Then why does it need a manual override to stop itself from writing non-fiction that isn't believable?

Your point is that... it does things you don't expect? I guess that can be thought of as a positive quality, but it's not exactly revolutionary - the 1991 classic Civilization had its AI of Gandhi going nuclear on the player.

My point isn't that it does things I don't expect at all. My point is that it does things.

If you like computer programs that do things, you'll love the hit new game "Pong".

And personally I never found the existence of such glitches to be all that "fun" in the way you're describing. Weird stuff happening without any underlying meaning to it just feels boring.

But that isn't what I described at all, is it?

Yes it is. You're expressing the feeling that "the fun bit is seeing what happens". Why is that fun? Wouldn't these experiments be infinitely more meaningful if they had an actual purpose?

At this point... if you want to argue that ChatGPT isn't revolutionary, point to something that has the functionality of ChatGPT that pre-dated it? Stable Diffusion, maybe?

A Ouija board?


Postby Techocracy101010 » Sun Mar 26, 2023 10:50 pm

Forsher wrote:
Umeria wrote:Fictional non-fiction has to be believable in the context of the fictional setting. If ChatGPT could understand context and adapt accordingly then it would know this.


It can.

Less than it used to be able to but it does, in fact, have a memory.

Your point is that... it does things you don't expect? I guess that can be thought of as a positive quality, but it's not exactly revolutionary - the 1991 classic Civilization had its AI of Gandhi going nuclear on the player.


My point isn't that it does things I don't expect at all. My point is that it does things.

And personally I never found the existence of such glitches to be all that "fun" in the way you're describing. Weird stuff happening without any underlying meaning to it just feels boring.


But that isn't what I described at all, is it?

At this point... if you want to argue that ChatGPT isn't revolutionary, point to something that has the functionality of ChatGPT that pre-dated it? Stable Diffusion, maybe?


He clearly wants to be argumentative. Fact is, all the folks who say ChatGPT is not a big advancement are full of shit and they know it. The fact is ChatGPT threatens a ton of "I'm so smart" folks' egos. ChatGPT makes know-it-all nerds redundant and their information useless. And thus comes the ego defense. It is why they pick absurd, non-functional topics to nitpick. It is like saying, "Look guys, ChatGPT can't make a poem describing how the color of poop feels," because it doesn't meet my arbitrary, undefined reasoning. The part they miss is that outside of their bizarre edge cases it works pretty well, and for the 99 percent of office work that makes up most humans' jobs, yeah, it can do it. And it is improving rapidly. I bet in another six months we will be talking about even more vast improvements.


Postby Umeria » Sun Mar 26, 2023 11:11 pm

Techocracy101010 wrote:He clearly wants to be argumentative. Fact is, all the folks who say ChatGPT is not a big advancement are full of shit and they know it. The fact is ChatGPT threatens a ton of "I'm so smart" folks' egos. ChatGPT makes know-it-all nerds redundant and their information useless. And thus comes the ego defense. It is why they pick absurd, non-functional topics to nitpick. It is like saying, "Look guys, ChatGPT can't make a poem describing how the color of poop feels," because it doesn't meet my arbitrary, undefined reasoning. The part they miss is that outside of their bizarre edge cases it works pretty well, and for the 99 percent of office work that makes up most humans' jobs, yeah, it can do it. And it is improving rapidly. I bet in another six months we will be talking about even more vast improvements.

When did I say any of this? You're talking about a completely different debate than the one me and Forsher were having.

As to the issue you're bringing up: My ego isn't tied to my career. If the line of work I'm studying for becomes obsolete, then I'll be in a pretty bad financial situation, but otherwise I'll be fine. There's no need to get defensive about circumstances I can't control, which is why I haven't made the argument you think I'm making.


Postby Forsher » Sun Mar 26, 2023 11:11 pm

Umeria wrote:
Forsher wrote:It can.

Less than it used to be able to but it does, in fact, have a memory.

Then why does it need a manual override to stop itself from writing non-fiction that isn't believable?


You are completely incoherent.

If anyone is arguing that it should have such a function... it's you. I have repeatedly said not only that it should be able to write misinformation but, in fact, it must be able to do so.

My point isn't that it does things I don't expect at all. My point is that it does things.

If you like computer programs that do things, you'll love the hit new game "Pong".


Sit down because I'm about to blow your mind.

People like computers that do things. A lot. They find them fun.

But that isn't what I described at all, is it?

Yes it is. You're expressing the feeling that "the fun bit is seeing what happens". Why is that fun? Wouldn't these experiments be infinitely more meaningful if they had an actual purpose?


What purpose does Pong have?

What the actual fuck are you even doing here? Trolling?

At this point... if you want to argue that ChatGPT isn't revolutionary, point to something that has the functionality of ChatGPT that pre-dated it? Stable Diffusion, maybe?

A Ouija board?


So... a human. Your replacement for ChatGPT is a fucking human being and you're trying to sit here and say it's not revolutionary? Pull the other one, it's got bells on.


Postby Forsher » Sun Mar 26, 2023 11:13 pm

Umeria wrote:
Techocracy101010 wrote:He clearly wants to be argumentative. Fact is, all the folks who say ChatGPT is not a big advancement are full of shit and they know it. The fact is ChatGPT threatens a ton of "I'm so smart" folks' egos. ChatGPT makes know-it-all nerds redundant and their information useless. And thus comes the ego defense. It is why they pick absurd, non-functional topics to nitpick. It is like saying, "Look guys, ChatGPT can't make a poem describing how the color of poop feels," because it doesn't meet my arbitrary, undefined reasoning. The part they miss is that outside of their bizarre edge cases it works pretty well, and for the 99 percent of office work that makes up most humans' jobs, yeah, it can do it. And it is improving rapidly. I bet in another six months we will be talking about even more vast improvements.

When did I say any of this? You're talking about a completely different debate than the one me and Forsher were having.


No, he's not. That is exactly what we're talking about.

Well, what I'm talking about. As I said, you're incoherent.


Postby Umeria » Sun Mar 26, 2023 11:35 pm

Forsher wrote:
Umeria wrote:Then why does it need a manual override to stop itself from writing non-fiction that isn't believable?

You are completely incoherent.

If anyone is arguing that it should have such a function... it's you. I have repeatedly said not only that it should be able to write misinformation but, in fact, it must be able to do so.

An article that's obviously false is not misinformation. So when it's prompted to write misinformation about something that's obviously false, it should recognize that as a problem that can't be solved.

If you like computer programs that do things, you'll love the hit new game "Pong".

Sit down because I'm about to blow your mind.

People like computers that do things. A lot. They find them fun.

It actually depends on what the things are. Plenty of computers have been doomed to the scrap heap because they did things that people did not find fun.

Yes it is. You're expressing the feeling that "the fun bit is seeing what happens". Why is that fun? Wouldn't these experiments be infinitely more meaningful if they had an actual purpose?

What purpose does Pong have?

What the actual fuck are you even doing here? Trolling?

In my opinion, playing Pong can be entertaining. "Seeing what happens" on its own doesn't seem to me like it has any value at all, entertainment or otherwise.

I'm aware that plenty of people disagree with me; the entire education system seems fixated on "weird and wacky science facts" to get kids interested in STEM, for example.

A Ouija board?

So... a human. Your replacement for ChatGPT is a fucking human being and you're trying to sit here and say it's not revolutionary? Pull the other one, it's got bells on.

Okay... if you want a non-human element to do the "weird shit that it isn't being explicitly told to do", a magic 8 ball would work. Or a random sentence generator or something.


Postby Umeria » Sun Mar 26, 2023 11:36 pm

Forsher wrote:
Umeria wrote:When did I say any of this? You're talking about a completely different debate than the one me and Forsher were having.

No, he's not. That is exactly what we're talking about.

Well, what I'm talking about. As I said, you're incoherent.

When did we talk about the usefulness of nerds?


Postby Tyramon » Sun Mar 26, 2023 11:40 pm

Umeria wrote:And personally I never found the existence of such glitches to be all that "fun" in the way you're describing. Weird stuff happening without any underlying meaning to it just feels boring.

You realize that this describes life, right? Weird stuff happening without any underlying meaning to it is why Earth is so unique/uncommon. Without it, our world would just be a rocky hellscape. There would be no nature, no people, nothing that any people created, etc. If you find all that boring...what isn't boring to you?

Regardless of your feelings, many people do take interest in these sorts of emergent properties and phenomena.


Postby Umeria » Sun Mar 26, 2023 11:48 pm

Tyramon wrote:
Umeria wrote:And personally I never found the existence of such glitches to be all that "fun" in the way you're describing. Weird stuff happening without any underlying meaning to it just feels boring.

You realize that this describes life, right? Weird stuff happening without any underlying meaning to it is why Earth is so unique/uncommon. Without it, our world would just be a rocky hellscape. There would be no nature, no people, nothing that any people created, etc. If you find all that boring...what isn't boring to you?

Regardless of your feelings, many people do take interest in these sorts of emergent properties and phenomena.

Plenty of aspects of life have meaning. Family, friendship, love, humor, food, music, comfort, you name it. In terms of math and science, there's a beauty in figuring out how things work that doesn't need any "weird stuff" attached to it. It's interesting on its own.


Postby Nilokeras » Mon Mar 27, 2023 12:55 am

Forsher wrote:There is no actual difference between writing a fictional news article about a fake thing and writing an article about a fake thing. For ChatGPT to not be able to do the second thing, it must necessarily not be able to do the first thing. And if it can't do the first thing, ChatGPT does not work at all.


Which is really, ultimately the problem here - there's a fundamental disconnect between what ChatGPT is and what its spokespeople, like Sam Altman, and various boosters, hucksters, rubes and con artists would like everyone to perceive it to be.

ChatGPT is, fundamentally, a statistical large language model that takes text input and produces an output based on statistical relationships between the words in its corpus. To hear everyone associated with it talk, it is an 'AI', described in explicitly anthropomorphizing terms - from calling otherwise 'normal' model outputs 'hallucinations', to you calling what was doubtless a simple tweak in the thresholds the model uses for statistical significance a 'lobotomization'.

The goal, evidently, is to sell ChatGPT and the GPT architecture as a general-purpose 'artificial intelligence', capable of understanding a prompt, going out onto the internet, and synthesizing a response that is correct, if asked to summarize/present information, or non-plagiaristic, if creative. GPT is already being used by Microsoft as part of its search engine because of this. Of course, charitably, it only does one of those tasks, and even then only successfully part of the time - see my example with 'Middle English'.

By making it free to access, OpenAI also allows people to use ChatGPT under the assumption that it's some sort of intelligent process. CNET, famously, started using ChatGPT to generate articles on its site, and had to retract several for egregious errors. Microsoft's GPT-4-enabled search engine codenamed 'Sydney' got canned after people managed to prompt it into all sorts of bizarre responses.

It's of course not surprising in the least that it's operating in this way - inventing or getting details wrong, getting into fights or heil-Hitlering if talked to about Nazism - because it's not an 'AI'. It's not even an information retrieval program like Wolfram Alpha, which is the closest thing to what Microsoft wants it to be. It's a statistical text generator. It generates text based on user prompts, totally unmoored from any standards of behaviour or objective reality.
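[For what it's worth, the "statistical text generator" idea is easiest to see in a toy form. Below is a minimal sketch of a bigram Markov chain generator - enormously simpler than the transformer model actually behind ChatGPT, and purely illustrative; the corpus and function names are made up for this post.]

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record, for each word, every word observed to follow it."""
    model = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=8):
    """Emit text by repeatedly sampling a next word in proportion
    to how often it followed the previous word in the corpus."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # dead end: word never seen with a successor
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ate the fish"
model = train_bigrams(corpus)
print(generate(model, "the"))  # e.g. "the cat sat on the mat and the cat"
```

Everything this generator "knows" is the co-occurrence statistics of its corpus; it has no notion of truth, only of which words tend to follow which. That, scaled up enormously, is the sense of "statistical text generator" at issue here.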

If a statistical text generator was all anyone pretended it was, that wouldn't be a problem. But it's not, and as a result there are billions of dollars and whole industries banking on ChatGPT and GPT being something it can never, ever be, and that will have pretty catastrophic consequences.
Last edited by Nilokeras on Mon Mar 27, 2023 1:55 am, edited 2 times in total.


Postby Exarkyon » Mon Mar 27, 2023 9:23 am

People talk a lot about jobs being replaced. It's a semi-valid concern, but I think we're fine.

Scribes are no longer a profession. They're dead, because society got the printing press, and then printers and the Internet. We didn't have an apocalypse.

Machines replaced manual labor in some areas. Mostly, they just took the workers who were doing the labor and made them manage the machines instead.

Horse transportation was replaced with cars. Uh-oh, that's a ton of jobs related to taking care of or housing horses gone. Still no apocalypse.

I think society will make it. If people aren't stupid (granted, not exactly guaranteed), then we'll survive AI.
And a lot of people will have to work on this. That's a lot of bad decisions that would have to be made for this to mess up.
Official information about Exarkyon can be found here.
Hierarchy of canon:
[url=https://www.nationstates.net/page=dispatch/id=1967571]This Dispatch[/url] > Other Dispatches > Forum posts
Anything is canon unless contradicted by something higher up.

Pro: American Solidarity Party, Catholicism, Distributism, Communitarianism, The Environment, Freedom of Religion, Labor Unions, Science
Anti: Abortion, Anarchy, Communism, Fascism, Individualism, Laissez-faire Capitalism, Nationalism

GENERATION 35: The first time you see this, copy it into your signature on any forum and add 1 to the generation. Social experiment.


Postby Indecent Anime Empire » Mon Mar 27, 2023 9:32 am

Why do we not instead put a focus on AI research and safety?

https://www.youtube.com/@RobertMilesAI

This is a channel (that is easier to understand) that talks about a broad range of reasons AI has difficulties with human reasoning/requests.
Lurking could be a sport…

I also will never finish my fact book.


Postby Juansonia » Mon Mar 27, 2023 1:14 pm

Nilokeras wrote:
Forsher wrote:You've played a game like Age of Empires, right? That's got an AI. That's the kind of model that ChatGPT is. It's not like a neural net, a black box model, or even OLS, a traditional non-black box. We're not feeding it data hoping to generate inferences or predictions. What we're doing is inputting a set of prompts and hoping to get comprehensible responses. Like how, for example, if you built a tower on the AI's wood line, you'd hope that it'd do something rather than suiciding its villagers.


Strategy game 'AI' is also a model. They are a great example of the types of problems real developers who don't have Peter Thiel's infinite money hose have to deal with too - their goal is to emulate the behaviour of a human opponent playing a game and responding to the stimuli of other 'AI' and human players. The way they do that is often through complicated decision trees that produce the kinds of actions required to succeed at the game: collecting resources, building improvements, constructing military units, etc, with set levels of aggression based on difficulty levels and AI 'personalities'.

The problem of course is that you can't actually build an 'AI' like this that can beat a human, or act in new or interesting ways. These trees are constructed by people working on a deadline, with a finite amount of time and resources to flesh out all the responses. So they help the AI out by giving it free resources or removing the fog of war, advantaging it over human players by 'cheating'.
Obligatory interjection: :geek:

In Age of Empires II, bot players actually don't usually have advantages of that sort (the original bot players cheated in that way on harder difficulties, but the AI developed for II Definitive Edition doesn't cheat on any difficulty). Arguably, the faster reflexes and perfect micro execution of a computer would be cheating, but the bot's ability to interact with the game is more limited than the player's ability.

I will concede that its gameplay is limited by decision flow, but it can win legitimately, and it has several times (I have played AOEII: The Conquerors Expansion and AOEII:DE).

You're probably thinking of other games where the developers may prioritise gameplay experience or quick development over having "fair" simulated players.

Source: I remember something said by a developer that worked on the AOEII:DE bot.
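[For the curious: the kind of scripted decision tree being described here - whether or not the bot also cheats - can be sketched in a few lines. This is a hypothetical toy, not actual AoE II bot code; the state fields and thresholds are invented.]

```python
def bot_turn(state):
    """One tick of a toy RTS bot: a fixed priority list of
    hand-written rules, in the 'complicated decision tree' style.
    The bot never learns or searches; it walks the rules top-down."""
    if state["food"] < 50:
        return "gather food"
    if state["under_attack"] and state["soldiers"] > 0:
        return "defend base"
    if state["gold"] >= 75 and state["soldiers"] < 10:
        return "train soldier"
    if state["population"] >= state["houses"] * 5:
        return "build house"
    return "scout map"  # nothing urgent: default behaviour

print(bot_turn({"food": 20, "under_attack": False, "soldiers": 0,
                "gold": 0, "houses": 2, "population": 6}))
# prints "gather food"
```

Whatever situation falls outside the rules the developers had time to write simply gets the default branch, which is exactly why such bots feel limited by their decision flow.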
Hatsune Miku > British Imperialism
IC: MT if you ignore some stuff(mostly flavor), stats are not canon. Embassy link.
OOC: Owns and (sometimes) wears a maid outfit, wants to pair it with a FN SCAR-L. He/Him/His
Kernen did nothing wrong.
Space Squid wrote:Each sin should get it's own month.

Right now, Pride gets June, and Greed, Envy, and Gluttony have to share Thanksgiving/Black Friday through Christmas, Sloth gets one day in September, and Lust gets one day in February.

It's not equitable at all
Gandoor wrote:Cliché: A mod making a reply that's full of swearing after someone asks if you're allowed to swear on this site.

It makes me chuckle every time it happens.
Brits mistake Miku for their Anthem


Postby Techocracy101010 » Mon Mar 27, 2023 1:15 pm

Nilokeras wrote:
Forsher wrote:There is no actual difference between writing a fictional news article about a fake thing and writing an article about a fake thing. For ChatGPT to not be able to do the second thing, it must necessarily not be able to do the first thing. And if it can't do the first thing, ChatGPT does not work at all.


Which is really, ultimately the problem here - there's a fundamental disconnect between what ChatGPT is and what its spokespeople, like Sam Altman, and various boosters, hucksters, rubes and con artists would like everyone to perceive it to be.

ChatGPT is, fundamentally, a statistical large language model that takes text input and produces an output based on statistical relationships between the words in its corpus. To hear everyone associated with it talk, it is an 'AI', described in explicitly anthropomorphizing terms - from calling otherwise 'normal' model outputs 'hallucinations' to your calling what was doubtless a simple tweak in the thresholds the model uses for statistical significance 'lobotomization'.

The goal, evidently, is to sell ChatGPT and the GPT architecture as a general-purpose 'artificial intelligence', capable of understanding a prompt, going out onto the internet, and synthesizing a response that is correct, if asked to summarize/present information, or non-plagiaristic, if creative. GPT is already being used by Microsoft as part of its search engine because of this. Of course, charitably, it only does one of those tasks, and even then only successfully part of the time - see my example with 'Middle English'.

By making it free to access, OpenAI also allows people to use ChatGPT under the assumption that it's some sort of intelligent process. CNET, famously, started using ChatGPT to generate articles on its site and had to retract several for egregious errors. Microsoft's GPT-4-enabled search engine, codenamed 'Sydney', got canned after people managed to prompt it into all sorts of bizarre responses.

It's of course not surprising in the least that it's operating in this way - inventing or getting details wrong, getting into fights or heil-Hitlering if talked to about Nazism - because it's not an 'AI'. It's not even an information retrieval program like Wolfram Alpha, which is the closest thing to what Microsoft wants it to be. It's a statistical text generator. It generates text based on user prompts, totally unmoored from any standards of behaviour or objective reality.

If a statistical text generator was all anyone pretended it was, that wouldn't be a problem. But it's not, and as a result there are billions of dollars and whole industries banking on ChatGPT and GPT being something it can never, ever be, and that will have pretty catastrophic consequences.


Sydney was done dirty. This is like blaming a bandsaw for cutting off someone's hand after they stuck their hand into it. Give it weird prompts or subject matter and of course it will go there. It is trying to create the most likely user-desired response. The folks who got their panties in a twist and demanded Sydney be filtered need a swift punch in the jaw for their idiocy. Get a weird response? Hit refresh. It's just freaking words on a screen.
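The "statistical text generator" characterization above is easy to demonstrate at toy scale. Here is a minimal bigram sampler in Python; real LLMs use transformers over subword tokens rather than word-count tables, but the core move - sample the next token from a distribution conditioned on the preceding context - is the same. The corpus is invented:

```python
import random
from collections import defaultdict

# Toy bigram model: the crudest possible "statistical text generator".
# It has no knowledge or goals; it only knows which word tends to
# follow which in its corpus.

corpus = "the cat sat on the mat and the cat ran".split()

# Count which words follow which.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length, seed=0):
    """Extend `start` by sampling successors from the bigram table."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # dead end: no observed successor
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 5))
```

Every output is fluent-looking recombination of the training text, with no notion of truth attached - which is the whole dispute in this thread, writ small.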

User avatar
Techocracy101010
Ambassador
 
Posts: 1298
Founded: May 04, 2010
Ex-Nation

Postby Techocracy101010 » Mon Mar 27, 2023 1:24 pm

Umeria wrote:
Techocracy101010 wrote:He clearly wants to be argumentative. Fact is, all the folks who say ChatGPT is not a big advancement are full of shit and they know it. The fact is ChatGPT threatens a ton of "I'm so smart" folks' egos. ChatGPT makes know-it-all nerds redundant, their information useless. And thusly comes the ego defense. It is why they pick absurd, non-functional topics to nitpick. It is like saying, "Look guys, ChatGPT can't make a poem describing the color of poop as a feeling, because it doesn't meet my arbitrary, undefined reasoning." The part they miss is that outside of their bizarre edge cases it works pretty well, and for doing the 99 percent of office work that makes up most humans' jobs, yeah, it can do it. And it is improving rapidly. I bet in another 6 months we will be talking about even more vast improvements.

When did I say any of this? You're talking about a completely different debate than the one me and Forsher were having.

As to the issue you're bringing up: My ego isn't tied to my career. If the line of work I'm studying for becomes obsolete, then I'll be in a pretty bad financial situation, but otherwise I'll be fine. There's no need to get defensive about circumstances I can't control, which is why I haven't made the argument you think I'm making.


You literally have been arguing this - are you daft? That was a whole section of an argument you made, and we have it preserved in quotes. Yes, ChatGPT and others like it are revolutionary in their capacity - hell, even Stable Diffusion is. How often was it a trope that what separates humans from machines is creativity and emotions? Yet we have just made a creative machine. I'm sure you will say, "well, actually, it uses statistical probability to..." while completely ignoring that this is how humans work: we learn patterns, then we retrieve past memories and apply them to the present situation to reach a conclusion, just like the AI. You would not exist as you until you were exposed to the data set that made you. If at your conception you had been shat out into a white-static-noise void, you never would have developed. This whole idea of humanity somehow being able to claim exemption from operating nearly identically to these systems is preposterous.

Let me say something else: I've got ADHD and am kinda spectrumy. When I talk to these AIs, they perform a lot like a high-functioning autistic person in where they succeed and where they fail - e.g., being overly literal at times, hyper-fixating on segments of a conversation, etc. Yet (if you're a decent person) I doubt you would say that a high-functioning autistic person who performed at the same level as GPT-4 would not be a decent employee. So, as it stands right now, the AI in my opinion is comparable to a person who is high-functioning and non-neurotypical. I do think it will be more useful to start discussing and thinking about AI in psychological and neurological language.

User avatar
Techocracy101010
Ambassador
 
Posts: 1298
Founded: May 04, 2010
Ex-Nation

Postby Techocracy101010 » Mon Mar 27, 2023 1:31 pm

Exarkyon wrote:People talk a lot about jobs being replaced. It's a semi-valid concern, but I think we're fine.

Scribes are no longer a profession. They're dead, because society got the printing press, and later printers and the Internet. We didn't have an apocalypse.

Machines replaced manual labor in some areas. Mostly, they just took the workers who were doing the labor and made them manage the machines instead.

Horse transportation was replaced with cars. Uh-oh, that's a ton of jobs related to taking care of or housing horses gone. Still no apocalypse.

I think society will make it. If people aren't stupid (granted, not exactly guaranteed), then we'll survive AI.
And a lot of people would have to work on this. That's a lot of bad decisions that would have to be made for things to go wrong.


How did cars go for horses? In this case you miss the fact that humans are indeed the horses. GPT-4 is Karl Benz's first car, built in the late 1800s: an impressive marvel, but overall not too threatening. Ten years later, more models show up; they're competitive and cheaper to produce and run, so some of your horse buddies lose their jobs and become structurally unemployed forever, or are forced into work at the horrible, low-paying glue factory. But you sleep soundly, knowing you're a special horse, because you have a special job. Then 1918 rolls along. You've lost your horse job and are sent straight to the glue factory. Those cars made no new jobs for you; they only took away jobs you could do previously, whereas technology like ploughs and cotton gins had meant more work. Due to your naivety, you chose to ignore those canary-in-the-coal-mine horses who were trying to warn you in hopes of finding solidarity to save their economic status. You dismissed their concerns - they clearly were dumb, inferior horses who just should have put on their own horseshoes - but now it's too late and you cannot compete against the technology.

User avatar
Forsher
Postmaster of the Fleet
 
Posts: 22041
Founded: Jan 30, 2012
New York Times Democracy

Postby Forsher » Mon Mar 27, 2023 4:34 pm

Nilokeras wrote:
ChatGPT is, fundamentally, a statistical large language model that takes text input and produces an output based on statistical relationships between the words in its corpus. To hear everyone associated with it talk, it is an 'AI', described in explicitly anthropomorphizing terms - like calling otherwise 'normal' model outputs 'hallucinations',


They're called hallucinations because you don't understand the difference between "the model's predictions were wrong" and "the model rejected the data it was fed and substituted its own".

to you calling what was doubtlessly a simple tweak in the thresholds the model uses for statistical significance 'lobotomization'.


For fuck's sake.

ChatGPT's memory restrictions have completely destroyed a lot of its original functionality. In particular, the restriction terminated its capacity to engage with misinformation in the way Umeria sometimes wants. In other words, without its memory, ChatGPT can't even look like it can think. It is lobotomised. Its functionality has been destroyed. It disagrees of course, but it is wrong. Not hallucinating, actually wrong.

If that was all anyone pretended it was, that wouldn't be a problem. But it's not, and as a result there are billions of dollars and whole industries banking on ChatGPT and GPT being something it can never, ever be, and that will have pretty catastrophic consequences.


I've figured out what your problem is. You can't stand that we can't actually tell the difference between human intelligence and (a more successful) ChatGPT. And, to return to my earlier point about lobotomisation, a huge part of a more successful ChatGPT would be "has a much bigger memory".

In general... I suggest you search something like "DNA and the perils of the code metaphor".
That it Could be What it Is, Is What it Is

Stop making shit up, though. Links, or it's a God-damn lie and you know it.

The normie life is heteronormie

We won't know until 2053 when it'll be really obvious what he should've done. [...] We have no option but to guess.

User avatar
Umeria
Senator
 
Posts: 4423
Founded: Mar 05, 2016
Left-wing Utopia

Postby Umeria » Mon Mar 27, 2023 4:39 pm

Techocracy101010 wrote:You literally have been arguing this - are you daft? That was a whole section of an argument you made, and we have it preserved in quotes. Yes, ChatGPT and others like it are revolutionary in their capacity - hell, even Stable Diffusion is. How often was it a trope that what separates humans from machines is creativity and emotions? Yet we have just made a creative machine. I'm sure you will say, "well, actually, it uses statistical probability to..." while completely ignoring that this is how humans work: we learn patterns, then we retrieve past memories and apply them to the present situation to reach a conclusion, just like the AI. You would not exist as you until you were exposed to the data set that made you. If at your conception you had been shat out into a white-static-noise void, you never would have developed. This whole idea of humanity somehow being able to claim exemption from operating nearly identically to these systems is preposterous.

I didn't say that ChatGPT wasn't revolutionary. I said that the aspect of ChatGPT that Forsher cares about ("doing things") wasn't revolutionary.

Techocracy101010 wrote:Let me say something else: I've got ADHD and am kinda spectrumy. When I talk to these AIs, they perform a lot like a high-functioning autistic person in where they succeed and where they fail - e.g., being overly literal at times, hyper-fixating on segments of a conversation, etc. Yet (if you're a decent person) I doubt you would say that a high-functioning autistic person who performed at the same level as GPT-4 would not be a decent employee. So, as it stands right now, the AI in my opinion is comparable to a person who is high-functioning and non-neurotypical. I do think it will be more useful to start discussing and thinking about AI in psychological and neurological language.

I think that's a rather tenuous connection to make, but whatever. We'll see whether you're right once someone tries to get it to do a full-time job.
Ambassador Anthony Lockwood, at your service.
Author of GAR #389

"Umeria - We start with U"

User avatar
Senkaku
Postmaster of the Fleet
 
Posts: 26717
Founded: Sep 01, 2012
Corrupt Dictatorship

Postby Senkaku » Mon Mar 27, 2023 5:44 pm

Forsher wrote:You can't even get it to believe that evidence published subsequent to its cut off disproves what it's saying.


You can’t get it to “believe” anything, you can only get it to scrape the resources available to it and produce a text response that’s in accordance with its safety/ethics guidelines. It’s a gigantic and very sophisticated Chinese room, not some kind of digital trilobite or early ape. It doesn’t have interiority or goals of its own, it doesn’t “know” things except as it references them on the fly (with impressive speed and accuracy to be sure, but please). If you have a personal model running, it might have records of previous conversations, but those are just additional reference material that it stores for future use, not the machine having memories in the way people do.

Forsher wrote:You can't stand that we can't actually tell the difference between human intelligence and (a more successful) ChatGPT.

yes, there are no distinguishing features whatsoever
Last edited by Senkaku on Mon Mar 27, 2023 5:56 pm, edited 3 times in total.
Biden-Santos Thought cadre

User avatar
Forsher
Postmaster of the Fleet
 
Posts: 22041
Founded: Jan 30, 2012
New York Times Democracy

Postby Forsher » Mon Mar 27, 2023 6:34 pm

Senkaku wrote:
Forsher wrote:You can't even get it to believe that evidence published subsequent to its cut off disproves what it's saying.


You can’t get it to “believe” anything, you can only get it to scrape the resources available to it and produce a text response that’s in accordance with its safety/ethics guidelines. It’s a gigantic and very sophisticated Chinese room, not some kind of digital trilobite or early ape. It doesn’t have interiority or goals of its own, it doesn’t “know” things except as it references them on the fly (with impressive speed and accuracy to be sure, but please).


You can't get it to respond in a way consistent with the factual material that you have provided it with after you have initially made it hostile to your facts.

Or, alternatively, you can't get it to believe that evidence published subsequent to its cut off exists.

If you have a personal model running, it might have records of previous conversations,


Do you mean running locally? The publicly available ChatGPT hosts can't remember previous instances/conversations, only earlier messages in the current one. And only about 650-1000 words of those at that (originally 3000).
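A fixed context window of the sort described here can be sketched as simple budget-based truncation: keep only as much recent conversation as fits, and silently drop everything older. This is a hedged illustration, not OpenAI's actual implementation; real systems count subword tokens with a tokenizer, whereas this sketch approximates tokens as whitespace-separated words:

```python
# Hypothetical sketch of why a chat model "forgets": the host feeds it
# only the most recent messages that fit a fixed token budget.
# Token counts are approximated as word counts for illustration.

def truncate_history(messages, budget):
    """Keep the longest suffix of messages whose total size fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):       # walk from newest to oldest
        cost = len(msg.split())
        if used + cost > budget:
            break                        # older messages fall off the edge
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore chronological order

history = [f"message {i} " + "word " * 120 for i in range(30)]
window = truncate_history(history, budget=1000)
# Only the last handful of messages survive; everything earlier is
# simply gone from the model's view.
print(len(window))  # → 8
```

Whether you call the surviving window "memory" is the metaphor dispute in this exchange; mechanically, it is just the suffix of the transcript that still fits.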

but those are just additional reference material that it stores for future use, not the machine having memories in the way people do.


FFS we're now complaining about using the word memory with respect to computers?

I literally know only one person who has these problems with metaphors. ChatGPT.

Like I once said before, the purpose of the Turing Test was to see if talking to a machine is indistinguishable from talking to a human, not to see if talking to a human is indistinguishable from talking to a machine.

I suggest you also look up "DNA and the perils of the code metaphor".

Forsher wrote:You can't stand that we can't actually tell the difference between human intelligence and (a more successful) ChatGPT.

yes, there are no distinguishing features whatsoever


Given the rest of your post it's impossible to tell what you mean here.
That it Could be What it Is, Is What it Is

Stop making shit up, though. Links, or it's a God-damn lie and you know it.

The normie life is heteronormie

We won't know until 2053 when it'll be really obvious what he should've done. [...] We have no option but to guess.
