Umeria wrote:I didn't say that ChatGPT wasn't revolutionary. I said that the aspect of ChatGPT that Forsher cares about ("doing things") wasn't revolutionary.
That is not only not what I said, but not what you said, either:
Umeria wrote:Your point is that... it does things you don't expect? I guess that can be thought of as a positive quality, but it's not exactly revolutionary - the 1991 classic Civilization had its AI of Gandhi going nuclear on the player.
And personally I never found the existence of such glitches to be all that "fun" in the way you're describing. Weird stuff happening without any underlying meaning to it just feels boring.
This is in reply to a comment comparing ChatGPT to the genie from Aladdin, for anyone confused about the extent of the misrepresentation of what I said.
ChatGPT isn't like the (publicly available, anyway) chatbots that came before it in any sense of the term. It "understands" what's being said enormously better. It's not essentially cutting and pasting previous comments. It produces coherent and usually cogent output. The writing usually isn't very good but, like, why do we care if it's good at writing? It doesn't break rules the way older chatbots do... the problem is that its writing isn't good, not that it's actively bad.
It's not going to pass a Turing Test even if you gave it enough memory, but it had enough memory at launch that it could "appear" to learn things. Actually, its ability to remember context and apply it is one of the reasons it would fail the Turing Test: you can't change ChatGPT's mind every time, but you can change it some of the time. Real people? p=0. Talking to it isn't, I am saying, like talking to a brick wall, whereas talking to real human beings on the internet almost always is. And it's obviously not going to pass better tests of intelligence and/or consciousness either.
The only argument for saying ChatGPT isn't revolutionary is to point to things like Disco Diffusion and similar models that take natural-language input and output art. But ChatGPT is better, because to get those to really work you need to do a lot of prompt engineering... and you quickly end up with prompts that don't look like natural language at all. ChatGPT requires some of this too, but nowhere near to the same extent, and, also, you get writing back, not art. That's actually quite an important difference.
It is enormously telling that Umeria's natural instinct for "something that is like ChatGPT but which predated it" was a Ouija board, i.e. a game in which a human intelligence convinces itself that the words aren't being created by another human intelligence.
If Nilokeras is annoyed that a perfect ChatGPT (the current version is far from perfect) wouldn't be distinguishable (in output) from a human intelligence, Umeria seems annoyed that other people have more fun with ChatGPT than Umeria does. And I really can't tell why we should care about that.