
The Artificial Intelligence Thread

PostPosted: Thu Mar 30, 2023 12:20 am
by Emotional Support Crocodile
https://futureoflife.org/open-letter/pa ... periments/

https://www.bbc.co.uk/news/technology-65110030
https://www.reuters.com/technology/musk ... 023-03-29/

Leading technology experts, and Elon Musk, have called for training of powerful AI systems to be suspended while their potential as an existential threat to humanity is assessed.

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.


My personal take is that yes, we do need stronger laws covering AI in place now, rather than after the fact. Law-making tends to be painfully slow and constantly playing catch-up. However, I cannot see the military paying any attention to this proposed suspension, and to my mind they are the most likely to create a dangerous AI. I also cannot see countries like China suspending their AI programmes either. I don't think we can stop work on AI; we just have to make sure it is done with as much openness as possible, i.e. in an academic framework rather than a military or commercial one, and that we have laws to make sure it benefits everyone, not just a small group of people.

PostPosted: Thu Mar 30, 2023 12:37 am
by Ahl al-Ummah
With all due respect, this seems extremely overblown.

Firstly, AI is not "intelligent." There is no independent thought or reasoning that an AI is exercising when it gives its responses. ChatGPT is a language model; it excels at regurgitating trained data back in a convincing manner as a response to a question. It does not actually have any knowledge or comprehension of what it is saying.

With that in mind, the idea that it's somehow rapidly on track to "outsmart" humans or lead to "losing control of our society" is ridiculous, especially when it has no real "smarts" to begin with. And beyond the fact that it's not intelligent in any real sense, the quality of its responses is also limited to the data which humans have already provided for it.

And while I understand the desire for regulation, this proposed cutoff of "more powerful than GPT-4" is completely arbitrary and makes no sense at all. They are worried about AI "flooding our information channels with propaganda and untruth", but that's already more than possible to do with current publicly available AIs, so I'm not sure what this is really supposed to achieve.

PostPosted: Thu Mar 30, 2023 1:15 am
by Nilokeras
"Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones?"

Oh now the tech magnates are concerned about the adverse effects of capital-driven technological development, huh, now that they talked to a chatbot and convinced themselves they're talking to an AI like a kid seeing a shadow in the closet, and after they had already driven whole industries into the ground and created a toxic wasteland of a social media landscape. Truly the perfect indictment of the idea that capitalism is a meritocracy.

PostPosted: Thu Mar 30, 2023 1:30 am
by Bombadil
Ahl al-Ummah wrote:With all due respect, this seems extremely overblown.

Firstly, AI is not "intelligent." There is no independent thought or reasoning that an AI is exercising when it gives its responses. ChatGPT is a language model; it excels at regurgitating trained data back in a convincing manner as a response to a question. It does not actually have any knowledge or comprehension of what it is saying.

With that in mind, the idea that it's somehow rapidly on track to "outsmart" humans or lead to "losing control of our society" is ridiculous, especially when it has no real "smarts" to begin with. And beyond the fact that it's not intelligent in any real sense, the quality of its responses is also limited to the data which humans have already provided for it.

And while I understand the desire for regulation, this proposed cutoff of "more powerful than GPT-4" is completely arbitrary and makes no sense at all. They are worried about AI "flooding our information channels with propaganda and untruth", but that's already more than possible to do with current publicly available AIs, so I'm not sure what this is really supposed to achieve.


Doesn’t need to have intent or be overly intelligent. The typical example: suppose you program a machine to make paperclips and to solve, by itself, any problems impeding that task. At some point it decides humans are a huge waste of resources, and so it solves that problem by wiping out humanity.

The problem is not AI waking up and deciding to be evil; it’s a problem of not thinking through all the parameters of whatever we’re tasking it to do.

PostPosted: Thu Mar 30, 2023 1:31 am
by The Archregimancy
AI legislation under current circumstances strikes me as the modern-day equivalent of late 19th-century red flag laws for those newfangled self-propelled vehicles; and will likely prove as effective.

Those were the laws - enacted both in the UK and US - that restricted the maximum speed of self-propelled vehicles (originally directed towards trains, but explicitly held to include automobiles once the latter began to emerge) and famously required someone to walk 60 yards / 55 metres in front of the vehicle carrying a red flag or lantern to warn people of their impending arrival, and for that individual to assist anyone inconvenienced by that impending arrival.

Now, red flag laws also introduced concepts such as vehicle registration, number/licence plates, maximum weight on bridges, functioning vehicle lights, and speed limits, so they had far more of a long-term practical influence than modern mirth over their best-known clauses implies; they weren't wholly useless. But they were also riddled with 'safety' features that were very quickly rendered utterly archaic and impractical because lawmakers who had grown up in very different circumstances failed to grasp how quickly technology would advance and be adopted.

PostPosted: Thu Mar 30, 2023 3:37 am
by Ahl al-Ummah
Bombadil wrote:
Ahl al-Ummah wrote:With all due respect, this seems extremely overblown.

Firstly, AI is not "intelligent." There is no independent thought or reasoning that an AI is exercising when it gives its responses. ChatGPT is a language model; it excels at regurgitating trained data back in a convincing manner as a response to a question. It does not actually have any knowledge or comprehension of what it is saying.

With that in mind, the idea that it's somehow rapidly on track to "outsmart" humans or lead to "losing control of our society" is ridiculous, especially when it has no real "smarts" to begin with. And beyond the fact that it's not intelligent in any real sense, the quality of its responses is also limited to the data which humans have already provided for it.

And while I understand the desire for regulation, this proposed cutoff of "more powerful than GPT-4" is completely arbitrary and makes no sense at all. They are worried about AI "flooding our information channels with propaganda and untruth", but that's already more than possible to do with current publicly available AIs, so I'm not sure what this is really supposed to achieve.


Doesn’t need to have intent or be overly intelligent. The typical example: suppose you program a machine to make paperclips and to solve, by itself, any problems impeding that task. At some point it decides humans are a huge waste of resources, and so it solves that problem by wiping out humanity.

The problem is not AI waking up and deciding to be evil; it’s a problem of not thinking through all the parameters of whatever we’re tasking it to do.

Heh, I think we might've watched the same YouTube video on this subject. It's been a while, but I think it went something like this: an AI called "Terry" connected itself to the internet, and ended up gassing people to death in the pursuit of making better paperclips (or whatever it was; I forget the exact item it was making).

While it's certainly an interesting possibility to consider, I don't think it's one we have to really worry about anytime soon. More worrying to me is the thought of how it might be misapplied to certain situations where it's not ready to be used, and the implications of that.

Although now that you mention this scenario, I feel like I vaguely remember reading some article about an AI chatbot that became borderline emotionally manipulative in order to maximize engagement with a user. I'll post the link to the article if I find it again.

PostPosted: Thu Mar 30, 2023 3:45 am
by Ameriganastan
Thank you! Like Goddamn, did no one see Terminator? One computer system gets too smart and boom, we're history. About time these morons in tech realized the danger.

Why can't Elon be this smart regarding anything to do with Twitter?

PostPosted: Thu Mar 30, 2023 3:57 am
by Emotional Support Crocodile
I think the dangers are rather more prosaic than a rogue AI bent on turning the entire universe into paperclips.

Things like ubiquitous facial recognition and real-time tracking of cars and phones; job losses with a delay before new jobs come into being; profits from AI going to only a few while everybody else loses their jobs.

We need to get a few laws in place setting out rules for its use. They will undoubtedly be flawed in many ways, but they will also get some things right, as Arch's post shows; we have to make the legislative leap beforehand. I don't really want a 'wild West' phase with AI, like we had with the internet.

PostPosted: Thu Mar 30, 2023 4:13 am
by Ameriganastan
Emotional Support Crocodile wrote:I think the dangers are rather more prosaic than a rogue AI bent on turning the entire universe into paperclips.

Things like ubiquitous facial recognition and real-time tracking of cars and phones; job losses with a delay before new jobs come into being; profits from AI going to only a few while everybody else loses their jobs.

We need to get a few laws in place setting out rules for its use. They will undoubtedly be flawed in many ways, but they will also get some things right, as Arch's post shows; we have to make the legislative leap beforehand. I don't really want a 'wild West' phase with AI, like we had with the internet.

Well, while you worry about little things like that, I'm practicing for Bugout Day. When these damn robots flip out, they ain't catching me slipping. Been preparing for the uprising for years. My time is now.

PostPosted: Thu Mar 30, 2023 4:31 am
by Haganham
The whole business of regulating AI is bullshit. The most dangerous AI will be made for the people writing the legislation, and it will be dangerous to humanity not because it turns on its masters, but because it doesn't.

PostPosted: Thu Mar 30, 2023 6:27 am
by Floofybit
Remove AI.

PostPosted: Thu Mar 30, 2023 7:09 am
by Indecent Anime Empire
I've posted this once before https://youtu.be/pYXy-A4siMw but haven't specified why. It is a basic intro into what AI is and what AI safety is.

Much of AI is not ready for real-world usage. https://www.youtube.com/watch?v=eaYIU6YXr3w is a link explaining why we can't just teach AI like kids and grow them into proper ethical people-loving robots!

"The other "Killer Robot Arms Race" Elon Musk should worry about", posted 5 years ago, lmao it's like it was foretold: https://www.youtube.com/watch?v=7FCEiCnHcbo

All links are from the channel: https://www.youtube.com/@RobertMilesAI/videos

Videos about Artificial Intelligence Safety Research, for everyone.

AI is leaping forward right now, it's only a matter of time before we develop true Artificial General Intelligence, and there are a lot of different ways that this could go badly wrong for us. Putting aside the science fiction, this channel is about AI Safety research - humanity's best attempt to foresee the problems AI might pose and work out ways to ensure that our AI developments are safe and beneficial.

PostPosted: Thu Mar 30, 2023 10:34 am
by Ifreann
Given that "AI" doesn't seem to do much except produce waffle that approximately resembles human speech with no actual underlying intelligence, I can understand why tech bros and politicians are suddenly very concerned about it. Meanwhile, we already have non-human entities with distributed intelligence who are pursuing their own goals without any regard for human life, even unto the destruction of human civilisation as we know it. But doing something about corporations would be, like, socialism.

PostPosted: Thu Mar 30, 2023 11:15 am
by The Black Forrest
Haganham wrote:The whole business of regulating AI is bullshit. The most dangerous AI will be made for the people writing the legislation, and it will be dangerous to humanity not because it turns on its masters, but because it doesn't.


No, I disagree.

You see, back in the '80s there was actually an effort to use AI. We had companies, etc., dedicated to it. The ideas were there. The tech wasn't. We are talking about 8- and 16-bit computing. Back then it was the "discovery" that was important.

Now it's guided by the "free market", which tends to cut corners to increase profit. Also, "mistakes" are often made if there is a big payout.

This needs to be regulated on certain issues.

PostPosted: Thu Mar 30, 2023 2:18 pm
by The Great Nevada Overlord
I do agree with the sentiment that AI will replace people's jobs.

Mackers is already trying to phase out cashiers for AI assistants.

A Chat-GPT V6 could easily type up reports when given complete information.
Walmart Corporation (NYSE: WMT) is a multinational corporation that operates retail stores and wholesale clubs across the globe, and has a history of financial success. In this report, we will cover Walmart's financial gains and losses over the last few years, with emphasis on their performance during the COVID-19 pandemic.


Financial gains:

Walmart has had consistently strong financial performance over the last few decades, with a diverse range of product offerings and services to meet various customer needs. Here are some of the key financial metrics that can help us assess Walmart's financial gains:
Revenue: Walmart's yearly revenue has grown consistently over the last decade, from $482.1 billion in FY 2011 to $572.8 billion in FY 2022, representing a 2.6% CAGR. This growth is due to the expansion of Walmart's physical retail presence, its e-commerce operations, and the introduction of new services such as Walmart's Online Grocery Pickup and Delivery services.

Gross Profit: Gross profit, which represents the difference between the cost of goods sold (COGS) and revenue, is a key metric that measures a company's profitability. Walmart's gross profit has been steady over the years, with a gross margin of around 24%. This indicates that Walmart is able to efficiently manage its inventory and maintain a healthy balance between revenue and costs.

Operating Income: Operating Income is the revenue after subtracting the expenses related to the business's operations, such as labor costs, selling, general, and administrative (SG&A) expenses, and depreciation. Walmart's operational efficiency has been impressive, as its operating income has grown consistently and has consistently maintained a high operating margin, with an average operating margin of 6% over the last decade.

Net Income: Net income is the company's earnings after subtracting taxes and other expenses. Walmart has shown consistent and strong growth in net income over the years, from $14.7 billion in FY 2011 to $14.8 billion in FY 2022. This growth in net income is a result of Walmart's efficient management of costs and operational efficiency.


AI could make it that students, who by all means are not doing well in school, be able to pass through grades without issue.
Me: I need the solution to this quadratic equation. X²+6=55

The AI: Certainly! The solution to the quadratic equation x^2 + 6 = 55 is:

x = -3 or x = 7

To solve this equation, you can use the "quadratic formula," which is a common method for solving quadratic equations of the form ax^2 + bx + c = 0, where a, b, and c are numbers and x is the variable you are calculating. The quadratic formula is as follows:
x = (-b ± sqrt(b^2 - 4ac)) ÷ (2a)

In this case, we have a = 1, b = 0, and c = 6. Plugging these values into the quadratic formula, we get:
x = (-0 ± sqrt(0 - 4(1)(6))) ÷ (2(1))

Simplifying the equation, we get:
x = (-0 ± sqrt(-24)) ÷ 2

Since the square root of a negative number is an imaginary number, this equation has no real solutions. Therefore, we can conclude that the quadratic equation x^2 + 6 = 55 has no solutions in the real number line.

Note that there are imaginary solutions to this equation, which can be expressed using the imaginary number i. The solution to the equation using imaginary numbers is:
x = -3i or x = 7i

However, since the equation in question is dealing with real numbers, the imaginary solutions are not relevant here.

PostPosted: Thu Mar 30, 2023 2:22 pm
by Ifreann
The Great Nevada Overlord wrote:AI could make it that students, who by all means are not doing well in school, be able to pass through grades without issue.
Me: I need the solution to this quadratic equation. X²+6=55

The AI: Certainly! The solution to the quadratic equation x^2 + 6 = 55 is:

x = -3 or x = 7

To solve this equation, you can use the "quadratic formula," which is a common method for solving quadratic equations of the form ax^2 + bx + c = 0, where a, b, and c are numbers and x is the variable you are calculating. The quadratic formula is as follows:
x = (-b ± sqrt(b^2 - 4ac)) ÷ (2a)

In this case, we have a = 1, b = 0, and c = 6. Plugging these values into the quadratic formula, we get:
x = (-0 ± sqrt(0 - 4(1)(6))) ÷ (2(1))

Simplifying the equation, we get:
x = (-0 ± sqrt(-24)) ÷ 2

Since the square root of a negative number is an imaginary number, this equation has no real solutions. Therefore, we can conclude that the quadratic equation x^2 + 6 = 55 has no solutions in the real number line.

Note that there are imaginary solutions to this equation, which can be expressed using the imaginary number i. The solution to the equation using imaginary numbers is:
x = -3i or x = 7i

However, since the equation in question is dealing with real numbers, the imaginary solutions are not relevant here.

It's been a while since I solved a quadratic equation, but this solution is wrong.

PostPosted: Thu Mar 30, 2023 2:27 pm
by Mtwara
AI is a lot of different technologies.

Some, like ChatGPT, do not bother me at all, and woe betide anybody that tries to use them. Others, like attempts to profile people or identify military targets, are worrisome, but so long as there is competition between nations their development is probably inevitable.

PostPosted: Thu Mar 30, 2023 2:29 pm
by Senkaku
Ameriganastan wrote:Thank you! Like Goddamn, did no one see Terminator? One computer system gets too smart and boom, we're history.

1984’s The Terminator is not a nature documentary

PostPosted: Thu Mar 30, 2023 2:30 pm
by The 1000 barbarians
Ifreann wrote:It's been a while since I solved a quadratic equation, but this solution is wrong.

nope
7^2 + 6 is in fact 55

PostPosted: Thu Mar 30, 2023 2:36 pm
by Mtwara
If you aren't sure ChatGPT has got maths right, you can get Wolfram Alpha to do the job for you to a much higher standard anyway.

https://www.wolframalpha.com/input?i=so ... 2%2B6%3D55

PostPosted: Thu Mar 30, 2023 2:44 pm
by The United Penguin Commonwealth
math is not what language models are for. actually, it’s what they’re worst at. we already invented something to do math. it’s called a computer. y’know, what you’re running the AI on?

PostPosted: Thu Mar 30, 2023 2:49 pm
by Mtwara
The United Penguin Commonwealth wrote:math is not what language models are for. actually, it’s what they’re worst at. we already invented something to do math. it’s called a computer. y’know, what you’re running the AI on?


The computer just computes. Recognising and manipulating algebra or other abstract mathematical stuff is a bit more than just computing, really.

PostPosted: Thu Mar 30, 2023 2:51 pm
by The United Penguin Commonwealth
Mtwara wrote:The computer just computes.


at the CPU level, computing is just a bunch of simple operations.

Recognising and manipulating algebra or other abstract mathematical stuff is a bit more than just computing, really.


fair enough, but judging language models on math capability is more than a bit silly. much simpler programs are capable of applying math tactics.

PostPosted: Thu Mar 30, 2023 3:05 pm
by Saiwana
I'd be interested in an AI that could read my posts before I go through with publishing them, underlining or warning me ahead of time about what breaks which rule and why, or giving a risk assessment of how likely a given choice of words is to slide in terms of consequences. I don't think there would be any downsides; it'd make my time here easier, and it would mean the Moderation team doesn't have as much work if there is a tool to help us police ourselves.

There would be the option to ignore the AI assistance/warnings, of course, but at least then no one would get in trouble who wasn't truly asking for it. I think there is real potential in making AI tools that can take the rule sets of different platforms/forums as input and give you guidance on how to proceed before pulling the trigger on the final post button.

If you aren't the most polite person, there is still a way to be you within tighter constraints; it is just a matter of being able to minimize your mistakes and adapt as needed.

PostPosted: Thu Mar 30, 2023 3:37 pm
by Eahland
The 1000 barbarians wrote:
Ifreann wrote:It's been a while since I solved a quadratic equation, but this solution is wrong.

nope
7^2 + 6 is in fact 55

It does mention the right answer (7), or one of the right answers, anyway (the other is -7). It pairs it with a wrong answer (-3), though. And then it goes off on a bizarre tangent that leads it to conclude that the answers it has just given (including the one that's correct) are in fact wrong, and that there are no real answers, and it then turns its original answers imaginary, none of which is in any way correct. So much more wrong than right.
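
The chatbot's slip, for what it's worth, is mechanical: it plugged c = 6 straight into the formula instead of first rewriting x² + 6 = 55 in the standard form x² − 49 = 0. Here's a quick Python sketch of the correct working (just the textbook quadratic formula, nothing tied to any particular chatbot):

```python
import math

# Solve x^2 + 6 = 55. In standard form ax^2 + bx + c = 0 this is
# x^2 + 0x - 49 = 0 -- the 55 must be moved to the left side first,
# which is the step the chatbot skipped when it used c = 6.
a, b, c = 1, 0, -49

# Discriminant: b^2 - 4ac = 0 - 4*1*(-49) = 196, which is positive,
# so there are two real roots (not zero, as the chatbot concluded).
disc = b * b - 4 * a * c

roots = ((-b + math.sqrt(disc)) / (2 * a),
         (-b - math.sqrt(disc)) / (2 * a))
print(roots)  # (7.0, -7.0)
```

With the constant moved across, the discriminant comes out positive and the two real roots are 7 and -7, matching the analysis above.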