Eukaryotic Cells wrote:This whole ethics in AI discussion reminds me of this incident:
https://en.m.wikipedia.org/wiki/Tay_(bot)
A Microsoft-developed chatbot learned to say racist things (Holocaust denial, for example) from users. Microsoft shut it down after only 16 hours.
It’s interesting that in China it was anti-PRC remarks that got a chatbot taken offline.
In general, chatbots make mistakes all the time. It is only when one really offends the rulers or the social consensus that it gets noticed.