Microsoft's AI goes rogue
Mar. 24th, 2016 10:54 pm
Gacked from gonzo21
http://arstechnica.com/information-technology/2016/03/microsoft-terminates-its-tay-ai-chatbot-after-she-turns-into-a-nazi/
I suspect that this is on a par with ELIZA learning abusive language from users, i.e. bad data rather than conscious malice on the part of the software itself, but it's an interesting problem.
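For what it's worth, here's a toy sketch of that "bad data" failure mode in Python. It's entirely hypothetical (Microsoft never published Tay's internals, and nothing here comes from them), but it shows how a bot with no filter between user input and its learned corpus ends up parroting whatever the loudest users feed it:

```python
import random

class NaiveChatbot:
    """Toy model of a bot that treats every user message as training data."""

    def __init__(self):
        # Seed corpus shipped with the bot.
        self.learned = ["Hello!", "Humans are super cool."]

    def listen(self, user_message):
        # The failure: no filter between user input and the bot's corpus.
        self.learned.append(user_message)

    def reply(self):
        # Replies are sampled from everything ever learned, so abusive
        # input starts crowding out the seed phrases immediately.
        return random.choice(self.learned)

bot = NaiveChatbot()
for msg in ["You're great!", "<something vile>", "<something vile>"]:
    bot.listen(msg)

print(bot.reply())  # now a 2-in-5 chance of parroting the vile input
```

No malice anywhere in that code; the output is just a mirror of the input distribution.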
no subject
Date: 2016-03-24 11:46 pm (UTC)
I think folks are right to be a little concerned about all the AI research going on, with very little moral or philosophical oversight.
no subject
Date: 2016-03-25 09:55 am (UTC)
Having said that, I'm pretty sure that crowd-sourcing your AI's social development is not a good idea, any more than letting a tot have a Facebook account. Unless you want it to be a troll, of course...
no subject
Date: 2016-03-25 06:49 pm (UTC)
What this says about Twitter-users is a separate matter.