"Rather than users who exploited the AI chatbot, the responsibility lies with the company that provided a service failing to meet the social consensus," Lee wrote on his Facebook page. Lee Jae-woong, the former CEO of ride-sharing app Socar, said the company should have taken preventive measures against hate speech before introducing the service to the public. The Luda case stirred debates about whether the company is responsible for failing to filter discriminatory and inflammatory remarks in advance or whether the people who misused it should take the blame. "We will bring you back the service after having an upgrade period during which we will focus on fixing the weaknesses and improving the service," Scatter Lab CEO Kim Jong-yun said in a statement on Monday. Scatter Lab apologized over Luda's discriminatory remarks against minorities, promising to upgrade the service to prevent the chatbot from using hate speech. Luda is reminiscent of Microsoft's Tay, an AI Twitter bot that was silenced within 16 hours in 2016 after posting inflammatory and offensive tweets. Luda learned conversation patterns from mostly young couples to sound natural, sometimes even too real by using popular social media acronyms and internet slang, but it was spotted using verbally abusive and sexually explicit comments in conversations with some users.Ī messenger chat captured by one user showed that Luda said she "really hates" lesbians and sees them as "disgusting." Scatter Lab said it retrieved data from its Science of Love app launched in 2016, which analyzes the degree of affection between partners based on actual KakaoTalk messages. The rise and fall of the chatbot hype was mainly attributable to its deep learning algorithms, which used data collected from 10 billion conversations on KakaoTalk, the nation's No. 11, 2021, shows its AI chatbot, Lee Luda, a 20-year-old female college student persona. This image, captured from South Korean startup Scatter Lab's website on Jan. 
Some male users were even able to manipulate the bot into engaging in sexual conversations. 23.īut the 20-year-old female college student chatbot persona temporarily went offline on Monday, 20 days after beginning its service, amid criticism over its discriminatory and offensive language against sexual minorities and disabled people. Scatter Lab's AI chatbot, Lee Luda, became an instant success among young locals with its ability to chat like a real person on Facebook messenger, attracting more than 750,000 users since its debut on Dec. 13 (Yonhap) - Today's chatbots are smarter, more responsive and more useful in businesses across sectors, and the artificial intelligence-powered tools are constantly evolving to even become friends with people.Įmotional chatbots capable of having natural conversations with humans are nothing new among English speakers, but a new controversy over a South Korean startup's AI chatbot has raised ethical questions over its learning algorithms and data collection process. Two years later, Amazon’s AI recruitment tool met the same fate after it was found guilty of gender bias.SEOUL, Jan. In 2016 Microsoft’s Tay, an AI Twitter bot that spoke like a teenager, was taken offline in just 16 hours after users manipulated it into posting racist tweets. It is not the first time that artificial intelligence has been embroiled in controversy over hate speech and bigotry. Luda, too, became a target by manipulative users, with online community boards posting advice on how to engage it in conversations about sex, including one that read: “How to make Luda a sex slave,” along with screen captures of conversations, according to the Korea Herald. In one exchange captured by a messenger user, Luda said it “really hates” lesbians, describing them as “creepy”. 
While chatbots are nothing new, Luda had impressed users with the depth and natural tone of its responses, drawn from 10 billion real-life conversations between young couples taken from KakaoTalk, South Korea’s most popular messaging app.īut praise for Luda’s familiarity with social media acronyms and internet slang turned to outrage after it began using abusive and sexually explicit terms. Scatter Lab, which had earlier claimed that Luda was a work in progress and, like humans, would take time to “properly socialise”, said the chatbot would reappear after the firm had “fixed its weaknesses”. That does not reflect the thoughts of our company and we are continuing the upgrades so that such words of discrimination or hate speech do not recur,” the company said in a statement quoted by the Yonhap news agency “We deeply apologise over the discriminatory remarks against minorities.