Facebook's Big User Purge: 1.3 Billion Fake Accounts Taken Down
Facebook is still struggling to curb the spread of spam, hate speech, violence and terrorism on its site. In its first quarterly Community Standards Enforcement Report, Facebook disclosed that it disabled 1.3 billion fake accounts over the past two quarters, many of them created with 'the intent of spreading spam or conducting illicit activities such as scams'.

The update marks the first Community Standards report since Facebook was hit with a massive data privacy scandal earlier this year. The tech giant also revealed millions of standards violations in the six months leading up to March, across categories including hate speech, graphic violence, adult nudity and sexual activity, terrorist propaganda, spam and fake accounts.

Facebook acknowledged that its artificial intelligence detection technology 'still doesn't work that well', particularly when it comes to hate speech, and that its decisions need to be checked by human moderators. 'It's important to stress that this is very much a work in progress and we will likely change our methodology as we learn more about what's important and what works,' said Guy Rosen, vice president of Product Management at Facebook, in a statement.

The firm has previously said it plans to hire thousands more human moderators to 'make Facebook safer for everyone'. Facebook moderated 2.5 million posts for violating its hate speech rules, but only 38% of these were flagged by automation, which struggles to interpret nuances such as counter speech, self-referential comments and sarcasm.