The Wall Street Journal (WSJ) published a new report on Sunday claiming that Facebook employees working on mitigating hate speech and violent content believe the company is unable to do so properly.

These employees shared internal Facebook documents showing that two years ago, Facebook reduced human review of hate speech complaints and made other adjustments to cut the volume of complaints. According to WSJ, this helped make Facebook’s artificial intelligence appear better at enforcing company rules than it actually is.

The documents also revealed that Facebook’s automated systems were removing posts that accounted for only 3 to 5% of the total hate speech on the platform, and less than 1% of the posts that violated the company’s policy on violence and incitement.

Additionally, the documents described alarming pieces of content that easily evaded Facebook’s automated detection systems, including videos of car crashes with graphic injuries and violent threats against transgender children.

Facebook’s vice president of integrity, Guy Rosen, has now responded with a blog post denying the allegations. He said that the prevalence of hate speech on the platform had dropped by 50% over the past three years and called WSJ’s claims false:

“We don’t want to see hate on our platform, nor do our users or advertisers, and we are transparent about our work to remove it. What these documents demonstrate is that our integrity work is a multi-year journey. While we will never be perfect, our teams continually work to develop our systems, identify issues and build solutions.”