Artificial intelligence is used to track down hate speech


Social media companies are now using artificial intelligence to detect hate speech online.

Over the past decade, the United States has seen tremendous growth in frequent internet use: according to a March 2021 Pew Research Center survey, about one-third of Americans report being online almost constantly, while nine in ten say they go online several times a week. This surge in activity has helped people stay more connected to each other, but it has also enabled the widespread proliferation of, and exposure to, hate speech. Artificial intelligence is one of the solutions that social media companies and other online networks have relied on, with varying degrees of success.

For companies with giant user bases, like Meta, artificial intelligence is a key, if not necessary, tool for detecting hate speech: there are simply too many abusive users and too much content for the thousands of human content moderators the company already employs to scrutinize. AI can help alleviate this burden by scaling up or down as new influxes of users arrive.

Facebook, for example, has seen massive growth, from 400 million users in the early 2010s to over two billion by the end of the decade. Between January and March 2022, Meta took action against more than 15 million pieces of hateful content on Facebook, and about 95% of it was detected proactively with the help of AI.

Even this combination of AI and human moderators can still let major themes of misinformation slip through. Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, found that every day about 3 million Facebook posts are flagged for review by the company's roughly 15,000 content moderators, a ratio of about one moderator for every 160,000 users.
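Those figures imply a heavy per-moderator workload. The short back-of-the-envelope calculation below uses only the numbers cited above and is meant purely to illustrate the scale Barrett describes:

```python
# Back-of-the-envelope arithmetic using only the figures cited in the article:
# 3 million posts flagged per day, roughly 15,000 moderators, and a
# moderator-to-user ratio of about 1 to 160,000.
flagged_posts_per_day = 3_000_000
moderators = 15_000
users_per_moderator = 160_000

# Average number of flagged posts each moderator would face per day.
posts_per_moderator_per_day = flagged_posts_per_day / moderators  # ~200

# Total user base implied by the stated moderator-to-user ratio.
implied_user_base = moderators * users_per_moderator  # ~2.4 billion

print(f"Flagged posts per moderator per day: {posts_per_moderator_per_day:.0f}")
print(f"Implied total user base: {implied_user_base:,}")
```

At roughly 200 flagged items per moderator per day, the "hundreds of discrete items every work day" Barrett describes follows directly from the article's own numbers.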

“If you have a volume of this nature, these humans, these people are going to have a tremendous burden of making decisions on hundreds of discrete items every work day,” Barrett said.

Another problem: the AI deployed to root out hate speech is trained mainly on text and still images. That means video content, especially live video, is much harder to flag automatically as possible hate speech.
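For context on why text is the easier case, here is a minimal, illustrative sketch of the kind of text classifier that underpins automated detection. The training examples and labels are invented for demonstration, and production systems at platforms like Meta rely on large transformer models trained on enormous labeled datasets, not the simple TF-IDF pipeline shown here:

```python
# Minimal sketch of a text-based detection pipeline: bag-of-words features
# feeding a linear classifier. Illustrative only; the posts and labels below
# are hypothetical toy data, not real moderation data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = policy-violating, 0 = benign.
posts = [
    "that group of people should disappear",
    "great game last night, well played everyone",
    "those people deserve violence",
    "looking forward to the weekend hike",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score a new post; anything above a review threshold would typically be
# routed to human moderators rather than removed automatically.
new_post = ["those people should disappear"]
prob = model.predict_proba(new_post)[0][1]
print(f"Estimated probability of violating content: {prob:.2f}")
```

Nothing in this sketch transfers directly to audio or live video, which is part of why those formats remain so much harder to moderate automatically.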

Zeve Sanderson is the founding executive director of NYU’s Center for Social Media and Politics.

“Live video is incredibly difficult to moderate because it’s live, you know. We’ve seen that unfortunately recently with tragic shootings where, you know, people have used live video to broadcast, you know, some kind of content related to that. And even though the platforms were relatively quick to respond to that, we saw copies of those videos spreading around. So it’s not just about the original video, but also about being able to save it and then share it in other forms. So, so live is extraordinarily difficult,” Sanderson said.

Many AI systems are also not robust enough to detect this kind of hate speech in real time. Extremism researcher Linda Schiegl told Newsy this has become a problem in online multiplayer games, where players can use voice chat to spread hateful ideologies or thoughts.

“It’s really hard for automatic detection to pick things up, because if you’re talking about weapons, or if you’re talking about how are we going to, I don’t know, take this school or whatever, it could be in the game. And so AI or automatic detection is really hard in gaming spaces. So it would have to be something more sophisticated than that or done by hand, which is really hard, I think, even for those companies,” Schiegl said.

