Black women journalists and politicians get an abusive tweet every 30 seconds


 

Will Knight

In Summary: Twitter can be a toxic place. In recent years, trolling and harassment on the site have made it an extremely unpleasant and upsetting experience for many people, particularly women and minorities. Now machine learning has revealed the scale of that abuse, and Black women journalists and Black politicians have suffered the most vicious attacks.

Black female journalist April Ryan, American Urban Radio's Washington bureau chief, asks a question during an exchange with President Trump at a news conference following last month’s midterm elections at the White House. Ryan and two other African American female journalists, Yamiche Alcindor of the PBS NewsHour and Abby Phillip of CNN, have not only endured denigrating remarks from President Trump but also constant vicious attacks on Twitter. (Photo by Jonathan Ernst/Reuters)

Cambridge, Massachusetts -- Twitter can be a toxic place. In recent years, trolling and harassment on the site have made it an extremely unpleasant and upsetting experience for many people, particularly women and minorities. Most recently, machine learning has revealed a disturbing level of harassment, abuse, and trolling aimed at women and minorities on Twitter. Black women journalists and Black politicians in particular have suffered the most vicious attacks.

But automatically identifying and stopping such abuse is difficult to do accurately and reliably. That is because, for all the recent progress in AI, machines still generally struggle to respond meaningfully to human communication. For example, AI often fails to pick up on abusive messages that are sarcastic or disguised with a sprinkling of positive keywords.
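To see why this is hard, consider a toy illustration (this is not the study's method, and the blocklist and example tweets below are invented): a naive keyword filter catches overt abuse but misses messages that are sarcastic or padded with positive words.

    # Minimal sketch, not the study's method: a naive keyword blocklist
    # catches overt abuse but misses sarcastic or "disguised" messages.
    # The blocklist and example tweets are invented for illustration.
    ABUSIVE_KEYWORDS = {"idiot", "stupid", "trash"}

    def naive_filter(tweet: str) -> bool:
        """Flag a tweet if it contains any blocklisted word."""
        words = {w.strip(".,!?'\"()").lower() for w in tweet.split()}
        return bool(words & ABUSIVE_KEYWORDS)

    print(naive_filter("You are an idiot."))                            # True: caught
    print(naive_filter("Wow, such a 'brilliant' take. Just go away."))  # False: sarcasm slips through
    print(naive_filter("Love your work! Said no one, ever."))           # False: positive words hide the jab

Learned classifiers like the one described below pick up patterns from labelled examples rather than fixed word lists, though, as the article notes, they still make mistakes.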

A new study has used cutting-edge machine learning to get a more accurate snapshot of the scale of harassment on Twitter. Its analysis confirms what many people will already suspect: female and minority journalists and politicians face a shocking amount of abuse on the platform.

Julien Cornebise, Ph.D., director of research at ElementAI in London, an office focused on humanitarian applications of machine learning.

The study, carried out by Amnesty International in collaboration with the Canadian firm ElementAI, shows that Black women politicians and journalists are 84% more likely to be mentioned in abusive or “problematic” tweets than white women in the same professions. “It’s just maddening,” says Julien Cornebise, director of research at ElementAI in London, an office focused on humanitarian applications of machine learning. “These women are a big part of how society works.”

ElementAI researchers first used a machine-learning tool, similar to those used to classify spam, to identify abusive tweets. The researchers then gave volunteers a mix of these pre-classified tweets and previously unseen ones to label. The tweets identified as abusive were used to train a deep-learning network. The result is a system that can classify abuse with impressive accuracy, according to Cornebise.
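The article does not give the details of ElementAI's model, but the general shape of such a pipeline, crowd-labelled tweets used to train a text classifier, can be sketched as follows. This is a minimal stand-in that uses TF-IDF features with logistic regression rather than the deep-learning network the team built, and the file name labelled_tweets.csv and its columns are hypothetical.

    # Minimal sketch, not ElementAI's actual system: train a text classifier
    # on volunteer-labelled tweets. TF-IDF + logistic regression stands in
    # for the deep-learning network described in the article; the file and
    # column names ("labelled_tweets.csv", "text", "label") are hypothetical.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline

    df = pd.read_csv("labelled_tweets.csv")   # columns: "text", "label" (1 = abusive, 0 = not)
    X_train, X_test, y_train, y_test = train_test_split(
        df["text"], df["label"], test_size=0.2, random_state=42
    )

    model = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2)),  # word and bigram features
        ("clf", LogisticRegression(max_iter=1000)),                # simple linear classifier
    ])
    model.fit(X_train, y_train)
    print(classification_report(y_test, model.predict(X_test)))    # precision/recall on held-out tweets

Any real system would also need careful evaluation against held-out human judgments, since, as Cornebise notes below, some human judgment will still be required.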

A group of Black members of the British House of Commons.

The project focused on tweets sent to politicians and journalists: 6,500 volunteers from 150 countries helped classify abuse in 228,000 tweets sent to 778 women politicians and journalists in the UK and US in 2017. The study examined tweets sent to female members of the UK Parliament and the US Congress, as well as women journalists from publications like the Daily Mail, Gal Dem, the Guardian, Pink News, and the Sun in the UK and Breitbart and the New York Times in the US.

Flashback: Retired North Carolina Supreme Court Chief Justice Henry Frye (L) swears in members of the Congressional Black Caucus of the 109th Congress: (L-R) then-outgoing Chairman Rep. Elijah Cummings (D-MD), incoming Chairman Rep. Melvin Watt (D-NC), and incoming Vice Chair Rep. Corrine Brown (D-FL), on January 4, 2005, during a swearing-in ceremony at the Library of Congress in Washington, DC. Most of their successors have been targets of toxic and racist attacks on Twitter.

The study found that 1.1 million abusive tweets were sent to the 778 women in this period—that’s the equivalent of one every 30 seconds. It also found that 7.1% of all tweets sent to women in these roles were abusive. The researchers behind the study have also released a tool, called Troll Patrol, that tests whether a tweet constitutes abuse or harassment. While the deep-learning approach was a big improvement on existing methods for spotting abuse, the researchers warn that machine learning or AI will not be enough to identify trolling all of the time. Cornebise says the tool is often as good as human moderators but is also prone to error. “Some human judgment will be required for the foreseeable future,” he says.
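As a quick sanity check on the “one every 30 seconds” figure, the 1.1 million abusive tweets reported for 2017 can be spread over the seconds in a year:

    # Back-of-the-envelope check of the "one every 30 seconds" figure.
    abusive_tweets = 1_100_000               # abusive tweets sent to the 778 women in 2017
    seconds_in_year = 365 * 24 * 60 * 60     # 31,536,000 seconds
    print(seconds_in_year / abusive_tweets)  # ~28.7 seconds between abusive tweets

That works out to roughly one abusive tweet every 29 seconds, consistent with the article's figure.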

Milena Marin of Amnesty International worked on the project.

Twitter has been widely criticized for not doing more to police its platform. Milena Marin, who worked on the project at Amnesty International, says the company should at least be more transparent about its policing methods. “Troll Patrol isn’t about policing Twitter or forcing it to remove content,” says Marin. But she warns, “Twitter must start being transparent about how exactly it is using machine learning to detect abuse and publish technical information about the algorithms it relies on.”

Twitter’s legal officer Vijaya Gadde

In response to the report, Twitter legal officer Vijaya Gadde pointed to the problem of defining abuse. “I would note that the concept of ‘problematic’ content for the purposes of classifying content is one that warrants further discussion,” Gadde said in a statement. “We work hard to build globally enforceable rules and have begun consulting the public as part of the process.”

About the Author: Will Knight is MIT Technology Review’s Senior Editor for Artificial Intelligence. He covers the latest advances in AI and related issues.

 

Source: MIT Technology Review
