On Thursday, at the UN Generation Equality Forum in Paris, Twitter, TikTok, Google, and Facebook pledged to combat online abuse and improve women’s safety on their platforms. The vow comes after a year of conversations with the World Wide Web Foundation (WWWF) to look into online gender-based violence and abuse.
According to the WWWF, women want more control over who can reply or comment on their social media posts, as well as more choice over what they see online, where they see it, and when they see it.
According to the WWWF, the companies have committed to “building better ways for women to curate their online safety” by providing more granular settings (such as who can see, share, or comment on posts); simpler, more accessible language; easier navigation and access to safety tools; and “reducing the burden on women by proactively reducing the amount of abuse they see.”
The wording of that last item is a little aggravating: it addresses the aftermath and visibility of abuse, but not the people who commit it. And just because women aren’t seeing the harassment on social media doesn’t mean it has stopped.
Platforms do bear some responsibility for making their online spaces safer, but until they become more proactive and less reactive in pursuing abusers, the onus will continue to fall on women and marginalized groups to report abuse and persuade a platform that it is worth addressing.
In addition to the “better curation” checklist, the companies pledged to strengthen their reporting processes by allowing users to track and manage their reports, and to provide new avenues for women to obtain aid and support when they report abuse.
They’ll also have “increased capacity to address context and/or language,” perhaps allowing for the inclusion of more subtle kinds of verbal abuse or threats in enforcement procedures.
These are all admirable objectives, but the WWWF’s announcement made no mention of how each platform intends to achieve them. None of the four companies were quoted in the news release, so we reached out to each of them for comment. In an emailed statement, Vijaya Gadde, Twitter’s head of legal, policy, and trust & safety, emphasized that keeping everyone who uses Twitter safe and free from abuse is the company’s top priority.
“While we have made significant progress in giving people greater control over their safety, we recognize there is still much work to be done,” Gadde said, stressing that abuse disproportionately affects women and minority communities (which is pretty well known at this point). Abuse, according to Gadde, “has no place on our service. It is harmful to those who are targeted, as well as to the health of the debate and the role Twitter plays in the expression and exchange of ideas, where people may be heard regardless of their views or perspectives.”
Antigone Davis, Facebook’s global head of safety, said in an email that the firm was excited to collaborate with other digital businesses to make the internet safer for women. “To keep women safe from online and offline abuse, exploitation, and harassment, we continually update our rules, tools, and technology in conjunction with experts across the world, including over 200 women’s safety organizations,” Davis said in a statement.
Tara Wadhwa, TikTok US’s head of policy, outlined the company’s plans in a blog post. “In the months ahead, we’ll start working on and testing a number of potential platform enhancements to address these concerns and make TikTok a safer environment for women,” Wadhwa stated.
Google did not respond to a request for comment on Thursday.
There appears to be nothing binding the corporations to their “commitments” at this time, aside from the threat of public shaming if they fail to deliver. Regrettably, that is often the most effective way of getting social media platforms to respond to users’ concerns.