Once a month, Leslie Mac braces herself to purge her Twitter handle from various lists. It’s been a part of her routine for several years now after her first experience being bombarded with hateful messages through a list whose sole purpose seemed to be targeting other users.
On a Friday in May, just a few days before her scheduled cleansing, Mac saw a number of lists that wouldn’t make the cut. Since her last purge, Mac, a social and political activist, had been added to a Twitter list entitled “Black Racists.” She rattled off the names of a few more, with words like “idiots” and “SJWs” in the titles.
In a slow month, Mac said, she gets added to about four or five of these negative lists. During more politically charged times, that number spikes to anywhere from 12 to 20 in a month.
Twitter lists allow users to create a curated public or private feed of tweets from just the users added to that list. For example, someone might create a list of comedians they follow or people who tweet about a topic they are interested in keeping up with.
But as with many well-intentioned tools of the internet, some users have figured out how to use Twitter lists for more nefarious purposes. Half a dozen women interviewed for this article shared their experiences of becoming targets of harassment after being added to lists or singled out through similar mechanisms.
Sometimes, the list names sound innocuous. Other times, they are outwardly hateful. Either way, these women now know the telltale signs that they’ve been added to a list that effectively places a bullseye on their accounts: a flood of peculiarly similar tweets, repeated hot-button phrases like “libtard” and commenters who have no followers in common with them.
“It sounds like some sort of puzzle, and it’s really been such a disturbing thing to have all of these triggers that I’m always looking for,” said Mac, who has hired people to help her manage the security of her online accounts. Mac said she now tries to provide a barrier between the harassers and her voice by posting on places like Patreon that require users to pay a small fee to access her work.
Sydette Harry, an editor at the web browser company Mozilla, said she checks regularly for the signs she was added to a malicious list, something that happens “at least two to three times a week.”
“If I end up on a list … and all I see is the first 30 to 40 people are my friends but specifically my friends of color, specifically my black women friends, and that person’s account is less than two months old and they don’t have any content, that’s a bot or a hostile account. I don’t even have to think about it,” she said. “It’s just part of my day now, which says something about my social media experience, which is at least once every other day I look through what lists I’m on.”
Twitter says it’s well aware of the issue. In 2017, its safety account announced that users would no longer receive notifications when they were added to lists. Two hours later, it reversed the decision, calling it a “misstep” after users responded with outrage and concern that they would no longer be able to keep tabs on the harmful lists to which they were added.
In an emailed statement for this article, a Twitter spokesperson referenced the 2017 incident and said it “quickly worked to introduce notifications in the experience” following users’ feedback.
“While we recognize that there is more work that can be done to make lists healthier, this was a first step and we continue to improve our service, rules and tools to keep people safe everyday,” the spokesperson said.
Before blocking a list’s creator to remove herself from the list, Mac said she investigates how that person found her page to begin with. Sometimes, she finds that one of her tweets was posted to a forum like 4chan or Reddit that has pointed abusers to her account. In those cases, she may decide that deleting the tweet entirely is the most effective way to mute the abuse.
“It’s natural to have people who don’t agree with you on Twitter… it’s when it’s so clear that it’s a proliferation and they’re saying the same thing even when they’re saying it in different ways and they’re targeting a specific tweet” that it’s coming from an outside platform, Mac said.
Since Twitter decided to keep notifications for lists intact, not much has changed to make the feature safer, users said. Twitter’s own help pages aren’t that useful for understanding how to get off a list, according to users, who said they’ve cobbled together solutions from the internet or followed friends’ advice.
Some have even created their own tools to try to block known harassers from discovering their profiles. An app called Block Together says it is “intended to help cope with harassment and abuse on Twitter” and uses lists toward that end. Users can share the lists of accounts they block so that others can subscribe and block those accounts from their own profiles. The Google Chrome plugin Twitter Block Chain similarly lets users block all of the people following a particular account.
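For the technically curious, the basic idea behind a follower-blocking tool like Twitter Block Chain can be sketched in a few lines of Python using the Tweepy library: fetch every follower of a given account and block each one. The sketch below is an illustration only, not the plugin’s actual code, and the credentials and target handle are placeholders.

    import tweepy

    # Illustration only: the rough idea behind follower-blocking tools.
    # The credentials and the TARGET handle are placeholders, not real values.
    auth = tweepy.OAuth1UserHandler(
        "CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET"
    )
    api = tweepy.API(auth, wait_on_rate_limit=True)

    TARGET = "hostile_account"  # hypothetical handle whose followers get blocked

    # Page through every follower of the target account and block each one.
    for follower in tweepy.Cursor(api.get_followers, screen_name=TARGET).items():
        api.create_block(user_id=follower.id)
        print(f"Blocked @{follower.screen_name}")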
Using these types of tools has been the only effective way to tune out the harassment for author Celeste Ng. Before discovering them, she said she tried a number of tactics to ward off abusive messages: ignoring them, responding with kindness and even donating $5 to an organization like the ACLU that she assumed would be counter to the abuser’s values.
“I think every time this happens I kind of develop a new strategy,” said Ng, who added she was not aware of being placed on Twitter lists but has been targeted through pointed tweets that similarly direct abusers to her account. “That was all I could come up with before I turned to these technological tools and those are a lot more effective because it stops the messages before it gets to me.”
Shireen Mitchell, an entrepreneur whose work has focused on tech and diversity, said the informal lists Ng described are a sort of workaround that makes it harder for targeted users to remove themselves. On a formal Twitter list, users can escape by blocking the creator. To get around this, some harassers will target others by tagging several handles in a tweet. This approach makes it more difficult for users to remove their names without having Twitter delete the message entirely.
Ng said the only other effective strategy she’s found to ward off abuse has been wielding the power of her own following. Ng occasionally exposes the abuse she receives to her 112,000 followers on Twitter, in part to get the company’s attention. Ng said she tries to be vocal about the topic “because I can afford to.”
“I’ve heard from other writers with much smaller follower numbers that they’re scared to talk about it,” she said. When a user has a small number of followers, she added, “it’s easier for the hate messages to just be like everything you see.”
Enforcing its own policies
Twitter’s own policies say users “may not engage in the targeted harassment of someone, or incite other people to do so. We consider abusive behavior an attempt to harass, intimidate, or silence someone else’s voice.” The company has tried to advance its ability to detect unhealthy or unsafe communication on its platform, such as through a recent acquisition of an AI startup.
But victims of harassment on the platform say Twitter does not effectively enforce its own terms.
Twitter seems to have done a good job blocking hate symbols like the swastika in Germany, where it is forced to comply with the country’s stricter laws on their usage. Ng said she knows others who have set their location to Germany on Twitter to avoid pro-Nazi or neo-Nazi content “because somehow Twitter manages to keep that off of German Twitter.”
In the U.S., however, Twitter faces political pressure that has put it in a precarious position when it comes to removing content. In April, representatives from Twitter, Facebook and Google were grilled by the Senate Judiciary subcommittee over accusations that they discriminate against conservative speech on their platforms. Sen. Ted Cruz (R-Texas), who leads the subcommittee, suggested Congress could take regulatory action against the companies to decentralize their power over popular forums for self-expression.
Twitter could avoid some questions around its content removal if it did a better job assessing how its features could be used for abuse in the first place, experts said.
“One of the biggest things … needs to be an awareness that the tools that we’re building have the potential to be used in harmful ways,” said Bailey Poland, author of “Haters: Harassment, Abuse, and Violence Online.” “If you have a hammer, it’s useful when you need to hammer something, but you can also murder someone with a hammer.”
Poland said lists in and of themselves aren’t a bad feature, “but it also becomes a tool for surveillance, for providing targets for other bad actors, for intimidating people.”
Lists were introduced “without considering the type of impact they could have on people who were marginalized,” Poland said. She suggested Twitter could avoid similar problems by designating an employee to think through the ways features might be abused.
Others suggested diversifying the workforce at Twitter could help it see new features through a wider array of perspectives.
“The fact that the villain looks like them [makes it so that it] isn’t something that they should worry about,” said journalist Xeni Jardin, who has been harassed after being added to Twitter lists. She said Twitter should invite people from diverse backgrounds, both in terms of identities and professions, to learn how their product impacts different communities.
“They’re engineers. They build magic,” Jardin said. “Right now, it’s like a really bad spell.”
Not ready to log off
Even when using Twitter means building in time to filter out hateful messages, all of the users interviewed for this article said there’s a reason they keep coming back.
“The list of positive ways that Twitter has impacted my life is a lot longer than the list of negative ways,” said Jardin, who “live tweeted” her cancer diagnosis and treatment. “I shared very, very intimate things about my life when I thought I was dying on Twitter, and the type of support that came back to me was life changing, perhaps life saving.”
Jardin said Twitter’s short and fleeting messaging qualities have helped expose her to new perspectives.
“That lends itself to the kind of natural discourse that allows us to feel close to each other, that allows us to feel greater empathy to each other,” she said. “It also has the capacity to drive people apart and harm people.”
For Mac, the reasons for staying on Twitter are more practical. She said she’s used the platform to fundraise, bringing in $5,000 to $10,000 for various causes in a single month “on Twitter alone.”
Mac recognizes that function comes with a personal cost.
“I’ve just come to accept the fact that Twitter is not really concerned with the safety of their users,” Mac said.
How abuse harms Twitter
People targeted by harassment say Twitter is also harmed by its own abuse issues.
“I’m not willing to do nuance,” Mac said of the way she now engages with Twitter. While she’d previously write thoughtful threads on complex issues, Mac said she is no longer willing to deal with the abuse she knows they would attract. As a result, she says, Twitter is missing out on a wide swath of diverse perspectives.
A review by the U.K. Committee on Standards in Public Life suggests online abuse may also keep diverse voices out of public positions.
“We heard that women were likely to cite intensive abuse on social media as a key factor in preventing them from seeking public offices – particularly if there may be threats towards members of their family,” said the 2017 report, which recommends making social media companies, including Twitter, liable for the content on their platforms.
Beyond the regulatory concerns, Twitter could face a threat to its engagement numbers if users begin to feel that the benefits of the platform no longer outweigh the negatives. Analysts have worried about the future of Twitter’s user growth after the company disclosed in an earnings report in February that it would stop sharing monthly active users (MAUs) after the metric had fallen short of estimates for two straight quarters. As a replacement, Twitter now shares what it calls monetizable daily active users (mDAUs), which it says is not comparable to other companies’ metrics.
Several users said that some of the pure enjoyment of Twitter has been replaced by hate in recent years.
“I think it’s missing some of the fun that used to be there,” Mac said.
Many users said they’ve moderated their voices on Twitter to stay out of abusers’ line of sight.
“This sort of harassment and abuse on Twitter creates a chilling effect for the people who are on the receiving end of it and the people who observe people who are on the other end of it,” said Jardin. She said she’s begun to temper her speech on Twitter — using fewer f-bombs and trying to get her points across with a more empathetic tone.
Sometimes, they just don’t post at all.
“There are times I won’t post something because I just do not want to handle this right now,” said Ng. “I know what’s going to happen if I post this.”