Twitter is eyeing new anti-abuse tools to give users more control over mentions

Twitter is looking at adding new features that could help users who are facing abusive situations on its platform as a result of unwanted attention pile-ons, such as when a tweet goes viral for a reason they didn’t expect and a full firehose of counter tweets gets blasted their way.

Racist abuse also remains a major problem on Twitter’s platform.

The social media giant says it’s toying with providing users with more controls over the @mention feature to help people “control unwanted attention”, as privacy engineer Dominic Camozzi puts it.

The issue is that Twitter’s notification system will alert a user when they’ve been directly tagged in a tweet, drawing their attention to its contents. That’s great if the tweet is nice or interesting. But if the content is abusive, it offers a shortcut for scaling up hateful cyberbullying.

Twitter is badging these latest anti-abuse ideas as “early concepts” and encouraging users to submit feedback as it considers what changes it might make.

Potential features it’s considering include letting users ‘unmention’ themselves — i.e. remove their name from another’s tweet so they’re no longer tagged in it (and any ongoing chatter around it won’t keep appearing in their mentions feed).

It’s also considering making an unmention action more powerful in instances where an account that a user doesn’t follow mentions them — by providing a special notification to “highlight potential unwanted situations”.

If the user then goes ahead and unmentions themselves Twitter envisages removing the ability of the tweet-composer to tag them again in future — which looks like it could be a strong tool against strangers who abuse @mentions. 

Twitter is also considering adding settings that would let users restrict certain accounts from mentioning them entirely, which sounds like it would have come in pretty handy when President Trump was on the platform (assuming the setting could be deployed against public figures).

Twitter also says it’s looking at adding a switch that can be flipped to prevent anyone on the platform from @-ing you for a period of one day, three days or seven days: basically a ‘total peace and quiet’ mode.

It says it wants to make changes in this area that can work together to help users by stopping “the situation from escalating further” — such as by providing users with notifications when they’re getting lots of mentions, combined with the ability to easily review the tweets in question and change their settings to shield themselves (e.g. by blocking all mentions for a day or longer).

The known problem of online troll armies coordinating targeted attacks against Twitter users means it can take disproportionate effort for the object of a hate pile-on to shield themselves from the abuse of so many strangers.

Individually blocking abusive accounts or muting specific tweets does not scale in instances when there may be hundreds — or even thousands — of accounts and tweets involved in the targeted abuse.

For now, it remains to be seen whether or not Twitter will move forward and implement the exact features it’s showing off via Camozzi’s thread.

A Twitter spokeswoman confirmed the concepts are “a design mock” and “still in the early stages of design and research”. But she added: “We’re excited about community feedback even at this early stage.”

The company will need to consider whether the proposed features might introduce wider complications on the service. (Such as, for example, what would happen to automatically scheduled tweets that include the Twitter handle of someone who subsequently flips the ‘block all mentions’ setting; would that prevent the tweet from going out entirely, or would it go out without the person’s handle, potentially lacking core context?)

Nonetheless, those are small details, and it’s very welcome that Twitter is looking at ways to expand the tools users have to protect themselves from abuse, beyond the existing, still fairly blunt, anti-abuse features (like block, mute and report tweet).

Co-ordinated trolling attacks have, for years, been an unwanted ‘feature’ of Twitter’s platform and the company has frequently been criticized for not doing enough to prevent harassment and abuse.

The simple fact that Twitter is still looking for ways to provide users with better tools to prevent hate pile-ons, here in mid-2021, is a tacit acknowledgment of its wider failure to clear abusers off its platform, despite repeated calls for it to act.

A Google search for “* leaves Twitter after abuse” returns numerous examples of high profile Twitter users quitting the platform after feeling unable to deal with waves of abuse — several from this year alone (including a number of footballers targeted with racist tweets).

Other examples date back as far as 2013, underlining how Twitter has repeatedly failed to get a handle on its abuse problem, leaving users to suffer at the hands of trolls for the best part of a decade (or, well, to just quit the service entirely).

One recent high profile exit was the model Chrissy Teigen, a longtime Twitter user who spent ten years on the platform before pulling the plug on her account in March, writing in her final tweets that she was “deeply bruised” and that the platform “no longer serves me positively as it serves me negatively”.

A number of soccer players in the UK have also been campaigning against racism on social media this year — organizing a boycott of services to amp up pressure on companies like Twitter to deal with racist abusers.

While public figures who use social media may face higher levels of abusive online trolling than other users, the problem isn’t limited to people with a public profile. Racist abuse, for example, remains a general problem on Twitter. And the examples of celebrity users quitting over abuse that are visible via Google are certainly just the tip of the iceberg.

It goes without saying that it’s terrible for Twitter’s business if highly engaged users feel forced to abandon the service in despair.

The company knows it has a problem. As far back as 2018 it said it was looking for ways to improve “conversational health” on its platform, and more recently it has expanded its policies and enforcement around hateful and abusive tweets.

It has also added some strategic friction to try to nudge users to be more thoughtful and take some of the heat out of outrage cycles — such as encouraging users to read an article before directly retweeting it.

Perhaps most notably it has banned some high profile abusers of its service — including, at long last, president troll Trump himself earlier this year.

A number of other notorious trolls have also been booted over the years, though typically only after Twitter had allowed them to carry on coordinating abuse of others via its service, failing to promptly and vigorously enforce its policies against hateful conduct and letting the trolls see how far they could push their luck until the very last.

By failing to get a proper handle on abusive use of its platform for so long, Twitter has created a toxic legacy out of its own mismanagement — one that continues to land it unwanted attention from high profile users who might otherwise be key ambassadors for its service.


