Using the Internet shouldn’t be like navigating a minefield—stepping over negative comments here and avoiding trolls there.
Yet social networks still struggle to prevent online harassment.
Head of Instagram Adam Mosseri took to the photo-sharing platform this week to announce new weapons in the fight against cyberbullying.
“We know bullying is a challenge many face, particularly young people,” he wrote in a Monday blog post. “We can do more to prevent bullying from happening on Instagram, and we can do more to empower the targets of bullying to stand up for themselves.”
The first of those weapons is a new AI-powered feature that warns users that their pending comment may be considered offensive.
New tools encourage positive interactions (via Instagram)
The update, rolling out now, gives people the chance to reconsider and remove criticism before the intended recipient sees it.
“From early tests of this feature, we have found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect,” according to Mosseri.
For others, of course, the tool will go unnoticed or ignored, perhaps even inciting more abusive commentary.
On the flip side, Instagram is focused on empowering its community, especially young members who are often reluctant to block, unfollow, or report a bully for fear it could “escalate the situation.”
“We wanted to create a feature that allows people to control their Instagram experience, without notifying someone who may be targeting them,” Mosseri explained. “Soon, we will begin testing a new way to protect your account from unwanted interactions, called Restrict.”
New tools protect your account from unwanted interactions (via Instagram)
Once you “Restrict” someone, that person’s comments on your posts will be visible only to them; they’ll never know that no one else can see their backtalk. To make a heckler’s remark public, simply approve it.
Restricted users won’t be able to see when you’re active on Instagram, or when you’ve read their direct messages.
“It’s our responsibility to create a safe environment on Instagram,” Mosseri said. “This has been an important priority for us for some time, and we are continuing to invest in better understanding and tackling this problem.”
In October—only 10 days into the job—the Head of Instagram introduced a new algorithm, trained to detect harassment in photos and captions and proactively send offending content to the Community Operations team for review.
AI isn’t perfect, though: ‘Grammers can still manually report a post or profile for abuse via the mobile app or website.
The post Instagram Guilts Users Into Removing Nasty Comments appeared first on Geek.com.