Instagram is to warn users when it detects potentially bullying language in their captions for posts to the social network.
The Facebook-owned social media app said it will use artificial intelligence to spot possibly harmful language and give users the opportunity to reconsider it before sharing.
Earlier this year, a similar feature was introduced to warn users when their comments on other people’s posts contained language that could be considered offensive or bullying.
When triggered, the feature shows users a message saying the caption looks similar to others that have previously been reported, and gives them the option to edit it, learn more about why it was flagged, or share it anyway.
Instagram has been among the social media platforms repeatedly criticised for failing to act quickly enough to remove abusive and other potentially dangerous content.
Politicians and campaigners from around the world have called for greater regulation to be introduced to enable better policing of social media and hold sites to account for not protecting their users.
The social media giant said it was committed to developing new technology and features aimed at mitigating online bullying, pointing to a number of measures introduced this year designed to reduce bullying on its platform.
Responding to the new feature, Dan Raisbeck, co-founder of anti-cyberbullying charity Cybersmile, said it was a good example of a proactive attempt to stop online abuse, rather than reacting to it after the event.
“We should all consider the impact of our words, especially online where comments can be easily misinterpreted,” he said.
“Tools like Instagram’s Comment and Caption Warning are a useful way to encourage that behaviour before something is posted, rather than relying on reactive action to remove a hurtful comment after it’s been seen by others.”