The way social media platforms handle violent content has come under renewed scrutiny after a man in the US allegedly killed a woman and posted images of her body online.
Police in Utica, New York, said 21-year-old Brandon Clark, who has been charged with the murder of 17-year-old Bianca Devins, posted images of the crime to online platform Discord and social media sites including Instagram, which were circulated more widely.
Social media sites have been repeatedly criticised by governments and regulators for failing to stop violent content from spreading on their platforms.
In response to the incident, Facebook said it had initially found and removed content from Instagram Stories linked to the suspect, before removing his accounts from both Facebook and Instagram following confirmation of his arrest and identification by police.
Images of the victim’s body are reported to have appeared on Instagram behind a sensitive content filter, but became visible to anyone who then chose to view them.
The social network giant said it had also taken steps to prevent further sharing and re-posting of the images, proactively searching for and removing accounts that attempted to share content linked to the incident or to impersonate the suspect.
“Our thoughts go out to those affected by this tragic event. We are taking every measure to remove this content from our platforms,” the firm said in a statement.
“Our goal is to take action as soon as possible – there is always room for improvement. We don’t want people seeing content that violates our policies.”
However, Damian Collins, the Conservative chairman of the Department for Digital, Culture, Media and Sport (DCMS) Select Committee, which has led inquiries into the policing of illegal content, accused social platforms of failing to take violent content seriously.
“Another brutal murder has gone viral. When will social networks take violence seriously?” he wrote on Twitter.
In a further statement to PA, Mr Collins said: “It is clearly irresponsible if distressing images are still available online despite tech companies being made aware of them. We have called for social media companies to be held to account for harmful content.
“While we’re expecting legal powers to enforce regulation here in the UK, it’s clearly in the interests of Facebook to demonstrate its intention and ability to comply with future laws by taking urgent action in removing these images. Social networks must take violence seriously.”
Labour MP Yvette Cooper, the chairwoman of the House of Commons Home Affairs Select Committee, has also repeatedly accused internet firms of failing to take action and “profiting” from hateful and violent content.
In the wake of the Christchurch terror attacks earlier this year, which were live-streamed by the suspect, platforms including Facebook and Google-owned YouTube were widely criticised for failing to stop re-posts of the video appearing online.
In a summit with world leaders in Paris in response to the attacks, Amazon, Facebook, Google, Microsoft and Twitter agreed to introduce new measures to their businesses to help eliminate violent and terrorist content from the internet.
The Government has also published a white paper on online harms, which proposes stricter regulation of technology firms. Its measures include larger fines and criminal liability for executives if platforms are found to have failed to adequately protect their users from data breaches and exposure to illegal and dangerous content.