Technology used by Facebook to detect harmful images and videos has been publicly released by the social network.
The company said it was publishing the algorithms on code-sharing website GitHub to help “industry partners, smaller developers and non-profits” find and remove harmful content.
Facebook uses the technology to identify unsafe content and to spot duplicates when accounts attempt to upload it again.
It said it uses the software to find images and videos linked to child exploitation, terrorist propaganda and graphic violence.
Earlier this year, as part of a global forum of technology companies, Facebook pledged to work collaboratively with other firms to help slow the spread of harmful content online.
That agreement was made at a summit known as the Christchurch Call to Action, held in response to the terrorist attacks in the New Zealand city in March.
In a blog post announcing the release of the code, Facebook said it hoped the technology could help make the internet safer.
“Today, we are open-sourcing two technologies that detect identical and nearly identical photos and videos — sharing some of the tech we use to fight abuse on our platform with others who are working to keep the internet safe.
“These algorithms will be open-sourced on GitHub so our industry partners, smaller developers and non-profits can use them to more easily identify abusive content and share hashes — or digital fingerprints — of different types of harmful content.
“For those who already use their own or other content-matching technology, these technologies are another layer of defence and allow hash-sharing systems to talk to each other, making the systems that much more powerful.”
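The hash-and-compare approach described in the quote can be illustrated with a toy example. The sketch below is not Facebook's released algorithm; it is a simplified "average hash", where an image is reduced to a bit string (the digital fingerprint) and near-duplicates are detected by counting how many bits differ. All names here are illustrative.

```python
# Toy perceptual-hash sketch -- NOT Facebook's actual algorithm, just an
# illustration of hashing images into fingerprints and comparing them.

def average_hash(pixels):
    """Hash a grayscale image (2D list of 0-255 ints): each bit is 1 if
    the pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# Two 4x4 "images": the second is the first with slight pixel noise,
# as might result from re-encoding or re-uploading.
img = [[10, 200, 30, 220],
       [15, 210, 25, 215],
       [12, 205, 35, 225],
       [18, 198, 28, 230]]
noisy = [[row[i] + (1 if i % 2 else -1) for i in range(4)] for row in img]

h1, h2 = average_hash(img), average_hash(noisy)
print(hamming(h1, h2))  # a small distance suggests the same underlying image
```

Real systems use far more robust hashes, but the shared principle is that platforms can exchange these compact fingerprints, rather than the harmful media itself, and still match re-uploads.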