Social media company Facebook announced on Monday that it will use a new form of artificial intelligence to detect posts suggesting suicide risk. The company says the software will sift through posts and videos and flag those in which someone displays suicidal tendencies. Once the AI detects patterns of suicidal thoughts, it will send mental health resources to the user or their friends, or contact local authorities.
The company states that the new AI will use pattern recognition to scan posts and comments for specific phrases that can indicate suicidal behavior. Facebook noted that certain phrases in comments from friends, such as “Are you OK?” or “Can I help?”, serve as red flags that a post may require immediate intervention.
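Facebook has not published how its system works, but the phrase-scanning step described above can be illustrated with a deliberately simplified sketch. Everything here, the phrase list, the function name, and the matching logic, is an assumption for illustration; the real system reportedly uses machine-learned pattern recognition, not a fixed keyword list.

```python
# Hypothetical illustration only -- not Facebook's actual method.
# Scans a list of comments for "red flag" phrases like the ones
# the article mentions, and returns the comments that match.

RED_FLAG_PHRASES = [
    "are you ok",
    "can i help",
]

def flag_comments(comments):
    """Return the comments that contain any known red-flag phrase."""
    flagged = []
    for text in comments:
        lowered = text.lower()
        if any(phrase in lowered for phrase in RED_FLAG_PHRASES):
            flagged.append(text)
    return flagged

# Example: only the concerned comment is flagged.
print(flag_comments(["Are you OK?", "Nice photo!"]))  # ['Are you OK?']
```

A production system would go well beyond literal matching, weighing context, post history, and reactions, which is presumably why Facebook describes it as pattern recognition rather than keyword search.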
If a user posts a live video, viewers can report the video and get in touch with a helpline to help their friend. The AI will also offer the user the option to contact a helpline or a friend directly. In some cases, Facebook employees can call local first responders to intervene.
“We’ve found these accelerated reports – that we have signaled require immediate attention – are escalated to local authorities twice as quickly as other reports,” wrote Guy Rosen, Facebook vice president of product management.
The company has been focusing on detecting such content since the platform saw an increase in live-streamed suicides in April. A month later, Facebook said it would add 3,000 workers to its community operations team, which is responsible for reviewing posts and other content reported as violent.
Facebook is currently testing the AI program in the United States and plans to roll it out to most other countries later, with the exception of those in the European Union. While the company did not say exactly why the EU is excluded, the decision may be due to the region's stricter privacy and Internet legislation.
Image Source: StaticFlickr