In the past year, Facebook began using artificial intelligence to scan people’s accounts for warning signs of imminent self-harm. Antigone Davis, Facebook’s Global Head of Safety, is satisfied with the results so far.
Davis said that in the first month of scanning, Facebook had nearly 100 “imminent-response” cases, in which the company contacted local emergency responders to check on someone. That rate quickly climbed: over the past year, Facebook generated 3,500 such reports, a figure Davis cited to show how well the technology is working and how rapidly it is improving. In other words, AI screening now prompts Facebook to contact crisis responders about 10 times per day on average to check on someone, and that count does not include Europe, where the system has not been rolled out. Davis also explained that the AI works by examining what a person posts and how that person’s friends respond. For example, if someone starts streaming a live video, the AI may pick up on the tone of people’s comments. When the software flags someone, Facebook staff decide whether to call the police, and AI assists at that stage, too.
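To make the description above concrete, here is a minimal, hypothetical sketch of that kind of pipeline: score a post using signals from both the author’s own words and friends’ replies, then flag high-scoring posts for human review. Every keyword, weight, and threshold below is invented for illustration; Facebook’s actual models and criteria are not public.

```python
# Hypothetical illustration only: all terms, weights, and thresholds are
# made up. The real system presumably uses trained models, not keyword lists.

POST_RISK_TERMS = {"goodbye forever", "can't go on", "end it all"}
COMMENT_RISK_TERMS = {"are you ok", "please call someone", "stay with us"}

def risk_score(post_text: str, comments: list[str]) -> float:
    """Combine signals from the post itself and from friends' replies."""
    text = post_text.lower()
    post_hits = sum(term in text for term in POST_RISK_TERMS)
    comment_hits = sum(
        term in c.lower() for c in comments for term in COMMENT_RISK_TERMS
    )
    # Weight the author's own words more heavily than friends' reactions.
    return 2.0 * post_hits + 1.0 * comment_hits

def flag_for_review(post_text: str, comments: list[str],
                    threshold: float = 2.0) -> bool:
    """A flagged post goes to human reviewers, who decide whether to escalate."""
    return risk_score(post_text, comments) >= threshold
```

The two-stage shape (automated flagging, then a human decision about contacting responders) mirrors the process the article describes; the scoring logic itself is purely a placeholder.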
Facebook was also recently in the news when its former Chief Security Officer (CSO) said that the U.S. must come together to protect democracy from misinformation. Alex Stamos, who recently stepped down as Facebook’s CSO, published an opinion editorial in The Washington Post. He argued that Facebook could have responded earlier to Russian interference on its platform, but also that the problem is much bigger than Facebook: Congress should update its regulations on political advertisers, and social media users must “adjust to a platform environment where several dozen protectors no longer control what is interesting.”