Facebook hopes to make its livestreaming tools safer—and protect its audience—with a new “one strike” policy.
The company faced heavy criticism after a gunman in New Zealand was able to broadcast a violent attack on a mosque. Social media companies generally have struggled to police violent speech on their platforms, but live video has presented unique problems for engineers trying to keep audiences safe.
Now, elected officials in New Zealand, led by Prime Minister Jacinda Ardern, are calling on government and tech leaders to do more to limit the spread of messages from hate organizations and terrorist groups.
In a New York Times opinion column Saturday, Ardern wrote of the balance that must be struck: “Social media connects people. And so we must ensure that in our attempts to prevent harm that we do not compromise the integral pillar of society that is freedom of expression. But that right does not include the freedom to broadcast mass murder.”
Officials from the U.S., Canada and Britain are expected to attend a summit on the issue in Paris, along with Twitter CEO Jack Dorsey and staff from Facebook, Amazon and Google, The Washington Post reports.
A number of nations are expected to sign the Christchurch Call, the Times reports, but the U.S. is not among them, citing concerns about free speech.
Facebook announced in a blog post that the new policy will restrict access to its livestreaming tools for users who break certain rules, such as posting content that violates the platform’s community standards.
Following the