Major technology firms pledged Wednesday to cooperate on “transparent, specific measures” to prevent the posting of violent extremist content online, part of a “Christchurch Call” launched in the wake of the massacre at two New Zealand mosques in March in which 51 people died.
“The dissemination of such content online has adverse impacts on the human rights of the victims, on our collective security and on people all over the world,” said the companies, including Google, Microsoft, Twitter and Facebook, at a meeting with world leaders in Paris.
The call was initiated by New Zealand Prime Minister Jacinda Ardern and French leader Emmanuel Macron to avoid a repeat of the Christchurch killings, which were broadcast live by the gunman on Facebook for 17 minutes.
The horrific footage remained online for a further 12 minutes before a user alerted Facebook and it was taken down, but the video was uploaded and shared millions of times in the days that followed.
The statement was issued as Ardern and Macron hosted tech chiefs and some other world leaders at the Elysee Palace to crack down on extremism online.
Backers of the Christchurch Call, a voluntary series of commitments by firms and governments, have pledged new steps to prevent uploads of hateful and violent content, and quickly remove any that gets through their defences.
“Cooperative measures to achieve these outcomes may include technology development, the expansion and use of shared databases… and effective notice and takedown procedures,” they said.
In particular, they promised "immediate, effective measures to mitigate the specific risk that terrorist and violent extremist content is disseminated through livestreaming."
The firms also agreed to invest in the development of artificial intelligence and other technical solutions for identifying and purging violent and extremist posts.
Algorithms used to determine what social media users see in their feeds may also be tweaked to direct people away from extremist or hateful content, including through "the promotion of credible, positive alternatives."
But the text did not outline any concrete steps that would be taken by individual firms, nor set any timeframe for putting any new measures in place.