In 2010, the U.K. government set up a “Counter Terrorism Internet Referral Unit,” which removes “terrorist material” from the internet. In November, the British government said it had taken down more than 65,000 “pieces of unlawful terrorist-related content,” 80 per cent of them about jihadist groups in Iraq and Syria.
Last year, the U.K. government reached an agreement with internet service providers (ISPs) to voluntarily block material hosted outside the country at the request of the counterterrorism unit, with no court order required. If the content is on servers inside the U.K., action can be taken against the host to remove it.
In Australia, ISPs can be ordered to remove locally hosted content, and action can be taken against them if they refuse. If the content is hosted outside the country, Australia’s domestic spy agency, ASIO, or the federal police can order websites to block it. As in the U.K., no court order is required.
Free speech advocates in both the U.K. and Australia have been critical of their governments’ approaches to fighting terrorism on the internet. One concern in Britain is that there is no requirement for public disclosure of how often content is blocked or taken down, or of what that content is.
While the Canadian bill uses the phrase “computer system,” Australia’s new law uses “computer network,” which, critics say, gives the government the authority to monitor the entire internet.
The Australian Lawyers Alliance says the legislation, which became law last fall, would have “not just a chilling effect but a freezing effect.”
Kim Carlson, international co-ordinator of the U.S.-based online rights group Electronic Frontier Foundation, views the Australian and U.K. governments as “racing to see which country can introduce the worst restriction as quickly as possible.”
She is also critical of Canada’s draft legislation, but says that on the issue of removing internet content, Canada’s proposal would have more safeguards than those of the other two countries. Still, she argues it would “have a chilling effect on speech, as people fear that their words are going to be misconstrued in some way.”
Bill C-51 also would give a judge who has reasonable grounds to believe that a website contains terrorist propaganda the power to order an ISP to “provide the information that is necessary to identify and locate the person who posted the material.”
In June, the Supreme Court ruled that Canadians have the right to remain anonymous on the internet and that ISPs cannot disclose their identifying information to law enforcement unless they first obtain a warrant.
Christopher Parsons, the managing director of the Citizen Lab’s Telecom Transparency Project at the Munk School of Global Affairs, says that given the top court’s ruling, he’s concerned about ISPs handing over subscriber information.
“Advocacy and promotion is the test,” Justice Minister Peter MacKay explained on CBC’s Power and Politics.
Given the amount of what the government calls terrorist propaganda online, there’s also a question about the staffing required to find and remove that content. Parsons noted the challenge the RCMP already faces in getting the resources to take down the vast quantity of child pornography.
Rozita Dara, a computer science professor at the University of Guelph, doubts that technology alone can identify terrorist propaganda. Subjective searches remain very difficult for data mining, she says, and even with algorithms as good as Google’s, humans would still need to review every suspect web page.
Dara raises the question of how to distinguish between someone expressing an opinion and someone recruiting or publishing propaganda. She worries that before that distinction is clear, people expressing opinions will be put under online surveillance and have their social media information examined. For her, linking different sources of information or databases with personal content raises privacy concerns.