Artificial intelligence is at the heart of online toxicity, Grand Committee hears

Artificial intelligence is playing a major role in exacerbating the problems being blamed on large digital platforms such as Facebook Inc. and Google LLC, according to a U.S. expert who testified before the International Grand Committee on Big Data, Privacy and Democracy meeting in Ottawa this week.

Ben Scott, a Stanford fellow at the Centre for Internet and Society and a former advisor to Hillary Clinton, said that it wasn’t until the tech giants got hooked on machine learning that concerns about their polarizing effect and impact on democracy really took off.

“It’s not just the ads that get targeted. Everything gets targeted. The entire communications environment in which we live is now tailored by machine intelligence to hold our attention,” he said.

“The more time people spend on the platform, the more ads they see and the more money they make. It’s a beautiful business model, and it works.”

The International Grand Committee was assembled by politicians from Canada, the United Kingdom, Germany, Costa Rica, Singapore and several other countries, hoping to call Facebook Inc.
CEO Mark Zuckerberg and COO Sheryl Sandberg to testify on privacy practices at the social media giant.

Zuckerberg and Sandberg failed to show up, prompting MPs to issue a highly unusual open summons, which will require them to appear before Parliament if they set foot in Canada.

In the absence of Zuckerberg and Sandberg, the committee heard from lower-ranked officials from Facebook, Google and Twitter, along with a host of experts including Scott, all testifying about the problems caused by the tech giants and the need for aggressive regulation.

Scott argued that the implementation of AI was the key turning point for Facebook and Google, turning them into highly profitable companies but also making their technology more opaque and more corrosive.

“We can now make policies to limit the exploitation of these tools, by malignant actors and by companies that place profits over the public interest,” Scott said. “We have to view our technology problems through the lens of the social problems that we’re experiencing.”

Artificial intelligence, often implemented through deep learning or machine learning, was pioneered in large part by academics in Toronto and Montreal, and both cities remain significant global centres for expertise.

The country’s expertise in AI is something the federal government has worked hard to promote, with supercluster funding in Montreal focused on artificial intelligence.
When Canada chaired the G7 over the past year, the government used the leadership position to put AI issues on the agenda.

During their testimony Tuesday, representatives from Google, Facebook and Twitter were very much on the defensive, under a barrage of questions from politicians on the topics of privacy, misinformation and the inflammatory content hosted by social sharing platforms.

But representatives of both Facebook and Google said artificial intelligence could be used as a solution to better manage social networks that are too large to be monitored by humans.

In an emailed statement to the Financial Post, Google spokesman Aaron Brindle pointed out the value AI has for the company.

“It is also a tool that can be used to identify problematic content on platforms like YouTube. For example, between January and March 2019, YouTube removed nearly 8.3 million videos for violating its Community Guidelines,” Brindle said. “76 per cent of these videos were first flagged by machines rather than people. Of those detected by machines, 75.7 per cent had never received a single view.”

In an interview, University of Toronto Rotman School of Management professor Joshua Gans said people should be cautious about calls to regulate artificial intelligence.

Gans co-authored Prediction Machines, a book about the economic ramifications of artificial intelligence, and he said it’s simply too soon to understand how the technology will affect things.

“This has the potential to improve efficiency across a vast number of industries, and it would be good to see that happen. You don’t want anything that’s going to impede it, so that’s why I get concerned about regulation,” Gans said.
“But nor do I think I can say to you that this is perfectly safe, that we shouldn’t be thinking about good regulations that we might have to put in place in the future.”

Gans said people fall victim to the outsized claims of the tech giants, which treat artificial intelligence as an all-powerful force when the technology is really still in its infancy.

“I find that some of the top-line comments, from things like surveillance capitalism and some of these other things, are posited on a world where AI has reached its full potential,” he said.

“People don’t really know what everyone is doing, and they’re making false assumptions everywhere. That seems like an environment that makes it very hard to say whether a regulation would be good or not.”