On Sept. 13, the Redpath Museum hosted Derek Ruths, a McGill professor of computer science and director of the Centre for Social and Cultural Data Science, who addressed a pertinent problem of our technological world: The dark side of the internet.
According to Ruths, the three most substantial issues with the internet are behavioural tracking, in which websites record their users’ web-browsing information; aggressive behaviour such as harassment and hate speech; and misinformation or manipulation via bots.
A regular social media user has likely had the off-putting experience of seeing an ad related to something they Googled an hour earlier. According to Ruths, this occurs because ad networks track internet activity, enabling advertisers to serve more personalized ads. Some see this strategy as a minor annoyance, while others see it as creepy or even a violation of privacy. However, these nuisances are the necessary costs of free services, flexibility of expression, and the right to access.
“Advertising has enabled an unprecedented level of innovation on the internet,” Ruths said.
Everything we take for granted as free online is funded by ads. There is no such thing as a free lunch: If you’re not paying for something directly, you’re still paying another way. On the internet, subjection to ads is a form of payment.
On the streaming service Spotify, for example, paying customers can listen to their favourite songs anywhere and anytime, uninterrupted. Those who don’t pay must instead listen to an advertisement every 30 minutes. A choice arises between paying to avoid ads and tolerating them. Ultimately, the decision lies between paying with one’s money or with one’s time.
With many internet users hesitant to pay for online services in the first place, ignoring irritating ads has become an acceptable pastime. Looking past the hate speech that infiltrates many internet spaces, however, is significantly more of a challenge, both ethically and technically. Ruths pointed out that simply agreeing on what qualifies as threatening behaviour on the web is extremely difficult. Moderating offensive content can come at the cost of freedom of expression. Furthermore, even content that most can agree is offensive may be too nuanced for digital detection tools to catch.
Although Facebook and Twitter use report features to minimize hate speech on their platforms, Ruths highlighted an unintended consequence of monitoring content. According to him, it is not uncommon in authoritarian regimes, such as Myanmar and Vietnam, for the government to censor content that it believes threatens its power. Moderating material on the internet thus raises questions over who gets to decide what counts as hate speech and what the criteria are.
Another issue plaguing the internet is the current wave of fake news, which misinforms the public and manipulates public opinion.
“The three core problems with misinformation are detecting it, detecting accounts that propagate it, and presenting information responsibly,” Ruths said.
Identifying bots, software that can automatically and rapidly perform internet tasks, is a key challenge in fighting online manipulation. Bots can borrow words from real users to appear human, or at least human enough to fool the technology trying to detect them.
In conclusion, Ruths reminded attendees that finding an acceptable trade-off between accessible, free services and the internet’s darker underbelly is crucial to ushering the internet into a less malevolent era.