The smartphone tracking industry has been rumbled. When will we act? | John Naughton

Shadowy firms collect detailed data on where we go and who we meet through our apps. Yet where is the protest that would fuel change?

When the history of our time comes to be written, one of the things that will puzzle historians (assuming any have survived the climate cataclysm) is why we allowed ourselves to sleepwalk into dystopia. Ever since 9/11, it’s been clear that western democracies had embarked on a programme of comprehensive monitoring of their citizenry, usually with erratic and inadequate democratic oversight. But we only began to get a fuller picture of the extent of this surveillance when Edward Snowden broke cover in the summer of 2013.

For a time, the dramatic nature of the Snowden revelations focused public attention on the surveillance activities of the state. In consequence, we stopped thinking about what was going on in the private sector. The various scandals of 2016, and the role that network technology played in the political upheavals of that year, constituted a faint alarm call about what was happening, but in general our peaceful slumbers resumed: we went back to our smartphones and the tech giants continued their appropriation, exploitation and abuse of our personal data without hindrance. And this continued even though a host of academic studies and a powerful book by Shoshana Zuboff showed that, as the cybersecurity guru Bruce Schneier put it, “the business model of the internet is surveillance”.


The scope of the New York Times study is incomparably wider than Die Zeit’s: 12 million people are tracked in the not-so-distant past


The law that helped the internet flourish now undermines democracy | John Naughton

Section 230 of the 1996 US Telecommunications Act is just 26 words long – but its impact has been incalculable

In October 1994, an unidentified user of a bulletin board hosted by an online service provider, Prodigy.com, posted an item that was to have far-reaching consequences. The post claimed that a Long Island brokerage firm called Stratton Oakmont had committed criminal and fraudulent acts in connection with the initial public offering (IPO) of another company.

Stratton Oakmont sued Prodigy and the unidentified poster for defamation – and won. Prodigy argued that it could not be held responsible for what anonymous users posted on its platform. The judge disagreed: because the company exercised editorial control over the messages on its bulletin boards in several ways, it was acting as the publisher of the content its users created, and was therefore potentially liable for any defamatory material posted on its boards.


A Facebook campaign is trying to get this very relevant Jarvis Cocker song to Christmas number one – and they might just succeed

It’s been quite an end to the year here in the UK: the general election result proved a surprise for everyone, with the Tories achieving a hefty majority not seen since the days of Margaret Thatcher. Unless you are an ardent Tory supporter, the prospect of celebrating Christmas and trying to put on a happy face doesn’t seem worth the effort after that result but, with that…

To err is human – is that why we fear machines that can be made to err less? | John Naughton

Algorithmic bias can be fixed more easily than the prejudices of people – so why do we still have a problem with it?

One of the things that really annoys AI researchers is how supposedly “intelligent” machines are judged by much higher standards than humans are. Take self-driving cars, they say. So far they’ve driven millions of miles with very few accidents, a tiny number of them fatal. Yet whenever an autonomous vehicle kills someone there’s a huge hoo-ha, while every year in the US nearly 40,000 people die in crashes involving conventional vehicles.

Likewise, the AI evangelists complain, everybody and his dog (this columnist included) is up in arms about algorithmic bias: the way in which automated decision-making systems embody the racial, gender and other prejudices implicit in the data sets on which they were trained. And yet society is apparently content to endure the astonishing irrationality and capriciousness of much human decision-making.

The same human confronted with a decision on different occasions will often decide inconsistently
