EU privacy gurus peer at Windows 10, still don't like what they see
Article 29 has 'concerns' about Microsoft's data slurp
The EU’s top privacy body has been probing Windows 10, but isn’t satisfied, even after Microsoft agreed to tweak the consent settings.
Microsoft unveiled new privacy controls as part of its forthcoming “Creators Update” to Windows 10 due this spring. However, Reuters reports that the Article 29 Working Party, which represents the data protection commissioners of the EU member states, still has “concerns”. The Working Party first raised the issue of personal data processing in Windows 10 with Microsoft last year.
In a statement seen by Reuters, the Article 29 Working Party finds that the new consent screen presented to users during the installation process still doesn’t sufficiently inform them about what personal data is being collected, and what it is subsequently used for.
Paradoxically, no company has done more than Microsoft to challenge antiquated laws that provide insufficient personal data protection to users. It has filed four separate lawsuits against the US government, with some success – particularly in the so-called “Dublin Warrant” case, over a law that allows the state to access personal information stored on Microsoft servers overseas. The most recent is over gagging orders. Microsoft has launched these lawsuits because, it says, for the public to use the cloud, the cloud must be trusted.
Speaking last autumn, Microsoft CEO Satya Nadella promised to comply with regional data protection laws, and to be as transparent as possible.
Yet in the Nadella era, Microsoft has made its data collection pervasive, subtle and obscure.
"If the operating system was the first run time, the second run time you could say was the browser, and the third run time can actually be the agent. Because in some sense, the agent knows you, your work context, and knows the work. And that's how we are building Cortana,” he told developers in an open conference last week.
That’s because the CEO believes that for cloud computing to be “useful”, the cloud needs to know everything about you before the algorithms can perform their “help”. Nadella called what is popularly referred to as “Artificial Intelligence” – which really means using the server farms to run fairly old probabilistic neural networking and machine learning techniques, with a modern UX slapped on top – the “third run time”.
Nadella explained that the “first” run time was the operating system, and the second the browser.
So from Windows 8.0 in 2012, which presented personal data acquisition as largely optional, to Windows 10 three years later, where the consent screen resembles a click-through EULA, Microsoft has made acquiring personal data a top priority.
If the AI hype flops, do you think we'll get our privacy back?
Microsoft isn’t alone, of course: Google and Facebook have pursued personal data collection just as aggressively, also touting machine learning “breakthroughs” as the justification. So too did Evernote, which badly miscalculated how its users would react to new data protection rules, introduced on the basis that we would be wowed by AI. Evernote tore up the new rules within a few days.
AI hypes come and go with only one certainty: all have failed to deliver in the marketplace. Interestingly, the latest hype was sparked some three or four years ago not by any new technical breakthrough, but by a new anxiety amongst the chatterati that robots would soon take middle class jobs. The anxiety spread like wildfire amongst opinion columnists, political advisors and think-tanks: a fairly homogeneous, and not especially tech-literate, class.
The technology industry then realised it had to produce the “breakthrough”, and kicked off a data collection arms race in the process. But there’s little evidence so far that the machine-generated “help” or “intelligence” built into chatbots or personal AI assistants is at all helpful to consumers. If that remains the case, the supposedly society-changing breakthroughs will be elusive, with machine learning confined to demonstration niches such as language processing or image recognition.
Should that happen, do you suppose Google, Facebook or Microsoft will then delete the data, hand it back to us... or retain it? What’s your guess? ®