You're responsible for getting permission from subjects if you want to use Windows Photos' facial recog feature
Microsoft gets nervous, dumps burden of consent on users
Microsoft has begun rolling out an update to the Photos app in Windows 10 that prompts you to confirm you have obtained "all appropriate consents from the people in your photos and videos" before you can use facial recognition to find snaps of your friends and loved ones.
The feature itself is not new. Photos has attempted to use facial recognition to group and tag photos since at least 2017. The setting in previous versions of the app simply stated: "Allow naming and grouping of people in your photos and videos by turning on face detection and recognition, and storing this data on your device."
Using facial recognition to organise photos is not unique to Microsoft either: it is a standard feature across the industry. Adobe Photoshop Elements, a consumer app for photo editing and organisation, for example, uses it. Google Photos has a "Face grouping" feature that uses facial recognition, but disables it in some countries, including those in Europe.
Facebook (of course) uses facial recognition "to let us know when you're in other photos and videos so we can create better experiences" but allows you to opt out.
Apple Photos "uses advanced computer vision to scan all of your photos and recognize the people, scenes, and objects in them".
Across all these platforms, though, these features are designed so that it should not be possible for strangers to know who you are from a photo.
Windows 10 users are being asked to assert that they have obtained consent from people in their photos
Microsoft has decided that additional safeguards are needed, and has come up with the notion that you should obtain "appropriate consents" from the people in your pictures.
This is enabled in build 19041 of the Photos app; it is not specific to a build of Windows 10 itself.
What consents are appropriate? A link to a privacy statement takes you to a massive general privacy document, which has a section on Photos full of disclaimers. Facial groupings "are not accessible beyond the context of the device file system", it says. And then:
You, and not Microsoft, are responsible for obtaining consent to link Contacts with your facial groupings, and you represent that you have obtained all necessary consents to link your photos and videos into groups.
Most of us take lots of photos, and Microsoft's OneDrive app for Android and iOS has an option to upload all your pictures so they appear in Photos on Windows 10. You can imagine the befuddled or angry response you would get if you endeavoured to secure consent from everyone you snap, presuming you have some clue what the "necessary consents" might be (most of us do not) and that securing them is feasible (it often is not).
The dialogue is therefore not a useful one unless you respond by turning the feature off. The question to ask is why the feature exists at all if Microsoft is so doubtful about its legality in the absence of consent.
The company has also delivered a textbook example of unintuitive UI: a confirmatory dialogue that makes it unclear whether you should click Accept or Decline to disable the feature. At least one user was caught out, asking: "How do I get the new photos facial [recognition] enabling 'offer' to STOP offering after I've declined?"
Leaving aside its inane implementation, Microsoft's move is in line with increasing disquiet about facial recognition technology. The company last week told the Financial Times that it had deleted a database of 10 million celebrity (and other) faces published in 2016 and used to train recognition systems around the world, though private copies remain easily available.
IBM was also smacked by concerns over subjects' consent in March this year, when it used Creative Commons-licensed Flickr images for an ML training dataset.
Much of the controversy around facial recognition centres on police usage. Megan Goulding, lawyer at UK human rights org Liberty, has said such use "belongs to a police state and has no place on our streets" and the ACLU and other rights groups in the US are also calling for a ban.
Use of Amazon's Rekognition service has attracted protests in a variety of contexts, and at a recent annual shareholders' meeting, there were proposals to make "sales of facial recognition technology to government agencies" subject to independent review and to require the company to "commission an independent study of Rekognition and report to shareholders regarding... the extent to which such technology may endanger, threaten, or violate privacy and or civil rights". The board opposed the proposals, citing "the material benefits to both society and organizations of Amazon Rekognition's image and video analysis capabilities", and they were defeated.
AI used on your local device to help tag photos seems a long way from use by police or employers, but it demonstrates the ubiquity of the technology. Tighter regulation may come, but nothing will now prevent it from being used at least some of the time in ways we dislike. ®