Voice prints at risk from impostors
Up to 20% of voice biometric samples are weak enough to be matched by ‘wolves’
Ten to twenty per cent of utterances collected by voice biometrics systems are not strong identifiers of the individuals who spoke them, according to Dr. Clive Summerfield, founder of Australian voice biometrics outfit Armorvox. Voice biometrics systems could therefore wrongly identify users under some circumstances.
Most voice biometrics implementations require users to utter a pass phrase or recite personal details as part of the authentication process. Dr. Summerfield told The Register that a small fraction of the population, which he labels “wolves”, have voices that match many other voice prints. Because an attacker would still need to know the pass phrase, voice biometrics systems are not likely to be casually cracked without an effort to also collect users' secret words. But he also feels that most voice biometrics systems build in tolerances for those with less distinct voice prints, thereby applying a lower authentication standard to all users.
Some of the less effective voice prints are the result of ambient noise at the time utterances are collected. Signal clipping applied by carriers can also have the unintended consequence of reducing the quality of voice prints. And some individuals simply have generic voice prints that share qualities with many others. Whatever the cause, Summerfield labels those with poor voice prints “goats”, in contrast to the majority of “sheep” whose voices are a strong authentication token.
Armorvox’s answer is a system it calls ImpostorMap, which tests every utterance in a database to see if any could authenticate more than one user. Those with less secure voiceprints can then be encouraged to re-enrol with a better sample. Summerfield says this makes voice biometrics a stronger authentication technique, because users end up with more distinct utterance collections that are harder to imitate.
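ImpostorMap's internals are proprietary, so the following is only an illustrative sketch of the general idea described above: cross-match every enrolled voiceprint against every other and flag users whose print would also authenticate someone else. The voiceprints here are modelled as plain feature vectors, and all names, the similarity measure, and the threshold are invented for illustration.

```python
# Hypothetical sketch of cross-matching voiceprints to find weak ones.
# Not Armorvox's actual algorithm; vectors, names and threshold are made up.
from itertools import combinations
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def find_weak_prints(voiceprints, threshold=0.95):
    """Return user IDs whose print could also authenticate another user."""
    weak = set()
    for (user_a, vec_a), (user_b, vec_b) in combinations(voiceprints.items(), 2):
        if cosine_similarity(vec_a, vec_b) >= threshold:
            weak.update({user_a, user_b})  # candidates for re-enrolment
    return weak

prints = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.89, 0.12, 0.31],  # nearly identical to alice's print: a "goat"
    "carol": [0.1, 0.95, 0.2],
}
print(sorted(find_weak_prints(prints)))  # → ['alice', 'bob']
```

A real system would compare speaker-model scores rather than raw vectors, but the pairwise cross-test and "flag both parties for re-enrolment" step is the essence of what the article describes.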
The company has already secured channel partners in Australia and is actively seeking implementation partners beyond Antipodean shores. ®
Expect people^Wusers to conform to the machine, why don't you
This sounds a wee bit desperate to me. Against better judgement (a 20 per cent intrinsic failure rate, what fun) they're pushing because-we-claim-we-can technology. And those details you need for impersonation? Eh, often quite easy to find, not a chore for the experienced impostor (just have to have the voice, might turn into an interesting line of contracting down the line) and, oh, you're expected to SAY IT OUT LOUD every time you talk to the machine. Just hope nobody ever overhears you then, eh. Or one of them newfangled devices that can RECORD then PLAY BACK sound. Luckily those are really rare in practice.
And then there's the fact that biometrics generally suck for casual identification: they're adversarial in nature and finicky at the best of times, even before you consider illness or a night out with the lads. I don't know why people keep believing that a dollop of biometric sauce will somehow make them more secure; it's more likely to lock them out of their own identities instead. Being securely at rock bottom isn't quite my cup of tea. Nor do I know why all those companies keep digging that mine for gold, as from a security perspective it's fool's gold, and with that, worse than useless. Lots of fools buying up the gold, apparently. Wish they'd have the good grace not to foist it upon anyone but themselves.
This sort of thing is laughably insecure given how easily it can be casually, even accidentally, compromised, and it doesn't stand a snowflake's chance in hell against spearphishing. Voice printing might be useful for lots of things, but as biometric authentication? Not so much.
Combining the worst of...
* The false positive / false negative problems of biometrics
* The transmission encryption of telnet
* The secrecy of password reset questions
And the solution is to ask the user to change their voice unnaturally. What could possibly go wrong?
The guy's never heard of Rich Little, has he?
Or maybe he didn't recognize Mr. Little's voice.