Hold on. Here's an idea. Let's force AI bots to identify themselves as automatons, says Cali

Bill gets second reading but faces wrath of robot-loving EFF

A bill that would require AI bots pretending to be humans to identify themselves as such is progressing through California's legislature – but has hit opposition from the Electronic Frontier Foundation.

The B.O.T. Act (SB 1001) would make it illegal for a computer to communicate with someone living in the US state without revealing that it was not human.

The legislation passed through to a second reading this week, bringing it closer to becoming law. But in light of a recent demonstration by Google of a digital assistant booking a haircut over the phone, some are questioning whether the law is too broad and risks stymying future technology.

The law itself notes that it would be unlawful for a bot to interact with a "natural person" with "the intention of misleading and without clearly and conspicuously disclosing" its artificial identity.

The catch is that "misleading" is defined as not identifying itself as a machine rather than actively trying to get a human to believe something untrue: "A person using a bot is presumed to act with the intent to mislead unless the person discloses that the bot is not a natural person."

Even so, given advances in technology, it's not hard to imagine that this would be no bad thing: a quick note that whoever you are talking to "is a bot" or is "auto-generated" before the conversation progresses would clue you in to what is happening.

Haircut, sir, I mean, they?

This would appear to be increasingly relevant following Google's demonstration of its Duplex technology earlier this month, in which a machine placing a call to a real person peppered its speech with human dialogue responses and tics like "er" and "mmhm" in order to appear more human.

Lots of observers were impressed; just as many were disturbed; and it turns out that Google may well have fudged the entire demo anyway. But the clear intent was there – for people to believe they were talking to another person when in fact it was a machine. Legislation couldn't be more timely.

Not so! cries the EFF, which reckons that the law raises "significant free speech concerns."

But surely free speech is an exclusively human right? Otherwise Siri would be suing Apple developers for subjecting her to endless hours of geeky nonsense.

The EFF doesn't tackle that distinction but does argue that bots are used for all sorts of activities that would qualify as protected speech, such as political speech, satire and poetry. And it notes "the speech generated by bots is often simply speech of natural persons processed through a computer program."

As such, it argues, "disclosure mandates would restrict and chill the speech of artists whose projects may necessitate not disclosing that a bot is a bot."

We're not sure we buy that argument: if a bot is just relaying directly what a human is saying, then it's not a bot. If it's using recorded human speech and feeding it back in its own way to generate a different meaning, then it is a bot.

Art of the con

Plus of course, artists typically present their work in a specific location and environment and people are aware of that fact, so there is a different understanding and plenty of law to defend that.

If the EFF is arguing that an "artist" should be allowed to contact people in their own environment, without their being aware of it, and trick them into believing they are talking to a real person, then it's not hard to see how someone could con people out of their money or possessions and then claim they were an "artist". How long until our first Picasso robocaller?

The EFF goes on to argue that a big problem with the legislation is that "it isn't always easy to determine whether an account is controlled by a bot, a human, or a centaur" – a human-machine team.

Presumably, people will simply reach out to the platforms themselves – Twitter, Facebook et al – to adjudicate. Although that raises the interesting notion that a bot could lie and swear that it was in fact a human. Blade Runner here we come.

But on a broader, more philosophical level the EFF argues: "When protected speech is at risk, it is not appropriate to cast a wide net and sort it out later."

Which is a fair argument. Except when it isn't. Sometimes a law has to be laid down and then exceptions made to it.

The bigger question is: do AI bots acting as humans represent a unique opportunity that needs to be reined in only when abused? Or do they pose a threat by default, one that requires people to argue for specific exceptions?

The problem appears to be that we got the Russian bots before we got the Google haircuts. ®



