
Latest from the coming AI robot apocalypse: we're going to be fine

Of course by "we", we mean educated, well-off Westerners

Tackling society's ills

It's not too much of a stretch to see how this kind of system could provide the answer to one of the biggest problems we as a society are facing right now: the amplification of a few persistent voices, and the creation of silos of thought where repetition leads to reinforcement.

As humans, we are pretty hopeless at dealing with lots and lots of inputs – like making sense of thousands of tweets on a given topic – but we are pretty effective at identifying patterns. AI machines could help harness this tidal wave of information, funneling it without discarding it, and drawing out lessons and patterns that can then be used to gain a broader understanding.
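As a very rough sketch of what that first pass could look like, the snippet below clusters a handful of invented tweets into recurring themes. The sample texts, the cluster count, and the choice of scikit-learn are all illustrative assumptions, not anyone's production pipeline.

```python
# A minimal sketch of machine-assisted pattern-finding over many short texts.
# The tweets and the number of clusters are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tweets = [
    "The new phone's battery life is terrible",
    "Battery drains in half a day on the new phone",
    "Camera on this phone is stunning in low light",
    "Low-light photos look amazing",
    "Support never answered my ticket",
    "Customer service kept me on hold for an hour",
]

# Turn each tweet into a weighted word-frequency vector ...
vectors = TfidfVectorizer(stop_words="english").fit_transform(tweets)

# ... then group similar vectors, so thousands of inputs collapse into
# a handful of recurring themes a human can actually read.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)

for label, tweet in sorted(zip(kmeans.labels_, tweets)):
    print(label, tweet)
```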

This meta-level of artificial intelligence is what will drive things beyond the amusing and cutesy to something that we would actually recognize as intelligence.


Element AI, for example, is focused on pulling in datasets from everywhere and seeing whether a broader understanding of what is going on can be used to identify new market opportunities. That is what entrepreneurs and business development people do within corporations but, again, they are limited by their ability to process large amounts of information. A machine can do that first pass.

Matroid's Reza Zadeh – who is also a professor at Stanford University – told attendees that his PhD students were most excited about AI systems that work one level up, finding existing models and neural networks to apply to new situations. In other words, building up knowledge and expertise.
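That "one level up" approach is, in today's terms, transfer learning: take a model somebody else has already trained and adapt it to a new problem. Below is a minimal sketch using PyTorch and torchvision; the ten-class target task is a placeholder, and none of this reflects Matroid's actual systems.

```python
# A minimal transfer-learning sketch: reuse an existing ImageNet model
# for a new task instead of training from scratch. The ten-class target
# task is a placeholder for illustration.
import torch
import torch.nn as nn
from torchvision import models

# Start from a network that already "knows" general visual features.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the accumulated knowledge ...
for param in model.parameters():
    param.requires_grad = False

# ... and bolt on a new final layer for the new problem.
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new layer gets trained; everything else is reused expertise.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.01)
```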

Zadeh also points to computer vision as a revolutionary next wave in computing. "A computer can now see," he points out. "That is as profound as us being able to type into a computer." In other words, the move from punch cards to writing lines of code will be replicated, but this time using senses we associate with higher life-forms – seeing and hearing.

Collaborators

One thing that all the panelists were sure of – which may also help put to rest the idea of a mad scientist creating demented AI robots – was that progress in this field depends on collaboration across fields.

"If someone says their technique is the best, that just means they don't know enough about the other techniques," argued Zadeh.

That awareness of other fields was neatly illustrated when Zadeh – who is focused on vision – said he was willing to bet $5 that in five years' time you still won't be able to ask a digital assistant to deliver your favorite food (because it won't be able to grasp the context and subsequent steps required). "We can do that right now," replied Gamalon's Vigoda.

But it wasn't all sunshine and rainbows. All the panelists acknowledged that this degree of smart learning comes with potential problems.

Patterns, expertise, intelligence – however you want to characterize these smart accumulations of information – depend on the information they receive, in much the same way we humans do.

But what we call human experience will, for machines, be datasets – and you could flood a machine with the wrong datasets or experiences, creating a monster filled with deeply held convictions and prejudices.
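A toy illustration of how that happens: give a model a skewed sample of "experience" and it will confidently learn a prejudice. Everything below is synthetic and deliberately exaggerated.

```python
# Toy illustration: feed a model skewed data and it learns a prejudice.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)    # an attribute that is irrelevant by construction
skill = rng.normal(0, 1, n)      # the only real driver of the outcome
outcome = skill + rng.normal(0, 0.5, n) > 0.3

# Skewed data collection: records from group 1 mostly survive when skill
# is high, so in this "experience" the group label correlates with success.
keep = (group == 0) | (skill > 0.5)

# A model trained on that experience, without access to the real driver,
# confidently treats group membership as predictive.
model = LogisticRegression().fit(group[keep].reshape(-1, 1), outcome[keep])
print(model.coef_)                # a strong positive weight on "group"
print(model.predict([[0], [1]]))  # different verdicts for otherwise identical cases
```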

"We need to team up, and to regulate properly," argued Jean-Francois Gagne. "Data is useless out of context and can be super meaningful in the right context. It can be used for you or against you." He suggested that some kind of agreement, like a Creative Commons, for information consent is needed: you agree what you are giving away in terms of data, and for how long.

Bias and prejudice

There is a question of bias: what looks like a big difference can, in a broader sense, be quite small. If the average life expectancy of one group is two years higher than that of another, how much weight should that carry when looking at wellness factors? Is two years really that big against an 80-year lifespan?
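The arithmetic behind that question is trivial, but worth making explicit (numbers taken from the hypothetical above):

```python
# The hypothetical from the paragraph above: a two-year gap in average
# life expectancy, set against an 80-year lifespan.
gap_years = 2
lifespan_years = 80
print(f"Absolute gap: {gap_years} years")
print(f"Relative gap: {gap_years / lifespan_years:.1%}")  # 2.5%
```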

Equally, what if the data revealed that certain specific groups did far worse in academic studies? Do you focus resources on them to raise the scores, on the assumption that all human beings are broadly equal, or do you pull resources away, on the assumption that limited resources are more efficiently spent elsewhere?

Could we end up creating an army of digital James Damores? A group of closeted intelligent machines with narrow experience but access to just enough information to develop and reinforce prejudices? Yes, we could. Very easily.

So how do we make sure that artificial intelligence machines are given a rounded, broader education of the world when the economic value will be in getting them to make decisions that benefit their creators and owners?

If you have a good answer, perhaps share it with humanity first. ®
