Reith Lectures: AI and Why People Should Be Afraid

Rory Cellan-Jones
Technology correspondent
@BBCRoryCJ on Twitter


The Reith Lectures take place in Newcastle, Manchester, Edinburgh and London

Professor Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence, University of California, Berkeley, is giving this year’s Reith lectures.

His four lectures, Living with Artificial Intelligence, address the existential threat of machines more powerful than humans – and offer a way forward.

Last month he spoke to then BBC News tech correspondent Rory Cellan-Jones about what to expect.

How did you shape the lectures?

The first drafts I sent them were way too sharp, way too focused on the intellectual roots of AI and the different definitions of rationality and how they have emerged over the course of history and stuff like that.

So I recalibrated – and we have a lecture that presents AI and its future prospects, both good and bad.

And then we talk about weapons and we talk about jobs.

And then the fourth will be, “OK, here’s how we avoid losing control of AI systems in the future.”

Do you have a formula, a definition, for what artificial intelligence is?

Yes: machines that perceive and act and, hopefully, choose actions that will achieve their goals.

All these other things you read about, like deep learning and so on, are just special cases of that.

But couldn’t a thermostat fit into this definition?


More and more, home appliances have a degree of intelligence

Thermostats perceive and act, and in a sense, they have a little rule that says, “If the temperature is below this, turn on the heat.”

“If the temperature is higher than that, turn off the heat.”

So it’s a trivial program and it’s a program that was written entirely by one person, so there was no learning involved.
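Russell’s thermostat rule can be written in a few lines – a minimal perceive-act agent with a fixed, hand-written rule and no learning. (The function name and the 20-degree setpoint below are illustrative, not from the lecture.)

```python
# A thermostat as a trivial perceive-act agent: it perceives the
# temperature and acts by switching the heating on or off.
# The rule is written entirely by hand -- no learning involved.
def thermostat(temperature, setpoint=20.0):
    """Return 'heat_on' if below the setpoint, else 'heat_off'."""
    if temperature < setpoint:
        return "heat_on"
    return "heat_off"

print(thermostat(18.0))  # heat_on
print(thermostat(22.0))  # heat_off
```

The point of the example is the contrast Russell draws: this agent and a self-driving car both “perceive and act”, but they sit at opposite ends of a spectrum of decision-making complexity.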

At the other end of the spectrum, you have self-driving cars, where the decision-making is much more complicated and a lot of learning has been involved to achieve that quality of decision-making.

But there is no hard and fast line.

We can’t say that anything below the line doesn’t count as AI and anything above it does.

And is it fair to say that there have been great strides in the last decade in particular?

In object recognition, for example, which was one of the things we’ve been trying to do since the 1960s, we’ve gone from completely pathetic to superhuman, by some measure.

And in machine translation, again, we went from completely pathetic to really, really good.

So what is the destination of AI?


Robots are increasingly used as an educational resource in schools – but will they ever build one?

If you look at what the founders of the field said their goal was, it was general-purpose AI – meaning not a program that is really good at playing Go or really good at machine translation, but something that can do just about anything a human could do, and probably a lot more, because machines have huge bandwidth and memory advantages over humans.

Let’s say we need a new school.

The robots would show up.

Robot trucks, construction robots, construction management software would know how to build it, would know how to get permits, would know how to talk to the school district and the principal to work out the right design for the school, and so on – and a week later you have a school.

And where are we on this journey?

I would say we are quite far away.

Clearly, there is still major progress to be made.

And I think the most important is complex decision-making.

So if you think about the example of building a school – how do we start from the goal of wanting a school, and then all the conversations happen, and then all the construction takes place? How do humans do this?

Well, humans have the ability to think at multiple scales of abstraction.

So we could say, “OK, well, the first thing we need to figure out is where we’re going to put it. And how big should it be?”

We don’t start by thinking about whether I should move my left finger first or my right foot first; we focus on the high-level decisions that need to be made.

You have painted a picture of AI having made a lot of progress, but not as much as some think. Are we, however, at a point of extreme danger?

There are two arguments as to why we should be careful.

The first is that even though our algorithms are currently far from general human capability, when you have billions of them running they can still have a very big effect on the world.

The other reason for concern is that it is entirely plausible – and most experts think it very likely – that we will have general-purpose AI within our lifetimes or the lifetimes of our children.

I think if general-purpose AI is created in the current context of superpower rivalry – you know, whoever runs AI runs the world, that kind of mentality – then I think the outcome could be the worst possible.

Your second lecture is about the military use of AI and the dangers associated with it. Why does this deserve an entire lecture?


The military is already experimenting with AI and robots on the battlefield

Because I think it’s really important and really urgent.

And the reason it’s urgent is that the weapons we’ve been talking about for six or seven years are now starting to be made and sold.

So in 2017, for example, we produced a film called Slaughterbots about a small quadcopter, about 3 inches [8cm] in diameter, that carries an explosive charge and can kill people by getting close enough to them to detonate.

We first showed it at diplomatic meetings in Geneva, and I remember the Russian ambassador sniggering, snorting and saying, “Well, you know, this is just science fiction, we don’t have to worry about these things for 25 or 30 years.”

I explained what my robotics colleagues had said, which was that no, they could assemble a weapon like this in a matter of months with a few graduate students.

And the following month, so three weeks later, the Turkish manufacturer STM [Savunma Teknolojileri Mühendislik ve Ticaret AŞ] actually announced the Kargu drone, which is essentially a slightly larger version of the Slaughterbot.

What do you hope the reaction to these lectures will be – that people come away scared, inspired, determined to see a way forward with this technology?

All of the above – I think a little fear is appropriate. Not the kind where you wake up tomorrow morning thinking your laptop is going to kill you, but fear about the future – I would say the same kind of fear we have about the climate or, rather, should have about the climate.

I think some people just say, “Well, that looks like a beautiful day today,” and they don’t think about the longer timescale or the bigger picture.

And I think a little fear is necessary, because that’s what makes us act now rather than when it’s too late, which is what we have done with the climate.

The Reith Lectures will be broadcast on BBC Radio 4, the BBC World Service and BBC Sounds.
