DeepMind now has an ethics unit – which may have helped when it ate 1.6m NHS patient details

Better late than never, I guess

Google's controversial DeepMind has created an ethics unit to "explore and understand" the real-world impacts of Artificial Intelligence.

The DeepMind Ethics & Society (DMES) group will comprise both full-time DeepMind employees and external fellows.

It will be headed by technology consultant Sean Legassick and former Google UK/EU policy manager and government adviser Verity Harding, while advisers will include Columbia development professor Jeffrey Sachs, climate change campaigner Christiana Figueres and Oxford AI professor Nick Bostrom.

The group will look at areas such as privacy and transparency, economic impacts, governance, managing AI risk, and morality and values. All of its research will be published online.

In a statement, the company said: "At DeepMind, we start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes.

"Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work."

DeepMind was acquired by Google for a reported £400m in January 2014. However, the neural network wranglers have come under fire for their controversial use of patient record data from the NHS, the UK's national health service.

Earlier this year DeepMind's NHS partner, the Royal Free Hospital in London, England, was slammed by the UK Information Commissioner for providing it with 1.6 million patient details. ®
