The dream of a computer system with godlike powers and the wisdom to use them well is merely a theological construct

The House of Lords report on the implications of artificial intelligence is a thoughtful document which grasps one rather important point: this is not only something that computers do. Machine learning is the more precise term for the technology that allows computers to recognise patterns in enormous datasets and act on them. But even machine learning doesn’t happen only inside computer networks, because these machines are constantly tended and guided by humans. You can’t say that Google’s intelligence resides either in its machines or in its people: it depends on both and emerges from their interplay.

Complex software is never written to a state of perfection and then left to run for ever. It is constantly being tweaked, increasingly often as part of an arms race with other software or networks that are being used to outwit it. And at every step of the way, human bias and human perspectives are involved. It couldn’t be otherwise. The dream of a computer system with godlike powers and the wisdom to use them well is a theological construct, not a technological possibility.

The question, then, is which forms of bias and which perspectives are desirable, and which we should guard against. It is easy to find chilling examples: the Google image recognition program that couldn’t distinguish between black people and gorillas, because it had been trained on a dataset where almost all the human faces were white or Asian; or the program used in many American jurisdictions to make parole decisions, which turns out to be four times as likely to recommend that white criminals be freed as black ones when all other things are equal. Without human judgment we are helpless against the errors introduced by earlier human judgments. This has been known for some time, but the report discusses these dangers very clearly.