“‘Death by robot is an undignified death,’ Peter Asaro, an affiliate scholar at the Center for Internet and Society at Stanford Law School, said in a speech in May at a United Nations conference on conventional weapons in Geneva. A machine ‘is not capable of considering the value of those human lives’ that it is about to end, he told the group. ‘And if they’re not capable of that and we allow them to kill people under the law, then we all lose dignity, in the way that if we permit slavery, it’s not just the suffering of those who are slaves but all of humanity that suffers the indignity that there are any slaves at all.’” – Robin Marantz Henig, “Death By Robot”, in tomorrow’s New York Times Magazine.
This gets close to what I was trying to say in my recent post on self-driving cars, at least in that it offers an alternative to the default discourse of utilitarian calculation.
The further step I would take is to say that fully “considering the value” of a human life may well require awareness of the possibility of one’s own death. By this I mean death, not as a mere empirical fact about a body ceasing to function, but as the final closing of an opening into the world, the cutting off of relationships and projects and concerns and cares.
Still further, full consideration of the value of a human life requires a sense or an experience that these others around me are just like me in that regard: we are all mortal, we are all vulnerable. To look into the eyes of another person is to look into a world of relationships and projects and concerns that can be closed off.
At its best, that recognition is mutual, and creates a connection, a bond, that cannot be reduced to utility, duty, or emotion.
I am far from convinced that a machine could meaningfully participate in such a mutual recognition. That one might be programmed to fake it doesn’t really help: fake concern is almost worse than callousness.