I have not been following the hype over self-driving cars closely enough to tell whether it’s a passing fad or something more enduring.
As is often the case with emerging technologies that excite people’s imaginations, many claims for the benefits of self-driving cars come across as exaggerated, almost utopian.
In any case, the benefits are cast along a single dimension of value: self-driving cars will be convenient and profitable, and we’ll all be better off if they become more prevalent.
I’m far from convinced.
I don’t imagine I’ll be able to render a final judgment on self-driving cars in a single blog post, but I do have a concern about something that seems to be overlooked.
I suspect the self-driving car is generally regarded as a modification of an existing technology, the automobile, the implications of which are to be understood in terms of what might be called an automotive frame of reference.
(I’m here borrowing an idea from technology studies – from Wiebe Bijker in particular – of a technological frame, a particular way of constructing the meaning of an artifact.)
Within that frame of reference, it’s all about cars and the relations between cars: how easily cars can or cannot move through their environment, the price of fuel for cars, the availability of storage for cars, and so on.

As the automotive frame came to dominate the developed world, for example, the meaning of the street changed: where once it was a multi-function public thoroughfare, it is now, with only rare exceptions, a corridor for cars to move through the landscape; any other use of the street has to accommodate itself to the dominance of cars.
Any modification of the automobile – including various degrees of autonomy – is mainly understood in those terms: Will it help cars move more efficiently through the landscape? Will it prevent accidents?
As someone who sometimes partially opts out of the automotive frame of reference, I can say that the view of an autonomous car from the sidewalk is quite different.
Let me pose a hypothetical encounter.
It often happens that I’ll be walking down the sidewalk on the left-hand side of the street, approaching an intersection. A car on the cross street is waiting at the stop sign there, and the turn signal tells me the driver intends to turn right.
I am approaching that car from the driver’s right-hand side, so the driver is pointedly looking away from me, to her or his left, watching the oncoming traffic and trying to judge when it will be safe to turn.
In the town in question, pedestrians always have the right of way in such situations: I should be allowed to cross in front of the car.
However, I know quite well that it might not occur to the driver to glance to her or his right before hitting the accelerator, and I really would rather not be in the way just then.
So, I stop and wait.
Usually, the driver glances at me; we make eye contact; sometimes nods are exchanged, sometimes the driver waves me across; I wave back in thanks and go on my way.
I could spend a whole post digging into the significance of that brief, wordless exchange, but it seems to me that mutual recognition between human beings is ethically powerful in its own right, in a way quite distinct from benefits and harms.
Now, suppose I am approaching that same intersection, and a different car is waiting there to turn right. Suppose also that I can tell it’s a self-driving car, and that the timing of the turn will be determined by an array of sensors and an algorithm.
The human in what would otherwise be the driver’s seat is reading something on a tablet, or snoozing, or grooming, but is in any case not paying much attention to anything outside the car.
Does that array of sensors detect my presence at the corner, or my intention to cross? Is the system programmed to yield the right of way to me?
Suppose, for the sake of argument, that the driving system in the car detects my presence, registers me as human, and some fail-safe kicks in to prevent the car from turning, effectively yielding the right of way to me.
How would I know that?
I can’t make eye contact with a sensor array. There can be no mutual recognition, not with the depth and quality of meaning to be found in a brief nod, or a wave, or a smile from a human being.
I would be at an impasse, unwilling to risk crossing in front of a machine to which I cannot relate.
It occurs to me that vulnerability is an important aspect of the mutual recognition between humans, even between humans in driver’s seats and those on foot: we are all mortal and fallible, we can all feel pain and loss.
That puts us on the same level, so the nod or the wave is a gesture between equals, as members of a community.
A self-driving car cannot participate in that shared acknowledgement of vulnerability.
I have the same concern about what are called “lethal autonomous robots” – killing machines with on-board systems that determine whether and at what to fire – but that’s very much a topic for another post.
It also occurs to me to wonder if I’d be able to tell, as I walk toward the intersection, whether the car waiting there is autonomous.