Self-driving cars, like Google's, are not exactly news, but they are still far from being available for public use. Beyond the technical readiness of the solution, there are other aspects that have not yet been fully settled. I don't want to play devil's advocate here, and I am looking forward to automated vehicles, but I want to share some critical thoughts with you.
Who owns the vehicle?
Some conceptions of self-driving vehicles focus only on navigating and steering the vehicle, but I don't think that is a holistic approach. To fully leverage the advantages of smart cities and centralized traffic control, self-driving vehicles should not be the property of individuals, as cars are today. Instead, users should subscribe to a service that lets them request a vehicle on demand. In this conception, the owner of the vehicles is a central service provider, either a private company or the smart city itself.
Who owns the software?
Self-driving vehicles rely on a variety of hardware and software to work. If we consider the vehicles endpoints of a centrally controlled transportation system, all those endpoints need to be registered and managed. A standard configuration and software stack would be deployed from the central management system. Updates would need to be rolled out incrementally to avoid disrupting the on-demand service. Accountability for what the software, and therefore the vehicle, does lies with the service provider, which requires highly professional service operations teams to keep issues to a minimum.
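To make the incremental rollout idea concrete, here is a minimal sketch in Python. It assumes a central management system that updates the fleet in waves, keeping the share of vehicles offline at any time below a threshold; the function name and the 10% threshold are my own illustrative choices, not from any real deployment system.

```python
# Hypothetical sketch: split a fleet into update waves so that no more
# than a given fraction of vehicles is taken offline at once, keeping
# the on-demand service running during the rollout.

def plan_update_waves(fleet_size, max_offline_fraction=0.1):
    """Return lists of vehicle IDs, one list per update wave."""
    wave_size = max(1, int(fleet_size * max_offline_fraction))
    vehicle_ids = list(range(fleet_size))
    return [vehicle_ids[i:i + wave_size]
            for i in range(0, fleet_size, wave_size)]

# With 100 vehicles and at most 10% offline, the update runs in 10 waves.
waves = plan_update_waves(fleet_size=100)
```

A real system would additionally verify each wave's health before proceeding to the next and roll back on failure; the sketch only captures the scheduling constraint.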
Kill or be killed?
Whether you call it an artificial intelligence (AI) or a centrally controlled vehicle, what happens in the case of an imminent accident is not clear. It does not matter whether the vehicle makes the decision on its own or the decision is calculated on a central server; what matters is: how does the vehicle decide what to do?
Imagine the following scenario: A family with parents and children is using the self-driving vehicle service within a smart city. All surrounding vehicles are also centrally controlled to minimize the risk of traffic accidents. A child runs after a ball across the street, directly in front of the vehicle transporting the family. This is a factor external to the traffic system and was not planned for. The central traffic system now has to follow its primary routines and avoid harm to passengers and surrounding humans as well as possible. A crash is imminent even though braking has already started. The vehicle can now only "choose" the best possible option based on the data it computes.
The vehicle can either choose to:
- Steer left to avoid the collision with the child, but head straight into a truck in the oncoming lane.
- Steer right to avoid hitting the child, but collide with a group of teenagers on the sidewalk.
- Stay in its lane, continue braking, and accept the collision with the child on the street.
That is a tragic scenario, and I hope it never becomes reality for anyone. But considering it, on what data would the vehicle base its decision? Would it calculate the likelihood of survival? Would it calculate the lowest insurance cost for the service provider? Would it accept the death of itself and its passengers in favor of others?
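A deliberately oversimplified Python sketch shows why these questions matter. Computationally, "choosing the best option" is trivial; the entire ethical weight sits in the cost function. All action names and cost values below are hypothetical placeholders, not a claim about how any real system scores outcomes.

```python
# Oversimplified sketch: the vehicle picks the option with the lowest
# computed cost. Whoever defines how costs are assigned (survival odds,
# insurance liability, passenger priority, ...) has effectively made
# the ethical decision before the accident ever happens.

def choose_action(options):
    """Return the option with the lowest cost value."""
    return min(options, key=lambda o: o["cost"])

# Hypothetical costs for the three options from the scenario above.
options = [
    {"action": "steer_left",  "cost": 9.0},  # collision with the truck
    {"action": "steer_right", "cost": 8.5},  # collision with the teenagers
    {"action": "brake_only",  "cost": 7.0},  # collision with the child
]
chosen = choose_action(options)
```

Change the three numbers and the "decision" changes, which is exactly the point: the selection logic is not where the morality lives.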
How would you decide if you were driving manually? For a human this would be a split-second decision, and given the lack of reaction time, it is likely the driver would brake as hard as possible but not steer either right or left.
Things like this need to be considered when designing the transportation service and during the construction of smart cities. Any type of potential external influence and risk needs to be proactively mitigated. I am a strong supporter of using technology to enable smart cities and autonomous transportation, but it needs to be safe. Being “safer” than traditional driving in annual death toll statistics is not good enough.