Self-driving cars have been creating quite a buzz in the technology and automobile worlds for more than a decade. The thought of having driverless cars on the roads seems promising, given the many statistics pointing out the magnitude of accidents that occur every year due to human (driver) error. The concept of the driverless car can be traced back to the 1920s; however, the first driverless car did not come to exist until the 1980s, when Mercedes-Benz, working jointly within the EUREKA Prometheus project, built the first one in 1984. Since then, many companies have rolled out their own versions of self-driving cars.
Despite seeming like a key element that could change the course of future transportation, a number of legal and ethical issues need to be addressed before full implementation. Most of the debate about the ethical aspect of these cars focuses on whether the car should save its occupants or pedestrians. In other words, whom should the car harm if it finds itself in one of those unavoidable situations? Do children, older people, or other factors change the equation? To make matters worse, it is possible that self-driving cars will not eliminate the likelihood of a car accident. Besides, there is no legal guide for how such a case would be handled or who would compensate for the damage.
The most common dilemma resembles a famous philosophical brainteaser called the "trolley problem." In this scenario, the driver of a runaway tram must choose between doing nothing, which kills a group of five track workers standing in its path, and pulling a lever to divert the tram onto a different set of tracks, which kills the single track worker standing there.
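The trolley problem is, at bottom, a choice between outcomes with different casualty counts. A minimal sketch of the purely utilitarian answer, with option names and numbers that are illustrative assumptions rather than anyone's real decision logic, might look like this:

```python
def utilitarian_choice(options):
    """Pick the option with the fewest expected casualties.

    options: a dict mapping an option name to its expected death toll.
    """
    return min(options, key=lambda name: options[name])


# Hypothetical framing of the trolley scenario described above.
trolley = {
    "stay_on_course": 5,  # five track workers in the tram's path
    "pull_the_lever": 1,  # one worker on the diverted track
}

print(utilitarian_choice(trolley))  # -> pull_the_lever
```

The sketch deliberately ignores everything the ethical debate is actually about, such as the moral difference between acting and failing to act, which is precisely why the problem remains a dilemma.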
Although these questions are tough to answer because of uncertainties about the consequences, such as who will be affected and to what degree, the same problem carries over to self-driving cars. While accidents caused by arbitrary decisions may be preferable to predetermined ones, one should also remember that machines are not yet blessed with rational, cognitive thinking, and such skills are hard to engineer or train into a machine. This is the essence of "Moravec's paradox": tasks that humans find effortless, such as perception and judgment, are among the hardest to automate.
In 2018, Germany became the first country to attempt to answer these questions with actual guidelines. The proposed rules state that self-driving cars should always strive to minimize human death and shouldn't discriminate between individuals based on age, gender, or any other factor. Human lives should also always be given priority over animals or property.
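The German guidelines described above amount to a strict ordering of priorities: minimize human deaths, never rank people by personal attributes, and put human life above animals and property. A toy illustration of that ordering follows; the `Outcome` class and its fields are hypothetical, invented purely for this sketch.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Outcome:
    """One possible result of an unavoidable-collision scenario."""
    human_deaths: int
    animal_deaths: int
    property_damage: float  # arbitrary monetary units


def severity(o: Outcome):
    # Lexicographic ordering: human deaths dominate everything else,
    # and outcomes are compared only by counts, never by who the
    # affected people are (no age, gender, or other attributes).
    return (o.human_deaths, o.animal_deaths, o.property_damage)


def choose(outcomes):
    """Return the least severe outcome under the ordering above."""
    return min(outcomes, key=severity)


swerve = Outcome(human_deaths=0, animal_deaths=1, property_damage=5000.0)
brake = Outcome(human_deaths=1, animal_deaths=0, property_damage=0.0)
print(choose([swerve, brake]))  # the swerve outcome: no human deaths
```

Note that encoding "no discrimination" here simply means the model never looks at who the victims are, only at how many there are; real guidelines are far richer than any such ranking function.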
There are other equally pressing concerns: the difficulty of identifying the different local and state rules of the road, privacy, cybersecurity against hackers, computer malfunction, weather interfering with sensors, and the inability to interpret human traffic signals. Moreover, experts feel that self-driving cars may create additional problems, such as widening the social gap, unemployment among drivers, and more traffic, since people without driving skills will gain the confidence to step out and go for a ride. The social-gap issue stems largely from financial inequality: not everyone can afford such technology, and a person who pays more may get better service than others, much like economy and first class on airlines.
The utilitarian case for self-driving cars has been both welcomed and attacked. Even though humans are not better drivers themselves, stakeholders argue that it is unethical to entrust human lives to mere machines. They believe that other people should not decide anyone's fate, and that accidents should happen naturally rather than as predetermined results decided by algorithms months or years in advance. Although the positive outcomes outweigh the negative side of self-driving cars, much work needs to be done before they hit the roads. To manufacture ethical vehicles, developers should continue to learn from past experience in risk management and morally challenging situations. Engineers must make trade-offs between mobility and environmental impact. And humans will need to create tools, such as rules and regulations, to protect themselves.