Lesser of Two Evils

02 May 2017

Ethics in Software Engineering

Software engineering as a discipline gives someone the ability and power to create programs that will run in various ways based on certain conditions. This can be great, since software can automate numerous tasks that would otherwise take people large amounts of time to complete on their own. That being said, ethical implications arise when designing software whose output can have life-altering ramifications for the people affected by it. Therein lies the ethical dilemma of software engineering. Overall, I believe ethics within the context of software engineering means creating software that will, in the worst-case scenario, do the lesser of two evils.

Ethical Dilemma

A great example of an ethical dilemma in software engineering is the programming of self-driving cars. Self-driving cars can be extremely useful for a lot of different reasons: maybe you're not a good driver, so you feel it would be safer to have a car with a self-driving feature, or maybe you don't want to risk driving home tired or buzzed from drinking that night. These reasons make self-driving cars seem extremely attractive on the surface. However, the ethical dilemma behind this kind of software comes in when you must decide what your software will do in a situation where there is no way to avoid someone getting hurt, whether it be people in your surroundings or the driver and passengers themselves. There are a few different trains of thought to consider when deciding what the lesser of two evils would be in this case.

On one hand, do you have the car prioritize saving the driver and passengers and risk hurting the pedestrians? Or do you prioritize the pedestrians' safety over the occupants of the vehicle? It's hard to say, because each situation is different. For instance, what if more pedestrians would get hurt than vehicle occupants? Do you simply choose to hurt fewer people? In some respects, this would seem like the easier answer: "go with the option that hurts the fewest people." Consider this, though: what if one of those passengers is a child? Can we weigh one child against ten people who aren't children? It is a rough choice, because those adults could be parents too and have children who would be left as orphans.
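To make the "go with the option that hurts the fewest people" rule concrete, here is a minimal sketch of what such a decision function might look like. Everything in it is hypothetical and exists only for illustration: the Outcome structure, the harm counts, and the child_weight parameter are my own assumptions, not how any real self-driving system is built.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """A hypothetical maneuver and the people it would put at risk."""
    name: str
    adults_harmed: int
    children_harmed: int

def least_harm(outcomes, child_weight=1.0):
    """Return the outcome with the lowest total harm score.

    With child_weight=1.0 every person counts equally, which is the
    plain "hurt the fewest people" rule. A larger weight encodes the
    (contested) idea that harming a child should count for more.
    """
    def harm(outcome):
        return outcome.adults_harmed + child_weight * outcome.children_harmed
    return min(outcomes, key=harm)

# The scenario from the text: one child in the car versus ten adults outside it.
choices = [
    Outcome("protect the occupant", adults_harmed=10, children_harmed=0),
    Outcome("protect the pedestrians", adults_harmed=0, children_harmed=1),
]
print(least_harm(choices).name)                   # counting heads: "protect the pedestrians"
print(least_harm(choices, child_weight=12).name)  # weighting the child: "protect the occupant"
```

The point of the sketch is not that anyone should code the choice this way, but that writing it down forces someone to pick an actual number for child_weight, which is exactly the kind of judgment this post argues has no clear-cut right answer.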

Here is another scenario: say car companies choose to go with a safety option that would save the "most" lives, but this option does not guarantee the safety of the vehicle's occupants. This could pose a problem, because when consumers find out that their car is not set up to protect them first, they most likely will not want to buy a self-driving car, which will hurt the business for self-driving cars. Would you buy a car knowing that it is programmed to kill you in a specific situation? The practical choice for companies that sell self-driving cars would therefore be to save the driver and passengers. But if that were to happen, it would partly defeat the purpose of self-driving cars, since the big selling point is that they are safer than human-driven cars; a car programmed to sacrifice bystanders to protect its occupants behaves, in those moments, much like a self-interested human driver. Should this happen, the number of lives at risk in those unavoidable-crash situations would be no different from the current status quo.

So this raises the question: do you prioritize the safety of the vehicle's occupants no matter what in order to get more self-driving cars on the road, with the rationale being that the more self-driving cars we have, the more lives we can save overall, since fewer human-driven cars means fewer accidents overall? When you keep all these things in mind, it's easy to see that there is no single correct answer. The more detailed the scenario you give yourself, the harder it becomes to come up with a clear-cut "right" choice, and that is part of what makes ethical decisions so difficult.

Personal Standpoint

From a personal standpoint, with regard to self-driving cars and the possible ramifications of the decisions the software behind them can make, I think the ethical choice would be to program the car to harm the fewest people. This decision comes from the standpoint of choosing the lesser of two evils, in this case prioritizing saving as many lives as possible. To be honest, though, if I were working for a self-driving car company, I would pick the choice of saving the passengers of the vehicle. This would help guarantee business for the car and create jobs for people in this field. In the long run, it would also save more lives, because more self-driving cars means more lives saved, since there would be fewer accidents caused by drunk driving or falling asleep at the wheel. It seems to have a lot more benefits, not just for the economy but also for the lives of our fellow citizens.

When making these kinds of decisions, you not only have to judge what the lesser of two evils is, but also consider whether the practical choice outweighs the ethical one. Do you choose the side that will ultimately put more self-driving cars on the road and generate more jobs, which will help the economy overall? Or do you base your decision on what truly is the lesser of two evils? It is still very difficult to say what is right and what is wrong, since you can never truly weigh one person's life against another, nor can you decide whether it is better to risk more lives now for the sake of creating a safer driving environment in the future. I think this is the crux of what makes ethical decisions so hard: there is never truly a straightforward right answer, and when it comes to making these choices, everything is relative.