The Rules of Professional Conduct Will Have to Change to Include “The Ethics of Things”
The Bencher—July/August 2017
By Richard K. Herrmann, Esquire
I recently came upon an article in Wired magazine that caught my attention. It concerned autonomous vehicles, and it posed a question of robotic ethics in the context of mortality. The example discussed involved an autonomous car headed at high speed toward a group of pedestrians: does the vehicle swerve into a concrete wall, killing its passengers, or does it strike the group of pedestrians? Certainly, if you were driving the car, you would make every reasonable effort to avoid striking the innocents in the middle of the road. But would you do it if it meant certain death to your family in the vehicle with you? Are you prepared to delegate that decision to your Honda Accord?
I have to tell you, I have watched many science fiction movies during my life, but it never occurred to me I would be engaged in a discussion of whether robots should make ethical decisions. Yet here we are. It is difficult enough to deal with the many ethical issues relating to technology and lawyers. This is a giant step, and I am not sure it is forward. It is like a step into an entirely different dimension. Now, I am sure there are many of you who might think this is much ado about nothing, and Richard simply doesn’t have anything better to write about. You are mistaken. You are all familiar with the international standards organization known as the IEEE (the Institute of Electrical and Electronics Engineers). With more than 400,000 members, it holds itself out as “the trusted ‘voice’ for engineering, computing, and technology information around the globe.” Since 2004, the IEEE has had an active committee studying robot ethics (http://www.ieee-ras.org/robot-ethics). In fact, the growing relationship between robots and humans has given rise to an entire suborganization, the IEEE Robotics and Automation Society (RAS).
The issues we as lawyers will need to resolve in the products liability law of the future are staggering. Let’s go back to the now-simple example of the autonomous vehicle. If the car kills the passengers by swerving into the wall, do the passengers’ estates have a claim for wrongful death, arguing that fewer people would actually have been killed had the car not tried to avoid the pedestrians? If we are talking about the thought processes of the vehicle’s programming, does this fall within a strict liability claim for defect, or will the standard be one of traditional fault, i.e., what would a reasonable robot do under the circumstances? Will culture and religion become an issue? If the vehicle’s programming was developed in another country, will the programmers’ cultural attitudes or religious beliefs about life and death be relevant?
I am telling you, this is not a simple academic exercise. For years, the IEEE RAS has been discussing this very issue regarding military robots: “whether and when autonomous robots should be allowed to use lethal force, whether they should be allowed to make those decisions autonomously.” The group now has subcommittees focused on service robots and even social robots. We are already using Amazon’s Alexa to keep our shopping lists, and we know our refrigerators are also connected to the Internet of Things. “We” are actually just in the way. They don’t need us to tell them what foods need to be reordered. They can keep track and order our groceries without us. If we forget to alert one of them about a peanut allergy, who is responsible? Am I allowed to have that piece of cake at 9 p.m. tonight, or will the refrigerator simply refuse to open?
The folks at Westlaw have been studying artificial intelligence for some time and have indicated it will not be long before our research will be done for us. On a daily basis, we are faced with the question of how close we get to the ethical line of an argument. Will Rule 5.3 of the Rules of Professional Conduct change? Currently, it makes us responsible for the conduct of the person to whom we delegate responsibility. Will “person” need to become a defined term that includes artificial intelligence?
There is little question the law will need to change more quickly to keep pace with technology and the “Ethics of Things.” We face interesting times as we develop rules relating to more than our mere ethical competence in the technology we use in our practice. We will actually need to develop ethical rules regarding the technology itself. The IEEE has been looking at this for years. It is about time the adventure begins for us as well.
Richard K. Herrmann, Esquire is a partner in the firm of Morris James in Wilmington, Delaware. He is a Master in the Richard K. Herrmann Technology AIC.
© 2017 Richard K. Herrmann, Esq. This article was originally published in the July/August 2017 issue of The Bencher, a bi-monthly publication of the American Inns of Court. This article, in full or in part, may not be copied, reprinted, distributed, or stored electronically in any form without the express written consent of the American Inns of Court.