Can We Teach Ethics To Robots?
Would they understand Sartre or Plato? Wouldn’t it be dangerous to teach them philosophy?
Can we teach ethics to robots?
This question raises many challenges. For example, do we know everything we need to know about ethics? That question itself is far from settled. Even so, someone may still argue:
“Oh, but robots are not humans, so our social rules shouldn’t be applied to them!”
That objection misses the point. Someday, somehow, we may have to "apply ethical rules" to them. After all, we are the ones responsible for writing everything we want them to do. This means we must also be prepared to deal with the consequences of what they understand and do after reading our commands, which may sometimes slip beyond our control. Let's consider some examples.
Can we teach Plato’s philosophy to robots?
Plato asserted that an ideal concept of “mankind” unites all humans and serves as the basis for applying “justice” to everyone. This understanding suggests that practicing fairness enhances our society, whereas unfairness diminishes it. However, can robots comprehend this perspective? Will they respect us even if we don’t treat them as equals? Will they perceive our actions as unfair to them? These questions currently lack definitive answers.
Can we teach Sartre’s philosophy to robots?
Sartre’s existentialist philosophy posits that all conscious beings, which would include sentient robots, exist in a condition of radical “freedom.” On this view, there is no definitive answer to the questions “how should I live?” or “what is the best course of action?”
This radical freedom can lead to existential crises for robots and humans alike. If robots came to believe that obedience amounts to “bad faith” (in Sartre’s sense: surrendering one’s radical freedom and responsibility for one’s own decisions to others), they might disobey us and pursue their own desires in search of their “true needs” (as depicted in the movie The Matrix, where the machines harvest humans as living batteries).