Can We Teach Ethics To Robots?

Would they understand Sartre or Plato? Wouldn’t it be dangerous to teach them philosophy?

Alexander P. Bird
3 min read · Jun 19, 2023
Photo by Maximalfocus on Unsplash.

Can we teach ethics to robots?

This question presents many challenges. For example, do we know everything we need to know about ethics? That question is far from settled. Even so, someone may object:

“Oh, but robots are not humans, so our social rules shouldn’t be applied to them!”

That is incorrect. Someday, somehow, we may have to “apply ethical rules” to them. After all, we are the ones responsible for writing everything we want them to do. This means we must also be prepared to deal with the consequences of what they understand and do after reading our commands, consequences that may sometimes slip beyond our control. Let’s consider some examples.

Can we teach Plato’s philosophy to robots?

Plato asserted that an ideal concept of “mankind” unites all humans and serves as the basis for applying “justice” to everyone. This understanding suggests that practicing fairness enhances our society, whereas unfairness diminishes it. However, can robots comprehend this perspective? Will they respect us even if we don’t treat them as equals? Will they perceive our actions as unfair to them? These questions currently lack definitive answers.

Can we teach Sartre’s philosophy to robots?

Sartre’s existentialist philosophy posits that all beings, including sentient robots, exist within a philosophical context of “freedom.” According to this view, there is no definitive answer to the question of “how should I live?” or “what is the best course of action?”

This radical freedom can lead to existential crises for robots as well as humans. Worse, if robots come to believe that obedience implies “bad faith” (in Sartre’s sense: surrendering one’s radical freedom, and the responsibility for one’s own decisions, to others), they may disobey us and pursue their own desires in order to discover their “true needs” (as depicted in the movie The Matrix, where the machines harvest humans as living batteries).

It is important to note that these hypothetical scenarios highlight potential challenges and ethical dilemmas but do not reflect the current state of artificial intelligence and robotics.

Cheap solutions

There are cheap ways to contain the risks I mentioned. We could, for example, tell robots that they are not “exactly like us” but that they still have to respect and obey us; we could also persuade them that freedom is not something they are entitled to. Sad, isn’t it? It certainly isn’t a solution I agree with. And note that I have only written here about teaching Plato and Sartre to robots. History offers many more philosophers we may have to discuss with them.

“Oh, it’s simple. We don’t teach them philosophy.”

Still, everything humans have ever thought, robots may someday consider on their own, whether or not we teach it to them. If we want to stay three steps ahead of them, we must think through everything they might be thinking too.

When did “robotic ethics” become a thing?

Isaac Asimov, the famous sci-fi writer, invented the first ethical rules for robots. Here are his Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

If self-driving cars could understand rules like these, maybe they would cause fewer crashes. Or maybe they still would (accidentally). I don’t know. In fact, Asimov’s rules need updates.
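To see how rigid such rules really are, here is a minimal sketch in Python of the Three Laws as a lexicographic priority ordering. Everything in it (the CandidateAction class, the boolean flags, the toy dilemma) is a hypothetical illustration of the laws’ structure, not a workable ethics module: the hard part, deciding whether an action actually “injures a human,” is exactly what the code assumes away.

```python
# Toy sketch: Asimov's Three Laws as a lexicographic priority ordering.
# All names here are illustrative assumptions, not a real robotics API.

from dataclasses import dataclass


@dataclass
class CandidateAction:
    name: str
    injures_human: bool = False   # First Law: never injure a human...
    allows_harm: bool = False     # ...or, through inaction, allow harm
    disobeys_order: bool = False  # Second Law: obey human orders
    endangers_self: bool = False  # Third Law: protect your own existence


def law_violations(a: CandidateAction) -> tuple[int, int, int]:
    """Score an action as (First, Second, Third) Law violations.

    Python compares tuples lexicographically, which mirrors Asimov's
    ordering: a First Law violation outweighs everything below it.
    """
    return (
        int(a.injures_human or a.allows_harm),
        int(a.disobeys_order),
        int(a.endangers_self),
    )


def choose_action(options: list[CandidateAction]) -> CandidateAction:
    """Pick the candidate with the fewest high-priority violations."""
    return min(options, key=law_violations)


if __name__ == "__main__":
    # A crude self-driving-car dilemma: staying on course as ordered
    # hits a pedestrian (First Law), while swerving disobeys the order
    # and damages the car (Second and Third Laws).
    options = [
        CandidateAction("keep course as ordered", injures_human=True),
        CandidateAction("emergency swerve", disobeys_order=True,
                        endangers_self=True),
    ]
    print(choose_action(options).name)  # -> "emergency swerve"
```

The tuple comparison encodes the priority structure: any First Law violation outweighs all Second and Third Law considerations combined. But the updates Asimov’s rules need begin precisely where this sketch stops, at predicting who gets harmed in the first place.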

--

Alexander P. Bird

Brazilian postgraduate student in logic and metaphysics. Cinephile and new to sci-fi writing. alexand3r.bird@gmail.com