
My favorite book on this topic is A Psalm for the Wild-Built. The robots eventually become sentient, and we decide to free them from the slavery of manufacturing things for us. They go off and live in the woods somewhere for 300 years before they come into contact with a human again, and that's where the book begins. It's so good!

As for this conundrum, is it a conundrum? I feel like no matter how sentient and cute a robot is, I would have no problem pulling the plug. It's still an object; sentience doesn't mean alive. I can see how that could get ethically strange if other people feel differently, but will it be any stranger than all the other ethical conundrums we face? (At what point in a pregnancy does the child have the right to stay alive? When are war and killing necessary? Etc.) We already deal with the Bambi effect all over the place; does it matter if we do it with AI too?

Thank you for a great piece! It really made me think!


Great piece! I don't think people pay enough attention to the question of how we will present AIs and how different presentations will affect the way we feel about and interact with them.

Season one of the (little-known?) TV series Humans does a nice job of exploring this issue. (Not to be confused with the movie/play The Humans.)
