My favorite book on this topic is A Psalm for the Wild-Built. The robots eventually become sentient, and we decide to free them from the slavery of manufacturing things for us. They go off and live in the woods somewhere for 300 years before they come into contact with a human again, and thus the book begins. It’s so good!
As for this conundrum, is it a conundrum? I feel like no matter how sentient and cute a robot is, I would have no problem pulling the plug. It’s still an object; sentience doesn’t mean alive. I could see how that could get ethically strange if other people feel differently, but will that be any stranger than all the other ethical conundrums we face? (At what point in a pregnancy does the child have the right to stay alive? When is war and killing people necessary? Etc.) We already deal with the Bambi effect all over the place; does it matter if we do it with AI too?
Thank you for a great piece! That really made me think!!!
Thank you for the comment! I'm intrigued by "It’s still an object; sentience doesn't mean alive" - are you saying that, for you, the key factor in whether something deserves moral consideration is not whether it's sentient, but whether it's biologically alive? I think a lot of people would disagree - there are lots of things that are alive that we have no trouble killing (bacteria, plants, certain animals, etc.), whereas if something is sentient (even if not biologically "alive"), that seems to me like a more relevant factor for considering its welfare, preferences, and so on.
Very glad to hear you enjoyed the piece!
Hmmm, I suppose I would need both: for it to be alive AND sentient. Because you’re right, I have no qualms about killing mosquitoes in favor of sentient human welfare. But I wouldn’t kill humans in favor of a technological sentience that we created. Is that odd? It seems much odder to me that we would put things we created on the same level of the food chain as us (or even above us).
Great piece! I don't think people pay enough attention to the question of how we will present AIs and how different presentations will affect the way we feel about and interact with them.
Season one of the (little-known?) TV series Humans does a nice job of exploring this issue. (Not to be confused with the movie/play The Humans.)
Thank you so much! I remember watching Humans when it came out - can't believe it's almost 10 years ago now - maybe it's time for me to rewatch it!