Can the Church baptize robots?

I was raised Catholic. I should know the answer to this. But somehow it just never came up in catechism. If you asked me what various theologians would have to say on the issue, I might be able to speculate (Augustine would likely say no, on the grounds that robots have no soul to save; Aquinas wouldn't know his own mind until he wrote a 13-volume treatise on the subject; Benedict would probably consider their rote behaviour Godlike in its simplicity, purity of purpose, and dedication to service; Ignatius would advise you to look into your heart and discern the answer). But as to whether the Holy See would ever, well, see artificial life as in need of salvation… let's have a look.

  • The Church accepts arguments for evolution, but will not support in-vitro fertilization, on the grounds that only sexual union can produce "sacred" life. Meaning that even if you imprint your robot baby with a combination of your hangups and your partner's, she'll still enter catechism strictly for intellectual purposes, not spiritual ones.
  • Penal substitutionary atonement extends, so far as I know, only to humans, and not to their creations — Christ did not die to redeem the robots. However, this goes back to the soul question: if the android doesn't dream, then what is there to redeem? The Church might look on attempts to initiate robots into the Sacraments as akin to, say, baptizing a new lawnmower.
  • And for that, one might need Shinto. After all, the Kusanagi sword (a tool, albeit a sacred one with a prominent place in Japan's foundational mythology) holds kami, though whether that kami lies in the sword or whether it is the sword is a question worthy of better minds than mine: namely, Oshii Mamoru and some liberal-minded transubstantiation experts. The point is that objects and substances carry greater weight in Shinto than they do in other religions, even if they are the work of human hands.
  • Granted, some works of human hands (the Grail, say) seem rather important to the Church, so it's probably a wash. I still think the problem would lie in whether the Church agreed that androids had souls. (Which is different from sapience; sapience can create a perception of wrongdoing and thus sin, but without a soul the sin taints nothing, and elicits no need of spiritual redemption despite the need for earthly punishment.) I've never read an argument for sapience on the part of the Grail; stories about the Grail all rely on the seeker's wisdom in finding it, not the other way 'round. It remains strictly inanimate despite its history.

All of which leads me to a funny story: in one of my recent Japanese classes, I asked my teacher whether robots required use of あります (arimasu, "to be," for non-living things) or います (imasu, "to be," for living beings). She pondered for a moment, then said that robots should use the former.

“Demo, Atomu-san…?” (But, AstroBoy…?)

“Aa, Atomu-san, hai… Maa… Atomu-san wa kawaii desu, dakara… imasu!” (Yes, AstroBoy… Well, AstroBoy is cute, so… imasu!)

…Make sure your robot baby is cute, folks. You know, Kara Thrace cute. That way, God might call her into service, after all.

Addenda:

Death Ray also points out that the Church's treatment of artificial intelligence would likely begin as a netroots movement, or at least as a regional sentiment within various parishes, likely those that had more frequent dealings with robots.

Another key factor that I missed earlier was the perception of free will: if the Church were to agree that androids had the capacity for decision-making, then their behaviour would mean a great deal more. In order for robots to be religious, we'd have to abolish the Three Laws (or the equivalent restrictions) and ask for self-directed good behaviour, with the possibility of temptation.

But avoidance of temptation (and deliverance from evil) is one of the reasons we savour the possibility of artificial intelligence in the first place. It already has the benefit of the mental conditioning that years of cultural and religious training can provide, in the form of programming. But the trouble with programming, as with commandments of any shape, is context. Much of the discourse surrounding android ethics has to do with rebellion (and has done, ever since the Golem stories, from Prague to Metropolis to the Tyrell Ziggurat; there's a reason that Blade Runner and Innocence both reference Milton), and not with behaviour as adaptive intelligence. Most of the sexiest science in robotics has to do with teaching robots to learn autonomously, and some researchers are already asking for ethics software, if not for ethics regulations on our end regarding the appropriate use and programming of robots for military applications. But these are still commandments: Thou shalt, thou shalt not. Do this, not that. And commandments without context are simply invitations to trouble.

For example:

R: “Sir, the subject’s heart is beating at an abnormal rate and my sensors say his electrolyte levels are low. Without enough potassium, this source will fail. Will you continue questioning?”

CO: “If I wanted your opinion, I’d ask for it. Keep shocking him.”

R: “I offered no opinion. Was I disobedient?”

CO: “…No. Just, you know, we’re here to do a job. Your job is to shock him, so I don’t have to. Your job is to shock anyone sitting in that chair, no matter what they say.”

WEEKS LATER, AFTER A SUDDEN BUT INEVITABLE TAKEOVER:

CO: “Robbie! Buddy! Come on! Put the taser down!”

R: “Sir, you are sitting in the chair.”


Do I honestly think our programming will be that simple? No. But we're only just figuring out the basics of how our own brains work, including the roots of our decision-making (little of which we're actually conscious of). Now we want to cut and paste the process. Imagine if the CO had made another rule, pertaining not to furniture but to skin colour or dress: "Only shock the brown guys wearing orange." This raises fairly obvious concerns about task prioritization and image recognition, not to mention locative context: how does the CO teach Robbie that shocking is only necessary within certain rooms of the prison, and only with certain people? How does he teach Robbie the difference between evasiveness and lack of knowledge? (Can he himself tell the difference? Does he care?) What happens when Robbie interprets certain facial tics and vocal patterns as evasiveness on the CO's part? ("Oh…great…you fixed that little bug that was keeping you from going online. And you've been looking at…junkyards. Great. What? Retirement? Never! Our government would never spend millions of dollars on new robots just to stimulate a failing economy!"*)
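To make the gap between the rule as stated and the rule as meant concrete, here's a toy sketch in Python. Every name in it is invented for illustration; no real robotics system works this way, and that's rather the point.

```python
# The CO's commandment, verbatim, versus the commandment he meant.
# All names here are hypothetical; this is the shape of the problem,
# not an implementation of anything real.

from dataclasses import dataclass

@dataclass
class Situation:
    subject_in_chair: bool
    room: str
    subject_is_detainee: bool
    questioning_active: bool

def rule_as_stated(s: Situation) -> bool:
    # "Your job is to shock anyone sitting in that chair."
    return s.subject_in_chair

def rule_as_meant(s: Situation) -> bool:
    # The context the CO never spelled out: detainees only, in the
    # interrogation room only, while questioning is actually underway.
    return (s.subject_in_chair
            and s.room == "interrogation"
            and s.subject_is_detainee
            and s.questioning_active)

# Weeks later, after a sudden but inevitable takeover:
co = Situation(subject_in_chair=True, room="interrogation",
               subject_is_detainee=False, questioning_active=False)

print(rule_as_stated(co))  # True  -- "Sir, you are sitting in the chair."
print(rule_as_meant(co))   # False -- but only because someone wrote the context down
```

And even rule_as_meant only relocates the problem: who decides what counts as "questioning," or a "detainee"? Every predicate in it hides another commandment, equally context-free.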

Autonomous (free-willed) decision-making requires both adaptive intelligence and a sense of context. Of this tension are some of the best dramas made: Agamemnon can't satisfy either the gods or his armies without sacrificing Iphigenia at Aulis, but doing so will sever his ties with Clytemnestra forever (and initiate a tragic cycle of family vengeance). Cordelia knows what Lear wants to hear, but values honesty more. Ōishi Yoshio was obedient enough to commit suicide after leading the charge against Kira Yoshinaka, but only as a result of his primary obligation to his master's memory. Which do you choose? Your family? Your master? Your country? Your god? Which is best? How do you know? How do you teach it? How do you program it? What does the code look like?
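If you took that last question literally, the naive answer might be a lookup table. Here's one, with weights I've invented purely to show why inventing them is the whole problem.

```python
# Loyalty as a lookup table. The weights are made up, which is precisely
# the difficulty: someone has to make them up.

OBLIGATIONS = {"family": 0.9, "master": 0.8, "country": 0.7, "god": 1.0}

def choose(conflict: list[str]) -> str:
    # Agamemnon at Aulis, resolved in microseconds: the table says the gods
    # outweigh Clytemnestra and Iphigenia. That a one-liner settles what the
    # House of Atreus couldn't suggests the table is wrong, not the tragedy.
    return max(conflict, key=lambda o: OBLIGATIONS[o])

print(choose(["family", "god"]))     # 'god' -- and the cycle of vengeance begins anyway
print(choose(["master", "family"]))  # 'family' -- Ōishi would disagree
```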

Say you do the impossible. Say you build a soul. How do you save it? 


*Actually, they probably wouldn’t. Sadly.
