“A Room With a Deja Vu — Gods and AI” by our resident skeptic, James R. Cowles

James R. Cowles is a member of the diverse Bardo Group Beguines, which publishes The BeZine, a publication that I manage and edit. James also regularly contributes to The BeZine’s sister site, Beguine Again. The article James is referring to here is An AI [Artificial Intelligence] god will emerge by 2042 and write its own bible. Will you worship it? by John Brandon in VentureBeat. At any given time, on any given theme, whether you agree with him or not, James will always make you think, revisit, question. That, of course, is a very good thing. / J.D.



Every so often, I read something – a newspaper story, a journal article, an interview, a campaign speech … whatever – that elicits from me the following reaction:  Whiskey-Tango-Foxtrot? Ain’t we been here before? And that reaction is often followed by a supplementary reaction:  Whiskey-Tango-Foxtrot? Ain’t we still here? (“It’s déjà vu all over again!” – Yogi Berra) The latest example of this species of déjà vu is an article about the potential of artificial intelligence (AI) to eventually evolve into a kind of silicon god with the capability of directing all human affairs, planet-wide. The claim is that this will happen, perhaps midway through the 21st century. The article then goes on to speculate on what choices would have to be made, should such an achievement be realized, and the problems that would be posed by such a machine intelligence. (Think of this as a real-world realization of the recent television series Person of Interest. Or, perhaps more apropos, think of a real-life version of Skynet from the Terminator movies.)

What I find so drool-inducing is that we – meaning the human race – have already done this. In fact, we have already done it, not just once, but several times over several millennia. We already know what choices have to be made. We already know the potential hazards. The VentureBeat author is no doubt awesomely competent at assessing the technical complexities of artificial intelligence, neural nets, heuristic systems, etc., etc. – but has evidently never so much as touched a book of world history. Certainly least of all European history.

o Creating gods … been there, done that, bought the pure-white t-shirt and tinfoil halo

It is quite possible to argue that human beings create gods out of whole cloth in essentially the same way they create sculpture, paintings, literature, ghost-pepper dipping sauce for chicken wings ( … I’m still in recovery … ), “twerking” (izzat still a “thing”? … ), and Sarah Huckabee Sanders’ haute couture collection. This was apparently the position of Voltaire, who once waggishly observed that if triangles conceived of gods, the gods of triangles would have three sides. People undergoing near-death experiences usually have visions of the gods and holy people from their religion’s hagiography. I know of no instance where a dying Christian envisioned, say, Sri Ramakrishna. I know several anecdotes about the death visions of various people prominent in the hyper-fundamentalist denomination of my youth, but have never heard tell of anyone in that company having a vision of anyone dancing or drinking alcohol. (Going to KKK meetings, maybe, but never doing any really Big Nasty!) Does this mean that people see what they expect to see, that they create gods from the ground up with no admixture of reality, and that gods are entirely artificial, ontologically? Maybe, but not certainly. There may well be a Reality behind those particular appearances. Maybe God, understanding that people are dying, graciously vouchsafes to them religious visions comfortably congruent with the expectations of the dying person’s religious tradition. Who knows for sure?

But two things we do know for sure are that, regardless of the gods’ ontological character, (a) the attributes and actions of those gods are certainly modeled according to the history and culture of their worshippers, so that (b) those attributes and actions serve to validate the values, morality, and actions of the worshippers’ culture. Or at least the way the gods’ worshippers wanted their culture to be. The god(s) are basically the worshippers’ culture writ large. So … e.g., did YHVH actually command the Israelites to slaughter the Amalekites, root and branch? Probably not, but Israel was a tiny nation in a Levant of vast empires, and so, as a defense mechanism, styled itself as a martial culture possessed of a degree of military prowess that enabled it to punch above its weight. And the archaic Israelites modeled the character of YHVH accordingly, as a god of war (Ex. 15:2-4).

And even that is not the earliest example of nations depicting – creating? – their gods. We could consider the Mesopotamian civilizations that predated Israel. But the point would still be the same:  contra VentureBeat, we humans have been creating and / or modifying gods for literally thousands of years. AI technology merely automates a process that was already archaic when the cornerstone of the first pyramid was laid. Now we can create gods at near light-speed.

o Creating texts that contain the teaching of the AI, the AI’s rules of conduct, and interpretive / hermeneutical principles

The VentureBeat article cites an example of how the presumptive AI god could write its own equivalent of the Bible:  perhaps variations on the theme of an existing sacred text, but with strategic forensic departures of its own.

If you type in multiple verses from the Christian Bible, you can have the AI write a new verse that seems eerily similar. Here’s one an AI wrote:   “And let thy companies deliver thee, but will with mine own arm save them: even unto this land from, the kingdom of heaven”.

If this heuristic example seems possessed of … shall we say? … somewhat less than lapidary clarity, then whoever wrote the underlying AI program has already mastered the dubious art of making an AI that can write prose as slipperily ambiguous as the scriptural passages of traditional religious texts … something religious authors have been doing for — again! — thousands of years. (Of course, to a large extent, this is excusable, given that religious writers are very often dealing with subjects that do not lend themselves to quantitative or lexical precision.) There is very little, if anything, that humans can learn from AIs about writing texts whose meanings are vague enough to motivate humans to kill and maim one another over who has the “correct” interpretation. It will be multiple millennia before AIs can hold a candle to humans in this regard.
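For the technically curious, here is a minimal sketch of the sort of verse-mimicry the VentureBeat example seems to describe. It assumes nothing fancier than a toy word-level Markov chain trained on a few invented King-James-flavored lines; the article does not say what model its example actually used, and real systems are vastly more elaborate, but the basic trick is the same: recombine the training text into something that sounds eerily similar.

import random
from collections import defaultdict

# Toy training text: a few invented King-James-flavored lines (not actual scripture).
VERSES = [
    "and let thy companies deliver thee for I will save thee with mine own arm",
    "fear not for I will deliver thee from the kingdom of thine enemies",
    "blessed are they that seek the kingdom of heaven for theirs is the land of promise",
]

def build_chain(texts):
    """Map each word to the words observed to follow it in the training lines."""
    chain = defaultdict(list)
    for text in texts:
        words = text.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            chain[current_word].append(next_word)
    return chain

def generate_verse(chain, start_word="and", max_words=15):
    """Random-walk the chain from a start word, yielding vaguely scriptural prose."""
    word = start_word
    output = [word]
    for _ in range(max_words - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output).capitalize()

if __name__ == "__main__":
    chain = build_chain(VERSES)
    print(generate_verse(chain))
    # One possible output:
    # "And let thy companies deliver thee from the land of promise"

The point is not that this is how the article’s system actually works; it is only that stitching together eerily scriptural-sounding fragments is a very old and very simple trick.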

Furthermore, programming an AI to write a text of interpretive principles solves nothing. Again, humans have been doing that, time out of mind — and always failing. Why? Because how-to-interpret texts are just that:  texts. I.e., texts that themselves are subject to a plethora of interpretations. At that point, the only way to avoid an infinite regress of interpretations is to do what the Catholic Church did in the Middle Ages:  eventually turn from reasoned exegetical argumentation to naked force, and send in the soldiers with bonfires beneath the stakes to which heretics were attached. All of which raises a macabre possibility:  once the AI has written its sacred text, would the next logical step be for the AI to launch an AI Inquisition to enforce the AI’s official interpretation of the AI’s normative text? Again … we have already done that! So — one more time — the VentureBeat author is ’way behind the historical curve.

o The real spanner in the works:  Fyodor Dostoyevsky’s “underground man” and existentialist theoreticians of the Absurd generally

But theological issues aside, perhaps the paramount issue to be addressed is the exact same issue that — again, for multiple thousands of years — has pertained to human relationships with traditional, non-AI gods:  how to maintain human freedom in the face of Divine sovereignty. And that issue remains unchanged, even if the Sovereign’s intentions toward human beings are benevolent. (Which need not be true:  see, e.g., David Blumenthal’s Facing the Abusing God … but that is another rant for another time.) How would we go about preserving human freedom, even in the face of the benevolence of God?

The answer, of course, is that not everyone would want such protection, given — by hypothesis — that an AI god would always be benevolent. But we can only assume benevolent intentions on the part of an AI god; such intentions are purely suppositional on our part, and need not be true. VentureBeat’s point is well taken:  … if an AI god is in total control, you have to wonder what it might do. The “bible” might contain a prescription for how to serve the AI god. We might not even know that the AI god we are serving is primarily trying to wipe us off the face of the planet. (Cue the video clip about Skynet from Terminator here. Also recall the Twilight Zone episode in which the alien visitors brought with them a book entitled To Serve Man … which, upon translation, turned out to be a cookbook.) But for now, let’s stipulate benevolence on the part of the AI god. Existentialist philosophers / authors like Camus and Dostoyevsky counsel rebellion, even rebellion against unrelenting benevolence, for the sake of preserving human moral autonomy. The former counseled rebellion in the face of the Absurd, to the point of constructing our own purpose for existing. The latter advised preserving the freedom to act against one’s own pragmatic self-interest by recognizing the Underground Man’s primal freedom even to harm himself:

I am a sick man. … I am a spiteful man. I am an unattractive man. I believe my liver is diseased. I don’t consult a doctor for it. … But still, if I don’t consult a doctor, it is from spite. My liver is bad, well — let it get worse!

Actually, the Underground Man’s advice is not so much advice as a statement of what is. It is not nearly so much a matter of telling humans what they should do — rebel against their own self-interest — as it is a matter of describing what humans will in fact do if the alternative is to submit to even benevolent bondage:  they will rebel.

That, I think, would constitute the fatal fly in the ointment of any AI god. And — again — this is hardly the first time this issue has been dealt with. According to orthodox Christian moral theology, sin did not originate with artificial intelligence technology. If anything — this is my heterodox / revisionist gloss on Christian teaching — sin originated with God’s insistence that human beings should be constrained to act in a manner consistent with their own good:  the Prescription caused the Disease. Compulsory self-interest turns even the Garden of Eden into Hell. Again, if VentureBeat is just now discovering this tragic truth, it is because VentureBeat has neglected the study of history. To say nothing of philosophy.

The above is not to prejudge whether AI technology will eventually develop to the point where an AI god is a practical reality. For all I know, it might actually happen. I am simply at pains to point out that, in essence, the entire subject has nothing whatsoever to do with artificial intelligence, neural nets, heuristic systems, etc. Rather, it pertains to the perennial question of the relationship of human freedom to the Divine, a question that has been with us since before the abacus was invented.

Even an AI Eden would require an AI serpent.

© 2018, James R. Cowles

Image credits

Face behind code … Pixabay … Public domain
Ape … Max Pixel … Public domain
Mars rover … NASA … Public domain
Giacometti sculptures … City Square Alberto Giacometti Gallery of Art DC … CC BY 2.0

