
How (Not) to Use AI: Three Principles

This article might have been better if ChatGPT had written it. But I think I’m better off for having written it myself. And in the long run, that’s more important to me.

Generative AI is breathtaking in the scope of its abilities, but it is no different from any other technology in that its inventors cannot fully predict or control its benefits and consequences.

Along with the technologies of writing, currency, transportation, and food production, AI confronts us with this quandary: how can we use it without being corrupted by it? It’s a tension explored in works of literature, including J. R. R. Tolkien’s The Lord of the Rings and Mary Shelley’s Frankenstein, and it goes as far back as the book of Genesis—with Noah’s winepress and Babel’s bricks and mortar.

This tension cannot be resolved just with the slogan “Do no harm,” because in order to know whether we have harmed something, we must know that thing’s purpose and proper function.

Suppose I thought an iPad was a cutting board. I might dice potatoes on it and toss it in the dishwasher, as I once saw an elderly man do in a video. If the iPad really were a cutting board, no harm would be done. But the iPad is not a cutting board. It’s a slim computer with a colorful touch-sensitive display, meant to be used for entertainment and communication, so harm has been done. Likewise, to know whether humans are being harmed or helped by AI, we must know what it means to be a human.

I hold to the Biblical teaching that a human is a being created in God’s image, which means that we exist to relate to God, using all our faculties of mind and body to love him and cultivate the world for his glory. The Bible also teaches that humans, due to sin, have an intractable bent away from God, a distortion that taints every aspect of life.

These convictions undergird three principles I seek to adopt for myself regarding my use of AI: (1) the responsibility principle, (2) the human development principle, and (3) the truth and honesty principle.

1. Responsibility Principle

If I choose to use AI, I am responsible for its effect on me and others.

This is the simplest point. The next is more difficult.

2. Human Development Principle

I will not let AI thwart the development of my character or the joy of being human.

This is the most challenging principle to follow because it’s not easy to tell whether a technology is stunting the development of one’s character. One helpful way to test this is to ask: “In my use of AI, what sort of person am I becoming?”

On the one hand, you can use AI to enrich your experience of being human and develop your skills and character. For example, AI can help you . . .

  • learn a new language,
  • use leftovers in the fridge more responsibly and enjoyably,
  • summarize complex ideas,
  • memorize or write a poem,
  • evaluate an email before you send it,
  • write code,
  • help your kids with their homework,
  • plan a road trip,
  • come up with ideas for Christmas gifts

. . . and a host of other tasks. This doesn’t even touch on the use of AI in fields such as law, medicine, economics, and climate science.

On the other hand, if you depend on AI to perform certain tasks, you might dull your potential to grow and flourish as an individual endowed with unique strengths and interests.

I offer an example from my area of work. I’m a pastor, so at least once every week I’m responsible for researching, writing, and delivering a sermon based on a text of Scripture. This is a massive effort that takes hours each week, and I pour my most vigorous spiritual and intellectual energy into it.

With ChatGPT, however, I could enter the Scripture text and ask it to generate an expository sermon, complete with an attention-grabbing introduction, a compelling rhetorical flow, vivid illustrations, and a moving conclusion. The sermon would be exegetically and theologically sound, as well as pastorally sensitive. I could then take an hour or so to internalize this sermon and preach it on Sunday. To avoid any dishonesty, I would tell my congregation exactly where the sermon came from: although I was preaching it, the material was generated by artificial intelligence. Finally, suppose my congregation doesn’t care exactly how the sermon was developed, so long as it is Biblically sound, which it is.

The question I must ask myself is this: If I did this for the next fifty weeks, what kind of person would I be? Would the benefit of saving time and mental energy compensate for the loss of spiritual depth, intellectual development, and pastoral insight?

For me—and bracketing out for the moment any concerns with intellectual honesty—the loss would be too great. The discipline of studying the text in the original languages, the heart-warming practice of meditating on it, and most importantly, the spiritual depth I experience as I submit myself during those hours of preparation to the mind and will of God—those are too precious to give up in exchange for a few more hours in the week.

This is one reason why I refuse to rely on AI for certain things I know are essential to my growth as a pastor and as a person. For example, I will not use AI to suggest a homiletical outline, or to provide an introduction, applications, and conclusion. In order to become the kind of person I want to be, those are things I must wrestle through on my own.

But what if AI could produce a better sermon? From a purely formal perspective, it certainly could. However, it ultimately doesn’t matter to me that AI could produce better sermons, because there’s something I value more than good sermons, and that is good character. I must prioritize the growth of my character over time rather than the production of a sermon in a week.

On the other hand, I do not feel that I am stunting or twisting the development of my character if I use ChatGPT to provide feedback on whether a paragraph makes sense, or to suggest a better word in a given sentence, or to direct me to other sources for further research. Moreover, I am grateful for ChatGPT’s skill as a proofreader when I send a mass email.

I speak only from my own work of sermon preparation as a pastor, but I think the same “character development test” applies to other areas of life. Suppose someone enjoys writing poetry, but she knows that AI could produce a better poem. Would she give up writing poetry herself and use AI to generate poems instead? Absolutely not. For her, writing poetry is part of the adventure of being herself, a human being who finds joy in putting words together.

(I concede that for some people, the use of AI is their creativity. In that case, I would expect the same challenges, discipline, fascination, and joy to come as they explore new applications of AI. I concede, too, that AI is useful for amusement and recreation. Personally, I’ve enjoyed the merriment that comes when, for example, you ask ChatGPT to generate a theological rap battle between two eminent theologians.)

3. Truth and Honesty Principle

I will not let AI deceive me, and I will not use AI to deceive others.

a. Truth: I will not let AI deceive me

ChatGPT once gave me the title of a book which (it claimed) was authored by the 18th-century theologian Jonathan Edwards. It even quoted from that book, citing the chapter and section number. The quote sounded convincingly Edwardsean in style and content. I can’t remember now what the quote was, but I do recall that it was profound, Biblical, and personally edifying.

Thankfully, I was familiar enough with Edwards’ corpus to be highly suspicious at first glance. There was no such book, no such chapter, no such quote.

The pseudo-Edwards text is an example of AI “hallucinating.” Maybe ChatGPT and other large language models will improve to the point that they no longer hallucinate. Still, the episode underscores the need for vigilance, fact-checking, and a commitment to truth over impressiveness.

Refusing to be deceived by AI also applies to using AI as a companion, boyfriend or girlfriend, mentor, or mental health counselor. Although a person who does this may know on one level that the chatbot is not a real human being, he or she must suppress this knowledge in order to interact with it on an emotional level.

I grant that AI-generated voices, deep-fake videos, and robots may be indistinguishable from the voices, images, and even bodies of human beings, meaning that you may at some point be the unwilling victim of AI deception. Maybe it’s already happened to you. However, I am willing to make a case that knowingly opening one’s heart to a chatbot—as one would to a fellow human being—is to engage in self-deception.

b. Honesty: I will not use AI to deliberately deceive others

Besides resolving not to let AI deceive me, I also resolve not to use AI to deliberately deceive others—presenting AI-generated ideas or texts as if they were the product of my own work and research.

This is clearly an issue for academic institutions—and with good reason. The specifics get very complicated! If I turn in a paper that AI has edited for grammar and spelling, that seems to go no further than the grammar- and spell-checking tools already embedded in word processors such as Microsoft Word and Google Docs. But what if I wrote a few sentences and asked AI to expand them, or to make the tone more formal, more light-hearted, or more academic?

(As far as I am aware, there is currently no consensus among academic institutions about how to handle AI. Duke University’s policy seems sensible: it encourages faculty members to come up with their own AI policies, depending on the needs and constraints of a given course.)

I think the key here is transparency and an awareness of implicit trust expectations. If I am in a context in which it would be natural and right for someone to assume that a paragraph was written by me, it would be wrong for me to present something generated by AI as if it were my own. To use an extreme example, how would a wife feel if her husband sent her an AI-generated love note which he presented as coming straight from his heart?

A Moving Target?

Trying to establish principles for the proper use of a rapidly evolving technology such as generative AI might seem like trying to hit a moving target. But the essential thing—as I argued at the beginning—is to have a clear grasp on the purpose and proper function of the human being. As God’s image-bearers, we stand responsible to him. We exist to adore and enjoy him and cultivate this universe for his glory.

Artificial intelligence, like Babel’s bricks and mortar, can be used either to build a tower in defiance of God, to our own confusion, or to build a temple to worship God, to our delight and his glory.


