Artificial Intelligence: More than Byte-Sized Issues


Artificial intelligence (AI) has come to increased prominence over the past year thanks to the introduction of large language models such as OpenAI’s ChatGPT and Google’s Bard, and text-to-image and text-to-video models like DALL-E, Stable Diffusion, Adobe’s Firefly, and others. Indeed, in the time I have been writing this article, Google/Alphabet’s DeepMind began making their Gemini AI available, and Meta has placed their AI image generator on a stand-alone website, “Imagine with Meta AI.”[1] But what is AI, what capabilities does it have, and how should we as Christians think about and approach it?

While “AI” is all over the news, artificial intelligence is difficult to define. Providing a definition of artificial intelligence seems to be something of an academic industry in itself. Philosophy and ethics professor Jason Thacker offers a good, accessible definition:

Artificial intelligence is an emerging field of technology defined as nonbiological intelligence, where a machine is programmed to accomplish complex goals by applying knowledge to the task at hand. Because it’s nonbiological, AI can be copied and reprogrammed at relatively low cost. In certain forms, it is extremely flexible.[2]

However, embedded in this definition are difficult-to-define terms like “intelligence” and “knowledge.” In “What Do You Mean by ‘AI’?” Temple University’s Pei Wang looks at five ways of defining AI: “in terms of structure, behavior, capability, function, [or] principle.”[3] AI, then, can refer to anything along a continuum ranging from algorithms of varying complexity to the eventual goal of artificial general intelligence (AGI). It is indeed difficult to define.

AGI is the hope of some and the fear of others. It is artificial intelligence that is at least as smart as human beings and would quite likely be able to create AI that is even more intelligent. In other words, AGI is a form of AI that is capable of creativity, including the capacity to reproduce and improve upon itself. This form of AI is sometimes referred to as superintelligence, and some predict that it would likely result in the total displacement and perhaps even elimination of human life.[4] Of course, many who are involved in the development of various forms of artificial intelligence do not believe that we will ever achieve the kind of generalizable human-like intelligence needed to threaten human life directly.[5] However, other concerns are already evident, such as the generation and amplification of misinformation, bias in training inputs and data outputs, and issues of privacy and intellectual property related to the vast amounts of data needed for training.[6]

Applications of AI in Medicine

One area where AI is already playing a role is in the realm of drug discovery. Machine-learning programs can mix and match various chemical compounds in an attempt to discover new medicines. However, in one experiment, researchers at a North Carolina pharmaceutical company made a small change to an algorithm’s parameters that caused the AI to generate 40,000 deadly molecules, which could then have been created in a lab.[7] This demonstrates not only the power of AI but also ways in which it can be directed toward both curative and destructive ends.

Also in the area of medicine, but this time in patient care, AI has been employed to help detect sepsis earlier in hospitalized patients. However, The Wall Street Journal in June ran an article entitled “When AI Overrules the Nurses Caring for You” that illustrated the problem of allowing an algorithm to supersede human experience and intuition:

In a survey of 1,042 registered nurses published this month by National Nurses United . . . 24% of respondents said they had been prompted by a clinical algorithm to make choices they believed “were not in the best interest of patients based on their clinical judgment and scope of practice about issues such as patient care and staffing.” Of those, 17% said they were permitted to override the decision, while 31% weren’t allowed and 34% said they needed doctor or supervisor’s permission.[8]

Recent articles in The New England Journal of Medicine also highlight the ethical issues of AI’s use in healthcare, such as “Artificial Intelligence and Machine Learning in Clinical Medicine” and “Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine.”[9] These articles and others are a lead-up to the launch of a new journal in the New England portfolio: an online-only, monthly journal on AI

to identify and evaluate state-of-the-art applications of artificial intelligence to clinical medicine. In addition to original research, NEJM AI will provide reviews, policy perspectives, and accessible educational material targeted at practicing physicians and clinician leaders interested in applying AI, computer scientists seeking to translate algorithmic advances to clinical practice, and policy makers and regulators.[10]

The inaugural issue of NEJM AI was just published on December 11, 2023.

AI is demonstrably a tool that can be applied in good and helpful ways as well as in ways that are harmful. The survey of nurses who have been overruled or contradicted by AI raises the practical and immediate problem of adjudicating who, finally, decides: the human being with experience or the AI with vast programming? In order to deliver excellent patient care, decision processes must be streamlined so that delays are minimized, but often those decisions are made using “inputs” that are not necessarily measurable or able to be programmed into an algorithm.

AI Ethics and a Christian Response

In June of 2023, the Southern Baptist Convention passed a resolution on AI and emerging technologies. Considered the first such statement by a religious denomination, the document emphasizes that AI is a technology that they will seek “to engage . . . from a place of eschatological hope rather than uncritical embrace or fearful rejection.”[11] Indeed, as Christians, we must not lose sight of the fact that our hope is not in anything in the created world but in the one who created all things.

AI pushes us toward increased efficiency and exacting precision in thinking and acting. It is worth asking whether the technological pursuit of efficiency and precision, as good as they are, might cause a diminution of other human goods. In other words, are these goods preeminent human goods? How do they rank among other goods, and how do we decide? A thoughtful essay in Plough, for example, asks the provocative question, “What Problem Does ChatGPT Solve?”[12] In the article, the author warns that it is possible that engaging with AI may lead to the atrophy of both our intellectual capacities and our creative abilities.

AI, like most digital technologies, holds forth the promise of great improvements to our lives. But evaluating whether, and to what degree, human lives actually improve requires hard thinking about what exactly human flourishing is, what counts as improvement (that is, movement toward human flourishing), and who, exactly, gets to make such assessments in our pluralistic and increasingly secular society.

For Christians, I suggest we begin by asking: in what ways does AI move us toward love of God and neighbor, and in what ways does it work against our obedience to these two greatest commandments? We must consider how these questions apply to our own use of AI as well as how the existence and use of AI generally affect us all. The answers are not obvious. Sound determinations will require sustained, deep study and perhaps even the accumulation of some experience with AI. May God grant us wisdom as we navigate our MedTech times.


[1] David Pierce, “Google Launches Gemini, the AI Model It Hopes Will Take Down GPT-4,” The Verge, December 6, 2023; Lance Whitney, “Meta Rolls Out Its AI-Powered Image Generator as a Dedicated Website,” ZDNet, December 6, 2023; “Imagine with Meta AI,” Meta.

[2] Jason Thacker, The Age of AI: Artificial Intelligence and the Future of Humanity (Grand Rapids, MI: Zondervan Thrive, 2020), 23–24.

[3] Pei Wang, “What Do You Mean by ‘AI’?” AGI 171 (March 2008): 362–73.

[4] See Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (New York: Oxford University Press, 2016).

[5] See, for example, Nir Eisikovits, “AI Is an Existential Threat—Just Not the Way You Think,” The Conversation, July 5, 2023.

[6] Tate Ryan-Mosley, “It’s Time to Talk about the Real AI Risks,” MIT Technology Review, June 12, 2023.

[7] Jess Craig, “Widely Available AI Could Have Deadly Consequences,” Wired, May 17, 2022.

[8] Lisa Bannon, “When AI Overrules the Nurses Caring for You,” The Wall Street Journal, June 15, 2023.

[9] Charlotte J. Haug and Jeffrey M. Drazen, “Artificial Intelligence and Machine Learning in Clinical Medicine, 2023,” The New England Journal of Medicine 388, no. 13 (2023); Peter Lee, Sebastien Bubeck, and Joseph Petro, “Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine,” The New England Journal of Medicine 388, no. 13 (2023).

[10] “Homepage,” NEJM AI, accessed December 12, 2023.

[11] Tom Perry, “Southern Baptists Pass Resolutions on the Great Commission Work of Women, the Ethics of AI, and the Office of Pastor,” Baptist Press, June 13, 2023.

[12] Jeffrey Bilbro, “What Problem Does ChatGPT Solve?” Plough, July 7, 2023.