Monday, January 5, 2026

Contending For The Faith---Part 47

 

In St. Jude 1:3, we read, "Dearly beloved, taking all care to write unto you concerning your common salvation, I was under a necessity to write unto you: to beseech you to contend earnestly for the faith once delivered to the saints." [Emphasis mine]. Contending For The Faith is a series of posts dedicated to apologetics (i.e., the intellectual defense of the truth of the Traditional Catholic Faith) published on the first Monday of each month. This is the next installment.

Sadly, in this time of Great Apostasy, the faith is under attack like never before, and many Traditionalists don't know their faith well enough to defend it. Remember the words of our first pope, "But in your hearts revere Christ as Lord. Always be prepared to give an answer to everyone who asks you to give the reason for the hope that you have. But do this with gentleness and respect..." (1 Peter 3:15). There are five (5) categories of attacks that will be dealt with in these posts. Attacks against:
  • The existence and attributes of God
  • The truth of the One True Church established by Christ for the salvation of all 
  • The truth of a particular dogma or doctrine of the Church
  • The truth of Catholic moral teaching
  • The truth of the sedevacantist position as the only Catholic solution to what has happened since Vatican II 
In addition, controversial topics touching on the Faith will sometimes be featured, so that the problem and possible solutions may be better understood. If anyone has suggestions for topics that would fall into any of these categories, you may post them in the comments. I cannot guarantee a post on each one, but each will be carefully considered.

AI: Can a Machine Be Conscious?
To My Readers: This is my second installment on the dangers and challenges of AI. My first installment was in "Contending For The Faith---Part 45." (N.B. This post is a compilation of all the resources, both online and print, which I used in my research. I take absolutely no credit for any of the information herein. All I did was condense the information into a terse and readable post---Introibo).

It’s commonplace to hear the language of consciousness applied to computing technology, especially AI: neural networks, machine learning, artificial intelligence, automated reasoning, knowledge engineering, emotion AI. This isn’t surprising, though, given AI’s (seeming) ability to approximate various functions of human consciousness. No harm, no foul. After all, we use language figuratively all the time. The problem arises when people believe AI literally has consciousness in the same sense in which human persons are conscious. People often point to Turing tests to support this idea. Contrary to popular belief, though, passing a Turing test does not establish that AI is conscious (or much else of interest). This should matter to Traditionalists, because to attribute genuine consciousness to AI is to seriously demean humans, who were created in the image and likeness of God.

What is a "Turing Test"?
Alan Turing (1912–1954) was a British mathematician, widely recognized as the father of modern computer science. His famous article, “Computing Machinery and Intelligence” (1950), asks the question, “Can machines think?” 
(See doi.org/10.1093/mind/LIX.236.433). 

To get at an answer, Turing proposes the “imitation game.” (See Ibid, pg. 433). 

The game itself is simple. We have two rooms. In the first room we place a person and a machine, and in the second room we place an investigator. Unable to see into the first room, the investigator knows the person and the machine simply as ‘X’ and ‘Y.’ The investigator passes questions into the first room, directed to X or Y. For example, “Does X play chess?” The person aims to help the investigator, while the machine aims to trick the investigator into mistaking machine for human. The object of the game is for the investigator to identify correctly, on the basis of the answers returned, whether X is the person or the machine. Hence, for the machine (or AI) to “pass the Turing test” is for it to function in such a way that humans cannot recognize it as non-human.
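To make the structure of the game concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the canned chess answers, the function names); it illustrates only the protocol, not any actual AI.

import random

# A purely illustrative sketch of the imitation game's structure. The
# respondents are stand-ins: a "human" and a "machine" that returns
# canned text imitating the human.

def human_answer(question):
    return "I do play chess, though not very well."

def machine_answer(question):
    # The machine's whole aim: answer exactly as a human would.
    return "I do play chess, though not very well."

def imitation_game(questions):
    # Randomly hide the two respondents behind the labels X and Y.
    answerers = [human_answer, machine_answer]
    random.shuffle(answerers)
    respondents = dict(zip(["X", "Y"], answerers))

    # The investigator sees only labels, questions, and answers,
    # never the rooms themselves.
    transcript = [(label, q, respondents[label](q))
                  for q in questions for label in ("X", "Y")]

    # With indistinguishable answers, the guess is a coin flip.
    guess = random.choice(["X", "Y"])
    return respondents[guess] is machine_answer

wins = sum(imitation_game(["Does X play chess?"]) for _ in range(1000))
print(f"Machine correctly identified in {wins}/1000 games")  # about 500

When the two answer streams cannot be told apart, the investigator can do no better than chance.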

For his part, Turing believed “that in about fifty years’ time it will be possible to programme (sic) computers…to make them play the imitation game so well that an average interrogator will not have more than 70 per cent. chance of making the right identification after five minutes of questioning.”
(See Ibid, pg. 442). 

Was Turing right? More or less. One recent study, conducted by researchers at the University of California San Diego, evaluated three systems (ELIZA, GPT-3.5, and GPT-4). The report, published under the title “People Cannot Distinguish GPT-4 from a Human in a Turing Test,” claims to provide the first serious empirical proof that any artificial system passes an interactive Turing test. The study found that human participants “were no better than chance at identifying GPT-4 after a five minute conversation, suggesting that current AI systems are capable of deceiving people into believing that they are human.” (See arxiv.org/html/2405.08007). 

What is Consciousness?
However, even given the above information, who cares? Suppose we stipulate that AI is regularly mistaken for a conscious human. Would that establish that AI is, in fact, conscious in the same way as humans? Not at all. To see why, let’s reflect briefly on human consciousness.

Considerations of consciousness (and the philosophy of mind generally) can get fairly technical, so I'll keep this simple. Each of us, as a person, is directly familiar with his own individual consciousness. I experience my consciousness, but obviously I cannot experience yours---and vice versa. I am directly familiar with what it is like to be me, but I am not — indeed, cannot be — directly familiar with what it is like to be you. And again, vice versa. This is because access to what it is like to be a given subject is available only via that subject’s first-person, inner perspective. We each are the unique subjects of our conscious experiences, and in the absence of subjects there cannot be consciousness.

Each of us knows via first-person experience that there are various states of consciousness. We refer colloquially to being in a “semi-conscious state” when we’re half asleep or distracted, but that’s not the sort of state I mean. I’m referring instead to what philosophers call mental states. We experience sensations — being in pain, for example (“My toe hurts”). We also experience desires (“I’d really like to get out of attending that meeting”), beliefs (“I believe the party is at 6:00 P.M.”), thoughts (“I love my wife”), understanding, and others, all of which are impossible for AI.

Let’s focus on thoughts. Thoughts are about something (perhaps even something fictitious); they can be true or false; and they can logically imply further thoughts. As I type this, I can form thoughts about what I’m typing. I notice I can form thoughts about the appearance of the letters on the screen (“Gee, I meant 'there' not 'their'”), but I can also form thoughts about the meaning conveyed by what I’m typing (this paragraph is about one’s thoughts). We can use thoughts to have the mental state of understanding, and that’s pretty extraordinary. Again, these states are mental; they are not physical (e.g., brain) states.

The "Chinese Room" Thought Experiment
The suggestion that AI can form thoughts and have understanding depends on a radically different view: that humans’ (physical) brains are what have mental states; humans do not have (nonphysical) minds (the soul). “Mental,” on this suggestion, does not mean nonphysical. The suggestion is that mental states are to be understood as functions, and AI can certainly exhibit functions. To get the idea, think in terms of input → programming (plus enormous data, if you like) → output. That is fundamentally how AI works; humans’ minds are to brains what programming is to AI. (See John R. Searle, “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3, no. 3 [1980], pg. 421). When fed input, AI produces output indistinguishable from that of human consciousness, and so AI is said to have understanding (consciousness). In a word, AI is a “mind” in the same sense you are.
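A deliberately crude Python sketch may help show what this model amounts to. The response table is invented for illustration; on the functionalist view, though, a "mental state" is exhausted by a mapping of just this kind, only scaled up enormously.

# A toy rendering of the functionalist model just described: a "mental
# state" is nothing over and above a mapping from inputs to outputs.
# The response table is made up for illustration, not any real AI.

RESPONSES = {
    "Does your toe hurt?": "Yes, my toe hurts terribly.",
    "Do you love your wife?": "Of course I love my wife.",
}

def process(stimulus):
    # input -> programming -> output: look up a response and emit it.
    return RESPONSES.get(stimulus, "I'm not sure what you mean.")

print(process("Does your toe hurt?"))  # Yes, my toe hurts terribly.
# On the functionalist view, producing that output just is "being in
# pain." Searle's objection, below: the mapping runs to completion
# without anything being felt or understood along the way.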

Yet, as (atheist) philosopher John Searle explains, the input → programming → output model cannot establish understanding. No matter how sophisticated the programming may be, functioning in a certain way is not identical to understanding. To see why, let’s imagine what Searle calls "the Chinese room." 

Suppose you have no knowledge of the Chinese language. Chinese characters are, to you, “just so many meaningless squiggles.” Now suppose you’re given a handful of Chinese writings and then locked in a room. Shortly, a second batch of Chinese writings is slid into the room beneath the door. Meanwhile, the room contains a rulebook, written in English. The rulebook tells you how to correlate symbols (e.g., when you see a squiggle symbol, put it with a squoggle symbol). You’ve no idea what the symbols mean, but you find you’re able to locate symbols in the writings that match these squiggles and squoggles and get on with the correlations. Later a third batch of Chinese writings appears beneath the door, along with further English instructions. These instructions enable you to correlate this batch with the first two batches, and then to pass your latest correlations back under the door. Unbeknownst to you, the people giving you these writings “call the first batch ‘a script,’ they call the second batch ‘a story,’ and they call the third batch ‘questions.’ Furthermore, they call the symbols [you] give them back in response to the third batch ‘answers to the questions,’ and the set of rules in English…they call ‘the program.’”
(See Ibid, pg. 418). 

It’s easy to imagine that after a while you’d become really good at following the instructions for manipulating the Chinese symbols, and that the programmers would become so good at writing programs that someone outside the room would be unable to distinguish your answers from those of a native Chinese speaker. You’ve passed the Turing test. Except you still don’t understand Chinese.

If you still don’t understand Chinese, then what exactly have you become really good at in the Chinese room? The answer is that you’ve become good at a certain syntactical operation, namely manipulating the symbols based purely on syntax. Your manipulations of the symbols, in other words, are based entirely on the shape of the Chinese symbols (e.g., squiggle and squoggle) and the order in which they appear. The instructions in the rulebook concern nothing beyond this syntax. 
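A minimal Python sketch of such a rulebook, with invented placeholder shapes, makes the point plain:

# A sketch of the rulebook's purely syntactic character. The entries
# are invented placeholders; note that nothing anywhere in the table
# records what any symbol means -- only shape and order.

RULEBOOK = {
    ("squiggle",): ("squoggle",),          # pair a squiggle with a squoggle
    ("squoggle", "squiggle"): ("squiggle",),
}

def follow_rules(shapes):
    # Match the incoming shapes against the rulebook and pass back
    # the listed shapes. Shape and order are all this function sees.
    return RULEBOOK.get(tuple(shapes), ())

print(follow_rules(["squiggle"]))  # ('squoggle',)
# You, or a machine, can apply such rules flawlessly while
# understanding nothing: syntax in, syntax out, no semantics anywhere.

Scaling the table up to millions of entries improves the performance, not the principle: the operation remains shape-matching from start to finish.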

Can AI perform this syntactical operation? Yes, perhaps even better than you can. In following the rulebook, though, are you not thinking “about” the Chinese symbols? Yes, in a sense you are — but only in the sense in which I formed thoughts about not liking the word on my computer screen. The manipulation of symbols in keeping with a syntax, after all, has (literally) no meaning. In order to understand Chinese (or anything else), you must be able to think “about” the meaning of the symbols.
(See E. J. Lowe, An Introduction to the Philosophy of Mind, [2000], pgs. 214-217). This is what Searle calls “semantic” understanding, and this cannot be done merely through complicated syntactical operations.

In the Chinese room experiment, you are in the place of AI. If you can follow the formal rules spelled out in the rulebook, after all, then surely an AI can, too. You’ve got the batches of writing (inputs); you’ve got ideal programming; and you’ve generated the expected outputs. Yet you lack any understanding whatsoever of Chinese. As Searle concludes, since “the program is defined in terms of computational operations on purely formally defined elements” (i.e., input → programming → output, which is how AI functions), the experiment reveals that mere program functioning cannot yield understanding. (See “Minds, Brains, and Programs,” pg. 418). AI can indeed produce an impressive simulation of human consciousness. However, an impressive simulation of understanding is no more conscious than a computer simulation of a rainstorm is wet.

Conclusion
AI will never be human. It will always be an "it," a thing without a soul. St. Thomas Aquinas explains: “Since human beings are said to be in the image of God in virtue of their having a nature that includes an intellect, such a nature is most in the image of God in virtue of being most able to imitate God.” (See Summa Theologica Ia q. 93 a. 4). Aquinas goes on to explain that “only in rational creatures is there found a likeness of God which counts as an image….As far as a likeness of the divine nature is concerned, rational creatures seem somehow to attain a representation of [that] type in virtue of imitating God not only in this, that he is and lives, but especially in this, that he understands.” (Ibid, Ia q. 93 a. 6). 

A real problem arises when people believe AI has consciousness in the same sense in which human persons are conscious. Such a view diminishes what it means to be a human and demeans the image of Almighty God. Don't fall for it.