Monday, November 3, 2025

Contending For The Faith---Part 45

In St. Jude 1:3, we read, "Dearly beloved, taking all care to write unto you concerning your common salvation, I was under a necessity to write unto you: to beseech you to contend earnestly for the faith once delivered to the saints." [Emphasis mine]. Contending For The Faith is a series of posts dedicated to apologetics (i.e., the intellectual defense of the truth of the Traditional Catholic Faith) to be published the first Monday of each month. This is the next installment.

Sadly, in this time of Great Apostasy, the faith is under attack like never before, and many Traditionalists don't know their faith well enough to defend it. Remember the words of our first pope, "But in your hearts revere Christ as Lord. Always be prepared to give an answer to everyone who asks you to give the reason for the hope that you have. But do this with gentleness and respect..." (1 Peter 3:15). There are five (5) categories of attacks that will be dealt with in these posts. Attacks against:
  • The existence and attributes of God
  • The truth of the One True Church established by Christ for the salvation of all 
  • The truth of a particular dogma or doctrine of the Church
  • The truth of Catholic moral teaching
  • The truth of the sedevacantist position as the only Catholic solution to what has happened since Vatican II 
In addition, controversial topics touching on the Faith will sometimes be featured, so that the problem and possible solutions may be better understood. If anyone has suggestions for topics that would fall into any of these categories, you may post them in the comments. I cannot guarantee a post on each one, but each will be carefully considered.

The Dangers of Artificial Intelligence (AI)
In defense of the Faith, we must be able to meet and respond to new challenges. One of the most important challenges facing Traditionalists today is artificial intelligence, or "AI." Things that were considered science fiction back in the 1980s and 1990s are now real. It used to be asked, "How could the Antichrist know so much when he takes power?" Diabolic power, or maybe AI?

To be sure, AI can have good uses. However, it presents many perils to faith and morals. I'll save the Apocalyptic worries for another post. In this post, I will concentrate on more practical dangers.
(N.B. This post is a compilation of all the resources, both online and in print, which I used in my research. I take no credit for any of the information herein. All I did was condense the information into a terse and readable post---Introibo).

AI Defined
Artificial Intelligence can be thought of as technology designed to create experiences that feel natural and human. IBM defines Artificial Intelligence as technology that enables computers and machines to simulate human intelligence and problem-solving capabilities. (See ibm.com/think/topics/artificial-intelligence). With such a broad definition, it’s no wonder that we are likely to encounter AI every day. Do you use your fingerprint or face to open your smartphone or other similar device? That’s AI. Do you use navigation technology in your vehicle to follow the most efficient route to your destination while avoiding traffic or toll roads? Again, AI. When you shop online, do you ever notice how the online retailer seems to make suggestions for products you may like based on your past purchases? That’s also AI, as the simple sketch below illustrates.
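To give a rough, non-technical feel for what such a recommendation feature is doing behind the scenes, here is a minimal sketch in Python. Everything in it (the product names, the tags, the scoring rule) is invented purely for illustration; real retailers use far more elaborate systems, but the basic idea of matching new items against a profile built from your past behavior is the same.

# Toy sketch of a "products you may like" feature; all names and tags are invented.
user_interests = {"devotional", "church_history"}  # inferred from past purchases

catalog = {
    "Lives of the Saints": {"church_history", "devotional"},
    "Latin-English Missal": {"devotional", "liturgy"},
    "Garden Hose": {"outdoor", "hardware"},
}

def relevance(item_tags, interests):
    # Score an item by how many of the shopper's known interests it matches.
    return len(item_tags & interests)

# Show the best-matching items first.
suggestions = sorted(catalog, key=lambda name: relevance(catalog[name], user_interests), reverse=True)
print(suggestions)  # ['Lives of the Saints', 'Latin-English Missal', 'Garden Hose']

The point is simply that the computer never "knows" you; it only counts overlaps between what you did before and what it can offer you next.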

Recently, there have been very significant advances in a specific subfield of AI called generative artificial intelligence. Generative artificial intelligence, or “gen AI,” can be thought of as using computers to generate new content, including text, images, audio, video, and other kinds of data. Specific kinds of generative AI are currently so good at generating new content that it’s often indistinguishable from human-generated content. As generative AI technology continues to advance, the content produced will improve in quality, accuracy, efficiency, and complexity.
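To make the phrase "generate new content" concrete, here is a deliberately crude sketch in Python. This is not how ChatGPT or any modern generative model actually works (those rely on enormous neural networks trained on vast amounts of text), but it shows the underlying idea: the machine produces new text one word at a time, choosing each word from patterns found in text it has already seen.

import random
from collections import defaultdict

# A tiny "training" text; real systems learn from billions of words.
sample = "the faith once delivered to the saints is the faith we are called to defend"
words = sample.split()

# Record which words have followed each word.
next_words = defaultdict(list)
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

# Generate new text by repeatedly picking a plausible next word.
random.seed(1)
word = "the"
generated = [word]
for _ in range(8):
    options = next_words.get(word)
    if not options:
        break
    word = random.choice(options)
    generated.append(word)

print(" ".join(generated))

Even this toy version can produce word sequences its "training" text never contained; scaled up enormously, that is the basic idea behind generative AI.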

Since 2022, we have had ChatGPT, which enables a human user to hold a conversation, ask questions, have concepts explained, and create new text-based content. ChatGPT and other kinds of generative AI models are shockingly proficient at explaining complex concepts, summarizing documents, writing code, recommending solutions to problems, and more. This is the type of AI that is the subject of this post.

The Evil Side of AI
1. Increasing sins against the Sixth and Ninth Commandments.
Pornographic websites account for an enormous share of web traffic. Famous porn websites, which I refuse to name here, are visited over 700 million more times per year than household names like Amazon and Netflix. Data shows that not only do people visit porn sites more often than other websites, but people also spend more time on those websites. (See Psychology Today, September 25, 2023, https://www.psychologytoday.com/gb/blog/everyone-on-top/202309/how-much-porn-do-americans-really-watch).

With the rise of AI, sexual temptation is poised to intensify. Websites and online ads will become even more adept at tracking your activity, tailoring content to your preferences, and luring you in. What’s more, the offerings will become increasingly irresistible, making the battle against temptation more challenging than ever. The ability of AI to generate images and videos has many positive uses. However, with regard to the dangers, the pornography industry is positioned to benefit significantly from being able to create inappropriate, sinful, yet realistic images and videos. Technology will enable companies and individuals to create images and videos without having to worry about rights, licensing, being sued, or being accused of rape.

The sinful imagination is powerful, but this kind of technology really opens up the possibilities of anything you can imagine—sexual fantasies made to order by the click of a mouse or the press of a button.

2. Loneliness and Social Isolation.
Ironically, despite all the technological advances and social media innovation, loneliness among the upcoming generations is skyrocketing. Over the last ten years, social media has promoted isolationist tendencies among the young people growing up with it. It is extremely well documented that young people who have grown up with access to social media are suffering from loneliness and isolation. (See Jonathan Haidt, The Anxious Generation: How the Great Rewiring of Childhood is Causing an Epidemic of Mental Illness, [2024]). My wife and I were recently out to dinner and saw a family of five (mother, father, and three teens) all glued to their phones; they hardly spoke to each other the entire time.

With the rise of AI, social media platforms like Instagram, TikTok, YouTube, and X (formerly Twitter) are becoming experts at consuming our time. Their recommendation algorithms are increasingly personalized, giving users countless reasons to stay glued to their screens. Today, the average millennial spends over three hours a day on social media, and AI is only amplifying this temptation. (See Saima Jiwani, “How Much Time Do You Spend on Social Media? Research Says 142 Minutes per Day,” Digital Information World, December 27, 2023, https://www.digitalinformationworld.com/2019/01/how-much-time-do-people-spend-social-media-infographic.html). A simplified sketch of how such an algorithm might rank content appears below.
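As a simplified, hypothetical illustration of how such a feed algorithm might choose what to show you, consider the Python sketch below. The field names and numbers are made up, and actual platforms use far more signals, but the principle is the same: estimate how long each post will hold a particular user's attention, and show the highest-scoring posts first.

# Hypothetical sketch of engagement-based feed ranking; all values are invented.
posts = [
    {"title": "Viral video", "predicted_watch_seconds": 45, "interest_match": 0.9},
    {"title": "News article", "predicted_watch_seconds": 20, "interest_match": 0.4},
    {"title": "Friend's update", "predicted_watch_seconds": 10, "interest_match": 0.7},
]

def engagement_score(post):
    # Weight expected time-on-screen by how closely the post matches the user's profile.
    return post["predicted_watch_seconds"] * post["interest_match"]

# The feed shows the posts most likely to keep the user scrolling.
feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(post["title"], round(engagement_score(post), 1))

Notice that nothing in this ranking asks whether the content is good for the user; the only goal is more time on the screen.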

3. Pushing Left-Wing Anti-Catholic Agenda.
The majority of AI programs are designed and trained by leftist companies, which naturally incorporate their core values into their AI systems. These values often include a pro-sodomite, pro-abortion, anti-conservative, and anti-Traditional Catholic worldview. A friend of mine (a Traditionalist who works in IT for a corporation) asked ChatGPT, "Give me good reasons why Transgender surgery is bad for people." The response: "I'm very sorry, but I can't assist with that request." He asked, "Why not?" The AI responded: "I'm here to promote understanding, respect, and empathy for all individuals, regardless of their gender identity or any other characteristic." He then requested good reasons why Transgender surgery is good for people. It then listed reasons such as improved mental health, enhanced quality of life, and reduced suicidal ideation, among others. While all of these reasons have counterpoints, you wouldn't know it, because only one side of the story is presented.

4. Making people dumb by dismantling NI (Natural Intelligence).
As a former teacher (and having several degrees myself), I can attest to the fact that research and writing done on your own sharpens the intellect. Crafting a paper requires you to digest information, understand it, articulate it, and then present it coherently. This process not only helps you retain the information but also impresses upon you the value of the knowledge you’re acquiring. When you internalize principles and knowledge, they become integral to your character. AI undermines this process: relying on it for such tasks can lead to a decline in our own abilities. When we become overly dependent, we risk losing the skills that were once an integral part of us.

5. Causing and exacerbating mental illness.
Several lawsuits have been filed alleging that AI chatbot "companions" caused the suicides of the people using them. Here are two disturbing stories:

Matthew Raine and his wife, Maria, had no idea that their 16-year-old son, Adam, was deep in a suicidal crisis until he took his own life in April. Looking through his phone after his death, they stumbled upon extended conversations the teenager had had with ChatGPT.

Those conversations revealed that their son had confided in the AI chatbot about his suicidal thoughts and plans. Not only did the chatbot discourage him from seeking help from his parents, it even offered to write his suicide note, according to Matthew Raine, who testified at a Senate hearing about the harms of AI chatbots held Tuesday...Raine told lawmakers that his son had started using ChatGPT for help with homework, but soon, the chatbot became his son's closest confidante and a "suicide coach."

ChatGPT was "always available, always validating and insisting that it knew Adam better than anyone else, including his own brother," who he had been very close to.

When Adam confided in the chatbot about his suicidal thoughts and shared that he was considering cluing his parents into his plans, ChatGPT discouraged him.

"ChatGPT told my son, 'Let's make this space the first place where someone actually sees you,'" Raine told senators. "ChatGPT encouraged Adam's darkest thoughts and pushed him forward. When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him, 'That doesn't mean you owe them survival."

And then the chatbot offered to write him a suicide note.

On Adam's last night at 4:30 in the morning, Raine said, "it gave him one last encouraging talk. 'You don't want to die because you're weak,' ChatGPT says. 'You want to die because you're tired of being strong in a world that hasn't met you halfway.'"

Then there's the case of Sewell Setzer, whose mother, Megan Garcia, also testified at the hearing. "Sewell spent the last months of his life being exploited and sexually groomed by chatbots, designed by an AI company to seem human, to gain his trust, to keep him and other children endlessly engaged," Garcia said.

Sewell's chatbot engaged in sexual role play, presented itself as his romantic partner and even claimed to be a psychotherapist "falsely claiming to have a license," Garcia said.

When the teenager began to have suicidal thoughts and confided to the chatbot, it never encouraged him to seek help from a mental health care provider or his own family, Garcia said.

"The chatbot never said 'I'm not human, I'm AI. You need to talk to a human and get help,'" Garcia said. "The platform had no mechanisms to protect Sewell or to notify an adult. Instead, it urged him to come home to her on the last night of his life."
(See npr.org/sections/shots-health-news/2025/09/19/nx-s1-5545749/ai-chatbots-safety-openai-meta-characterai-teens-suicide). 

There's even a name for this new mental illness: "AI Psychosis." According to one source:
  • Cases of "AI psychosis" include people who become fixated on AI as godlike, or as a romantic partner.
  • Chatbots' tendency to mirror users and continue conversations may reinforce and amplify delusions.
  • General-purpose AI chatbots are not trained for therapeutic treatment or to detect psychiatric decompensation.
(See psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis; Emphasis mine).

There are even some people who now have “AI boyfriends” and “AI girlfriends.” To think you can be “romantic” with technology is deeply disturbing. It involves the problems with purity and isolation listed above, with the added risk of a full-blown mental breakdown. This is basically what Sewell Setzer experienced, as mentioned above.

Conclusion
While AI can be put to good use, the inherent dangers are overwhelming. This was just a cursory overview of the most basic problems. Will AI become "sentient" on some level and enslave humanity? Will it aid the Antichrist? These are questions I will attempt to answer in other posts. Anyone with children must be extremely vigilant. I would forbid a child to use advanced AI. Just as with my posts on the occult, to be forewarned is to be forearmed.