With the rise of artificial intelligence tools in recent years, it is truly the Wild West in both church and school environments. AI is no longer a niche technology used by a select few - over a billion people are using these tools today. We have a lot of well-intentioned people dabbling with tools to write, organize data, and perform tedious tasks. And it can be difficult as senior leaders of Catholic churches and schools to get our arms around what is actually happening on the ground.

I wanted to collect some thoughts based on my experience working in a Catholic church and school environment - where I have responsibilities to both organizations and have been pulled into just about every discussion about technology policy and implementation.


The Church Side

The church side is relatively straightforward. Our staff generally deals with adult-age parishioners. And while some of those adults may be tech-averse or elderly, we encourage our employees to play around with tools, become familiar with them, and see how they can be more efficient in their jobs.

There's not a lot of potential harm in using these tools for efficiency - in communication, keeping track of projects and ministries, and running the day-to-day of a church - as long as we don't lose our authenticity and humanity in the process. Back-office functions are the safest places to start. I tell my team all the time: "Your goal is to ensure people don't know you're using AI."

While some have worried about AI replacing humans, it's not playing out that way for us. As someone responsible for the financials, I don't really think about AI in the sense of reducing the size of our team.

The goal is not financial savings but eliminating the tedium of repetitive tasks. If we can take repetitive processes and make them easier or even automate them, our people will be freed up to do higher-value things. And perhaps more importantly, we can spend more time with our parishioners and be more present.

In that sense, efficiency is a great thing and not morally or ethically inconsistent with the pastoral nature of our work. AI can enable more pastoral service and bring more humanity and dignity to our collective ministry.

But First: Protect the Data

Because we deal with a lot of sensitive situations - and with a lot of people whose dignity, no matter what the circumstance, must be maintained at the highest order - we have to take extreme care with artificial intelligence.

Everyone has experienced having a random conversation with a friend about a topic you don't normally discuss, only to see an advertisement for it on Facebook minutes later. The platforms said they weren't listening, but anyone with a brain knows that can't possibly be true.

Along those lines, we have to take similar care with artificial intelligence. Sure, the platforms tell us that our data is not used to train their models and that, under enterprise plans and various terms of service, our data is ours alone. But we can't run on that assumption. If we're talking with AI about anything involving anyone - their life circumstances, challenges, and so on - we have to exercise the most extreme caution.

Anything identifying a person by name, by situation, by anything personal - that data has to stay out of the platforms. Anonymizing becomes a critical part of your internal infrastructure.
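To make that concrete, here is a minimal sketch of what a pre-submission scrubber might look like. It only catches obvious patterns (emails, phone numbers) plus a known-name list; the names and placeholders are hypothetical examples, and a real deployment would lean on a proper NER-based redaction tool rather than regexes alone.

```python
import re

# Minimal sketch of a pre-submission PII scrubber. This only catches obvious
# patterns and a known-name list; a real deployment would use an NER-based tool.

# Hypothetical examples - in practice this would come from your people database.
KNOWN_NAMES = {"Jane Doe": "[PARISHIONER]", "John Smith": "[PARISHIONER]"}

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def scrub(text: str) -> str:
    """Replace emails, phone numbers, and known names with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    for name, placeholder in KNOWN_NAMES.items():
        text = text.replace(name, placeholder)
    return text

print(scrub("Jane Doe (jane@example.org, 512-555-0100) asked about counseling."))
# -> "[PARISHIONER] ([EMAIL], [PHONE]) asked about counseling."
```

The point of the sketch is the workflow, not the patterns: nothing a staff member types should reach an AI platform without passing through a layer like this first.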

The IT Reality

Most church and school campuses have a person on staff responsible for technology - a director of technology or IT lead. But that person is typically tracking down internet connectivity issues, managing what's going on with Google or Microsoft accounts (or both), and in a school environment, keeping the faculty, staff, and students functional with their computers. The IT lead in many church and school campuses is stretched pretty thin, and innovation with artificial intelligence or building out AI infrastructure is not necessarily on the top of that person's priority list.

That said, we have people using tools who could very well be exposing personally identifiable information to AI systems that we can't guarantee will keep that information confidential.

Infrastructure Recommendation

A local LLM - one that has integrated anonymizing tools that abstract away PII before introducing data of any kind into the model - is a critical infrastructure piece for any Catholic institution. If your people are using AI tools in the course of their work at a church and school, you have to intervene immediately to make sure they're not exposing any PII to those platforms. That problem must be solved first. If not, there's a potential liability hanging out there for the parish.
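One way to picture that intervention layer: a single gateway that every AI request passes through, which scrubs obvious PII and only ever talks to a model hosted on the local network. The endpoint URL below assumes an Ollama-style local server and the scrubbing rule is deliberately simplistic - both are illustrative assumptions, not a specific product recommendation.

```python
import json
import re
import urllib.request

# Illustrative gateway: every prompt is scrubbed, then addressed only to a
# model hosted inside the building. The endpoint assumes an Ollama-style
# local server; swap in whatever your local LLM actually exposes.
LOCAL_ENDPOINT = "http://localhost:11434/api/generate"

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def scrub(text: str) -> str:
    """Strip the most obvious PII pattern (emails) before any model sees the prompt."""
    return EMAIL.sub("[EMAIL]", text)

def build_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Construct (but do not send) a request aimed only at the local model."""
    body = json.dumps({"model": model, "prompt": scrub(prompt), "stream": False}).encode()
    return urllib.request.Request(
        LOCAL_ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )

req = build_request("Summarize the note from jane@example.org about the food drive.")
print(req.full_url)  # always the local endpoint - nothing leaves the network
```

Because the gateway is the only code that knows an endpoint at all, staff tools built on top of it cannot accidentally route a parishioner's situation to a third-party platform.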


The School Side: A Much More Loaded Question

The topic of using artificial intelligence in schools is much more loaded - for a variety of reasons.

First of all, the obvious one: we're dealing with kids. We're dealing with students whom we need to help flourish. We are helping to form souls entrusted to our care. We have to recognize that their brains are not fully formed at this point, and we are partially responsible for stewarding their development as people. This is true educationally, behaviorally, and socially.

The actions we take, the policies we put in place, and the technologies we use must be additive to their experience and not detract from their development as people. As Pope Leo XIV has suggested, the tools are at our service. Human dignity must be maintained as a top design feature. And anything we build must be in accord with the dignity of every person - the good actors and the bad actors, the good students and the bad students, the good employees and the bad employees. Everyone.

The Classical School Debate

I find that there's a philosophical element to this that comes out in most debates about whether artificial intelligence is good or bad for kids. Each school needs to deeply consider its technology strategy.

The simplest version of this debate turns on whether a school takes a classical approach. Classical schools tend to believe that classical education will put kids in a better position to use technology and contextualize it in the future. Schools that do not take a classical approach are effectively acknowledging that technology is a part of adult life, and they're trying to get out in front of it - getting kids comfortable with devices and with computing.

Whatever the strategic direction, each school's strategy should be communicated to parents clearly, so they know what to expect. The strategy should be driven by community standards and input. And it should be the north star for decision-making.

What kind of school is it? What is the vision of the ideal graduate, and with what skills should he or she leave the school? This should drive the philosophy about technology and computing.

With the acceleration of AI technology, schools increasingly need a full-throated, coherent technology approach, communicated to the parents and community as a whole.

Translating to Grade Levels

Once the goals and objectives of the school are known at a high level, they need to be translated to various grade levels or grade ranges. What does the school's philosophy mean for pre-K through second grade? Three through five? Six through eight? High school?

What is acceptable at each level? How much time should be spent on computing devices? What are good and bad uses of technology? How is it applied in math? In social studies and science and the arts?
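Answers to those questions can be written down plainly enough to publish. Here is a hypothetical sketch of a grade-band policy encoded as data - the bands, minutes, and rules are illustrative placeholders, not recommendations, but the structure shows how a school's philosophy becomes something parents can read and staff can enforce consistently.

```python
# Hypothetical grade-band AI-use policy encoded as data. Every value here
# is an illustrative placeholder; each school would set its own.
POLICY = {
    "preK-2": {"device_minutes_per_day": 0, "ai_tools": "none"},
    "3-5": {"device_minutes_per_day": 30, "ai_tools": "teacher-led only"},
    "6-8": {"device_minutes_per_day": 60, "ai_tools": "supervised research"},
    "9-12": {"device_minutes_per_day": 90, "ai_tools": "disclosed, subject-specific"},
}

def rules_for(grade: int) -> dict:
    """Map a numeric grade (0 = kindergarten) to its policy band."""
    if grade <= 2:
        band = "preK-2"
    elif grade <= 5:
        band = "3-5"
    elif grade <= 8:
        band = "6-8"
    else:
        band = "9-12"
    return POLICY[band]

print(rules_for(7)["ai_tools"])  # -> "supervised research"
```

Keeping the policy in one declarative table - rather than scattered across handbooks - also makes it easy to revisit as the technology and the community's standards evolve.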

It also goes down to the subject level. Even the most tech-native schools would probably acknowledge that children in their care need foundational skills to assess whether AI is delivering slop or delivering real answers. Building those foundational skills has to be a top priority.

These strategic decisions need to translate to hiring. Administrative positions, teachers, aides - the composition of your staff has to reflect your technology philosophy. A tech-forward school needs people who are comfortable with technology. A classical school needs people who believe in that model. As Simon Sinek puts it, hire people who believe what you believe. The "who" matters enormously when executing day-to-day on your strategy.

The Constructive Principle

Artificial intelligence must be constructive in the community. The right question to ask of any AI application is simple: does this make people better? AI that helps a student think more clearly, a teacher reach a struggling kid, or a parent stay connected to their child's education - that is constructive. AI that writes the student's essay for them, lets the teacher skip the hard work of knowing their students, or generates the appearance of learning without the substance of it - that is destructive.

The bigger picture has to be kept in view. AI that enables shortcuts that harm the user - intellectually, spiritually, relationally - is not a neutral tool. It is an actively bad application, regardless of how efficient or impressive it appears. The ease of using it does not make it right.

All stakeholders need to be encouraged and even trained in the proper, responsible use of AI. The technology should not create shortcuts around the learning or teaching process, or incentives for misuse. As leaders, we need to seek out and implement ways to properly empower our faculty and staff, in a manner that sets a proper example for our students.

What better way to honor human dignity than for each child to reach his or her full potential?

Where AI Helps

Looking across classrooms, artificial intelligence can see patterns of how a particular kid might be reached better and educated more effectively. It can help special needs kids integrate better. It can assist with personalized education for every student.

It can help teachers identify gaps in their teaching methods, or find means to reach certain kids who are not responding to curriculum. It can find patterns in how kids are learning and detect nuance in the classroom that teachers are unable to find - simply because running a class of twenty kids, some of whom may not even be paying attention, takes their full undivided attention and focus.

AI tutors can also pick up on the specific lessons that kids in core subjects like math or language arts are struggling with, and guide them in a personalized manner to plug gaps in what they know - more effectively and more efficiently than a classroom of twenty where that kid's development may get lost in the shuffle. Not on purpose, but just by virtue of the limitations of a one-to-many classroom environment.

This is not theoretical. At St. Theresa Catholic School in Austin, we have been running an AI ethics elective for middle schoolers that puts these principles into practice. The course - available at humancode.st-theresa.org - is built around the conviction that students need to understand what AI is, what it is not, and how to use it in ways that build them up rather than shortcut their development. Human dignity and responsible use are cornerstones of the class. One quarter in, the results have been remarkable.

Where AI Must Be Kept at Arm's Length

It may be that certain subjects are just not a good fit for artificial intelligence. Language arts is one example. We would not want AI to be writing things on behalf of kids when they're trying to develop their foundational skills. We don't even want kids introducing high-level ideas to an artificial intelligence to have it write their ideas for them. We need the kids to go through the difficult exercise of writing paragraphs that are clear and coherent - without the aid of computing. We need kids to struggle so that they learn.

However, Socratic classes - where kids are responding, talking through their thought process, presenting to classmates, and learning to communicate with others - are an area where AI analysis can be very helpful. AI can pick up on the nuance of how certain kids function, how they learn most effectively, and where they need to improve their communication. And AI can be prescriptive about how they can do that.

The Calculator Analogy

Teachers have dealt with this for a long time with calculators. I remember having a graphing calculator in high school for calculus and advanced math. That was okay. But that was a purposeful decision about when and where that device could be used.

This is really no different today. Before the internet, people used card catalogs and the Dewey Decimal system to access information. Then there were Usenet and early search engines. Some of you might remember that you could "Ask Jeeves" a question and get a half answer - an early, rudimentary example of AI in its current form. And then Google emerged from the search engine wars to dominate for over two decades. In its most basic form, artificial intelligence can be viewed as the next iteration of the search engine. Rather than entering a search term, one simply asks a question of an AI and gets an answer back. But the human needs to assess that answer and ensure that it's actually correct.

That foundational ability - the ability to know whether what you're reading is right - has to be built regardless of the technology used. And it must be paramount.

Disclosure and Transparency

There is a need to disclose to students, to teachers, and to parents exactly how artificial intelligence is being used in the school. Any experimental technology needs to be communicated openly. Everyone is figuring it out as they go - the teachers, the students, the administrators.

The first kids to use computers in a classroom - in the late nineties - were figuring it out. So were their teachers and administrators. Technology tools were new at that time; many didn't even exist yet and had to be built to support productive classroom use. We may forget that. After all, the introduction of the computer to the classroom was almost thirty years ago.

We're at that moment again - something as foundationally different as the introduction of the computer to the classroom.

The Cat Is Out of the Bag

There are going to be bumps in the road. This has always been the case with the introduction of new technology, and will be so this time around.

But I think a reflexive, generalized response that "AI is bad" misses the moment and potentially does a disservice to our church and school communities. How we implement AI in our churches and schools makes a huge difference.

What are we enabling with the AI? Is it constructive? Does it perpetuate Catholic teaching? Do our initiatives protect human dignity? Are we helping to build foundational skills or are we undermining them?

And most importantly, are we using these tools at our service to build and grow Christ's kingdom?

If we are, we're obligated to proceed carefully but diligently.

The real question isn't whether AI belongs in our churches and schools. It's already there. The question is: what are we preparing our people for, and how?