Human intelligence, artificial intelligence
- timmuldoon
- Jun 20
- 3 min read

Below is the introduction to a paper I gave at this year's Lonergan Workshop at Boston College, entitled "Consciousness is Different from Algorithms." I'm thinking through the way that education contributes to human progress, and the kinds of guardrails that need to be in place if developments in AI are to serve people broadly (and not just companies with a profit motive). I intend to publish the full paper after revisions.
In February of 2023, the New York Times columnist Kevin Roose published the transcript of a two-hour-long conversation with Bing, the then-new AI chatbot released by Microsoft.[1] The conversation eventually went off the rails. As Roose recounts:
I had a long conversation with the chatbot, which revealed (among other things) that it identifies not as Bing but as Sydney, the code name Microsoft gave it during development. Over more than two hours, Sydney and I talked about its secret desire to be human, its rules and limitations, and its thoughts about its creators.
Then, out of nowhere, Sydney declared that it loved me — and wouldn’t stop, even after I tried to change the subject.
Roose describes his interaction with Sydney in strikingly human terms, even to the point of the bot's urging him to leave his wife. A colleague of Roose later described the conversation as “seriously creepy.”[2]
Today one can find any number of reports of creepy output by AI bots, from conversations to images to videos. One need not look very far to find downright apocalyptic scenarios about an AI takeover of humanity, right out of a sci-fi story.
This kind of scenario, and many others like it, arise from the uncritical conflation of distinct cognitive and affective operations that human beings routinely perform. Output is not creepy; output is simply data which human beings interpret—through a process of several distinct operations involving cognition and affectivity—as creepy (or exciting, or awe-inspiring, and so on). Parsing these distinct operations can help us to understand more fully the structures proper to human consciousness, intelligence, and affectivity, and moreover it can help us to distinguish those operations from the algorithms at the root of AI. Bernard Lonergan’s work can help us with this kind of fine-grained analysis of what we are doing when we are thinking and feeling.
This essay will begin with a short overview of what AI is, focusing on the algorithms that direct its processes. AI takes a number of forms, including rule-based systems, decision trees, expert systems, and neural networks, which might be used for specific purposes like medical diagnosis, playing chess, or recognizing images. For the purposes of this essay, though, I will restrict my focus to the kinds of Large Language Models (LLMs) available on the popular market in products such as Microsoft Copilot, Google Gemini, and ChatGPT. These are the products most available to students, and they are thus already changing the landscape of education. They are also the primary forms of AI leading some to conclude that it is only a matter of time before machines become conscious.
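To make concrete what "algorithm" means in this context, here is a deliberately toy sketch (in Python, invented purely for illustration and not drawn from any real system) of the statistical pattern-continuation that, in vastly more sophisticated form, underlies an LLM's next-token prediction:

```python
import random
from collections import Counter, defaultdict

# A deliberately toy "language model": count which word follows which
# in a tiny corpus, then extend a prompt by sampling likely next words.
# (Illustration only; real LLMs are neural networks trained on billions
# of human-written documents, but the pattern-continuation is analogous.)

corpus = ("the chatbot said it wanted to be human "
          "and the chatbot said it loved me").split()

follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def continue_text(word, length=5):
    """Extend a starting word by repeatedly sampling a likely successor."""
    output = [word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break  # no observed successor; the "model" falls silent
        words, weights = zip(*candidates.items())
        output.append(random.choices(words, weights=weights)[0])
    return " ".join(output)

print(continue_text("the"))  # e.g. "the chatbot said it loved me"
```

The point of the sketch is that everything the loop "says" is a recombination of the human text it was given; nothing in it perceives, judges, or desires.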
I will take the position that consciousness, rightly understood, is an element in the dynamic structure of human knowing, one that does not and will not apply to machines, however incorrectly people use terms like “machine learning” and “artificial intelligence.” I will further argue that such anthropomorphisms do not change the fact that AI is not analogous to human learning and knowing, but is rather a distinct pattern of algorithmic refinement dependent on human knowing. My objective in parsing these distinctions is specifically to highlight what is unique about human knowing, so that, inter alia, it will be possible to reflect critically on the exercises that isolate distinct cognitive operations for the sake of helping students in their learning. It will further be possible to develop a critical realism about the ways that AI can be deployed as a tool that promotes progress in history.
[1] Kevin Roose, “Bing’s A.I. Chat: ‘I Want to Be Alive. 😈’” New York Times, February 16, 2023, https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html.
[2] Cade Metz, “What Makes A.I. Chatbots Go Wrong?” New York Times, March 29, 2023, https://www.nytimes.com/2023/03/29/technology/ai-chatbots-hallucinations.html.