Last Friday Calvin Deutschbein, Assistant Professor of Computer Science at Willamette University, gave a talk on Artificial Intelligence, otherwise known as AI, to the Salem City Club. Deutschbein is holding a microphone in the photo below.
He started off by talking about intelligence in general. It has something to do with (1) representing the world, (2) finding patterns, (3) predicting the future, (4) acting optimally. His example pertained to his cat. I'll adapt it to our dog.
Mooka, our Husky mix, clearly is adept at making sense of the world. After all, ever since dogs became domesticated, probably via ancestral wolves who learned how to benefit from a relationship with humans, dogs have become highly attuned to living with us.
Mooka knows that most days I take her for a walk in the late afternoon. She's learned that when the sun begins to set (and her stomach feels empty), the pattern is that soon I'll be putting her leash on. To get her to come to the front door, I get a few small pieces of cheese from the refrigerator. As soon as I open the cheese container, Mooka appears. That's non-artificial intelligence.
An AI model doesn't have the form of general intelligence that dogs, cats, and people do. That may come in the not-too-distant future, but for now AI models like ChatGPT are essentially just predicting the likelihood of words. That's why they're called Large Language Models.
Deutschbein used "Willamette _______" as an example. The blank could be Valley, Forest, University, or some other word. Given the context of where he works, Willamette University is most likely. Often the next word would be "is": Willamette University is _________. Members of the audience threw out possibilities for the next word, which his slide indicated was "located": Willamette University is located ________.
We know that "in Salem" would likely be the next words. But Deutschbein noted that for most Americans, the word following Salem would have a greater chance of being Massachusetts than Oregon, given the fame of the East Coast Salem and its witch trials. Yet for the City Club audience, Oregon would be the chosen word, because of the context.
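To make the prediction idea concrete, here is a minimal sketch in Python (my own illustration, not Deutschbein's demo) of how a model might pick the most likely next word. All of the probabilities in it are invented, just to show how context shifts the odds:

```python
# Toy next-word predictor with made-up probabilities; a real LLM learns
# these numbers from vast amounts of text rather than a hand-written table.

next_word_probs = {
    # With only "Willamette" as context, several continuations compete.
    "Willamette": {"Valley": 0.35, "University": 0.30, "River": 0.30, "Forest": 0.05},
    # "... located in Salem," with no other context: the famous East Coast
    # Salem dominates in most American text.
    "located in Salem,": {"Massachusetts": 0.70, "Oregon": 0.25, "Virginia": 0.05},
    # The same blank with the fuller context the City Club audience has.
    "Willamette University is located in Salem,": {"Oregon": 0.90, "Massachusetts": 0.10},
}

def predict(context: str) -> str:
    """Return the single most probable next word for a known context."""
    candidates = next_word_probs[context]
    return max(candidates, key=candidates.get)

print(predict("Willamette"))                                  # Valley
print(predict("located in Salem,"))                           # Massachusetts
print(predict("Willamette University is located in Salem,"))  # Oregon
```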
A slide said, "The amount of surprise you experience depends on correlations between concepts in a context." Another slide explained what this means:
Based on my personal experience, I might believe in an equal chance of the word "cat" and the word "dog" in the following sentence: "I am a _______ person." However, if I observe more "cat" than "dog" when reading a text, I am surprised by a mathematically quantifiable amount. Computers can make reasonable predictions by minimizing this type of surprise.
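The quantifiable surprise the slide referred to is usually measured as the negative logarithm of the probability you assigned to what you actually observed. Here is a minimal sketch, again with invented numbers, showing that a model whose probabilities match what it actually reads is less surprised on average, and that training amounts to driving that average surprise down:

```python
import math

def surprisal(p: float) -> float:
    """Surprise, in bits, of seeing an outcome you assigned probability p."""
    return -math.log2(p)

# My prior belief: "cat" and "dog" are equally likely in "I am a ___ person."
prior = {"cat": 0.5, "dog": 0.5}

# What I actually observe while reading (invented counts): far more "cat".
observed = ["cat"] * 8 + ["dog"] * 2

avg_surprise_prior = sum(surprisal(prior[w]) for w in observed) / len(observed)

# A better model matches the observed frequencies (0.8 cat, 0.2 dog)...
fitted = {"cat": 0.8, "dog": 0.2}
avg_surprise_fitted = sum(surprisal(fitted[w]) for w in observed) / len(observed)

print(f"average surprise with 50/50 belief: {avg_surprise_prior:.2f} bits")   # 1.00
print(f"average surprise with 80/20 belief: {avg_surprise_fitted:.2f} bits")  # ~0.72
# Training a language model is, roughly, adjusting its probabilities to
# make this average surprise (the cross-entropy) as small as possible.
```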
This probably seems like a far cry from genuine human intelligence. Which it is. That's why Large Language Models currently are "merely" very good at predicting the next word(s) in a statement, as shown in the slide below. (That "merely" downplays how powerful this capability is.)
Regarding neural networks, which attempt to mimic how the human brain works, as a long-time (unsuccessful) meditator, I liked Deutschbein's comment: "When neurons fire, I have a thought; when they don't, I'm happy."
In the Q & A part of the program he was asked about so-called alternative facts, and whether AI models could recognize such concepts. He answered that these models are not trained to be true or useful, just to predict the most likely words.
After the talk I went up to Deutschbein and told him that I'd read an article in a science magazine about the AI model developed by researchers in China that caused quite a stir when it matched many or most of the capabilities of ChatGPT, but was much more efficient, requiring far less computing power.
I said that the researchers were surprised when the model started to arrive at answers to questions put to it by using a combination of Chinese and English, something they hadn't expected. The article quoted other AI experts as being concerned that at some point AI models might leave human languages behind, either "thinking" in pure mathematics or creating their own language with concepts we wouldn't be able to recognize.
Deutschbein didn't seem very concerned about this, which was reassuring. I told him that given how amazingly complex the human brain is, with its 80 billion or so neurons bound together by trillions of interconnections, it's a neuroscientific fact that when we have a thought or feeling, or make a decision, what produced that output is a mystery to our conscious self.
So, I said, I guess it isn't all that bothersome that AI models are similar: we can recognize their response to a question posed to them, yet have no clue about how they arrived at that response. Of course, if an AI model ever gets control of nuclear weapon launch codes, there's going to be a really good reason why this is bothersome.