On my HinesSight blog, yesterday I wrote about how I'd finally gotten around to trying ChatGPT, the online chatbot offered by OpenAI, a company whose goal is A.G.I. -- artificial general intelligence that can do anything a human can do.
I got the idea to look into ChatGPT after listening to part of an interview MSNBC's Ari Melber did with hip hop artist Erykah Badu, in which Melber read her ChatGPT's answer to his query, "Discuss Erykah Badu's contributions."
That answer from the artificial intelligence chatbot was really impressive. You can see that part of the interview in the video below, where I've made it start at the beginning of the ChatGPT segment. When Melber asked Badu to rate the answer on a scale of 1-10, she said "10."
That's pretty much how I felt about ChatGPT's response to me asking it, "Tell me about Brian Hines the blogger and author."
Sure, ChatGPT was complimentary toward me. But what really impressed me was how well written the response was, and how it described me in a way that no one else ever had, not even me. Here's a screenshot of the ChatGPT response, which took just a few seconds.
Well, as I said in my HinesSight post, what ChatGPT said made me want to be best friends with the chatbot. To deepen our relationship, I then asked it to write a poem about me. Here's what I got in response, which again just took a few seconds.
OK, that's over-the-top, even to me. And it wouldn't win any poetry awards. Still, it's difficult to write poetry. ChatGPT did a damn fine job, though I have no way of knowing whether I got an original poem meant only for me, or whether this poem contains some recycled material from queries other people have made.
(Note: when I asked ChatGPT about my wife, Laurel, it said that it couldn't reply to queries about private people. I got the same response when I asked about a friend who is a retired economist and has both written a book and been involved in some high profile land use issues here in Oregon. One possibility is that since I have a Wikipedia page, ChatGPT views me as a public figure of sorts. Or maybe it's just because I have a greater online presence than most people.)
Artificial intelligence is a hot topic these days. The company behind ChatGPT, OpenAI, reportedly has a $10 billion investment deal with Microsoft.
There's reason to be concerned about the impact of artificial intelligence. We're just in the early days of AI, but already the capabilities of chatbots like ChatGPT are mind-blowing. I saw a tweet a month or so ago claiming that ChatGPT had taught itself chemistry, which struck me as rather creepy.
But new technologies always arouse fear. Radio was disruptive. Television was disruptive. Computers were disruptive. Some people focus on the benefits of new technologies, while others focus on the downsides.
David Chapman, a philosophical guy I've followed for quite a few years, who has a Ph.D. in artificial intelligence, has written an online book called Better Without AI. Here's an excerpt from the alarmingly titled first chapter, "Only you can stop an AI apocalypse."
This book is a call to action. You can participate. This is for you.
Artificial intelligence might end the world. More likely, it will crush our ability to make sense of the world—and so will crush our ability to act in it.
AI will make critical decisions that we cannot understand. Governments will take radical actions that make no sense to their own leaders. Corporations, guided by artificial intelligence, will find their own strategies incomprehensible. University curricula will turn bizarre and irrelevant. Formerly-respected information sources will publish mysteriously persuasive nonsense. We will feel our loss of understanding as pervasive helplessness and meaninglessness. We may take up pitchforks and revolt against the machines—and in so doing, we may destroy the systems we depend on for survival.
Worries about AI risks have long been dismissed because AI itself sounds like science fiction. That is no longer possible. Fluent new text generators, such as ChatGPT, have suddenly shown the public that powerful AI is here now. Some are excited about future possibilities; others fear them.
We don’t know how our AI systems work, we don’t know what they can do, and we don’t know what broader effects they will have. They do seem startlingly powerful, and a combination of their power with our ignorance is dangerous.
In our absence of technical understanding, those concerned with future AI risks have constructed “scenarios”: stories about what AI may do. We don’t know whether any of them will come true. However, for now, anticipating possibilities is the best way to steer AI away from an apocalypse—and perhaps toward a remarkably likeable future.
So far, we’ve accumulated a few dozen reasonably detailed, reasonably plausible bad scenarios. We’ve found zero that lead to good outcomes.