
Podcast | 21 Minutes

What You Need To Know Now About Artificial Intelligence | Dr. Michael Collender & Dr. Jonathan Shaw

Written by Marlin Detweiler

Listen on Apple Podcasts | Listen on Spotify | Watch the Video

A.I. (Artificial Intelligence) has risen into the spotlight in recent years, and it’s easy to see why! What applications does it have, where is it going, and what elements does A.I. still lack that humans possess? Dr. Michael Collender and Dr. Jonathan Shaw, authors of the book Wiser than the Machine, are here to talk about all of this and more!

Episode Transcription

Note: This transcription may vary from the words used in the original episode for better readability.

Marlin Detweiler:

Hello again. Thank you for joining us for another episode of Veritas Vox, the voice of classical Christian Education. Today we have an old friend and a new friend, Michael Collender, who has been teaching for Veritas for probably 15 years, a long time, and his partner in writing a book, Jonathan Shaw. They have written a book on artificial intelligence, or as we tend to call it, A.I., and it is a spectacular book.

They sent me the manuscript and gave me a chance to read it. And it is really remarkable. But before we get into that any further, Michael, give us a brief bio on yourself. I know you've been on our podcast before, so people are probably familiar with you. And then Jonathan, when he finishes and maybe before he finishes, if he goes too long, please introduce yourself, your background, family, and that sort of thing.

Dr. Michael Collender:

Yeah. Marlin, thank you for having us on. So, I've been teaching at Veritas for 15 years. I've got a couple of Ph.D.s that relate to the book. The first one dealt with modeling complex systems and the models of artificial intelligence, and the second one deals with how we grow and learn as it relates to empirical psychology. And so this book is a meeting of those two subjects as we look at classical Christian education and also artificial intelligence.

Dr. Jonathan Shaw:

So I got my Ph.D. back in 2006, working in machine learning. It was on unifying perception and curiosity, trying to figure out how the brain is able to do both perception, of vision and audition and all of that stuff, and also make intelligent choices in the world, and to meld those two together. Since then, I spent over a decade working at a little speech recognition company, then I spent the last few years at Facebook, and now I'm the head of machine learning at a little startup that's trying to figure out how to do translation for lots and lots and lots of languages, most of which are very low-resource languages that are too hard for the big tech companies to get data for. So, not doing much!

Marlin Detweiler:

That's wonderful. Well, A.I. hit the public main with the release and popularization of ChatGPT. What was your– you’re not new to this. You knew it was coming. People like I did not know it was coming. Tell us why it was so impactful to see those kinds of things become realities.

Dr. Jonathan Shaw:

I think it's a couple of different things. For one thing, for decades and decades, everybody in A.I. has been told that you can tell if something has human-level intelligence if it can pass for a human. That's been kind of a holy grail of A.I. It's called the Turing Test, and people have been playing at it for a long time. And all of a sudden, here you had something that felt intelligent, that you could give a new thing to do, and it would understand what you're saying and follow it.

And it was fully accessible to everybody. And you could just button press on your phone and you're like, “Look, it's like talking to a person, but it knows everything.” It was pretty shocking to a whole lot of people. And immediately, the question is, well, then what's left for me to do?

Marlin Detweiler:

That is a scary phenomenon.

Dr. Michael Collender:

Well, if I can add to that, there's another way in which that access to knowledge, to information that seems organized, is also scary, because it makes us ask the question, "Who's organizing this information?" So there's a little detail that I think I can share about Google, and this is internal to Google.

There was an A.I. that studied all of their hiring, and it concluded that white men were the best hires because they stayed with the company longest; specifically, that men were the ones who should be hired. Well, the people who were looking at the algorithm decided, no, that's not what should be.

Instead, we need to have a preponderance of women being hired, even though, you know, the pool of programmers who are female is smaller. But the reason why I'm bringing that up is that it shows the way in which the conclusions of A.I. are shaped by the way that A.I. is trained and who runs it.

And recently, we can see from Google Gemini that there is a clear bias built into the A.I., and I think all of us are aware, through a number of different tools that are coming online, that A.I. presents a great opportunity for the control of people, the control of ideas, the control of the information that we have access to, and as a result of that, the decisions that we make.

Marlin Detweiler:

Yeah, well, there are– maybe we can inventory the concerns. The concern of having legitimacy in the programming, and not having lies perpetuated as truth, is a very significant one. And the Gemini example is a fairly fresh one in the context of our recording this episode. What are some of the other concerns? Because it seems to me some of them are fairly obvious; that one turned out to be obvious, at least in the way it was disclosed, or the way that it was caught. But there are other ones that are far more significant, aren't there?

Dr. Michael Collender:

Yes, and I think some of the ways that we're going to see this in the near future is that the use of A.I. allows us to study things that we weren't able to investigate earlier and exercise control over them. Certainly, we're seeing this capacity increase in warfare, but also in the medical field, where there are interventions that we can now do even in the human genome, because researchers think they understand the genome, since they're using A.I. to do the research and to try to predict what will happen.

But for anyone familiar with complexity theory and how complex systems work, there are all kinds of cascading effects that even if you have A.I. and you apply it in a certain domain, there are all kinds of dangers that can then be brought into existence as a result of that. So there's this nexus of nanotechnology, artificial intelligence and its interaction with the natural world that is a very scary area.

Dr. Jonathan Shaw:

You know, there's a fascinating example of that, actually. I read an article at the tail end of COVID that basically said they did a meta-study of over 200 A.I. models that were designed to predict various aspects of COVID. How is it going to progress? How is this drug going to handle things? How are the masks going to play out?

All of this kind of stuff, different aspects of it. And their conclusion was that there were, I think, two that were not clearly wrong, and one that actually looked like it was right, out of over 200 models. That's really bad.

Marlin Detweiler:

I don't want to bet on one-out-of-200 odds. But this is a fast-developing category of technology. I noticed on the Greylock website recently, one of the premier investment firms in technology that America has ever known, that they're completely focused on investing in A.I. right now. It is clearly a hot topic and a big opportunity, at least if you trust Greylock's success and wisdom. Tell us what it does today, give some good examples, and then tell us what it might do tomorrow.

Dr. Jonathan Shaw:

So today, ChatGPT is an obvious example. You can talk to it, you can give it instructions, and it will interact in a very natural way. You've got the image generation stuff like Gemini, which was a classically terrible example of that, but there's a bunch of image generators out there that work very well. They're moving into video.

You see Sora moving into predicting and generating video nicely. There's a lot of multimodal stuff there, where you can describe things in text and get things back in images. I was playing with models recently that are doing speech-to-speech translation, so that's a pretty new thing, because now you can actually do translation without having a text system, a writing system.

So, these are all different areas that A.I. is working in. Then you get a bunch of medical applications like the ones we were discussing earlier, some of which work a whole lot better than the examples we were just giving. There's A.I. for helping in practically any field: A.I. for reading legal documents and so forth. So there's a lot of things that A.I. is doing.

Marlin Detweiler:

Let me have you pause there before you go to the future, because you're so accustomed to this, so immersed in the field, that I think something would be easily missed by the listener. You mentioned text-to-video and text-to-images.

Michael shared a link with me that showed about 15 examples of that, and it was astounding how simple the text description was. A sentence, or a four- or five-sentence paragraph, turned into a very realistic and complicated-looking video. It would be possible to literally create a feature-length film from a paragraph! Those kinds of things seem to me to be very profound in certain fields and in how they will affect certain professions.

Dr. Michael Collender:

Right. And Marlin, right now this is affecting business decisions. So I heard that Tyler Perry –

Marlin Detweiler:

I read that, too. Yeah.

Dr. Michael Collender:

Yeah. So, for the audience that might not know, he is suspending certain aspects of his production company because he saw Sora and could see very clearly that people will be able to make movies from a prompt.

Marlin Detweiler:

What I heard was that he walked away from an $800 million development, at least for now, because of the risk that he thought A.I. posed to the success of that venture.

Dr. Michael Collender:

Right.

Marlin Detweiler:

Unbelievable. Okay. So we've got the ability to tell A.I. to write a paper for us, which creates a problem in a school setting like the one Michael and I work in. We've got the idea of being able to create videos and movies and entertainment. One of my children, who is in the video world, said that the sitcom is so easy to create with A.I.

At least in terms of script writing; I'm sure the video could be as well. There are so many things like that, and this is just the first volley, isn't it?

Dr. Michael Collender:

Right. There are so many different fields that this is going to impact. But one of the examples that we discuss in the book is the U.S. military will wargame various strategic planning scenarios, but in one of them, they actually assigned a kill authority to an A.I., and they gave a human being control over the A.I. to make sure that the A.I. didn't hurt anybody. Well, the A.I. went and killed the human. Not actually, but within the game, it killed the human.

And then they gave it the rule that you can't kill the human. And so it instead started attacking the communication line, the tower that the human was using, to prevent the A.I. from exercising its own kill decision. So that's the sort of problem the U.S. government, you can imagine, is being careful about.

But what about other governments? Right. All of this technology is becoming available. There was a book recently written about Chinese A.I., and one of the ways that Chinese A.I. works is that they take all of these models that we've made, that have been incubated in Silicon Valley, and they start throwing them at problems.

And their idea is to accelerate as quickly as possible without constraints. We are very careful in our ethics, or at least we should be. I'm not suggesting that China is unethical, but as they're racing to catch up and gain power in the world, you can see that there are places where there are dangers if people hurry with these technologies.

The dog is let loose. It's in the world now, and lots of people are starting to make use of these tools.

Marlin Detweiler:

Well, you all wrote a book together called Wiser than the Machine. The question that I always like to ask about people who have written books is, why did you write it? What problem were you trying to solve, and are there things that you might do differently if you were doing it today?

Dr. Michael Collender:

That's a great question. Jonathan, do you mind if I start this, and then you can go back and clean up? Well, the way this started off is that I wanted to write this book, and I had this idea for it, and there were certain ideas I wanted to discuss. But the core concern that I had is just what we're seeing now.

This idea that A.I. is now on the scene, and where is it that education fits in the midst of this? Well, I believe, for very good reasons presented in the book, that classical Christian education specifically is crucial for us to employ to educate our kids in a world where A.I. is this dominant force.

And I saw that when I heard people talking about artificial intelligence, it seemed like they didn't understand it. They didn't understand how it worked and why. Now, Jonathan and I have known each other for a number of years. I always enjoy and appreciate talking with Jonathan, and I think when we met, I was speaking at a retreat that your church hosted; I think you guys actually hosted me.

So that's where we met. And so I called him up, and we started hashing through all of these different ideas, and I was bouncing things off of him. And then, as we were talking, I think it kind of dawned on us together: hey, we really should coauthor this. And I've just found it incredibly fruitful. As for whether there's anything I would change? Really, no. I feel like we nailed it, though I need to leave that to the reader to judge. But I'm very pleased with it.

Marlin Detweiler:

Yeah. Good place to be with the book.

Dr. Jonathan Shaw:

From my standpoint, A.I. and its natural limitations have been things that I've been thinking about for decades. Not long before Michael contacted me about this, I was pondering specifically the fact that you can think about its limitations directly. With these new pieces that have come out, ChatGPT, DALL-E for image generation, various systems that play games or try to figure out how to work in the world, these various systems are limited in terms of beauty, goodness, and truth. When I thought of it that way, I realized that the very things classical Christian schooling stresses are exactly the things that A.I. is limited on. And then it was probably a month or two later that Michael called me and said, "Hey, I want to write a book. I'm working on this book that's about classical Christian schooling and A.I. What do you think of this?" And I said, it just so happens I have a thought on that.

Marlin Detweiler:

Yeah, I have been thinking about it. Sure. Well, I don't want to blow the punch line of the book, but as I read it, I was captivated. Every now and then, probably no more than once or twice a year, a book keeps me up at night, and this one did. I kept reading it. It kept challenging my intellect with some of the language and technology behind it, things that were above my pay grade. But I was able to follow you, and you gave the reader, and me at this point, a reprieve, and said, "It's not necessary to read this." Well, I didn't skip anyway. I stayed there, and it was remarkable how you made the case for the concerns. You know, here I am reading this book with concerns about A.I., and you didn't just confirm my concerns; you made them worse!

I was far more fearful in the middle of the book than I was at the beginning. And then you gave me an answer that just resonated and said: yes, by the grace of God, I need not worry now. Now, I'm not supposed to worry about that. That doesn't mean I don't. And it was just wonderful to see how you developed that.

And like I said, let's cause the reader to want to read the book and not give them the punch line of the answer. But it was incredibly enjoyable to be taken through that process and to be let down, as in a fiction book that has a peak and then a release. And it really did have that. Tell me how it came about. Working together on a book is not an easy task, I know that. But how did the interchange go that got you to that point?

Dr. Michael Collender:

Yeah, you know, that's a great question. As we talked about this, I really enjoyed working on this project with Jonathan. It's really been enjoyable. He is highly intelligent and very careful in the way that he reasons, and he is just a wonderful collaborator. We've even talked about working on other projects together; I've pitched that to you.

So yeah, I've just, I've always appreciated Jonathan and just the way that he thinks, the way that he loves his family. And you've been a blessing to me. And so it's been just wonderful to work on this with you.

Dr. Jonathan Shaw:

Thank you. From my standpoint, I had no conception whatsoever of writing a book. And Michael called me out of the blue and said, hey, I'm writing a book. Here's my first draft, here's my thoughts on it, here's where I'm going. And I read it, and I had comments, but it was really interesting.

And so then we started working on it. Some of the stuff in there I rewrote, and he wrote parts of the end and a whole lot of the introductory stuff. A lot of the stuff that scared you at the beginning, that was Michael's doing.

Dr. Michael Collender:

But Marlin, it is interesting that you bring that up. I mean, we did actually, you know, for those who are into this sort of thing, we did kind of structure it like a film. So it has an inciting incident. It has a first plot point that gets you into the middle act. It has a crisis point.

As we were going back and forth, I was consciously thinking of the structure of a film and creating that effect that you're describing.

Marlin Detweiler:

Well, the story effect that you picked up on is one that's well-worn as a good way to capture the reader's imagination and cause them to be moved the way you're trying to move them by writing in the first place.

Dr. Jonathan Shaw:

And it helped us to have a rhetoric teacher involved here.

Dr. Michael Collender:

It was a lot of fun.

Dr. Jonathan Shaw:

Yeah, it was.

Marlin Detweiler:

Jonathan, I'm stuck on something I want to go back to. You mentioned that only one or two, maybe three, of the models out of a couple hundred were anywhere close in predicting COVID. I guess, was that how it played out?

Dr. Jonathan Shaw:

I think it was a bunch of different aspects of it. Some of it was how it was going to spread or how masks were going to stop things or what medications were going to do or different government policies.

Marlin Detweiler:

That is a very small number, at a very early stage in the development and effectiveness of A.I. We've talked about using A.I. in the military to develop models. I don't like the idea of a one-and-a-half percent success rate for the military to do its modeling with and make decisions off of. What does that mean?

Dr. Jonathan Shaw:

I read an article just the other day about a study. The study basically said they tried out each of the big chat models, so GPT and Claude and Gemini and Llama, and they tried each of these models in a wargame kind of scenario where each one had 27 things it could do, ranging all the way from, let's make peace and de-escalate this thing, all the way up to, launch the nukes.

And within one turn, 33% of the time, they launched the nukes. What is going on here? I suspect that this is an aspect of the fact that we've trained the model on the Internet, and the Internet is often a pretty combative place. And so if you think about how people interact on the internet…

Marlin Detweiler:

So it’s based on how we talk to our neighbors!

Dr. Jonathan Shaw:

Great, isn't it?

Marlin Detweiler:

That's scary.

Dr. Jonathan Shaw:

Yeah.

Marlin Detweiler:

So you didn't answer my question, so I want to come back to it. And I understand it's a hard question. How does A.I. become reliable?

Dr. Michael Collender:

Can I jump in on part of that here, since we're talking on this one topic, and then I can pass it back to you, Jonathan? Just on this topic of the unreliable COVID models: the one model that worked, I think I was on a Zoom call with the guy who did that model on Monday, this last Monday. And one of the things that he mentioned on the call was that he met with the governors of various states. They built this A.I. model to track where the actual dangers were. And they discovered through this A.I. model that the real danger from COVID was to elderly people in rest homes, right? Nursing homes.

But for the rest of the population, it really wasn't a problem. So they were going around communicating the model, the assumptions behind it, how it was trained, and explaining to the various governors how to approach this. Well, they went to Florida, and Governor DeSantis basically, you know, kicked everybody out of the room and met individually with the guy who had this model.

And he was explaining that DeSantis grilled him. He had done his homework. He knew exactly what the issues were. And he was able to talk through this model, understand the assumptions, and then use it as, you know, a partial basis for his decision-making not to completely close down Florida. Then he went over to meet with Governor Newsom, and Governor Newsom didn't meet with him. They had a lackey who came in, and he was eating French fries the entire time. Right? Didn't care. So my point in saying that is –

Marlin Detweiler:

For the record, it's not about the French fries. It's about the casualness.

Dr. Michael Collender:

That's right, we love French fries, especially from McDonald's. But the point that I'm getting at is that here you had a model that was really carefully put together, where the assumptions were tested and whatnot, and you can see two different responses to it. So all that to say that, in terms of COVID, there were actually good uses of A.I., and in fact, in some cases, the good uses of A.I. were ignored by people who didn't like the answers they were getting.

Dr. Jonathan Shaw:

Well, yeah. So, in answer to your question, there are two aspects, I would say, of making A.I. reliable. One of them is technological: A.I. fails because it doesn't have the ability to see what it's doing accurately. It's just a large language model predicting the next token, and if you're just predicting the next token, you don't know what it is you're talking about. That's not entirely fair, but there's definitely a sense of that if you're talking to an A.I.

Marlin Detweiler:

It’s a simplistic way of looking at it. Yeah.

Dr. Jonathan Shaw:

Yeah. If you're talking to an A.I. about surfing, it doesn't know how to balance on a surfboard. It doesn't know how to catch a wave; it doesn't know what water is. It's only looked at text, which gets it surprisingly far, but it's also got big holes in its understanding. And so there are technological advances happening, partly trying to figure out how to constrain it to be more reliable.

And there are all kinds of tricks people are doing; I think more multimodal understanding is going to help with that. If you actually have something that can look at the world, and if you have robots that can actually interact with the world, you can constrain it a lot. You can do a lot better. The other reason that it's unreliable is that it learns its viewpoints from humans, and humans are unreliable. That can't be solved without making a more reliable, more honest culture.
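(Editor's aside, for readers curious about the "predicting the next token" idea Dr. Shaw describes: the loop at the heart of a language model can be sketched in a few lines of Python. The tiny bigram table below is invented purely for illustration; real models score an enormous vocabulary with a neural network rather than a hand-made lookup.)

```python
# Toy sketch of next-token prediction. A real LLM replaces this hand-made
# bigram table with a neural network trained on vast amounts of text.

BIGRAM_PROBS = {
    # current token -> {candidate next token: probability}
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
    "dog": {"ran": 1.0},
    "ran": {"away": 1.0},
}

def predict_next(token: str) -> str:
    """Greedy decoding: pick the single most probable next token."""
    candidates = BIGRAM_PROBS.get(token, {})
    if not candidates:
        return "<end>"  # nothing learned for this context
    return max(candidates, key=candidates.get)

def generate(prompt: str, max_tokens: int = 5) -> str:
    """Repeatedly append the predicted next token -- the whole 'loop'."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        nxt = predict_next(tokens[-1])
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # the cat sat down
```

The point of the sketch is Dr. Shaw's: nothing in this loop "knows" what a cat or water is; it only chooses whichever continuation was most common in its training data, which is also why biased or combative training text shows up in the output.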

Marlin Detweiler:

Yeah, that's what we saw with Gemini, and of course, people come with their biases. It would seem to me that larger and larger data sets would start to average that out. But if we have comprehensive cultural deficiencies that don't recognize truth as truth, we don't get past it. So we really haven't gone very far from garbage in, garbage out.

Dr. Jonathan Shaw:

No, I don't know that we ever will.

Marlin Detweiler:

Well, what you just said is an incredible tease and setup, because the way you set the reader up for the answer was, yes, that's it. And we're not going to tell the listener; they're going to have to read it for themselves, aren't they? Guys, this has been great.

Thank you so much for joining me, folks. Thank you for joining us again on another episode of Veritas Vox, the voice of classical Christian Education. Jonathan, Michael, thank you.