Creative Space with Jennifer Logue

AI Expert Graham Morehead On How AI Creativity Differs from Human Creativity

Jennifer Logue

On this episode of Creative Space, I’m thrilled to bring you an insightful conversation with Graham Morehead, AI expert and author of "The Shape of Thought: Why Your Brain is so Different from AI." From his tech-rich upbringing in Boston to his current role as a professor at Gonzaga University and CEO of Pangeon, Graham shares his extensive knowledge and unique perspectives on artificial intelligence (AI) and human creativity.

We cover a lot of ground in this episode, from the history of AI to the creative process of a GPT and how AI creativity differs from human creativity.

I hope you find this episode as fascinating as I did.

For more on Graham, visit: https://grahammorehead.com/
You can buy his book here.

For more on me, your host and creative coach, visit: jenniferlogue.com.

To sign up for the weekly Creative Space newsletter, visit: eepurl.com/h8SJ9b.

To become a patron of the Creative Space Podcast, visit: bit.ly/3ECD2Kr.

SHOW NOTES:

0:00—Introduction

3:28—“Math is the language of the universe.”

5:00—The Influence of 2001: A Space Odyssey 

5:50—Diving into the history of AI, Alan Turing and the universal Turing machine

9:15—Defining AI at a high level

11:00—How AI models are trained

13:30—What is RLHF?

16:12—The Roomba, self-driving cars and discrete objects

23:00—The creative process of a GPT

26:30—How human creativity is different from AI

35:00—As an artist, part of your job is to say what hasn’t been said. 

35:30—What led you to devote your career to AI?

38:53—The story behind Parsimony 

41:44—But what about Blake Lemoine? 

43:33—Graham’s definition of creativity

49:32—Figure out what you want to say, do it and keep at it.




Jennifer Logue:

Hello everyone and welcome to another episode of Creative Space, a podcast where we explore, learn and grow in creativity together. I'm your host, Jennifer Logue, and today we have the pleasure of chatting with Graham Morehead. He's the author of The Shape of Thought: Why Your Brain Is So Different from AI. He is also the founder and CEO of Pangeon, CTO of Topology and a professor teaching AI at Gonzaga University. For over 25 years, Graham has been developing AI and machine learning solutions to difficult problems. His research has led to several TEDx talks, several tech companies and hardware and software patents. He is currently working to solve problems related to wildfire, real estate, intellectual property, linguistics and defense against AI weaponry. I'm thrilled to have him on the show. Welcome to Creative Space, Graham.

Graham Morehead:

Thank you, it's so great to be here.

Jennifer Logue:

Oh my gosh, you have such an incredible background and first I got to ask where are you calling from today?

Graham Morehead:

Washington. Spokane, Washington, actually walking distance from Gonzaga.

Jennifer Logue:

Oh, that's wonderful. Cool. Do you spend a lot of time teaching?

Graham Morehead:

I teach one class at a time. However, this fall I will be sort of teaching two classes. We have a new program at Gonzaga for graduate level remote learning, so this summer I'm doing research with students, as I always do in the summer, but also preparing for that class.

Jennifer Logue:

And I got to ask where are you from originally?

Graham Morehead:

Boston, Mass.

Jennifer Logue:

Nice, okay, you're an East Coaster.

Graham Morehead:

Yep. Born and raised in Boston. I did travel around a bit because my dad was in the Air Force, so we spent some of my childhood in England.

Jennifer Logue:

Oh cool, very cool. So, an international upbringing. And who were your biggest inspirations in your early years?

Graham Morehead:

So I was growing up in a time when Boston, the area around Boston, was one of the leaders in technology. There's a highway called 128, and a lot of modern technology was developed along that highway, and Bedford is smack right on it. We had Hanscom Air Force Base, where a lot of research was being done, right next to Lincoln Labs, MITRE, Raytheon. All these different companies were doing really interesting work, and all their kids would go to the same schools as me, the public schools around there. So I was more or less inspired by a lot of my parents' friends and my friends' parents. A lot of scientists and mathematicians were just in the area, and that made it more difficult to be on the math team or the science team, but also much more inspiring. I just remember being exposed to a lot of interesting mathematics early on, and that got me going down that path.

Jennifer Logue:

Wow, so math was always a big thing for you.

Graham Morehead:

Oh yeah, I think math is the way forward. People are afraid of math, they hate math, but it really is the language of the universe.

Jennifer Logue:

When you put it that way, it's really interesting. But they didn't teach it that way, at least not where I went to school.

Graham Morehead:

No, they don't, and I think it's because it's hard to learn and hard to teach. Whenever I took a statistics class, I always felt like I was learning real truth with a capital T. Think about your instincts. We're all afraid of shark attacks, and we're not afraid of driving down the street to the store. But way more people get hurt driving down the street to the store than in shark attacks. Shark attacks are very, very rare. Our fear systems are not based on the actual statistical likelihood of something happening. Statistics is a way to cope with the idiosyncrasies of how our brain works. We're not great at finding truth. Statistics is a coping mechanism we use to find truth.
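
To put rough numbers on that idea, here is a minimal sketch comparing the two risks, using approximate public figures (on the order of 40,000 U.S. traffic deaths per year versus roughly one fatal shark attack per year); the exact values are assumptions for illustration, not precise statistics.

```python
# Rough illustration of the point: perceived risk vs. statistical risk.
# The figures below are approximate, for illustration only.
US_POPULATION = 330_000_000
TRAFFIC_DEATHS_PER_YEAR = 40_000   # approximate U.S. annual figure
SHARK_DEATHS_PER_YEAR = 1          # approximate U.S. annual figure

p_traffic = TRAFFIC_DEATHS_PER_YEAR / US_POPULATION
p_shark = SHARK_DEATHS_PER_YEAR / US_POPULATION

print(f"Annual risk of dying in traffic: ~1 in {round(1 / p_traffic):,}")
print(f"Annual risk of dying from a shark: ~1 in {round(1 / p_shark):,}")
print(f"Driving is roughly {round(p_traffic / p_shark):,}x more deadly per year")
```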

Jennifer Logue:

Well, that is a whole other conversation. That is fascinating. Oh man, I want to study math. I think a lot of us need math to cope right now, in this world we live in, with all these new things to be afraid of, which maybe we don't need to be so afraid of.

Graham Morehead:

Yep, a lot of them we don't. But there are things we should be somewhat afraid of. Then again, maybe there are people who spend their lives not afraid and have just as much chance of having trouble befall them. So maybe it's better to live your life not very afraid, just a little afraid.

Jennifer Logue:

Yeah, like somewhere in the middle.

Graham Morehead:

Yeah, yeah.

Jennifer Logue:

Cool. So when did you first get interested in AI?

Graham Morehead:

So as a kid I watched 2001: A Space Odyssey, and they had the HAL 9000 computer. I was fascinated by it. I watched the movie many times, including the long version that begins with the entire "Also sprach Zarathustra" and about ten minutes of black screen. But I loved it, even as a kid. In my last year of college, I was studying physics, and I took an AI course just because I had to fill out my schedule. It was so interesting, and I remember thinking, I thought this was sci-fi, only in the movies, but this is real. It turns out AI started in 1956 as a real direction of research, and before that it was already being done, but we hadn't named it yet. A lot of what we do in AI started with Alan Turing and his work in the 1930s and 40s. The Turing test, have you heard of that?

Jennifer Logue:

Oh yes.

Graham Morehead:

Yeah.

Jennifer Logue:

Actually I've been watching a lot of the lectures at the Turing Institute.

Graham Morehead:

Really.

Jennifer Logue:

Excellent. Well, I wanted to learn more about AI and they were really helpful in giving me a big-picture overview, and I didn't know that AI has been around for that long. This isn't a new term. This isn't a new thing, as you said.

Graham Morehead:

Yeah, the Turing test is just one of the many things he did. He's well-known for that, but the Turing machine is much more influential. The Turing machine is a way to idealize what a computer does, and you can describe it well mathematically. It's not something we typically use in real life; it's just a way to study computation in the abstract. The Turing machine was this idea Alan Turing had: he was able to talk about computation in ones and zeros, as a little device that just reads and writes a one or a zero in every place on a long tape. Using this abstract idea, he was able to prove things about computers, computers that didn't even exist yet. And because of that work, we knew ahead of time that computers could do multiple things. You have more than one program on your computer and your phone and everything, and no one was sure that was possible until Alan Turing did this work.

Graham Morehead:

There's something called the universal Turing machine, and it's the idea that if you make your computer correctly, you can run any software on that one computer; you just need to get the right software. That whole idea, that software and hardware can be treated separately and you can run anything on one computer, that's Alan Turing, way back then. Separately, and this is another contribution from him, he designed the machine that broke the Enigma, the German communication device the Nazis were using in World War II. So if it weren't for Alan Turing, you might be speaking German right now.
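
To give a feel for how simple Turing's idealized machine is, here is a minimal sketch of a one-tape Turing machine; the states, symbols and transition rules below are toy assumptions, not anything Turing actually specified.

```python
# A minimal sketch of a one-tape Turing machine with a made-up rule table.

def run_turing_machine(tape, rules, state="start", head=0, max_steps=100):
    """Run the machine until it reaches the 'halt' state (or runs out of steps)."""
    cells = dict(enumerate(tape))          # tape cells addressed by position
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")      # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        cells[head] = write                # write a symbol at the head
        head += 1 if move == "R" else -1   # move the head left or right
    return "".join(cells[i] for i in sorted(cells))

# Toy rules: walk right, flipping 0s and 1s, and halt at the first blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", rules))  # -> 0100_
```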

Jennifer Logue:

Oh, wow. It's amazing the impact one person can have on the future of humanity, right? And I don't think the average person knows Turing right now, apart from the Turing test, if they're even following that at all.

Graham Morehead:

Yep. So I highly recommend The Imitation Game. It's a movie about Alan Turing, and Benedict Cumberbatch plays him.

Jennifer Logue:

Oh cool, yeah, I have to check that out. I've heard of it. I need to catch up on my movies, but that's definitely on the list.

Graham Morehead:

It's a worthy movie.

Jennifer Logue:

For the actor and also for Alan Turing. Now, for people listening who may not understand AI at all as a concept, how would you define it at a high level?

Graham Morehead:

There are different kinds, and I'm going to put them in different boxes. One kind of AI, the one people are most used to and have been using the longest, is the ability to find a needle in a haystack. That's what Google Search does. How many websites exist? Well, it's hard to count, right? When you have something you want to know, maybe some website you want to go to, or some company you want to find, or a location, you go to Google and you ask, and it finds the needles in the haystack. The haystack is bigger than any haystack we can imagine, and those needles are really hard to find. Google is very good at finding needles in the haystack, and that is a kind of AI. Now, let's say you're going to go the other direction. You want to generate a haystack from a needle, right? Generating a whole lot from one small thing, that's another kind of AI. We can call that generative AI.

Jennifer Logue:

That's generative AI. What would you call Google?

Graham Morehead:

Google is the other one. I would call that search.

Jennifer Logue:

Search okay.

Graham Morehead:

So going the other direction, instead of finding one thing out of many, you're going from one thing to many. That's generative AI. OpenAI has a number of products, and there are a lot of these, thousands of them now, actually, and a lot of them are open source. With the text-based ones, you enter a little bit of text and you get a lot more text. In fact, you can keep going. You can have it write books for you if you want.

Graham Morehead:

Now, I wrote my book myself, as a human. I wanted a human effort because I think that's more valuable than ever: human creativity, human thought. Images can also be generated by AI now, and the two are trained in different ways. I'll talk about text first. The way they make these systems is they train them with the word guessing game. Take a sentence like "Jack and Jill went up the..." Can you finish that? Very good.

Graham Morehead:

If you had guessed wrong, I would have told you it was wrong, and how wrong. So the AI has this vocabulary, all of English or whatever languages it's being trained on, and, there's statistics again, it has a probability for each word being the next word. The way you train it is you give it a lot of these sentences: "Jack and Jill went up the blank." It makes a guess, and it gets it wrong. You tell it, nope, the right answer was "hill."

Jennifer Logue:

Does a human tell it it's wrong?

Graham Morehead:

It's human text. It learns on its own from human books and websites and podcasts and everything.

Graham Morehead:

Yeah, in fact, think about this. The average human is exposed to about 20,000 words a day, right? An AI like ChatGPT has been exposed to well over a million years' worth of that, of 20,000 a day. So it's a lot of data; it's most of the internet. And what they did is they took full sentences, blacked out a word, and had it guess that word, again and again and again. It gets really good at guessing the next word.
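
To make the word guessing game concrete, here is a minimal sketch: count which word tends to follow which in a tiny made-up corpus, then guess the most likely next word. Real systems like GPT learn these probabilities with huge neural networks over vastly more text; this toy bigram counter only illustrates the idea.

```python
from collections import Counter, defaultdict

# Toy training text standing in for "most of the internet".
corpus = [
    "jack and jill went up the hill",
    "jack and jill went down the hill",
    "the cat went up the tree",
]

# Count, for each word, how often each other word follows it.
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def guess_next(word):
    """Guess the most frequently seen next word from training."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("the"))   # -> 'hill' (seen twice, vs. 'cat' and 'tree' once each)
print(guess_next("went"))  # -> 'up' (seen twice, vs. 'down' once)
```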

Graham Morehead:

Now, what nobody expected was that if you train it just on that, it starts to do really well at general understanding. It seems to do well, and it does do well, until it doesn't. It has weird breaking points; it hallucinates and says weird things. And OpenAI has hundreds of people, maybe even thousands, working on looking at the output and trying to make it better. So they have this system they call RLHF, Reinforcement Learning from Human Feedback, and all it means is they have their original AI that generates text, then they have a bunch of people looking at that text and grading it. They say good, bad, ugly, whatever, and then they train another model to imitate what those people say. So there's always going to be a two-model type system here.
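
Here is a hedged, highly simplified sketch of that two-model setup: a stand-in generator proposes answers, a stand-in reward model imitates the human graders, and the best-scoring answer wins. All the names, examples and the scoring rule are invented for illustration; real RLHF trains a neural reward model and updates the generator with algorithms like PPO rather than best-of-n selection.

```python
# Tiny labeled set standing in for the human graders' feedback (made-up examples).
human_feedback = [
    ("Sure, here is a clear step-by-step answer.", 1.0),  # graded good
    ("Nine women can make a baby in one month.", 0.0),    # graded bad
]

def word_overlap(a, b):
    """Count shared words between two strings (a crude similarity measure)."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def reward_model(text):
    """Stand-in reward model: score text by its nearest graded example."""
    closest = max(human_feedback, key=lambda pair: word_overlap(text, pair[0]))
    return closest[1]

def generate_candidates(prompt):
    """Stand-in generator; a real system would sample several completions from the LLM."""
    return [
        "Nine women can make a baby in one month.",
        "Sure, here is a clear step-by-step answer.",
        "Here is an answer with clear steps.",
    ]

def pick_best(prompt):
    """Best-of-n selection by reward: a crude proxy for the actual policy update."""
    return max(generate_candidates(prompt), key=reward_model)

print(pick_best("How long does it take to make a baby?"))
```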

Graham Morehead:

So they train it on the word guessing game, and it's amazingly effective. It knows everything on the internet. Imagine a person who's maybe very autistic, but they can remember everything they've read, and they have read the whole internet. They make unexpected mistakes, but they can tell you so much. And it is a little bit interesting when it makes mistakes.

Graham Morehead:

They're very funny. And these are all mistakes that have been published, so they've since been fixed, right? Someone asked it: if nine women work together, how fast can they make a baby? And it said, well, one month. And it shows the calculation, a full paragraph explaining why nine women can each, you know, make one ninth of the baby and get that done in a month. So it doesn't really understand. It breaks in funny ways, but it's still very useful.

Jennifer Logue:

I guess, with the paradigm that AI has, I mean, if it's starting to reason a bit on its own from different things it's pulling from the internet, it's funny to see some of these hallucinations. But it makes you think: if someone landed on this planet from another planet and had no concept of our paradigms, would they make similar deductions?

Graham Morehead:

Jennifer, that's the perfect way to say it. A lot of people say it's an alien intelligence.

Jennifer Logue:

Oh really.

Graham Morehead:

Not actually from an alien, but like an alien intelligence. A lot of people have used that same analogy.

Jennifer Logue:

Oh, interesting. Okay, yeah, it's so fascinating to me to think about how this can develop, but that's a great explanation for people who may not know what AI is.

Graham Morehead:

I think that explains it at a really high level. So I want to add to that, though, the definition of AI. Those are two ways to think about it. Another one is general problem solving.

Jennifer Logue:

Ah, okay.

Graham Morehead:

Think about your Roomba. Do you have a Roomba or robot vacuum?

Jennifer Logue:

I do not, but my roommate did last year.

Graham Morehead:

So it has a problem to solve: it has to cover all the floor area somewhat efficiently before its battery dies and make sure it's clean. The way it solves that problem, different brands do different things. Some of them have a memory; some of them just have a nice algorithm to make sure they eventually cover it all.

Graham Morehead:

I have a Neato brand one, and it does have a memory, actually. It remembers everywhere it's been, and when it's done it will message you with the floor plan of where it was able to clean. It's also sensing how much dust is coming up, so if there's a lot of dust, it'll stay in an area for a while. So it has a problem to solve, and it has a number of steps, like if-thens: if this, then do that, or keep looping here until something happens. Coming up with the right algorithm to sweep your floor is just one of the things AI can do. A Tesla doing full self-driving is one of the most advanced systems we have now, and even though it's not perfect, I think it's really quite impressive.
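
As a minimal sketch of that kind of if-then logic, here is a toy robot-vacuum loop over a small grid; the floor plan, dust sensor and strategy below are invented for illustration and are far simpler than anything a real Roomba or Neato runs.

```python
import random

# Toy floor plan: 0 = clean floor, 1 = dusty floor, None = wall/obstacle.
floor = [
    [0, 1, 0, 0],
    [0, None, 1, 0],
    [0, 0, 0, 1],
]

def clean_floor(floor, battery=40):
    """Very simplified coverage loop: if dusty, stay and clean; if blocked, turn; else move."""
    rows, cols = len(floor), len(floor[0])
    r, c = 0, 0
    dr, dc = 0, 1                       # start heading right
    visited = set()
    for _ in range(battery):            # each step costs battery
        visited.add((r, c))
        if floor[r][c] == 1:            # dust sensor reads high: keep cleaning this cell
            floor[r][c] = 0
            continue
        nr, nc = r + dr, c + dc
        blocked = not (0 <= nr < rows and 0 <= nc < cols) or floor[nr][nc] is None
        if blocked:                     # if blocked, pick a new random direction
            dr, dc = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        else:
            r, c = nr, nc
    return visited                      # the "floor plan" of everywhere it has been

print(sorted(clean_floor(floor)))
```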

Jennifer Logue:

Yes, and do you think we'll have self-driving cars as like a normal thing in a few years, or do you think that's really far off?

Graham Morehead:

I think it's pretty close, and it's got to be 10 times or 100 times better than humans before we'll let it drive, because if you look at some statistics, supposedly right now it's already safer than humans on average. However, that's not good enough, because it will still make stupid mistakes that a human won't.

Graham Morehead:

And I also think it's hard to know who to sue, and our system has to know whose neck to choke. You know, who do you sue? Who do you get angry at? If it's just a computer program, it's hard for us to handle that. We need to have a person on which to put the culpability.

Jennifer Logue:

I didn't realize that that was part of the problem with getting them on the road. It's like who do you blame when something goes wrong? The system has to be ready for it.

Graham Morehead:

Insurance actually gives us a neat look into the mind of AI. When insurance gets involved, they need assurances; they need to figure out how to do an audit if something bad happens.

Graham Morehead:

There was a famous Tesla accident where a truck had fallen over sideways on the highway and a Tesla ran into it at full highway speed without slowing down. It was difficult to watch. Now, it turns out the driver survived fine, no injuries even; I think Teslas are very safe. But still, why didn't it at least pump the brakes, right? They went into the computer and could see that it thought it was looking at an overpass. It thought it was going to go under a bridge, so why stop? It had never seen a truck sideways across the highway. So obviously computers don't live lives like we do. They don't think like we do. They have to be expressly shown things, just like GPT-4 had to learn from over a million years of English, whereas a human can start talking pretty well after a couple of years. And that makes our brain different.

Graham Morehead:

Very different.

Jennifer Logue:

Yeah.

Graham Morehead:

So these computers have to learn with a lot more examples. With the Tesla, they train with millions of examples, and if it hasn't ever seen something, it's going to get confused. But what was important about that example was that Tesla has a system by which we can peer into the mind of the AI and see the world that it sees. When you're driving down the street in the Tesla, you can see it right in front of you. When there's a car to your left, it recognizes that as a car, knows exactly where it is at every second, and also tries to predict its movements. Every car has momentum, it has a direction, and yes, it can stop and speed up and slow down and turn, but there are limits to what it can do.

Graham Morehead:

And if it recognizes a biker or a pedestrian, there are limits to what they can do; a pedestrian is never going to catch up to you at 60 miles an hour and pass in front of you. So it can predict what the next step in time might be. And it does that in this realm of discrete objects, D-I-S-C-R-E-T-E, and it's a certain kind of math. Math is the way, remember. So it's discrete: a discrete person, a discrete bicycle, a discrete car, a discrete truck. These are all discrete, individual things that act as one. And this is how your vision system works too. You're looking out into the street and, let's say, I see a cat walking by. I think of that cat as a singular thing. It's going to move in cat-like ways. It's not going to turn into five mice and go in five different directions. It's a cat. It has to obey the laws of physics of cats, and a car has to obey the laws of physics that apply to cars. So when we resolve the world into discrete things, we can much more simply understand them.
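
A minimal sketch of what predicting "the next step in time" for a discrete object can look like: track each object's position and velocity, extrapolate, and sanity-check against the physical limits of its class. The classes, speed limits and numbers are made up for illustration; a real driving stack uses far richer models.

```python
from dataclasses import dataclass

# Rough, made-up top speeds (m/s) for each class of discrete object.
MAX_SPEED = {"pedestrian": 4.0, "bicycle": 12.0, "car": 60.0}

@dataclass
class TrackedObject:
    kind: str          # "pedestrian", "bicycle", "car"
    x: float           # position along the road (m)
    y: float           # lateral position (m)
    vx: float          # velocity components (m/s)
    vy: float

    def predict(self, dt=0.5):
        """Constant-velocity guess at where the object will be dt seconds from now."""
        return self.x + self.vx * dt, self.y + self.vy * dt

    def is_plausible(self):
        """A pedestrian is never going to pass you at highway speed."""
        speed = (self.vx ** 2 + self.vy ** 2) ** 0.5
        return speed <= MAX_SPEED[self.kind]

car = TrackedObject("car", x=10.0, y=-3.5, vx=25.0, vy=0.0)
walker = TrackedObject("pedestrian", x=30.0, y=2.0, vx=27.0, vy=0.0)  # 27 m/s ≈ 60 mph

print(car.predict())          # (22.5, -3.5)
print(car.is_plausible())     # True
print(walker.is_plausible())  # False: either misclassified or the perception is wrong
```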

Graham Morehead:

Let's go back to GPT for a second. When words come into GPT, they're discrete words at first, but as soon as they go into the computer, the computer doesn't read words. It thinks about things called vectors. This is some more math, so I'll try to describe it in a simple way. You think about coordinates, X, Y, Z, like space, right? Here's one coordinate, here's another coordinate, here's another one. That's how GPT thinks about words: as points in space. So let's say you have the word for cat, and then you have the word for dog. Now, where is tiger going to be? It's going to be over here, much closer to cat than dog. Inside the mind of GPT there's a bunch of points, which are these words, and it's moving them around and turning them and squishing them and twisting them, and then in the end it turns those back into words.
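
Here is a minimal sketch of those "points in space": give each word a small vector and compare them with cosine similarity, so tiger lands closer to cat than to dog. The three-dimensional vectors below are hand-picked for illustration; real models learn hundreds or thousands of dimensions from data.

```python
import math

# Hand-picked toy embeddings (real models learn these from data, in many more dimensions).
embeddings = {
    "cat":   [0.9, 0.1, 0.3],
    "dog":   [0.7, 0.6, 0.2],
    "tiger": [0.95, 0.05, 0.4],
}

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means pointing the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine_similarity(embeddings["tiger"], embeddings["cat"]))  # highest: tiger sits near cat
print(cosine_similarity(embeddings["tiger"], embeddings["dog"]))  # lower: dog is farther away
```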

Jennifer Logue:

It's like an association map almost.

Graham Morehead:

Yeah, it's a big, squishy, high-dimensional space with a bunch of these vectors, and it doesn't think about words or meanings. It just thinks about those points in space. So when it's creative, it's creative in that goopy, fuzzy word-vector space. It's not thinking about concepts like you or I. When we're creative, we're creative in the space of thoughts, meanings and ideas. It's creative in the space of those points.

Graham Morehead:

Another analogy I like to make is making shadows on the wall with your hand. You have three-dimensional fingers, hands and wrists, and you're using them to cast a two-dimensional flat shadow, and you can see the imperfections. You can see, okay, I can see the wrist, I can see that those were human fingers. That's not a perfect rabbit or a perfect deer or a perfect duck.

Graham Morehead:

It's got imperfections around the edges because it was this three-dimensional thing and you're just casting a 2D shadow. Now, a three-dimensional thing can be viewed from different directions, so if you cast a shadow on a different wall or in a different direction, you're going to see something different. You have this multi-dimensional thing going down into a flat shadow. Now, GPT lives in a very different kind of space than our 3D fingers. I think it's actually more like a shadow generator that lives in the flat space already, so it can make a perfect deer or rabbit or whatever. It's not going to have the same imperfections that the human shadow would have. It's different, and because it doesn't have those imperfections, it's also not going to have some of the same beauty that I think we find in our ideas. And it's because it doesn't start off with a thought. You have a thought in your head, and that thought is high dimensional.

Graham Morehead:

That thought has many ways you can look at it. You could cast it on different walls, in different directions, and you choose one. And then the words, the flat word shadow, in a sense, that's the whole world for GPT. It doesn't have any thoughts that it's turning into words.

Jennifer Logue:

It's just playing the word guessing game. It has no ideas, for one. And human creativity is so different, and we'll get into this in the next set of questions, but for me, as an artist, it's about the process; it's not about the end result. When I create art, I'm in the process, and sometimes the process takes me down a different route that doesn't make any sense. I might get inspired to go to a concert that changes how I approach a song later, you know, and because I'm able to have real-world experiences, it's going to change the output. And because GPT, because AI, is feeding on the internet, on things that have been created before, it's copying, you know. But as humans, we're able to go out and have new experiences, and the journey to the art is the most important thing.

Jennifer Logue:

And I feel like a lot of people talking about creativity and AI are looking at creativity as this thing you output. Just because you generated a piece of art on DALL-E in 30 seconds does not make you an artist, I'm sorry. There's something about the craft and the human process that I can't explain. Human creativity, I don't think any of us can explain it. I think there are higher elements we'll never understand. Because, as you mentioned in your book, the brain is a very elegant machine, and there's so much we don't understand. I think it's hard for some to accept that there is so much we don't understand.

Graham Morehead:

So I have a question for you, Jennifer. When you're writing creatively and doing your best to come up with something that can impact people or make them feel something or learn something, what part of your process do you think is you simulating the thoughts of your readers?

Jennifer Logue:

Simulating the thoughts.

Graham Morehead:

Simulating the thoughts of your readers. Do you think that's in your head at all? That you're trying to simulate how they would receive this, what they would feel?

Jennifer Logue:

It depends on what I'm writing. If I'm doing work as a copywriter, that's something where you're thinking about the audience more. But when I'm writing as an artist, I'm expressing my truth and my experience, and my attitude towards that is, if this resonates with one person, I've done my job. My job as an artist is to express truth, my truth, and everyone's truth is different. We all have different experiences because we're all in this world differently, experiencing this world differently, and a chatbot can't do that.

Graham Morehead:

How important is it to reach that one person? Or would you do it the same even if it would reach nobody?

Jennifer Logue:

Even if it reached nobody, I would still do it. Why? You know, I don't know. I wish I did. It's like there's this pull to create, and if I resist the pull, I go crazy.

Graham Morehead:

So that pull is something computers don't have. Computers don't desire, they don't want, they don't need.

Jennifer Logue:

That is an excellent point. It's like, where does that pull come from? I can't explain inspiration. And there's a wonderful book I just went through a few months ago, The Artist's Way by Julia Cameron, I'm not sure if you've heard of it, an incredible book. It's a spiritual path to higher creativity, and it's absolutely wonderful. But there's so much we don't understand about existence, human existence. And I just find it funny that so many people are trying to write off human creativity as this thing that can just be explained in equations and algorithms. It's like, no, come on.

Graham Morehead:

Yeah. Another reason I think it's so much more is, first of all, remember that space, X, Y and Z, that GPT lives in. It's more dimensions than just three, it's many, but those dimensions are not inherently meaningful. There are many points in that space that don't match up with a word. It's not really a meaningful space, and when it explores that space, most of that exploration is not going to map onto something interesting to humans.

Graham Morehead:

We as humans, when we explore, we're trying out different thoughts in our head, and every direction we go in is a meaningful direction. It's a direction along some axis: am I being creative in the type of word I'm using, or the type of metaphor I'm making, or in the type of relationship I'm exploring between characters? What is the direction I'm exploring in? Whereas for AI, it's just exploring in mostly meaningless directions.

Graham Morehead:

Now, what's very interesting is that we've been able to make these systems work so well that when they do come out with a full output, we look at it as a human and we can imbue meaning onto it. We can look at it and think, oh, it is meaningful, and we can feel something. But really it's more like an idiot savant, that was the old term, people who could say things that sound really intelligent, but when you dive further into it and question them, you realize they didn't understand a single word they said. But a human who does understand what they say and explores those ideas, that's really interesting to us, because you almost feel like you have a relationship with the writer.

Jennifer Logue:

Yes, there's intention. You know, art's about intention. It's not just throwing stuff at the wall and hoping something sticks. And hey, even if you're throwing stuff at the wall as an artist, there's still some meaning there; you might be getting some emotion out that you didn't expect. But from what you're describing with that space, it just seems very vacuous and superficial. AI requires us to attach meaning to it, if any.

Graham Morehead:

Yeah.

Jennifer Logue:

There's no intentionality.

Graham Morehead:

Yeah, and for me it's not as interesting, because when I do absorb some kind of art, literature, photos, paintings, whatever, if there is art there, I feel something, because I believe that a conscious human made choices, and somehow I'm learning from their choices, and it's somehow a marker of our time. Think about The Lord of the Rings, a great series of books. If it were to come out today as a brand-new series that had never been written until now, people would think, oh, that's nice, it's like a Game of Thrones repeat, good job. The reason it had such an impact is because it was kind of beginning a genre. Now, it wasn't the first.

Graham Morehead:

He stole a lot of stuff from Wagner's Ring cycle, the four operas about, you know, getting the gold out of the Rhine, and one ring of power and all that. But it was the first big, impactful thing in the English language in that genre. And he was really good at descriptions and world building, maybe some of the best world building that had been done up to then. If it came out now, it would just be a copy, but because it came out then, it was like, wow, this is new, this is different. So, as an artist, part of your job is to say something that hasn't been said.

Jennifer Logue:

Hasn't been said and AI is not going to do that.

Graham Morehead:

AI is only going to say things in between the things that have already been said. It's a mashup.

Jennifer Logue:

It's a mashup.

Graham Morehead:

GPT is a mash-up of everything that's on the internet. It's a mash-up of the whole internet.

Jennifer Logue:

Yeah, so you're not being creative, not really. You're putting things together, you're mish-mashing.

Graham Morehead:

Yeah, mashups are a little creative, but that's old.

Jennifer Logue:

It can be, but it's nothing new, so it won't be as impactful. So what led you to devote your career to AI? Do you want to talk about that?

Graham Morehead:

Yeah, it felt super interesting. That's what it was, that's all it was; it was just super interesting. I remember three or four years after I graduated college, I graduated in 1995, I sat down with an advisor, a friend who had done grad school and was a little bit ahead of me. His name was Mads. He sat me down at a restaurant and said, "Graham, AI is dead, it's over. You shouldn't dedicate yourself to this. Don't. It's dead, it's over." This was probably '98 or '99, something like that. And I remember thinking, I hear you, but it's too interesting. I still want to do this. I don't think I responded to him that day, but I just decided, I still want to do this.

Graham Morehead:

My father was a neurologist, and my mother taught languages, French, English and Spanish, and she studied Arabic too, and my dad was Air Force, so we traveled around. When I was a kid, language and the brain were two things I was just so fascinated with: how does the brain work, how does language work? I always knew those were what interested me. Around '99, a little bit after that conversation, I decided I want to study language and the brain, but I want to study it by making it happen, by doing it in a virtual brain, in AI. That's what AI is. And Richard Feynman used to say, "What I cannot create, I do not understand."

Graham Morehead:

He said that about physics: if I can't generate this kind of particle or this kind of interaction, then I don't understand it. If I can't create something that understands language, then I don't understand language myself. So I'm trying to reverse engineer language and its meaning. Not just, can I ramble off words that in retrospect seem to make sense, but can I understand what sense they make as they come out? How does the brain do this?

Jennifer Logue:

Now, are you doing this kind of work with Parsimony, one of your projects? How do you say it? I'm sorry.

Graham Morehead:

Parsimony. So Pangeon, our startup studio, is an effort to streamline the process of launching AI startups, because no matter what it is, linguistics, imagery, data analysis, behind the scenes 75% of it is the same. We're trying to systematize that process, and one of the startups we're launching is called Parsimony. Think about it as a cortex layer on top of an LLM. A large language model is really good at generating text. It's not good at understanding; it doesn't quite know what it means. I want to give you another example from the news. Air Canada put a chatbot on their website. A customer came and asked, you know, how do I get bereavement discount tickets? It gave them instructions. The customer followed those instructions, and when they got to the airport, the human said, sorry, that's not our policy, no discount for you. But I followed your instructions. And the airline said, we're not responsible for what our chatbot says. He sued them. And what? Turns out you are responsible for what your chatbot says. These chatbots are mindless, remember.

Graham Morehead:

They're very good at coming up with believable and understandable strings of characters, but in a mindless way. So we want to make a layer that can understand, so it's not mindless.

Graham Morehead:

Think again about that Tesla example. They could figure out what happened because they could see the world from the mind of the Tesla, what it was looking at, how it understood the world: okay, that's a discrete car, that's a discrete truck, that's a discrete bridge. So even though it got the wrong answer, at least they could peer inside the mind and see why; there was a why. Well, with Air Canada, I'm sure the executives called the tech people into the room and said, why? Let's look back into the mind of the AI and see what it was thinking. Why did it give this answer? And the engineer probably had to say something like, heck if I know, it's just a bunch of numbers in there. Remember those points in space?

Graham Morehead:

It's just a jumble of points doing their thing. There's no why, there's no true understanding. That jumble of points jumbled around for a while and then words spat out. That's why there is no good why, no good answer. So we're trying to be that good answer.

Jennifer Logue:

Like a black box, almost, for the interaction.

Graham Morehead:

For now it's a black box. We're making something that's not a black box. The same way, when you're driving down the street in your Tesla, you see, okay, there's a discrete car on my left, there's a truck on my right and there's a pedestrian up ahead, we want to do that for words and language, so that as you're moving through a conversation, it will know what things mean and where they can go.

Jennifer Logue:

Okay, so in our earlier conversation we were talking about how AI can't imbue meaning onto things. Now with this, it sounds like it'll be able to. So should we be afraid?

Graham Morehead:

Not afraid. It does not feel, it does not want, it does not need. Consciousness is that which feels, consciousness is that which cares. A computer, no matter what it says, doesn't actually care if you turn it off.

Jennifer Logue:

Well, what about Blake Lemoine?

Graham Morehead:

Oh, the Google guy. Yeah. So first of all, he doesn't understand the mathematics of what he was talking about. If you got a million people with graph paper and pencils, and they did the math instead of the computer, you would get exactly the same output. Now, would Blake Lemoine say that graph paper and pencils are conscious? Maybe he would. I disagree. I don't think pencils and paper are conscious, but they would give exactly the same answer as the computer did in his case. Now, the other way I can look at it is, if I wanted to get awesome press about my company, here's what I think I'd do. I'd pick one of my engineers and tell him, okay, we'll give you a bonus, don't worry about this, but I want you to leak a story telling the world that our AI is conscious, and then we're going to fire you and pretend to have a cover-up. That would be the best marketing ever.

Jennifer Logue:

I mean, man, it must have worked if that was the case, hypothetically speaking. Yeah, so I wanted to touch more on human creativity and AI. Like I know we covered it quite a bit, but before we do that, what's your definition of creativity?

Graham Morehead:

Creativity can be in voice or words or images or theater or something else, you know, what people who do art do. It's not easily categorizable, but you say something, and that something is deeply true to you and it's never been said before. Maybe no one's even thought it before. I love that line from La La Land, about showing us what colors to see, giving us new colors. Are there new colors no one has seen before?

Graham Morehead:

And once it exists, that becomes a new touchstone, a new point from which we can build even further.

Jennifer Logue:

That's a really good point, like expanding your paradigm, your palette, when you see a piece of art that can expand your perspective. So, on that note, I know we talked about this before, but how would you say AI creativity is different from human creativity?

Graham Morehead:

So, AI creativity. First of all, it's not a relationship. One thing that's really amazing about humans is we all have our own take on the world, on existence, on what it means to be human, on what the universe is. Everyone has their own take, and you perceive the world in a totally different way. If I were to actually be inside your brain somehow, magically, perhaps I'd look out at the world and think, wait, the sky is red and the trees are blue? What's going on? Why is your system different? Because everybody's experience is totally different.

Graham Morehead:

When you have an interaction, buying coffee in the morning, things feel different to you than they would to someone else. When tragedy happens in your family, things feel different to you than they would to someone else. Everyone has their own unique view of life and the world, so they're able to say something that is expressive of that uniqueness. That's what the best of creativity should be. What it often is, is something different: people trying to guess what other people would like to pay for, as a TV show or whatever, and they just do what other people are willing to pay for.

Graham Morehead:

And 80, 90% of the content we get is like that. I might put all the Marvel movies in that, and sadly Star Wars became kind of in that vein, with a few exceptions. But appealing to the broad base is not really creativity, not at its best. Real creativity should challenge you, should maybe make you feel things and excite that part of your brain that is only turned on when you see something you've never seen before, when you hear a word or a concept you've never thought about, or see a color you've never seen. That part of your brain should get turned on, and there's a reason we all like that. We like hearing a story we've never been told before, in a sense; maybe it's a different take on something.

Jennifer Logue:

Yeah, even being exposed to different colors. I'm learning jazz piano right now; I've always been a vocalist, and I've always played piano, but I never really dug into jazz chords and stuff. As I'm playing with these chords, every time I learn a new one I'm like, oh my God, these colors, it's indescribable. And I listen to the old records and there's so much richness there. So, yeah, that's human creativity, the way we put those interesting things together. But AI creativity, as you were saying earlier, is more flat.

Graham Morehead:

Yeah. There's no drive behind it, there's no consciousness behind it. Now, just by chance, it will sometimes end up generating things that make us feel things, but that says more about us.

Graham Morehead:

Mashups of what other humans have done will make humans feel things, great, but that's not what art should be for. What is art for? Art is about someone out there in the world deciding to dedicate themselves to it. Now, we already know everyone has a unique take on what it means to be alive. Some of those people, a small set, are going to make that their profession. They're going to be artists, or they're going to spend some of their time on art, and those people who choose this for themselves are going to say, you know what, this unique take that I have on the world is worth telling others about, because I've seen something they haven't and I want them to see it.

Jennifer Logue:

Yes. Now, something you said earlier made me think about this. In the algorithm-driven world we live in now, with social media, so many artists, you know, I get questions written into the podcast, like, do I keep doing this? No one likes my posts on social media. And they're good artists. It's like the algorithms are almost deciding for us.

Graham Morehead:

Yeah, and that's messed up. My biggest advice to everyone, and I'm trying to follow this myself, is figure out what you want to say, feel it strongly, and just say it and do it. It's not going to work most of the time, and that's okay; just keep at it. I like one thing I remember Joe Rogan saying years ago. He was starting out his podcast, and it was like three, three and a half hours long, and no one was watching it back then, and all of his advisors and friends would tell him, stop this, you've got to make it shorter, you've got to edit it down, no one's going to watch it. And his response was, well fine, let no one watch it.

Jennifer Logue:

I love it.

Graham Morehead:

And because of that, eventually, the people who were going to watch it found him. I don't know all the parts of this whole puzzle, but I do feel strongly, especially in a world where good content can be generated at will, that what I'm going to seek out, and what maybe other humans are going to seek out, are those interesting voices that actually say something they feel.

Jennifer Logue:

Yes, and you don't know when it's going to hit. You know, no one can know. It may never hit.

Graham Morehead:

Even if you reach no one, you still want to say that thing you want to say, right?

Jennifer Logue:

Yes, because we have that drive to. We have to. The result is none of our business; we can't be focused on results as artists, as creators of any kind. I mean, unless you're running a startup, then there's business involved, and that's another conversation. I wanted to talk about your book a little bit. We have a few minutes now to talk about it.

Graham Morehead:

The book touches on a lot of the concepts we've discussed today. How does AI work internally? Why is it so different from your brain? Some of the examples I gave today are in the book, but I discuss them in a lot more detail. It's not technical, though. There's no code, just stories and thoughts and some pictures, so it's really accessible to everybody. But it's all about the shape of your thoughts, and shape, the mathematical shape, is what makes human thought different from AI thought.

Jennifer Logue:

Cool. Well, Graham, thank you so much for appearing on Creative Space. I learned so much from having you on the show, and I'm really looking forward to checking out your book. For more on Graham Morehead, visit grahammorehead.com. And thank you so much for tuning in and growing in creativity with us. I'd love to know what you thought of today's episode, what you found most interesting, what you found most helpful. You can reach out to me on social media at Jennifer Logue, or leave a review for Creative Space on Apple Podcasts so more people can discover it. I appreciate you so much for being here. My name is Jennifer Logue, and thanks for listening to this episode of Creative Space. Until next time.