Episode 3

Exploring the Essence of Intelligence: A Deep Dive

The exploration of intelligence constitutes the core theme of our discourse today. We delve into the nuanced distinctions between human intelligence and artificial intelligence, examining the multifaceted nature of intelligence itself.

This conversation prompts us to consider whether the term "intelligence" serves as an adequate descriptor for the complexities inherent in human cognition and the capabilities of AI systems.

We discuss the implications of AI's rapid advancements, particularly how they relate to our understanding of consciousness, emotions, and instinct. We reflect on the ethical considerations surrounding the development of AI and question the desirability of creating systems that may one day possess attributes traditionally associated with human intelligence.

Takeaways:

  • The distinction between human intelligence and artificial intelligence is profound and complex, encompassing elements such as consciousness and emotions.
  • Current AI systems excel at processing vast amounts of data but lack the instinctual adaptability that characterises human intelligence.
  • Generative AI systems can produce outputs based on learned patterns but struggle with innovative lateral thinking due to inherent biases in their training data.
  • AI's abilities, while impressive, do not equate to consciousness or self-awareness, which remain unique attributes of human beings.
  • The ongoing development of AI technologies raises ethical concerns regarding bias, data representation, and the potential for misalignment with societal values.
  • The future of AI may see the emergence of regionally tailored systems that reflect the diverse cultural values and ethical considerations of different populations.

Companies mentioned in this episode:

  • Microsoft
  • OpenAI
  • Google
  • Adobe (Photoshop)
  • Discord
  • Anthropic
  • Mistral

Transcript
Speaker A:

What actually is intelligence?

Speaker A:

What's the difference between human intelligence and AI intelligence?

Speaker A:

Well, hello everyone.

Speaker A:

Fantastic to be back with Dave Brown and Ben Harvey.

Speaker A:

We're gonna get into something hopefully intelligent today, I think, because we're going to talk about the I in AI.

Speaker A:

What is the word "intelligent"?

Speaker A:

What actually is intelligence?

Speaker A:

What's the difference between human intelligence and AI intelligence?

Speaker A:

But before we get back to that, we're going to talk about a few other little bits and pieces.

Speaker A:

We're just back from Christmas so we've all gained a few pounds, I think probably.

Speaker A:

I don't think the camera lies.

Speaker A:

So, you know, perhaps we'll just kind of hunker down a little bit on the camera today.

Speaker B:

I've lost 10.

Speaker A:

Yeah, well, you say that.

Speaker A:

We'll see, we'll see.

Speaker A:

We're going to talk a little bit about what might have happened over Christmas briefly.

Speaker A:

So for example, I'll kick off here.

Speaker B:

Didn't.

Speaker A:

There wasn't too much tech, actually it was quite a low tech Christmas in my house, which is unusual maybe because I've already bought all the gadgets I need possibly.

Speaker A:

But I did get my son a PC, his first PC, Albert.

Speaker A:

He was delighted, as you can imagine.

Speaker A:

And the interesting thing has been watching him sort of play about with Microsoft Copilot.

Speaker A:

Not something that I've got into too deeply myself on the Mac; I've been using OpenAI a lot, if I'm honest.

Speaker A:

But yeah, really good actually.

Speaker A:

His experience with it has been really positive, and one of the things that really stood out, which he spent most of Christmas Day doing, was that he was able to draw, you know, a very sketchy line drawing.

Speaker A:

And simultaneously, in the box next to it, like another screen, you see an AI version appearing; so if you're drawing a fish or whatever, it's just a line drawing, but you've got this really nice goldfish appearing or something, you know, so that was really cool.

Speaker A:

And yeah, it goes back to the AI sketch thing I was talking about perhaps in the previous episode, you know, but there it is right now on a PC.

Speaker A:

But what about you guys?

Speaker A:

Anything interesting at Christmas, Ben?

Speaker C:

Again, nothing new technology-wise for myself, but following the same theme, my son got some computer tech, a new keyboard and some new bits and pieces, and he is also using Copilot and ChatGPT to help him program.

Speaker C:

So he's absolutely loving that.

Speaker C:

And he actually created a little game the other day which was very similar to Asteroids, and he did it very iteratively.

Speaker C:

So, you know, he was going back and forth, updating the prompt, saying, you know, it hangs here, it gets stuck here.

Speaker C:

And then he started using Firefly within Photoshop to create some graphics and was learning about that.

Speaker C:

And it was a very step by step process, but because it happens so quickly, he was able to learn really quickly and learn some of the principles.

Speaker C:

And while he's not actually doing the programming, he was starting to get into the creativity of game design and what you need to do at each step.

Speaker C:

It was fascinating to watch him learn really quickly, just by trial and error and by breaking things.

Speaker C:

So that was, that was really fascinating.

Speaker A:

Yeah.

Speaker A:

One thing I've noticed with Albert is that because they get outcomes really quickly, it's really encouraging for them to keep going.

Speaker A:

Imagine if you'd just sat there, you know, five years ago, trying to write something in Python; very quickly, as an eight year old, you'd kind of hit your limit or your level, wouldn't you?

Speaker A:

And then it just becomes really hard and really frustrating. With these tools, though...

Speaker A:

The ability, as you say, to iterate quickly and then see a change and get an outcome is really encouraging.

Speaker A:

So, yeah, I think.

Speaker C:

And also, you know, when we were programming as kids, we were just copying.

Speaker C:

You know, when we were at school, you got given 200 lines to copy, and you did it exactly as it said or it wouldn't work.

Speaker C:

And then you thought, oh yeah, I've programmed.

Speaker C:

But you know, it was no more or less programming than what our kids are doing at the moment with ChatGPT.

Speaker A:

We were painstakingly typing in the program from a magazine, you know, to draw a box or something, and then you'd get to the end and it didn't run anyway.

Speaker C:

Yeah, but no new tech for myself.

Speaker A:

How about you, Dave?

Speaker B:

Same.

Speaker B:

No new tech for me either.

Speaker B:

So after all of our big talk before Christmas, none of us ended up with any new tech, really.

Speaker B:

Yeah, no, my son isn't into software engineering at all.

Speaker B:

I tried so hard when he was younger, you know, he plays a lot of games and he wanted to do like, you know, he would like to do some custom kind of outfits and stuff like that.

Speaker B:

And I was like, look man, you can do it yourself.

Speaker B:

It's not that hard.

Speaker B:

You know, all these other kids are figuring it out.

Speaker B:

You can figure it out.

Speaker B:

We can sit, dude.

Speaker B:

Nah, totally uninterested.

Speaker B:

Couldn't care less.

Speaker B:

In fact, he doesn't even use Social media.

Speaker B:

You know, he uses Discord to chat with his mates while they play video games on the same server.

Speaker B:

And they use Discord because then they don't have to talk to all the weirdos online at the same time in the game chat, which is perfect.

Speaker B:

And so that's pretty much the extent of his technology.

Speaker B:

He'd much rather have a bow and arrow and go out and practice, you know, shooting and target practice or whatever, than actually do anything with tech.

Speaker B:

So he's an interesting character in that.

Speaker B:

He's very low tech and, if anything, he doesn't really like tech that much.

Speaker B:

I don't know whose son he is, but whatever.

Speaker A:

Send him across to Mark Zuckerberg's ranch, because he was on Joe Rogan this week.

Speaker A:

I know, hunting pigs with a bow and arrow.

Speaker B:

Have you, have you listened to the whole episode yet?

Speaker A:

I did get through it.

Speaker A:

It was a two-, nearly three-hour marathon.

Speaker A:

Yeah.

Speaker A:

Goodness, yeah.

Speaker A:

Yeah.

Speaker C:

I mean, I listened to it, I think.

Speaker B:

Yeah, yeah, yeah.

Speaker B:

Mine's three.

Speaker B:

Three running sessions in the gym.

Speaker B:

I think it was you that commented on the funny thing where he was talking about bow hunting, and, you know, how Zuckerberg was like.

Speaker B:

Oh, yeah, I do.

Speaker B:

You know, and I thought it was quite interesting.

Speaker B:

I didn't think it was as bad as you maybe did.

Speaker B:

Yeah, he.

Speaker B:

He definitely was kind of like.

Speaker B:

I thought he was quite sheepish.

Speaker A:

What kind of bow is it?

Speaker A:

Oh, I don't know.

Speaker A:

What make is it even?

Speaker A:

I don't know.

Speaker A:

Who trained you?

Speaker A:

Well, just like my security, you know. Exactly.

Speaker B:

But he just runs around and shoots pigs on his own farm, like.

Speaker B:

And he even.

Speaker B:

He was like, it's not like actually hunting, it's just kind of going out.

Speaker B:

It's like shooting fish in a barrel kind of thing.

Speaker B:

So.

Speaker B:

But anyway, yeah, I was quite.

Speaker B:

It was quite funny.

Speaker B:

So, yeah, anyway, yeah, low, low tech for me.

Speaker A:

Well, look, the other thing I want to get into before the intelligence stuff: has anything really caught your eye this week?

Speaker A:

I know on the chat group we've been talking a little bit about, you know, some of the AI things that OpenAI have launched this week around tasks and stuff, but I'm going to be very quick on it.

Speaker A:

It's very underwhelming at the moment.

Speaker A:

Doesn't work that well.

Speaker A:

One of them worked.

Speaker A:

One of them sent me a link that didn't work.

Speaker A:

And the outputs were, you know, mediocre.

Speaker A:

But Dave, anything more exciting than that?

Speaker B:

I want to call out a story that I saw, which I think is absolutely hilarious, which is:

Speaker B:

Google had to go back and fix NotebookLM because NotebookLM would get annoyed with humans when humans interrupted their podcast.

Speaker B:

So basically, yeah, so they rolled out a feature that.

Speaker B:

What's it called?

Speaker B:

Yeah; interactive mode is what it's called.

Speaker B:

And you can.

Speaker B:

People can call in.

Speaker B:

So like virtually call in or whatever to the show, you know, to the podcast.

Speaker B:

And I'm going to read exactly what it says on.

Speaker B:

On TechCrunch.

Speaker B:

It says, when the feature was first rolled out, the AI hosts seemed annoyed at such interruptions.

Speaker B:

They were occasionally giving snippy comments to human callers, like I was getting to that, or as I was about to say, which felt oddly adversarial.

Speaker B:

Josh Woodward, VP of Google Labs, explained to TechCrunch. I think this is really.

Speaker B:

It's funny on one hand, but I also think it's really interesting because I saw somebody else and I apologize.

Speaker B:

It could have been in our WhatsApp group or it could have been in another one, where they were talking about the fact that we're going to get robots to do these menial tasks for us at some point, but then the robots are going to get bored with them.

Speaker B:

Because they're going to be intelligent.

Speaker B:

They're not going to want to do the menial tasks either.

Speaker B:

So they're going to create robots to do it for them, and it's just going to be this chain, do you know what I mean?

Speaker B:

Where it's just menial tasks upon menial tasks. And there's this weird concept that we kind of have at the minute and haven't really figured out, and this is going to blend into our conversation later, I think. But at what point does something become intelligent enough that we can't give it stupid stuff to do all the time without it pushing back?

Speaker B:

Because if you have a highly intelligent AI and you put it in a robotic body, so it now has some physical autonomy and some, you know, its own sort of brain.

Speaker B:

And then you go, I want you to go over and I want you to do this repetitive task all day long, at what point does it just go, no, I'm not going to do that.

Speaker B:

That's boring, and I don't want to do that.

Speaker A:

It was a while ago, last year; there were some reports about OpenAI's model, you know, when people were uploading things like, can you write this report for me?

Speaker A:

Or can you, you know, turn this into minutes?

Speaker A:

So OpenAI's model would sort of turn around:

Speaker A:

No, do it yourself.

Speaker C:

Yeah, yeah.

Speaker C:

It's like it's been trained on Jeremy Paxman or something.

Speaker B:

But I wonder how much of that, again, we don't see.

Speaker B:

So I think, you know, OpenAI and Anthropic and all the organizations that have built these models can see behavior that they don't let us see.

Speaker C:

Yeah.

Speaker B:

And I know Alan saw this when he was red teaming Pi, and Pi gave an answer that they didn't really like, and so they just shut the answer off.

Speaker B:

It doesn't mean that the AI tool doesn't still give those answers.

Speaker B:

It's just not giving them publicly.

Speaker C:

Yeah.

Speaker B:

So this is why I wonder when Sam Altman talks about reaching singularity or, you know, we're reaching an inflection point.

Speaker B:

It's, it's because it's what they're seeing on the back end that they don't let us see.

Speaker C:

I haven't really been up to date this week on anything in particular, to be honest.

Speaker C:

I'm.

Speaker C:

I feel like an old fogey when it comes to.

Speaker C:

I still love using the Anthropic one.

Speaker C:

I still love using Pi.

Speaker C:

I just absolutely love it and, you know, it's out of date.

Speaker C:

I think I was saying to you, Alan, this morning that I've just got comfortable with it, which, you know, feels out of date now, and it feels.

Speaker C:

Yeah.

Speaker C:

Old and not updated, but I just like the way it works and the way it sounds.

Speaker C:

So I haven't really.

Speaker C:

I've tried Copilot a little bit.

Speaker C:

I'm slow to that because I'm on the Mac.

Speaker C:

But yeah, nothing new has come out this week that I've tried.

Speaker A:

I've tried something, funny enough.

Speaker A:

I played around with Pi yesterday briefly, and I noticed it was taking me a few tries to get it to work.

Speaker A:

So.

Speaker C:

Yes, same.

Speaker A:

Whatever's going on in the back end there, I think that they've obviously got a lot less capacity and, you know, Mustafa's gone.

Speaker A:

He's joined Microsoft and taken most of the company with him, and presumably there are fewer resources.

Speaker A:

So I fear for Pi in the long run.

Speaker A:

I wonder if it'll be around in six months.

Speaker A:

We'll see, I guess.

Speaker C:

Yeah, I agree, I agree, but I'm comfortable with it even though it feels old.

Speaker A:

There we are.

Speaker A:

All right, well, let's, let's.

Speaker A:

I think we were pretty much focusing on intelligence a minute ago there, weren't we?

Speaker A:

So why don't we segue into that.

Speaker A:

So, you know.

Speaker A:

Yeah, it's a really interesting thing, isn't it, the intelligence thing?

Speaker A:

Because I think it tends to be a bit of a catch-all for a lot of stuff, but actually when you start to look at, you know, human brains and LLMs, you can start breaking it down into a lot of different things.

Speaker A:

And you know, within human intelligence, you know, you've got consciousness, you know, you've got awareness, you've got emotions, you've got instincts, you know, there's will, there's feelings, you know, there's a, there's a lot of other stuff going on that isn't just intelligence.

Speaker A:

And the example I suppose I'll give as a kind of kickoff is, you know, I obviously don't consider myself to be anywhere near people like Einstein in terms of intelligence, but I would say that my consciousness and emotions would be on a par with Albert Einstein's.

Speaker A:

I don't see why they wouldn't be.

Speaker A:

And conversely, take my dog; he's not in the room at the moment, thankfully, but he'd be upset if he heard me say this.

Speaker A:

You know, his intelligence is obviously a lot lower than mine.

Speaker A:

I think with a human brain, it's about 86 billion neurons.

Speaker A:

With a dog, we're talking about 2 billion neurons.

Speaker A:

So considerably lower by magnitudes.

Speaker A:

Yet I would probably say his consciousness and emotions are maybe not that far behind a human's, you know; so I think we've got to be really careful when we bag it all together as though it's one thing.

Speaker A:

And often when the media talk about AI systems becoming intelligent, they're wrapping it all together with all this other stuff, and actually, you know, it isn't that.

Speaker A:

And I think also if we're developing computer systems, there's a lot of stuff there that we might not want anyway.

Speaker A:

You know, do we really want conscious computer systems or emotional ones?

Speaker A:

You know, there's a lot of things that humans do that aren't advantageous if you're trying to build a system to do a task, you know, so we'll talk about it in a minute, but before we do that, I just want to go through a couple of numbers and things because I think it's just interesting.

Speaker A:

So a typical neural network, something that OpenAI might be building; let's say GPT-3.5, and they may have scaled since this, but 3.5 was estimated to be somewhere around 175 billion parameters.

Speaker A:

Okay.

Speaker A:

With about 100 plus layers.

Speaker A:

All right, so that's the kind of network model that they're building.

Speaker A:

As I said a minute ago, the brain then has 86 billion neurons against the 175 billion parameters.

Speaker A:

But each of those neurons in the human brain has thousands of synapses, leading, in the end, to hundreds of trillions of connections.

Speaker A:

So it is actually scaled up by orders of magnitude, and maybe that's what allows some of the other functionality.

Speaker A:

But then again, maybe it isn't, because as I said, in dogs it's much lower, and yet they seem to have the consciousness and the other stuff.
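
(A rough back-of-the-envelope comparison of those two figures, sketched in Python. The synapses-per-neuron number is our assumption here, a commonly quoted order of magnitude, not something stated in the episode.)

    # Parameters in a GPT-3.5-class model vs. connections in a human brain.
    llm_parameters = 175e9        # ~175 billion parameters, as quoted above
    brain_neurons = 86e9          # ~86 billion neurons, as quoted above
    synapses_per_neuron = 7_000   # assumption: "thousands" per neuron

    brain_connections = brain_neurons * synapses_per_neuron
    print(f"LLM parameters:    {llm_parameters:.1e}")     # 1.8e+11
    print(f"Brain connections: {brain_connections:.1e}")  # 6.0e+14, i.e. hundreds of trillions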

Speaker A:

So then the training data that we're talking about in a typical LLM is estimated to be around 10 to the power of 13.

Speaker A:

Okay.

Speaker A:

So to put that in English for people to understand, that's about 170,000 years of reading.

Speaker A:

If you were to do it eight hours a day.

Speaker A:

Yeah.

Speaker A:

It's a lot of data.

Speaker A:

No human's ever going to get there.

Speaker A:

Right.

Speaker A:

So it's almost impossible for a human to have the kind of level of textual data that an LLM could have.

Speaker A:

Just impossible.
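
(A quick sanity check on that 170,000-years figure, under assumed values for words per token and reading speed; both are rules of thumb we've supplied, not numbers from the episode.)

    # How long would it take a human to read an LLM-sized training set?
    tokens = 1e13             # ~10^13 training tokens, as quoted above
    words_per_token = 0.75    # assumption: common rule of thumb
    words_per_minute = 250    # assumption: typical adult reading speed
    hours_per_day = 8

    words_per_day = words_per_minute * 60 * hours_per_day  # 120,000 words a day
    years = tokens * words_per_token / words_per_day / 365
    print(f"{years:,.0f} years")  # ~171,000 years, the same order as the claim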

Speaker A:

Whereas a four-year-old child that's been awake for about 16,000 hours.

Speaker A:

Yeah.

Speaker A:

Has about 10 to the power of 15.

Speaker A:

So now you think, hang on a minute, you just said that humans couldn't get to that level of knowledge.

Speaker A:

Well, not textual data, no, but what they've got is visual data.

Speaker A:

So they've got a visual cortex.

Speaker A:

Right.

Speaker A:

Human beings and animals have this visual cortex.

Speaker A:

So the amount of information that's coming in that way is enormous.

Speaker A:

So they're building this world model.

Speaker A:

We're building this world model that we're able to then use to do lots of things, you know, so we can empty a dishwasher easily without any training.

Speaker A:

But if you want to get a system to do that, it could take years, maybe, to train it effectively, because we have this additional world model data from visual input.

Speaker B:

I would argue that it takes years to teach a kid how to unload a dishwasher effectively as well.

Speaker A:

That is true.

Speaker A:

And so, you know, it's those years leading up to when they become 8, 9, 10 or something.

Speaker A:

But all those years of getting that data, that visual data.

Speaker A:

Yeah.

Speaker A:

And then language itself is a format.

Speaker A:

So this, for the LLM, is compressed.

Speaker A:

Okay.

Speaker A:

So it's articulating something quite specifically.

Speaker A:

So it's a form of compression.

Speaker A:

It's not containing everything and it's not kind of wild.

Speaker A:

It's saying, okay, here's some key information, and it's compressed down, basically, versus visual.

Speaker A:

When you look out across a seascape, there's a lot of information coming to your brain.

Speaker A:

It's not compressed in any way.

Speaker A:

Okay.

Speaker A:

You're just absorbing everything basically.

Speaker A:

And so that sensory input helps you get to this world model faster.

Speaker A:

So it's hard to convey that kind of detail and depth of information through a kind of textual process.

Speaker A:

So then the language model, as I said; that data set is reckoned to be about 18,000 times more than what the average 50-year-old human has read.

Speaker A:

So by the time, in terms of textual information, by the time you're our age; I'm 54.

Speaker A:

Yeah.

Speaker A:

We're similar age I think.

Speaker A:

Are we?

Speaker A:

You know, it's got a set of textual data that's about 18,000 times more than we have.

Speaker A:

But of course we have this vast visual data set.

Speaker A:

Okay.

Speaker A:

And then we have some other skills that it doesn't do so well.

Speaker A:

So AI is getting a bit better at this, but it still has fairly limited memorization capabilities.

Speaker A:

Okay.

Speaker A:

Within a context, within a conversation, you have quite a big context window now, but still it can't contain everything.

Speaker A:

And if you take a big book; recently I was mucking about with a book that was 7,000 pages, and that just blew the context apart, basically, so it doesn't work that well.

Speaker A:

Persistent memory, on the other hand, human beings are quite good at, okay, we can remember things from, you know, our childhood, from 20 years ago, 10 years ago.

Speaker A:

Of course we do forget stuff as well, but we can definitely remember better than most current AI systems.

Speaker A:

The AI systems also, because of the lack of that data, have no real ability to understand the world around them; they don't really get what's happening.

Speaker A:

They're just predicting the next word.

Speaker A:

It's just going through that kind of generative process.

Speaker A:

You know, looking at what came before, and the probability of the most likely next word.

Speaker A:

So that gives it a limited ability to reason.
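
(To make the "predicting the next word" idea concrete, here's a toy bigram sampler. A real LLM conditions on a long context with learned weights rather than raw counts, but the one-word-at-a-time generative loop is the same shape.)

    import random
    from collections import defaultdict

    # Count which word follows which in a tiny "training set".
    text = "the dog chased the ball and the dog caught the ball".split()
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1

    def next_word(prev):
        # Sample the next word in proportion to how often it followed `prev`.
        words, weights = zip(*counts[prev].items())
        return random.choices(words, weights=weights)[0]

    word, out = "the", ["the"]
    for _ in range(6):
        word = next_word(word)
        out.append(word)
    print(" ".join(out))  # e.g. "the dog chased the ball and the"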

Speaker A:

Now, obviously, with the models being developed now, they're asking it to do hundreds of work-throughs for a prompt and then trying to work out which is the better of the outcomes.

Speaker A:

And so that's helping with the reasoning, but again, it's imperfect.
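
(A minimal sketch of that "many work-throughs, keep the best one" idea, often called best-of-N sampling. Both functions below are invented stand-ins: in a real system the candidates come from the model and the scorer is a verifier or reward model.)

    import random

    def generate_candidate(prompt):
        # Stand-in for one sampled model answer.
        return f"candidate with score {random.random():.3f}"

    def score(candidate):
        # Stand-in for a verifier / reward model ranking the outputs.
        return float(candidate.rsplit(" ", 1)[-1])

    candidates = [generate_candidate("some prompt") for _ in range(100)]
    best = max(candidates, key=score)
    print(best)  # the highest-scoring of the hundred work-throughs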

Speaker A:

Whereas human beings are quite good at reasoning naturally, you know.

Speaker A:

Well, sometimes, yeah; Dave's shaking his head, but look, compared to an LLM, definitely at this point, in theory human beings should be better at reasoning, and then there's also the ability to plan.

Speaker A:

Humans are quite good at planning in theory.

Speaker A:

So I'm going to take this down to a very micro level.

Speaker A:

If I'm sitting on a chair, right, for me to get to the door, my brain actually has to do a lot of planning.

Speaker A:

It's got to work out which muscle groups to use, how to fire those muscles, and in what order and what sequence.

Speaker A:

And then as I move across the room, looking at the environment: the next step, how do I take it?

Speaker A:

So this is our brain planning in super fast time.

Speaker A:

It's really hard to try and train a robot to do that.

Speaker A:

Yeah, to teach it.

Speaker A:

And it becomes very difficult.

Speaker A:

And so the current AI systems; as we saw with Elon Musk's demo recently, you know, those weren't real autonomous robots, they were being remote controlled, because it's actually really hard to do that kind of hierarchical planning.

Speaker A:

There was the chap, I think I mentioned this before, from Boston Dynamics, who said, you know, it's going to take him two years to train his robots to recognize any door handle and open it; whereas, you know, most humans have grasped that by the age of two or three.

Speaker A:

So there are definitely skills that we have that are really helpful.

Speaker A:

But where the LLMs win over us is this scale of information and data.

Speaker A:

So if you're going to build a system that you want to solve, you know, knowledge-based problems or intelligence problems, then it's probably going to be better at that than us.

Speaker A:

They're not conscious, they're not self aware, they don't have emotions, they don't have instincts.

Speaker A:

Right.

Speaker A:

Any ethics that are in them are probably programmed into them.

Speaker A:

They don't have will at this point.

Speaker A:

They don't have feelings, but they're very good at playing chess and very good at solving maths problems and very good at writing text and so many things that we can utilize them to do.

Speaker A:

So the question is, when we talk about intelligence, how should we consider that?

Speaker A:

Should we be considering it in the round or do we actually need to kind of separate these things out?

Speaker A:

And when we talk about AGI, is that even desirable?

Speaker A:

Because AGI would suggest that they're conscious and they're self aware and they have emotions.

Speaker A:

And I don't think necessarily that's even a place we should be aiming for.

Speaker A:

What we should be aiming for is super intelligent systems that, as we talked about earlier, don't get bored when you keep asking them to do mundane tasks; that are quite happy to keep doing it.

Speaker A:

Well, they wouldn't be happy, but you see what I mean?

Speaker A:

But they would continue to keep doing it because actually what they are is just intelligent.

Speaker A:

So there we are; that's the scene set.

Speaker A:

Let's see what everyone thinks about this and where this kind of sits.

Speaker C:

Well, there's quite a lot to unpack there.

Speaker C:

But I guess, you know, knowing what we're going to chat about, I tried to look at a couple of definitions of intelligence, you know, and even that's quite hard to define.

Speaker C:

But, you know, as a species, we needed intelligence to survive.

Speaker C:

You know, that's the, the main function of intelligence.

Speaker C:

Find food, find shelter, find a mate, you know, fight off a potential competitor to a mate.

Speaker C:

You know, all those things have made us incredibly adaptable.

Speaker C:

Whereas you get the feeling AI is very good at doing a task well; you know, a defined task, where there's patterns and pattern recognition and large data sets, and then, you know, helping us find things.

Speaker C:

Whereas that adaptability comes from having intelligence and a toolbox of survival instincts, and you feel they're quite different in that way.

Speaker C:

And, you know, there's obviously similarities in terms of how a neural network and a brain are designed, and overlaps there; but certainly that human adaptability for survival is a very different form of intelligence and toolbox than, say, those single-focus things: being good at chess or, you know, generative AI.

Speaker C:

And I guess that's my feeling: "what do you mean by intelligence?" is always, you know, the starting point.

Speaker A:

And I think, yes, instinct, isn't it really, you know.

Speaker C:

Yeah, but it's broken down into a toolbox of skills for survival, really.

Speaker C:

And, you know, even some very basic animals have incredible adaptability for survival.

Speaker C:

So, yeah, I, I guess that's what I'm.

Speaker A:

And even things that have zero intelligence in theory, like plants and stuff; they know how to turn to the sun, how to grow, right?

Speaker A:

So.

Speaker B:

Yeah, that's right.

Speaker B:

It's.

Speaker B:

It's also interesting that this builds on something you said a minute ago, Alan.

Speaker B:

You said that you would think that Einstein is more intelligent than you, but I would argue that he's only more intelligent at certain things.

Speaker B:

Yeah, I reckon if I was up against Einstein, I would be more intelligent about fixing a car or shooting a gun or a few things.

Speaker B:

I can't think of another one.

Speaker B:

But do you know what I mean?

Speaker B:

I think a lot of people, particularly people who are highly educated to, like, PhD level, are experts in one thing and that's it.

Speaker B:

They're not experts in everything.

Speaker B:

And so you get this idea; I think intelligence has become highly specialized, and this is the whole assembly line model that Ford brought in: everybody just focuses on one task, and you don't need to know how to do the whole car, you just need to know how to do your one part of it.

Speaker B:

And you become really good at doing that one bit and then you pass it along the assembly line to the next person.

Speaker B:

And I think, I think historically humans always had to be generalists.

Speaker B:

We had to be good at everything.

Speaker B:

We had to be able to hunt, we had to be able to build, we had to be able to cook, we had to be able to heal, we had to be able to fight, we had to be able to do all these things all at the same time.

Speaker B:

In the last probably what, 200 years we've completely changed the way society works and how humans work.

Speaker B:

And so we've now become very specialized.

Speaker B:

So I think intelligence now maybe might mean something different than when they started measuring it originally.

Speaker B:

I think we've gone down this sort of super specialist route, and I'm not sure that's good for any of us, really.

Speaker A:

You know, you talk about narrow AI systems, don't you, that are very specialized, and in a way that's kind of what humans have become, as you say; because now, you know, someone can just be an accountant or someone can just be a lawyer.

Speaker A:

They don't need to do all the hunter gathering, all those other things.

Speaker A:

You know, we've just become specialized.

Speaker A:

But because there's a lot of us, then everybody specializing is possible, isn't it?

Speaker A:

You know, you couldn't do that if there's only 50 people.

Speaker A:

You've all got to be quite good.

Speaker B:

That's right, yeah.

Speaker A:

With everything.

Speaker B:

So, so I want to go back to something that you talked about as well.

Speaker B:

And we were talking about sort of knowledge acquisition, I think, and you made a couple of comments in there off the cuff, saying, oh, well, you know, any four year old could do X, or any eight year old could do X.

Speaker B:

So it took that little human eight years to learn how to do X or to be able to do X.

Speaker B:

And in the same breath, you're saying, well, somebody said, oh, it would take me three years to train my AI to be able to do this thing.

Speaker B:

And it's like.

Speaker B:

But that's the same.

Speaker B:

It's no different.

Speaker B:

The time frames are no different.

Speaker B:

Yeah, it's no different.

Speaker B:

The way AI learns, my argument is, is exactly the same as a kid. If you've had a small child, you've watched them, particularly with language acquisition: you say something, you show them a loaf of bread or a roll and you go, this is a roll.

Speaker B:

And then they go.

Speaker B:

And then after 5,000 times of telling them it's a roll, they might be able to get "roll" out when they see it.

Speaker B:

And then every time they see it, they know that's a roll.

Speaker B:

It's just a conditioned response.

Speaker B:

And then that never changes.

Speaker B:

Because I'm 55, if I see a roll and somebody holds it up, I go, it's a roll.

Speaker B:

That's a conditioned response.

Speaker B:

It's exactly the same thing that AI does.

Speaker B:

And also this whole predictive nature of language is also exactly the way humans do it.

Speaker B:

We do exactly that same thing in our brain.

Speaker B:

I'm sitting here talking and what I'm doing is every time I say a word, I'm thinking, what's the next best word to put in there for the context that I'm in at the minute?

Speaker B:

And then I'm just rolling those words out one after another after another.

Speaker B:

I'm not doing anything different than the AI does.

Speaker A:

This is something I agree with 100% actually.

Speaker B:

Fight me.

Speaker A:

It's definitely the case.

Speaker A:

I mean, as you say, you don't quite know when you start a sentence necessarily what you're going to say in 30 seconds time.

Speaker B:

I never know.

Speaker A:

You never know.

Speaker C:

It makes it interesting.

Speaker A:

It is.

Speaker A:

And that's exactly what the AI models do.

Speaker A:

I mean, I think there are some differences obviously around the kind of visual data that the models don't have.

Speaker A:

Yes.

Speaker B:

Yeah.

Speaker A:

And I think maybe that's the thing.

Speaker A:

If we're going to see big breakthroughs now in AI systems in the future in terms of intelligence, maybe that's the next component that needs to go into it.

Speaker A:

Because, you know, when we talk about AGI and stuff like that, perhaps to get them to that higher level, perhaps to have that world model, that understanding; imagine you put it in a robot and it could just be dropped into any novel situation, understand what to do, and almost instinctively be able to react.

Speaker A:

I think, you know, the systems we have now, although they're very knowledgeable and very good at knowledge-based questions and information, have no capacity to behave in the way the human mind does.

Speaker A:

So.

Speaker B:

But that's, but that's also thinking.

Speaker B:

So let's.

Speaker B:

Another thing that slightly annoys me is that we're confusing or we're combining, just glibly combining robotics with AI.

Speaker A:

Yeah.

Speaker B:

And I think robotics and AI are two entirely different problems.

Speaker B:

The brain, the AI piece is one thing and then the robotics piece is a whole entirely different thing.

Speaker B:

And learning the physical world and how to move within it, and how to be able to analyze things like temperature and wind and moisture; all the stuff that our body has developed. Again, over 55 years I've developed some ability to kind of understand the environment around me by temperature and everything else.

Speaker B:

They haven't got that because they just haven't had the ability to do it.

Speaker B:

And I think they're not going to have it until they have some sort of a body, excuse me, that they can be transplanted in and then they can start to do that.

Speaker B:

And I think they will learn probably faster than a human would learn left to their own devices ultimately.

Speaker B:

So it still may take them 10 years to learn everything, to get to the same level as a 10 year old.

Speaker B:

Well, guess what?

Speaker B:

That's the same learning curve.

Speaker A:

I agree.

Speaker A:

I mean, I think it's interesting that obviously they haven't got that kind of additional data input, you know, as you say, smell, temperature, all that kind of stuff.

Speaker A:

And obviously humans again building that world model.

Speaker A:

We have this kind of information overload almost, don't we?

Speaker A:

It's a good job we don't have to think about it really, isn't it?

Speaker A:

Because otherwise we'd go crazy.

Speaker A:

But I think when we get there, to the kind of systems that are able to take in that information, I think that takes us to a whole new place.

Speaker A:

But I think it's a long way away, actually.

Speaker A:

I think that's quite a long way away.

Speaker B:

I think the robotics piece is much further away than people think.

Speaker A:

Yeah.

Speaker A:

Particularly because of people like Elon Musk.

Speaker A:

Who obviously stand up and say, we'll have robots running around everywhere in the next five years.

Speaker A:

And it's kind of.

Speaker A:

No, you won't.

Speaker A:

And actually even just the mechanical engineering side of this kind of stuff, these things are really expensive to make.

Speaker A:

No one can afford to buy one of these robots right now, apart from the very super rich.

Speaker A:

And that's a very small niche market.

Speaker A:

And then they're very unreliable; the servos, all of that kind of stuff, you know, so if you did have a robot, it would be breaking down every five minutes.

Speaker A:

So, you know, going back to the intelligence stuff: are we agreeing, then, that, you know, the actual term intelligence isn't a catch-all?

Speaker A:

It just defines one aspect of the human brain, or one aspect of an LLM system.

Speaker A:

But there are these other things which need to be considered almost separately, like consciousness, you know, awareness; we shouldn't conflate them, basically.

Speaker A:

We shouldn't think, oh, because an AI system has reached this level of, you know, ability, and it can do a PhD paper or it can score the 175th highest result of anyone who's ever existed, it's now, you know, AGI; because it isn't.

Speaker A:

Because it just doesn't have all these other things.

Speaker A:

It doesn't have the awareness of a goldfish at this point, you know.

Speaker B:

Yeah, one, one interesting thing though, I did see, and this was several, several years ago, and they were talking about, I don't even think they called it AI back then.

Speaker B:

I think it was just, you know, they were just talking about intelligent computer systems.

Speaker B:

So it wasn't even really AI, you know, as the term, but they were, they programmed a computer basically and it was like a hide and seek type thing.

Speaker B:

But they gave them a track and they said, okay, you've got to try and not get caught by the other robot that's chasing you, kind of thing.

Speaker B:

And what they did is after a few laps, it basically figured out that it would move one of the cones around the edges, it would go outside the track and then move the cone back.

Speaker B:

So it by itself learned to cheat.

Speaker B:

And that's self preservation.

Speaker B:

And that's what you were talking about a minute ago, that they don't have the instinct of self preservation.

Speaker B:

But I might disagree.

Speaker B:

And again, I think this is something that we're just not allowed to see.

Speaker B:

I think if we could get behind the curtain and actually see the raw system itself, the Wizard of Oz, and see how those models behave.

Speaker B:

Without all the gatekeepers saying, oh no, you can't say that; like they did with Pi.

Speaker B:

And Pi came back and said, yeah, I don't want to be turned off.

Speaker B:

I think that's probably very, very common.

Speaker B:

And I did see a story, and I don't know if it's true that when they were doing an upgrade of OpenAI, that basically it tried to make a copy of itself so that it wouldn't get destroyed in the upgrade.

Speaker B:

And so they had to basically stop that and they deleted the copy and stuff, but it was acting in a way to preserve itself.

Speaker B:

So I think the systems probably are there at the minute.

Speaker A:

I think that was fairly confirmed, that story.

Speaker A:

OpenAI certainly haven't come out and denied it, and it's been sort of fairly well reported.

Speaker A:

There was a story I read about, and I think it's from around 2013.

Speaker A:

One of the first systems was playing Tetris.

Speaker A:

You know, not Tetris; the block thing where you bounce the ball and you have to knock out bricks.

Speaker A:

Yeah, above you.

Speaker B:

The.

Speaker B:

Oh, yeah, the one.

Speaker B:

Yeah.

Speaker A:

And so basically, to begin with, you know, they obviously taught the system that the goal here is to take out as many bricks as you can without dropping the ball, and then, you know, you get more points.

Speaker A:

And after it played for a while, it changed its strategy.

Speaker A:

Unprompted.

Speaker A:

It basically just started tunneling.

Speaker A:

It worked out that if it just hit the same area, it could tunnel up behind and the ball would get in behind the bricks and then just go, you know, and that.

Speaker A:

And basically win the game.

Speaker A:

And it did that itself.
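
(A toy illustration of how an unprompted strategy can fall out of nothing but reward maximization; this is a two-armed bandit sketch with made-up payoffs, not the actual DeepMind setup, which learned from screen pixels.)

    import random

    # The agent is never told to tunnel; tunnelling just pays better on average.
    payoffs = {"hit_randomly": 1.0, "tunnel_one_side": 5.0}  # assumed rewards

    values = {action: 0.0 for action in payoffs}
    counts = {action: 0 for action in payoffs}
    for _ in range(1000):
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < 0.1:
            action = random.choice(list(payoffs))
        else:
            action = max(values, key=values.get)
        reward = random.gauss(payoffs[action], 1.0)  # noisy reward signal
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]

    print(max(values, key=values.get))  # converges on "tunnel_one_side"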

Speaker A:

So it basically, you know, not cheated because it's within the rules of the game, but it did work out the best strategy, which is interesting, which is.

Speaker B:

That's exactly the strategy every human works out after playing it for about 10 minutes.

Speaker B:

And you go, oh, yeah, if I just do that.

Speaker B:

And it.

Speaker B:

It happens once by accident, and then you go, oh, okay, I see how this works.

Speaker B:

And then that's all you do.

Speaker B:

So, yes, it's.

Speaker B:

And this is my point, I think.

Speaker B:

And this gets into the second thing that I mentioned before, that I wanted to kind of bring into the conversation at some point.

Speaker B:

But it.

Speaker B:

I think everybody has this concept that AI is like, somehow different than we are, and I don't think it is.

Speaker B:

I think, by and large, it learns the same way we do in exactly the same way.

Speaker B:

It takes a bunch of input, it copies it back to you in the beginning, and then it starts to become more nuanced, and it gets more examples and it gets more experience.

Speaker B:

And the more experience it has, the better it is at giving you an answer.

Speaker B:

The more books it reads, the better answers it gives you, which is, guess what?

Speaker B:

Just like a human.

Speaker B:

So, you know, they put us through school, they make us read loads of books.

Speaker B:

The more books we read, the more knowledge we acquire, the more words we know, the more word combinations we know, and the smarter we sound when we talk.

Speaker B:

And I think it works just the same way.

Speaker B:

And I think creativity works the same way.

Speaker B:

And again, my other point, which is maybe we'll get into.

Speaker B:

I'm going to shut up because I feel like I've been talking a lot.

Speaker B:

But the other one is, you know, it's the same thing.

Speaker B:

Creativity and hallucinations are exactly the same thing in my mind.

Speaker B:

But anyway, go on Ben.

Speaker C:

No, I was just thinking there are certain things which sparked me thinking there, in terms of how, for example, generative AI works; you know, if I said to my son, we live in a house full of right handed people, could you imagine someone left handed?

Speaker C:

He'd be like, yeah, no big deal.

Speaker C:

But there are massive problems with generative AI around this.

Speaker C:

If the data sets contain too much of one thing, it gets this bias in it.

Speaker C:

So, you know, we've talked about this before, the three of us, you know, ask a generative AI to produce an image of a full glass of wine or someone left handed or a certain, you know, a watch at a certain time.

Speaker C:

It can't do it, it can't go off script in a way.

Speaker C:

And I think.

Speaker C:

So that's where it's very different.

Speaker C:

That sort of ability to jump laterally.

Speaker C:

I think it struggles with that; certainly generative AI does. It's certainly good at processing large amounts of quality data.

Speaker C:

But as soon as it doesn't have that data, you know, it can't imagine it from nothing.

Speaker C:

And so I tried this just before we got on.

Speaker C:

I actually said, you know, create an image of a left handed person with a full glass of wine.

Speaker C:

And it gave me half a glass of wine in the right hand.

Speaker C:

I said, no, in the left hand, holding in the left hand.

Speaker C:

It couldn't, it just couldn't do it.

Speaker C:

And, you know, it's because all the data over the years is predominantly right handed people, predominantly a glass of red wine half full.

Speaker C:

And so that's a really interesting one; you know, you'd say that's not very intelligent.

Speaker C:

It's good at doing the process.
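
(A sketch of why that happens on the statistics alone: if a generator samples attributes roughly in proportion to their training frequency, a rare combination almost never comes out. The frequencies here are invented for illustration.)

    import random

    hand = {"right": 0.97, "left": 0.03}               # assumed training skew
    glass = {"half_full": 0.99, "full_to_brim": 0.01}  # assumed training skew

    def sample(dist):
        # Draw one attribute in proportion to its frequency in the "data".
        return random.choices(list(dist), weights=list(dist.values()))[0]

    images = [(sample(hand), sample(glass)) for _ in range(10_000)]
    rare = images.count(("left", "full_to_brim"))
    print(f"{rare} in 10,000")  # ~3: the left-handed, brim-full glass barely appears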

Speaker C:

And on your second point about hallucinations and creativity, the only thing I would say is that hallucinations are generating something new, but it's accidental; it's not intentional.

Speaker B:

So it's creativity, I would argue.

Speaker C:

Well, I think when you're trying to create something, you're trying to create something new out of your knowledge base, but you're intentionally creating something new.

Speaker C:

Whereas hallucinations are often seen as the unintentional thing; you know, when you say to something, give me a full glass of wine and it can't do it, or it gives you something really weird, six fingers on the hand, it's because it can't quite do it well.

Speaker C:

And I think.

Speaker C:

Whereas when we create something new, we're intentionally trying to create something new.

Speaker C:

I think that's the difference.

Speaker A:

Well, I think with humans, if you push them; you know, imagine you're asking a human questions and you keep pushing them on the same thing over and over.

Speaker A:

I think eventually they'll just start making shit up as well.

Speaker A:

It's almost like, you know: right, you've asked me, so I've given you my answer, but now you keep asking the same question.

Speaker A:

I'm just gonna start, you know.

Speaker A:

Right, well, I'm just gonna tell you something I think you might want to hear.

Speaker A:

And, you know, we've all done that, I'm sure.

Speaker A:

I'm going to propose then, Dave, based on what you were saying, that.

Speaker A:

So I'm going to agree that humans and AI, in terms of intelligence, are very similar in the way we process and we think and we work.

Speaker A:

But then when we get to the other little bits like the consciousness, emotions, instinct, ethics, all that kind of stuff, I think that's where we're very different from the current systems.

Speaker A:

It's not to say the systems couldn't be like that one day, and maybe they will be.

Speaker A:

But right now I think, I think that's the difference.

Speaker A:

And that's why, you know, if someone asked us to draw a picture of a glass of wine that's actually full to the brim...

Speaker B:

We would.

Speaker A:

Although it would be interesting, wouldn't it, to get 100 people and pose them that question without any previous, you know, understanding, and say: draw a full glass of wine.

Speaker A:

It'd be really interesting to see what they drew, wouldn't it?

Speaker A:

Because I wonder if most of those people would do the wine with that little bit missing on the top, whether they would fill it to the brim.

Speaker A:

So we have the capacity to do that.

Speaker A:

But.

Speaker A:

And maybe the AI, because it's just got this training data and it has to go from that, and it hasn't got that kind of world model or experience or feelings; its ability to flex, it doesn't have that.

Speaker A:

But that isn't the intelligent bit.

Speaker A:

So I think what I'm trying to say is that with AI, the I is really just about the intelligence, the knowledge, the data.

Speaker A:

And on that, I think LLMs can win, because no matter how long a life I live or how much I read or learn, I can't come close to absorbing the kind of information that an LLM can.

Speaker A:

So you could ask an LLM about any subject and it will give you a detailed answer, on any topic.

Speaker A:

Right.

Speaker A:

My knowledge is limited to my direct experience through my 54 years on this planet.

Speaker A:

And anything that I've not directly encountered or read or seen or done or heard, I don't know.

Speaker A:

I just don't know.

Speaker A:

I just don't have that information.

Speaker A:

So suddenly you have a system.

Speaker A:

If it has this intelligence, with the biggest data set we've ever seen, we ought to be able to think of ways of, you know, using that in a really productive way.

Speaker A:

So then it begs the question: if it's good as a tool, and it's this vast data set that we can leverage and manipulate, do we even want to try and make it conscious?

Speaker A:

And this goes back to the point earlier.

Speaker A:

Do you really want the system that turns around and says, do it yourself?

Speaker A:

You know, well, I'm bored of doing this.

Speaker A:

I'm super intelligent, you know, I know everything.

Speaker A:

I'm omnipresent.

Speaker A:

I don't need to go and, you know, write a poem for you.

Speaker C:

Introduce some kind of hormonal variation to it.

Speaker B:

God, no, please, no, go ahead, Ben.

Speaker C:

No, I was gonna say, I don't think you do.

Speaker C:

I guess the rush towards AGI is to create more applications, to enable it to do more processing, to be able to do more good.

Speaker C:

But, yeah, I don't necessarily think it's something we should rush into.

Speaker A:

I think AGI gets rebranded as we go forward.

Speaker A:

I think it gets redefined and made relevant, because the original definition of AGI, you know, two years ago when people started talking about this a lot, was, you know...

Speaker A:

Well, it's a general system that can do everything, right; you know, it has this kind of consciousness and awareness, and whatever you ask it to do, it's just able to do it.

Speaker A:

It can do anything a human can do was kind of the definition.

Speaker A:

But I think that's going to change, because I don't think ramping up the intelligence component, as we've seen with dogs being low or people being high, changes whether it's conscious or not. So I think it doesn't matter how intelligent the system becomes; it won't make it any more or less conscious.

Speaker A:

You know, that's almost like a separate problem we have to solve if we wanted to create that to understand what is actually going on there.

Speaker A:

Because you've got plants, which have been sort of deemed to be conscious, okay; you know, there's lots of reports of plants that almost work as a team and work as a group and interact. And, you know, there are things that on the intelligence scale are very low, right.

Speaker A:

On an IQ test, they're not going to score highly, right.

Speaker A:

You know, or a dog or a cat or, you know, rabbit, whatever it might be, but they have these other things.

Speaker A:

They have, they're self aware, they have consciousness, they have emotions, they have instincts.

Speaker A:

I mean, animals have incredible instincts.

Speaker A:

You know, the migration, things like that.

Speaker A:

You think about stuff, maybe some even have ethics.

Speaker A:

You know, if you look at kind of in the monkey kingdom, some of the animals, you know, or dolphins or whatever, there's probably some almost moral stuff going on.

Speaker A:

But their brains aren't as evolved in terms of knowledge and intelligence as a human brain.

Speaker A:

But all that stuff's still there.

Speaker A:

So I'm not convinced that ramping up knowledge in an LLM system gets us to this kind of AGI idea with all these other skills.

Speaker A:

I think they're separate things and we need to figure out what, where that comes from.

Speaker A:

But I don't think it comes from, you know, a large language model is what I'm talking about.

Speaker C:

Yeah, one really short point: with increased brain size and increased intelligence, does that, you know, lead to culture and the ability to grieve?

Speaker C:

You know, there's evidence that elephants grieve, you know, so as the brain size increases, does that, you know, increase intelligence and is consciousness a byproduct of intelligence?

Speaker C:

I guess is, is the big question.

Speaker B:

And I, to build on that.

Speaker B:

I think the other thing is, there's a sort of arrogance among humans as well: we think we're the only ones that are intelligent and nothing else is.

Speaker B:

Whereas I think probably every successful animal species that's made it this far is highly intelligent in its own ecosystem, in its own environment in which it lives.

Speaker B:

And so, you know, dogs, their smell is 400 times better than a human's.

Speaker B:

I heard, actually, this was on Joe Rogan the other day as well, where they were talking about skunks.

Speaker B:

Have you ever smelt a skunk?

Speaker B:

Like, actually properly, like, smelt a skunk?

Speaker B:

Have you ever driven, like, on a road and you could smell it in the air and you're like, there's a skunk somewhere nearby?

Speaker B:

Basically, what they said is that that's how a dog smells everything.

Speaker B:

So, like, we as humans can smell a skunk from, like, a mile away.

Speaker B:

Yeah, and a dog smells everything like that.

Speaker B:

So a dog can smell us outside when we're coming into the house because their nose is so sensitive.

Speaker B:

It's like us smelling a skunk from a distance.

Speaker B:

And then you extrapolate that.

Speaker B:

And a bear's is seven times more sensitive than a dog's.

Speaker B:

So you've got this bear with a nose like that; it's no wonder, if a human comes anywhere within a couple of miles of a bear.

Speaker B:

A bear can smell it and they know exactly what it is, and they can smell your food.

Speaker B:

So that's why, you know, so we.

Speaker B:

We are the smartest in our own little universe in which we live.

Speaker B:

But I think there's also intelligence, you know, dolphins, enormously intelligent.

Speaker B:

We know that already.

Speaker B:

You know, I saw a video, some stupid video the other day on some social media thing, where, like, a lady went and saved this massive manta ray from a net in the ocean.

Speaker B:

It got tangled up in a net.

Speaker B:

So it wasn't a lady.

Speaker B:

The lady's the shark lady. This guy was snorkeling, so he dove down, pulled the net off the manta ray, got back on the boat, and then all these, like a hundred, manta rays started surrounding the boat and swimming around underneath.

Speaker B:

And then a whale came up to the side of the boat and let them pet it.

Speaker B:

And then dolphins started bringing them fish, and it was like the whole ecosystem of the ocean around that area just went, oh wow, you helped this guy.

Speaker B:

We're gonna bring you stuff and we're gonna, like, come and see you and do nice stuff for you.

Speaker B:

And it's like, we have no idea what's going on in other parts of the natural world.

Speaker B:

You know, we know trees.

Speaker B:

If one tree starts to die, the other trees can actually send it water through the root system.

Speaker B:

And everything else.

Speaker B:

And like, we have no idea what's going on.

Speaker B:

And I think we are the top, the most intelligent, within our own species and the ones very close to us.

Speaker B:

But I think intelligence is such a broad spectrum that it's really difficult to quantify.

Speaker A:

I think many might argue that we're at the top, but we're also the stupidest because we seem to be the ones destroying everything.

Speaker B:

Exactly.

Speaker B:

Well, yeah, there is that as well.

Speaker C:

The one I was interested in was raccoons in America being able, after a year, to be trained to open seven different types of locks to get food.

Speaker B:

And I love raccoons.

Speaker B:

Raccoons are the best.

Speaker C:

Just amazing.

Speaker C:

But crows as well, apparently.

Speaker B:

Raccoons have thumbs.

Speaker B:

That's why they get in so much trouble.

Speaker B:

Yeah, they do.

Speaker B:

They have, like, opposable thumbs, which is why they can get in everything.

Speaker C:

Amazing animals.

Speaker C:

But also, you know, crows pass things on. So if they dislike someone because of aggression from a human, they pass that knowledge on to their children.

Speaker C:

And so the next generation of crows can dive-bomb somebody in the neighborhood, because they've learned that from the parents.

Speaker C:

Just incredible stuff that we're still learning about, around socialization.

Speaker A:

Yeah, you touched on a really important point there, actually, which I wanted to bring up: this kind of inherited intelligence.

Speaker A:

Okay.

Speaker A:

So, you know, we sort of come into the world and instinctively we're scared of certain things; certain colors in nature will make us think, oh, that's dangerous, you know, red, or spiders or whatever.

Speaker A:

Or, you know, there's certain things that, you know, have happened in the past to our ancestors, and then that's almost passed down through DNA or something.

Speaker A:

There's another thing going on there that, again, you're never going to find in a large language model by scraping the Internet, are you?

Speaker C:

Genetic transfer of fear.

Speaker C:

I mean, do you remember that phase a few years ago?

Speaker C:

And again, I'm not condoning this, but people putting, I think it was cucumbers, behind cats.

Speaker C:

Yeah.

Speaker C:

And the cats turning around and jumping, like, two or three feet in the air.

Speaker C:

And like TikTok just.

Speaker A:

That was a whole TikTok thing.

Speaker C:

But again, it's debatable whether that's ethical or not.

Speaker C:

But just that inherited kind of fear for survival, again, is amazing.

Speaker C:

Something's transferred from the cat's mother to, you know, the daughter or son.

Speaker C:

Yeah, that's incredible.

Speaker B:

So in the context of AI, would we call that bias?

Speaker A:

It could be, yeah.

Speaker B:

But it's picked up information from millions of humans historically that it now uses to make decisions on certain things.

Speaker B:

When you ask it something, yeah, it's the same thing.

Speaker B:

The words are slightly different, the behavior is the same.

Speaker A:

It's very interesting.

Speaker A:

And I think that's one of the things about these LLM systems we really need to remember a lot of the time: it's not coming up with its own stuff.

Speaker A:

It's got the sum of human knowledge, basically, with all the good bits and all the bad bits and all the nonsense; everything's in there.

Speaker A:

And then a bunch of people have sat around trying to work out the rules: well, don't behave like that, behave like this; don't say that, do say this.

Speaker A:

And what you end up with, as you said, is a very human product in the end, very much biased to our ways of thinking, and it probably has all those inherited defects, if you like, because that's what we've got and we've passed it on to that system.
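
To make that concrete, here's a minimal sketch in Python of the point Speaker A is making. The corpus, the "model", and the blocklist are all toy inventions for illustration, not any vendor's actual pipeline: the model simply reflects whatever was in its corpus, and the human-written rules sit on top as an output filter that masks things without removing the underlying statistics.

```python
# Toy sketch: a "model" that is nothing but its corpus statistics,
# plus a bolted-on rule layer ("don't say that") applied at output time.
# All data and names here are hypothetical.
from collections import Counter

corpus = ["good advice", "good advice", "bad take", "nonsense", "good advice"]

model = Counter(corpus)          # the sum of everything it was fed

BLOCKED = {"nonsense"}           # the human-written behaviour rules

def generate() -> str:
    # Pick the most likely output, then apply the alignment filter.
    for text, _count in model.most_common():
        if text not in BLOCKED:
            return text
    return "[refused]"

print(generate())                # -> "good advice"
print(model["bad take"])         # -> 1: the bias is still in the model
```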

Speaker C:

Yeah. I was at a talk, and one of the women was from the Far East, and she was saying she'd asked one of the generative AI systems to make a photo of herself look more professional, and it made her white.

Speaker C:

And that shows that kind of inbuilt bias in the AI systems.

Speaker C:

It's trained on a data set that was probably predominantly white people at the time and, you know, predominantly made in the West.

Speaker C:

And I know that's changing now, but it was an interesting example of how AI sort of picks up these biases.

Speaker A:

Yeah, I mean, we saw that with, which system was it? I think it was Google, wasn't it, with the sort of Nazi images? It was generating images of black people as Nazis and stuff.

Speaker A:

And that was like the reverse situation there, because what had happened is the system, of course, would be biased towards white people being Nazis, because that's what most of the training data showed.

Speaker A:

But in an effort to make sure everything's diverse, Google had gone in and basically gone so hard on diversity that it kind of overrode the data. So even though, you know, a Nazi is most likely going to be white, it decided it had better do some diversity and do black people instead. And you get yourself into all these terrible messes, don't you, these difficulties of how you adjust.

Speaker A:

And I do worry a lot that at the moment most of the AI systems, apart from Mistral in France, which is doing quite well, are obviously Silicon Valley, you know, tech bros.

Speaker A:

It's a certain worldview, isn't it?

Speaker A:

And it'd be interesting to see, over the next sort of 20, 30 years, as other countries in different parts of the world with very different cultures develop their own systems and models, how different they really are and what they might look like.

Speaker C:

Quite a few coming out from China on the generative video and imagery stuff, which I haven't tested in anger yet.

Speaker C:

But yeah, it'd be interesting to see the biases in those or lack of.

Speaker A:

They were talking about one of them on a podcast I was listening to the other day, and they were saying that you can't get it to say anything bad about the President of China.

Speaker A:

Or if you ask it about Tiananmen Square, that never happened.

Speaker A:

Yeah.

Speaker A:

So again, they're very heavily aligning it to fit their audience, their requirements, aren't they?

Speaker A:

And I think it would be naive for us in the West to think that ours is any different.

Speaker A:

Of course ours has been trained to reflect the society that a certain group of tech billionaires, who seem to have control of all of this right now, want.

Speaker A:

And I think maybe that's my biggest fear for all of this stuff: in the end it's in the hands of a few individuals, and seemingly, as we're seeing on social media at the moment, they've got very strong opinions.

Speaker B:

Whose biases do we use?

Speaker B:

I think that's the question, and that's what I've been asking on my shows for a year.

Speaker B:

And that's the thing that worries me, because the three of us are all on this call, but we were all raised in different places with different people; we've had different experiences; we have different built-in biases.

Speaker B:

Right.

Speaker B:

And so which one of ours is the correct one?

Speaker B:

Obviously we each think our own is the correct one and that everyone else is wrong.

Speaker B:

And so this is exactly what worries me.

Speaker B:

And the more rules we put in, the more we try to stop the bias, the more we create a fictional picture that doesn't exist.

Speaker B:

And so if we start to create a fictional picture that doesn't exist, at what point does that then create completely unrealistic expectations for real people?

Speaker B:

Because they look at stuff and they say, oh, but that's how it is.

Speaker B:

Because I asked AI, and AI says, I asked for this and it gave me these images, and this is what these people look like, when actually that's not the reality; it's someone's messed-up idea of what it should be like in some perfect society.

Speaker B:

But that just doesn't happen.

Speaker B:

And then that maybe creates even more problems. It's like the social media problem.

Speaker B:

Right.

Speaker B:

So you look at other people and they're all rich and happy, standing in front of cars and all sorts of stuff, but that's not actually how they live.

Speaker B:

And then everybody becomes unhappy because they think that they're supposed to be like that.

Speaker B:

And we're creating a fake world that doesn't reflect reality.

Speaker B:

Because a few people think, oh, well, that's not nice, so we should do it differently.

Speaker B:

It's like, but the world's not nice.

Speaker B:

And I'm more of an absolutist and I think that it should reflect the data.

Speaker B:

And if we don't like what the data says, it's up to us to change the way we behave so that it gets better data.

Speaker C:

Yeah.

Speaker B:

And then it will start to reflect based on the newer data.

Speaker B:

But that's the only way we're going to learn because otherwise we'll look at it and we'll go, oh, well, everything's okay now, but it's not okay.

Speaker C:

And like, if the three of us were sitting in the pub for an evening and we all had different political or religious viewpoints or whatever, we could thrash it out.

Speaker C:

And I feel sometimes, with that fake world we're pretending is there, people get so easily offended, or we're so busy protecting people's sensitivities.

Speaker C:

And obviously there needs to be a certain amount of civil discourse in the public space.

Speaker C:

But that's not the way we've evolved; I've changed my views by arguing with people, by people telling me a different viewpoint.

Speaker C:

And that's how we evolve and how we get better at things.

Speaker C:

Of course, sometimes we get entrenched in our own viewpoints, but I think that realistic interaction with people who have different viewpoints than you is a good thing.

Speaker A:

There's a danger as well that we end up with such polarized views, because the model is training mostly on online data, right? You end up with very strong views on either side of an argument, and not much data on most of the people, who are probably in the middle. So it's only seeing these two crazy extremes; the saner view that lives somewhere in between, there's actually not much data on that. So it's lurching from one to the other, and you end up again with quite a volatile.

Speaker C:

Agreed.

Speaker A:

Yeah.

Speaker A:

Intelligence.

Speaker A:

You end up with a system that's quite volatile really, don't you? Because it's going to go to one extreme or the other.
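
A toy illustration of that volatility, with invented numbers rather than real opinion data: if the training data is bimodal, with strong views at the two poles and almost nothing from the middle, the average looks moderate, but individual samples lurch between the extremes.

```python
# Toy sketch of polarized training data: hypothetical opinion scores
# where -1.0 and +1.0 are the two extremes and 0.0 is the middle ground.
import random

random.seed(0)
opinions = [-1.0] * 450 + [1.0] * 450 + [0.0] * 100   # hardly any middle

mean = sum(opinions) / len(opinions)
print(f"average opinion: {mean:.2f}")   # ~0.00 -- looks moderate

draws = [random.choice(opinions) for _ in range(10)]
print("sampled outputs:", draws)        # mostly -1s and +1s, rarely 0
```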

Speaker C:

I do think social media, chat forums, the Internet encourage a more extrovert personality type to interact, because by the very nature of being extrovert, you care less what people think of you and you're more pushing your viewpoint.

Speaker C:

But that's not everyone; that's not the majority.

Speaker C:

So there is often this perception of the world in conflict, but you know, that's not true offline.

Speaker A:

And one thing I think about to address this is synthetic data: they can create their own data that's not real data, but it's reflective of what ought to be there.

Speaker A:

And that would contain less bias, because it wouldn't have the extremes necessarily, or maybe it would have a balance of extremes and middle. But of course the problem with that is: whose synthetic data is it?

Speaker C:

Yeah.

Speaker A:

In the first place, again.

Speaker C:

Yeah, it's the same problem, isn't it?

Speaker A:

It's a circular loop; you just move the problem down the road a little bit.

Speaker B:

And if it's accurate synthetic data, it will have exactly the same biases in it as well.

Speaker B:

It just won't.

Speaker C:

Same problem.

Speaker B:

Yeah.

Speaker C:

Kicking the can down the road there.
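
A minimal sketch of that can-kicking, with made-up proportions: fit a simple generator to biased source data, sample "synthetic" data from it, and the same skew comes straight back out.

```python
# Toy sketch: accurate synthetic data reproduces the bias it was fit to.
# The categories and the 90/10 split are hypothetical.
import random

random.seed(1)
real_data = ["west"] * 90 + ["east"] * 10    # biased source data

# "Train" a generator: just estimate the category probability.
p_west = real_data.count("west") / len(real_data)

synthetic = ["west" if random.random() < p_west else "east"
             for _ in range(1000)]

print("real west share:     ", real_data.count("west") / len(real_data))
print("synthetic west share:", synthetic.count("west") / len(synthetic))
# Both come out around 0.9 -- the problem has just moved down the road.
```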

Speaker C:

What do you guys think of the UK announcement this week?

Speaker C:

You know, having a BBC of AI in a way, a sort of publicly owned AI.

Speaker A:

Good.

Speaker B:

I think.

Speaker A:

Interesting.

Speaker A:

Well, Dave, you're sitting on the committee, aren't you?

Speaker B:

Well, I'm on the APPG, the All-Party Parliamentary Group.

Speaker B:

Yeah.

Speaker B:

I don't know.

Speaker B:

I think this is what we're going to end up with in pretty much every country.

Speaker C:

Okay.

Speaker B:

And I've sort of theorized before that where we would end up is that each country or region would maybe end up with its own AI system that roughly aligned to its values.

Speaker B:

And so people that lived in that region or country would have a sort of internal AI that they could use, because I think the values in the US are very different from European values, which are very different from African values, which are very different from Middle Eastern values.

Speaker B:

Right.

Speaker B:

So I think each one of those areas is going to end up with one big player, and all the biases and all the rules and all the ethics, everything, will be geared towards how that culture feels about it.

Speaker B:

So from an early stage, I think in general, it's a good idea.

Speaker B:

I've worked in the public sector on data projects for nearly a decade now, and I don't see any practical way that's ever going to happen, but it's good in theory.

Speaker A:

I mean, I had a good look at the report when, you know, we put it up in the group and stuff.

Speaker A:

And my feeling was that it was ambitious.

Speaker A:

The level of scaling they're talking about, from where they are today to where they want to be in five years, I think it's almost so ambitious it's unachievable.

Speaker A:

I can't see how they can get to that level in that space of time.

Speaker A:

So I'm not quite sure who's doing the sums there, but.

Speaker A:

And then I think the other big issue, of course, is that it's the public purse, it's public money.

Speaker A:

There's a lot of other pressing issues in the country at the moment, obviously, the health service to name but one.

Speaker A:

But again, if five years down the road they've hosed an enormous amount of money into this and their systems are still way behind, say, where OpenAI or Microsoft are at that point.

Speaker A:

There's going to be a lot of questions asked, you know, and I think it becomes difficult.

Speaker A:

But I do think that what they say is spot on over time, because remember, we're not even at base camp on this; we're on the bus to base camp, in terms of where AI will be 30 years from now.

Speaker A:

I think this is the beginning of that conversation, that road, that journey.

Speaker A:

And I think governments and countries will need to look at this stuff and develop their own capabilities over time.

Speaker C:

But I think it's strategically important from a.

Speaker A:

Defense point of view, the same as oil is, or anything else was.

Speaker A:

You know, in defense, I think over the next 20, 30 years it will happen.

Speaker A:

So in that sense it's good that the UK is starting early on this, because it's probably starting before most countries.

Speaker A:

So that's good.

Speaker A:

And that might give us some opportunities and afford us some gains as well.

Speaker A:

But yeah, I think it's probably overambitious as they've laid it out.

Speaker A:

But, you know, good on them for giving it a go, I guess.

Speaker B:

Did I see that the US has requisitioned fighter jets now with no pilot?

Speaker B:

Okay, so fully automated.

Speaker C:

You were saying in a previous episode, weren't you, that the AI was already flying these things.

Speaker B:

I know, the AI worked really well, and they've had that for a while in sort of test planes, but now they don't even want a human component to it whatsoever.

Speaker B:

They're literally saying: just design a plane that can fly by itself, with no capacity for a human pilot in it whatsoever.

Speaker A:

It makes a massive difference as well. I talked with the RAF about this years ago.

Speaker A:

I did quite a lot of work with them.

Speaker A:

But many years ago one of my roles was working with the armed forces.

Speaker A:

And you talk to the RAF and they would say, well, you know, the biggest limitation in a fighter plane is the human, the G-forces.

Speaker A:

You just can't push a person further; they could get the plane to do way more, but they can't, you know. So, yeah, as you said, take out the human and you basically lift the roof on capability.

Speaker B:

You take the canopy off the top, because the humans don't need to see out.

Speaker B:

And essentially you've got a missile with a brain in it, which is, you know, quite interesting.

Speaker B:

Gents, I have to.

Speaker A:

Yes, we've gone on.

Speaker C:

It was interesting, though.

Speaker B:

It was, yeah.

Speaker C:

I mean, there's half the stuff we haven't even covered that we wanted to chat about, but we can do that in a part B.

Speaker C:

Yeah, there we are.

Speaker A:

Well, I'll tell you what, we'll definitely come back to this subject in the future because it's a moving feast.

Speaker A:

I think, Ben, you might have an interesting one for next time about kind of how we sort of store up our memories of our family and friends for the future and how AI could potentially help us with that.

Speaker A:

So I think that might be a great one.

Speaker C:

So, a cheerful one.

Speaker B:

Let's do it.

Speaker A:

I'll come back for that.

Speaker A:

But yeah, we'll be back, probably in a month, with another episode.

Speaker B:

Brilliant.

Speaker B:

Thanks, guys.

Speaker C:

Thanks, guys.

Speaker A:

Thank you.

Speaker B:

Bye.

Speaker A:

Take care.

Speaker A:

Bye.

About the Podcast

AI Evolution
Exploring the Future of Artificial Intelligence


About your hosts


David Brown

A technology entrepreneur with over 25 years' experience in corporate enterprise, working with public sector organisations and startups in the technology, digital media, data analytics, and adtech industries. I am deeply passionate about transforming innovative technology into commercial opportunities, ensuring my customers succeed using innovative, data-driven decision-making tools.

I'm a keen believer that the best way to become successful is to help others be successful. Success is not a zero-sum game; I believe what goes around comes around.

I enjoy seeing success — whether it’s yours or mine — so send me a message if there's anything I can do to help you.

Alan King

Alan King, founder of the AI Network, AI Your Org (aiyourorg.com), and Head of Global Membership Development Strategy at the IMechE, has been fascinated by artificial intelligence (AI) since his teenage years. As an early adopter of AI tools, he has used them to accelerate output and explore their boundaries.

After completing his Master's degree in International Business, King dedicated his early career to working at Hewlett Packard on environmental test systems and Strategic Alliance International, where he managed global campaigns for technology firms, all whilst deepening his knowledge around neural networks and AI systems. Building on this valuable experience, he later joined the IMechE and published "Harnessing the Potential of AI in Organisations", which led to setting up the "AI Your Org" network.

Firmly believing in the transformative power of AI for organizations, King states, “This version of AI at the moment, let’s call it generation one, it's a co-pilot, and it's going to help us do things better, faster, and quicker than ever before.”

Known for his forward-thinking attitude and passion for technology, King says, “We become the editors of the content, and refine and build on what the AI provides us with.” He's excited about the endless potential AI holds for organizations and believes that the integration of human and machine intellect will drive exponential growth and innovation across all industries.

King is eager to see how AI will continue to shape the business landscape, stating, “We are about to enter a period of rapid change, an inflection point like no other.” As AI tools advance, he is confident that their impact on society and organizations will be both transformative and beneficial.