Episode 1

Will AI Replace Us? Exploring the Future of Autonomous Living

This debut episode of AI Evolution sets the stage for an insightful journey into the world of artificial intelligence.

Host Alan King, alongside co-host Benjamin Harvey, welcomes guest David Brown to discuss the transformative potential of self-driving cars and humanoid robots. Together, they examine society’s readiness for fully autonomous vehicles, challenge Elon Musk’s aggressive timelines, and explore the deeper implications of integrating AI into daily life.

From concerns about job displacement to the tension between convenience and control, the trio tackles the big questions: what role will humans play in an AI-driven future, and how can we shape that vision ourselves?

It’s a compelling start to a series that asks not just where AI is heading, but whether it’s the future we truly want.

Takeaways:

  • The future of AI and humans will involve a symbiotic relationship, evolving together over time.
  • Elon Musk's vision for self-driving cars is ambitious, but timelines may be overly optimistic.
  • Public perception of self-driving cars is mixed, with many preferring human control for safety.
  • Robotic technology is advancing, but widespread domestic use is still decades away.
  • Younger generations show less interest in driving, preferring public transport or ridesharing.
  • Concerns about job displacement due to AI and automation highlight the need for societal discussions.

Transcript

Alan King

::

Welcome to AI Evolution, the podcast. In this podcast we're going to be discussing obviously AI as we've said in the title, but we want to think about things slightly differently.

You hear lots of conversations these days around the latest AI, the latest tools, the latest tricks, the latest gadgets, but we wanted to stop and think about where this is all actually going.

What does this mean for the future of AI, but also the future for humans and actually how humans and AI are effectively now going to start almost symbiotically evolving together. And yeah, where does that take us?

So to help me on this journey today we've got Ben Harvey who's going to be the co-host of the show and Dave Brown who's the producer for the show and also runs an AI network. I'm going to let them introduce themselves now and tell you a little bit about what they do. Dave, do you want to go first?

David Brown

::

Yeah, sure, thanks, Alan. Yeah, I run a podcast network called WithAI FM and I think we've got eight shows or something now addressing different verticals.

So you know, education, the law, relationships, creatives, startups, all that kind of stuff. And Alan, you've been on, I think, Creatives WithAI in the past, so you can go look that one up.

And I also have a media company so I help other people with their own podcasts, audio, video production, all that sort of stuff.

Benjamin Harvey

::

And yeah, hi, I run a media company based in London. I'm the South Wales part of it. We, yeah, video, photography, web design, all that kind of thing. So.

And yeah, I've been particularly interested in how AI interacts with the creative industry.

Alan King

::

Excellent, thanks. And it's worth sort of saying I guess at this point as well, you know, we're going to have different guests on each week.

We've got Dave on particularly this week, apart from him obviously being the producer and the host of the network, because he's had a fair amount of experience recently with automotive, talking to automotive companies about, you know, self-driving vehicles. And so hopefully the conversation we're going to have today is going to reflect that.

So hopefully he can bring some interesting insights with that.

So without further ado, I think we'll just crack on. Just to say that, you know, the show is sponsored by Dave, Ben and myself through our various organizations, and I run the AI Your Org network; anyone wishing to reach out and join that network can contact us through the show and we'll happily let you come and join the conversation. So what we want to do today is think back a few weeks.

It seems like a lifetime ago now that this event happened, back when we first started talking about doing this podcast, because a lot's happened since with the people involved. But Elon Musk stood on a stage, probably three or four weeks ago now, and unveiled his latest vision, if you like, for self-driving cars and indeed robots. As I said, a lot has happened since then, as everyone listening to this will be aware.

But I want to roll back to that a little bit, because when I watched that event, the thing that really struck me was that here was a billionaire with a very singular vision of how he saw the future and what the world might look like.

His timelines, I think he said himself, are very aggressive and I don't think any of us are probably buying into his kind of two year vision for fully self-driving vehicles rolling around on streets. But certainly the vision is there for the long term. And you know, again with the idea of robots kind of wandering around your homes as well.

So let's take a bit of a dive into that today and think about, well, what does that mean in 20, 30, 50, 100 years? What would society actually look like if we end up going down this path?

Because sometimes I feel that billionaires, it's almost like they read a bunch of sci fi books and kind of went, oh yeah, that's what we should do. And maybe, maybe, just maybe it doesn't reflect actually what society might actually want.

So I'm going to throw Ben straight into the deep end here and let's perhaps look at the car theme first, shall we? And then we'll move on to the robots a bit later on.

But how do you think people will react to the whole idea of self driving cars in the future? Is that, is that something that actually people even want? What do you think, Ben?

Benjamin Harvey

::

It's a really good question.

I've gone around thinking about this for the last couple of weeks quite a lot, and I think I've gone from cynical to kind of excited and back and forth, and I've probably ended up somewhere in the middle. I guess user error is a massive thing with driving a car, you know: tiredness, drinking, distractions with phones.

So we look at the technology to help us improve things, and there's that aspect of it which I think is really good.

But I also think, these multi-billion pound companies, dollar companies, do they have our best interests at heart as well? So I think it's down to how we want to look at how society is going to advance and how we're going to end up with these things.

And I don't know whether people want fully automated cars. I mean, I probably don't. There's times when I want driver assist or help, but I don't want to be out of control.

I like driving, I enjoy it as a personal freedom.

But yeah, there's times in a city where you're traveling with lots of film gear and you want to get from A to B quickly. You can't drive in London, for example; it's almost impossible sometimes to drive in London, to park in London.

So there's definitely situations where I would love it, but as a rule to everyday life, I don't think I would.

Alan King

::

Yeah, it's interesting, isn't it? Because I think, you know, I personally can see times when I think, yeah, okay, maybe that'll be quite good.

But I actually really quite enjoy the whole process of driving actually a lot of the time. And I think maybe there's a lot of people who do think like that.

I think we should probably, for the listeners, clarify a little bit what self-driving actually is, because I think there are a lot of misconceptions around this area. Elon certainly tries to persuade people that the self-driving he has is full self-driving, but it isn't really.

It requires a lot of human input and human assistance.

And I think, in terms of the scale it's on, what they call level four and level five would be when you step into a realm where a vehicle is simply able to do everything itself in any conditions: rain or shine, dark or light, whatever it might be.

You can throw it into a novel environment it's never been in before and it can navigate the roads, find its way, and get to where it needs to go perfectly successfully.

You know, it could take itself down a winding road in Cornwall or somewhere and pull itself up onto a bank to get out of the way of a tractor if it needed to. It would have that level of, you know, intelligence.

And at the moment the systems that we have are really quite a long way from that; that's my understanding, anyway. We have systems that are capable of moving vehicles around, but in pretty much predefined spaces that they've been trained on.

And, you know, obviously they have a lot of equipment on board as well, like lidar, radar and video feeds as well. Although it should be said, actually with Elon's cars, I think there is no lidar and radar.

It's just video feeds he's trying to do it on, which could present its own problems, I would have thought, down the line. But, Dave, you've spent some time, I think, with some automotive companies and looked at this.

What are your thoughts about this and actually how far away is this reality? Because Elon will tell you it's two years. My instinct is it's a lot further out than that in terms of the evolution of this.

David Brown

::

It's interesting.

I'll pick up on a couple of the points that you mentioned, but I'm going to throw some new stuff into the mix that probably people haven't thought about. First of all, one, I agree with you.

I think we are going to see fully autonomous things like buses, and we'll probably see them in the next couple of years. I think the technology to do that is actually fine.

If you wanted to have a bus that ran a route in a city like Cambridge, for example, a bus navigating itself down a kind of route that it runs, and it knows how to go back and forth all the time, and all it's trying to do is just not hit other cars, that's pretty well proven that that works. They have them running in Oxford, for example. On some of the science parks, the buses run fully autonomously.

There's a person sitting in the seat doing absolutely nothing and never intervenes in what happens. The problem is that the regulations and the laws haven't caught up to allow that to happen.

So at the minute, you still have to physically have a person sitting in the seat even though nothing ever happens. So I think there's a big part of the laws and the regulations catching up even to enable that.

So probably getting that done is going to take longer than the technology actually being ready to do that. But like you mentioned, again, that's kind of a fixed area. Excuse me, where it knows where it's going and it's just repeatable.

But I think that's where it's going to start. And then it's going to expand in urban areas and potentially peri.

Urban areas, which is like your suburbs, because again, there's a lot of infrastructure that's already there. There's a lot of houses, there's a lot of signals, there's a lot of lights.

It's well lit, all those sorts of things that make it easier for those vehicles to determine and know where they're going.

I think your example of running down a single lane road in Devon or something like that is probably pretty far away, but I think we can cover 80% of travel in a fairly short amount of time if we can have the laws catch up. And I think there's a knock on effect.

And not to do a shameless plug, but one of our hosts does Places with AI, which is all about smart cities and stuff like that, and his very first guest was Darren Capes, who works for the DfT in the UK, so part of the government department. One of the things they mentioned in their last conversation was insurance.

For example, what's the impact as autonomous vehicles start to roll out and they're safer than humans?

At what point does the insurance company say, well, we're not going to insure you because you're a human driver and you're too risky? Because it will reach...

Alan King

::

Interesting, I hadn't thought of that.

David Brown

::

But you know, us humans, we're actually the wild card.

And I think as more and more vehicles are on the road that are fully autonomous and can work with each other, you've got the V2V, the vehicle-to-vehicle communications, and then you've got V2X, which includes the vehicle-to-infrastructure communications.

Once you have an environment where that's all that's there, I think that is, that's the tipping point is when you can start to have more of those than you have people driving. So that's, that's one bit of it.

The other bit that I think people don't talk about a lot, but that you really need to have in the back of your mind, is that governments globally have agreed that everything they're working towards under the environmental argument is that they don't want people driving personal vehicles at all. So the goal is to remove cars completely. They don't even care about driverless vehicles; they don't even want driverless vehicles on the road.

They want no vehicles on the road because of the environmental impact. And they are very steadily working towards pricing people out of being able to drive entirely.

You won't have your own car, but you might be able to get an autonomous taxi to take you somewhere if you need to go somewhere in a car. But something like 78% of the population, certainly in the West, live in urban and peri-urban areas.

And so they're not really worried about the other percentage.

Yes, there's an issue with people like farmers who live rurally and all that, but by and large the vast majority of the population can be serviced that way and, with better public transport and that sort of thing, shouldn't actually need to have vehicles. And that is the agenda and that is what they're working towards.

So when you see all these stories come out in the news about how they're raising taxes here, they're doing this, they're taking petrol vehicles off the road and all of that: that is the global plan.

And I've personally sat in meetings and seen presentations where they have that written on slides and they know it's unpalatable and they know that people don't like it, but they don't care and they're doing it anyway. They're just doing it little, little bits at a time and just chipping away at it.

Alan King

::

It'd be really interesting to see how that develops. Actually, I haven't given too much thought to the kind of pricing strategy, almost pricing people out of driving in the future.

I guess we've sort of seen this before with smoking, actually, haven't we?

You know, the government sort of take a stance where they effectively incrementally raise prices and eventually you reduce the number of people smoking. Some people still smoke, obviously, but the numbers are considerably lower than they were say, 20 years ago.

So I guess you could do the same with cars. As you were talking as well, I also just kind of thought about the bus thing. You know, actually that's almost a good place to start, isn't it?

Because people who are getting on a bus have already opted to say, I'm actually not that bothered about driving myself; I'm happy to let someone else do the driving. So then the only argument you have to win with them is: do you feel safe?

You know, as long as they feel safe. Well, they don't really mind if there's a driver or an AI system.

Maybe the people with the expensive cars who are happy to pay a premium, maybe they're the ones that are going to be the hardest ones to kind of convert and bring across.

David Brown

::

I think there are two elements to that. Just quickly, sorry, I don't want to dominate the conversation.

One is, I think the direction that the luxury manufacturers are going is going to be a membership.

So what you'll do is you'll actually have a membership to BMW or you'll have a membership to Mercedes and when you need a car to come and pick you up, you'll get picked up in a Mercedes or a BMW. You won't get picked up in a Skoda or whatever. So I think that's the first step.

So the autonomous vehicle will just be an autonomous, nice car, and you won't get shoved into some, you know, tiny little taxi box, whatever. And that's what sort of the plans that they've been talking about for quite a long time.

So you're starting to see it with the pricing on, like, all the features. I think with BMW, if you want heated seats now, it's like a subscription.

So you have to pay for it on a monthly basis, and there are all these features on the cars that they enable and disable based on your subscription.

Alan King

::

Because if we imagine a world, you know, 20, 30, 50 years from now, where this is technically feasible in terms of AI evolution, this is what the tech companies want because it's where they see the revenue, and it's what the governments want because it helps them meet their ecological goals. But what about the people? What do people want?

Because do people quite like having personalized vehicles? You know, they get in their car, it's got all their stuff in it, the keys, the bits and pieces, and it's almost an extension of who they are.

And it represents them in the world, doesn't it? Does that go away? Do people try and cling onto that, or do the kids just not care? Go on then, explain that.

David Brown

::

But the kids don't care, like a lot of the kids. My son's 17. He doesn't really care about driving. The only reason he wants to drive is because we live six miles out of town on a farm and he can't get anywhere. So, you know, the kids...

Alan King

::

Worst nightmare, I think.

David Brown

::

Yeah, exactly. But most of the kids in his class don't drive and have no desire to drive and don't care. They literally don't care.

They are completely different than we are. You know, we came from a generation of people who, well, I learned to drive when I turned 16. I turned 16 on, like, a Wednesday; on the Friday I went and took my driving test, and then I was out driving on my own in the evening.

But that was in the US, and I think in the US it's a little bit different again, because everything is so much further spread out and you don't have public transport and things like that unless you're in one of the big cities. So there's still a lot of that there.

But I think in Europe in particular, where everything's much closer together and it's much easier to get around, honestly, the kids don't care. They just see it as so expensive.

I think it's like three or four grand a year just for insurance on, you know, a tiny little car that has a smaller engine than I have on my motorcycle, which is just, I mean, ridiculous. So it's so expensive and they just don't see it as a priority.

So I think we're dealing with that, where as we get older and the new generation comes in, it's not even an issue.

Alan King

::

Ben, do you think this dies with us, with us old fogeys who've grown up with cars?

Benjamin Harvey

::

Cars, yeah. I had some of the same thoughts as David there. You know, I spent 25 years in London, and for a huge amount of that I didn't have a car, because it was more of a hassle; there was nowhere to park it, it was just a hassle. And because it had fantastic public transport, I could, for 95% of the time, get away without using a car.

Only when I was filming with large amounts of equipment did I have to use a van or a car. So for probably at least the first 10 years I was in London, I never had a car. It was absolutely fine.

And you know, I got on the bus in London this week at five in the morning, got on the tube at five in the morning, and both were full at that time. Easy to get around. Now I'm out of London, I need a car; I need a car to do almost everything.

And so I think with autonomous vehicles, it's certainly going to depend on, you know, David, as you say, that suburban environment and that 75% of the world's population that are in cities and urban environments. But yeah, I love having a car out in the sticks. It's useful. But when I'm in a massive city with great public transport, I can do without it.

And I guess, you know, I can see this reflected in my nieces and nephews of driving age: I don't know any of them who have done their driving test.

Alan King

::

So maybe cars become like horses then. You know, I'm going to guess that 100 years ago, maybe a little more than that now, a lot of people grew up with horses, riding horses, using them as their transportation. That's how they got around.

And there were probably people then having conversations about cars, saying, I don't want to sit in this thing, you know, I like riding horses, that's what I enjoy doing. But inevitably the generations that came after them loved the car.

It's a lot easier, it's a lot cleaner. It's a lot less messy. You know, you haven't got to walk behind it with a shovel. It's more fun.

It goes faster, you know, so maybe that's, that's where we're at. And maybe cars, you know, become the leisure pursuit for people. And people still have horses and they still have fun with them.

Indeed, my wife and I had many horses ourselves for years. But, you know, it's no longer a form of getting from A to B.

When you were talking about the Underground then, Ben, it just occurred to me that there's a system you would imagine would have been automated by now, like the Docklands Light Railway, and yet it hasn't been. I wonder why that is, because it would seem to be, you know, a closed system. It should be easy to do.

I would have thought that's a union issue? Unions, yeah. Okay, I don't know, I don't know much about that. Do you guys know anything about why?

Because when we talk about AI evolution, that might be a good kind of petri dish: as governments try to bring in this new world, there are these other societal groups that are going to try and resist that change.

Benjamin Harvey

::

Yeah, I mean, there's a huge incentive there. You know, notoriously, drivers of trains and the Underground get a good salary, a good pension, so that is very protected. And, you know, the DLR works when there's a strike, and they're great. I was on the DLR last week.

I was filming at Canary Wharf Monday and Tuesday of this week, and the DLR was perfect.

Yeah, I don't know if you have any more information, David, but my belief is that driverless trains are, technology-wise, done all over the world and have been for decades. There's no reason the Underground couldn't be more driverless, or a lot of suburban train systems as well.

David Brown

::

Yeah, it's, it's the unions, 100%.

And, you know, you see it with Southern trains, I think, you know, they've been striking and everything else because they tried to bring in some driverless trains and, you know, they were like, safety, you know, oh, my God. Somebody, you know, something might happen on a train and we need conductors and we need all this stuff.

And yes, there is a potential public safety element, but I don't think it's from the aspect of something going wrong with the train; it's more about the people on the train.

I think if people know that there's no conductor and that there's no one on the train to stop them, then you might see, you know, more crime or you might see, you know, particularly if people have been out drinking or partying or something in London and they're going home late at night, then you might have some trouble there.

But by and large, again, I think a lot of the autonomous technology is actually quite good already, particularly on something like a fixed network. I know in Cambridge they have private busways, so there are roads just for the buses.

And the busways are where they've been testing their autonomous vehicles and things like that. And they're like, yeah, it can totally drive the busway on its own, perfectly happily; there's nothing in the way.

It knows exactly where it's going, it's easy to see. So again, I think we're just having to get over the regulations. The government has to get to the point where it's comfortable.

Somebody's going to have to stick their head above the parapet and say, okay, we need to pass this regulation to let these vehicles on the road and to let them get out there.

I don't have the stats to hand, but I'm pretty sure that the stats are that autonomous vehicles are way safer than humans on a per mile basis on average. And I think that's only going to get better and better as they have more and more miles.

I know in San Francisco, though, they get vandalized, so they've got Waymo on the road, people use them all the time. But then they also get vandalized by people who don't want them you know, taking over taxi jobs.

So there's a societal aspect to this where the technology is probably ready, but people are pushing back, like the drivers, because I think they see that it's going to encroach on actual jobs for people who drive, like truck drivers. I want to say it was about five or six years ago.

There was a pilot in the UK where they were using semi trucks, delivery trucks that were driving between warehouses. So it could very happily go from one warehouse, get on, say, the M25.

It could drive around the M25 and get to the other warehouse at the other end, all by itself, completely autonomously. Had no trouble, like, they never had any accidents or any problems with that whatsoever.

It still needed a driver at the end to be able to park it, because parking a truck is a little bit more complicated than a car.

So they had a remote driver basically monitoring it, and the remote driver was able to park it from a central control office, and that was it. So, again, I think the technology is there, but there's a societal thing.

That's at the coalface, where people start to see that actually this is taking jobs away. And that's where people start to get a little bit touchy.

Alan King

::

It's twofold, isn't it? There's the whole economics, the job thing, you know, and do we want to be, you know, kind of automated out of existence almost?

What do we then do as humans, if all these things are just now being done by AI systems? And then I think there's that kind of safety issue as well.

I mean, I suspect that, you know, most planes can probably take off and land themselves fairly well if needed to, but we put a pilot in them because people feel happier about that and assured that, you know, that there's somebody there to look at the system if there's an issue. And even though, statistically speaking, if I get in a car with Ben not picking on you, Ben, not saying your driving is awful, because it isn't.

But statistically speaking, probably an AI system might be a better driver than Ben or I. Indeed. But instinctively, I feel more comfortable sitting next to Ben in a car than just sitting there watching the car driving itself.

And if you move to a world where in the future there's no steering wheel, there is no ability to take control if you need to. And I think that is the vision for a lot of these companies. Certainly it was the Tesla vision at the event a few weeks ago.

There was literally no control. So you are just in a vehicle and you are solely reliant on it. It's that fear that if something does go wrong, what can you do about it?

And presumably not a lot. So I think that will take time. Maybe that's the evolution here: we've grown up with one thing.

And as you said a while ago, Dave, you know, well, maybe the kids don't care because they've grown up in a completely different paradigm, completely different world. And the way they see this stuff, they probably go, why would you get in a car with another human? They're reckless and dangerous.

An AI system is much safer. So perhaps that's where we end up. All right, why don't we park the car as it was, excuse the pun, and let's move nicely into the robots then.

I did see, I think it was on TikTok or somewhere the other day, a sort of clip of a robot sort of standing, looming over a person while they slept, staring at them in a sort of slightly menacing way.

And it did make me think, yeah, actually, Elon's vision of these robots, these full-size humanoid-looking robots: do we really want them wandering around our living space in our houses? Dave, do you fancy one of those staring at you while you sleep?

David Brown

::

No, not particularly. Although I would like it to do my laundry.

Benjamin Harvey

::

It's the mundane, isn't it?

David Brown

::

I'm kind of torn over the robots. There's a robot called Ameca that's developed by a company down in Cornwall, and she's been in a lot of press and stuff like that.

We'll drop a link into the show notes to a video on YouTube that people can see. And the guy, Will, who's the founder of that company: they're an animatronics company and they've been building robots for, like, Disney and stuff like that for ages. But with Ameca, they really focused on the facial expressions and things like that.

I saw them at CES and I had a chance to actually talk to Ameca. I don't know whether to call it an it or a she, though, which is really weird, because it's kind of a female shape, but obviously it's just a machine. So half the time I say it and half the time I say she. Anyway, there's a point to this story.

And the point was, while I was there, the guy was giving her the brief for the day. So he's over there saying, right, today the Japanese ambassador is coming around, so speak Japanese.

And here are the points I want you to mention while he's here and the stuff I want you to talk about. And she's having a conversation with him and which was a bit weird anyway. And I was talking to the other founder.

But the weirdest thing about it, like that kind of seemed a bit normal. And you're like, all right, that's interesting.

But you know how as a human, if you're talking to someone in a busy area and something happens in your peripheral vision, you kind of, you know, you keep glancing over to see what's going on? Well, she was doing that.

So I was talking to the guy, and I could see that she kept looking over because she was seeing something in her peripheral vision. That was the thing that freaked me out the most.

It was so human; she was exhibiting that really natural human behavior of just keeping an eye on what's going on if she saw something. And I was like, wow, okay, that's pretty creepy. And I was surprised, because I'm a big proponent.

I think, you know, there could be uses for robots. But yeah, when they start really acting human, it's pretty off-putting.

Alan King

::

And do you think that was programmed?

David Brown

::

No, no. She has a face and she can make expressions.

And I just saw a video on Insta maybe last night, a reel, where they sat an AI in front of a mirror looking at its own face.

And then they gave it like a hundred thousand hours of videos from YouTube, and all it was doing was analyzing facial expressions and then trying to make those facial expressions itself. And it can now mimic like a hundred thousand different facial expressions accurately, and it looks like the face of Ameca.

So I don't know if it's those same guys who did that experiment, but that's what their whole company really focused on: trying to get the facial expressions right, like a smile that looks like a natural smile, and she has teeth and, you know, stuff like that, so you can really get emotion.

And again, that's kind of the most off-putting part. The other thing, just to mention (shout out to them, and I apologize, I can't remember the name; when Ben starts talking, or someone, I'll go look it up online and mention it in a minute): you can rent her as well.

So if you have your own stand at, say, a big event or something like that, and you want to have something unique, you can actually rent Ameca. You can train her on your whole company sales pitch and all the background and everything.

And then she can basically stand and have conversations with people and answer questions about your product and all sorts of stuff like that as well. So I, I thought that was pretty, pretty cool.

And I said to them, I said I wouldn't want the liability of, you know, sort of having her on my stand because I imagine the insurance on that is highly expensive. And he said she can take care of herself.

Alan King

::

Interesting. I mean, how much of this, though, do you think, when we're looking at these things, is us kind of anthropomorphising them a little bit?

You know, when you look at your dog or your cat or something, you kind of imbue an emotion onto it and go, well, it looks happy or it looks sad, but we don't really know if it's happy or sad. Are we a bit at risk of doing that with the robots as well?

Perhaps you're saying it's glancing around, but is that just a pre-programmed response that it's been told to do, to create that emotion in you, or is it genuinely just doing that?

And if you ask the developers, would they say, well, we're not sure why it's doing that, it just is? What's going on in the black box, basically? Is there stuff it's doing that we're not really sure why it's doing, but that's actually very human?

Isn't that disturbing? I think that's sort of interesting to understand.

And I also think it's interesting that, as a society, we seem very keen to try and create these things in our own image, because there's no reason why a robot needs to look like a human, right?

There's, you know, maybe two legs wandering around your house isn't the optimal configuration.

Maybe you could have a different system for getting around and getting upstairs, or whatever it might be. And yet it seems that we want that kind of familiar shape because, in theory, we'll be most comfortable with it. But you could also argue that by being exactly like us, it could also be the most disturbing version of it.

Benjamin Harvey

::

Yeah, there's that old idea, isn't there, going back decades, that the closer a robot is to a human, the weirder and creepier we find it. So I'm interested, David, that you think this was more lifelike in a way, and more human. But did you find it creepy?

David Brown

::

Yeah, I did at the time. I was surprised by it, because I sort of expected it to just be focused entirely on the conversation at hand and just looking at the person, and yeah, okay, she was having a kind of conversation and it was fine, you expected that. But I didn't expect the spatial awareness, and I don't think that was pre-programmed.

She only reacted if there was some sort of movement, so it's like she had vision. It wasn't like she was programmed to just look around at random times.

If something happened, she would notice it. And again, I don't know; maybe that was something they put in for her to have awareness, or maybe it happened on its own.

I've been trying to get Will on my podcast for ages, because I'd love to get him on to talk about it. He's very prolific on LinkedIn and comments on almost every single robot post that you see coming up, and I imagine he would have some thoughts on this.

But what he usually says is, don't worry about robots any time soon, because they are so incredibly expensive to maintain that the reliability issues make them just prohibitively costly. They are still very, very delicate.

Things like the joints and all that sort of stuff have a tendency to wear out very quickly and to break.

We're still decades away from having something that somebody could have in their home that would be reliable, that wouldn't just break down and be prohibitively expensive to try and fix. He just doesn't see it happening any time soon.

Alan King

::

Absolutely.

David Brown

::

Engineered Arts is the name of the company, by the way.

Alan King

::

I think when Elon stood up at his event, he kind of tried to give the impression that these robots were here now. But it very quickly came out all across social media that they were being controlled by humans: they were effectively remote controlled, with people voicing them in the background. It was really, you know, smoke and mirrors.

And interestingly, a while ago I did hear the CEO of, what's the Atlas company, the big robot company? It's gone out of my head. Gone blank.

Benjamin Harvey

::

Boston.

Alan King

::

Boston, yeah, Boston Dynamics.

And he was saying on this podcast, essentially, that the next thing for them was to try and get their robots to work out how to use door handles properly. And the interviewer says, well, how long will that take? And he said, well, probably a couple of years.

So I think when we see a lot of these demos, they're very constructed, put together and staged, and the idea that these things are going to be wandering around our homes any time soon is, as you say, Dave, a long way out. But let's imagine that long way out. Let's imagine 50 years, 100 years from now.

As a society, would we really want, you know, the robot feeding our children breakfast in the morning instead of us? How subservient, how in the background do we want to become? Oh, we've got the robot now and it can do all the washing and all the dishes, and it can feed and dress the kids. But we have to do something ourselves, don't we? So is this a tech guy's dream? Is this Elon's vision?

He's read some sci-fi and we should all have robots. Or is this actually something society wants? Because I'm not convinced it is, for most people.

And he talks about having robots in every home and it being the most successful consumer product in the history of the world, more popular than the iPhone. I just can't see that. I think people want people first.

People buy people first. And if a robot literally comes in and is able to do everything that you can do in your home, then what are you going to do?

Benjamin Harvey

::

Especially if we're losing jobs to AI anyway, so we might have more time to do some of those things in 10, 20, 30 years, you know, spending more time with our children or spending more time with our wives or partners, you know.

But yeah, I guess ultimately a lot of technology companies are presenting us with this vision of the future, but I find we're not stopping and thinking, well, what do we want for our future?

You know, there isn't that philosophical or moral discussion around where we want this to go as a society, or as humans. And I think we're so excited by what is possible sometimes that we forget to stop and think, well, do we actually want that?

I think that's an important part of the discussion that seems to often be left out.

David Brown

::

I think another interesting way to think about it is, I can't remember, I had a guest on ages ago, 2023, early 2023, I think, and we were talking about robots and I made the comment that, you know, what was going to end up happening is that robots were basically going to be for rich people, right? So they were going to have robot butlers and all this stuff. And they went, no, it's going to be totally the opposite.

All the rich people are going to have humans doing their stuff for them. They're going to have real pets, they're going to have all that sort of stuff.

All the people at the bottom end of the scale are going to have to deal with robots. So they're going to interact with robots all the time. And the people who have money and the super rich are the ones who will use humans.

And I think the idea is, it's just an extension of what we see now. I always use the example of cobblers, right? Everybody's shoes used to be handmade. You can still get handmade shoes.

There are still people who make shoes by hand and will custom-build them for you, but there are very, very few of them and they're hugely, extortionately expensive, because it's a trade and a specialty and there are very few people who do it. And I think that's where we're going to end up ultimately.

You know, Alan, you talk about 50 years ahead, or 75 or 100 years ahead: what we will end up with is 80 or 90% of the tasks, the mundane, everyday tasks, being done by robots, and there will be a very high-level specialist group of humans who still actually do that stuff. But most people won't. And I don't know what that means for the rest of the people; I don't know what normal people are going to do.

You know, you look at something like cyberpunk and it's basically, you know, people hanging around, you know, cafes and, you know, they're making food and, and I don't know, getting in trouble.

Honestly, I don't really see what... it's quite a dystopian view, but I just know how things are and I know how technology is.

And unless there's some major cataclysmic event that kind of resets everything, it just feels like it's just going to chip away and chip away and chip away and chip away.

And then before we know it, we are going to be in that position where robots are doing most of this stuff and you've got specialist humans who still do human things, but the rest of us are just, I don't know, on the beach.

Alan King

::

But imagine if these robots are everywhere and they did achieve AGI; then you've got a real problem, haven't you? Because not only would they start doing all the creative stuff as well, because, hey, that's fun and they enjoy doing it.

They'll form the new robot Beatles and become the new robot Van Gogh or whatever. But, you know, maybe they might decide, oh, do we even need the humans here? So, I don't know.

I feel that it's not a good trajectory for society to basically put ourselves in a position where we don't need to do anything anymore.

I feel like human beings do need a purpose, and having something to do is quite important. And if we give that all away... I mean, look, we went through the 50s and 60s and 70s and invented lots of very good, functional things like washing machines and microwaves and dishwashers, and that all helps us speed up tasks. But they're very task-specific, aren't they? They're robots, if you like, that do one very specific thing.

But to have a kind of all-purpose robot that can do anything and everything: in terms of the evolution of our species, I see that as quite a threat, actually. And maybe we all end up like the WALL-E film, just sitting in a chair, staring at a screen, watching endless garbage.

David Brown

::

Well, maybe we can send the robots. Maybe the first good use of a robot would be to send a robot to Mars.

And then you can have a robot that can actually get up and walk around and do the stuff that a human would do outside of a rover. Right.

So that might be the first way that we make contact, or we send them to the moon or something like that, to go and wander around and do stuff and go places that we would go if we could go ourselves. But maybe the best first test would be to send them out there, see what happens, and have them do some of the more risky stuff.

And I guess we would be remiss if we didn't mention it: the elephant in the room, obviously, is combat and war.

You know, that is the very first place those robots will be used, because they will be. And they already are; not humanoid robots, but certainly the robot dogs are out in combat already.

And, you know, they're going to go and they're going to replace humans as much as possible. And we've now got drones and everything. So warfare has completely changed so that we don't need physical people on the ground.

Although I know Ukraine and Russia are doing it the old-fashioned way.

But even still, like, the numbers involved and everything and the number of casualties for what they've been doing are minuscule compared to what it would have been 100 years ago.

So I think that's the other risk, and this is the underlying fear that everyone has, right: that the robots are going to be trained up by soldiers, or they're going to be soldiers, and then there's just no way a human can compete with a robot. There's a show on Netflix called Killer Robots.

If you search for killer robots on Netflix, you'll find it.

And they basically talk about this a little bit, in the context of fighter pilots and in the context of foot soldiers. And with the fighter pilots, spoiler alert, they got an AI to train itself how to fly airplanes.

So in six months it learned how to fly an F-16 to the same level as a combat pilot. And then they had it fly against a real pilot, and it was taking them out in less than two minutes.

Alan King

::

There you go. Six months.

David Brown

::

And the reason, just quickly: the reason was because it had no fear of death.

So it learned to employ a combat strategy that a human would never do because it's too risky, which is flying straight at each other. So combat pilots never fly straight at each other because it's too risky to shoot each other down. So your risk of dying is too high.

So they always attack from the side, the back, whatever. And the robot figured out very quickly that humans don't like to attack from the front.

So it just started attacking from the front and taking them out every time.

Alan King

::

That's interesting.

And you know, in a fighter plane, as soon as you take the human out of the equation, you open up a whole world of possibilities in terms of G forces. The biggest limitation in the plane is the human because you've got to protect them.

And as soon as you take them out, the plane can do things that you could never do with a human inside it. So, well, let's wrap this up. Just one final question, actually, I think, to get us to the end: which comes first?

Because Elon stood there and presented these two things, the robots and the cars, and made promises that it's all just around the corner. I don't think any of us believe it's around the corner. But what gets there first?

Do you think we'll have autonomous robots wandering around our houses first, or, and I'm going to say level five, fully self-driving cars that can take me down the winding lanes of Cornwall?

Benjamin Harvey

::

I think cars. Cars first.

David Brown

::

Cars, 100%. Being that they're 80% of the way there already.

Alan King

::

I think I agree. Although I do quite like the idea that if cars don't get there first, the robot can drive the car for us, thus solving the problem.

David Brown

::

There is, yeah, there is always that option.

Alan King

::

Maybe that's what the super rich will have: a really expensive car that's driven by a really expensive robot.

David Brown

::

But it's the confluence of all these technologies, right? So you've got like, you've got Boston Dynamics, who's working on the physical movement of all the robots and stuff like that.

And then you have Engineered Arts, who are working on facial expressions and really on how the robots can communicate with you more, and look and interact more like a human.

The Ameca body itself is very plain, very kind of feminine, light; it's not big and bulky and heavy. And that's on purpose, because people feel less threatened by that.

And then you've got the AI companies working on the conversation and the LLMs and all that sort of stuff. So they're all still in separate pots at the minute. The question is which company is going to be able to bring all of that together into one unit.

And I think that's the thing: until we see that... It's the same with AGI, in my opinion. We haven't talked about that today, and maybe you'll talk about it on another show later.

But in my mind it's a federated model.

So AGI is just going to be a bunch of smaller AIs that are managed by one AI in the middle who can go off and do tasks, but it'll just seem like one thing.

And until you have the same thing with a robot where you've got the people doing the physical bit, the people doing the emotional bit, the people doing the language bit and all of that comes together, it's never going to happen. So you've got all of that and then you have the regulation on top of it.

And let's not underestimate how long that regulation is going to take before those things are even allowed to be legal in an open environment and insurable.

Alan King

::

I agree. There we are. Ben, any final thoughts?

Benjamin Harvey

::

No, I guess there are lots of things we didn't talk about, you know, companionship and all those things that people might use a robot for in the future, but that is too vast a subject really to cover in the time. Yeah, I just think my main thought at the moment is: what kind of a future do we want?

And whether we can somehow get together and decide that ourselves, rather than leaving it to the big technology companies.

Alan King

::

Yeah, I think it's really important that we find a way over the coming decades to not simply allow the tech billionaires to completely dictate what the future looks like because they read some sci-fi books when they were kids and think it's cool. So there we are, we'll call that a wrap for today. Thank you to those listening to AI Evolution; we will be back.

We're aiming to be monthly, I think, at the moment. We will review the frequency of that and see where we go. And certainly, Dave, you mentioned AGI.

I think that's going to be on an episode fairly soon because I think there's a lot to be said around that as well. But thank you very much.

David Brown

::

Thanks, Alan.

Benjamin Harvey

::

Thanks, Alan. Thanks, David. Cheers.

About the Podcast

AI Evolution
Exploring the Future of Artificial Intelligence


About your hosts


David Brown

A technology entrepreneur with over 25 years' experience in corporate enterprise, working with public sector organisations and startups in the technology, digital media, data analytics, and adtech industries. I am deeply passionate about transforming innovative technology into commercial opportunities, ensuring my customers succeed using innovative, data-driven decision-making tools.

I'm a keen believer that the best way to become successful is to help others be successful. Success is not a zero-sum game; I believe what goes around comes around.

I enjoy seeing success — whether it’s yours or mine — so send me a message if there's anything I can do to help you.

Alan King

Alan King, founder of the AI Network, AI Your Org (aiyourorg.com), and Head of Global Membership Development Strategy at the IMechE, has been fascinated by artificial intelligence (AI) since his teenage years. As an early adopter of AI tools, he has used them to accelerate output and explore their boundaries.

After completing his Master's degree in International Business, King dedicated his early career to working at Hewlett Packard on environmental test systems and Strategic Alliance International, where he managed global campaigns for technology firms, all whilst deepening his knowledge around neural networks and AI systems. Building on this valuable experience, he later joined the IMechE and published "Harnessing the Potential of AI in Organisations", which led to setting up the "AI Your Org" network.

Firmly believing in the transformative power of AI for organizations, King states, “This version of AI at the moment, let’s call it generation one, it's a co-pilot, and it's going to help us do things better, faster, and quicker than ever before.”

Known for his forward-thinking attitude and passion for technology, King says, “We become the editors of the content, and refine and build on what the AI provides us with.” He's excited about the endless potential AI holds for organizations and believes that the integration of human and machine intellect will drive exponential growth and innovation across all industries.

King is eager to see how AI will continue to shape the business landscape, stating, “We are about to enter a period of rapid change, an inflection point like no other.” As AI tools advance, he is confident that their impact on society and organizations will be both transformative and beneficial.