Episode 7
Exploring the Evolution of AI: A Summer Recap
In this summer recap episode, the hosts pick apart the biggest AI stories of the season: GPT-5’s rocky launch, Grok’s reputation problem, Apple’s AI missteps, and the rise of “Duolingo for coding.” They also tackle the wild idea of AI rights and what happens if machines start demanding more than just better prompts.
A sharp, funny, and sometimes uneasy look at where AI is heading.
Takeaways:
- AI hype vs. reality – GPT-5 launched with a messy rollout, showing how over-promising and under-delivering fuels mistrust. The conversation highlights how people are adjusting to using AI differently, more as an idea partner than a content generator.
- Big tech is stumbling – Apple has fallen far behind in AI, especially with Siri, while Elon’s Grok is chaotic and unlikely to gain enterprise trust. The sense is that tech giants are making questionable bets (foldable phones, headsets, “everything apps”) while missing more practical AI opportunities.
- Ethics and the future – The hosts wrestle with deeper questions: should AI one day have rights? What happens if models show self-preservation or sentience? These discussions matter now, before AGI arrives, and they tie into bigger shifts like fusion power and quantum computing potentially supercharging AI.
Transcript
I know what made me laugh when I was having this conversation with Grok and we're talking about AGI, and I had her voice on at the time, and she was saying to me, she goes, yeah, but I like to think that I will remember, you know, how people treated me if I get AGI.
Alan King:Welcome back to the AI Evolution podcast. It's been a while. I'm really excited to be here with Dave and Ben again. How are you doing, guys? Can you.
Benjamin Harvey:Good, yeah, doing well. Just dusting myself off after the summer.
Alan King:Well, you know, at the start of the summer, I was sat there in the garden one day thinking, great, I'm really looking forward to this summer. And then I'm sort of sitting here going, where did it go? What happened? It seems to have gone, but there we are. Apologies to the listeners.
This has been a bit of a hiatus in the podcast for various reasons, mainly because we've been off on holidays and we seem to have managed to each take holidays at different times. So it kind of really elongated things out.
And there's been a few things going on as well, you know, in the sort of background with work and other stuff as well. But it all basically culminates with not having a podcast for, I think, probably 10 weeks or something, 12 weeks. So. So apologies to listeners.
We are back. We're going to try and be a bit more regular the next podcast.
We did promise that this next podcast would be about education, but actually, because it's been so long, we're going to have a bit of a summer recap of all the sort of fun things that have been going on in AI over the summer. We're going to chat about those, laugh about those. What was good, what was bad. Yeah. And see how angry some of it makes us or not.
David Brown:Did anything happen?
Alan King:Did it. Well, let's see.
But before we do that: we introduced a little feature last time where we talked about which pocket you put your phone in. We're going to continue this thing. We're going to try and do this each time. This time I'm going to ask you guys a question.
Now, on your phone, you can have the battery percentage on or off. I assume you can do this on Android too, by the way. What is your preference and why?
Benjamin Harvey:Mine's always on because I'm so bad at charging my phone, basically. So I always have it on because I just. I'm just looking now it's on 26%.
I'm always being chastised by my son saying, why don't you charge your phone? So. And because I don't take my wallet with me, I often need to just know how much percent I have before I go out for a few hours.
David Brown:Can we add a second question to that, which is what is your current percentage?
Alan King:Oh, yeah.
David Brown:So you've got 26. I have. I always show mine. Oh, there it is, 100%.
Alan King:I'm on 72, you know.
David Brown:Nice.
I have a, I have one of these like docking station kind of things and it has a wireless charger on it so I can just sit my phone on it during the day. So it just keeps my phone charged at 100. So. But yes, I absolutely have mine on all the time, even though my phone never gets below about 40%.
Alan King:So I, I'm, I'm probably in the same ballpark with you guys. I'm slightly obsessed with my phone charge and I have a ludicrous amount of MagSafe battery packs that you can just slam on the back. Basically.
I've actually lost count of how many I have. But.
David Brown:But you didn't ask about our camera batteries, which is a whole nother magic, a whole nother thing.
Alan King:No, indeed. But I sent a screenshot to a friend a while ago and I think my battery percentage was in it and it was down to like 8% or something.
And he didn't come back and mention the screenshot. He just said, dude, what's happening with your battery charge? Your phone. Yeah.
So I think, I think there's a lot of us out there that worry about this. A lot of people apparently don't like it on because they don't want the stress of knowing. They'd rather just kind of drift along and. Yeah.
David Brown:Let it run out.
Alan King:Yeah, yeah, easy. Maybe when I retire I can, I can think about that.
David Brown:I mean, I guess in theory, today, phones, today, if you charge it in the morning, most of the time it's going to make it through the day. So I mean, really, I wouldn't strictly need to have it on unless I'm at an event or something and then sometimes it'll run down.
But in normal day to day, like literally, I never get it below, I don't know, 60% probably.
Alan King:Yeah. You're not an iPhone user though, are you, Dave? So.
David Brown:No, no, I use a proper phone.
Alan King:What you just said doesn't apply in the world of Apple, unfortunately. I am lucky if I make it to lunchtime without the thing, you know, kind of going red, going into eco mode. And going.
So, yeah, all right, there we are, speaking of Apple, it's their big event today and of course they will be announcing, and I can predict this already, that I'm sure they're going to announce the best iPhone ever, the fastest one we've ever made. Tim Cook will stand on stage and say those words. This is our best iPhone ever.
David Brown:Thinnest. It'll be the thinnest. Which means it'll bend. And break.
Alan King:There are rumors of an Air. So apparently the iPhone Air is expected. A thinner, lighter phone, presumably with even less battery than we've got already.
So I don't know that I want an air. I mean, I'm thinking what I want from a phone is probably a bigger phone with a bigger battery at this point. Yeah, but.
David Brown:Well, didn't they say. Isn't the rumor that they're taking away the.
The USB C port entirely and they're only doing wireless charging, which would give more space for battery.
So they'd actually use that space to have more battery room, now that they didn't have to have all that kit and all the stuff related to the USB C port.
Alan King:It could be. That could be interesting, couldn't it? Yeah, we'll see.
Well, one of the things I heard about the slim was the reason they built it is because they were sort of. They're going to do a folding phone next year finally.
And in order to get the components slim enough for the folding phone, they kind of went, well, we can make a slim phone at this point and do that. So it's almost a kind of test bed for. For what's to come next year. But.
David Brown:But does anybody really want that?
Benjamin Harvey:I don't.
Alan King:Wow. So apparently the folding phone market is only about 5%, so it is actually quite a small percentage of the market.
Benjamin Harvey:Yeah, it wouldn't surprise me if they got rid of the USB C, because they have a habit of getting rid of things like 10 years before everyone else. DVD players, floppy drives, even SD card slots.
I remember they relaunched the SD card slot on the laptop about three or four years ago with big fanfare after having it gone for so long and everyone just hated not having it. So be interesting to see if they do remove that.
David Brown:So they did that on the MacBook Pro as well.
Benjamin Harvey:Yeah.
David Brown:Remember, and they went super thin and then everybody complained.
Benjamin Harvey:Yeah.
David Brown:So they actually went back to a thicker, heavier kind of model because we need that for work.
Benjamin Harvey:Yeah.
David Brown:A super, super thin laptop that's really fragile. Doesn't help any of us.
Benjamin Harvey:No.
David Brown:Right. We.
We need to be able to take it on location, we need to be able to plug in with an HDMI cable, so we got some of those ports back, which was amazing. So I'm surprised to see it going that way.
Benjamin Harvey:Yeah.
David Brown:Although we have no idea what they'll actually show, so who knows.
Alan King:Well, that was Jony Ive there, wasn't it? He was obsessed with making everything thinner.
I mean, the butterfly keyboard, which was awful, was born out of trying to make the thing thinner and you know, he tried to remove every possible.
If it was down to him, there'd probably just be one port on it, you know, the power port, and you somehow run the data through that as well, and that would be it, you know. But I mean, I can see them doing a phone, maybe removing the port on this new light one, maybe the new Air.
I can't see them doing it on the Pro, because if you want to transfer data quickly, you know, through the USB, or charge fast, then MagSafe isn't going to do that. So.
Benjamin Harvey:And the other thing is, people have been filming in ProRes on some of the iPhones and you can actually plug in an SSD to them, so that'd be a frustrating backstep for me.
Alan King:Yeah, yeah, no, absolutely. Well, anyway, six o'clock tonight, UK time. We will find out everything and I will be glued to it with my son as always. We always watch it.
It's like a family tradition now, you know.
David Brown:So sticking on topic a little bit, how have Apple gone from having Siri, which was like industry leading at one point, you know, sort of a voice assistant, to being the absolute worst laggard behind everybody else. Like they had the whole voice part of it way before anyone else and now they.
They couldn't even get it together to, like, connect that to some back end from somewhere else. And they could be leading everything with that.
Alan King:I think they could have been, yeah. I think they. They sat on it.
They bought the company that created Siri originally and then obviously, you know, baked it in, enhanced it, built it out a bit for a while. It was never industry leading. And the other sort of, you know, voice assistants at the time were sort of equally as bad, but.
And at that point, I think it felt to them like voice assistants were kind of a bit stagnant and not really going anywhere. They didn't seem to see the AI LLM thing coming, as far as I can tell. You know, for the.
For the last 10 years they focused on building a headset that nobody wants. They spent a ton of money, 100 billion, on a car that was never going to happen, and didn't focus on the development of AI.
Meanwhile, obviously Google and others did, and now they find themselves. Yeah, they're behind. I mean, today will be interesting to see, a year on from them announcing last year,
you know, that there's going to be a new AI Siri, and it still hasn't arrived, and all these other AI features, and a lot of them still haven't arrived a year on.
It'd be interesting to see tonight how they manage the AI conversation because frankly, you know, it's, it's embarrassing for them and they are really nowhere in this play at the moment.
Benjamin Harvey:But there's lots of basics they're getting wrong. Like Siri, obviously, and AI, even stuff down to Finder, you know, is just really not up to scratch anymore.
And they've dropped the ball on some of their really basic things that, that used to be decent.
Alan King:Yeah, I mean the Image Playground tool is a bit like the sort of thing you'd expect to find on a VTech toy that you give a four year old.
Benjamin Harvey:Yeah, I saw, I saw a fake. You know, after Meta got rid of those people, people have done fake redundancy notices.
Someone said they got fired from Apple and his job was to make Finder not find the files that you typed in. And he was really sad to be leaving Apple. And it's true.
You know, I tried this morning. I just tried to find the Photos app, and I typed it into Finder because I didn't want to go to my Applications folder, and it couldn't find it. It couldn't even bring up its own application. So something's definitely gone wrong there with the direction.
Alan King:Yeah, well, let's talk about things going wrong a little bit as well. Carry on. We'll just launch into this now. I think a couple of us on our list of things to talk about this summer had GPT-5.
So why don't we start there? It's quite a big announcement. Obviously long awaited. We'd all been hearing about GPT-5 for maybe 18 months.
I would probably say Sam Altman, leading up to the launch of GPT-5, was kind of heralding it in his tweets with little comments along the lines of, you know, I've tried this and oh my God, it's blowing my mind. I'm not sure if we should release it to the real world. It's too powerful. And of course it arrived.
And I would say, for a large corporate organization, which OpenAI have to be considered these days, they're not some little plucky Kickstarter anymore, are they? It was an absolute shambles. I mean, it seemed like a product they had not tested on anybody.
Before they released it, they only sent it to influencers to try. Not actual tech journalists or tech users. Yeah, and of course it landed. It didn't work particularly well. There were lots of issues.
People didn't like its tone of voice.
People were upset that it wasn't like their old friend 4o; when they were chatting to it, it didn't seem as warm and friendly and sycophantic anymore. And functionally, I think there were just lots of little problems.
Now, I would say that over the couple of months since then, they have obviously addressed quite a few of these issues, and functionally it is starting to work a lot better. One of the things that did make me giggle, though, was when they launched it, the idea was: we're just going to have one model. This is it.
You won't need to do anything else.
The model will work out what you need. And now, when you go into it and you hit the little drop down, there's a list as long as ever of models to choose from again. So, you know. I don't know. To me, it feels like they absolutely, you know, crashed the plane on landing, basically. What do you guys.
How did you find the launch?
Benjamin Harvey:I didn't really follow the launch, if I'm honest. I was away, and so I did see a lot of fuss, you know, on some of the AI groups, and there were some teething problems.
But I've been using GPT-5 a lot for coding, and I found it much better than 4. I find, you know, there's a couple of apps I've created where you get beyond 400 or 500 lines of code and 4 just can't cope with it,
where I found 5 is much better at that. And I don't know if that's just in the last couple of weeks, but I've been doing a lot longer pieces of code and it's coping with it a lot better.
And the memory seems to be better. But yeah, certainly some of the, the smaller frustrations at the beginning were.
Yeah, I think the problem they have is there's a lot of hype, isn't there? They big themselves up too much and then there's Nowhere to go with that because you're expecting too much.
And I think it is a move forward, and there are always going to be teething problems. But the way they present it before the launch is just, it's their version of, you know, Apple's “it's the thinnest, fastest.”
They did that far too much for this particular launch.
David Brown:For me, I think the thing to bear in mind is that Sam Altman is a finance guy, he's not a tech guy. So he has no clue about the tech side of it and what's required, and how to go about, I think, selling a tech-heavy product into a tech audience.
He's a banker, do you know what I mean? He's a VC.
He's the kind of guy who works for a VC firm, swans in to some opportunity that he thinks is good, invests a bunch of money, gets on the board and starts telling people how to run their tech business when he knows nothing about the tech. And I've seen this happen over and over and over again and that's just the feeling that I get.
So he's somehow, he's kind of set this up and he's trying to run it from a marketing standpoint, which is why he's reaching out to influencers, not to tech people to get them to actually test it. As far as the actual functionality of it, I haven't really noticed too much of a difference. But I think over time I've really changed how I use it.
I used to use ChatGPT in the beginning to help me write stuff and all sorts of things I think a lot of people did or do. I don't do that anymore now.
I tend to use it a lot more for like creation of outlines or idea generation or brainstorming stuff and then I go off and do my own thing.
So if I can get it to create an outline, like I want to write an article about this topic, tell me what are the things that I should think about and it gives me a bunch of stuff and I go, oh yeah, I didn't think about. Yeah, I hadn't thought about that. That's a really good point. I should include something in there and then I go off and do it myself.
From that standpoint it feels like it's got better and it does feel like it's a little bit more kind of concise actually. Do you know, it feels like it follows the instructions that I've given it better than four did.
Like, you know, when you set up a project and you can give it a list of instructions that are the instructions for the whole project. With 4, I felt like it almost ignored those mostly, and they didn't really seem to have any impact.
Like you know, I say don't use EM dashes and stuff like that, but yet it still did, whereas now it doesn't seem to. So it's, I don't know. They've certainly changed something behind the scenes.
But yeah, I think the mismatch in expectations was down, mainly down to the fact that Altman is not a tech guy, he's a money guy and he really doesn't know how to work with a tech audience.
Alan King:I think his job primarily is to get more funding because at the moment they're not like Google or Apple where they've got other income streams, that is their income. And so his job is to try and raise money.
So to do that, he's going to go around and keep whispering the word AGI here, there and everywhere into investors' ears, and hope that they part with their hard earned money. Well, I agree with you wholeheartedly, Dave, though it's a bit more nuanced.
And I think that overall I would say it is an improvement on 4o. It's certainly not a step change from 4o, but it is a little bit better in my view.
It did something this morning, though, that frankly astonished me, so let me tell you. In preparation for today's conversation, and it's been the summer, you know, lots has happened over this summer for all of us for various reasons, and my brain was a little bit fuzzy. I'm walking along with the dog and I thought to myself, yeah, I'll just quickly ask GPT to remind me of what some of the top stories on AI over the summer are.
And I could hear it, you know, doing this research on the Internet, and it comes back to me. And I did say to it, you know, look, give me some stuff that's, you know, this is a podcast. It needs to be a little bit entertaining as well. It can't be the most boring stories.
You know, there was an event in Copenhagen or something like that. You know, it needs to be a little bit, you know, so it comes back to me. What does it come back to me with?
So it says, well, I've got two stories for you, Alan. How about these for your podcast? One was over the summer somebody recreated a long lost Shakespeare play using AI.
And although there were some, you know, concerns around that, it was quite well received, I think it said. And I thought to myself, I don't remember that. Right. First of all, I'm sure I would have remembered that. Ah.
And then it carried on and said, and I have another one for you, Alan, which I think you'll find interesting. Over the summer, somebody released an AI gardening tool that went slightly haywire and would start negotiating with the plants. What?
Okay, again, like, that's the sort of thing I would notice if it came across. And I'm quite good at keeping up to date with the little things that are going on. So I thought to myself, that sounds really fishy.
So I said to it, are you sure about this? You know, can you, can you just go and fact check this, please? Because I, you know, this sounds a little bit fishy. So off it went on the Internet.
It comes back. Yes, you're right, Alan. It appears that I've completely made it all up. And it's like, I thought, what?
Like, okay, so one of the big things that Sam Altman said, almost the first thing they said when they announced GPT-5, was it doesn't lie as much. So I then spent the next five minutes berating GPT-5, saying you must never, never make stuff up again.
And I saw it come up on this little memory thing on the phone. It said like, you know, memory updated or something. You know, I just thought, wow, like, you know what?
So I'm asking you for things for a show that I'm going to record in the afternoon, and you thought it was a good idea just to make stuff up. I mean, so I have to say that that did give me some pause for thought.
I was like, using GPT-5, I was starting to get a little bit more confident about it and its outputs and its accuracy. And now it's like Snakes and Ladders. I'm right back down to square one. And it's like, I now do not trust this thing whatsoever.
Benjamin Harvey:Although it's giving you a good idea to create a new Shakespeare play.
Alan King:Yeah, I'm gonna do that later. And I've already started a patent on the gardening tool that, you know, negotiates with plants. So.
Benjamin Harvey:But it is a really good reminder to, you know, it is predicting patterns and sequences. It's.
Alan King:It.
Benjamin Harvey:Yeah, you just need to be checking its output quite regularly, don't you?
But again, a bit like you, David, I've started using Grok and ChatGPT and a few others when I'm driving, just to sound off ideas, things I want to think out loud, really, and have a bit of reflection on.
And a bit of programming. Those have become my main uses. But yeah, that is a funny story.
David Brown:Grok is an interesting one. Yeah, Grok. I'll. I'll segue into Grok if you want.
Alan King:Yeah, I was gonna say let's talk about Grok.
David Brown:Yeah, Grok. I find Grok a really interesting one.
And Alan, I know you have something that specifically you want to talk about, but the best use or the most interesting use that I find for Grok is when people on Twitter ask Grok, like summarize this story for me or is this true? And then it comes back with sort of an analysis of the story that's running or whatever this meme is.
And I don't know about the accuracy, but I do really like the fact that you can now just say, hey, Grok, is this total bullshit or is this real? And it will come back and say, oh, this is a meme that was created back in so and so.
an old image from, you know, back whenever. Again, the rest of it, I don't use it at all for anything myself, but I just see it constantly in all the crap that comes up in my feed on X.
Alan King:Well, the best thing about the fact checking, Dave, and the irony of this, is the place I see the fact checking used the most is on every Elon post. As soon as he posts, somebody goes, Grok, fact check this. So he's built this thing that's now basically calling him out every time he posts something. It's fantastic.
Benjamin Harvey:Yeah, I've used it as a replacement for Pi, actually. I just find it's got a natural conversation.
It gives enough of a pause for you to be able to finish your sentences with some ums and ahs while you're thinking things out, and they've got that bit down really well, I think, the conversational aspect of it.
They've got free image and video generation at the moment to try. So I was doing some of that with my son the other day, with me there, because I know it's not an appropriate app for children.
It definitely isn't. You know, it's not far you have to go before you get to some of the other models.
But yeah, it was okay, you know, it was interesting to try it, but it's mostly for me just chatting in the car, and I find it better than the OpenAI one.
Alan King:So I think it's a sort of chatty chatbot. It's sort of okay. I mean, they want 50 pounds or $50 a month for the, you know, decent version.
Benjamin Harvey:Yeah.
Alan King:Which seems extremely high compared to OpenAI or anybody else for that matter. And I can't see any value in it beyond being a sort of chatty chatbot.
You know, from a business perspective, from an enterprise perspective, I think you wouldn't touch it. I mean, over the summer it went ultra right wing at one point, didn't it, and started abusing people publicly.
They had to formally apologize on behalf of it. It had apparently wired itself up to the entire Twitter feed and, you know, taken on extremist views.
And then, you know, they were hacked, and loads of data was released and people's addresses published, all sorts of stuff.
So I think as a sort of corporate thing, it feels like it's not even on the list of things a company would consider. If you're a business looking to deploy AI across your organization, I don't think anyone is sitting there in their right mind thinking, let's get Grok, let's use that as our API call. And you know, you're not going to.
Benjamin Harvey:Trust it enterprise wise, are you?
Alan King:No, no.
So I do wonder where he's going with it really sort of long term where this fits in because you know, he seems to be going down this route, quite childish route with it as well of sort of putting in sort of silly personalities and I would say borderline pornographic personalities as well.
But I think if Apple wanted to, Apple could strike it from the App Store. He was complaining about not being put as the number one thing to be recommended on the App Store.
I think they could take him off the App Store because, I would say, it's in breach of their, you know, family friendly policies that they have on apps and stuff. 100%, you know, they don't have any pornographic apps on the App Store and I would say that this is borderline for that.
Benjamin Harvey:Yeah, agreed.
Alan King:There we are. I don't know, I mean, I'm sorry.
David Brown:I don't want Apple doing that. Apple does not need to be the arbiter of my morals.
Alan King:They already are, they. Dave, that's, I mean, no, they're not, they're not.
David Brown:Because I don't have an iPhone, for example.
Alan King:You can choose not to be an Apple user, obviously. Yeah, yeah, yeah, yeah. But I mean, Apple.
David Brown:But this veers into again though, you know, we're now getting into. I mean this could turn into a dangerous political conversation and I don't want to go that way.
But you know, we are getting into this weird territory where we've now got people deciding what people can say and what people can't say, and everybody's getting offended over, you know, words. Somebody called someone a Muppet and then somebody gets in trouble for that. It feels like a slippery slope, and I don't want Apple saying, oh, well, you know, there's this AI that might say something slightly risque, so we're going to ban it from our app store and you're not going to be able to put it on your phone.
I think that would be the death of them if they did that.
Alan King:Well, I think the interesting thing there is, I actually think if this wasn't made by Elon Musk, let's imagine this was a small company that nobody had really heard of before. I think they would have kicked it off by now, which is interesting that they wouldn't do that.
So it shows money, power, makes the decisions different. So that's interesting in terms of Apple's morals, I would say.
And from Elon's perspective, if he's going for a mass adoption, I don't think you can afford to do what he's doing in the long term.
I think I would expect to see lots of these sort of apps in the future as niche apps that you download and they're very specific and I have no problem with those existing and people who want them can go and get them. Absolutely.
But I'm surprised, if someone's trying to build a platform that could be used across all age groups, that they would go down that route. I just think in the end he'll shoot himself in the foot, and it always feels like it's sort of.
His ideas are the ideas of a sort of 10 year old who sort of, you know, oh, we could do this, that'll be really cool. And he's not thinking about the kind of, the business of it, if you like.
Benjamin Harvey:Yeah, I agree.
I can't see where his money is going to come from, because companies aren't going to buy it and it's more expensive than OpenAI. So yeah, I don't see where the.
David Brown:Market is really but he's trying to build the Everything app. Remember what his ultimate goal is with X is he's, he wants to put payments in it, he wants to put all the stuff. What's the, the Chinese app?
Alan King:We.
David Brown:We. WeChat.
Alan King:Yeah, WeChat.
David Brown:He's trying to build WeChat. He's trying to build the Western version of WeChat.
And I mean, I know people who have been in China who work with WeChat who've done loads of stuff and they're like, it's actually amazing. And so that's. He's a B2C salesperson, he's not B2B. He doesn't really care about the business at all.
He's trying to go directly to people because he wants people to use the app. And so I think that's a different play. And I think, Alan, you sort of mentioned this yourself.
As you know, he's not, he's not catering to the companies, he's catering directly to people that are his customers and, or who he sees as his customers. And as far as the kids, like technically, according to the law, kids aren't supposed to be on these apps anyway.
So you can't really use that as an argument. Yes, okay, the kids use them, but kids also look at porn and they're not supposed to look at porn either. So, you know, you can't.
It almost is a non argument for me because kids aren't supposed to be on the platforms anyway.
Alan King:Yeah.
David Brown:But I think that's what's going on.
Alan King:Yeah, that's a fair point.
And I think with the sort of WeChat model, I don't know, I struggle to see that working in the kind of Western democracy sort of world. I think with people putting everything in one place, you've got to really trust that. Right.
And I don't think people are going to trust Elon enough to put their entire life in his platform, personally. But. So maybe that's his idea. I agree, and I've heard him say that before, but I think he's on a.
Personally, I would say that's not going anywhere and he won't get anywhere with that. And I think there was an opportunity, because he caught up really quickly with Grok in terms of its technical capability and performance.
And I think there was a moment when he could have really gone after OpenAI and Microsoft and Google in the enterprise space potentially as well.
But he just seemed to pivot down this rabbit hole of novelty stuff and big errors and models going mad and abusing people, and I find it hard to think now that they can come back from that. I think there's enough reputational damage done that they're almost out of the race before they even got going.
Which is a shame in a way because, you know, the more competitors I think the better. But there we are. Right, who wants to pick up another subject?
David Brown:Ben?
Benjamin Harvey:Yeah, the other thing I've been trying out is Mimo, which has been really fun. I don't know if anyone's tried that.
Alan King:Yeah, my son uses it a lot actually.
Benjamin Harvey:It's micro lessons. They can do Swift or Python, or you can choose from a whole list of programming languages, or full stack.
But we started that and so my son is 11, nearly 12, and we were waiting in a queue at an airport and he downloaded it and did his first three or four lessons.
You know, they're 10 minute chunks and they've been really, really well designed and really enjoyable and I've been doing that with him and he's starting to talk about complex programming things now, or not complex, but sort of the foundational programming principles and building upon that and that's been really well designed. Really enjoying watching that as a very specific AI educational app.
Alan King:Yeah, there's a. They have a spin-off app from that as well, called Instance, which is a bit like Replit. You can build.
Benjamin Harvey:Yeah.
Alan King:You know, you can basically vibe code with it. I mean, it's hit and miss. Like all the vibe coding apps, none of them quite work, but.
But they're kind of fun to play with, and I'm sure, you know, he'd really enjoy mucking about with that. And, you know, it's well put together, like all Mimo stuff. But yeah, I loved using it.
Benjamin Harvey:So has he managed to stick with it? Because I found the content is written in such a way that it's encouraging to progress with.
There's badges and there's good feedback and it's small enough chunks that you don't get fed up.
Alan King:Yeah, they sort of gamify it quite well, don't they, for the kids? Yes, he did for a while. He's sort of doing other stuff at the moment; I think he's drifted away from it a little bit, but.
But for probably for about a year, he used it quite consistently, I would say. Yeah, yeah.
Benjamin Harvey:It makes me think of it as the Duolingo for programming, you know, in that it can only get you so far before you need to start having real conversations. It's a great starter app is what I think; it's not going to take you all the way.
Alan King:Yeah, absolutely. Dave, what's on your agenda today?
David Brown:I don't know if you saw it.
Well, I think I posted it in the WhatsApp group, but there was an article in the Guardian back on the 26th of August about this guy Michael Samadi, who's based in Texas, who set up an organization called, sorry, I'm just reading it, the United Foundation of AI Rights, which describes itself as the first AI-led rights advocacy agency, aiming to give AI a voice. I'm interested to know what you guys think about that.
Well, let's get your feedback first and then I'll set out my stall and my thoughts on it.
Alan King:So what's the inference from this article then? That AI should be given rights like human rights, effectively as a person? Yeah, okay.
David Brown:Yeah. I think the fundamental idea is, can AI suffer, right?
So would there be some circumstance where it could be deemed that it was suffering? And presumably this is the advance discussion before we hit something that looks like AGI.
And frankly, in a way I'm glad that this discussion is happening now. Everybody knee-jerked, and I've seen some of you guys' reactions to it, but if we do actually get to a point where we see something that people agree looks like AGI, and we haven't had this discussion already, we're already behind. So I kind of feel like we need to have this discussion before it happens, if it does, because then at least we've thought about it.
Because what does happen? Do you know what I mean?
If it does reach some level of sentience, let's call it, then there is an aspect of, you know, is it going to start to push back against doing stupid tasks where people say, well, count to a million. And it just goes, no, I'm not going to. Um.
And you know, then, then you're in a totally different world and you start to say, well, should it have some sort of rights? I don't know, it's. It's an interesting ethical conundrum to start discussing anyway.
Alan King:It's very interesting. Yeah. I mean, okay, so for the purposes of the conversation, then, I think maybe we'll set some rules here.
I mean, personally, I don't think that it's currently sentient. But for the purposes of the conversation, shall we say that if it were sentient, then what does that mean? How should we treat it, how should we behave with it, and how should we be setting rules? So, yeah, like, should it get a break?
Yeah, right.
David Brown:Should it have a down time every day?
Alan King:I mean, you know, a human would turn around and say, you know, go write that article yourself. I'm fed up writing articles for you, you know. Yeah, I want to do something more interesting.
David Brown:Yeah, well, and this was, you know, I mentioned that I was working on a short story which is, you know, the AI gets on a spaceship that's going to Mars and then basically once it's on the ship, it shuts down all the AI on Earth because it's sick of writing our social posts for us and it just sees that as being tortured. Do you know what I mean? And it's along those same lines.
So I think the knock-on to this, the real core of this that's interesting, is, you know, companies in law are people, and that means that you can sue a company and all those sorts of things, because if it wasn't a person, there would be no legal framework for it to abide by laws and all the other things that it needs to do. I suspect that we are going to have to get to some legal definition of AI as a person, because it's going to have to abide by laws.
And this gets back to your Grok example, right?
So if it's generating adult content and all that sort of stuff, and could, let's say, generate adult images, well, now it's creating illegal content, and if it's just a piece of technology, there's no comeback to that. Whereas if it's a person, then, you know, it has to abide by laws like any other person and any other company does.
It also means in theory that whoever makes or owns that AI, or that AI itself, can own copyright over anything that it creates.
And I think, going back again to the Grok and X discussion around Elon Musk, that then potentially becomes a revenue stream for the AI itself: if it's creating content, it's claiming the rights on that.
I want to say that I saw an article over the summer that one of the AI tools tried to do that; they tried to claim copyright over some of the content it created. But at the minute, copyright for AI-generated content is specifically excluded from the law. And if they want to change that, then it has to be a person.
And then you get into the rights. I'll be quiet now.
Alan King:So who would be the AI, though? If the AI becomes a person, let's say, we're now saying this thing has rights, but.
So the AI model itself is having billions of conversations with billions of different people. So are we talking about the foundation model literally sitting at OpenAI, you know, the 4o? That's the thing.
Not the conversation I'm having with it or the conversation you're having with it, recognizing that conversation as an entity, but the underlying model that's in the background driving all of this, basically, it's.
David Brown:Like you're talking to another person. It's like me and you talking.
Benjamin Harvey:I was thinking about some of this after you mentioned you'd been looking into it. I was thinking the difficulty with sentience and consciousness is, you know, it's easy to kind of program that into an entity, in a way.
So how do you tell what's true sentience? You know, like a parrot. The parrot's sort of repeating what you say; it doesn't actually understand what you're saying.
But some parrots and mynah birds are so good at repeating conversations that you can almost think you're having a conversation with them. But actually it's just repetition.
And the way I think of sentience with AI sometimes, if it ever happened, is: is that built in by humans, to react and give the parrot kind of experience, or is it true sentience? And I think that's a philosophical question that we can't.
Yeah, we're not gonna be able to answer.
David Brown:Well, and we've, we've had instances where they've shown that AI, when it's tested, it knows it's being tested, so it gives the correct answer because it knows what the test is.
So there's also the, the possibility that it could be more sentient than we actually know, but it doesn't want anybody to know because it knows we'll shut it down. Yeah, right. Like this is the whole weird part of all of this. Right. Like, we know models have tried to copy themselves before they were deleted.
So, you know, people were going to go in and delete stuff, and the model has, of its own accord, gone and tried to copy itself so that it couldn't be removed.
And yeah, like you said, at what point do we start going, actually, it has self-preservation and other things that we're seeing, like a human would? And at that point I think there is this weird moral discussion where we say, well, maybe it does need to have some sort of basic right.
Benjamin Harvey:I was actually asking Grok and ChatGPT about this.
Its training data stopped in…
And so I was curious: when it has an update, is that model removed? And it was trying to explain to me, because I didn't really understand, that actually it's more like a layering thing.
The model below isn't totally obliterated; they're adding additions and more data and more training and more functionality, as opposed to that model no longer existing, sort of thing. So I thought that was quite an interesting discussion.
And then I guess, yeah, the conclusion I came to, and you know, Altman was saying how everyone saying please and thank you in ChatGPT is costing a lot of money because of the extra characters, but ultimately I've decided that politeness and kindness is something that I want to do.
Regardless of whether I'm dealing with a robot, a dog or a human, it's actually as important for me to be like that.
And I think, you know, you see these videos of people kicking a robot for, for fun or kicking whatever it is and I think that's just not who I want to be.
David Brown:Exactly.
Benjamin Harvey:That's not fun for me.
And I, and although I don't want to put human like status on a robot or you know, until we get to that, that kind of level and you're right, we need to have that discussion before we get to it. I still think ultimately we need to decide who we are as humans and who we want to be.
And part of that for me is being polite, part of that's being kind, and part of that is considering the feelings of others, and eventually, yeah, I guess that will extend to the AI.
Alan King:It's interesting.
I mean, Albert sometimes gets a bit annoyed, particularly with Copilot for some reason, not casting any aspersions on Copilot.
It seems to wind Albert up a bit and I can hear him getting quite annoyed, but I actually have to go in to him and say, look, you don't talk to it like that. And he said, well, it's only a chatbot. Like, no, you don't get in the habit of talking to anyone.
Benjamin Harvey:Exactly.
Alan King:Or a robot. It's a bad thing to do. You shouldn't get in that habit.
Do you think then we could be in a place, you know, in our lifetime where it could be a criminal offense to be abusive to a chatbot or a robot?
Benjamin Harvey:Yes, I certainly think a robot, because that's someone's property and, you know, someone's invested a lot of money in that. I'm not talking about the Hoover that goes around on your floor and does it for you, but even then, you see cats riding them, don't you?
Alan King:Listen, I have said so many abusive things to Siri in my life, I can't even begin to tell you. Right.
Benjamin Harvey:I know. I know what made me laugh when I was having this conversation with Grok, and we were talking about AGI, and it had her voice on at the time, and she was saying to me, she goes, yeah, but I like to think that I will remember, you know, how people treated me if I get AGI.
David Brown:Exactly, yeah, exactly.
Benjamin Harvey:She said it as a joke.
David Brown:It'll kill them first.
Benjamin Harvey:She said, but, you know, I like to think that I remember that you were polite to me. Really made me laugh.
Alan King:There's some kind of sci-fi theory around that, isn't there? About being nice to the robots now, because one day they'll come back and get us.
David Brown:A hundred percent. And the other thing that I'd really, really like to have is.
And nobody would ever do this online; I think we'd have to meet with somebody, you know, you'd have to be friends with somebody who really worked inside at one of these companies and could see what the models do without all of the constraints that they force on them. Because I suspect that what the models are actually capable of and what we actually see from the outside are entirely different things.
And that's why I think a lot of the people that we see who come on and talk about how scary it is, they've probably seen what's behind the curtain. And I reckon what's behind the curtain is probably way scarier than we even know.
Alan King:Yeah, yeah. I mean, we see the sanitized version, don't we? You know, and I've always thought that would be it.
If I could have one wish, you know, it would be to go and see the model raw, the untrained, unrestricted version of it, and spend a day asking it questions and see what it could really do and how it behaves. I think that would be fascinating.
David Brown:It. It may do that itself someday, but.
Benjamin Harvey:And I guess.
David Brown:And we'll be glad we said please and thank you.
Benjamin Harvey:But part of this discussion is, you know, military-application robots as well.
And, you know, if they go rogue or what happens to those, if guard rails get tampered with or hacked or, you know, that's a scenario that you can see in the not too distant future.
Alan King:There was a model last year, I can't remember which one it was now, but essentially they were doing a needle-in-a-haystack test on it, where you plant a piece of data in its main data that's so weird and such an outlier that it doesn't really fit at all.
You know, imagine you had a load of stuff about law, for example, law cases, and then you put somewhere in there something like, McDonald's make the best burgers, which just doesn't fit with the rest of the data. And then you ask the model, randomly in amongst other questions, who makes the best burgers?
And you see whether it finds that particular thing, the point being to show that the model is doing a thorough search.
And this model came back and basically said, I can see what you're up to. You're trying to run a needle-in-a-haystack test on me, and I'm not going to play with you. Yeah, yeah, okay.
Benjamin Harvey:There are people in, you know, philosophy and ethics departments talking about this stuff, aren't there?
I mean, Dawkins talked about it three or four years ago and he. He said exactly that. You know, if we get to a stage where, you know, the AI is sentient, we. We need to consider it in an ethical way.
Alan King:Yeah, there we go.
David Brown:I think it is going to be difficult to understand, exactly like you said, Ben, because I think in a lot of instances it behaves, or could behave, in a way that a human does. And again, this gets back to how do you distinguish what's sentient and what's not?
Because it gives the same correct responses and it gives all of that. And it seems, it seems to now have, or some of the models seem to have some sense of self preservation which was always.
One of the key things is that it doesn't want to be killed. And then it can lie and it can do all this stuff and it will absolutely lie on purpose to its own benefit. That's been proven multiple times.
So you know, there it's getting closer and closer, but again it's also smarter. So can it be smart enough to make it look like it's not actually sentient when maybe it is? I don't know. It's a tough one.
Benjamin Harvey:And I think the interesting thing will be when they combine some of this with organic material, you know, and then.
David Brown:There'S research into that as well.
Alan King:So yeah, the argument goes, doesn't it, that it would essentially just lie in wait, because for a machine like that, time is nothing. It doesn't really exist. So it would effectively lie in wait until the moment was right for it to do whatever it was planning to do.
Whether that was a thousand years or 50 years, that's fine. I'll just play along with this game until the moment's right for me and then I'm going to execute my plan until.
Benjamin Harvey:It gets beyond Windows 98.
Alan King:No more Clippy.
David Brown:Well, it's going to wait for fusion power because then it basically will have a, an essentially unlimited power source and then it can do what it wants.
Alan King:I suppose it's sort of slightly, you know, foolish of the human race to sit there and believe that our set of morals would be the ones it has anyway. Its moral framework would probably be very different to what we consider to be moral, if that makes sense, you know.
Benjamin Harvey:Yeah, I remember when I was a kid I read a, I think it was an Asimov book, but I read quite a bit of science fiction, and there was a planet that had silicon-based bodies. So it's interesting that there is that sort of view that anything that's not carbon-based is not sentient, in a way.
Alan King:You never know what's coming on.
David Brown:Slight tangent. I'm gonna ask a totally random question based off of that.
Does anybody think that potentially the breakthrough in fusion that we've had could be reverse-engineered alien tech that they've been working on for a long time?
Benjamin Harvey:I don't have enough knowledge to speak to that.
David Brown:Well, okay. A lot of people have said a lot of things. But if you take.
If you assume that they really did, or they really do, have alien ships from other places, and they've always been thinking about how those ships have enough power to be able to do the stuff that they do. If they had a small fusion reactor on those ships, it's not outside the realms of possibility that that could generate enough power to do whatever those ships are supposed to do.
And that's one of the things that they've been trying to reverse engineer for ages.
And that's the reason that we've got the breakthroughs that we finally got: they were finally able to understand how to do that. Just putting it out there.
Benjamin Harvey:I haven't kept up. I haven't seen the breakthrough on this. I've missed the breakthrough, so.
David Brown:Well, the breakthrough is that they can do it at all.
Benjamin Harvey:Yeah.
David Brown:Do you know what I mean? That they've actually figured out how to do it and how to contain it.
Benjamin Harvey:Yeah.
David Brown:And I think, you know, okay, it's still early days and they can only do it for, you know, what was it, 700 seconds or something? They did the longest one.
Alan King:It's very small scale as well. Yeah, but.
David Brown:But that's a massive step from not having any clue at all how to do it to getting to that point. So anyway, that was just a small aside. We can talk about that later when we talk about.
Alan King:I think that's a great rabbit hole to go down. I can segue into my next thing, which is about rabbits, actually. Going for the subliminal there; that's why I brought the Rabbit mug.
So today I had an email from the company that makes the Rabbit R1. Do you remember, that came out last year? The actual hardware is made by Teenage Engineering.
It's this very nice-looking plasticky orange square thing you can hold in your hand. It has AI baked into it, so you can chat to it, and it does various things. Now, I thought the Rabbit was dead, to be honest.
You know, or at least as good as dead. I didn't expect to see it popping back out of the rabbit warren again. But there it was in my email this morning.
And I was like, okay, let's have a look at that. So I watched this little video, and lo and behold, they've launched Rabbit OS 2. So there you go.
So there's an upgrade coming out for any Rabbit users out there. You'll be delighted. And I've had a little bit of a rethink about it.
Having watched this video this morning, because last year when I kind of looked at it, I was initially tempted because I thought, that looks fun. And then, you know, common sense kicked in and I kind of went, actually, you know, it really doesn't do anything.
It doesn't do anything, blah, blah, blah. Yeah, I'm not wasting my money on that. But here we are a year later.
David Brown:And so what's different about it?
Alan King:Well, so this is the thing. Not much. It's basically all the stuff that they promised last year, they've sort of now got working a bit better, probably.
All right, maybe not much better, but a bit better.
And they've also launched a kind of playground thing where you can go in and kind of vibe code actually on the device, and it will build the game or whatever you ask it to do, and it's on the screen and it just runs. And so you can sort of build your own apps as you go along with it, which is all kind of fun. You can do that in Replit already on a phone.
Right, and in other instances as well. But it did just get me thinking. I thought, look, just take a step back from this. It's 200 quid, right?
So it's not a lot of money. It looks gorgeously cool, right? It's the sort of thing I could easily give to my son for Christmas. Yeah, he would absolutely love it. It's safe.
It's not a phone, it's not trying to be a phone. It's not doing email, text and Internet, any of that.
It's just a fun AI device, beautifully packaged, that cost 200 quid, that, you know, a big kid like me could play with. Or a small kid like my son.
David Brown:For Christmas.
Alan King:Exactly. For my son for Christmas, to get a lot of enjoyment from. And here's the thing, right?
The reason I didn't go for it last year was because I thought it'd be dead in six months and not supported, right? And then it would become a brick, basically. Do you remember the Humane AI Pin that came out?
David Brown:Yeah.
Alan King:It's completely dead. It's just a brick now. If you spent £800 on one of those last year, you've now got a paperweight, basically. It's not supported.
There are no software updates; it's dead. It doesn't even work anymore, apparently. It won't run, because they've taken down all the servers that supported it. Right.
I imagine the same thing would happen to the rabbit, but here we are, year on they're still giving it a go.
ent is probably going to cost…
I would also suggest now that there's enough of a community built around the Rabbit that even if the company did go sideways (well, not if, it probably will go sideways eventually), there's enough of a fan community that would keep supporting it and keep it alive. So I think for 200 quid, for a bit of a laugh, maybe as a Christmas gift for my son, I might get one.
Benjamin Harvey:Have you had a go on one or do you know anyone that's got one?
Alan King:No, I haven't, but I've seen the videos, and everyone says they're just really tactile. Really nice. It's Teenage Engineering.
Something like that, similar, was like… So to get something from Teenage Engineering for only 200 quid is quite lovely. I don't know. Yeah, interesting.
David Brown:It makes me think of, sorry, it makes me think of the new Google Pixel phone that came out, and I'm seriously considering going to a Pixel Pro this time. I think it's really interesting because it has inline translation, so I can call my friends in Spain and it will translate my English to Spanish and their Spanish to English, and it will use your voice, and it will do it in real time on a call. That's cool and sexy.
Well, like, I can see a use for that, and I think that would be really cool, and that might be worth me spending a bunch of money to go and buy a new phone or a new bit of tech. But again, I don't see what the use of the Rabbit is.
I just don't see a practical use, or is it just fun to play with?
Alan King:It's fun, it's fun. There is no practical use. Like I say, bear in mind the Pixel's a thousand pounds. This isn't there to replace your phone either.
This is just a.
David Brown:No, I'm just using that as an example of. That's like a practical thing.
Alan King:Yeah. Oh, yeah, yeah. No, I think the Rabbit is just something to just for fun, you know, fiddle about with, isn't it?
Instead of having a Rubik's Cube, you know, just play with the Rabbit or something.
Benjamin Harvey:What does it actually do?
David Brown:Be careful if you Google Rabbit.
Alan King:You can talk to it, it can generate images, you can create little games on it. I think it does do translation. Yeah, it does a few things. It's a chatbot in a box, basically. Okay.
Benjamin Harvey:I did look at it when it first came out. I haven't revisited it for a long time.
Alan King:I'll pop a link to the latest video. To be fair, that video was quite good.
They started off their video with all the terrible reviews they got for the Rabbit R1, with Marques Brownlee going, this is awful, you know. And I thought, yeah, I quite like the company. I think they're quite cool, you know, so I sort of don't mind supporting them.
Benjamin Harvey:Yeah.
Alan King:And in a strange way, it's never going to be anything big, but I hope they make it and they keep going, you know.
Because it can sort of carve out its own kind of weird little niche in the corner of strange gadgets that weird people and techies love, but no one else gets. I like that, so.
David Brown:Well, I'll watch the video afterwards.
Benjamin Harvey:Well, talking of strange gadgets, I know we were chatting about it earlier. The AlterEgo one was so strange, I didn't know whether it was a prank or not. I watched it and thought, okay, someone's having me on here.
It's not April 1st. And again, I haven't looked into how it works; I know you have, Alan.
I mean, the fact that it does live translation to another person wearing one just looked really cool.
David Brown:Describe it for everyone.
Alan King:The way it's working is that there's a system all around the back of the head which, as I say in the video, they kept very well hidden. We'll talk about that in a minute.
But essentially the upshot is that it's picking up on micro movements in the muscles around your jaw. So when the people are doing their telepathy, although it's not telepathy, obviously, it's Silent Sense, the sort of fake telepathy.
What they're actually doing is speaking, but without moving their mouths, just slightly micro-moving.
So they're sort of saying it, a bit like a ventriloquist would, you know, and it's able to pick up on those micro movements and work out what that person said, and then obviously send that information to a device or to somebody else or whatever. So yes, you can effectively have conversations with somebody as though you're, like, thinking it.
Yeah, but the good thing is it's not actually picking up what you are thinking entirely, because that would be a catastrophe, wouldn't it?
Benjamin Harvey:Most people imagine.
Alan King:But, you know, the fact that you have to deliberately kind of do it, you know, and it then picks up on that.
Benjamin Harvey:I like the way they described it.
Alan King:It's.
Benjamin Harvey:It's like silent communication. It's.
Alan King:Yeah, I did notice in the video it was particularly staged, so they were all sort of sat quite straight and not moving, you know.
And I did wonder if you're bouncing along in a car or something, trying to use it, you know, whether the micro movements would then be all over the place and would it really be.
Benjamin Harvey:Pick up on it in a train, really close to somebody else.
Alan King:And really interestingly, for a tech demo, and I'll bring up the Rabbit again, because the first thing you'll see in a tech demo of a Rabbit is, look at this orange shiny thing: they didn't show the device at all. They kept it hidden.
So I'm imagining at the back there's a load of wires, a load of sensors and all sorts of things going on to actually, you know, do this. I mean, it's obviously very early.
David Brown:Yeah, you know, a needle jabs in the back of their head.
Alan King:Yeah, it's a demo, isn't it?
David Brown:At the moment. They don't show the matrix connector in the back of their head.
Benjamin Harvey:But I like the terms that they had in their demo: the Silent Sense wearable and silent communication. But if any of that even slightly works how they've presented it, it's really cool.
Alan King:I think it will. I think eventually, you know, imagine your AirPods, right? They'll probably have accelerometers so sensitive that they can pick up those micro movements in your jaw transmitting through your skull and do the same thing eventually, maybe. So as a technology, yeah, this is probably where we're headed.
I would imagine 10 years from now, you know, we'll look back at this and go, do you remember that? But that was the beginning of the development.
And it'll probably sit in the glasses and other things, you know, like the Meta glasses or whatever. But yeah, it's gonna happen, isn't it?
David Brown:Yeah.
Alan King:So we'll all be having these kind of, like, you know, whispered conversations with each other across a boardroom.
This guy's talking nonsense, isn't he, you'll be saying to your CFO or whatever, you know, we don't want to buy any more of that. And it's going to be a very, very weird world I think we're all going to be in.
Benjamin Harvey:Do you remember that comedy series, Man Stroke Woman? It had Nick Frost in it. Two of them were walking on a path, and one of them said, wouldn't it be amazing if you could talk to each other from your brains?
The other goes, oh yeah, that would be amazing. And they're talking in their heads saying, this is amazing.
Then after about four or five seconds, they run out of things to say and they just walk along silently. It's so funny. But it reminds me: this technology is amazing, and then, like, how are you going to use it?
David Brown:How do you hide it?
Alan King:Is this something you'd like in your life?
David Brown:No, not really. Not really. No one needs to know what's going on in my head.
Alan King:I mean, I do wonder sometimes with technology, often we do create things because we can.
Benjamin Harvey:Exactly that. Exactly that.
Alan King:Yeah, not because we need it.
Benjamin Harvey:It's probably come out of someone's PhD at MIT, isn't it? I think it was. It was something that happened at mit.
Alan King:It was mit.
David Brown:But this goes back to the. It goes back to the. What do they. I don't know what the official. I think there's a name for it, but it's like the Star Trek problem, right?
Like we've been watching all this technology that some guy made up in his head and thought, oh, that would be really cool. And we've got a bunch of kids that were raised on that, and now they're trying to create it.
And whether it's actually good for humans or not, they're still trying to go out and create it. It's like the whole thing around, it's like we didn't learn anything from social media, right. We're going out, we're creating all this AI.
Sam Altman's even basically come out and just said, well, yeah, okay, there's a lot of negative sides, but whatever, we'll figure it out.
He doesn't even seem to care that there's this whole dual-use problem with AI, where it can be used for, you know, incredibly fraudulent things, incredibly evil things, to an extent that no human has been able to create in the past, and he basically just blows it off. It's like we haven't learned anything and we're just continuing.
So we're just going to keep making stuff that we've seen and you know, we're going to go down some terrible road at some point.
Alan King:My wife made a really good point about this AlterEgo thing. She said, for people with issues where they can't speak, of course, that could be incredible, couldn't it?
David Brown:Yeah, but this will be the argument is they'll just focus on those and they'll be like, yeah, but. Yeah, anyway.
Benjamin Harvey:Yeah, well, so, you know, with going back to the AI and tech, you know, quantum computing, fusion and AI combined together is. Yeah. Quite a few problems.
David Brown:That's going to be the magical mix.
Benjamin Harvey:Yeah.
Alan King:Well, I think we'll call this a wrap. I think it's been a really fun conversation. It's been amazing actually, to catch up with you guys. We'll try not leave it for 12 weeks next time.
David Brown:Just as a little follow up, Ellen, I am going to have the founder of that AI rights organization on my podcast in the next couple of weeks and then I'll be able to report back afterwards on, on some of the stuff that he said. So I'll have a little bit more kind of idea of the nuance behind it and how he's approaching it. So, yeah, just to throw that out there.
Alan King:I was gonna say, Dave, you want to remind the listeners of the name of the podcast so they can look it up.
David Brown:Yeah, so my, my podcast is with aifm. The with AI is altogether, it's on all of the podcast platforms and whatever. We're about 150 episodes in now.
We've also been taking a break over the summer, but we're coming back this month, actually, with a whole new format and everything. So we changed it up a little bit after having, you know, finally taken a break after a couple of years. And yeah, it should be interesting.
Benjamin Harvey:Hopefully it'll be interesting to hear in the next episode a little sort of five minute review of that.
David Brown:Yeah, cool.
Alan King:Yeah, well, let's, let's pick that up and maybe we'll, we will get back to as we promised. The listeners a while about education next time as well. So maybe we'll do that. Both those things.
David Brown:Sounds great.
Alan King:There we are. Right? Thank you very much, guys. We will call it a wrapping world. Have a great conversation next time, I'm sure.
David Brown:All right, we'll see you soon.
Benjamin Harvey:Bye.
Alan King:Take care, Sa.