Episode 6

When AI is Speaking, Do We Deserve to Know?

Should you always be told if you’re talking to an AI? This episode delves into the recent scandal surrounding an Australian radio station's use of an AI host without proper disclosure. David Brown, Ben Harvey, and the team explore the blurred lines between function and deception, asking where we should draw the line on transparency.

They debate AI’s growing role in areas like healthcare, law, and media, questioning whether humans will remain the trusted source in high-stakes situations. The episode also tackles a controversial topic: Is AI-generated content really worse than Photoshopped images we’ve accepted for decades?

Packed with thoughtful discussion, practical examples, and a few laughs along the way, this episode offers a grounded look at the ethical grey areas in our AI-driven world.

Top 3 Takeaways:

  1. AI Disclosure Depends on Context: The hosts debated whether AI-generated content should be disclosed. David argued disclosure isn’t always necessary, especially for functional tasks like booking appointments. Others felt that in contexts involving trust, such as news, politics, or medicine, disclosure is vital to prevent misinformation and maintain credibility.
  2. AI’s Future Role in Expertise and Decision-Making: The conversation explored how AI could surpass human capabilities in fields like medicine, law, and journalism. In 2050, people might trust AI more than humans for diagnoses, legal defence, and even judgments, given AI’s superior access to data and consistency.
  3. Image Manipulation and the Double Standard: The group highlighted a gap in public scrutiny: AI-generated images must be flagged, but heavily edited photos from traditional tools like Photoshop are often not. They questioned whether the real issue is how something was altered or why, arguing for consistent transparency standards.
Transcript
Speaker A:

If the humanity of the presenter matters to the meaning of the content, disclosure matters.

Speaker A:

If not, it's optional.

Speaker B:

Well, hello and welcome to AI Evolution, the podcast where we look at what's happening in the AI world today.

Speaker B:

But try and think about what it means for the future and where this is all headed.

Speaker B:

As usual with me, we've got Dave Brown and Ben Harvey.

Speaker B:

Hello, guys.

Speaker A:

Morning.

Speaker B:

How you all doing?

Speaker B:

And it's an.

Speaker B:

I don't know about you guys, but it's a nice sunny day here in Wales as well, so I'm feeling pretty chipper and I'm quite excited to cover the subject we're going to cover today, which is essentially, should we know if AI is talking to us, you know, if it's fake, we might not even realize, but should we be told?

Speaker B:

And Dave, I think, got himself entangled into a lengthy LinkedIn argument during the week, or conversation, shall we say?

Speaker B:

And this, this became the subject we thought, well, let's, let's cover that in the podcast.

Speaker B:

So here we are.

Speaker B:

But before we do that, we'll have our usual little run around the room as the things that are catching our eye this week in AI.

Speaker B:

But before all of that, I just want to throw a sort of new segment in, actually, which is sort of funny little tech ticks, if you like, that we all have.

Speaker B:

So one for me is I have to have my iPhone in my left hand pocket, which is my non dominant hand.

Speaker B:

And I got thinking about this and thought, does everybody do that?

Speaker B:

Is it being in the non-dominant hand a thing?

Speaker B:

Or is that just me being peculiar?

Speaker B:

But if I put my phone in my other pocket, it's like I start freaking out.

Speaker B:

This is, I cannot be in this situation.

Speaker B:

So I wanted to throw it to these guys.

Speaker B:

Well, what do you do?

Speaker B:

Are you as weird as I am or is this a normal thing?

Speaker C:

I'm completely chaotic.

Speaker C:

It just goes wherever, you know, whatever pocket's free: back pocket, left pocket.

Speaker C:

And I'm often searching for it, but I'm left handed.

Speaker C:

So some of the user interface stuff irritates me.

Speaker C:

Because you want to send something, it's always on the other side of the phone.

Speaker C:

If you're holding it with your left hand, you can accidentally press an advert quite regularly, but they're just little frustrations.

Speaker C:

But no, pocket-wise, it can go in any pocket; it just travels with me.

Speaker B:

Ben, because you're such a free spirit, I know you very well, you know, that makes sense to me.

Speaker C:

That's the polite way of saying it.

Speaker A:

That's his creative side coming out.

Speaker C:

His creative side, just constantly looking for it.

Speaker B:

What about you, Dave?

Speaker A:

Interesting.

Speaker B:

Left, right, dominant, non dominant.

Speaker A:

I'm.

Speaker A:

I'm offhand pocket always.

Speaker A:

So I'm with you.

Speaker A:

I.

Speaker A:

And the reason is, is because my dominant hand pocket, which is my right pocket, is where I keep my change, and it's also where I keep my.

Speaker A:

My earbuds as well.

Speaker A:

And so for those two reasons, I guess the main reason is, is that I'm constantly accessing that pocket with my hand.

Speaker A:

And if the phone's in there, it just gets in the way of everything.

Speaker A:

Plus, because I have my earbuds in there, then, you know, that's also another little case that's in there, and it would just be overloaded on one side.

Speaker A:

My left pocket is where I keep my house keys, so I don't need those throughout the day generally.

Speaker A:

So I put the phone in there because it just makes sense.

Speaker A:

And it's also, there's more space in that one.

Speaker A:

The only caveat being is, as you guys know, I ride a motorcycle all the time.

Speaker A:

So the other pocket that I use is my jacket pocket.

Speaker A:

And I have, like a little special pocket that's designed to put a mobile in.

Speaker A:

So if I have my jacket on or a jacket on, then I always keep my phone in my jacket pocket.

Speaker A:

And then if I don't have a jacket, it always goes in my offhand pocket.

Speaker B:

Yeah, there we are, you see.

Speaker B:

So we're all a little bit different.

Speaker B:

I.

Speaker B:

I do keep my Swiss army knife and my AirPods in my other pocket, but I reckon I don't use them as often as my phone.

Speaker B:

So.

Speaker B:

But the weird thing is, when I get my phone out, the first thing I do is move it from my left hand to the other hand.

Speaker A:

Exactly.

Speaker B:

You know, it makes no sense at all.

Speaker B:

But.

Speaker B:

But there we are.

Speaker B:

That's.

Speaker B:

That's what we do.

Speaker A:

And I never put it in my back pocket because I know so many people that have broken phones by sitting on them, and I'm just like, I can't.

Speaker A:

I just.

Speaker A:

I can't.

Speaker A:

And my.

Speaker A:

I know a few people who've had it when they go to the toilet, they'll go to pull their.

Speaker A:

Their trousers down, and then their phone falls out of their pocket and it goes right in.

Speaker A:

And I'm like, no, I'm not doing that.

Speaker A:

So no back pocket for me at all.

Speaker B:

I mean, you know, to me, it's madness.

Speaker B:

I see people on the street and I just want to go up and say to them, what are you doing?

Speaker B:

Like, literally, what are you doing.

Speaker B:

Someone could just pull it out your pocket, it'd be half hanging out.

Speaker C:

That's me.

Speaker A:

Exactly.

Speaker B:

Please come and steal me.

Speaker A:

Oh, yeah, there's that as well.

Speaker A:

Yeah.

Speaker B:

And remember Bendgate, you know, sit on it, you know.

Speaker A:

That's right.

Speaker C:

What's been really annoying for me is I've got my little finger broken from a football injury a couple of weeks back, and if you have jeans on, then you go in to get your phone.

Speaker C:

It really hurts.

Speaker A:

Oh, it hurts.

Speaker C:

So, yeah, I've kind of probably become slightly more left pocket just as a result of that.

Speaker A:

All right.

Speaker B:

Okay.

Speaker B:

So thinking about what's been catching your eye this week, I'll go first here, then I think actually today, and then I think Ben and then I think David, you're going to pick up the last one.

Speaker B:

So the thing that caught my eye this week was Perplexity, who sometimes it can be a bit controversial.

Speaker B:

They've had a lot of accusations about them just basically stealing everyone's information and making stuff up and all that kind of stuff.

Speaker B:

But they launched a voice mode, effectively for the app on the iPhone, I presume the Android as well, but obviously not had a chance to try that.

Speaker B:

And it's rather good.

Speaker B:

I mean, it's good, it's chatty and it works quite well.

Speaker B:

But.

Speaker B:

But the thing is, it's a little bit of an action model as well.

Speaker B:

It can do things.

Speaker B:

If you ask it to play, you know, some music, it will open up a music app and start playing it for you.

Speaker B:

And that's something that GPT doesn't do or any of the others.

Speaker B:

So.

Speaker B:

So it does have a, you know, some.

Speaker B:

Some actual functionality in your phone, and it does the things that you would expect Siri to be able to do.

Speaker B:

But often Siri fails at that. It can put calendar invites in.

Speaker B:

If you've got, you know, the.

Speaker B:

The app on your phone, it can order an Uber for you.

Speaker B:

If you've got Open Table as an app on your phone, it can book a table at a restaurant for you.

Speaker B:

And all just by talking to it through Perplexity.

Speaker B:

So I've been playing about with it, as has my son.

Speaker B:

And yeah, I think we're quite impressed to the extent that I've almost deactivated Siri on my phone because Perplexity does a better job.

Speaker B:

And I've made Perplexity my action button thing.

Speaker B:

So when I push that now, it opens voice mode in Perplexity.

Speaker B:

So it's effectively become my Siri.

Speaker B:

And it does make you think, how on earth have Apple allowed this to happen.

Speaker B:

How have they got to a place where a third party company is able to develop a non native third party app that works better than their native voice assistant?

Speaker B:

I mean, yeah, car crash is all we can say really.

Speaker A:

But maybe the reason that they allow it is because they know that it works better and they have to have some solution, because they tried to do Apple Intelligence or whatever stupid name they came up with and that didn't work.

Speaker A:

Because the bit of that that's surprising for me, Alan, is that they allow it to talk to other apps, because I thought that was something Apple's talked about for ages, security-wise: that one app wasn't able to talk to another. But they've obviously opened up their security to them.

Speaker A:

So to me that says that they've admitted defeat and that they realized that Perplexity's tool works better than Siri so they've just let it happen.

Speaker A:

Now the bigger question is do they just buy Perplexity?

Speaker B:

Well, I think a lot of people are starting to think that maybe they should.

Speaker B:

And the other brilliant part about it is because Apple have shortcuts.

Speaker B:

You can build a shortcut that it runs if you say, hey Perplexity and it opens the voice so suddenly the thing is behaving exactly like Siri.

Speaker B:

It has all the functionality and convenience of Siri but actually works and does things.

Speaker B:

Yeah, I mean, I think, you know, if you're Apple looking at this, or you're an Apple shareholder looking at it, you're thinking, how has this company not been able to deploy their own system, you know, and do it in a way. I mean, when you look at things like DeepSeek and stuff like that, how has it taken Apple so long to get to where they need to be?

Speaker B:

They're the most powerful company in the world financially probably, you know, certainly in the top three.

Speaker B:

They ought to be able to do this.

Speaker B:

And yet they seem to be frozen in time and, you know, two years go by and nothing.

Speaker A:

Tim Cook, two words.

Speaker A:

Tim Cook.

Speaker B:

I think Tim, yeah, I think definitely, you know, he's very much a man who likes to think about supply chain and all that kind of stuff.

Speaker B:

But also I think the people they've had working on Siri, potentially they had.

Speaker B:

Is it John Giannandrea?

Speaker B:

I think it was from Google originally.

Speaker B:

And yeah, they've just not done the job, have they?

Speaker B:

And they've tried to launch stuff last year at WWDC with Apple Intelligence that they've not really been able to actually deliver.

Speaker B:

I did hear a rumor that a lot of the stuff they announced at WWDC, the Siri team hadn't even heard of, weren't aware of.

Speaker A:

Yeah, but Tim's an operations guy, right?

Speaker A:

So like you said, he's like about supply chain and all that sort of stuff.

Speaker A:

They don't have anyone left in Apple.

Speaker A:

I think that's the visionary, and I think now they've done their period with an operations guy in charge, but somebody needs to look at that and they need to find somebody who's a massive visionary, a forward thinker, who now can take Apple on that next creative thing.

Speaker A:

And unfortunately they are way, way late to the game.

Speaker A:

You know, if they had somebody who was thinking about this stuff, they could have been leading.

Speaker A:

But yeah, unfortunately, that's it. They make great hardware and, you know, the MacBook Pro I think is still the absolute single best piece of computer hardware that exists in the world today.

Speaker C:

Great.

Speaker A:

But, but other than that it's just the same thing, you know, they're, they're not being creative, they're not coming out, they're not leading on anything anymore and you know, again the hardware is great and whatever but they need the vision.

Speaker B:

So they've just moved, effectively, almost lock, stock and barrel, the Vision Pro team to come and work on Siri.

Speaker B:

So that sort of speaks volumes a bit, doesn't it?

Speaker B:

Because you've got a product there that's completely broken, it doesn't work and they've suddenly worked out we need to fix this like urgently because actually, guess what, AI is really important and the thing that they were saying was going to be their vision of the future, they've taken their best team off it, you know, so where does that go?

Speaker B:

You know, I mean, so you just.

Speaker B:

Yeah, I think they're all over the place.

Speaker B:

And it's going to be an interesting few years for Apple.

Speaker C:

Yeah, they spent too much on the Vision Pro and the car and not enough on AI and what's actually going to be used?

Speaker B:

100 billion, apparently.

Speaker B:

Vision Pro.

Speaker B:

Right Ben, what's been catching your eye this week?

Speaker C:

Nothing particularly new. Just, you know, we've got a production company and we've been trialing using AI to create animation, and just seeing how far we can push that with an After Effects specialist.

Speaker C:

But we've also just been trying to create digital assets through DALL·E, a little bit through Firefly, and yeah, just trying that.

Speaker C:

And it's been quite a steep learning curve, too.

Speaker C:

We don't want it to create an animation for us.

Speaker C:

We want it to create assets that we can then move.

Speaker C:

So that's been a.

Speaker C:

Yeah, that's been interesting to see how effective that is.

Speaker C:

And actually what we found was using the image generator in Illustrator was the most useful for us, and not actually going through Firefly or some of the others, or even DALL·E, because it was vectors to start with.

Speaker C:

So that's been quite an interesting process.

Speaker C:

Still using the assistant in Grok, I'm enjoying how that chats.

Speaker C:

I know I mentioned that last time and just.

Speaker C:

Yeah, the new model in ChatGPT is great at creating images as well, and particularly text in images.

Speaker B:

Did you know you can develop your own personality for it now?

Speaker C:

Yeah, it's amazing.

Speaker C:

I haven't done that.

Speaker C:

I haven't had the time.

Speaker B:

But yeah, I did customize a Magnum one, actually, which was.

Speaker B:

Yeah, of course, to begin with.

Speaker B:

It didn't want to be Magnum particularly, actually, and I had to sort of say, look, you need to pretend you're Magnum.

Speaker B:

And then it started to work.

Speaker C:

But, yeah, it's funny how quickly the, you know, the quirky ones that they throw in there, the angry one and the unhinged one and the sexy one and all those, you try them all for five minutes and then that's it.

Speaker C:

You know, I haven't gone back to any of that.

Speaker C:

There's no point.

Speaker C:

But the actual main assistant I find quite useful for just chatting through ideas.

Speaker B:

Right, Dave, what's been catching your eye?

Speaker A:

Cool.

Speaker A:

Yeah, we'll get to that one in a minute.

Speaker A:

I do have one that came up this morning which I think is really interesting and I shared it in a group, but it's about fingerprinting in AI text and it was the fact that somebody's worked out that, particularly ChatGPT, I think, is what they were really talking about, adds all sorts of interesting things, like invisible spaces.

Speaker B:

Oh, yes, I know.

Speaker A:

It was using Unicode, and it was on Instagram.

Speaker A:

I'll put a link to it in the show notes if people want to see it.

Speaker A:

Or maybe I can just play it here and then people can see it.

Speaker A:

But I thought it was really interesting.

Speaker A:

The person who did the post basically thought that it wasn't intentional.

Speaker A:

They think that it's something that the AI is doing itself to make it easier for it to break up chunks of information.

Speaker A:

So it's using these invisible Unicode characters to help itself in the models, which I also find interesting.

Speaker A:

That it, if that is the case, that obviously AI is doing stuff behind the scenes that nobody really realises that it's doing.

Speaker B:

But it makes sense.

Speaker B:

I think it makes sense because OpenAI wouldn't be incentivized to do this.

Speaker A:

Yeah, exactly.

Speaker B:

They want people to use it and not be called out for it.

Speaker A:

Well, it's that, but it's also.

Speaker A:

It's extra information that's in there that they need to process.

Speaker A:

So I wouldn't think that they would put extra stuff in, because, you know, every single token takes energy and everything else.

Speaker A:

So adding anything to it, I think would just make it more expensive for them to do so.

Speaker A:

I agree.

Speaker A:

I think the AI's probably done it on its own or worked out, you know, how to do that.

Speaker A:

But the other thing that was quite funny is that again, the whole context of the post was here's a way to remove all of that stuff so that nobody will know that it's AI, right?

Speaker A:

And then it's, you know, at the minute it's a little bit clunky, and you have to go through a few steps to basically put it into some sort of editor where you can actually see all the hidden characters, and then you can go in and you can manually remove them all.

Speaker A:

I'm going to write a little app using ChatGPT to actually go in and remove all of that stuff.

Speaker A:

So once I get to the point that I have some content that I want to use, I can literally just copy paste it into an app and it will return it without all of the hidden characters in it.

Speaker A:

I think that's.

Speaker A:

That'll be a much easier way to do that.
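A minimal Python sketch of what such a cleaner might look like follows. This is a hypothetical illustration, not the app Dave describes; the character list covers the zero-width and exotic-space code points commonly cited as invisible "fingerprints" in AI-generated text:

```python
# Hypothetical sketch: strip invisible Unicode characters from pasted text.

# Zero-width characters often cited as invisible "fingerprints":
INVISIBLE = {
    "\u200b",  # zero-width space
    "\u200c",  # zero-width non-joiner
    "\u200d",  # zero-width joiner
    "\u2060",  # word joiner
    "\ufeff",  # zero-width no-break space / byte order mark
    "\u00ad",  # soft hyphen
}

# Exotic space characters normalised to a plain ASCII space:
ODD_SPACES = {"\u00a0", "\u202f", "\u2009"}

def strip_invisible(text: str) -> str:
    """Return text with zero-width characters removed and odd spaces normalised."""
    out = []
    for ch in text:
        if ch in INVISIBLE:
            continue  # drop zero-width characters entirely
        out.append(" " if ch in ODD_SPACES else ch)
    return "".join(out)
```

Copy-pasting model output through a function like this returns the same visible text with the hidden characters gone.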

Speaker B:

Well, you could.

Speaker B:

So you could build it.

Speaker B:

Maybe you could just build it like a GPT inside ChatGPT and just tell it that, yeah, I'm going to paste in whatever text I paste in.

Speaker B:

Your job is to replicate it exactly, but remove any Unicode spaces or unusual characters.

Speaker A:

See, I don't know if that would work because it probably puts its own Unicode characters in the answer anyway.

Speaker B:

But you could say to it, you must not use Unicode.

Speaker A:

But I think it would still.

Speaker A:

Yeah, maybe.

Speaker A:

I don't know.

Speaker A:

Anyway, it's an interesting little test, but I thought it was really interesting on two points.

Speaker A:

One, that it's doing it, and that maybe that's how a lot of the AI text detection tools work.

Speaker A:

So they're looking for those things in the background.

Speaker A:

And second, that obviously it hasn't taken very long for anyone to come up with a solution to say, hey, this is how it's working and here's how you get around it.

Speaker A:

Yeah, so that's a whole other topic.

Speaker A:

I think AI and copyright detection and all of that.

Speaker A:

Someone had to do it, but it's okay.

Speaker A:

Yeah.

Speaker A:

So anyway, that was interesting.

Speaker A:

So I think, yeah, maybe we'll have a future discussion on that.

Speaker A:

I'm actually talking to someone on another podcast this afternoon, doing an interview with a guy who runs a company.

Speaker A:

And they can.

Speaker A:

Not only can they detect if AI has been used in audio, video or imagery, but they can also pretty much tell which tool did it as well by the digital fingerprint about how it was done, which I find really, really interesting.

Speaker A:

So that's actually.

Speaker B:

This leads perfectly, Dave, into what we're going to talk about, really, doesn't it?

Speaker B:

Because it's whether it's fake or not.

Speaker B:

And do we know?

Speaker C:

Yeah.

Speaker A:

So nice segue.

Speaker A:

Thank you very much.

Speaker A:

People listening may have seen the story that came out, I think it was last week, where a Sydney radio station had an entire show that was AI, so the presenter was AI, and they did not declare that it was an AI show.

Speaker A:

It was only later that apparently some people suspected that it was AI, but no one could quite prove it, but then somebody did.

Speaker A:

And it turned into a massive story.

Speaker A:

It's been all over sort of LinkedIn and social media if you follow those sorts of topics.

Speaker A:

You know, obviously we all follow AI and I personally, I will set out my stall now in the conversation.

Speaker A:

I don't really care.

Speaker A:

I don't think the AI needs to be identified.

Speaker A:

I never understood that.

Speaker A:

And this goes all the way back to a couple of years ago with Google Assistant. Do you remember they did the demo on stage, and they demoed Google Assistant, like, booking a dinner reservation?

Speaker A:

Now, that never materialized with Google in any meaningful way, at least as far as I know.

Speaker A:

But there was a lot of discussion at the time then about, oh, well, should it identify?

Speaker A:

So if you call your favorite restaurant or your favorite salon and you want to book a.

Speaker A:

You want to book a table or you want to book an appointment, if they have an AI tool that can handle that, that can take your call.

Speaker A:

So you never get a busy call, you never get put through to voicemail.

Speaker A:

It always will instantly take your call.

Speaker A:

And it can book your dinner reservation or your hair appointment for you.

Speaker A:

Who cares if it's AI?

Speaker A:

Do I need to know that I'm talking to an AI?

Speaker A:

No, I honestly, I couldn't care less.

Speaker A:

As long as they take the call and I get my appointment or I get my table, I don't care.

Speaker A:

But there are a lot of people out there who absolutely are vehemently arguing that no, if it's AI, you absolutely have to know.

Speaker A:

And I'm not going to talk to an AI because I don't want to and I just don't understand that.

Speaker A:

So I thought that this story was particularly relevant because it's more in the public realm.

Speaker A:

So.

Speaker A:

Yeah, so that's, that's kind of the, that's the article I wanted to talk about and I wanted to get you guys thoughts on it as well.

Speaker C:

I mean, I've got a slightly more pessimistic view on that, I think, I think it depends on what your data is.

Speaker C:

So.

Speaker C:

Totally agree.

Speaker C:

If you've got an assistant making calls for you or, you know, doing daily work for you, then absolutely. You don't want to get into that EU cookies thing, where on every website you have to go through this so, so dull and irritating step.

Speaker C:

And I think if we went down that, that route with AI, that would get really, really irritating really quickly.

Speaker C:

But obviously there's, there's a huge ability for AI to undermine democracy, I think.

Speaker C:

So you've got, Was it those robocalls that had Joe Biden's voice on a couple of years back?

Speaker C:

You know, there's a huge ability to manipulate people on a mass scale now, and it's quite cheap to do. And I think, with that trust and misinformation,

Speaker C:

It is important to have some kind of standard, particularly when it comes to the political sphere, I think, and news sphere.

Speaker C:

I think there is probably some legislation around saying what is AI and what isn't AI?

Speaker C:

But I don't think that's across the board.

Speaker C:

I think for a lot of things, you know, I worry more that AI-generated content will become really repetitive, like on LinkedIn or that kind of daily interaction, as people are pushed for time.

Speaker C:

Do they just, you know, outsource some of their creative thinking?

Speaker C:

And I think that's a worry as well.

Speaker C:

But I think, yeah, across the board, some kind of disclaimer gets very repetitive and dull and in the way of work.

Speaker B:

I think the, yeah, the disclaimer thing, 100%, you know, if every time you see an AI image and there's a plaster across it, this was generated by AI, it's going to get very boring very quickly.

Speaker B:

I mean, imagine you, you're out in the car and every time you drive past a billboard, because let's face it, those sort of images and billboards and things are going to be AI generated if they're not already.

Speaker B:

Would you really.

Speaker B:

Do you really need to see that?

Speaker B:

You know, I suppose that the question becomes, is if the person in the image is a real person, so let's say it was a, you know, famous celebrity, but it was an AI generated image of them and they hadn't given permission for that, then maybe, perhaps it should say it's AI generated.

Speaker B:

But if they have given permission to that, maybe then we don't need to know, because they're effectively saying, I'm happy for this to be a representation of me and this is how the world can see me.

Speaker B:

So there's a lot of sort of things to work out here.

Speaker B:

Society, I think, has to go through a bit of a shift, because at the moment it's all new and it's all novel and we don't know if we like it or not and we're not sure about it.

Speaker B:

And is it real and is it fake?

Speaker B:

You know, makes me think of the TV show Is It Cake, actually.

Speaker B:

Yeah, but is it real?

Speaker B:

Is it fake?

Speaker B:

But I think very quickly, as this stuff becomes increasingly ubiquitous in our life, it's used for everything, everywhere.

Speaker B:

It becomes impractical.

Speaker B:

But when you get to something like a radio station, where perhaps there's a relationship, a classic relationship between the listener and, you know, the station, maybe there is a requirement to say it. But, you know, that's really up for debate now.

Speaker B:

What, what is the sliding scale here?

Speaker B:

Because you might make an argument if you've got a GP or a doctor and you're talking to some kind of online conversation with a doctor and there appears to be a doctor in front of you, but actually it's an avatar and it's generated and it's an AI that sits behind it, I think most people today would probably say, I'd want to know that piece of information, you know, that this isn't actually a real person.

Speaker B:

It may be very capable, but I still would like to know that. Maybe knowing that the DJ is not real is somewhere further down that sliding scale.

Speaker B:

But it might still be on that scale, but it might be that in 20 years time, you know, as a society we've changed enough that it's acceptable for the DJs to be on that scale, but still not the doctors, you know.

Speaker B:

Yeah, and I think, I think there's definitely.

Speaker B:

We kind of have to get used to the idea a little bit as we go along with this stuff, rather than just immediately launching it.

Speaker B:

Without telling anyone.

Speaker C:

There are other radio stations that have done this and just said that we're experimenting with AI and there's been absolutely no backlash, there's been no problem.

Speaker C:

So I think, you know, you're right.

Speaker C:

As this becomes more ubiquitous, people will just get used to it.

Speaker C:

But I do think trust is broken if you don't tell people that a whole hour long or two hour long sort of radio program is generated by AI.

Speaker A:

See what's interesting.

Speaker A:

So I agree. I think things where you have open-ended discussions, if you're using an AI, that becomes important to identify. I guess a little bit of a pushback, or another discussion point on that, is:

Speaker A:

So what's the difference between that and a radio station news team reading news that they got from somewhere else?

Speaker A:

So they all get the AP feed or the Reuters feed or the press association feed or whatever and they all read exactly the same news story that was not written by them and they're just parroting back something that some other reporter wrote somewhere else in the world.

Speaker A:

And you get that across tens of thousands of radio stations or even TV news and newspapers and local papers and everything.

Speaker A:

They just literally take that article verbatim and they just spit it out.

Speaker A:

So do we now need to start declaring that I didn't actually write this article, I'm actually reading this that I got from ap because in my mind it's basically the same difference.

Speaker C:

Yeah, but that kind of news or weather or sports often comes on the hour for three minutes. It's a roundup while you're playing classic music or, you know, it's not a news-generating program like Radio 4 where people are actually sitting there discussing things live.

Speaker C:

So the main purpose of that station often is something else and they're just updating your traffic, and there's no way the economies of scale could work for a small radio station to produce that kind of content themselves.

Speaker C:

So I think that's, I put that in a slightly different category than you know, as you say like an hour long or two hour long discussion that's generated by AI.

Speaker C:

It's a very different sort of proposition to me, I think.

Speaker B:

Well, I think we could get there with that stuff, you know, I mean I, I think, yeah, the weather, the traffic, that sort of stuff seems to be a very obvious thing, doesn't it?

Speaker B:

For it to be, you know, using AI.

Speaker B:

Why wouldn't you?

Speaker B:

Although, you know, if you listen to Radio 2, there's somebody called Sally Traffic, I think, isn't there, who always seems to be as much a part of the show as reading the traffic, and joins in all the discussions.

Speaker B:

So, you know, so maybe you do lose something along the way.

Speaker B:

Maybe the show just becomes slightly duller because you haven't got these characters anymore.

Speaker B:

Although, look, you know, 20 years from now, 30 years from now, maybe you can just train the characters.

Speaker B:

You know, these AIs can become the characters.

Speaker B:

Why can't they?

Speaker B:

And so I, I went in a bit of a kind of thought cycle with this this morning.

Speaker B:

I want to throw this idea at you, okay.

Speaker B:

Because I think it, you know, it sort of looks at where AI evolution goes a bit further ahead.

Speaker B:

Let's say it's 50 years since GPT was launched and we've got superintelligent AIs and they're incredibly good, right?

Speaker B:

And in 50 years' time, most of the time when you go to see a doctor, you are talking to an AI, because that's what everyone does and that's what everyone uses.

Speaker B:

And maybe the warning is no longer on the AIs.

Speaker B:

Maybe now you get a warning.

Speaker B:

If you're going to be dealt with by a human because they're not as competent, then that could be the future.

Speaker B:

Right?

Speaker B:

Maybe that's where we end up with this conversation eventually.

Speaker B:

It's the human warnings.

Speaker B:

We need to say warning.

Speaker B:

This photo was taken by a computer.

Speaker B:

This show is being presented by a human because it's now not as good as the AI.

Speaker B:

And I think that's possible.

Speaker A:

Yeah, it's an interesting wrinkle to it.

Speaker A:

I hadn't gone that far.

Speaker A:

So this is why I wanted to chat with you about it.

Speaker A:

No, that's 100%.

Speaker A:

I think I could definitely see us getting to that point.

Speaker A:

I think some of the strengths of AI, even though a lot of people would argue that the content is kind of samey and a bit boring and it has bias.

Speaker A:

And I don't want to go down the track on bias because everybody talks about bias all the time.

Speaker A:

But my only pushback on that is: humans have bias.

Speaker A:

Every single human has personal bias based on your past, your experiences.

Speaker A:

All three of us have different biases because we were raised in different places by different people, we've had different experiences and all that.

Speaker A:

Everyone has bias.

Speaker A:

So AI having bias is no different than any human having bias.

Speaker A:

So I don't buy the bias argument at all.

Speaker A:

But yeah, yeah.

Speaker B:

Think about the doctor.

Speaker B:

You know, if you go and see your GP, how much information have they got access to, exactly?

Speaker B:

A limited amount of information that they've memorized, plus what they've got in front of them if they can Google it, and the experience that they've had in their time as a doctor, which could be a long period or a short period. But an AI system, say in 50 years' time, a superintelligent system which has access to the world's medical information holistically, its intelligence is way beyond the sum of all humans that have ever lived.

Speaker B:

You know, that's the thing you want to look at your case, isn't it, and make that diagnosis?

Speaker B:

Not.

Speaker A:

Yeah, because it's looking at probability.

Speaker A:

That's all doctors are doing.

Speaker A:

They're looking at your symptoms and they're going, okay, it's probably this, based on the data that I have.

Speaker A:

And the more data they have, the more likely they are to be able to give an accurate, you know, sort of guess at what they think it is.

Speaker A:

I would be 100% comfortable.

Speaker A:

And I know a lot of people will freak out when I say this, but I'd be absolutely 100% comfortable giving all of my medical records into an AI and every time I have a blood test, every time I go and see my doctor for Anything, X rays, CT scan, like, whatever, I would be more than happy to feed all that into an AI and say, is there anything in there that you see that you think you might, you know, that we might need to address?

Speaker A:

Because I suspect that because it has such an enormous data set from around the world, like you said, the sum total of its knowledge and its ability to predict based on all of that data that it's got is going to be way better than any individual GP that I see in Lamberhurst.

Speaker A:

I can tell you it's just a fact.

Speaker A:

And yeah, I would be totally comfortable with that.

Speaker B:

Okay, so it's:

Speaker B:

You've been arrested for a crime you didn't commit.

Speaker B:

Okay, who do you want?

Speaker B:

Do you want the AI lawyer?

Speaker B:

Do you want the human lawyer?

Speaker B:

And more importantly, do you need that to be disclosed to you?

Speaker A:

I want both.

Speaker A:

Yeah, I want both.

Speaker C:

Human lawyer that has expertise in AI.

Speaker A:

Yeah.

Speaker A:

And I want, I want the AI to be able to analyze the data and the evidence and the stuff that they have.

Speaker A:

Right.

Speaker A:

Because I think that forensic evidence becomes more and more important, so you start adding all that information into it and everything else.

Speaker A:

Plus, it's got all of my location information, it's got all the details right.

Speaker A:

Like:

Speaker A:

And I would think that it would be able to pull out if I genuinely was innocent.

Speaker A:

I think it would be able to pull out a pretty convincing argument that I didn't do it based on data.

Speaker A:

But then I also think that you probably would still want the human, because you need the story and the emotional side to come out, though maybe AI will be good at doing that as well.

Speaker A:

But I also feel that that's still a distinctly human thing in that humans might be able to do that more than an AI could.

Speaker A:

But for the data analysis and the whole, the sheer hard fact of what happened in analyzing the data and the information that's available, I would definitely want AI to look at that.

Speaker C:

And also they think a lot of those lower-tier jobs will go to AI in the legal process, don't they?

Speaker A:

Yeah, all the discovery, it'd be amazing for discovery.

Speaker A:

And they're using it for that already.

Speaker A:

Yeah.

Speaker A:

So if you take a discovery process, I mean, you know, you see some of these cases and they have 10 of those file boxes full of documents and, and you know, reports and everything else.

Speaker A:

And now they just scan all of that stuff into AI and then they can just start to query it and ask questions instead of having to read all the stuff themselves.

Speaker A:

And you know, they're finding, they're finding more information and they're able to get more data out of it.

Speaker A:

Because you don't have a, a junior lawyer sitting reading through 10,000 pages of documents.

Speaker B:

Yeah, I mean, I think for paralegals they probably, you know, there's probably quite a concern there, isn't there?

Speaker B:

So, so you end up in court and you've got a choice now.

Speaker B:

You can have the AI judge or the human judge.

Speaker C:

Depends which model it was using.

Speaker B:

The most intelligent version that OpenAI…

Speaker A:

I think it depends on the type of crime.

Speaker B:

Okay.

Speaker A:

That would be my gut instinct, maybe.

Speaker B:

Where you're being tried, because if you're in a country where you felt that the judge could have bias, you know, you articulated, Dave, humans can have a lot of bias, Right?

Speaker A:

Yeah.

Speaker B:

But maybe you'd feel an AI might not have as much bias.

Speaker B:

Maybe it's fairer potentially.

Speaker A:

Yeah, I mean, I generally believe that anyway, even now.

Speaker A:

And I don't really see that changing, again because everything gets sort of averaged out. It's like the empathy test.

Speaker A:

Right.

Speaker A:

They've tested results where they've asked questions to humans and they've asked questions to AI and they've had then humans grade the responses and they didn't tell them what was what they just said, you know, grade the empathy, you know, that's being shown here on a, on a scale like 1 to 10, whatever.

Speaker A:

And, and the AI wins all the time, which totally makes sense because it's pulling responses that it's seen from people around these issues from all over the world and it's kind of, it's munging them together and it is giving you kind of the average response that, that it sees.

Speaker A:

But I, but that's going to be better than any individual person, always.

Speaker B:

I think you're right.

Speaker B:

If you ask an AI a question, and I've noticed over time that you tend to get back a fairly balanced, almost, you could say middle of the road, but kind of, you know, it's not extreme.

Speaker B:

You rarely get an extreme position back, do you?

Speaker B:

But if you ask the same question to 20 people down the pub or whatever.

Speaker A:

Yeah.

Speaker B:

You know, particularly after a couple of beers, you're probably going to get some fairly extreme responses there, aren't you?

Speaker B:

Which you would sort of never get from the AI.

Speaker A:

So is that a positive?

Speaker B:

Well, in terms of being tried by somebody, yeah.

Speaker B:

Because.

Speaker A:

Particularly around political discourse.

Speaker A:

Right, because you mentioned this earlier, but maybe we could do with a little bit of that not so extreme positioning on any given topic.

Speaker A:

Right.

Speaker A:

If you take the bell curve, right, and you drop the ends off on either side and you kind of end up with the, you know, the slightly left or the slightly right centering.

Speaker A:

No matter what country you're in, it doesn't matter.

Speaker A:

You know, you, you end up, and you take those extremes off, right, and you just go, look, we're not even, we're ignoring both sides of the extreme argument and we're just going to take that more centrist kind of answer.

Speaker A:

Maybe that, maybe that's a much better, I don't know, position to be in and a much better way to talk about some of these topics.

Speaker A:

So again, it could be helpful in that, that you're not going to the extreme all the time.

Speaker A:

Maybe that's the answer.

Speaker C:

I guess, interestingly, in human history, what you consider that bell curve shifts, doesn't it?

Speaker C:

So it's not, it's not a.

Speaker A:

Yeah.

Speaker A:

Like the US is almost one standard deviation to the right of the UK. So even your average person in the US, I think, is much more conservative than the average person that you get in the UK, even someone who would be considered dead in the middle.

Speaker A:

So I think where the curve sits on that spectrum is different in every single country.

Speaker A:

But you still want to cut the edges off, I think, within that country, so that maybe, I don't know, maybe it would help bring all of that stuff back to the middle.

Speaker B:

So.

Speaker B:

So, you know, we think about social media over the past decade, depending on what your starting point is, the algorithms and social media have kind of pushed people either further left or further right politically, probably.

Speaker B:

It's probably reasonable to say that then maybe over the next decade or two, if, if indeed, and if it's true, OpenAI are building a social media platform themselves, if it's more middle of the road and people start seeing that narrative all the time, maybe as you say, starts to draw them back into the center a bit more.

Speaker B:

Be interesting to see, wouldn't it?

Speaker B:

Sort of unintended consequences of technology playing out.

Speaker B:

I think the edges are great in terms of creativity and stuff like that.

Speaker B:

I think you need to be on the edges.

Speaker B:

You want.

Speaker B:

If you want to create great music or great art, then you probably need the models to be operating outside of the center of the bell curve.

Speaker B:

Probably middle of the road isn't going to get you there, right.

Speaker B:

You just end up with wallpaper music or, you know, very bland paint-by-numbers.

Speaker B:

But when it comes to being assessed medically, you probably don't want to be in the edges, do you?

Speaker B:

Exactly.

Speaker B:

Let's be in the 99.9%, you know, probability case.

Speaker B:

This is, this is.

Speaker B:

And so, so where would I stand on this?

Speaker B:

Would I want to know?

Speaker B:

I.

Speaker B:

I'd want to know today 100% if I was being seen by a medical AI.

Speaker B:

I think by:

Speaker B:

And I think it would be very competent and I might be very worried if it wasn't an AI doing it.

Speaker B:

I think that's.

Speaker B:

But I think that's the journey as a society.

Speaker B:

We have to go on.

Speaker B:

We at the moment, you know, I think a lot of people are very kind of AI averse, aren't they?

Speaker B:

Like, you know, the robots taking over.

Speaker B:

We don't want that.

Speaker B:

But I think as it becomes used by everybody, people just get used to it.

Speaker B:

You know, it's a bit like, to take a very trivial example of this, when the AirPods first came out from Apple, everyone took the piss out of them for about six months because, you know, they looked ridiculous.

Speaker B:

You know, no one thinks that now.

Speaker A:

They still do.

Speaker B:

Got used to it.

Speaker C:

Used to, you know.

Speaker B:

Yeah.

Speaker B:

I remember my brother vehemently ripping the piss out of me for having these things in my ears, you know, you look like a lunatic.

Speaker B:

What are you doing?

Speaker B:

But very quickly it became normalized, didn't it?

Speaker B:

And I think that's what.

Speaker B:

That's what will happen with the AI stuff is it becomes embedded in everything we do.

Speaker B:

When you start walking into McDonald's and, you know, you're ordering, just doing it with an AI, you know, talking to you or whatever, ordering a mac and cheese or whatever it might be.

Speaker A:

I saw a funny thing about thinking about that and talking to it.

Speaker A:

I saw a really interesting article the other day, and it talked about the fact that the reason that McDonald's uses kiosks now instead of you talking to someone at the counter is because actually people order 30% more when they don't have to talk to somebody than they do when they speak to someone.

Speaker A:

Because it's almost like there's this psychological thing that they don't want to.

Speaker A:

They don't want to be seen.

Speaker A:

Like they're having too much or they're being gluttonous or whatever, and they feel free to just order whatever it is that they want to order because no one has to really see it.

Speaker A:

And I thought that was a really interesting little psychological wrinkle in that.

Speaker A:

So.

Speaker A:

So that might be a case where a store like McDonald's or fast food or something would not want people to talk to something.

Speaker A:

They actually want them to press the buttons.

Speaker A:

Because people actually buy more when they don't.

Speaker A:

They don't feel self-conscious talking to someone. Interesting.

Speaker B:

They'll see the kind of menu of things that are available to them.

Speaker B:

Well, on the screen, which, if you're talking to somebody, you kind of need to know already what it is you want.

Speaker B:

Right.

Speaker B:

But sometimes I go onto the screen and I go, oh, I quite fancy that. Oh, I might have a donut as well, you know.

Speaker B:

Oh, I might have a donut as well, you know.

Speaker A:

Exactly.

Speaker A:

Yeah.

Speaker B:

Right.

Speaker A:

Yeah, 30%.

Speaker C:

I saw the next step of this.

Speaker C:

I was filming in Amsterdam last week, and there was actually a restaurant which was like a big.

Speaker C:

It had just racks of food in little hot sort of pockets.

Speaker A:

Love it.

Speaker A:

I love those.

Speaker A:

In Amsterdam.

Speaker A:

Yeah.

Speaker B:

Oh, my gosh.

Speaker C:

You saw people coming in and choosing a burger.

Speaker C:

Didn't even have to put a PIN in.

Speaker C:

Just tap and go with a cheeseburger.

Speaker A:

I mean, I love it.

Speaker A:

I love those.

Speaker A:

Those things.

Speaker A:

I saw them back in the like early…

Speaker C:

Yeah.

Speaker A:

And yeah, I used to get hot dogs anyway.

Speaker A:

Sorry, that's a total aside but yeah.

Speaker C:

But just on the back of the menu thing that's interesting.

Speaker C:

I wonder if you eat more just because you can.

Speaker A:

Yeah, probably.

Speaker C:

Yeah.

Speaker B:

You must buy more in a supermarket if you know you're going to use the self-checkout, probably.

Speaker A:

Yeah, I bet you do.

Speaker A:

That's an interesting case study.

Speaker A:

They probably have some data on that.

Speaker A:

But that, that would, that would be an interesting correlating data set.

Speaker A:

I guess it never occurred to me.

Speaker B:

This kind of, you know, public shaming thing. So you kind of moderate yourself because you don't want to be seen as gluttonous.

Speaker A:

Yeah, and maybe I.

Speaker A:

Look, I don't, I'm not making any judgments but I would suspect that it might be those people that are maybe slightly overweight or something that are a little bit more self conscious about that and then they don't want to be seen as getting too much food kind of thing.

Speaker A:

So.

Speaker A:

Yeah, interesting.

Speaker A:

So let me ask one other question because I'm also conscious of time.

Speaker A:

We have a hard stop today.

Speaker A:

So if we have to identify AI, does that then mean we now need to take a step backwards and we need to start identifying existing videos and existing imagery that has been manipulated using things like Photoshop or if you think about something like the Google phone, you now can take a picture and then you can just edit people out of the back of the picture or you can add stuff to the picture.

Speaker A:

So now that's been modified.

Speaker A:

So do you now have to declare that any single image or any single video has been modified and to what extent and when do you start doing that?

Speaker A:

Because personally I think a lot of that is actually worse because particularly photo, you know, when you do photo shoots and you do things like magazine covers and stuff, the things that they do with people to modify, they change their face shape, they change their skin, they change everything.

Speaker A:

And it's literally, it's not an accurate representation of what was happening at the time.

Speaker A:

But that's put forward as this is the real image.

Speaker A:

And I think that's been probably worse for society than an AI voice talking.

Speaker A:

So I wonder if you start getting into disclosure that you've now opened Pandora's box because you now have to disclose everything.

Speaker B:

So is it a context thing?

Speaker B:

Is it.

Speaker B:

If it's going to be on the cover of Vogue, it doesn't really matter.

Speaker B:

But if it's going to be on the BBC News at Six or something, you know, or Channel 4 News, they have to say that this image has been altered, because you're presenting something perhaps more serious or factual.

Speaker A:

I would say the cover of Vogue is equally as important.

Speaker C:

Yeah.

Speaker C:

And the drip-drip effect, you know. Take women's fashion magazines, for example.

Speaker C:

The amount of photoshopping of those for decades now.

Speaker C:

And I don't know if you guys remember that Dove advert about 15 years ago, where it had a woman sitting in a chair without makeup, and then two or three different lights came on and the makeup went on, all done in high speed, and then it showed the advert on the billboard at the end.

Speaker C:

And you know, from the start to the finish, it wasn't the same person at the end.

Speaker C:

It was actually a really clever advert, you know.

Speaker C:

And I know, you know, some will say that's cynical, and Dove are still trying to sell through, you know, critiquing their own industry.

Speaker C:

But I agree, I think it's.

Speaker C:

It has had a massive impact on the world and the standards of beauty and what, you know, what girls and women expect of themselves.

Speaker C:

I think it's had a huge impact on the mental health of the world, in a way, and I think I'm surprised that we've never really faced that.

Speaker C:

But you're right.

Speaker C:

Through this AI discussion, we might have to actually face that.

Speaker B:

What was the case?

Speaker B:

It wasn't that long ago.

Speaker B:

About a year ago, the royals or somebody had edited their photo in Photoshop or something.

Speaker B:

There was a huge uproar about it, wasn't there?

Speaker C:

I think it wasn't it Kate, when she was unwell.

Speaker B:

Yes.

Speaker C:

In the garden of Kensington Palace.

Speaker B:

Yeah.

Speaker B:

And they came out, they did like a family photo or something.

Speaker C:

Yeah.

Speaker B:

And then it transpired that there'd been some kind of AI editing going on.

Speaker B:

And this was.

Speaker B:

This was like, you know, front page news.

Speaker B:

It was.

Speaker C:

Yeah, yeah.

Speaker A:

But then.

Speaker A:

And then you get back into what's AI and what's not.

Speaker A:

Right.

Speaker A:

Because now everybody's calling everything AI, even though it isn't necessarily AI.

Speaker A:

Like, do you know what I mean?

Speaker A:

So it is.

Speaker A:

I think we are.

Speaker A:

I do think that this is going to turn into a much broader discussion.

Speaker A:

And if I work for an AI company, I would be pushing.

Speaker A:

This is the narrative I would use.

Speaker A:

I would say, okay, you want us to declare everything that's been done with AI, but we now want everybody that alters any image in any way to have to declare their images being altered as well.

Speaker B:

And I think any alteration is a problem, Dave, because imagine your Channel 4 news.

Speaker B:

And all you've done is removed a fence post from the background of a picture.

Speaker A:

Just why would you.

Speaker A:

Why do you need to do that?

Speaker B:

Yeah, well, you don't.

Speaker B:

But maybe it's aesthetically nicer.

Speaker A:

I know, but, but again, so you.

Speaker A:

But you've altered the image.

Speaker A:

That isn't what that actually looked like at the time.

Speaker C:

There are different contexts, aren't there?

Speaker C:

For news, I think, but it has to be.

Speaker A:

I think it has to be a black and white line.

Speaker A:

You can't get into subjective, because as soon as you make it subjective, then you're like, oh, well, the fence post isn't important, but maybe that fence post had a sign on it that was important.

Speaker A:

And so then you go, well, the fence post, it looked ugly.

Speaker A:

And you go, yeah, but it had something on it that was relevant to the story.

Speaker A:

And that person goes, yeah, no, but I didn't see that as like, you can't do that.

Speaker A:

It's either it was.

Speaker A:

It's exactly as it was taken or it's been modified.

Speaker A:

Now I think there are some things like if you change contrast or you change, like there are some changes where you haven't factually changed it potentially, so you're adjusting the exposure and you're maybe, you know, use a sharpening tool or something like that.

Speaker A:

So I think there are some image adjustments that could be.

Speaker A:

You could say, okay, that's fine, that's not considered an alteration.

Speaker A:

You know, that's not changing the image, versus structurally changing what's actually in the image.

Speaker A:

I think we're just going to have to get to a point where we go, this image has been altered.

Speaker A:

And maybe you have to put a little thing in the metadata of the image to say, we removed a fence post.
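The note-in-the-metadata idea floated here is technically simple. As a rough sketch only, assuming Python with the Pillow library, and with "EditNote" as a made-up field name for illustration rather than any standard (real provenance efforts such as C2PA define richer formats), it could look like this:

```python
# A minimal sketch of recording an edit disclosure in a PNG's metadata.
# Assumes the Pillow library; "EditNote" is a hypothetical field name.
import io
from typing import Optional

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_edit_note(img: Image.Image, note: str) -> bytes:
    """Encode the image as PNG with a human-readable edit disclosure."""
    meta = PngInfo()
    meta.add_text("EditNote", note)  # e.g. "Removed a fence post"
    buf = io.BytesIO()
    img.save(buf, format="PNG", pnginfo=meta)
    return buf.getvalue()

def read_edit_note(png_bytes: bytes) -> Optional[str]:
    """Return the disclosure note from the PNG's text chunks, if present."""
    return Image.open(io.BytesIO(png_bytes)).text.get("EditNote")
```

A viewer or platform could then surface that note next to the image, which is essentially what the hosts are asking for.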

Speaker B:

Okay, you got me thinking, right?

Speaker B:

Which is more important then?

Speaker B:

No, which is more important?

Speaker B:

Okay, if you've got a wholly generated AI image with no alterations, the whole thing just generated by AI, is it more important to know that that was AI generated than an actual image which is being changed?

Speaker B:

So something that actually exists, but then you're changing the perception of that to the person, because almost that feels worse to me than the wholly generated image, which isn't trying to be deceitful in any way.

Speaker B:

It just is what it is.

Speaker C:

Yeah, I know, I agree.

Speaker C:

And I think again, like what you said earlier slightly depends on context.

Speaker C:

If you're, you know, creating fake content on, you know, Israel-Palestine or the Ukraine war, then that's hugely manipulative and has a massive impact on the world.

Speaker C:

And you know, particularly in short form where people aren't really invested in it, they're just scrolling for hours a day.

Speaker C:

You know, you have a huge responsibility to not manipulate those things.

Speaker C:

But you know, if like you say, David, you know, changing the contrast or lightening the shadows a little bit to give an aesthetic is a very different thing than creating manipulation.

Speaker B:

So how do you think governments or tech industry even begin to think about how they can manage this?

Speaker B:

Because you know, as you say, politically there's a potential here for all sorts of kind of manipulation and corruption.

Speaker B:

How, how are we going to do this as a society?

Speaker B:

How are we going to create almost like a kind of universal set of rules that everyone adopts and actually sticks to? You know, that feels difficult.

Speaker A:

So I had a chat a year and a half ago with a lovely lady named Suki Fuller.

Speaker A:

And Suki at the time, her position on it was, and I, I still agree with this.

Speaker A:

I think the only thing globally that we have agreement on, with literally every single country except maybe one or two, is your global human rights bill.

Speaker A:

And I suspect that it's going to be tied to that in some way.

Speaker A:

So they're going to tie it into the human rights information to say that it's a basic human right that humans have accurate representation of events that have been recorded or photographed and that if it hasn't, or that image has been manipulated in any way, that it needs to be flagged somehow.

Speaker A:

And I think that's probably the way they're going to do it.

Speaker A:

That would be the only way.

Speaker A:

I can see that pretty much every country in the world would at least have a base level. Now, each country will build their own laws on top of that, but I think that's probably the only way it will happen globally.

Speaker A:

And I do think we need something. And frankly, if you just put a little watermark down in the bottom left-hand corner that said AI or something, I don't think anybody, you know, aesthetically I don't think anybody would care.
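The corner-watermark idea is just as easy to prototype. A minimal sketch, again assuming Python with the Pillow library, with the label text and position as arbitrary illustrative choices:

```python
# A minimal sketch of stamping a small "AI" disclosure label in the
# bottom-left corner of an image. Assumes the Pillow library.
from PIL import Image, ImageDraw

def stamp_ai_label(img: Image.Image, label: str = "AI") -> Image.Image:
    """Return a copy of the image with a small text label drawn on it."""
    out = img.copy()
    draw = ImageDraw.Draw(out)
    # Pillow's default bitmap font, a few pixels in from the bottom-left.
    draw.text((4, out.height - 14), label, fill="white")
    return out
```

Of course, a visible stamp like this is trivially croppable, which is why the discussion then turns to bad actors and enforcement.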

Speaker A:

And I, you know, I think it would be quite simple.

Speaker A:

And, you know, there are some tools to do this already, it's just not everybody uses them.

Speaker A:

And of course, you know, you've got bad actors out there, whether they're state supported or they're individuals or they're fringe groups or they're whatever that will desperately try and create content that doesn't have that in there or they will try and make it out to be real.

Speaker A:

But I think you just need to have as strong penalties as possible, and then they need to crack down hard on people who do that in the beginning and really, really make those rules bite.

Speaker A:

But until then it's a free for all.

Speaker A:

And don't believe anything you ever see online.

Speaker B:

Yeah, I think we are in the post truth world at this point, aren't we?

Speaker A:

We are 100%.

Speaker C:

It's like what Abraham Lincoln said, isn't it?

Speaker C:

Don't believe everything you read on the Internet.

Speaker A:

Exactly.

Speaker A:

Interestingly, I put all of this stuff into ChatGPT: I put several of the articles in, I put my personal feelings about it, I put everything in, and I kind of asked ChatGPT to summarize the arguments, you know, for and against disclosure.

Speaker A:

I asked it its own opinion, if it had an opinion as an AI, what was its opinion?

Speaker A:

And then I asked it to summarize the discussion and here's the summary that it came up with, which I think is really good and it might be a good place to end.

Speaker A:

It said if the humanity of the presenter matters to the meaning of the content, disclosure matters.

Speaker A:

If not, it's optional.

Speaker A:

Yeah, and I think that sums it up pretty well.

Speaker B:

And proof that AI is smarter than us.

Speaker A:

Exactly.

Speaker C:

It's a lovely.

Speaker B:

So we should, we should just let it go anyway.

Speaker A:

Should I?

Speaker A:

Maybe what I'll do again if anyone's interested.

Speaker A:

And for you guys, I didn't share this with you ahead of time, but I'll take all of the notes that it pulled together and I'll share those with you, because it makes some pretty interesting talking points for and against, and it's way more than we have time to delve into. But it did a really, really good job. And there was one other thing that was really interesting.

Speaker A:

Sorry, I know we're right on time.

Speaker A:

It kept asking me, oh, do you want to now add some information about this particular topic or this way to view it?

Speaker A:

And this is something new that I've noticed in the last, I don't know, maybe several weeks where it really now prompts you to dig more into the topic that you're discussing?

Speaker A:

And that's been really interesting because it kept bringing up stuff and I was like, oh yeah, actually that'd be really interesting.

Speaker A:

Oh, do you want some really thought provoking questions?

Speaker A:

Do you want some challenging questions? I was like, yeah, yeah, give it to me, give it to me.

Speaker A:

So yeah, it's come up with a very good work-through of that.

Speaker A:

So I'm happy to share that with you guys as well.

Speaker B:

Yeah, please do.

Speaker B:

That's really interesting.

Speaker B:

Yeah.

Speaker B:

Well, there we are.

Speaker B:

Well, I think, as you say, that brings us nicely to the end of the episode.

Speaker B:

For those that have made it this far, the listeners, just a quick one to say: I think next time we're going to try and look at AI in schools and whether we should be, you know, teaching our kids more about AI like they are in China, and I think other countries are starting to follow suit.

Speaker B:

So I think that'll be something to dig into.

Speaker B:

But once again, thanks to Dave, thanks to Ben, it's always fun.

Speaker B:

We'll catch you next time.


About the Podcast

AI Evolution
Exploring the Future of Artificial Intelligence


About your hosts

David Brown

A technology entrepreneur with over 25 years' experience in corporate enterprise, working with public sector organisations and startups in the technology, digital media, data analytics, and adtech industries. I am deeply passionate about transforming innovative technology into commercial opportunities, ensuring my customers succeed using data-driven decision-making tools.

I'm a keen believer that the best way to become successful is to help others be successful. Success is not a zero-sum game; I believe what goes around comes around.

I enjoy seeing success, whether it’s yours or mine, so send me a message if there's anything I can do to help you.

Alan King

Alan King, founder of the AI Network, AI Your Org (aiyourorg.com), and Head of Global Membership Development Strategy at the IMechE, has been fascinated by artificial intelligence (AI) since his teenage years. As an early adopter of AI tools, he has used them to accelerate output and explore their boundaries.

After completing his Master's degree in International Business, King dedicated his early career to working at Hewlett Packard on environmental test systems and Strategic Alliance International, where he managed global campaigns for technology firms, all whilst deepening his knowledge around neural networks and AI systems. Building on this valuable experience, he later joined the IMechE and published "Harnessing the Potential of AI in Organisations", which led to setting up the "AI Your Org" network.

Firmly believing in the transformative power of AI for organizations, King states, “This version of AI at the moment, let’s call it generation one, it's a co-pilot, and it's going to help us do things better, faster, and quicker than ever before.”

Known for his forward-thinking attitude and passion for technology, King says, “We become the editors of the content, and refine and build on what the AI provides us with.” He's excited about the endless potential AI holds for organizations and believes that the integration of human and machine intellect will drive exponential growth and innovation across all industries.

King is eager to see how AI will continue to shape the business landscape, stating, “We are about to enter a period of rapid change, an inflection point like no other.” As AI tools advance, he is confident that their impact on society and organizations will be both transformative and beneficial.