
Speaker 1: Lee Today I'm interviewing Kate Crocker. Kate is our AI problem solver and dark patterns consultant. Kate is an Australian legal design writer, SEO copywriter, and a former lawyer turned legal design writer. Kate has mastered the art of blending complex legal concepts with user-friendly design. Now, Kate's also an expert in dark patterns, those tricky little design tactics on the web.
Kate is on a mission to make websites ethical, transparent and user friendly. Kate's skills don't stop at legal design though. As an AI prompt engineer, she crafts the perfect AI responses and empowers teams to shape their digital presence with clarity and accountability. Her passion for ethical design keeps our AI solutions human centered and accessible. Kate can distill even the densest legal jargon into something your grandmother could understand, all while making it SEO friendly. Kate, welcome to the podcast.
Speaker 2: Kate Crocker I'm South Australian, just like you. Former lawyer, and trained legal designer. I came across dark patterns as part of my legal design training and quickly realized it's a very important subset of legal design not to be overlooked. And more recently, I jumped on the AI avalanche. At the beginning of 2023, really, when it became obvious it was going to fundamentally affect businesses like copywriting, which is what I was doing at that time and still do.
00:02:03:12 - 00:02:04:09 Speaker 2 So.
00:02:04:11 - 00:02:15:04 Speaker 1 All right, I've got some questions for you, if you don't mind, Kate. The first question is, what are dark patterns? And I presume we're not talking about my paisley waistcoats.
00:02:15:06 - 00:02:45:03 Speaker 2 Not your paisley waistcoats? No. No, we're talking about user interfaces. So, dark patterns can appear on websites, apps, social media. They're design tricks specifically aimed at manipulating you as the user to do something that you didn't intend to do. So it might be that a countdown clock was used to get you to purchase something.
00:02:45:05 - 00:03:18:17 Speaker 2 And as soon as you made the purchase, that clock reset and started counting down again. And so the deadline to purchase was false. It could be something that you're automatically opted into when you subscribe, or a subscription that you are opted into and then you can't find a way to unsubscribe. Or it could be something mysteriously appearing in your shopping basket that you never put in there, in the hope that you won't notice and you end up buying it.
00:03:18:19 - 00:03:25:24 Speaker 2 So all sorts of little tricks. There's a huge range of, categories of dark patterns.
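The false-urgency countdown Kate describes can be sketched in a few lines of Python. This is a hypothetical illustration, not code from any real site: the point is that the "deadline" silently resets whenever it runs out, so the urgency it displays is fake.

```python
import time

class FakeCountdown:
    """Sketch of the 'false urgency' dark pattern: the displayed deadline
    silently resets, so it never reflects a real purchase limit."""

    def __init__(self, window_seconds: int = 600):
        self.window = window_seconds
        self.deadline = time.monotonic() + window_seconds

    def seconds_left(self) -> float:
        remaining = self.deadline - time.monotonic()
        if remaining <= 0:
            # The trick: quietly restart the clock instead of ever expiring.
            self.deadline = time.monotonic() + self.window
            remaining = self.window
        return remaining
```

An ethical timer would instead let `seconds_left` reach zero and stay there, because the deadline corresponds to something real.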
00:03:26:01 - 00:03:37:24 Speaker 1 So they're nefarious, and I've come across all of those myself. And it's very frustrating. Yeah. But why is it a problem?
00:03:39:00 - 00:04:07:23 Speaker 2 It creates all sorts of difficulties for, consumers. So it's a problem because a lot of dark patterns are designed to trick us into handing over our personal information. And as we all know, data protection is, a bigger and bigger issue. We've all, encountered scams online. We in Australia had the Optus and Medibank data breaches.
00:04:07:23 - 00:04:53:11 Speaker 2 So if you can remember lining up to get a new driver's license. Yeah. So all sorts of issues are now arising with our ability to protect our personal information online. In 2023, online scams cost Australians roughly $480 million. And that figure is increasing exponentially. So it is absolutely huge. But dark patterns are also not transparent, and they exploit cognitive biases such as decision fatigue, where you just choose the first thing because you don't know what else to choose.
00:04:53:13 - 00:05:19:20 Speaker 2 So all the patterns that your brain will just automatically want to follow to reduce your mental load when you are completely overloaded with information or choice. So you're acting in ways that you haven't intended to. You're spending money that you didn't really think you'd be spending. Vulnerable people are particularly susceptible to exploitation with dark patterns.
00:05:19:20 - 00:05:47:15 Speaker 2 So children and the elderly. So we're talking about decision-making difficulties, financial loss, interference with independent decision making, compromised privacy, and also effects like frustration, anxiety and stress, which should not be underestimated. They can have quite a debilitating impact on a lot of people.
00:05:47:15 - 00:05:47:22 Speaker 2 Yeah.
00:05:48:11 - 00:05:52:08 Speaker 1 What's the implication then, with all these dark patterns for AI?
00:05:52:10 - 00:06:16:07 Speaker 2 So with generative AI, the first implication is that AI is just so new, and we're all sort of still learning how to use it and navigate it and what it means. But what is emerging is that the more we use generative AI, the bigger the risk of dark patterns, because it can easily create dark patterns, and often they're very hard to detect.
00:06:16:08 - 00:06:45:14 Speaker 2 So you need to understand what to look for. So if you're creating something with generative AI, you need to know what dark patterns look like. So you can check the output. And if you are, a consumer or, or just a user who may hand over some personal information, then it's good to know about dark patterns so you can try and work out whether or not that's what you're dealing with.
00:06:45:18 - 00:06:50:06 Speaker 1 So how do you avoid dark pattern risks when using AI?
00:06:50:08 - 00:07:23:06 Speaker 2 So I think you need to sort of understand how dark patterns can emerge when you're using AI first. So what we're really talking about is risks of deepfakes. We've all come across fake images or synthetic voices. So deepfakes can quite easily manipulate users with false information and undermine autonomy and privacy.
00:07:24:00 - 00:07:51:04 Speaker 2 So the recommendation there is that you would use AI filters to detect and block the deepfakes. You'd put watermarks or metadata in to try and make it more genuine. And you'd definitely be conducting regular reviews of how that form of AI is being used.
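One lightweight version of the watermark-and-metadata idea is an authenticity tag computed over the media bytes. The sketch below assumes a publisher-held secret key and uses a plain HMAC; real provenance systems (for example C2PA content credentials) are far more elaborate, so treat this as an illustration of the principle only.

```python
import hashlib
import hmac

def tag_media(content: bytes, key: bytes) -> str:
    """Publisher signs the exact media bytes with a secret key."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_media(content: bytes, key: bytes, tag: str) -> bool:
    """Anyone holding the key can check the bytes are untampered."""
    return hmac.compare_digest(tag_media(content, key), tag)
```

If even one byte of the media changes, verification fails, which is exactly the property a provenance check needs.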
00:07:52:11 - 00:08:25:03 Speaker 2 You know, when it comes to synthetic media, the new Google NotebookLM function that produces a podcast episode is absolutely incredible. If you've not used it, I recommend you just go and have a go, because the voices sound so natural. It's not what we're used to hearing with synthetic voices. But it just goes to show how quickly that technology has improved.
00:08:25:23 - 00:08:55:12 Speaker 2 And yeah, it's a real reminder that you have to be very much on your guard when online. Other examples? Bots, of course. So there are risks that bots may impersonate humans, which is also unethical. And any kind of disinformation. So you need to check that the information sources are, in fact, from authoritative sites.
00:08:55:23 - 00:09:25:11 Speaker 2 And I would add statistical information in particular there. I have a lot of experience with statistics being made up by any AI that I've been using. I haven't actually managed to iron out that little glitch yet, but I'm working on it. And that, of course, leads to hallucinations, where the AI can just absolutely make up anything.
00:09:25:11 - 00:09:55:12 Speaker 2 So you do need to double check your output. And in fact, you do need to be mindful of how you prompt the AI as well, so that it's not just given complete run of the internet; you actually want it to be very focused about what it's looking at online. So a few of the concerns are that AI can analyze huge amounts of consumer data very, very quickly.
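Kate's advice to double-check statistics can be partly mechanized. Here is a rough sketch in Python; the regex and function name are my own illustration, not an established tool. It pulls every numeric claim out of AI output so a human can verify each one against an authoritative source.

```python
import re

# Hypothetical helper: surface numeric claims in AI-generated text for
# manual fact-checking. A regex heuristic, not a substitute for review.
STAT_PATTERN = re.compile(
    r"\$?\d(?:[\d,]*\d)?(?:\.\d+)?(?:\s?(?:%|million|billion|percent))?",
    re.IGNORECASE,
)

def flag_statistics(text: str) -> list[str]:
    """Return every numeric claim found, in order of appearance."""
    return [m.group(0) for m in STAT_PATTERN.finditer(text)]
```

Run over the scam-losses sentence earlier in this conversation, it would surface both `2023` and `$480 million` as claims to verify.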
00:09:55:14 - 00:10:29:05 Speaker 2 And so you can personalize dark patterns rapidly, even while you're online doing something. So it can sort of alter algorithms and harvest your data. It can store and analyze information about your profile as you're going around browsing online. So it is a bit like the Wild West out there with AI.
00:10:29:07 - 00:10:46:06 Speaker 1 Here in Australia, we have some data privacy laws, perhaps better than some other countries, but certainly worse than others. We are sort of semi-protected, but only semi.
00:10:46:08 - 00:11:15:01 Speaker 2 We're sort of in the midst of a major change at the moment, with a major change to the privacy laws being contemplated. We were waiting for some announcements back in September, and we're still sort of waiting with bated breath to find out exactly what's going to happen. So the expectation is that dark patterns will be specifically regulated by privacy laws.
00:11:15:01 - 00:11:47:03 Speaker 2 At the moment, there's general regulation around collection of personal information. But more specific laws will do more with consent and prohibiting information collection in certain circumstances, regardless of whether a person's consented, because one of the issues with dark patterns is, of course, that your consent's not informed if you're being manipulated and not acting with autonomy.
00:11:47:05 - 00:12:18:01 Speaker 2 So we have seen in the EU the introduction of the AI Act over there, which bans AI-generated dark patterns. And those bans are going to be staged over the next couple of years, so into 2026. And it does depend on the type of activity. But yeah, it will also apply to non-EU countries who are doing business in the EU.
00:12:18:03 - 00:12:23:18 Speaker 2 So it does potentially also apply to some Australian businesses.
00:12:24:06 - 00:13:02:04 Speaker 1 Because I write a lot, there's an interesting sort of discussion going on in the writing community at the moment. On the one hand, the writing community is being asked to be transparent if it's used AI. And certainly Amazon is asking you, very clearly, have you used AI for anything? On the other hand, a lot of the writers in that community are saying, no, we're not going to talk, because the public at the moment has such a bad idea about AI, that we can just walk up to, you know, Claude or ChatGPT and say, write me a novel.
00:13:02:13 - 00:13:26:02 Speaker 1 And, you know, out comes a novel. So writers don't want to disclose that. Or if they do, they just say, oh, look, it edited my text, but I wrote it all. Yeah. So, you know, that's a balancing act, because I don't think the public is ready for writers to be honest about how they're writing stuff.
00:13:26:04 - 00:14:10:17 Speaker 2 Well, yeah, I think there is more of an umbrella issue there, which has to do with copyright, because it's not clear yet how copyright laws are going to apply to AI-generated content. So one of the recommendations is that you do identify when you have used AI, or if it's 100% human generated, for the reason that if you do later need to enforce your copyright over something, then you can point to that as a contemporaneous statement of how you produced that content at the time.
00:14:10:19 - 00:14:37:00 Speaker 2 So it is a very difficult area at the moment. It's the same for copywriters; everything's in flux. We're starting to see sort of codes of ethics come out in various forms, including copywriters disclosing to clients how they are using AI. It is an absolutely incredible brainstorming tool for copywriters.
00:14:37:02 - 00:15:07:05 Speaker 2 It is not, generally speaking, the sharpest tool for writing. But it also depends on how you prompt it and the knowledge base that you're giving it. So yeah, it's an interesting area. It's tricky times. And it's difficult for our clients as well, because it feels like we're all sort of walking a tightrope at the same time.
00:15:07:07 - 00:15:28:02 Speaker 1 I mean, certainly there's a challenge, as you say, with copywriting, because whereas we've said in the past, it might take us 3 or 4 days to turn this around and put it in your company's voice, as you say, with the right background material, with the right prompting, AI can turn it around in about a minute and a half.
00:15:28:04 - 00:15:51:18 Speaker 2 Yeah, well, the legwork is where the gold is for copywriters. So all the things that we've had to wade through that have been so time consuming, like pulling all the information out of a brief so that we can collate it somehow, or pulling information together from different sources.
00:15:52:09 - 00:16:16:05 Speaker 2 And making it something that we can make sense of, so we can then write, or just simply trying to understand what the benefit is of a feature when we're writing sales copy, all those kinds of things. It is an incredible tool for brainstorming, and there are different ways of doing it.
00:16:16:05 - 00:16:29:08 Speaker 2 So, if you think laterally enough, you can actually get solutions that do seem quite creative and find little golden nuggets that you'd never considered.
00:16:29:10 - 00:16:45:15 Speaker 1 Okay. Are there any other ethical considerations when we're using AI, not just as copywriters, but as business people? Are there ethical considerations that we should consider that we haven't talked about so far?
00:16:45:17 - 00:17:14:06 Speaker 2 Yeah. So I think when it comes to things like social proof, like with testimonials on websites or reviews, that kind of thing, there's a big risk of dark patterns there. It's very easy to make up a testimonial, especially if you're not giving any details about the person who is purporting to have given you the testimonial.
00:17:14:08 - 00:17:47:06 Speaker 2 So you need to be very careful about that kind of thing. We know that AI can very easily generate fake reviews and other forms of social proof, even potentially using deepfakes to do that. So you need to be very careful about how you do that. I like the idea of using screenshots of Google reviews, screenshots of Facebook reviews.
00:17:47:15 - 00:18:16:00 Speaker 2 Anything like that carries quite a bit of weight to it. Show a photo of the person, say where they're from. If you can convince them to give you a video testimonial, even better; that's just about the most compelling. So those things. You know, when it comes to exploiting cognitive biases, we're hearing a lot about dynamic pricing these days.
00:18:16:00 - 00:18:41:11 Speaker 2 So dynamic pricing is something that is very much driven by AI, and also dynamic inventory. So when you get those awful messages when you're on a website, it's a big thing on women's fashion websites, saying that a person from the Gold Coast looked at this article five minutes ago, that kind of thing.
00:18:41:13 - 00:19:20:08 Speaker 2 So, you know, those kinds of things can be AI generated, nudging the user towards a particular act that the business wants from the user. So, yeah, things like data harvesting with privacy, or even manipulating you to provide consent. One form of manipulation, which is a big dark pattern, is just to throw a wall of words at you.
00:19:20:10 - 00:19:42:24 Speaker 2 So in order to proceed with something, you need to consent to all their terms and conditions. People give blind consent because it's written in legalese. They don't want to read it, they don't understand it, but they know that if they tick the yes box, then they'll get through to the next screen. So that's a very big issue.
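The "wall of words" consent problem can even be screened for mechanically. Below is a crude sketch; the thresholds are illustrative values chosen for the example, not an established readability standard. The idea is that long sentences plus a high share of long words usually signal legalese that users will blind-consent to.

```python
def wall_of_words_score(text: str) -> dict:
    """Crude heuristic for spotting 'wall of words' consent screens:
    long sentences and a high share of long words usually mean legalese.
    Thresholds are illustrative, not an established readability standard."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    words = text.split()
    avg_sentence_len = len(words) / max(len(sentences), 1)
    long_word_ratio = sum(1 for w in words if len(w.strip(",.;:")) >= 10) / max(len(words), 1)
    return {
        "avg_sentence_length": avg_sentence_len,
        "long_word_ratio": long_word_ratio,
        "flag": avg_sentence_len > 25 or long_word_ratio > 0.2,
    }
```

A team could run something like this over its own terms and conditions before publishing, and rewrite anything the heuristic flags into plainer language.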
00:19:44:13 - 00:20:03:14 Speaker 2 Yeah. And also, I think, as I said before, the ability for AI to sort of collect data about you as you travel around the internet and just build a profile about you, which can then be used in various ways. As we know, that's how scammers often target people as well.
00:20:03:16 - 00:20:11:15 Speaker 1 Any last thoughts? About dark patterns and AI and, the risks involved?
00:20:11:19 - 00:20:37:22 Speaker 2 Well, I think at the moment in Australia, we're in kind of a sweet spot with what's happening in dark patterns, because we don't have specific laws against dark patterns yet. We have general consumer and privacy laws that will regulate dark patterns, but we don't have the specifics.
00:20:37:24 - 00:21:08:18 Speaker 2 So the government is working that out, and we know it's coming; just last week the Albanese government was talking about introducing draft laws in early 2025, targeting opt-out subscription dark patterns and various other insidious practices. So we know these laws are coming.
00:21:09:03 - 00:21:42:20 Speaker 2 There are all sorts of reports and consultations going on. But right now, the laws aren't specific. And so it represents an opportunity for businesses to actually take a look and work out what needs to happen, assess what strategy should be put in place, and also what needs doing straight away.
00:21:42:20 - 00:22:08:07 Speaker 2 What can wait. So we have the luxury of a bit of time at the moment. All that is said with the overlay of the European privacy laws and AI and digital laws, which actually may already apply to a lot of Australian businesses who attract customers or clients, or website visitors, in the EU and the UK.
00:22:08:07 - 00:22:40:24 Speaker 2 But the enforcement of those laws is tending to be a little more focused on the really big players at the moment, on the huge mega companies. That will not always be the case. So we've just got this time at the moment where we can get to work and learn more about these deceptive practices, and understand more about ethical design as well.
00:22:41:01 - 00:23:18:01 Speaker 2 I think consumers are increasingly wanting ethical practices, ethical designs, wanting to lead more ethical, sustainable lives. And that is going to come across in online behavior. So I think it is also, for companies, a really huge marketing advantage in embracing more ethical practices and getting rid of as many dark patterns as possible off websites.
00:23:18:03 - 00:23:21:18 Speaker 1 Excellent. Well, thank you for that, Kate. We really appreciate that.
00:23:21:19 - 00:23:23:02 Speaker 2 Thank you very much, Lee.
Speaker 1: Lee Today I'm interviewing Kate Crocker. And Kate is our AI problem solver and dark patterns consultant. Kate is an Australian legal design writer, SEO copywriter, and a former lawyer as a former lawyer turned legal design writer. Kate has mastered the art of blending complex legal concepts with a user friendly design. Now, Kate's also an expert in dark patterns, those tricky little design tactics on the web.
You probably Kate and is on a mission to make websites ethical, transparent and user friendly. Kate skills don't stop at legal fees though. As an AI prompt engineer, she crafts the perfect AI responses and empowers teams to shape their digital presence with clarity and accountability. Her passion for ethical design keeps our AI solutions human centered and accessible. Kate can distill even the densest legal jargon into something your grandmother could understand, all while making it a SEO friendly. Kate, welcome to the podcast.
Speaker 2: Kate Crocker I'm South Australian, just like you. Former lawyer, and, trained legal designer. I came across doc patents as part of my logo design training and quickly realized it's a very important subset of logo design not to be overlooked. And more recently, I jumped on the AI, avalanche. Towards the end of, at the beginning of 2023, really, when it became obvious it was going to fundamentally affect, businesses like copywriting, which is what I was doing at that time and still do.
00:02:03:12 - 00:02:04:09 Speaker 2 So.
00:02:04:11 - 00:02:15:04 Speaker 1 All right, I've got some questions for you, if you don't mind, Kate. The first question is, what are doc patterns? And I presume we're not talking about my paisley waistcoats.
00:02:15:06 - 00:02:45:03 Speaker 2 Not your paisley waistcoats? No. No, we're talking about user interfaces. So, doc patterns can appear on websites, apps, social media. So they're, they're design tricks, sort of, specifically aimed at manipulating you as the user to do something that you didn't intend to do. So it might be that, countdown clock was used to get you to purchase something.
00:02:45:05 - 00:03:18:17 Speaker 2 And as soon as you made the purchase, that clock reset and started counting down again. And so the deadline to purchase was false. It could be a, something that you're automatically opted into when you subscribe or, or a subscription that is, that you are opted into, and then you can't find a way to unsubscribe. Or it could be, something mysteriously appearing in your shopping basket that you never put in there in the hope that you just you won't notice and you end up buying it.
00:03:18:19 - 00:03:25:24 Speaker 2 So all sorts of little tricks. There's a huge range of, categories of dark patterns.
00:03:26:01 - 00:03:37:24 Speaker 1 So the one that says nefarious and I've, I've come across, all of those myself. And it's very frustrating. No, I agree. Yeah. But what what's why is it a problem?
00:03:39:00 - 00:04:07:23 Speaker 2 It creates all sorts of difficulties for, consumers. So it's a problem because a lot of dark patterns are designed to trick us into handing over our personal information. And as we all know, data protection is, a bigger and bigger issue. We've all, encountered scams online. We in Australia had the Optus and Medibank data breaches.
00:04:07:23 - 00:04:53:11 Speaker 2 So if you can remember lining up to get a new driver's license. Yeah. So all sorts of, all sorts of issues are now arising with our ability to protect our personal information online. So in 2023, online scams cost Australians roughly $480 million. And that figure is increasing exponentially. So it is absolutely huge. But dark patterns are also, they're not transparent and they exploit, cognitive biases such as decision fatigue or where you just choose the first thing because you don't know what else to choose.
00:04:53:13 - 00:05:19:20 Speaker 2 So all the patterns that your brain will just automatically want to follow to reduce your mental load when you are completely overloaded with information or choice. So so you're bit you're acting in ways that you haven't intended to. You're spending money that you didn't really think you'd be spending. Vulnerable people are particularly susceptible to exploitation with dark patterns.
00:05:19:20 - 00:05:47:15 Speaker 2 So children and the elderly, so yeah. So we're talking about, decision making difficulties, financial loss, interference with in independent decision making, compromise privacy and also physical effects like frustration, anxiety and stress, which shouldn't be, so the effects of, of those should not be underestimated. Actually can have a quite a debilitating impact on a lot of people.
00:05:47:15 - 00:05:47:22 Speaker 2 Yeah.
00:05:48:11 - 00:05:52:08 Speaker 1 What's the implication then, with all these dark patterns for AI?
00:05:52:10 - 00:06:16:07 Speaker 2 So with generative AI, the first implication is that AI is just so new, and we're all sort of still learning how to use it and have navigate it and what it means. But what is emerging is that the more we use generative AI, the bigger the risk of dark patterns, because it can easily create dark patterns, and often they're very hard to detect.
00:06:16:08 - 00:06:45:14 Speaker 2 So you need to understand what to look for. So if you're creating something with generative AI, you need to know what dark patterns look like. So you can check the output. And if you are, a consumer or, or just a user who may hand over some personal information, then it's good to know about dark patterns so you can try and work out whether or not that's what you're dealing with.
00:06:45:18 - 00:06:50:06 Speaker 1 So how do you avoid dark pattern risks when using AI?
00:06:50:08 - 00:07:23:06 Speaker 2 So I think you need to sort of understand how dark patterns can emerge when you're using AI first. So what we're really talking about is, risks of deepfakes. So, we've all come across, fake fake images or voices, or synthetic voices. So deep fakes that probably can quite easily manipulate users with false information and undermine autonomy and privacy.
00:07:24:00 - 00:07:51:04 Speaker 2 So I so the recommendation there is that you would use AI filters to detect and block the deepfakes. You'd put watermarks in or metadata in to try and. Yeah, to try and make it, more, more genuine. And you definitely be conducting regular reviews of, yeah. What how how that form of AI is being used.
00:07:52:11 - 00:08:25:03 Speaker 2 You know, when it comes to synthetic media or the new Google Notebook alarm function that produces a podcast episode is absolutely incredible. It is. If you've not used it, I recommend you just, go and have a go, because the voices sound so natural. It's not what we're used to hearing with synthetic voices. But it just goes to show how quickly that technology has improved.
00:08:25:23 - 00:08:55:12 Speaker 2 And yeah, it's a real reminder that you have to be very much on your guard when online. Other examples. Bots, of course. So there is risks that bots may, impersonate humans. Which is also unethical. Any kind of disinformation. So you need to check that the information sources are, in fact, from authoritative sites.
00:08:55:23 - 00:09:25:11 Speaker 2 And I would add in particular statistical information there. I have, a lot of experience with statistics in particular being made up on, by, any AI that I've been using haven't actually managed to iron out that little glitch yet, but I'm working on it. And that, of course, leads to hallucinations where the AI can just absolutely make up anything.
00:09:25:11 - 00:09:55:12 Speaker 2 So you do need to double check your output. And in fact, you do need to be mindful of how you prompt the AI as well, so that, it's not just given complete, round of the internet, you actually want to want it to be very focused about what it's looking at online. So a few of the concerns are that I can analyze huge amounts of consumer data very, very quickly.
00:09:55:14 - 00:10:29:05 Speaker 2 And so you can personalize stock patterns rapidly even while you're online doing something. So it can, sort of alter algorithms and, and harvest your data. It can it can store and analyze information about your profile as he going around browsing online. So it's, it is it is a bit like the Wild West out there with AI.
00:10:29:07 - 00:10:46:06 Speaker 1 Here in Australia. We have some data privacy laws, perhaps, better than some other countries, but certainly, worse than other countries. We are sort of semi protected that we but only semi.
00:10:46:08 - 00:11:15:01 Speaker 2 We, we're sort of in the midst of, a major change at the moment with a major change of the privacy laws being contemplated. There are, we've been we were waiting for some announcements back in September, and we're still sort of waiting with bated breath to find out exactly what's going to happen. So the expectation is that, jack patterns will be specifically regulated by privacy laws.
00:11:15:01 - 00:11:47:03 Speaker 2 At the moment. There's general regulation, around, collection of personal information. But, you know, more specific laws will do more with consent and prohibiting, information collection in certain circumstances, regardless of whether a person's consented, because one of the issues is dark patterns is, of course, that you consents not informed if you're being manipulated and not acting with authority.
00:11:47:05 - 00:12:18:01 Speaker 2 So we have seen in the EU the introduction of the AI act over there, which bans AI generated dark patterns. And that's those bands are going to be staged over the next couple of years. So into 2026. And it does depend on the type of activity. But yeah, it's it will also apply to non-EU countries who are doing business in the EU.
00:12:18:03 - 00:12:23:18 Speaker 2 So it does potentially also apply to some Australian businesses.
00:12:24:06 - 00:13:02:04 Speaker 1 Because I write a lot. There's an interesting sort of discussion going on with the writing community at the moment in that on the one hand, the writing community is being asked to be transparent if it's used AI. And certainly Amazon is asking you, very clearly, have you used AI for anything? And the other point is that a lot of the writers in that community are saying, no, we're not going to talk because the public at the moment has such a bad, idea about AI that we can just walk up to, you know, Claude or ChatGPT and say, write me a novel.
00:13:02:13 - 00:13:26:02 Speaker 1 And, you know, outcomes or novel, that, you know, writers don't want to disclose that. Or if they do, they just say, oh, look, you know, they edit it my, my text. But I wrote it all. Yeah. So, you know, that's a balancing act because the public I don't think the public is ready for writers to be honest about how they're writing stuff.
00:13:26:04 - 00:14:10:17 Speaker 2 Well, yeah, I think the, there is a, a more of an umbrella issue there, which has to do with copyright because it's not clear yet how copyright laws are going to apply to AI generated, content. So, some of the recommendations, that you do identify when you have used AI, or if it's 100% human generated for that reason that if you do later need to enforce your copyright over something, then, you know, you can point to that as, and, you know, at, at contemporaneous statement of, how you produced that content at the time.
00:14:10:19 - 00:14:37:00 Speaker 2 So it is a very it is a very difficult area at the moment. It's the same for copyright is everything's in flux. Starting to see sort of codes of ethics, come out in various forms, including by copyright is disclosing with clients how they are using AI. It is an absolutely incredible brainstorming tool for copywriters.
00:14:37:02 - 00:15:07:05 Speaker 2 It is not, generally speaking, it's not the sharpest tool for writing. But it it also depends on how you prompt it and the, knowledge base that you're giving it. So yeah, it's, it's it's an interesting area. It's tricky times. And it's, it's difficult for our clients as well because, if we're all sort of walking a tightrope at the same time, it feels like,
00:15:07:07 - 00:15:28:02 Speaker 1 I mean, certainly there's a challenge with, as you say, with copywriting because, you know, ways we've said in the past, it might take us 3 or 4 days to turn this around and put it in your, your company's voice. You know, with, as you say, with the right, you know, background material, with the right prompting, they can turn it around in about a minute and a half.
00:15:28:04 - 00:15:51:18 Speaker 2 Yeah, well, it's the legwork. The legwork is where the gold is for copyright is. So all the things that we've, had to wade through that have been so time consuming, like pulling all the information out of a brief that we. So that, you know, that we can collate somehow or, pulling, you know, information together from different sources.
00:15:52:09 - 00:16:16:05 Speaker 2 And, and making it, something that we can make sense of so we can then write or just simply try to understand, what the benefit is of a feature when we're writing sales copy, all those kind of things. Incredible. Well, I it is it is an incredible tool for brainstorming, and there are different ways of doing it.
00:16:16:05 - 00:16:29:08 Speaker 2 If you think laterally enough, you can actually get solutions that do seem quite creative, and find little golden nuggets that you'd never considered.
00:16:29:10 - 00:16:45:15 Speaker 1 Okay. Are there any other ethical considerations when we're using AI, not just as copywriters but as business people, that we should consider and haven't talked about so far?
00:16:45:17 - 00:17:14:06 Speaker 2 Yeah. So I think when it comes to things like social proof, like testimonials on websites, or reviews, that kind of thing, there's a big risk of dark patterns there. It's very easy to make up a testimonial, especially if you're not giving any details about the person who is purporting to have given you the testimonial.
00:17:14:08 - 00:17:47:06 Speaker 2 So you need to be very careful about that kind of thing. We know that AI can very easily generate fake reviews and other forms of social proof, even potentially using deepfakes to do that. So you need to be very careful about how you do that. I like the idea of using screenshots of Google reviews, screenshots of Facebook reviews.
00:17:47:15 - 00:18:16:00 Speaker 2 Anything like that carries quite a bit of weight. Show a photo of the person, say where they're from. If you can convince them to give you a video testimonial, even better; that's just about the most compelling. Then, when it comes to exploiting cognitive biases, we're hearing a lot about dynamic pricing these days.
00:18:16:00 - 00:18:41:11 Speaker 2 Dynamic pricing is something that is very much driven by AI, and also dynamic inventory. So you get those awful messages when you're on a website (it's a big thing on women's fashion websites) saying that a person from the Gold Coast looked at this article five minutes ago, that kind of thing.
00:18:41:13 - 00:19:20:08 Speaker 2 Those kinds of messages can be AI generated, nudging the user towards a particular act that the business wants from the user. Then there are things like data harvesting and privacy, or even manipulating you to provide consent. One form of manipulation, which is a big dark pattern, is just to throw a wall of words at you.
00:19:20:10 - 00:19:42:24 Speaker 2 So in order to proceed with something, you need to consent to all their terms and conditions. People give blind consent because it's written in legalese. They don't want to read it, they don't understand it, but they know that if they tick the yes box, they'll get through to the next screen. So that's a very big issue.
00:19:44:13 - 00:20:03:14 Speaker 2 Yeah. And also, as I said before, there's the ability for AI to collect data about you as you travel around the internet and build a profile about you that can then be used in various ways. As we know, that's how scammers often target people as well.
00:20:03:16 - 00:20:11:15 Speaker 1 Any last thoughts about dark patterns and AI and the risks involved?
00:20:11:19 - 00:20:37:22 Speaker 2 Well, I think at the moment in Australia we're in kind of a sweet spot with what's happening in dark patterns, because we don't have specific laws against dark patterns yet. We have general consumer and privacy laws that will regulate dark patterns, but we don't have the specifics.
00:20:37:24 - 00:21:08:18 Speaker 2 So the government is working that out, and we know it's coming. Just last week the Albanese government was talking about introducing draft laws in early 2025 targeting opt-out subscription dark patterns and various other insidious practices. So we know these laws are coming.
00:21:09:03 - 00:21:42:20 Speaker 2 There are there are all sorts reports and consultations going on. But right now they're day like right now, the laws aren't specific. And so it represents an opportunity for businesses to actually take a look and, and work out what needs to happen, work out how to it, like assess what what what strategy should be put in place and also what needs doing straight away.
00:21:42:20 - 00:22:08:07 Speaker 2 What can wait. So we have the luxury of a bit of time at the moment. All that is said with the overlay overlay of the European privacy laws and AI and digital laws, which actually may already apply to a lot of Australian businesses. Who who attract customers or clients in the UK or website visitors. Sorry.
00:22:08:07 - 00:22:40:24 Speaker 2 In the EU and the UK. So but those laws with the with the enforcement of those laws is tending to be a little more focused on the really big players at the moment, on the huge mega companies that will not always be the case. So we've just we've just got this time at the moment where we can we can get to work and learn more about, these deceptive practices, understand more about ethical design as well.
00:22:41:01 - 00:23:18:01 Speaker 2 I think consumers increasingly want ethical practices and ethical designs; they want to lead more ethical, sustainable lives, and that is going to come across in online behavior. So I think for companies there's also a really huge marketing advantage in embracing more ethical practices and getting rid of as many dark patterns as possible from their websites.
00:23:18:03 - 00:23:21:18 Speaker 1 Excellent. Well, thank you for that, Kate. We really appreciate that.
00:23:21:19 - 00:23:23:02 Speaker 2 Thank you very much, Lee.