


Welcome to ClickAI Radio. In this episode I have a conversation with some AI experts on how AI ethics affect your business.
Grant
Okay, welcome, everybody, to another episode of ClickAI Radio. Well, in the house today I have got a return visitor, very excited to have him, and a brand new person. I'm excited to introduce you to Carlos Anchia. I've been practicing that. Did I get that right, Carlos?
Carlos
Sounds great. Good to see you again.
Grant
Even old dogs can learn new tricks. There we go. All right, and Elizabeth Spears. Now I got that one easily. Right, Elizabeth?
Elizabeth
You did it. Yeah, I'm really happy to be here.
Grant
This is exciting for me, to have them here with me today. They are co-founders of Plainsight AI. A few episodes ago, I had an opportunity to speak with Carlos, and he laid the foundation around the origin story of AI for vision, some of the techniques, and the problems they're solving. And I started to nerd out on all of the benefits. In fact, you know what, Carlos, I need to tell you: since our last conversation, I actually circled back to your team and had a demo of what you guys are doing. And yeah, I think it was very impressive, very impressive, you know, for a guy like me who has coded this stuff.
And I was like, oh wow, you just took a lot of pain out of the process. You know, one of the benefits I saw come out of the process was the reduction in time, right, in how long it would take for me to cycle another model through it. That was incredible. I can't remember the actual quantification of time, but it was at least a 50%, if not an 80%, reduction in cycle time that I saw come through. There are even model versioning sorts of techniques. And there's another really cool technique in there that I saw: it had to do with this ability to augment, or approximate, test data, right? This ability to, without creating more test data, approximate and create that for you. So now your whole testing side just got a lot easier without building up, you know, those massive test cases and test bases for doing this stuff. So, all right, very impressive product set. And let's see, Elizabeth, you head up product, is that right?
Elizabeth
That's right, Chief Product Officer. So basically, kind of the strategy around what we're building, how we build it, and the order in which we build it is under my purview.
Grant
Okay, very good. Awesome. Well, it's so great to have both of you here today. So after I spoke with Carlos last time, after we finished the recording, I said, you know what, I want to talk to you about ethics, about AI ethics. And as you heard in my previous podcast, I sort of laid the foundation for this conversation. These aren't the only areas of ethics around AI, but it's a place to start, and we want to build on it. We're gonna talk about these sort of four or five different areas just to begin the conversation, and I think this could certainly translate into other conversations as well. But to do that, could one or both of you spend a little time giving the foundation of what AI ethics is as it relates to computer vision itself? What are some of the challenges or problems or misunderstandings that you see in this specific area of AI?
Carlos
Sure, I can take that one. So I think really, when we're talking around ethics in any sort of technology, we're talking around how that technology is implemented, the use of it, right, and what's acceptable. So in the case of this technology, we're talking around computer vision and artificial intelligence, and how those things go into society. And it's really through its intended use that we evaluate the technology. And I think computer vision, as a technology, continues to provide value in getting us through this digital transformation piece, right? And, you know, once we start with yes, this is a valuable technology, the conversation really shifts to how we use that technology for good, and in some cases bad, right? This is where the conversation arises around, you know, having the space to share what we believe is good or bad, the right uses or the wrong uses, right? And it's a very, very gray area. When we try to judge technology, and advancement in technology, against a black-and-white good-or-bad kind of situation, we get into a lot of issues where, you know, there's a lot of controversy around some of these things. Which is really why, as we started discussing it after the last podcast, it was, man, I really need to have a good podcast around this, because there's a lot to it. And as you said, there was a previous one.
And now there's this one. You know, I hope there's a series of these, so we can continue to express value and just have a free conversation around ethics in artificial intelligence. But really, what I'm trying to do is set the context, right? Technology can work great from just the science of the application of that technology. Think of something super controversial, like facial recognition. Now, absolutely, I don't want people looking at my face when I'm standing on a corner. But if there's, you know, a child abduction case, yes, please use all the facial recognition you can; I want that to succeed really well. And we've learned that the technology works. So it's not the solution itself, it's how we're applying that solution, right? And there's a lot of nuance to that. And, you know, Elizabeth can help shed a little bit of light here, because this is something that we evaluate on a constant basis and have really free discussions around.
Grant
Yeah, I would imagine you have to, even as you take your platform into your customer base, understand what their use cases are. I imagine at times you might have to give a little guidance on the best ways to apply or use it. What have you seen with this, Elizabeth?
Elizabeth
Yeah, you know, it's interesting what Carlos is saying: a lot of the themes for evaluating ethics in technology in general are similar to the ones that come up when AI is applied. So things like fraud, or bias, which can be more uniquely AI but absolutely exists in other technologies. And then inaccuracy, and how that comes up in AI, and then things like consent and privacy. So a lot of the themes that come up are really similar, and we can talk about how AI applies to each of these. One of the things that we try to do for our customers, especially kind of your listener base of small and medium businesses, is take a lot of that complexity out of it. Like, hey, I just want to solve this one problem with AI; what are all of these concerns that I may or may not know about? So we try to build things into the platform that address something like bias, which, for example, usually comes down to data balance. If we provide tools that really clearly show your data balance, then it helps people make unbiased models, right, and be confident that they're going to be using AI ethically.
Grant
So, I'm sure you're aware of the Harrisburg University in Pennsylvania case, where they ended up using AI to predict criminality using image processing, right? And, of course, it failed, right? Because, you know, looking at an image of someone and saying, oh, that person is a criminal, or that person's not a criminal: that's using some powerful technology, but in ways that, of course, have some strong problems or challenges around them. How do you help prevent something like this? Or how do you guide people to use this kind of tool or technology in ways that are beneficial?
Elizabeth
Yeah. What's interesting about this one is that the same technology that causes the problem can also help solve the problem. So when you're looking at your corpus of data, you can use AI to find places where you have data imbalance. And just to kind of re-explain what happened in that case, right: they had a data imbalance, where the model was misidentifying the races that they had less data for. So, you know, a less controversial example is if we're talking about fruit. If we have a dataset that has 20 oranges, two bananas, and 20 apples, then it's just going to be worse at identifying bananas, right? So one of the things that can be done is to apply AI to automatically look at your data balance and surface those issues, where it can say, hey, you have less of this thing; you probably want to label more of that thing.
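To make that data-balance idea concrete, here is a minimal sketch of the kind of check being described, using the fruit example from the conversation. This is an illustration only, not Plainsight's actual implementation, and the warning threshold is an arbitrary assumption:

```python
from collections import Counter

def check_label_balance(labels, warn_ratio=0.5):
    """Flag classes whose example count falls well below an even split.

    labels: iterable of class names, e.g. ["orange", "banana", ...].
    warn_ratio: flag any class with fewer than warn_ratio * (even share) examples.
    """
    counts = Counter(labels)
    even_share = len(labels) / len(counts)  # per-class count if perfectly balanced
    flagged = {cls: n for cls, n in counts.items() if n < warn_ratio * even_share}
    for cls, n in flagged.items():
        print(f"'{cls}' has only {n} examples (even share would be {even_share:.0f}); "
              f"consider labeling more of it.")
    return flagged

# The fruit example: 20 oranges, two bananas, 20 apples.
labels = ["orange"] * 20 + ["banana"] * 2 + ["apple"] * 20
check_label_balance(labels)  # flags 'banana': 2 examples vs an even share of 14
```

A real platform would surface the same information visually, but the underlying question is this simple: is any class badly under-represented before you train?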
Grant
So the idea is to manage the dataset better in terms of proper representation. And finding bias is a real challenge for organizations. I think one of the things your platform would enable is this: if you can take away the pain of all the machinery of just getting through this, and free up organizations' time to be more rigorous in evaluating, and to actually take the time to do those kinds of things, you might have an opportunity to improve in that area; meaning customers might be able to improve there. Would that be a fair takeaway?
Elizabeth
Yeah, it's something that we're really passionate about trying to provide tools around, and we're prioritizing those tools. The other one that has to do with your data as well is finding inaccuracies in your models. One example is X-ray machines. They basically had an inaccuracy in a model that was finding a correlation; I think it was for disease detection. It was finding a correlation simply with whether the X-ray machine was mobile, versus whether the patient went into a hospital to get the X-ray. And, you know, these models are in many cases really just very strong pattern detectors, right? So one of the things that can really help prevent something like that is to make it easy to slice and dice your data in as many ways as possible, and then run models that way, and make sure you aren't finding the same correlation, or the same sort of accuracy, with a different dataset or a different run of the model on a different dataset. Said another way, you would be able to say: I'm going to run all of the portable X-ray machines versus all of the hospital ones, and see if I'm getting the same correlation as I am with, you know, cancer versus not cancer, or whatever they were looking for.
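As a rough illustration of that slice-and-dice idea, here is a hedged sketch of comparing one trained model's accuracy across data slices. The "portable" versus "hospital" field is a hypothetical stand-in, not the actual study's data, and a scikit-learn-style classifier with a `predict` method is assumed:

```python
import numpy as np

def accuracy_by_slice(model, X, y, slice_values):
    """Evaluate a single trained model separately on each slice of the data.

    slice_values: per-example metadata array, e.g. "portable" vs "hospital"
    for the machine that took each X-ray. A large accuracy gap between slices
    suggests the model keyed on the acquisition context rather than the disease.
    """
    results = {}
    for s in np.unique(slice_values):
        mask = slice_values == s
        preds = model.predict(X[mask])
        results[str(s)] = float((preds == y[mask]).mean())
    return results

# Hypothetical usage with held-out data and a metadata column:
# scores = accuracy_by_slice(model, X_test, y_test, machine_type)
# print(scores)  # e.g. {"hospital": 0.91, "portable": 0.62} -> investigate
```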
Grant
A quick question for you on this. In my experience with AI, I have found sort of two things to consider. One is that the questions I'm trying to get answered guide how I prepare the model, right? I'm gonna lean toward certain things; obviously, if I want to know whether this is a banana, or an apple, or what have you. So the kind of question I want answered leads me to how I prepare the model, which means it leads me to the data that I select. And the question is: should I spend the time really putting together a strong set of questions? Or, rather than do that, should I just gather my data, build a model from that data, and then try to answer some questions out of it? You see what I'm saying? That way, maybe I'm not going to introduce any bias into it.
Elizabeth
So we encourage a very clear understanding of the questions that you want to answer, right? Because that helps you do a few things. It helps you craft a model that's really going to answer that question, as opposed to accidentally answering some other questions. But it also helps you right-size the technology. So, for example, if you're trying to answer the question of how many people are entering this building, because you want to understand, you know, limits on how many people can be in the building, or COVID restrictions, or whatever it is, that solution doesn't need to have facial recognition, right? To answer that question, you don't need lots of other technologies included in there. So defining those questions ahead of time can really help toward a more ethical use of the technology.
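To ground that right-sizing point, here is a hedged sketch of an entry counter that only needs anonymous person detections, no facial recognition. Everything here is illustrative: `detect_people` is a stub standing in for any off-the-shelf person detector, and the naive index pairing stands in for real object tracking:

```python
def detect_people(frame):
    """Stub for a person detector returning bounding boxes (x, y, w, h).

    A real system would wrap an off-the-shelf detection model here. No
    identity information is produced, which is the point: counting entries
    never requires knowing who the people are.
    """
    return frame  # placeholder: in this sketch, frames arrive as lists of boxes

def count_entries(frames, door_line_y=300):
    """Count people whose box center crosses the door line between frames."""
    entries, previous_centers = 0, []
    for frame in frames:
        centers = [(x + w / 2, y + h / 2) for (x, y, w, h) in detect_people(frame)]
        # Naive pairing by index; a real counter would track people across frames.
        for (_, py), (_, cy) in zip(previous_centers, centers):
            if py < door_line_y <= cy:  # center crossed the line going inward
                entries += 1
        previous_centers = centers
    return entries

# Toy usage: one person's box moving inward across the line at y=300.
frames = [[(100, 250, 40, 80)], [(100, 290, 40, 80)], [(102, 330, 40, 80)]]
print(count_entries(frames))  # 1
```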
Grant
So one of the first jobs we would have a small or medium business do would be to get clarity around those questions, which actually can help us take some of the bias out. Is that a fair takeaway from what you shared?
Elizabeth
Exactly, the questions you're trying to answer. And defining the questions you aren't trying to answer can also be helpful.
Grant
Oh, very good. Okay. All right, so the opposite of that as well. All right. While we could keep talking about bias, let's switch to something that I think comes right out of the movie I, Robot: robot rights. Is this a fluke, or, you know, is this for real? I mean, what do you think? Is there really an ethical thing to worry about here? What are your thoughts?
Elizabeth
You know, in most of the cases that I've seen, it really comes down to just property, like treating property correctly. You know, don't kick the robots, because they're private property. So it's not really about robot rights so much as some already-established rules. For the most part, I see this as kind of a Hollywood problem more than a practical problem.
Grant
Maybe it makes for good Will Smith movies. But other than that, yeah, fighting for rights... that seems like it's way out there in terms of connection to reality. Okay, so we can tell our listeners: don't worry about that for right now. Did you want to add something back there?
Carlos
Just an interesting point on robot rights, right. While I think robot rights are far in the future, we are seeing a little bit of this now, today, right? Like at Tesla AI Day: when they came out, they decided that the robot shouldn't run too fast, that the robot shouldn't be too strong. I think it's a bit interesting that, you know, we're also protecting the human race from us building AI for bad and robots for bad, in this case. So I think it's on both sides of that coin. And those are product decisions that were made around: let's make sure we can outrun that thing later. So I think as we continue to explore robots and AI, and the use of them together, this topic will be very important, but I think it's far, far away.
Grant
I'm wondering if that also blends into the next ethical subtopic we talked about, which is the threat to human dignity. It might even cross into that a little bit, right? Which is: are we developing AI in a way that's going to help protect the dignity of humans? Certainly in health care situations that becomes important, right? You probably heard on the previous podcast that I did, I played a little snippet from Google's Duplex technology. That was three-year-old technology, and those people had no idea they were talking and interacting with an AI. So there's that aspect of this. So where's the line on this? When is it that someone needs to know that what they're interacting with is actually not human? And then, does this actually mean there's a deeper problem that we're trying to solve in the industry, which is one of identity? We've got to actually create a way to know what it is that we're interacting with, so we have strong identity. Can you speak to that?
Elizabeth
Yeah, I think there's two things that come into play here. The first is transparency, and the second is consent. So in this case, it really comes down to transparency. It would be very simple, in that example, for the bot to say, hey, I'm a bot calling on behalf of, you know, Grant Larsen, and I'm trying to schedule a hair appointment, right, and then go from there. That makes it a much more transparent and easy interaction. So I think in a lot of cases, really paying attention to transparency and consent can go a long way.
Grant
Yeah, absolutely. All right, that makes a lot of sense. It seems like we can get around some of these pieces fairly simply. All right, Carlos, any other thoughts on that one?
Carlos
The only thing there, and it touches on the stuff you guys were talking about on the bias piece, right, is that we're really talking about visibility and introspection into the process. With bias, you have that in place, right? We can detect when, you know, there's a misrepresentation of classes within the model. In some cases there's human bias that gets in there, right, but it's having that visibility. It's the same case with the threat to human dignity: with that visibility comes the introspection where you can make those decisions. You see more about the problem.
Grant
Mm hmm. Yeah, yeah. So if we were able to determine that we have a bad actor when there's not transparency, that would be a way we could help protect the dignity of humans through this. All right, that's reasonable. So let's move on to something that, again, sounds Hollywood-ish, but I'm not sure it is: weaponization of AI. What are the ethics around this? I'll just throw that one on the table. Carlos, you wanna start with that one?
Carlos
Sure. I mean, when we talk about AI and the advancements of it, you quickly go to weaponization. But really, weaponization has two different pieces to it, right? Obviously, it depends on which side of that fence you're on, whether you view the technology as beneficial or detrimental. In some cases, the same AI technology that is helping a pilot navigate also helps a guided missile system, or something like that. So we really have to balance, and it goes back to use cases and how we apply that technology as a people. But you know, weaponization, the rise of the machines, these kinds of questions: while they're kind of out there, they're affecting society today. And we have to be able to have productive conversations around what we believe is good and bad here, while still allowing technology to succeed. So there's a lot of advancement in weaponization and AI in that space, but I think we have to take it on a case-by-case basis, and not make a blanket statement that we can't use technology in these ways.
Grant
Interesting thoughts. What are your thoughts there, Elizabeth?
Elizabeth
Yeah, you know, it makes me think of sort of turning it on its head: when is it unethical not to use AI, right? Some of those questions come up when we're talking about weaponization; you can also be talking about saving human lives and making it safer for people to do some of these operations. And that same question can come up in some of the medical use cases, right? Here in the US, we have a lot of challenges around being able to use AI in medical use cases. There are some where you can have really good human oversight, you can have reproducibility of those models, they can be as explainable as possible, but it's still really, really difficult to get FDA approval there. So again, I think there's two sides to that coin.
Grant
And, yeah, it's an interesting conversation to have, because in that medical case you talked about, you could see the value of taking the same kind of technology that would be used to identify a human target and then attack it, and instead using that same capability in a search-and-rescue sort of scenario, right? Where you're flying something overhead, and you're trying to find, you know, pictures or images of people that might be lost out there. Same kind of thing, right? So where, how... go ahead, you were gonna say something.
Elizabeth
And there are even simpler cases in medical, where, you know, there's a shortage of radiologists right now in the US, and you can use AI to triage some of that imaging, because right now people are having to, in some cases, wait a really long time to get their imaging reviewed. So can, and should, AI help there? There's also another one along those same lines: with things like CT scans, you can use what's called super-resolution, or de-noising the image. Basically, you can use much less radiation in the first place to take the image, and then use AI on top of it to essentially enhance the image. So again, you know, ultimately exposing the patient to less radiation. So yeah, it's pretty interesting when we can and can't use it.
Carlos
Yeah. And just to add a little bit to the "can and can't," right: advancements in drug discovery have largely been driven through AI, and in the same fashion, the weaponization of various drugs, or other types of drugs, has also benefited from AI. So, from society's perspective, you know, you really have to evaluate not only the greater good, but that ultimate use case: where do you want to take a stance around that technology piece? Understanding both sides really provides the discussion space that's needed. You have to be able to ask really honest questions about problems that, you know, you can see coming in the future.
Grant
So is the safeguard, through all of this topic around ethics, basically the moral compass that's found in the humans themselves? Or do we need to have, you know, legislative or policy bodies, right, that put this together? Or is it a blending? What's your take?
Elizabeth
Um, it's interesting: the UK just came out with a national AI strategy, and they are basically trying to build an entire AI assurance industry. Their approach is, they want to keep it so that you can be innovative in the space, right? They don't want to make it so regulatory that you can't innovate. But they also want to make sure that there's consumer trust in AI. So they're putting together, from a national perspective, guidelines and tests and ways to give consumers confidence in whether a model is, you know, reproducible, accurate, etc., while at the same time not stifling innovation, because they know how important AI is to, essentially, a country's ability to compete, and the opportunities for GDP that it provides as well.
Grant
Hmm, absolutely. Yeah. Go ahead, Carlos.
Carlos
No, I think it's your question: left alone, could we kind of govern ourselves? I think we've proven that we can't do that as a people, right? So we need to have some sort of regulatory committee around the review of these things. But it has to be in the light of, you know, wanting to provide a better experience, higher quality, and deliver value, right? And I think when you start with how we get the technology adopted, in place, and deployed in a fashion where society can benefit, you start making your decisions around, you know, what the good pieces are, and you really start to see the outliers: hey, wait a second, that doesn't quite conform to the guidelines that we wanted to implement this with.
Elizabeth
And I think, also to your question, it's happening at a lot of levels, right? So there's, you know, state regulation around privacy and the use of AI and facial recognition. The FDA is putting together some regulation. And then there are also individual companies, right? People like Microsoft, etc., have big groups around, you know, ethics and how AI should be used by them as a company. So I think it's happening at all levels.
Grant
Yeah, like we said, as a people we need to have some level of governing bodies around this too. And of course, that's never the end-all protection, for sure, but it is a step in the right direction to help with monitoring and governance. Okay, so last question, right? This is gonna sound a little bit tangential, if I could use that word. Given the state of AI where it is today, is it artificial intelligence, or is it augmented intelligence?
Carlos
I can go with that. So I think it's a little bit of both. I think the result is to augment our intelligence, right? We're really trying to make better decisions. Some of those are automated, some of those are not; we're really trying to inform a higher-quality decision. And yes, it's being applied in an artificial intelligence manner, because that's the technology we're applying, but it's really to augment our lives, right? And we're using it in a variety of use cases; we've talked about a lot of them here. But there are thousands of use cases of AI that we don't even see today because they're so easy. Something as simple as searching on the internet: that's helping a lot with, you know, misspelling things, or not identifying exactly what you want, and a recommendation engine comes in and says, you know, I think you're looking for this instead. It's like, absolutely, thanks for saving me the frustration. We're really augmenting life at that point.
Grant
The reason why I asked that as part of this ethics piece is one of the things I've noticed as I work with organizations: there's a misunderstanding of how far AI can go and what it can do at times. And there's this misunderstanding of, therefore, what's my responsibility in this? My argument is, it's augmented intelligence in terms of its outcome, and therefore we can't absolve ourselves of the outcomes and pass them off to the AI and say, oh well, it told me to do this. In the same breath, we can't absolve ourselves and say we're not responsible for the use cases either, and the way in which we use it. So we, as a human race, own the responsibility to pick and apply the right use cases, to even be able to challenge the AI insights and outcomes, and then to take ownership of what the impacts are. Agree, disagree?
Carlos
Yeah, I would really agree with that. And if you think about how it's implemented in many cases right now, the best use of AI is with human oversight, right? So, you know, the AI is maybe making an initial decision, and then the human is reviewing it, or, you know, making a judgment call based on that input. So it's sort of helping human decisioning instead of replacing human decisioning. And I think that's a pretty important guiding principle: wherever that may be necessary, we should do it. There's the Zillow case that happened recently, where they were using machine learning to automatically buy houses, and there was not enough human oversight in that, and I think they ended up losing something like $500 million in the process, right? So it's not really an ethics thing, but it's just an example where, in a lot of these cases, the best scenario is to have AI paired with human oversight.
Grant
Yeah, no, go right ahead. Yeah.
Elizabeth
You mentioned being able to challenge the AI, right, and that piece is really important in most of these cases, especially the one that was just mentioned, that Zillow case. Without the challenging piece, you don't have a path to improvement; you just kind of accept the result, and you get into deep trouble, like you saw there. But that challenging piece is really where innovation starts. You need to be able to go back and question: is this exactly what I want? And if it's not, how do I change it? Right? That's how we drive innovation in the space.
Grant
Well, and I would say that comes full circle to the platform I saw your organization developing, which is to reduce the time and effort it takes to cycle on that, right? To build the model, get the outcome, evaluate, challenge, make adjustments, without the effort to recast and rebuild the model becoming unaffordable or taking too much time. I need to be able to iterate on that quickly. And I think the platform you've developed, and others that I've seen, continue to reduce that, and it makes it easier for us to do that from a financially responsible and beneficial perspective. 100%.
Elizabeth
Yeah, one of the features that you mentioned was the versioning, and that really ties into a guiding principle of ethical use as well, which is reproducibility. If you want to use a model, you need to be able to reproduce it reliably, so that you're getting the same kinds of outputs. That versioning feature is one of the things we've put in there to help people, you know, comply with that type of regulation.
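As a minimal sketch of what reproducibility-oriented versioning can involve (illustrative only, not Plainsight's actual feature), one common pattern is to pin random seeds and fingerprint the exact data and configuration that produced a model:

```python
import hashlib
import json
import random

import numpy as np

def make_version_record(data_files, config, seed=42):
    """Build a model-version record: fixed seeds plus a fingerprint of inputs.

    Training the same code against the same record should yield the same
    model, which is the reproducibility property discussed above.
    """
    random.seed(seed)
    np.random.seed(seed)
    digest = hashlib.sha256()
    for path in sorted(data_files):  # stable order keeps the hash stable
        with open(path, "rb") as f:
            digest.update(f.read())
    digest.update(json.dumps(config, sort_keys=True).encode())
    return {"seed": seed, "config": config, "fingerprint": digest.hexdigest()}

# Hypothetical usage; store the record alongside the trained model artifact:
# record = make_version_record(["train.csv"], {"lr": 0.001, "epochs": 10})
```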
Grant
I've built enough AI models to know it's tough to go back to a particular version of an AI model and have reproducibility and accountability. I mean, there's a whole lot that goes into that, so that's exceedingly valuable. That's right. Yeah. Okay, any final comments from either of you?
Carlos
I think, from my side, I'm really interested to see where we go as a people with ethics in AI. We've touched on the transparency and visibility required to have these conversations around ethics and the ethical use of AI. But really, we're going to start seeing more and more use cases and solutions in our lives where we're going to butt up against these ethical questions, and having an open forum where we can discuss this is really up to us to provide. We have to provide the space to have these conversations, and in some cases arguments, around the use of the technology. And I'm really looking forward to, you know, what comes out of that; you know, how long it takes for us to get to that space where we're advancing the technology and addressing issues while we advance it.
Grant
Excellent. Thanks, Carlos. Elizabeth?
Elizabeth
Yeah, so for me, as a product person in particular, I'm really interested in the societal conversation that we're having, and the regulations that are starting to be put together, and the guidelines from larger companies, and from companies like ours that are, you know, contributing to this thought leadership. What's really interesting for me is being able to take that larger conversation and that larger knowledge base and distill it down into simple tools for people like small and medium businesses, so they can feel confident using AI, with these things just built in, sort of protecting them from making some mistakes. So I'm really interested to see how that evolves and how we can productize it to make it simple for people.
Grant
Yeah, yeah. Bingo. Exactly. Okay, everyone, I'd like to thank Carlos and Elizabeth for joining me here today. Wonderful conversation; I enjoyed that a lot. Thanks, everyone, for listening. And until next time, get some AI with your ethics.
Thank you for joining Grant on ClickAI Radio. Don't forget to subscribe and leave feedback. And remember to download your free ebook, visit ClickAIRadio.com now.