By Reed Smith
Emerging technology lawyers Therese Craparo, Anthony Diana and Howard Womersley Smith discuss the rapid advancements in AI in the financial services industry. AI systems have much to offer, but most bank compliance departments cannot keep up with the pace of integration. The speakers explain that if financial institutions turn to outside vendors to implement AI systems, they must work to achieve effective risk management that extends to those third-party vendors.
Regulatory lawyers Cynthia O’Donoghue and Wim Vandenberghe explore the European Union’s newly promulgated AI Act; namely, its implications for medical device manufacturers. They examine amazing new opportunities being created by AI, but they also warn that medical-device researchers and manufacturers have special responsibilities if they use AI to discover new products and care protocols. Join us for an insightful conversation on AI’s impact on health care regulation in the EU.
Reed Smith emerging tech lawyers Andy Splittgerber in Munich and Cynthia O’Donoghue in London join entertainment & media lawyer Monique Bhargava in Chicago to delve into the complexities of AI governance. From the EU AI Act to US approaches, we explore common themes, potential pitfalls and strategies for responsible AI deployment. Discover how companies can navigate emerging regulations, protect user data and ensure ethical AI practices.
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Andy: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape globally. Today, we'll focus on AI and governance, with a main emphasis on generative AI and a regional perspective, looking at Europe and the US. My name is Andy Splittgerber. I'm a partner in the Emerging Technologies Group of Reed Smith in Munich, and I'm also very actively advising clients and companies on artificial intelligence. Here with me, I've got Cynthia O'Donoghue from our London office and Nikki Bhargava from our Chicago office. Thanks for joining.
Cynthia: Thanks for having me. Yeah, I'm Cynthia O'Donoghue. I'm an emerging technology partner in our London office, also currently advising clients on AI matters.
Monique: Hi, everyone. I'm Nikki Bhargava. I'm a partner in our Chicago office and our entertainment and media group, and really excited to jump into the topic of AI governance. So let's start with a little bit of a basic question for you, Cynthia and Andy. What is shaping how clients are approaching AI governance within the EU right now?
Cynthia: Thanks, Nikki. The EU has, let's say, just received a big piece of legislation, which entered into force on the 1st of August, that regulates general-purpose AI and high-risk AI systems and bans certain AI practices. But that's only part of the European ecosystem. The EU AI Act will essentially interplay with the General Data Protection Regulation, the EU's Supply Chain Act, and the latest cybersecurity law in the EU, which is the Network and Information Security Directive No. 2. So essentially there's a lot for organizations to get their hands around in the EU, and the AI Act has phased dates of effectiveness. The biggest aspect of the EU AI Act in terms of governance lays out quite a lot, so it's a perfect time for organizations to start thinking about that and getting ready for the various aspects of the AI Act as they in turn come into effect. How does that compare, Nikki, with what's going on in the U.S.?
Monique: So, you know, the U.S. is still evaluating from a regulatory standpoint where it's going to land on AI regulation. That's not to say that we don't have legislation in place. We have Colorado, with the first comprehensive AI legislation that went in. And earlier in the year, we also had guidelines from the Office of Management and Budget to federal agencies about how to procure and implement AI, which has really informed the governance process. And I think a lot of companies, in the absence of regulatory guidance, have been looking to the OMB memo to help inform what their process may look like. The one thing I would highlight, because we're operating in this area of unknown and yet-to-come guidance, is that a lot of companies are looking to their existing governance frameworks right now and evaluating, from a company culture perspective, a mission perspective, and their relationship with consumers, how they want to develop and implement AI, whether internally or externally. And a lot of the governance process and program pulls guidance from some of those internal ethics as well.
Cynthia: Interesting. So I'd say it's somewhat similar in the EU, but I think, Andy, the US puts more emphasis on consumer protection, whereas the EU AI Act is more all-encompassing in terms of governance. Wouldn't you agree?
Andy: Yeah, that was also the question I wanted to ask Nikki: where she sees the parallels and whether organizations, in her view, can follow a global approach to AI governance. And as to the question you asked: yes, the European AI Act is more encompassing. It puts a lot of obligations on developers and deployers, meaning companies that use AI. In the end, of course, it also has consumer or user protection in mind, but the rules directly relating to consumers or users are, I would say, limited. So, Nikki, you know US law well and you have a good overview of European laws, while we are always struggling with the many US laws. What's your thought: can companies, in terms of AI governance, follow a global approach?
Monique: In my opinion? Yes, I do think there will be a global approach. The way the US legislates, what we've seen is a number of laws that govern certain uses and outputs first, perhaps because they were easier to pass than a comprehensive law. So we see laws that govern outputs in terms of the use of likenesses and right-of-publicity violations. We're also seeing laws come up that regulate the use of personal information in AI as a separate category. Outside the corporate-consumer context, we're also seeing a lot of laws around elections. And then, finally, we're seeing laws pop up around disclosure for consumers who are interacting with AI systems, for example AI-powered chatbots. But as I mentioned, the US is taking a number of cues from the EU AI Act. So, for example, Colorado did pass a comprehensive AI law, which speaks to both obligations for developers and obligations for deployers, similar to the way the EU AI Act is structured, and it focuses on what Colorado calls high-risk AI systems, as well as algorithmic discrimination, which I think doesn't exactly follow the EU AI Act but draws similar parallels and pulls a lot of principles from it. That's the kind of law which I really see informing companies on how to structure their AI governance programs, probably because, put simply, it requires deployers at least to establish a risk management policy and procedure and an impact assessment for high-risk systems. And, impliedly, it really requires developers to do the same, because developers are required to provide a lot of information to deployers so that deployers can take the legally required steps in order to deploy the AI system. And so, inherently, to me, that means developers have to have a risk management process themselves if they're going to be able to comply with their obligations under the Colorado law. So, because there are a lot of parallels between what Colorado has done, what we see in the OMB memo to federal agencies, and the EU AI Act, maybe I can ask you, Cynthia and Andy, to talk a little bit about some of the ways that companies approach setting up the structure of their governance program. What are some of the buckets that they look at, or what are some of the first steps that they take?
Cynthia: Yeah, thanks, Nikki. I mean, it's interesting because you mentioned the company-specific uses, internal and external. One thing, before we get into the governance structure, or maybe as part of thinking about the governance structure, is that the EU AI Act also applies to employee data and to the use of AI systems for vocational training, for instance. So in terms of governance structure, certainly from a European perspective, it's not necessarily about use cases, but really about whether you're using high-risk or general-purpose AI and, you know, some of the documentation and certification requirements that might apply to high-risk versus general-purpose systems. But the governance structure needs to take all those kinds of things into account. So, obviously, guidelines and principles about how people use external AI suppliers, how AI is going to be used internally, and what the appropriate uses are. If it's going to be put into a chatbot, which is the other example you used, what are the rules around acceptable use by people who interact with that chatbot, as well as how is that chatbot set up in terms of what it would be appropriate to use it for? So what are the appropriate use cases? Guidelines and policies are definitely foremost for that. And within those guidelines and policies, there are the other documents that will come along: terms of use, acceptable use, which I mentioned, and then guardrails for the chatbot. One of the big things for EU AI is human intervention, to make sure that if there are any anomalies, or if somebody tries to game it, there can be intervention. So, Andy, I think that dovetails into the risk management process, if you want to talk a bit more about that.
Andy: Yeah, definitely. I mean, the risk management process in the wider sense. How do organizations start this at the moment? First by setting up teams or, you know, responsible persons within the organization who take care of this, and we're going to discuss a bit later on what that structure can look like. Then, of course, there are the policies you mentioned, not only regarding use but also which process to follow when AI is being used, and even the questions: what is AI, how do we find out where in our organization we're using AI, and what is an AI system as defined under the various laws, also making sure we have a global interpretation of that term. A step many of our clients are taking at the moment is setting up an AI inventory, and that's already a very difficult and tough step. The next one is then, per AI system that comes up in this register, to define the risk management process. And of course, that's the point where, in Europe, we look into the AI Act and ask what kind of AI system we have: high risk or another defined category. Today we're talking a bit more about generative AI systems. There, for example, the European AI Act puts strong obligations on the providers of such generative AI, so less on companies that use generative AI and more on those that develop and provide it, because they have the deeper knowledge of what kind of training data is being used. They need to document how the AI works, and they need to register this information with a centralized database in the European Union. They also need to give some information on copyright-protected material contained in the training data. So there are quite some documentation requirements, and also logging requirements, to make sure the AI is used responsibly and does not trigger higher risks. There are also two categories of generative AI that can be qualified. So that's roughly the risk management process under the European AI Act. And then, of course, organizations also look at risks in other areas: copyright, data protection, and also IT security. Cynthia, I know IT security is one of the topics you love. Can you add some more on IT security here, and then we'll see what Nikki says for the US?
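For readers who want the inventory step in concrete terms, here is a minimal sketch in Python of how an organization might record AI systems and assign a first-pass risk tier. The field names, tiers and triage rules are illustrative assumptions for this sketch only; they are not terminology taken from the EU AI Act or from the speakers.

```python
from dataclasses import dataclass, field

# Illustrative tiers loosely inspired by the structure discussed above;
# simplified assumptions, not the Act's legal categories.
RISK_TIERS = ("prohibited", "high_risk", "general_purpose", "minimal")

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI inventory."""
    name: str
    provider: str                # internal build or third-party vendor
    purpose: str                 # business use case, e.g. "HR screening"
    uses_personal_data: bool
    is_generative: bool
    risk_tier: str = "minimal"
    open_questions: list = field(default_factory=list)

def triage(record: AISystemRecord) -> AISystemRecord:
    # Very rough first-pass triage to flag records that need full legal review.
    if record.purpose.lower() in {"hr screening", "credit scoring"}:
        record.risk_tier = "high_risk"
        record.open_questions.append("run full risk/impact assessment")
    elif record.is_generative:
        record.risk_tier = "general_purpose"
        record.open_questions.append("check provider documentation and training-data disclosures")
    return record

inventory = [
    triage(AISystemRecord("ResumeRanker", "Acme AI", "HR screening", True, False)),
    triage(AISystemRecord("DraftBot", "internal", "marketing copy", False, True)),
]
for item in inventory:
    print(item.name, "->", item.risk_tier, item.open_questions)
```

In practice the triage rules would be set by counsel and compliance rather than hard-coded, but even a simple register like this makes the later per-system risk assessment much easier to organize.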
Cynthia: Well, obviously NIS2 is coming into force. It will cover providers of certain digital services, so it's likely to cover providers of AI systems in some way or other. And funnily enough, NIS2 has its own risk management process, so there's supply chain due diligence involved, which would have to be baked into a risk management process for that. And then ENISA, the EU's cybersecurity agency, has put together a framework for cybersecurity for AI systems. It's not binding, but it's certainly a framework that companies can look to for ideas on how best to ensure that their use of AI is secure. And then, of course, under NIS2, the various CSIRTs will be putting together various codes and have a network meeting in late September, so we may see more come out of the EU on cybersecurity in relation to AI. But obviously, just like any user of AI, companies are going to have to ensure that the provider of the AI has ensured that the system itself is secure, including if they're going to be putting training data into it, which of course is highly probable. I just want to say something about the training data. You mentioned copyright, and there's a difference between the EU and the UK. In the UK, you cannot mine data for commercial purposes. At one point, the UK was looking at an exception to copyright for that, but it doesn't look like that's going to happen. So there is a divergence there, but it stems from historic UK law rather than being a result of the change from Brexit. Nikki, turning back to you again, we've talked a little bit about risk management. How do you think that might differ in the US, and what kind of documentation might be required there? Or is it a bit looser?
Monique: I think there are actually quite a few similarities to what we have in the EU. And Andy, this goes back to your question about whether companies can establish a global process. In fact, I think it's going to be really important for companies to see this as a global process, because AI development is going to happen throughout the world, and it's really going to depend on where it's developed, but also where it's deployed and where the outputs are deployed. So I think taking a broader view of risk management will be really important in the context of AI, particularly given that the nature of AI is to process large swaths of information, really on a global scale, in order to make these analytics and creative development and content generation processes faster. So, just as a quick aside, I actually think what we're going to see in the US is a lot of pulling from what we've seen in the EU, and a lot more cooperation on that end. I agree that really starting to frame the risk governance process means looking at who are the key players that need to inform that risk measurement and tolerance analysis, and the decision-making in terms of how you evaluate, how you inventory, and then how you determine whether to proceed with AI tools. One of the things that I think makes it hopefully a little bit easier is to leverage, from a U.S. perspective, existing compliance procedures that we have, for example, for SEC compliance or privacy compliance or other ethics compliance programs, and make AI governance a piece of that, as well as expand on it. Because I do think that AI governance brings in all of those compliance pieces. We're looking at harms that may exist to a company not just from personal information, not just from security, not just from consumer unfair and deceptive trade practices, not just from environmental standpoints, but from a very holistic view of, not to make this a bigger thing than it is, kind of everything, right? Every aspect comes in. And you can see that in some of the questions that developers or deployers are supposed to be able to answer in risk management programs. For example, in Colorado, the information that you need to address in a risk management program and an impact assessment really has to demonstrate an understanding of the AI system: how it works, how it was built, how it was trained, what data went into it. And then, what is the full range of harms? So, for example, the privacy harms, the environmental harms, the impact on employees, the impact on internal functions, the impact on consumers if you're using it externally. You really need to be able to explain that. Whether you have to put out a public statement or not will depend on the jurisdiction. But even internally, you need to be able to explain it to your C-suite and make them accountable for the tools being brought in, or make it explainable to a regulator if they were to come in and say, well, what did you do to assess this tool and mitigate known risks? So, with that in mind, I'm curious: what steps do you think need to go into a governance program? What are the first initial steps? I always feel that we can start in so many different places, depending on how a company is structured or what its existing compliance pieces are. But I'm curious to know from you: what would be one of the first steps in beginning the risk management program?
Cynthia: Well, as you said, Nikki, one of the best things to do is leverage existing governance structures. If we look, for instance, at how the EU is setting up its own public authorities to look at governance, you've got, as I mentioned at the outset, almost a multifaceted team approach, and I think it would be the same for companies. The EU anticipates that there will be an AI officer, but obviously there have got to be team members around that person: people with subject matter expertise in data, subject matter expertise in cyber, and then people with subject matter expertise in relation to the AI system itself, the training data that's been used, how it's been developed, how the algorithm works, whether or not there can be human intervention, what happens if there are anomalies or hallucinations in the data, and how that can be fixed. So I would have thought that part of the implementation is looking at the governance structure and then starting from there. And then, obviously, we've talked about some of the things that go into the governance. But we have clients who are looking first at a use case and then asking: okay, what are the risks in relation to that use case? How do we document it? How do we log it? How do we ensure that we can meet our transparency and accountability requirements? What other due diligence and other risks are out there, the blue-sky thinking that we haven't necessarily thought about? Andy, any thoughts?
Andy: Yeah, that's, I would say, one of the first steps. Even though not many organizations allocate the core AI topic to the data protection department, but rather to the compliance or IT area, in setting up the governance process and that structure we see a lot of similarities to the GDPR governance structure for data protection. I think back five years to the implementation of, or getting ready for, GDPR: planning and checking what other rules we need to comply with, who we need to involve, getting the plan ready, and then working along that plan. That's the phase where we see many of our clients at the moment. Nikki, more thoughts from your end?
Monique: Yeah, I think those are excellent points. What I have been talking to clients about is first establishing the basis of measurement that we're going to evaluate AI development or procurement against. What are the company's internal principles and risk tolerances, and how do we define them? Then, based on those principles and those metrics, putting together an impact assessment, which borrows a lot from what you both said; it borrows a lot from the concept of impact assessments under privacy compliance. The aim is to ask the right questions and put together the right analytics in order to measure whether an AI tool that's in development is living up to those metrics, or whether something we are procuring is meeting those metrics, and then to analyze the risks that come out of that. The impact assessment is going to be really important in helping make those initial determinations. But also, and this is not just my feeling, this is something that is also required under the Colorado law, setting up an impact assessment and then repeating it annually, which I think is particularly important in the context of AI, especially generative AI, because generative AI is a learning system. It is going to continue to change. There may be additional modifications made in the course of use that will require reassessing: is the tool working the way it is intended to work? What has our monitoring of the tool shown? And what processes do we need to put into place to mitigate the tool going a little bit off path, AI drift, more or less? Or, if we start to identify issues within the AI, what processes do we have internally to redirect the ship in the right direction? So I think impact assessments are going to be a critical tool in shaping the rest of the risk management process that needs to be in place.
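As a rough illustration of the recurring impact assessment described here, the sketch below (Python, with invented field names) records one assessment and flags when the annual review has lapsed. It is not a template drawn from the Colorado law or from any speaker's materials.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ImpactAssessment:
    """Hypothetical record of one AI impact assessment."""
    system_name: str
    completed_on: date
    known_harms: list = field(default_factory=list)   # e.g. privacy, discrimination
    mitigations: list = field(default_factory=list)
    review_interval_days: int = 365                   # annual reassessment, as discussed above

    def is_due_for_review(self, today=None):
        # True once the review interval has elapsed since the last assessment.
        today = today or date.today()
        return today - self.completed_on >= timedelta(days=self.review_interval_days)

assessment = ImpactAssessment(
    system_name="ResumeRanker",
    completed_on=date(2023, 7, 1),
    known_harms=["algorithmic discrimination", "privacy"],
    mitigations=["human review of rejections", "quarterly bias testing"],
)
if assessment.is_due_for_review():
    print(f"{assessment.system_name}: schedule the annual reassessment")
```

The point of the sketch is simply that reassessment dates and known harms are tracked per system, so drift in a learning system triggers a fresh review rather than being discovered by accident.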
Andy: All right. Thank you very much. I think these were a couple of really good practical tips and, especially, first next steps for our listeners. We hope you enjoyed the session today, and we look forward to any feedback, either here in the comment boxes or directly to us. And we hope to welcome you soon to one of our next episodes on AI and the law. Thank you very much.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Reed Smith partners share insights about U.S. Department of Health and Human Services initiatives to stave off misuse of AI in the health care space. Wendell Bartnick and Vicki Tankle discuss a recent executive order that directs HHS to regulate AI’s impact on health care data privacy and security and investigate whether AI is contributing to medical errors. They explain how HHS collaborates with non-federal authorities to expand AI-related protections; and how the agency is working to ensure that AI outputs are not discriminatory. Stay tuned as we explore the implications of these regulations and discuss the potential benefits and risks of AI in healthcare.
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Wendell: Welcome to our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI in healthcare. My name is Wendell Bartnick. I'm a partner in Reed Smith's Houston office. I have a degree in computer science and focused on AI during my studies. Now, I'm a tech and data lawyer representing clients in healthcare, including providers, payers, life sciences, digital health, and tech clients. My practice is a natural fit given all the innovation in this industry. I'm joined by my partner, Vicki Tankle.
Vicki: Hi, everyone. I'm Vicki Tankle, and I'm a digital health and health privacy lawyer based in Reed Smith's Philadelphia office. I've spent the last decade or so supporting health industry clients, including healthcare providers, pharmaceutical and medical device manufacturers, health plans, and technology companies, as they navigate the synergies between healthcare and technology, and advising on the unique regulatory risks that are created when technology and innovation far outpace our legal and regulatory frameworks. We're oftentimes left managing risks in the gray, which as of today, July 30th, 2024, is where we are with AI and healthcare. So when we think about the use of AI in healthcare today, there's a wide variety of AI tools that support the health industry. And among those tools is a broad spectrum of uses of health information, including protected health information, or PHI, regulated by HIPAA, both to improve existing AI tools and to develop new ones. If we think about that spectrum as measuring the value or importance of the PHI, the individual identifiers themselves, it may be easiest to understand the far ends of the spectrum and the risks at each end. Regulators and the industry have generally categorized uses of PHI in AI into two buckets: low risk and high risk. But the middle is more difficult, and that is where there can be greater risk, because it's where we find the use or value of PHI in the AI model to be potentially debatable. At one end of the spectrum, the lower risk end, there are AI tools such as natural language processors, where individually identifiable health information is not central to the AI model. Instead, for this example, it's the handwritten notes of the healthcare professional that the AI model learns from. With more data and more notes, the better the tool's recognition of the letters themselves, not the words the letters form, such as a patient's name, diagnosis, or lab results, and the better the tool operates. At the other end of the spectrum, the higher risk end, there are AI tools such as patient-facing next-best-action tools that are based on an individual patient's medical history, their reported symptoms, their providers, their prescribed medications, potentially their physiological measurements, or similar information, and they offer real-time customized treatment plans with provider oversight. Provider-facing clinical decision support tools similarly support the diagnosis and treatment of individual patients based on the individual's information. And then in the middle of the spectrum, we have tools like hospital logistics planners. So think of tools that consider when the patient was scheduled for an x-ray, when they were transported to the x-ray department, how long they waited before they got the x-ray, and how long after they received the x-ray they were provided with the results. These tools support population-based activities that relate to improving health or reducing costs, as well as case management and care coordination, which raises the question: do we really need to know that patient's identity for the tool to be useful? Maybe yes, if we also want to know the patient's sex, their date of birth, their diagnosis, their date of admission. Otherwise, we may want to consider whether this tool can be effective without that individually identifiable information. What's more, there's no federal law that applies specifically to the use of regulated health data in AI.
HIPAA was first enacted in 1996 to encourage healthcare providers and insurers to move away from paper medical and billing records and to get online. And while HIPAA has been updated over the years, the law remains outdated in that it does not contemplate the use of data to develop or improve AI. So we're faced with applying an old statute to new technology and data use, again operating in a gray area, which is not uncommon in digital health or for our clients. And to that end, there are several strategies that our HIPAA-regulated clients are considering when they think about permissible ways to use PHI in the context of AI: treatment, payment, and healthcare operations activities for covered entities; proper management and administration for business associates; certain research activities; individual authorizations; or de-identified information. These are all strategies that our clients are currently thinking through in terms of permissible uses of PHI in AI.
Wendell: So even though HIPAA hasn't been updated to apply directly to AI, that doesn't mean that HHS has ignored it. AI, as we all know, has been used in healthcare for many years, and in fact HHS has issued some guidance previously. The White House's Executive Order 14110, issued back in the fall of 2023 and titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," jump-started additional HHS efforts. So I'm going to talk about seven items in that executive order that apply directly to the health industry, and then we'll talk about what HHS has done since the executive order. First, the executive order requires the promotion of additional investment in AI and helps prioritize AI projects, including around safety, privacy, and security. Second, the executive order requires that HHS create an AI task force that is supposed to meet and create a strategic plan covering several AI topics, including AI-enabled technology, long-term safety and real-world performance monitoring, equity principles, safety, privacy and security, documentation, state and local rules, and the promotion of workplace efficiency and satisfaction. Third, HHS is required to establish an AI safety program that is supposed to identify and track clinical errors produced by AI and store them in a centralized database. And then, based on what that database contains, HHS is supposed to propose recommendations for preventing errors and avoiding harms from AI. Fourth, the executive order requires that all federal agencies, including HHS, focus on increasing compliance with existing federal law on non-discrimination; along with that come education and greater enforcement efforts. Fifth, HHS is required to evaluate the current quality of AI services, and that means developing policies, procedures, and infrastructure for overseeing AI quality, including with respect to medical devices. Sixth, HHS is required to develop a strategy for regulating the use of AI in the drug development process; of course, the FDA has already been regulating this space for a while. And seventh, the executive order actually calls on Congress to pass a federal privacy law. But even without that, HHS's AI task force is including privacy and security as part of its strategic plan. So given those seven requirements for HHS to cover, what has it done since the fall of 2023? Well, as of the end of July 2024, HHS has created a funding opportunity for applicants to receive money if they develop innovative ways to evaluate and improve the quality of healthcare data used by AI. HHS has also created the AI task force. Many of our clients are asking us about AI governance and what they can do to mitigate risk from AI, and the task force has issued a plan for state, local, tribal, and territorial governments related to privacy, safety, security, bias, and fraud. Even though that applies to the public sector, our private sector clients should take a look at it so they know what HHS is thinking in terms of AI governance. Along with this publication, NIST also produces several excellent resources that companies can use to help them with their AI governance journey. Also important is that HHS has recently restructured internally to try to consolidate its ability to regulate technology, and areas connected to technology, and place that under ONC.
And ONC, interestingly enough, has posted job postings for a chief AI officer, a chief technology officer, and a chief data officer. So we would expect that once those roles are filled, they will be highly influential in how HHS looks at AI, both internally and externally, and in the strategic thinking and position of HHS going forward with respect to AI. Our provider and tech clients have also been interested in how AI, and what HHS is saying about it, affects certified health IT. Earlier this year, ONC published the HTI-1 rule, which, among other things, establishes transparency requirements for AI that's offered in connection with certified health IT. The compliance deadline for that rule is December 31st of this year. HHS has also been focusing on non-discrimination, just as the executive order requires. And so our clients are asking: can they use AI for certain processes and procedures? In fact, it appears that HHS strongly endorses the use of AI and technology to improve patient outcomes; they've certainly not published anything that says AI should not be used. And in fact, CMS issued a final rule this year, along with FAQs, clarifying that AI can be used to process claims under Medicare Advantage plans, as long as there's human oversight and all other laws are complied with. So there is no indication at all from HHS that using AI is somehow prevented, or that companies should be worried about using it, as long as they comply with existing law. So after the White House executive order in the fall of 2023, HHS has had a lot of work to do. They've done some, but there's still a lot to do related to AI, and we should expect more guidance and activity in the second half of 2024.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's Emerging Technologies Practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
AI-driven autonomous ships raise legal questions, and shipowners need to understand autonomous systems’ limitations and potential risks. Reed Smith partners Susan Riitala and Thor Maalouf discuss new kinds of liability for owners of autonomous ships, questions that may occur during transfer of assets, and new opportunities for investors.
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting edge issues on technology, data and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Susan: Welcome to Tech Law Talks and our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. And today we will focus on AI in shipping. My name is Susan Riitala. I'm a partner in the asset finance team of the transportation group here in the London office of Reed Smith.
Thor: Hello, I'm Thor Maalouf. I'm also a partner in the transportation group at Reed Smith, focusing on disputes.
Susan: So when we think about how AI might be relevant to shipping, one immediate thing that springs to mind is the development of marine autonomous vessels. So, Thor, please can you explain to everyone exactly what autonomous vessels are?
Thor: Sure. So, according to the International Maritime Organization, the IMO, a maritime autonomous surface ship, or MASS, is defined as a ship which, to a varying degree, can operate independently of human interaction. That can include using technology to carry out various ship-related functions like navigation, propulsion, steering, and control of machinery, which can include using AI. In terms of real-world developments, at this year's meeting of the IMO's working group on autonomous vessels, which happened last month in June, scientists from the Korean Research Institute outlined their work on the development and testing of intelligent navigation systems for autonomous vessels using AI. That system is called NEEMO. It has undergone simulated and virtual testing, as well as inland water model tests, and it's now being installed on a ship with a view to being tested at sea this summer. Participants in that conference also saw simulated demonstrations from other Korean companies, like the familiar Samsung Heavy Industries and Hyundai, of systems that they're trialing for autonomous ships, which include autonomous navigation systems using a combination of AI, satellite technology and cameras. And crewless coastal cargo ships are already operating in Norway, while a crewless passenger ferry is already being used in Japan. Now, fundamentally, autonomous devices learn from their surroundings and complete tasks without continuous human input. This can range from simple automated tasks on a vessel to a vessel that can conduct its entire voyage without any human interaction. The IMO has worked on categorizing a spectrum of autonomy using different degrees and levels of automation. The lowest level still involves some human navigation and operation, and the highest level does not. So, for example, the IMO has Degree 1 of autonomy: a ship with some automated processes and decision support, where there are seafarers on board to operate and control shipboard systems and functions, but where some operations can at times be automated and unsupervised. As that moves up through the degrees, we get to, for example, Degree 3, where you have a remotely controlled ship without seafarers on board; the ship is controlled and operated from a remote location. That goes all the way up to Degree 4, the highest level of automation, where you have a fully autonomous ship whose operating systems are able to make their own decisions and determine their own actions without human interaction.
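To keep the IMO's degrees of autonomy straight, here is a small illustrative mapping in Python. The short labels paraphrase the descriptions above rather than quoting official IMO wording, and Degree 2, which the discussion skips over, is commonly described as a remotely controlled ship with seafarers still on board.

```python
from enum import IntEnum

class MASSDegree(IntEnum):
    """IMO degrees of autonomy for Maritime Autonomous Surface Ships (paraphrased)."""
    DECISION_SUPPORT = 1   # seafarers on board; some processes automated, at times unsupervised
    REMOTE_CREWED = 2      # remotely controlled, but seafarers still on board
    REMOTE_UNCREWED = 3    # remotely controlled, no seafarers on board
    FULLY_AUTONOMOUS = 4   # the ship's systems decide and act without human interaction

def has_onboard_crew(degree: MASSDegree) -> bool:
    # Illustrative check of the kind an owner, insurer or financier might care about.
    return degree in (MASSDegree.DECISION_SUPPORT, MASSDegree.REMOTE_CREWED)

print(has_onboard_crew(MASSDegree.REMOTE_UNCREWED))  # False
```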
Susan: Okay, so it seems like from what you said, there are potentially a number of legal challenges that could arise from the increased use of autonomy in shipping. So for example, how might the concept of seaworthiness apply to autonomous vessels, especially ones where you have no crew on board?
Thor: Yeah, that's an interesting question. So the requirement for seaworthiness is generally met when a vessel is properly constructed, prepared, manned and equipped for the voyage that's intended. In the case of autonomous vessels, which won't be conventionally crewed, the query turns to how a shipowner can actually warrant that a vessel is properly manned for the intended voyage where some systems are automated. What standard of autonomous or AI-assisted watchkeeping setup could be sufficient to qualify as having exercised due diligence? A consideration is, of course, whether responsibility for seaworthiness could actually be shifted from the shipowner to the manufacturer of the automated functions, or to the programmer of the software behind the automated functions on board the vessel. As you're aware, the concept of seaworthiness is one of many warranties that are regularly incorporated in contracts for the use of ships and for carriage of cargo. And a shipowner can be liable for the damage that results if there's an incident before which the shipowner has failed to exercise due diligence to make the ship seaworthy. In English law, this is judged by the standard of what level of diligence would be reasonable for a reasonably prudent shipowner. That's true even if there has been a subsequent nautical fault on board. But how much oversight and knowledge of the workings of an autonomous or AI-driven system could a prudent shipowner actually have? Are they expected to be a software or AI expert? Under the existing English law on unseaworthiness, a shipowner or a carrier might not be responsible for faults made by an independent contractor before the ship came into their possession or their orbit, so potentially faults made during the shipbuilding process. To what extent could any faults in an AI or autonomous system be treated in that way? Perhaps a shipowner or carrier could claim that a defect in an autonomous system came about before the vessel came into their orbit and that they are therefore not responsible for subsequent unseaworthiness or the incidents that result. There's also typically an exception to a shipowner's liability for navigational faults on board the vessel if that vessel has passed a seaworthiness test. But if certain crew and management functions have been replaced by autonomous AI systems on board, how could we assess whether or not there has actually been a navigational fault, for which the owners might escape liability, or instead a pre-existing issue of unseaworthiness, such as a pre-existing hardware or software glitch? This opens up a whole new line of inquiry as to what might have happened behind the software code or the protocols of the autonomous system on board, and the legal issues of the responsibility of the shipowner, and the applicable liability for any incidents which might have been caused by unseaworthiness, are going to involve significant legal inquiry in new areas when it comes to autonomous vessels.
Susan: Sounds very interesting. And I guess that makes me think of a wider issue, of which crewing is only one part, which would be standards and regulations relating to autonomous vessels. Obviously, as a finance lawyer, that is something my clients will be particularly interested in: what standards are in place so far for autonomous vessels, and what regulation can we expect in the future?
Thor: Sure. Well, the answer is that, at the moment, there's not very much. As I've mentioned already, the IMO has established a working group on autonomous vessels, and the aim of that working group is to adopt a non-mandatory, goal-based code for autonomous vessels, the MASS Code, which is intended to be in place by 2025. But, like I said, that will be non-mandatory, and it will then form the basis for what's intended to be a mandatory MASS Code, which is expected to come into force on the 1st of January 2028. Now, the MASS Code working group last met in May of this year and reported on a number of recommendations for inclusion in the initial voluntary MASS Code. Interestingly, one of those recommendations was for all autonomous vessels, even the fully autonomous degree-four vessels, to have a human being, a person in charge, designated as the master at all times, even if that person is remote. So that may rule out a fully autonomous, non-supervised vessel from being compliant with the code. So mandatory standards are still very much in development and not in force until 2028 at the earliest; that doesn't mean to say there won't be national regulations or flag regulations covering those vessels before then.
Susan: Right. And then I guess another area would be insurance. I mean, what happens if something happens to a vessel? I'm looking at it from a financing perspective, of course, but for shipowners as well, insurance will be the key source of recovery. So what kinds of insurance products are already available for autonomous vessels?
Thor: Well, it's good to know that some of the insurers are already offering products covering autonomous vessels. Just having googled what's available the other day, I came across the Shipowners' Club, which holds entries for between 50 and 80 autonomous vessels under its all-risks P&I cover. And it seems that Gard is also providing hull and machinery and P&I cover for autonomous vessels. So I can see that the industry is definitely taking steps to get to grips with cover for autonomous vessels, and hull and P&I cover is definitely out there. So we've covered some of the legal challenges and insurance, and what autonomous vessels are. I wonder, Susan, what other, more specific challenges people interested in financing autonomous vessels might face?
Susan: Sure. Yeah. I guess I'll preface this by saying that I'm an asset finance lawyer, so instinctively, when I think about financing autonomous vessels, I'm thinking about the asset itself, so either financing the construction or the acquisition of the vessel. But in terms of autonomous vessels in particular, there are boundless investment opportunities beyond just the vessel itself: financing some of the research and development, some of the corporate finance of the companies designing and building those vessels, and the technology used to operate them. So there is, I imagine, a vast opportunity here for an investor who's keen to get involved. From a commercial perspective, autonomous vessels are pretty new and pretty untested. Obviously, you've talked a lot about the fact that much of the regulation isn't completely there yet, and there's a lot of development still to come. So it takes quite a brave investor to put funding into it, and so far, at least, the return on investment is a bit uncertain. It's not like investing in a tanker or a bulk carrier, where you've got a known market: everyone knows what the problems are, everyone knows what the risks are and how to mitigate them. So in a lot of ways this is all still very, very new, both for the owners and for the financiers. But investors are very interested in sustainability solutions, and they're interested in what the next big thing is. So I imagine that autonomous ships are quite likely to appeal, with potentially better safety records and greater sustainability. That in turn would make the asset better value for investors and less likely to result in insurance claims or reputational damage arising from incidents and that sort of thing. From a legal perspective, it doesn't immediately seem that there would be a huge difference in taking a mortgage over an autonomous ship versus a manned one. But it becomes a bit more complicated if we start to think about enforcing that mortgage. In the traditional way to enforce a mortgage, the mortgagee will arrest the vessel in a suitable port. Depending on where the vessel is, the lender may need to instruct the borrower or the manager to sail the vessel to a suitable port. And if the borrower fails to do this, the lender can become a mortgagee in possession, take over the ship, sail it into a friendly port and apply for a judicial sale. But how are you going to do that if you can't just go on board and say to the master, hey, I've arrested this ship, I'm going to take over now? And thinking about, for example, the degree-three vessels where you'd have a remote operator redirecting the ship, what happens? Presumably the mortgagee would have to go to them and say, we'd like you to redirect this vessel. What if they refuse? Can the lender take over? Can they override the autonomous system or the remote operation, and would they have to? Would there be cybersecurity issues, issues with passwords and access and things like that? These are all big questions at the moment; no one has tried to do this yet. So it isn't really clear how all of this would fit in with the existing law on the rights of a mortgagee in possession, which is a very well-tested legal concept, but one that assumes physical control of the ship, which is not as obvious in an autonomous scenario as it would otherwise be.

And a connected issue, which I already mentioned, is the absence of a clear market, and this would be relevant in the context of a judicial sale. At least at the outset, valuing autonomous vessels could be difficult, and until there's a clearly defined secondhand market, it might be difficult for lenders to determine whether it's even worth enforcing in terms of the potential return they would get, because it's hard to analyze how much you might be able to get for the vessel. I'm not aware of any cases where someone has tried to do this. So the existing law will definitely need to develop, and these are going to be very interesting times as we navigate these changes in the market in relation to autonomous vessels.
Thor: Yeah, I can see that autonomy definitely throws up a whole bunch of issues for financing.
Susan: Definitely. I mean, at the moment, we don't entirely know all the answers, but we're definitely looking forward to finding out.
Thor: Right.
Susan: Thank you so much for joining us for our AI podcast today.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
In this episode, we explore the intersection of artificial intelligence and German labor law. Labor and employment lawyers Judith Becker and Elisa Saier discuss key German employment laws that must be kept in mind when using AI in the workplace; employer liability for AI-driven decisions and actions; the potential elimination of jobs in certain professions by AI and the role of German courts; and best practices for ensuring fairness and transparency when AI has been used in hiring, termination and other significant personnel actions.
Reed Smith partners Howard Womersley Smith and Bryan Tan, with AI Verify community manager Harish Pillay, discuss why transparency and explainability in AI solutions are essential, especially for clients who will not accept a “black box” explanation. Subscribers to AI models claiming to be “open source” may be disappointed to learn the model had proprietary material mixed in, which might cause issues. The session describes a growing effort to learn how to track and understand the inputs used in training AI systems.
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Bryan: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. My name is Bryan Tan and I'm a partner at Reed Smith Singapore. Today we will focus on AI and open source software.
Howard: My name is Howard Womersley Smith. I'm a partner in the Emerging Technologies team of Reed Smith in London and New York. And I'm very pleased to be in this podcast today with Bryan and Harish.
Bryan: Great. And so today we have with us Mr. Harish Pillay. And before we start, I'm going to ask Harish to tell us a little bit, well, not really a little bit, because he's done a lot, about himself and how he got here.
Harish: Well, thanks, Bryan. Thanks, Howard. My name is Harish Pillay. I'm based here in Singapore, and I've been in the tech space for over 30 years. And I did a lot of things primarily in the open source world, both open source software, as well as in the hardware design and so on. So I've covered the spectrum. When I was way back in the graduate school, I did things in AI and chip design. That was in the late 1980s. And there was not much from an AI point of view that I could do then. It was the second winter for AI. But in the last few years, there was the resurgence in AI and the technologies and the opportunities that can happen with the newer ways of doing things with AI make a lot more sense. So now I'm part of an organization here in Singapore known as AI Verify Foundation. It is a non-profit open-source software foundation that was set up about a year ago to provide tools, software testing tools, to test AI solutions that people may be creating to understand whether those tools are fair, are unbiased, are transparent. There's about 11 criteria it tests against. So both traditional AI types of solutions as well as generative AI solutions. So these are the two open source projects that are globally available for anyone to participate in. So that's currently what I'm doing.
Bryan: Wow, that's really fascinating. Would you say, Harish, that kind of your experience over the, I guess, the three decades with the open source movement, with the whole Linux user groups, has that kind of culminated in this place where now there's an opportunity to kind of shape the development of AI in an open-source context?
Harish: I think we need to put some parameters around it as well. The AI that we talk about today could never have happened if not for open-source tools. That is plain and simple. Things like TensorFlow and all the tooling that goes around trying to do the model building and so on could not have happened without open-source tools and libraries, the Python libraries and a whole slew of other tools. If these had all depended on non-open-source solutions, we would still be talking about how one fine day something is going to happen. So it's a given that that's the baseline. Now, what we need to do is to get this to the next level of understanding as to what it means when you say it's open source and artificial intelligence, or open-source AI, for that matter. Because now we have a different problem that we are trying to grapple with: the definition of what open-source AI is. We understand open source from a software point of view, from a hardware point of view. We understand that I have access to the code, I have access to the chip designs, and so on and so forth. No questions there. It's very clear and easy to understand. But when you talk about generative AI as a specific instance of open-source AI, I can have access to the models, I can have access to the weights, I can do those kinds of things. But what was it that made those models become the models? Where did the data come from? What's the data? What's the provenance of the data? Are these data openly available, or are they hidden away somewhere? Understandably, we have a huge problem, because in order to train the kind of models we're training today, it takes a significant amount of data and computing power. The average software developer does not have the resources to do that, unlike what we could do with a Linux environment or Apache or Firefox or anything like that. So there is this problem. So the question still comes back to: what is open-source AI? The Open Source Initiative, the OSI, is now in the process of formulating what it means to have open-source AI. The challenge we find today is that, because of the success of open source in every sector of the industry, you find a lot of organizations bending the term and throwing around the label "our stuff is open source, our stuff is open source" when it is not. And they are conveniently using it as a means to gain attention and so on. No one is going to come and say, hey, we have a proprietary tool; that ship has sailed, it's not going to happen anymore. But the moment you say, oh, we have a fancy open-source tool, everybody wants to come and talk to you. But the way they craft that open-source message is actually, quite sadly, disingenuous, because they are putting restrictions on what you can actually do. It is completely contrary to what open-source licensing means under the Open Source Initiative. I'll pause there for a while, because I threw a lot of stuff at you.
Bryan: No, no, no. That's a lot to unpack here, right? And there's a term I learned last week called AI washing. That's where people bandy the terms about, throw them together, and it ends up representing something it's not. But that's fascinating. You talked a little bit about being able to see what's behind the AI, and I think that's part of those 11 criteria you mentioned. Auditability and transparency would be among them. I think we're beginning to get into some of the challenges, the kind of pitfalls we need to look out for. But I'm going to put a pause on that and ask Howard to jump in with some questions of his own. I think he's got some interesting questions for you as well.
Howard: Yeah, thank you, Bryan. So, Harish, you spoke about the Open Source Initiative, which we're very familiar with, and particularly the kind of guardrails they're putting around how open source should be applied to AI systems. You've got a separate foundation. What's your view on where open source should feature in AI systems?
Harish: It's exactly the same as what OSI says. We make no distinction, because the moment you make a distinction, you bifurcate or completely fragment the entire industry. You need to have a single perspective, and a perspective that everybody buys into. It is a hard sell currently, because not everybody agrees with the various components inside there, but there is good reasoning behind some of the challenges. At the same time, if that conversation doesn't happen, we have a problem. But from the AI Verify Foundation's perspective, it is our code that we make. Our code, interestingly, is not an AI tool. It is a testing tool, written purely to test AI solutions, and it's under an Apache license. It's a no-brainer from a licensing perspective. It's not an AI solution in and of itself. It just takes an input, runs it through the tests, and spits out an output, and Mr. Developer, you take that and do what you want with it.
Howard: Yeah, thank you for that. And what about your view on open source training data? I mean, that is really a bone of contention.
Harish: That is really where the problem comes in, because I think we do have some open source training data, like the Common Crawl data and a whole slew of different components there. So as long as you stick to those that have been publicly available and you then train your models based on that, or you take models that were trained based on that, I think we don't have any contention or any issue at the end of the day. You do whatever you want with it. The challenge happens when you mix the training data, whether it was originally Common Crawl or any of the openly licensed content, with unlicensed content or content under proprietary licenses used without permission. Then we have a problem. And this is actually an issue that we have to collectively come to an agreement on as to how to handle. Should it be done on a two-tier basis? Should it be done with different nuances behind it? This is still a discussion that is ongoing, constantly ongoing, and OSI is carrying the bulk of the weight to make it happen. It's not an easy conversation to have, because there are many perspectives.
Bryan: Yeah, thank you for that. So, Harish, just coming back to some of the other challenges that we see: what kind of challenges do you foresee for the continued development of open source with AI in the near future? You've already said we've encountered some of them, and some of the problems are, in a sense, man-made, because a lot of us are rushing into it. What kind of challenges do you see coming up the road soon?
Harish: Part of the challenge, you know, it's an ongoing thing, is that not enough people understand this black box called the foundational model. They don't know how that thing actually works. Now, there is a lot of effort going into that space. This is a man-made artifact, this piece of software where you put something in and you get something out, or you get the model to look at a bunch of files and fine-tune against those files, and then you query the model and get your answer back, a RAG, for that matter. It is a great way of doing it. The challenge, again, goes back to the fact that people are finding it hard to understand how this black box does what it does. Now, let's step back and ask: have physics and chemistry and other sciences solved some of these problems before? We do have some solutions that we think make sense to look at. One of them is called Computational Fluid Dynamics, CFD. CFD is used, for example, if you want to do a fluid or flow analysis over the wing of an aircraft to see where the turbulence is. This is all well understood and mathematically sound. You can model it, you can do all kinds of things with it. You can do the same thing with cloud formation, with water flow and laminar flow, and so on and so forth. There's a lot of work that's already been done over decades. So the thinking now is: can we take those same ideas that have been around for a long time, which we understand, and see whether we can apply them to what happens in a foundational model? One of the ideas being worked on is something called PINN, which stands for Physics-Informed Neural Networks. So using standard physics to figure out how a model actually works. Once you have those things working, it becomes a lot clearer. And I would hazard a guess that within the next 18 to 24 months, we'll have a far clearer understanding of what is inside that black box we call the foundational model. With all these known ways of solving problems, who knew we could figure out how water flows, or how air turbulence happens over the wing of a plane? We figured it out. We have the math behind it. So that's where I feel we are solving some of these problems step by step.
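For listeners unfamiliar with physics-informed neural networks, here is a minimal sketch of the core idea in Python, assuming PyTorch is available. It only illustrates the general PINN technique Harish refers to, a network trained against a physics residual plus a boundary condition, not the specific methods being explored for interpreting foundation models.

```python
import torch

# Minimal PINN sketch: fit u(t) so that du/dt = -u (physics residual)
# and u(0) = 1 (boundary condition). The exact solution is u(t) = exp(-t).
torch.manual_seed(0)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

# Collocation points where the physics equation is enforced, plus the boundary point.
t_physics = torch.linspace(0, 2, 100).reshape(-1, 1).requires_grad_(True)
t0 = torch.zeros(1, 1)

for step in range(2000):
    optimizer.zero_grad()
    u = net(t_physics)
    # du/dt via automatic differentiation
    du_dt = torch.autograd.grad(u, t_physics, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    physics_loss = torch.mean((du_dt + u) ** 2)      # residual of du/dt = -u
    boundary_loss = (net(t0) - 1.0).pow(2).mean()    # u(0) = 1
    loss = physics_loss + boundary_loss
    loss.backward()
    optimizer.step()

# After training, the network should roughly approximate exp(-t) on [0, 2].
print(net(torch.tensor([[1.0]])).item())  # roughly exp(-1), about 0.37
```

The physics term constrains the network to behave the way the governing equation allows, which is what makes its behavior easier to reason about than a purely data-driven model.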
Bryan: And look, I take your point that we all need to try to understand this. And I think you're right. That is the biggest challenge that we all face. Again, when it's all coming thick and fast at you, that becomes a bigger challenge. Before I kind of go into my last question, Howard, any further questions for Harish?
Howard: I think what Harish just came up with, in terms of the explanation of how the models actually operate, is really the killer question everybody is posed with. The type of work that I do is on the procurement of technology for financial sector clients, and when they want to understand, in procuring AI, what the model does, they often receive the answer that it is a black box and not explainable, which kind of defies the logic of their experience with deterministic software, you know, if this, then that. Clients find it very difficult to get their heads around the answer being a black-box methodology, and they often ask: why can't you just reverse engineer the logic and plot a point back from the answer, as a breadcrumb trail to the input? Have you got any views on that sort of question from our clients?
Harish: Yeah, there's plenty of opportunity to do that kind of work. Not necessarily going back from a breadcrumb perspective, but using the example of the PINN, Physics-Informed Neural Networks. Not all of them can explain stuff today. No organization, and no CIO who is worth their weight in gold, should ever agree to an AI solution that they cannot explain. If they cannot explain it, you are asking for trouble. So that is a starting point. Don't go down the path just because your neighbor is doing it; that would be very silly from my perspective. So if we want to solve this problem, we have to collectively figure out what to do. I'll give you another example, an organization called KWAAI.ai. They are a nonprofit based in California, and they are trying to build a personal AI solution, and it's all open source, 100%. They are trying really, really hard to explain how it is that these things work. This is an open source project that people can participate in if they choose to and understand more, and at some point some of these things will become available as a model for other solutions to be tested against. Then let me come back to what the AI Verify Foundation does. We have two sets of tools that we have created. One is called the AI Verify Toolkit. What it does is this: if you have an application you're developing that you claim is an AI solution, great. Mr. Developer, put this as part of your tool chain, your CI/CD cycle. When you change some stuff in your code, you run it through this toolkit, and the toolkit spits out a bunch of reports. The report will tell you whether it is biased or unbiased, whether it is fair or unfair, whether it is transparent, a whole bunch of things. Then you, Mr. Developer, make a call and say, is that right or is that wrong? If it's wrong, you fix it before you actually deploy it. And this is a cycle that has to run continuously. That is for traditional AI. Now, take the same idea and look at generative AI. There's another project called Moonshot. That's the name of the project, Moonshot. It allows you to test large language models of your choosing, with some inputs, and see what outputs come out of the models you are testing. Again, you follow the same process. The important thing for people to understand, and developers to understand, and especially businesses to understand, is that, as you rightly pointed out, Howard, these are not deterministic outputs. These are all probabilistic outputs. So if I were to query a large language model at, like, 1 a.m. in London, by the time I ask the question at 10 a.m. in Singapore, it may give me a completely different answer. With the same prompt, exactly the same model, a different answer. Now, is the answer acceptable within your band of acceptance? If it is not acceptable, then you have a problem. That is one part of the understanding. The other part is that it suggests I have to continuously test my output, every single time, for every single output, throughout the life of the production system, because it is probabilistic. And that's a problem. That's not easy.
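The acceptance-band testing Harish describes for probabilistic outputs can be sketched in a few lines. The snippet below is a hypothetical illustration only, not the Moonshot or AI Verify Toolkit API: query_model, the keyword-based score, the number of runs, and the threshold are all placeholder assumptions you would replace with a real model client and your own evaluation metric.

```python
import statistics

# Hypothetical stand-in for whatever client calls your large language model;
# not a real Moonshot or AI Verify function.
def query_model(prompt: str) -> str:
    raise NotImplementedError("wire this up to your own LLM provider")

def score(answer: str, expected_keywords: list[str]) -> float:
    """Crude acceptance score: fraction of expected keywords present in the answer."""
    answer_lower = answer.lower()
    hits = sum(1 for kw in expected_keywords if kw.lower() in answer_lower)
    return hits / len(expected_keywords)

def within_acceptance_band(prompt: str, expected_keywords: list[str],
                           runs: int = 10, threshold: float = 0.8):
    """Because outputs are probabilistic, repeat the same prompt several times
    and check that every run scores inside the band you have decided is acceptable."""
    scores = [score(query_model(prompt), expected_keywords) for _ in range(runs)]
    return min(scores) >= threshold, statistics.mean(scores)
```

The point of the sketch is the shape of the test, not the metric: because the same prompt can yield different answers, the check has to be repeated, and it has to keep running for as long as the system is in production.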
Howard: Great. Thank you, Harish. Very well explained. But it's good to hear that people are trying to address the problem and we're not just living in an inexplicable world.
Harish: There's a lot of effort underway, a significant amount. MLCommons is another group of people, another open source project, out of Europe, doing this. The AI Verify Foundation, that's what we are doing, and we're working with them as well. And there are many other open source projects trying to address this real problem. So one of the outcomes that hopefully makes a lot of sense is that at some point the tools we have created, maybe multiple tools, can be used by some entity that is a certification authority, so to speak. It takes the tool and says, hey, Company A, Company B, we can test your AI solutions against these tools, and once it's done and you pass, we give you a rubber stamp saying you have been tested against them. That raises the confidence level from a consumer's perspective: oh, this organization has tested their tools against this toolkit. And as more people start using it, awareness that the tools are available becomes greater and greater. Then people can ask the question: don't just provide me a solution to do X. Was this tested against this particular set of tools, this testing framework? If not, why not? That kind of thing.
Howard: And that reminds me of the Black Duck software that tests for the prevalence of open source in traditional software.
Harish: Yeah, yeah. In some sense that is a corollary to it, but it's slightly different. And the thing is, it is about how one is able to make sure that you... I mean, it's just like ISO 9000 certification. I can set up the standards, but if I'm the standards entity, I cannot go and certify somebody else against my own standards. Somebody else must do it, right? Otherwise it doesn't make sense. So likewise, from the AI Verify Foundation's perspective, we have created all these tools. Hopefully this becomes accepted as a standard, and somebody else takes it and then goes and certifies people, or whatever else needs to be done from that point.
Howard: Yeah, and we do see standards a lot, you know, in the form of ISO standards covering things like software development and cybersecurity. That also makes me think about certification, which we are seeing appear in European regulation. We saw it in the GDPR, but it never came into production as something you could use to certify your compliance with the GDPR. We have now seen it appear in the EU AI Act. And because of our experience of not seeing it materialize under the GDPR, we're all questioning whether it will come to fruition in the AI Act, or whether we have learned about the advantages of certification and it will be a focus when the AI Act comes into force on the 1st of August. I think we have many years to understand the impact of the AI Act before certification starts to make even a small appearance.
Harish: It's one thing to have legislated or regulated aspects of behavior. It's another when you do it voluntarily because it makes sense, because then there is less hindrance, less resistance to doing it. It's just like ISO 9000, right? No one legislates it, but people still do it. Organizations still do it because they can say, oh yes, we are an ISO 9001 organization, we have quality processes in place and so on and so forth, which is good for those for whom that is important. It becomes a selling point. So likewise, I would love to see something like that for ISO 42001 and the series of AI-related standards. I don't think any one of them has anything that can be certified against yet. That doesn't mean it will never happen. So that could be another one, right? So again, the tools that the AI Verify Foundation creates and MLCommons creates and everybody feeds into, hopefully that makes sense. I'd rather see voluntary take-up than a mandated regulatory one, because things change, and it's much harder to change the rules than to do anything else.
Howard: Well, I think that's a question in itself, whether market forces will drive standardization, but it would probably take us way over our time. We could probably have our own session on that, but it's a fascinating subject. Thank you, Harish.
Bryan: Exactly. I think standards and certifications are possibly the next thing to look out for in AI, and Harish, you could be correct. But on that note, last question from me, Harish. Interestingly, you used the term moonshot. So, personally for you, what kind of moonshot wish would you have for open source and AI? Leaving aside resources, if you could choose, what kind of development would be the one you would look out for, the one that excites you?
Harish: For me, we need to go all the way back to the start, from an AI training perspective, right? So the data. We have to start from the data, the provenance of the data. We need to make sure that the data is actually okay to be used. Now, instead of everybody going off and doing their own thing, can we have a pool where, you know, I tap into the resources and then create my models based on a pool of well-known, well-identified data to train on? Then at least the outcome from that arrangement is that we know the provenance of the data, we know how it was trained, we can see the model, and hopefully in that process we also begin to understand how the model actually works, with whatever physics-related understanding we can throw at it. And then people can start benefiting from it and using it in a coherent manner. Instead, what we have today is, in a way, a Cambrian explosion, right? There are a billion experiments happening right now, and the majority, 99.9% of them, will fail at some point, and 0.1% needs to succeed. I think we are getting to the point where there are a lot more failures happening than successes. So my sense is that we need data that we can prove is okay to get and okay to use, and that is replenished as and when needed. And then you go through the cycle. That's really my, you know, moonshot perspective.
Bryan: I think there's really a lot for us to unpack and think about, but it's really been an interesting discussion from my perspective. I'm sure, Howard, you think the same. And with this, I want to thank you, Harish, for coming online and joining us this afternoon in Singapore, this morning in Europe, for this discussion. It's been really interesting from the perspective of somebody who's been in technology, and interesting for the Reed Smith clients who are looking at this from a legal and technology perspective. I just wanted to thank you for this. I also want to thank the people who are tuning in. Thank you for joining us on this podcast. Stay tuned for the other podcasts that the firm will be producing, and do have a good day.
Harish: Thank you.
Howard: Thank you very much.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's Emerging Technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
The rapid integration of AI and machine learning in the medical device industry offers exciting capabilities but also new forms of liability. Join us for an exciting podcast episode as we delve into the surge in AI-enabled medical devices. Product liability lawyers Mildred Segura, Jamie Lanphear and Christian Castile focus on AI-related issues likely to impact drug and device makers soon. They also give us a preview of how courts may determine liability when AI decision-making and other functions fail to get desired outcomes. Don't miss this opportunity to gain valuable insights into the future of health care.
----more----
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Mildred: Welcome to our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, myself, Mildred Segura, partner here at Reed Smith in the Life Sciences Practice Group, along with my colleagues, Jamie Lanphear and Christian Castile, will be focusing on AI and its intersection with product liability within the life sciences space. Especially as we see more and more uses of AI in this space, there's a lot of activity going on with respect to the regulatory landscape as well as the legislative landscape, but not a lot of discussion about product liability and its implications for companies doing business in this space. That's what prompted our desire and interest in putting together this podcast for you all. And with that, I'll have my colleagues briefly introduce themselves. Jamie, why don't you go ahead and start?
Jamie: Thanks, Mildred. I'm Jamie Lanphear. I am of counsel at Reed Smith based in Washington, D.C., in the Life Sciences and Health Industry Group. I've spent the last 10 years defending manufacturers in product liability litigation, primarily in the medical device and pharma space. I think, like you said, this is just a really interesting topic. It's a new topic, and it's one that hasn't gotten a lot of attention or a lot of airtime. You go to conferences these days and AI is front and center in a lot of the presentations and webinars, and much of the discussion is around regulatory, cybersecurity and privacy issues. And I think that in the coming years, we're going to start to see product liability litigation in the AI medical device space that we haven't seen before. Christian, did you want to go ahead and introduce yourself?
Christian: Yeah, thanks, Jamie. Thanks, Mildred. My name is Christian Castile. I am an associate at Reed Smith in the Philadelphia office. And much like Mildred and Jamie, my practice consists primarily of working alongside medical device and pharmaceutical manufacturers in product liability lawsuits. And Jamie, I think what you mentioned is so on point. It feels like everybody's talking about AI right now, and to a certain extent that can be intimidating, but we actually are at a really interesting vantage point, with the opportunity to get in on the ground floor of some of this technology and how it is going to shape the legal profession. As the technology advances, we're going to see new use cases popping up across industries and, of course, of particular interest to this group, in the healthcare space. So it's really exciting to be able to grapple with this headfirst, and the people who are investing in this now are going to have a real leg up when it comes to evaluating their risk.
Mildred: So thanks, Jamie and Christian, for those introductions. As we said at the outset, we're all product liability litigators, and based on what we're seeing, AI product liability is the next wave of product liability litigation on the horizon for those in the life sciences space. We're thinking very deeply about these issues and working with clients on them because of what we see on the horizon and what we're already seeing in other spaces in terms of litigation. That's what we're here to discuss today: the developments we're seeing in product liability litigation in these other spaces and the significant impact that litigation may represent for those of us in the life sciences space. And to level set our discussion today, we thought it would be helpful to briefly describe the kind of AI-enabled med tech or medical devices that we're seeing currently out there on the market. And I know, Jamie, you and I were talking about this in preparation for today's podcast, in terms of FDA-cleared devices. What are the metrics that we're seeing with respect to that and the types of AI-enabled technology?
Jamie: Sure. So we've seen a huge uptick in the number of medical devices that are incorporating artificial intelligence and machine learning. There are currently around 900 of those devices on the market in the United States, and more than 150 of those were authorized by FDA just in the last year. So we're definitely seeing a growing number, and we can expect to see a lot more in the years to come. The majority of these devices, about 75%, are in the field of radiology. So, for example, we now have algorithms that can assist radiologists when they're reviewing a CT scan of a patient's chest and highlight potential nodules that the radiologist should review. We see similar technology being used to detect cancer. There are algorithms that can identify cancerous nodules or lesions that may not even be visible to a radiologist because they are undetectable by the human eye. And then other areas where we're seeing these devices being used are cardiology and neurology.
Mildred: And I would add to that, we're also seeing it with respect to surgical robots, right? Even though we don't have fully autonomous surgical robots out there on the market, we do have some forms of surgical robots, and I think it's just on the horizon that we'll start to see, in the near future, these surgical robots using artificial intelligence-driven algorithms. Just the thought that we're moving in that direction, I think, makes this discussion so important. And not just in the medical device arena, but also within the pharma space, where you're seeing the use of artificial intelligence to speed up and improve clinical development, drug discovery, and other areas. So you can see where the risks lie just within that space alone, in addition to medical devices. And Christian, I know that you've been looking at other areas as well, so I wanted you to tell us a little bit about those.
Christian: Sure. Yeah, and very similar to the medical device space, there is a lot of really exciting room for growth and opportunity in the pharmaceutical space. We're seeing more and more technologies coming out that focus on streamlining things like drug discovery, using machine learning models, for example, to assist with identifying which molecules are going to be the most optimal to use in pharmaceutical products, and also with identifying mechanisms of action, to help explain some of the medicines and disease states that we're not able to explain as well today. And then looking even more broadly: you have, of course, these very specific use cases tied to the pharmaceutical products we're talking about, but you'll also see companies integrating AI into things like manufacturing processes, for example, and really working on driving the efficiency of the business, both from a product development standpoint and from a product production standpoint as well. So there are lots of opportunities here to get involved in the AI space and lots of ways to grapple with how to best integrate it into a business.
Mildred: And I think that brings us to the question of what product liability is. For those listeners who may not be as familiar with the law of product liability, just to level set here too: typically we're talking about three common types of product liability claims, right? You have your design defect, manufacturing defect, and failure to warn claims. Those are the typical claims that we see. And each of these claims is premised on a product that leaves a manufacturer's facility with the defect in place, either in the product or in the warning. These theories fit neatly for products that remain unchanged from the moment they leave the manufacturer's facility, such as consumer goods sold at retail. But what about when you start incorporating AI and machine learning technologies into these types of devices, which are going to be learning and adapting? What does that mean for these types of product liability claims? And what is the impact? How will the courts address and assess these claims as they start to see these types of devices and claims being made related to these technologies? And I think the key question that will come up, in the context of a product liability suit, is whether this AI-related technology is even a product. Historically, courts have viewed software as a service that is not subject to product liability causes of action. However, that approach may be evolving to reflect the fact that most products today contain software or are composed entirely of software. We're seeing some litigation in other spaces, which Jamie will touch on in a little bit, that shows a change in the trend we had been seeing, now moving in a different direction, which is something that we want to talk about. So maybe, Jamie, why don't you share a little bit about what we're seeing in connection with product liability claims in other spaces that may inform what happens in the life sciences space.
Jamie: Yeah. So there have been a few cases and decisions over the last few years that I think help inform what we can expect to see with respect to products liability claims in the life sciences space, particularly around devices that incorporate artificial intelligence and software. One of those cases is the social media products liability MDL out of the Northern District of California. There you have plaintiffs who have filed suit on behalf of minors, alleging that operators of various social media platforms designed these platforms to intentionally addict children, and that this has allegedly resulted in a number of mental health issues and the sexual exploitation of minors. Now, last year, the defendants filed a motion to dismiss, and there were a lot of issues addressed in that motion, a lot of arguments made. We don't have time to go through all of them. But the one I do want to talk about that is relevant to our discussion today is the defendants' argument that their social media platforms are not products, they're services, and as such, they should not be subject to product liability claims. That argument is really in line with the historical approach courts have taken toward software, meaning that software has generally been considered a service, not a product, so software developers have generally not been subject to product liability claims. And so that's what the defendants argued in their motion: that they were providing a platform where users could come, create content, share ideas; they weren't over in a warehouse making a good and distributing it to the general public, etc. The court did not agree. The court rejected the defendants' argument and refused to take what it called an all-or-nothing approach to evaluating whether the plaintiffs' design defect claims could proceed. Instead, the court took a more nuanced approach: it looked at the specific functions of these platforms that the plaintiffs were alleging were defective and evaluated whether each was more akin to tangible personal property or to ideas and content. So, for example, one of the claims the plaintiffs made was that the platforms lacked adequate parental controls and age verification. The court looked at the purpose of parental controls and age verification and said this has nothing to do with sharing ideas; this is more like products that contain parental controls, such as a prescription medicine bottle. The court went through this analysis for each of the other allegedly defective functions, and interestingly, for each, it concluded that the plaintiffs' product liability claims could proceed. What I think is huge to take away from this decision is that the court really moved away from the traditional approach courts have taken toward software with respect to product liability. And I think this really opens the door for more courts to do the same, specifically to expand products liability law, strict products liability, to various types of software and software functions, such that the developers of the software can potentially be held liable for the software they're developing. And while there have been a few one-off cases over the years, mostly in state court, in which the court has found that products liability law does apply to software, here we have a huge MDL with a significant number of plaintiffs in federal court.
And I think this case, or this decision at least, is going to have a huge impact on future litigation.
Mildred: And that's all really helpful, Jamie, in terms of the way you laid out the court's analysis. One important thing to highlight is that in this particular case, the plaintiffs brought their causes of action both in strict liability and in negligence. The reason that's important to us, and why it's of concern, is that plaintiffs typically bring these types of claims under a negligence standard, which involves a reasonable person standard and an assessment of whether there was a duty to warn. The court did look at some of that, whether there was a duty here to the plaintiffs, but it also addressed strict liability, which is the theory you don't typically see brought in the case of software applications. So the fact that you're seeing plaintiffs moving in this direction, asserting strict product liability claims in addition to negligence, is what's worth paying attention to. And this decision was at the motion to dismiss stage, so it will be interesting to see how it unfolds as the case moves forward through discovery and ultimately summary judgment. It's not the only case out there; there are other cases as well that are grappling with these issues. But in this particular case, as Jamie noted, the analysis was very detailed and nuanced in terms of how the court got to where it did. It did a very thoughtful analysis going through: is it software or a product? Once it answered that question, it moved, as Jamie noted, to analyzing each of the product claims being asserted. With failure to warn, it didn't really dive in because of the way the claim had been pleaded, but nevertheless, it's still a very important decision from our perspective. And that was within the product liability context. We've also seen developments in case law involving design defect allegations, not necessarily in the product liability context, but more so in the consumer protection space, if you will. Specifically, there's one case that we were talking about in preparation for this podcast involving a particular type of technology. What was it, Jamie, the specific technology at issue?
Jamie: Yeah, so the Roots case is extremely interesting. And although it's a consumer protection case, not a products case, I do think that it foreshadows the types of new theories that we can expect to see in products liability litigation involving devices that incorporate software, artificial intelligence, and machine learning. Roots Community Health Center is a California state case in which a community health center filed suit against manufacturers, developers, distributors, and sellers of pulse oximeters, which are those devices that measure the amount of oxygen in your blood. The plaintiffs are alleging that these devices do not properly measure oxygen levels in people with darker skin, that the level of skin pigmentation can and does affect the output these devices generate by overestimating the oxygen level for these individuals, and that as a result these individuals think they have more oxygen than they do, they appear healthier than they are, and they may not seek or receive the appropriate care. And the reason for this, according to plaintiffs, is that the developers of the software, when they were developing this device, did not take into account the impact that skin color could have; they essentially drew from data sets that were primarily white, and as such, they got results that largely apply to white folks. So this issue of bias is not one that I've ever seen raised as a theory of defect in a products case. And again, this isn't a products case, but I do expect to see this theory in products cases involving medical devices that incorporate artificial intelligence. The FDA has been very clear that bias and health equity are at the forefront of its efforts to develop guidelines and procedures specific to artificial intelligence and machine learning-enabled devices, particularly given that the algorithms depend on the data being used to generate output. And if the data is not reflective of the population who will be using the device and inclusive of groups like women, people of color, etc., the outputs for these groups may not be accurate.
Mildred: And what about with respect to failure to warn? We know as product liability litigators that one of our typical defenses to a failure to warn claim is the learned intermediary doctrine, right? Which means that a manufacturer's duty to warn runs to the physician. You're supposed to provide adequate warnings to the physician to enable them to discuss the risks and benefits of a given device or pharmaceutical or treatment with the patient. That's in the case of prescription medical devices or prescription pharmaceuticals. But what happens when you start incorporating AI and these machine learning technologies into a device, whether it's a medical device or something in the pharma space? What happens to that learned intermediary defense? Are you seeing anything that would change your mind on whether the learned intermediary doctrine is here to stay and not really going to change? If you ask me, I would say that based on what we're seeing so far, whether within the social media context or even in cases in the life sciences space that may not be specific to AI or machine learning, the fact that we're not yet at the stage where the technology is fully autonomous, that it's more assistive and augments what a physician is doing, means you will still have this learned intermediary between the patient and the manufacturer who can say: this treatment, whether delivered through a medical device or a pharmaceutical, is using this technology, here's what it will be doing for you, this is the way it will function, et cetera. But does that mean the manufacturer will have to make sure they're providing clear instructions to the physician? I think the answer is yes. And that's something the FDA, through the guidance it has put out, is looking at and has spoken to, Jamie, to your point, right? Not only with respect to bias; they're also looking to ensure that, to the extent these technologies are being incorporated, the instructions related to their use are adequate for the end user, which in many cases is the physician. But it also raises questions: as these technologies get more sophisticated, who will be liable? What happens when you start to see a more fully autonomous system making decisions that the physician just doesn't have the capacity to unpack or fully evaluate? Who's responsible then? I think that may explain a little bit of the reticence on the part of physicians to adopt these technologies. Ultimately, it's all about transparency, having clear, adequate information so they feel comfortable not only using the technology, but also knowing who ultimately will be responsible if, God forbid, something goes undetected, or the technology tells the doctor to do something and the doctor overrides it, situations like that. And so I don't know if you all have any additional thoughts on that.
Christian: It's very interesting, right, this learned intermediary concept, particularly because as we see this technology grow, we're going to see the bounds of this doctrine get stretched a little bit. And to your point, Mildred, transparency here is going to be important, not only with respect to who is making those decisions, but also with respect to how those decisions are being made. So when you're talking about how the AI is working and how the algorithms underlying this technology come to the conclusions they do, it's going to be really important in this warning context that everybody involved is able to understand specifically what that means: what is the AI doing, how is it doing it, and how does that translate to the medical service or the benefit that this product or pharmaceutical is providing? And that's all interesting and, I think, going to be incorporated in novel ways as we move forward.
Mildred: And Christian, sort of related to that, you mentioned regulation and guidance specific to FDA. What would you say about that in terms of how it goes hand in hand with product liability?
Christian: Absolutely. So, as we see increased levels of regulation and increased regulatory attention on this topic, I think one aspect that's going to be really critical to keep in mind is that the regulations that come out, especially at this beginning stage when we're still coming into a better understanding of the technology itself, are really going to represent the floor rather than the ceiling. And so it's going to be important for companies working in this space and thinking about integrating these technologies to think about how to come into compliance with these regulations, but also what the very specific concerns are that might be raised above and beyond these regulations. Jamie, you were talking about some of these social media cases where some of the injuries alleged are very specific to subpopulations of users of these social media platforms. How are we, for example, going to address the vulnerabilities in the population that our products are being marketed to while staying in compliance with the regulations as well? That interplay is going to be really interesting to watch: to what degree these legal theories are stretched above and beyond what we're used to seeing, and how that will shape the way the regulations are integrated into the business.
Jamie: You raise a great point, Christian, with respect to the regulations being a floor rather than a ceiling. I think there are a lot of companies out there that reasonably think that as long as they're following the regulations and doing what they're supposed to be doing on that front, that their risk in litigation is minimal or maybe even non-existent. But as we know, that's not the case. A medical device manufacturer can do everything right with respect to complying with FDA regulations and still be found liable in a courtroom. You know, plaintiff's lawyers often come up with pretty creative theories to put in front of a jury regarding the number of things the manufacturer, and I have my air quotes going over here, could or should have done but didn't. And these are often things that are not legally required or even practical sometimes. And ultimately, at least with respect to negligence, it's up to the fact finder to decide if the manufacturer acted reasonably. And while this question often involves considerations of whether the manufacturer complied with regulations and guidances and the like, compliance, even complete compliance, is not a bar to liability. And as product liability litigators, we see plaintiffs relying on a lot of the same theories, a lot of the same types of evidence, and a lot of the same arguments. And so having that base of knowledge and being able to share that with manufacturers and say, hey, look, I know we're not there yet. I know this litigation isn't happening today. But here are maybe some things that you can do to help mitigate your potential future risks or defend against these types of cases later on. And, you know, that's one of the reasons why we wanted to start this conversation.
Mildred: Yeah, and I would definitely echo that as well, Jamie, because as Christian mentioned, the guidance being put out by FDA, for instance, is really the floor in many ways and not the ceiling. It's about looking at the guidance to provide input and insight into, okay, here's what we should be doing with respect to the design of this algorithm that will then be used for this clinical trial or to deliver this specific type of treatment, as you illustrated with the Roots case involving the allegation of bias in pulse oximeters, and really looking at mitigating the potential risk that is foreseeable and can be identified. Obviously, not every risk may be identifiable, and that all gets into the negligence standard in terms of what is foreseeable and what isn't. But when you're dealing with these very sophisticated, complex technologies, the questions we're so used to dealing with in a normal product liability case will, I think, get more complex and nuanced as we start to see these types of cases within the life sciences space; we're already starting to see it within the social media context, as Jamie touched on earlier. So, because we're getting close to the end of our podcast, I would say some key takeaways are: first, monitoring the case law, even if it's not in the life sciences space, for instance in the social media space and other areas; second, monitoring what's going on in the regulatory space, because clearly there is a lot going on, and not just at FDA, you also have the Federal Trade Commission issuing guidance and speaking to these issues, and if you don't have a prescription medical device governed by FDA, then you most certainly need to look at what other agencies govern your specific technology, especially if you're partnering with a medical device manufacturer or pharmaceutical company. And of course, legislation: we know there's a lot of activity, both at the federal and state level, with respect to the regulation of AI. So there too, we have our eye on what, if any, legislation is coming out and how it will impact product liability. And Jamie and Christian, I know you both have other thoughts on some of the key takeaways.
Jamie: Yeah. So I want to return to this issue of bias and the importance of manufacturers making sure that they're looking at and taking into account the available knowledge, whether in scientific journals or medical literature, et cetera, related to how factors like race, age, and gender impact the medical risks, diagnosis, monitoring, and treatment of the condition that a particular device is intended to diagnose or treat. Monitoring those things is going to be really important. I also think being really diligent about investigating and documenting the reasons for making certain decisions typically helps. Not always, but usually in litigation, being able to show documentation explaining the basis for decisions that were made can be extremely helpful. So, for example, the FDA put out a guidance document related to a predetermined change control plan, which is something that was developed specifically for medical devices that incorporate artificial intelligence and machine learning. The plan is intended to set forth the modifications that manufacturers intend or anticipate will occur over time as the device develops and the algorithm learns and changes post-market. And one of the recommendations in that guidance is that the manufacturer engage with FDA early, before submitting the plan, to discuss the modifications that will be included. Now, it's not a requirement, but I expect that if a company elects not to do this, plaintiffs' counsel in a products case would say that is evidence the manufacturer was not reasonable, that the manufacturer could and should have talked to FDA and gotten FDA input, but didn't want to do that. Whereas if the manufacturer does do it, and there's evidence of discussions with the FDA and, even better, FDA's agreement with what the manufacturer ended up putting in its plan, that would be extremely useful to help defend against a products case, because you're essentially showing the jury that this manufacturer talked with FDA, ran the plan by FDA, FDA agreed, and the company did what even FDA thought was right. Now, that wouldn't be a bar to liability; it's not going to completely immunize a manufacturer, but it is good evidence to support that the company acted reasonably at the time and under the circumstances.
Christian: Yeah. And Jamie, you touched on so many of the important aspects here. I think the only thing I would add at this point is the importance of making sure that you understand the technology you're integrating. This goes hand in hand, Jamie, with much of what you just said about understanding who's making the decisions and why. Investing the energy upfront to ensure you're comfortable with the technology and how it works will allow you, moving down the line, to be much more efficient in the way you respond, whether that's to regulatory modifications or to legal risk. It will just put you in a much stronger position if you are able to really explain and understand what that technology is doing.
Mildred: And I think that's the key, Christian, as you said: being able to explain how it was tested and that the testing was robust. Yes, of course it met the guidance, and if there's a regulation, even better. But also that all reasonable measures were taken to ensure the technology being used is safe and effective, and that you tried to identify all of the potential risks that could be known based on the anticipated way the technology works. With that, of course, we could do a whole podcast just on mitigating the risk, and I think that will be a topic we focus on in one of our subsequent podcasts. But unless Jamie or Christian, you have any other thoughts, that brings us to the end of our podcast.
Jamie: I think that pretty much covers it. Of course, there's a lot more detail we could get into with respect to the various theories of liability and what we're seeing and the developments in the case law and steps companies can be taking now, but maybe we can save that for another podcast.
Christian: And I completely agree. I think there's going to be so much to dig into over the next few months and years to come, so we're looking forward to it. Thank you, everybody, for listening to this episode of Tech Law Talks, and thank you for joining Mildred, Jamie, and me as we explore the dynamics between AI technologies and the product liability legal landscape. Stay connected by listening to this podcast moving forward; we're looking forward to putting out new episodes on AI and other emerging technologies. And we look forward to speaking with you soon.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's Emerging Technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Reed Smith and its lawyers have used machine-assisted case preparation tools for many years (and launched the Gravity Stack subsidiary) to apply legal technology that cuts costs, saves labor and extracts serious questions faster for senior lawyers to review. Partners David Cohen, Anthony Diana and Therese Craparo discuss how generative AI is creating powerful new options for legal teams using machine-assisted legal processes in case preparation and e-discovery. They discuss how the field of e-discovery, with the help of emerging AI systems, is becoming more widely accepted as a cost and quality improvement.
----more----
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
David: Hello, everyone, and welcome to Tech Law Talks and our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we're going to focus on AI in eDiscovery. My name is David Cohen, and I'm pleased to be joined today by my colleagues, Anthony Diana and Therese Craparo. I head up Reed Smith's Records & eDiscovery practice group, a big practice group, 70-plus lawyers strong, and we're very excited to be moving into AI territory. We've been using some AI tools and we're testing new ones. Therese, I'm going to turn it over to you to introduce yourself.
Therese: Sure. Thanks, Dave. Hi, my name is Therese Craparo. I am a partner in our Emerging Technologies Group here at Reed Smith. My practice focuses on eDiscovery, digital innovation, and data risk management. And like all of us, I'm seeing a significant uptick in interest in using AI across industries, and particularly in the legal industry. Anthony?
Anthony: Hello, this is Anthony Diana. I am a partner in the New York office, also part of the Emerging Technologies Group. Similarly, my practice focuses on digital transformation projects for large clients, particularly financial institutions, and I've also been dealing with e-discovery issues for more than 20 years, basically as long as e-discovery has existed. I think all of us on this call have. So looking forward to talking about AI.
David: Thanks, Anthony. And my first question is this: the field of e-discovery was one of the first to make practical use of AI, in the form of predictive coding and document analytics. Predictive coding has now been around for more than two decades. So, Therese and Anthony, how's that been working out?
Therese: You know, I think it's a dual answer, right? It's been working out incredibly well, and yet it's not used as much as it should be. I think that at this stage, the use of predictive coding and analytics in e-discovery is pretty standard. As you said, Dave, two decades ago it was very controversial, and there was a lot of debate and dispute in the industry about the appropriate use and the right controls and the like, and a lot of discovery fights around that. But at this stage, we've really gotten to a point where this technology is well understood and used incredibly effectively to appropriately manage and streamline e-discovery and to improve on discovery processes. I think it's far less controversial in terms of its use, and frankly, the e-discovery industry has done a really great job at promoting it and finding ways to use this advanced technology in litigation. One of the remaining challenges is that, while the lawyers who are using it are using it incredibly effectively, not enough people have adopted it. There are still lawyers out there that haven't been using predictive coding or document analytics in ways that they could be, to improve their own processes. I don't know, Anthony, what are your thoughts on that?
Anthony: Yeah, I mean, I think to reiterate this, the predictive coding that everyone's used to is machine learning, right? So it's AI, but it's machine learning. And I think it was particularly helpful just in terms of workflow and what we're trying to accomplish in eDiscovery: when we're trying to produce relevant information, machine learning made a lot of sense. I was a big proponent of it, and I think a lot of people are, because it gave a lot of control. The big benefit was that it allowed, I would call it, senior attorneys to have more control over what is relevant. So the whole idea is you would train the model by looking at relevant documents, and then you would have senior attorneys get involved and say, okay, what are the edge cases? The basic stuff was easy. For the edge cases, you could have senior attorneys look at them, make that call, and then use the technology to take what the senior attorney was thinking and apply it to help determine relevance. And you're not relying as much on the contract attorneys and that workflow. So it made a whole host of sense, frankly, from a risk perspective. I think one of the issues we saw early on is everyone was saying it was going to save lots of money. It didn't really save a lot of money, right? Partly because the volumes went up too much, partly because of the process. But from a risk perspective, I thought it was really good, because you were getting better quality, which I think is one of the most important things. And I think this is going to be important as we start talking about AI generally: in terms of processes, it was a quality play. It's a better process, and it's better at managing the risks than just having manual review. So that was the key to it, I think. As we talked about, there was lots of controversy about it. The controversy often stemmed from, I'll call it, the validation. We had lots of attorneys saying, I want to see the validation set. They wanted to see how the model was trained: you have to give us all the documents used to train it. And I think generally that fell by the wayside; that didn't really happen. One of the keys, though, and I think this is also true for all AI, is the validation testing, which Therese touched upon. That became critical. People realized that one of the things you had to do, as you were training the model and started seeing results, was some sampling and validation testing to see if the model was working correctly. And that validation testing was the defensibility that courts, I think, latched onto. And when we start talking about Gen AI, that's going to be one of the issues. People are comfortable with machine learning and understand the risks. One of the other big risks we all saw was that the data set would change, right? You have 10 custodians, you train the model, then you get another 10 custodians. Sometimes it didn't matter; sometimes it made a big difference and you had to retrain the model. So I think we're all comfortable with that. As Therese said, it's still not as prevalent as you would have imagined, given how effective it is, but that's partly because it's a lot of work, right?
And often it's a lot of work by, I'll say, senior attorneys in developing it, when it's still a lot easier to say, let's just use search terms, negotiate them, and then throw a bunch of contract attorneys on it and review what you get. It works, but I think that's still one of the impediments to it being used as much as we thought.
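For listeners who want to picture the predictive coding workflow Anthony describes, the following is a minimal, hypothetical sketch in Python of the two steps he highlights: training a model on documents coded by senior attorneys, and then validation testing by sampling the documents the model set aside. The document texts, labels, scikit-learn components and 0.5 cutoff are illustrative assumptions only, not any vendor's product or the firm's actual workflow.

# Illustrative only: a toy TAR-style workflow -- train a classifier on
# attorney-coded seed documents, then validate by sampling, as discussed above.
# Document texts and labels here are hypothetical placeholders.
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_docs = ["merger pricing discussion", "lunch plans", "draft purchase agreement", "fantasy football"]
seed_labels = [1, 0, 1, 0]  # 1 = coded relevant by a senior attorney, 0 = not relevant

vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression().fit(X_seed, seed_labels)

# Score the wider, uncoded population of documents.
population = ["final merger agreement", "holiday party invite", "pricing model attached"]
scores = model.predict_proba(vectorizer.transform(population))[:, 1]
predicted_relevant = [doc for doc, s in zip(population, scores) if s >= 0.5]

# Validation testing: draw a random sample of documents the model skipped and
# have reviewers check it, to estimate how much relevant material was missed.
skipped = [doc for doc, s in zip(population, scores) if s < 0.5]
validation_sample = random.sample(skipped, k=min(2, len(skipped)))
print("Model would promote for review:", predicted_relevant)
print("Random validation sample from the discard pile:", validation_sample)

In real matters the validation sample would be far larger and reviewed by attorneys, and the recall or elusion estimate from that sample is what supports the defensibility argument discussed above.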
Therese: And I think to pick up on what Anthony is saying, what's really important is that we have 20 years of experience using AI technology in the e-discovery industry. So much has been learned about how you use those models, the appropriate controls, how you get quality validation and the like. And there's so much from that to draw on as the use of AI increases in e-discovery, in the legal field in general, and even across organizations. There's a lot of value in leveraging the lessons learned and applying them to the emerging types of AI that we're seeing. The legal field needs to keep in mind that we know how to use this technology, we know how to understand it, and we know how to make it defensible. As we move forward, those lessons are going to serve us really well in facilitating more advanced uses of AI. So in thinking about the changes that may happen going forward, how do we think generative AI based on large language models is going to change e-discovery in the future?
Anthony: In terms of how generative AI is going to work, I have my doubts, frankly, about how effective it's going to be. We all know that these large language models are based on billions, if not trillions, of data points, but it's generic; it's all public information. That's what the model is built on. One of the things I want to see, as people start using generative AI, is how that is going to play when we're talking about confidential information. Almost everything our clients are dealing with in e-discovery is confidential. It's not public. So I understand the concept if you have a large language model built on billions and billions of data points, but it's a probability calculation, right? It's basically guessing what the next word is going to be based on that general population, not necessarily on some very esoteric area that you may be focused on for a particular case. So I think it remains to be seen whether it's going to work. The other area where I have concerns is the validation point. How do we show it's defensible? If you're going in and telling a court, oh, I used Gen AI and ran the tool, here's the relevant stuff based on prompts, what does that mean? How are we going to validate that? I think that's going to be one of the keys: how do we come up with a validation methodology that will be defensible and that people will be comfortable with? Again, machine learning was intuitive: I'm training the model on what a human being deemed responsive. So that, frankly, is easier to argue to a court and easier to explain to a regulator. When you say, I came up with prompts based on the allegations of the complaint, it's a little more esoteric, and I think it's a little harder for someone to get their head around. How do you know you're getting relevant information? So I think there are some challenges there, and I don't know how that's going to play out. I don't know, Dave, because I know you're testing a lot of these tools, what you're seeing in terms of how this is actually going to work in practice, using generative AI and these large language models and moving away from machine learning.
David: Yeah, I agree with you on the to-be-determined part, but I think I come in a little more optimistic, and part of it might be actually starting to use some of these tools. I think that predictive coding has really paved the way for these AI tools, because what held up predictive coding to some extent was that people weren't sure courts were going to accept it. Until the first opinions came out, Judge Peck's decision in Da Silva Moore and subsequent case decisions, there was concern about that. But once that precedent came out, and it's important to emphasize that the precedent wasn't just approving predictive coding, it was approving technology-assisted review, and this generative AI is really just another form of technology-assisted review. What it basically said is you have to show that it's valid. You have to do this validation testing. But the same validation testing that we've been doing to support predictive coding will work on large language model, generative AI-assisted coding. Essentially, you do the review and then you take a sample and you ask, was this review done well? Did we hit a high accuracy level? The early testing we're doing is showing that we are hitting even better accuracy levels than with predictive coding alone. And I should say that it has even improved in the six months or so that we've been testing; the companies building the software are continuing to improve it. So I am optimistic in that sense. But many of these products are still in development. The pricing is still either high or, in some cases, to be announced. And it's not clear yet that it will be cost-effective beyond current models of using human review and predictive coding and search terms. And they're not all mutually exclusive. I can see ultimately getting to a hybrid model where we still start with search terms to cut down on volume and then use some predictive coding, some human review and some generative AI. Ultimately, I think we'll get to the point where the price comes down and it will make review better and cheaper. But I also want to mention a couple of other areas of application in eDiscovery. Generative AI is really good at summarizing single large documents or even groups of documents. It's also extremely helpful in more quickly identifying key documents. You can ask questions about a whole big document population and get answers. So I'm really excited to see this evolution. I don't know when we're going to get there or where the price-effectiveness point is going to be, but I would say that in the next year or two, we're going to see it creep in and be used more and more effectively, and more and more cost-effectively, as we go forward.
Anthony: Yeah, that's fascinating. I can see that even in terms of document review: if AI is summarizing the document, you can make your relevance determination based on the summary. Again, we can all talk about whether that's appropriate or not, but it would probably help quite a bit. And I do think that's fascinating. Another thing I hear about is privilege logs. Using generative AI to draft privilege logs in concept sounds great, because obviously it's a big cost factor and the like. But as we've talked about, Dave and Therese, there are already tools available, meaning you can negotiate metadata logs and some of these other things that cut the cost down. So I think it remains to be seen. Again, I think this is going to be another arrow in your quiver, a tool to use, and you just have to figure out when you want to use it.
Therese: Yeah. And I think one of the things is not limiting ourselves to only thinking about document review. There's a lot of possibility with generative AI: witness kits, putting together witness outlines for depositions and the like. Not that we would ever just rely on that, but there's a huge opportunity there as a starting point, if you're using it appropriately and, to Dave's point, the price point is reasonable. You can do initial research. There are a lot of things it can do in the discovery realm, even outside of document review, that we should keep our minds open to, because it's a way of getting us to a base more quickly, more efficiently and, frankly, more cost-effectively. Then a person can take a look at that and augment it or build upon it to make sure it's accurate and appropriate for that particular litigation or that particular witness and the like. But I do think Dave really hit the nail on the head. I don't think we're only going to be moving to generative AI and abandoning other types of AI. There's a reason there are different types of AI: they do different things. What we are most likely to see is a hybrid, with some tools being used for some things and other tools being used for others. Eventually, as Dave already highlighted, we'll see the combination of different types of AI in the e-discovery process, and within the same tool, to get to a better place. I think that's where we're most likely heading. And as Dave said, that's where a lot of the vendors are actually focusing: on adding this additional AI into their workflow to improve the process.
David: Yeah. And it's interesting that some of the early versions are not really replacing the human review; they are predicting where the human review is going to come out. So when the reviewer looks at the document, it already tells you what the software says: is it relevant or not relevant? And it goes one step beyond, because it not only gives you the prediction of whether it's relevant or not, it also gives you a reason. So it can accelerate the review, and that can create great cost savings. But it's not just document review. Already, there are e-discovery tools out there that allow you to ask questions and query databases, but also build chronologies. And again, with that benefit of referencing you to certain documents, in some cases with hyperlinks. So it will tell you facts, or it will tell you answers to a question, and it will link back to the documents that support those answers. So I think there's great potential as this continues to grow and improve.
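As a rough illustration of the kind of output David describes, a relevance prediction paired with a one-sentence reason, here is a short, hypothetical Python sketch. The call_llm function is a placeholder stand-in for whatever language model a given e-discovery platform uses; the prompt format, JSON keys and sample text are assumptions made for illustration, not a description of any actual product.

# Illustrative only: how a generative-AI review tool might attach a relevance
# prediction and a short reason to each document, as described above.
# call_llm() is a hypothetical stand-in for whatever model API a vendor uses.
import json

def call_llm(prompt: str) -> str:
    # Placeholder: in a real tool this would call the vendor's language model.
    # Here we return a canned answer so the sketch runs end to end.
    return json.dumps({"relevant": True, "reason": "Discusses the disputed pricing terms."})

def review_document(doc_text: str, issue: str) -> dict:
    prompt = (
        f"Issue in the case: {issue}\n"
        f"Document: {doc_text}\n"
        "Answer in JSON with keys 'relevant' (true/false) and 'reason' (one sentence)."
    )
    return json.loads(call_llm(prompt))

suggestion = review_document("Email re: revised pricing schedule...", "breach of the pricing terms")
print(suggestion)  # the human reviewer still sees the document and confirms or overrides this

The design point is that the model's suggestion and reason sit alongside the document for a human reviewer to confirm or override, rather than replacing the review outright.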
Anthony: Yeah. And I would say also, let's think about the whole EDRM model, right? Preservation. We'll see what enterprises do, but on the enterprise side, using AI bots and the like for preservation, collection and so on, it'll be very interesting to see if these tools can be used to automate some of the standard workflows before we even get to the review. The other thing I think will be interesting, and this is one of the areas where we still have not seen broad adoption, is on the privilege side. We know, and we've done some analysis for clients, that using AI to look for privileged or highly sensitive documents is still something most lawyers aren't comfortable with. I don't know why; we've done it and it worked effectively, but that is still an area where lawyers have been hesitant. It'll be interesting to see if generative AI and the tools there can help with privilege, whether it's the privilege logs or identifying privileged documents. To your point, Dave, having the ability to say it's privileged, and here are the reasons, would be really helpful in doing privilege review. So it'll be interesting to see how AI works in that sphere as well, because it is an area where we haven't seen wide adoption of predictive coding or TAR for identifying privilege, and that's still a major cost for a lot of clients. All right, so then I guess where this all leads, and this is more future-oriented: do we think we're at the stage now, with generative AI, that there's a paradigm shift? We didn't see that paradigm shift, bluntly, with predictive coding. Predictive coding came out, and everyone said, oh my God, discovery is going to change forever, we don't need contract attorneys anymore, associates aren't going to have anything to do because you're just going to train the model. That clearly hasn't happened. Now people are making similar predictions with generative AI: we're not going to need to do document review anymore. And there is concern, generally in the industry, that since we're already using AI, this is an area where AI can take over basically the discovery function, where we're not using lots of lawyers and we're relying almost exclusively on AI, whether that's a combination of machine learning or just generative AI, doing lots of work with little or no input from lawyers. So I'll start with Dave there. What are your thoughts on where we'll be in the next three to five years? Are we going to see some tipping point?
David: Yeah, it's interesting. Historically, there's no question that predictive coding did allow lawyers to get through big document populations faster, and there were predictions that it was going to replace all human review. It really hasn't. But part of that has been the proliferation of electronic data. There's just more data than ever before, and more sources of data. It's not just email now; it's Teams and texts and Slack and all these different collaboration tools. So that increase in volume has partially offset the increase in efficiency, and we haven't seen any loss of attorneys. I do think that over the longer run there is more potential for Gen AI to replace attorneys who do e-discovery work and, frankly, to replace lawyers and other professionals and all other kinds of workers eventually. It's just going to get better and better; a lot of money is being invested in it. I'm going to go out on a limb and say that I think we may be looking at a whole paradigm shift in how disputes are resolved in the future. Right now, there's so much duplication of effort. If you're in litigation against an opposing party, you have your document set that your people are analyzing at some expense. The other side has their document set that their people are analyzing at some expense. You're all looking for those key documents, the needles in the haystack. There's a lot of duplicative effort going on. Picture a world where you could take all of the potentially relevant documents, throw them into the pot of generative AI, have the generative AI predetermine what's possibly privileged, and let lawyers confirm those decisions. But then let everyone, both sides and the court, query that pot of documents to ask: what are the key questions, what are the key factual issues in the case? Please tell us the answers, and the documents that go to those answers, and cut through a lot of the document review and document production that's going on now and that, frankly, uses up most of the cost of litigation. I think we're going to be able to resolve disputes more efficiently, less expensively, and a lot faster. I don't know whether that's five years into the future or 10 years into the future, but I'll be very surprised if our dispute resolution procedure isn't greatly affected by these new capabilities. Pretty soon, and when I say pretty soon, I don't know if it's five years or 10 years, I think judges are going to have their own AI assistants helping them resolve cases and maybe even drafting first drafts of court opinions as well. And I don't think it's all that far off into the future that we're going to start to see them.
Therese: I think I'm a little more skeptical than Dave on some of this, which is probably not surprising to either Dave or Anthony. Look, I don't see AI, as a general rule, replacing lawyers. I think it will change what lawyers do, and it may replace some lawyers who don't keep pace with technology. It's very simple: it's going to make us better, faster, more efficient. That's a good thing for our clients and a good thing for us. But the idea that AI will replace the judgment and decision-making of lawyers is, to me, maybe way out there in the future when the robots take over the world. I do think it may mean fewer lawyers, or lawyers doing different things. Lawyers who are well-versed in technology and can use it are going to be more effective and faster. I think you're going to see situations where it's expected to be used. If AI can draft an opinion or a brief in the first instance and save hours and hours of time, that's a great thing, and that's going to be expected. I don't see that ever being the thing that gets sent out the door, because you're still going to need lawyers looking at it, making sure it's right, updating it, making sure it's unique to the case, and making sure all the judgments that go into those things are appropriate. I do find it difficult to imagine a world, having been a litigator for so many years, where everyone says, sure, throw all the documents into the same pot and we'll all query it together. Maybe we'll get to that point someday; I find it really difficult to imagine. There's too much concern about the data, control over the data, sensitivity, privilege and all of those things. We've seen pockets of making data available through secure channels, so that you're not transferring it, where it's the same pool of data that would otherwise be produced, and maybe you're saving costs there. But I think it'll be a paradigm shift eventually, a paradigm shift that's been a long time coming. We started using technology to improve this process years ago, and it's getting better. I think we will get to a point where everyone routinely relies more heavily on AI for discovery, and it's not the predictive coding or the TAR for the people who know how to use it, but the standard that everybody uses. Like I said, it will make us better and more efficient. I don't see it entirely replacing lawyers, or a world where all the data just goes in and gets spit out and you need one lawyer to look at it and it's fine. But I do think it will change the way we practice law, and in that sense, I do think it'll be a paradigm shift.
Anthony: The final thought is, I think I tend to be somewhere in the middle. But I would say, generally, we know lawyers have big egos, and they will never think that a computer, an AI tool or whatever, is smarter than they are in terms of determining privilege or relevance. Part of it is that if you have two lawyers in a room, they're going to argue about whether something is relevant; two lawyers in a room will argue about whether something is privileged. So it's not objective; there's subjectivity. And I think that's going to be one of the challenges. And I think also, we've seen it already: everyone thought every lawyer who's a litigator would have to be really well-versed in e-discovery and all the issues that we deal with. That has not happened, and I don't see that changing. So I'm less concerned about a paradigm shift putting all of us out of work, for those reasons.
David: Well, I think everyone needs to tune back in on July 11th, 2029, when we can come back, revisit these predictions and see how we did.
Anthony: Yes, absolutely. All right. Thanks, everybody.
David: Thank you.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.
Singapore is developing ethics and governance guidelines to shape the development and use of responsible AI, and the island nation’s approach could become a blueprint for other countries. Reed Smith partner Bryan Tan and Raju Chellam, editor-in-chief of the AI Ethics & Governance Body of Knowledge, examine concerns and costs of AI, including impacts on owners of intellectual property and on workers who face job displacement. Time will tell whether this ASEAN nation will strike an adequate balance in regulating each emerging issue.
----more----
Transcript:
Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Bryan: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore all the key challenges and opportunities within the rapidly evolving AI landscape. Today, we'll focus on AI and building the ecosystem here in Singapore. My name is Bryan Tan, and I'm a data and emerging technology partner at Reed Smith Singapore. Together, we have with us today Mr. Raju Chellam, the Editor-in-Chief of the AI E&G BOK, and that stands for the AI Ethics and Governance Body of Knowledge, an initiative by the SCS, the Singapore Computer Society, and IMDA, the Infocomm Media Development Authority of Singapore. Hi, Raju. Today, we are here to talk about the AI ecosystem in Singapore, of which you've been a big part. But before we start, I wanted to talk a little bit about you. Can you share what you were doing before artificial intelligence appeared on the scene, and how that has changed now that artificial intelligence is talked about so frequently?
Raju: Thanks, Bryan. It's a pleasure and an honor to be on your podcast. Before AI, I was at Dell, where I was head of cloud and big data solutions for Southeast Asia and South Asia. I was also chairman of what we then called COIR, the Cloud Outage Incident Response, a standards working group under IMDA, and I was vice president of the cloud chapter at SCS. In 2018, the Straits Times Press published my book, Organ Gold, on the illegal sale of human organs on the dark web. I was then researching the sale of contraband on the dark web. So all of that came together and helped me when I took on the AI role in this new era.
Bryan: So all of that comes from a dark place, and that has led you to discovering the prevalence of AI and then to this body of knowledge. So the question here is, tell us a little bit about this body of knowledge that you've been working on. Why does it matter? Is it a game changer?
Raju: Let me give you some background. The Ethics & Governance Body of Knowledge is a joint effort by the Singapore Computer Society and IMDA, the first of its kind in the Asia-Pacific, if not the world, to pull together a comprehensive collection of material on developing and deploying AI ethically. It is anchored on the AI Governance Framework 2nd Edition that IMDA launched in 2020. The first edition of the BOK was launched in October 2020, before GenAI emerged on the scene. The second edition, focused on GenAI, was launched by Minister Josephine Teo in September 2023. And the third edition, the most comprehensive, will be launched on August 22, which is next month. The most crucial thing about this is that it's a compendium of all the use cases, regulations, guidelines and frameworks related to the responsible use of AI, from both a development and a deployment perspective. So it's something that all Singaporeans, if not people outside Singapore, would find great value in accessing.
Bryan: Okay. And so I see how that kind of relates to your point about the dark web, because it is really about a technology that can be used for a great many things. But without the ethics and the governance on top of that, you run into that very same kind of use case or problem that you were researching previously. So as you go around and speak with a lot of people about artificial intelligence, what do you really think are the missing pieces in AI? What are we not doing today?
Raju: In my view, there are two missing pieces in AI, especially generative AI. One is the need for strong ethics and governance guidelines and guardrails to monitor, if not regulate, the development and deployment of AI, to ensure it is fair, transparent, accountable and auditable. Two is the awareness that AI, especially GenAI, can be used just as effectively by bad actors to do harm, to commit crimes, to spread fake news and even to cause major social unrest. These two missing pieces are not mutually exclusive: the technology can be used for good as well as bad. It's the same as with the early days of the airplane. Airplanes can be used to ferry people and cargo around the world; they can also be used to drop bombs. So we need strong guardrails in place. And the EU AI Act is just a starting point that has shown the world that AI, especially GenAI, needs to be regulated so that companies don't misuse the information that customers and businesses entrust to them.
Bryan: Okay. Let's just move on a little bit to cybersecurity. Part of your background also involves cybersecurity, advising and consulting on it. In terms of generative AI, do you see any negative impact, any kind of pitfalls that we should be looking out for from a cybersecurity point of view?
Raju: That's a very pertinent question, given that the Cyber Security Agency of Singapore has just released data estimating that 13% of phishing scams might be AI-generated. There are also two darker versions of ChatGPT, for example. One is called Fraud GPT, F-R-A-U-D, and the other is called Worm GPT, W-O-R-M. Both are available on the dark web. They can also be used for RaaS, ransomware as a service, which bad actors can hire to carry out specific attacks. Being aware of the negative possibilities of GenAI is the first step for companies and individuals to be on guard and keep their PII, or personally identifiable information, safe. So as a person involved in cybersecurity, I think the access that bad actors have to a tool that is so powerful, so all-consuming and so prevalent means it can be a weapon.
Bryan: And so it's an area that we all need to watch out for. You can't ignore the fact that alongside the tremendous power that comes with the use of GenAI, the cybersecurity aspects should not be overlooked, and that's something we should pay attention to. But moving away from cybersecurity, are there any other issues in AI that also worry you?
Raju: The two key concerns about AI, in my view, other than cybersecurity, are, number one, the potential of AI to lead to a loss of jobs for humans, and, number two, its impact on the environment. Let me delve a little deeper. The World Economic Forum has estimated that AI adoption could impact 85 million jobs by 2030. Goldman Sachs has said in a report that AI could replace about 300 million full-time jobs. McKinsey reports that 14% of employees might need to change their careers due to AI by 2030. This could cause massive unrest in countries with large populations like India, China, Indonesia, Pakistan, Brazil, even the US. The second concern is sustainability. According to a University of Massachusetts Amherst study, the training process for a single AI model can emit 284 tons of carbon dioxide. That's equal to the greenhouse gas emissions of roughly 62.6 petrol-powered vehicles being driven for a year in the US. These are two big impacts that people, governments, companies and regulators have yet to grapple with, because they could become major issues by the time we turn the decade.
Bryan: So certainly some challenges coming up. I remember that for many years you were also an editor with the Business Times here in Singapore. And so this question is about media and media content, specifically, I think, digital media content. And, you know, with that background in mind, now looking closely at generative AI, do you see generative AI affecting the area of digital media and content generation? Do you see any interesting use cases in which gen AI has been applied here?
Raju: Yes, I think digital media and content, including the entire field of advertising, public relations and marketing, will be, or is already being, impacted to a large extent by GenAI, both in its use and in its potential, to the extent that many digital media content companies are actively looking at GenAI as a possible route to replacing human labor. In fact, if you look at the Hollywood actors' union, they went on strike because producers were turning to GenAI even to come up with movie scripts. So it is a major concern, because unlike previous technologies, which impacted the lower ranks of the value chain, such as secretarial jobs, for instance, GenAI has the potential to impact the higher or highest parts of the value chain, for instance, knowledge workers. They could be threatened because all of their accumulated knowledge can be used by GenAI to churn out material as good as, if not better than, what humans could do in certain circumstances. Not in all circumstances, but with digital media content, much of the time the GenAI model is not just augmenting human potential; it's also churning out material that can be used without human oversight.
Bryan: So certainly a challenge and an interesting use case in the field of digital media content. Last question, and again, back to the body of knowledge, and to talk a little bit about the Singapore government's involvement in this area. In Singapore, we do have a tendency for a lot of things to be government-led. In this particular area, where we are really talking about frontier technology like artificial intelligence, do you think this is the right way to go about it, to let the government take the lead? And if so, what more can be done or should be done?
Raju: That's a good question. The good part is that Singapore is probably one of the very few countries, if not the only one where the government tries to be ahead of the curve in tech adoption and in investing in cutting-edge technologies such as AI, quantum computing, biotech, etc. While this is generally good in the sense that a clear direction is set for industry to focus on, is there a risk that companies may focus too narrowly on what the government wants instead of what the market wants? I don't know. More research needs to be done in this area. But look at the numbers. Spending on AI-centric systems is set to surpass 300 billion US dollars worldwide by 2026, as per IDC estimates, up from about $154 billion in 2023. So Singapore's focus on AI and GenAI was the right horse to bet on. And it's clear that AI is not a fad, not a hype, not an evolution, but a revolution in tech. So at least we got that part right here. Whether we will get the other parts or the components right, I think only time will tell.
Bryan: Okay, and final question, looking at it from an ecosystem point of view, with various moving parts working together. For you personally, if you had a crystal ball and a wishing wand and you could wish for anything in the future that would help or aid this ecosystem, what would that be?
Raju: I think there is a need for stronger guardrails and some kind of regulation to ensure that people's privacy is protected. The reason is, GenAI can infringe upon the copyrights and IP rights of other companies and individuals. This can lead to legal, reputational and/or financial risks for the companies using pre-trained models. GenAI models can perpetuate or even amplify biases learned from the training data, resulting in biased, explicit, unfair or discriminatory outcomes, which could cause social unrest if not monitored, audited or accounted for accurately. And the only authorities that can do this are government regulators. So I think the government has to take a more proactive role in ensuring that basic human rights and basic human data are protected at all times.
Bryan: With this, I thank you. Certainly, a lot more is to be done in building up the ecosystem to encourage and evolve the role of AI in today's world. But I want to thank you, Raju Chellam, for joining us. And I want to invite those of you listening to continue to follow our Tech Law Talks series, especially this one on artificial intelligence. Thank you for listening.
Raju: Thank you, Bryan. It's been a pleasure.
Bryan: Likewise. Thanks so much, Raju. I really enjoyed doing this.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email [email protected]. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.