TechLaw Chat
As increasingly sophisticated video and audio recording devices become available at only moderate cost, householders are deploying surveillance tech in ever greater numbers. However, those deploying these devices do not always consider the impact of their surveillance tech on neighbouring properties, or the legal ramifications of that impact. This episode explores this theme and considers the causes of action and practical steps available to a neighbour adversely affected by overly intrusive surveillance tech.
As of now, the UK has not enacted online harms legislation, and social media platforms in general are under no statutory duty to protect children from harmful content. However, providers of video-sharing platforms do have statutory obligations in that regard, set out in Part 4B of the Communications Act 2003 (added to the Act by amendment in 2020). Amongst other things, section 368Z1 of the Act requires providers of such platforms to take appropriate measures to protect under-18s from videos and audio-visual commercial communications containing "restricted material". Regardless of the statutory obligations (or lack thereof in the case of non-video social media platforms), many platforms expend considerable effort seeking to protect children from harm.
In this episode, we consider how a video-sharing start-up might focus its resources in order to comply with its statutory obligations and to maximise the prospects that it offers a safe environment for children. We are joined in this endeavour by Dr Elena Martellozzo, an Associate Professor in Criminology at the Centre for Child Abuse and Trauma Studies (CATS) at Middlesex University. Elena has extensive experience of applied research within the Criminal Justice arena. Her research includes children and young people’s online behaviour, the analysis of sexual grooming and online harm, and police practice in the area of child sexual abuse. Elena has emerged as a leading researcher and global voice in the fields of child protection, victimology, policing and cybercrime. She is a prolific writer and has participated in highly sensitive research with the Police, the IWF, the NSPCC, the OCC, the Home Office and other government departments. Elena has also acted as an advisor on child online protection to governments and practitioners in Italy (since 2004) and in Bahrain (2016), where she helped develop a national child internet safety policy framework.
This end-of-year episode explores the viability of delivering Christmas gifts by drone in UK airspace. Someone has ambitious plans involving the precision drop of parcels down chimneys. We discuss the legal risks that arise and the hurdles that will have to be cleared if the Civil Aviation Authority is to authorise that plan.
The long-anticipated Supreme Court decision in Lloyd v Google [2021] UKSC 50 was handed down on 10 November 2021. Reversing the decision of the Court of Appeal and reinstating the first instance decision of Warby J, the Supreme Court held that Richard Lloyd could not pursue a damages claim as representative of the class of individuals affected by Google's alleged breach of the Data Protection Act 1998 in relation to the so-called "safari workaround". The reasoning is involved, and the Judgment bears reading in full. In essence, however, the court held that establishing a right to damages for breach of the Data Protection Act 1998, and quantifying those damages, involved a claimant-by-claimant analysis that, in each case, must identify the breach affecting that claimant, the loss suffered by that claimant, and the causal connection between breach and loss. The claims were accordingly unsuitable in principle for a representative action. The Judgment also addressed in some detail the nature of damages for breach of data protection legislation, and the nature and scope of representative actions under CPR 19.6.
In this episode we explore some of the ramifications of the decision through a scenario involving a data breach at an online marketplace.
The Judgment may be found here, and a press summary here.
Non-fungible tokens (or 'NFTs') are a blockchain-based mechanism for uniquely identifying digital assets and verifying both their authenticity and ownership. An increasingly popular use case for NFTs (albeit only one of several) involves the creation and sale of digital art. Notwithstanding that the NFT marketplace for digital art is dynamic and growing (with some NFTs selling at auction for vast sums), the legal basis of NFTs and, critically, the nature of what a purchaser actually acquires when purchasing an NFT artwork, are not universally understood. We explore these issues in this episode, which concerns the purchase of an NFT image for commercial use.
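To make concrete what is (and is not) recorded on-chain, here is a minimal sketch in Python of an ERC-721-style token registry. It is a toy model, not a real smart contract, and all names in it are illustrative assumptions. The point it illustrates is that the chain records only a token ID, an owner address, and a URI pointing at off-chain metadata; the artwork itself, and any copyright in it, live elsewhere.

# Toy Python model of an ERC-721-style NFT registry (illustrative only;
# names and structure are hypothetical, not a real smart contract).
from dataclasses import dataclass, field

@dataclass
class TokenRegistry:
    owners: dict = field(default_factory=dict)      # token_id -> owner address
    token_uris: dict = field(default_factory=dict)  # token_id -> off-chain metadata URI

    def mint(self, token_id, owner, token_uri):
        if token_id in self.owners:
            raise ValueError("token already minted")
        self.owners[token_id] = owner
        self.token_uris[token_id] = token_uri

    def transfer(self, token_id, sender, recipient):
        if self.owners.get(token_id) != sender:
            raise PermissionError("sender does not own this token")
        self.owners[token_id] = recipient  # only this mapping entry changes

registry = TokenRegistry()
registry.mint(1, "0xArtist", "ipfs://.../metadata.json")
registry.transfer(1, "0xArtist", "0xBuyer")
# The buyer now "owns" token 1 only in the sense that this mapping entry
# has changed. Nothing here conveys copyright in the underlying image;
# that depends on the off-chain terms of sale.
print(registry.owners[1], registry.token_uris[1])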
AI companies need to engage with the ethical implications of their systems. That involves planning ahead: in this episode, we therefore look at the European Union’s proposed AI regulation, and – with the help of our guest, Patricia Shaw – discuss its application in an EdTech context. The proposed regulation is available here.
Patricia Shaw is CEO of Beyond Reach Consulting Ltd, providing AI/data ethics strategy, public policy engagement, bespoke AI/data ethics risk and governance advice, and advisory board services, across financial services, public sector (Health- and EdTech), and smart cities.
Trish is passionate about Responsible AI and is an expert advisor to IEEE’s Ethical Certification Program for Autonomous Intelligent Systems and its P7003 (algorithmic bias) standards programme, and a Fellow of ForHumanity, contributing to the Independent Audit of AI Systems. She contributed to The Institute for Ethical AI in Education’s Ethical Framework for AI in Education, and is a Fellow of the Royal Society of Arts, having served on the Advisory Board for the ‘Power over information’ project concerning regulation of online harms.
A non-practising Solicitor, public speaker, and author, Trish is also Chair of the Trustee Board of the Society for Computers and Law, a Member of the Board of iTechlaw, and Vice Chair of its AI Committee. She is listed in the 2021 edition of 100 Brilliant Women in AI Ethics™.
Where a contract confers a discretion on one party that materially affects the rights of its counterparty, the discretion must be exercised rationally. The Supreme Court held in Braganza v BP Shipping Ltd [2015] UKSC 17 that exercising a discretion rationally involves (i) taking the right things (and only the right things) into account, and (ii) avoiding a decision that no reasonable decision-maker could have reached. In this episode, we explore how those principles might operate in the context of a discretion exercised automatically by a machine learning algorithm. We do so in the context of a fraud detection algorithm and an online farmers' market somewhere in East Anglia.
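As a thought experiment on how the two Braganza limbs might be engineered into such a system, here is a minimal sketch in Python, assuming an invented fraud-scoring rule: the decision function accepts only a whitelist of contractually relevant inputs (limb (i)), and its output is sanity-checked before it takes effect, with the reasons recorded for review (limb (ii)). All feature names, weights and thresholds below are hypothetical.

# Illustrative only: the two Braganza limbs encoded as guards around an
# automated fraud-detection decision. All names and numbers are invented.

# Limb (i): take the right things, and only the right things, into account.
RELEVANT_FEATURES = {"order_value", "chargeback_count", "account_age_days"}

def fraud_score(features):
    if set(features) != RELEVANT_FEATURES:
        raise ValueError("decision considered irrelevant or missing inputs")
    # Hypothetical scoring rule standing in for a trained model.
    return (0.5 * features["chargeback_count"]
            + 0.001 * features["order_value"]
            - 0.002 * features["account_age_days"])

def suspend_account(features):
    score = fraud_score(features)
    # Limb (ii): avoid an outcome no reasonable decision-maker could reach,
    # e.g. never suspend a decade-old account on a marginal score, and keep
    # a reviewable record of the reasons for the decision.
    decision = score > 1.0 and features["account_age_days"] < 3650
    reasons = f"score={score:.2f} from inputs {sorted(features)}"
    return decision, reasons

decision, reasons = suspend_account(
    {"order_value": 180.0, "chargeback_count": 3, "account_age_days": 40})
print(decision, "|", reasons)  # True | score=1.60 from inputs [...]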
Fully autonomous vehicles may be a few years away, but cars offering so-called “eyes off/hands off” (or “Level 3”) automation, whereby the car is sufficiently capable that the driver’s role is limited to taking over control when requested by the car to do so, are expected to be commercially available in the very near future. In this episode we flash forward to summer 2023 and an accident involving a Level 3 autonomous vehicle. We consider how existing legal frameworks cope with the liability issues that arise when AI takes control of the driving but the driver remains in the safety chain as a fallback for when the automation cannot cope.
AI can improve how businesses make decisions. But how does a business explain the rationale behind AI decisions to its customers? In this episode, we explore this issue through the scenario of a bank that uses AI to evaluate loan applications and needs to be able to explain to customers why an application may have been rejected. We do so with the help of Andrew Burgess, founder of Greenhouse Intelligence.
About Andrew: He has worked as an advisor to C-level executives in Technology and Sourcing for the past 25 years. He is considered a thought-leader and practitioner in AI and Robotic Process Automation, and is regularly invited to speak at conferences on the subject. He is a strategic advisor to a number of ambitious companies in the field of disruptive technologies. Andrew has written two books: The Executive Guide to Artificial Intelligence (Palgrave Macmillan, 2018) and, with the London School of Economics, The Rise of Legal Services Outsourcing (Bloomsbury, 2014). He is Visiting Senior Fellow in AI and RPA at Loughborough University and Expert-in-Residence for AI at Imperial College’s Enterprise Lab. He is a prolific writer on the ‘future of work’, both in his popular weekly newsletter and in industry magazines and blogs.
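Returning to the scenario: one common engineering approach to this problem is to use an inherently interpretable model and report the per-feature contributions behind each decision as "reason codes". Here is a minimal sketch in Python, assuming an invented logistic-regression-style scorecard; every feature name and weight is hypothetical.

import math

# Hypothetical scorecard for a loan-approval model (illustrative only).
WEIGHTS = {"income_ratio": 2.0, "missed_payments": -1.5, "years_employed": 0.3}
BIAS = -1.0

def decide_loan(applicant, threshold=0.5):
    # Per-feature contributions to the score form the basis of the explanation.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = 1 / (1 + math.exp(-(BIAS + sum(contributions.values()))))
    approved = score >= threshold
    # Reason codes: the features that counted against approval, worst first.
    reasons = sorted((f for f in contributions if contributions[f] < 0),
                     key=contributions.get)
    return approved, score, reasons

approved, score, reasons = decide_loan(
    {"income_ratio": 0.2, "missed_payments": 2.0, "years_employed": 1.0})
print(f"approved={approved}, score={score:.2f}")
if not approved:
    print("factors counting against approval:", reasons)

A real deployment might instead compute attributions (for example, SHAP values) over a more complex model, but the principle of surfacing which inputs drove the outcome is the same.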
This episode explores the benefits and limitations of Smart Contracts in the context of human-provided services by considering the practicalities of using Smart Contracts to regulate the contractual relationship between brands and social media influencers.
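By way of a deliberately simplified illustration of where Smart Contracts help and where they do not, here is a toy Python model of the kind of escrow logic such a contract might encode, with invented names throughout: the brand funds an escrow, an oracle attests that the post went live by the deadline, and payment releases automatically. What the mechanism cannot verify, namely whether the content is on-brand, high-quality or compliant, is precisely the limitation the episode explores.

from enum import Enum, auto

class State(Enum):
    FUNDED = auto()
    POSTED = auto()
    PAID = auto()
    REFUNDED = auto()

# Toy model of an influencer-marketing escrow; not a real on-chain contract.
class InfluencerEscrow:
    def __init__(self, brand, influencer, fee, deadline):
        self.brand, self.influencer = brand, influencer
        self.fee, self.deadline = fee, deadline
        self.state = State.FUNDED  # the brand deposits the fee up front

    def confirm_post(self, oracle_timestamp):
        # An oracle attests that the post exists by the deadline. Note what
        # it cannot attest: the quality, tone or compliance of the content.
        if self.state is State.FUNDED and oracle_timestamp <= self.deadline:
            self.state = State.POSTED

    def settle(self, now):
        if self.state is State.POSTED:
            self.state = State.PAID
            return f"release {self.fee} to {self.influencer}"
        if self.state is State.FUNDED and now > self.deadline:
            self.state = State.REFUNDED
            return f"refund {self.fee} to {self.brand}"
        return "nothing to settle"

escrow = InfluencerEscrow("BrandCo", "@influencer", fee=1000, deadline=100)
escrow.confirm_post(oracle_timestamp=90)
print(escrow.settle(now=110))  # release 1000 to @influencer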
The podcast currently has 14 episodes available.