AUTM on the Air

Balancing Innovation and Integrity: Ethical AI in Tech Transfer with Charles Halloran



Are we ready for the ethical challenges AI brings to Tech Transfer? Today’s episode dives into artificial intelligence's rapidly evolving role in Tech Transfer, examining the frameworks that help us navigate its legal, societal, and ethical complexities. Our guest, Charles Halloran, brings deep expertise in technology licensing and intellectual property, with a career that spans some of the most significant patent and trademark cases. His perspective on managing AI responsibly is invaluable for anyone looking to understand the delicate balance between innovation and integrity.

We’re exploring questions around the ethical use of AI, particularly in the unique environment of Tech Transfer offices at universities. Charles shares insights on how data should be curated and protected, ways universities can create their own safe AI systems, and the protocols necessary to avoid pitfalls in data-sharing. The discussion touches on real-world issues like inventorship, confidentiality, and open-source licensing, offering actionable steps for institutions striving to build trust while leveraging AI's capabilities.

Listeners will come away with practical guidance on fostering responsible AI use, from addressing bias in training data to implementing clear data-management policies. Charles emphasizes that adopting a strong ethical foundation isn’t just good practice—it’s essential for sustainable innovation. This conversation is packed with insights and strategies for navigating the AI-driven future of Tech Transfer with transparency and care.


In This Episode:

[02:02] Tech Transfer is a bridge that brings innovation to the public. Data used to train AI needs to be well-curated and ethically sourced.

[04:01] Legal and ethical challenges TTOs face in maintaining standards, especially when it comes to protecting proprietary information.

[05:05] Charles talks about data privacy and hosting your own AI infrastructure. For previous technologies, we've come to understand what reasonable protections need to be in place.

[06:27] AI challenges include helping people understand what's working and what's happening to the data.

[07:37] Universities have put policies in place that restrict the use of LLMs other than the university's licensed commercial choice.

[08:39] Charles talks about protocols and best practices for ensuring that TTOs maintain proper disclosure and human oversight over AI-generated work.

[10:25] Ethical responsibilities regarding AI-assisted inventorship. Tech Transfer offices need to ask whether AI was used and, if so, how.

[13:08] Balancing innovation with ethical safeguards. Charles discusses Harvard Business Review's 13 Principles for Using AI Responsibly.

[13:57] Effectiveness and safety are primary concerns in the White House's Blueprint for an AI Bill of Rights.

[15:03] Find an AI Bill of Rights that works with your institutional culture.

[16:28] Many TTOs make these frameworks available on a website. Also build them into your education process and outreach to researchers.

[17:58] Charles has a strong background in open-source licensing.

[18:09] How principles from open source can inform responsible AI practices.

[21:18] Charles shares an example where lack of attention to responsible AI policies led to a speed bump in commercializing a product.

[23:07] Being casual about the data that you're using at the development stage leads to roadblocks or problems at the commercialization stage.

[26:18] Charles talks about issues with licensing and shared data between different hospitals or universities.

[27:56] We talk about the risks of social biases when using AI. The first place to begin is recognizing that bias is an issue.

[30:59] We are developing better tools and awareness to help counteract bias.

[32:02] What Tech Transfer offices can do to help broaden participation by underrepresented groups. Using AI tools to alleviate bias.

[35:05] Should TTOs take a leading role in setting ethical standards for AI use, especially when it comes to managing bias and societal impact?

[37:32] It's likely ethical considerations in AI will evolve very quickly.

[41:34] How to start building a foundation for ethical AI use. Charles recommends choosing a framework, being transparent, and building trust.


Resources: 

Charles Halloran - KPPB

Charles Halloran LinkedIn

Harvard Business Review's 13 Principles for Using AI Responsibly

Blueprint for an AI Bill of Rights

