
As AI grows and becomes more accessible, it's changing our lives in many ways—including the workforce. Our guest today is an expert in organization and workforce development who will tell us how AI is shaping the hiring process.
Fred Oswald is a Professor at Rice and the Herbert S. Autrey Chair in Social Sciences. His Organization & Workforce Laboratory (OWL) at Rice focuses on selection and job performance models in organizational, educational, and military contexts, as predicted by individual differences (such as personality and ability) as well as group differences (workgroup characteristics, gender, race/ethnicity, and culture).
In our first episode of Season 3, Fred joins host David Mansouri. They delve into Fred’s journey to Rice, his research on testing and job performance models, and the work being done in his lab at Rice.
The conversation highlights the ethical and practical applications of AI in organizational and educational settings, exploring how AI tools can shape hiring practices and support teaching and learning.
Let us know you listened to the episode and leave questions for future guests by completing this short form.
Episode Guide:
Beyond The Hedges is a production of Rice University and is produced by University FM.
Show Links:
Is AI turning users into critics?
21:17: I've noticed in my own experimentation—and no surprise, because I think it's a common experience with generative AI. With the language models, you often become a critic, in the same way you critique or advise your own work. But when a GPT is producing language—say, "summarize this paper for me"—you shift into the role of a critic and ask: is this good enough? However you define "good enough"—for you, for your audience, for your stakeholders—both in terms of the themes and substance—what is there? Does it seem right?—but also, critically, what is missing? What didn't show up? Really working with that, I think, changes your approach to how you do some of that work.
Using AI to empower the talents we have
27:47: You do have to build the fundamentals to understand what AI is doing, so in that sense, we can't use AI as a crutch. We have to use it as a way to empower the talents we already have and are building ourselves.
Examining bias and AI's influence in decision-making
08:59: How does bias work when we talk about AI as biased? Well, what does that mean in terms of the data and the decisions that are made from those data? This work gets embedded in these organizations. So, I'm not only concerned with the development of tests, but I'm concerned about the context in which they're being used.