– Jack Uldrich
Jack Uldrich is a leading futurist, author, and speaker who helps organizations gain the critical foresight they need to create a successful future. His work is based on the principles of unlearning as a strategy to survive and thrive in an era of unparalleled change. He is the author of 9 books including Business As Unusual.
Website: www.jackuldrich.com
LinkedIn: Jack Uldrich
Facebook: Jumpthecurve
YouTube: @ChiefUnlearner
X: @jumpthecurve
Books:
Green Investing: A Guide to Making Money through Environment Friendly Stocks
Foresight 20/20: A Futurist Explores the Trends Transforming Tomorrow
Soldier, Statesman, Peacemaker: Leadership Lessons from George C. Marshall
The Next Big Thing Is Really Small: How Nanotechnology Will Change the Future of Your Business
Jump the Curve: 50 Essential Strategies to Help Your Company Stay Ahead of Emerging Technologies
Into the Unknown: Leadership Lessons from Lewis & Clark’s Daring Westward Expedition
Business As Unusual: A Futurist’s Unorthodox, Unconventional, and Uncomfortable Guide to Doing Business
A Smarter Farm: How Artificial Intelligence is Revolutionizing the Future of Agriculture
Higher Unlearning: 39 Post-Requisite Lessons for Achieving a Successful Future
Ross: Jack, it is awesome to have you on the show.
Jack Uldrich: It’s a pleasure to be here.
Ross: You’ve been thinking about the future and helping others think about the future for a very long time now. So what’s the foundation of how you do that?
Jack: The foundation, I would say, is silence. First, it’s meditation. I actually try to get to the thought beyond the thought. What I mean is, I’m always looking for insights, but in order to do that, I first have to free myself of all my old habits, assumptions, and other ways of thinking. So on a daily basis, I do try to meditate on that, and then I look for insights. And I want to make this clear: I’m not looking for conclusions. As soon as you’ve locked yourself into a conclusion, or into what you think the future is going to be, you’re going to get yourself in trouble. But insights, I do think we can come to insights. So I’ll just step back and say that’s where I start: silence, contemplation, meditation.
Ross: That is absolutely awesome. This goes to the idea of fluid thinking. A lot of people’s thinking is rather rigid: they think a particular way, and if you ask them a year, or two, or ten later, they’re thinking the same way. That doesn’t quite work when the world is changing around you.
Jack: No, that’s right. The next thing I would say, and I hope to disabuse people of what they think futurists do: I’m quite clear in saying, first, that I definitely don’t try to predict the future, nor do I say I have the answer to the future. But having said that, that doesn’t absolve any of us of a more important responsibility. If none of us has the answer to the future, we have to be sure we’re asking the best possible questions of the future.
Frequently, when I look at why businesses or organizations missed the future, or why they went bankrupt, it’s not because they weren’t bright and intelligent, or lacked a capable C-suite; it’s primarily that they were answering the wrong question. They just didn’t understand how technological change had shifted their business, their business model, or their customer expectations, or they didn’t understand what their competitors were up to. So I spend a lot of time trying to make sure I’m asking the best possible questions of the future, while at the same time always having the humility to accept that there’s got to be a question I’m missing. I fall back on this idea of humility quite a bit, because it’s not what we know that gets us in trouble; it’s what we think we know that we just don’t. And so we have to have humility as we approach the future.
Ross: Yes, yes. And that’s something that we don’t see quite enough of in the world when we look around.
Jack: No, you really don’t. I wish there could be a course on that, or just something to help people: how do you actually embrace humility in a real way? The Latin root of the word, humus, means earth, close to the earth. And so again, this goes back to silence, but I spend a lot of time in nature in order to do better thinking. I actually try to get away from my smartphone, my laptop, and all of this other stuff.
I love your background. And I think one of the other things is just getting out under the night stars. Unfortunately, 80% of the world’s population, due to light pollution and air pollution, can’t actually see the night stars, which I think is troubling. But if you can get out under the night stars, it reminds us of how little we actually know, and of just how much else there is out there. And I think it’s that sort of deep humility that keeps me asking questions and probing the future, and should keep all of us probing the future.
Ross: That evokes, for me, something I’ve observed over the very long time I’ve been doing foresight and futures work. There’s a cyclicality to people’s openness to thinking about the future, and one driver is the big shocks. We had the global financial crisis, or Covid, or the Asia crisis in the late 90s, or some of the elections of the last couple of decades, for example, where all of the people who were supposed to know what was going to happen didn’t. And hopefully, and I think to a fair degree, we started realizing, all right, we need to be thinking about the future in a more questioning way, rather than thinking we know the answers, because what people thought were the answers didn’t turn out to be right. So we can be educated by our falls.
Jack: Right. I’m sure you’ve read it, but one of the most seminal books for me in the last 12 or 13 years was Nassim Taleb’s The Black Swan, about the high impact of low-probability events. That actually shifted my thinking; it was a blind spot I had as a futurist. Of course, I was aware that these random events happened, but this idea of how important they are to understanding the future was new to me, and then to ask: how do we think about some of these things?
And I’ll just give you an example. For years beforehand, I was talking about the possibility of a pandemic. It’s not to say I predicted the pandemic. I didn’t, but I did write about it and say, here’s how I think about it. And in my case, my thinking only went as far as the global supply chain; I completely missed its impact on e-commerce and the future of work until we were living it. So, getting back to this idea of the Black Swan, I think there are so many of them, like the possibility of a solar storm, and what that would mean for the electrical grid, what it would mean for our reliance on all of our electronic devices, what it might mean for the future of autonomous cars, if that happens.
So as I think about the future, I try to incorporate this understanding that there might be an alternative future. The future is going to unfold in multiple directions at the same time, and if some of these rare, low-probability events happen, the world shifts. As leaders and as futurists, we have to prepare people for that possibility, and then we have to think through what else might be some of these low-probability, high-impact events. And so could I just turn the tables on you and ask: as a futurist, how do you think about those events, and how do you try to prepare your clients?
Ross: It’s a great question. One of the ones I think about is a California earthquake. It’s one of the top ones, in that nobody thinks about it much, except for the insurers, who won’t give any insurance away. But it’s actually a reasonable probability if you look over a decent time frame. And again, devastating.
And this comes back to scenarios. My core discipline for structured foresight work is scenario planning. We can’t predict, so we need to look at a number of different scenarios. But in any comprehensive scenario planning project, you have your scenarios, and then you add in the unlikely but high-impact events, which could be natural phenomena, pandemics, or external cosmic events, or even technologies with impact far beyond what we could imagine, like nuclear fission: a whole array of close to unimaginable things.
And it is challenging for a leader, because you can’t plan for something which has a very low likelihood and where you don’t even know its shape. So a lot of it is working with the scenarios you have, and being able to point to some of these far more far-flung possibilities, to build responsiveness. I think the real function of working with leaders in foresight and futures is to build their ability to respond to the ultimately unanticipated. You have strategies for what you can anticipate, but as you say, however many questions you ask, you’re always missing some, so you need to build the ability to respond flexibly and promptly, with openness to recognizing things when they happen, rather than denial or being too slow to respond.
Jack: I would agree, and along those lines, resilience is something I’m speaking about more and more to my audiences. I just want to use that idea of an earthquake as an example. There was a wonderful article in The New Yorker years ago, The Really Big One, about the Cascadia subduction zone and the massive earthquake that might hit from north of Vancouver all the way past Seattle and down past Portland. And it’s not just the earthquake, it’s the resulting tsunami. It’s apparently overdue: it could happen tomorrow, or it might not happen for another 100 years; we just don’t know. The insurance companies, I do think, are aware of the possibility, but most businesses and organizations aren’t. And again, you can’t necessarily dictate everything you do based on the possibility of it happening, but you do have to have a small element of insurance. What sort of resilience do you need to build? If you live out there, you’d better have something in the trunk of your car that makes sure you can survive for seven days; I would say at a minimum, as individuals, that’s what you should do. But businesses have to think longer term, and that’s really challenging in today’s environment, where short-term profits drive most corporations. The goal isn’t necessarily short-term success, it’s long-term survivability, and that notion of long-term survivability has to factor into the thinking of people, organizations, and leaders, and I don’t think it does enough. So I’m spending more time talking about resilience. I can’t tell you I’m getting anywhere with the corporations and organizations I’m working with, but I’m trying to get them to understand the importance of building resilience, to withstand some of these shocks if they should hit us.
Ross: Well, I also think it’s important to shift to a positive transformational frame. I’m currently preparing for a keynote which is essentially around sustainability. But in a way, sustainability is table stakes: sustainable means you can continue, and if you can’t sustain your business or the economy or the planet, that’s not very good. So that’s got to be the table stakes, but you want to go beyond sustainable, to be able to regenerate, to improve, to grow. I think there’s an analog there with resilience, where resilience is being able to come back to where you were. In fact, you want to positively transform yourself: not just to be resilient to shocks, but, in the antifragile sense, to say the shock makes us stronger. How do we go beyond sustainability or resilience to regenerative transformation?
Jack: No, I really like that. And I particularly like the word ‘regenerative’. To me, sustainable is a word that’s overused and has kind of lost its luster. As one person said to me, if someone told you, ‘Oh, your marriage is sustainable,’ no one would be happy with a sustainable marriage. We want a regenerative future, one where we’re constantly growing and improving, or just doing different things. So I like that idea of a regenerative future, and I will tell you, as a futurist, I do in fact see individuals and organizations beginning to take seriously this idea of moving beyond sustainability and towards a regenerative future. As a futurist, that’s the future I want to help create. And so I’m increasingly open with my clients, to say: look, I’m not here as some passive, neutral observer of the future. There is, in fact, a better future out there, and I want to help play a role in that, and that’s why I’m here talking to you and your organizations. Let’s figure out how we can roll up our sleeves and create this better, more beautiful, bolder, regenerative future. I can’t say it’s necessarily catching on with all clients, especially as I do most of my work here in the US, but it’s a growing trend, and it’s one that excites me as a futurist. It actually gives me increased hope and optimism for the future, to see all of these individuals and organizations just getting in there and working to create a better future.
Ross: So we were chatting before turning on the record button about the pace of change today. So we’ve both been in this game for a long time, and are able to gain some glimpses into the future. And today, with the pace of change, the time horizon we’re looking forward to does seem to be shrinking a bit.
Jack: It really does. And just to let your audience in, I was saying that even though I’ve been talking about exponential change for the past two decades, the advances in artificial intelligence are the most prominent example. To see how fast OpenAI’s ChatGPT and the other models, like Claude and Pi, have changed just in the two years since their release is absolutely staggering.
And here’s where I would like to talk about Ray Kurzweil, who I have an immense amount of respect for. He’s the one who first turned me on to this idea of exponential growth. I read his book The Singularity Is Near 20 years ago, and he’s been remarkably consistent and remarkably accurate. Now he is saying that by 2045, human intelligence will be a millionfold smarter; I think he uses the term smarter. And this is one I take seriously. I don’t know if we’ll necessarily achieve that, but we have to take the idea seriously. I really do believe we as a society are at an inflection point.
And there’s a wonderful interview with Mustafa Suleyman, the author of a book on AI, and Yuval Noah Harari, the fellow who wrote Sapiens and then Homo Deus. Harari says this is the end of human history. He doesn’t say it’s the end of history; he says it’s the end of human history. Something is about to surpass humans, and we as humans have to take this idea seriously, and think long and hard about it. So one of the things I’m trying to spend more time on is: what does wisdom look like in the future? I don’t doubt one iota that we will become smarter and more knowledgeable as a species, but knowledge doesn’t always translate into wisdom. So first, how do you define wisdom? I think to do so, we start to get into these intangible matters, matters of the heart, matters of the soul, things that even scientists don’t necessarily agree on. AI can mimic human intelligence, but can it mimic all aspects of the human experience? Right now, I personally don’t think it can, and that both troubles me and gives me hope that this is the role humans are meant to play. We are meant to bring the innate human characteristics of love and empathy and compassion and questioning, and the balancing of different interests where there is no one answer. I’m babbling here, but as I said before we started taping, I don’t have any answers here. I’m struggling, just as I think many people are, with what’s coming next.
Ross: I think a lot of what you said is spot on. If we just think about the basic question of what humans’ role is going to be here, it’s the wisdom, the understanding, the ethics, the frame, the context, the why. And that’s not something we want to delegate. So whilst the future is in a way unforeseeable, in terms of the scope and pace of technological advancement and how we use it, this is in fact a time when we have more choices in how we create and what we create. We can actually say: well, we do have extraordinary technologies; the question is, how do we frame our human role relative to the technologies we’ve created?
Our attitude and how we embrace this is going to absolutely shape the future of work and many other aspects of our society. And I think not enough people recognize that the choices we have are not just about trying to slow things down or to put guardrails around technologies, though that’s significantly important. It’s more about how positively we use these technologies, and about choosing what we as humans want to be complemented by them. So yes, we want to, and we will, maintain that role of wisdom, guide, and mentor, but we have to improve at that as well, because humanity has not proven to be as wise as we might want it to be.
Jack: No. But let me ask you this, because I think it’s really interesting in this world of artificial intelligence and how fast it is coming. I think most people would agree that ever since we went from hunter-gatherers to agriculture, we humans have defined ourselves by work; that is what we do. And in this future world where AI is going to get better, and I don’t mean to suggest it’s going to be able to do everything, I do think it warrants us beginning to rethink a world where work itself isn’t the primary driver of our educational system.
For example, right now most people go to school with the idea that you are getting trained to get a job and be, quote unquote, a productive member of society. I don’t want to say that’s bad, and we’re still going to need education, but in this new world where AI can do a lot of different things, how does that change the nature of education? How do we leverage it to become more creative? How do we use it to become wiser? I always think that the silver lining in all of this is that we have the opportunity to create a future where we’re more human, where we engage in the activities that most make us feel alive. That’s a really exciting future, and I think that’s where we have to dedicate our time and our efforts. And as we think about regulating AI: it has so many positive attributes, and I’m not anti-technology, but at the same time, we have unleashed something that we don’t fully understand, and how can we, to the best of our ability, put some sort of safeguards around it in terms of transparency? Can it explain itself? Do humans control the on/off switch in case of an emergency? How do we deal with bias and all of the other problems? But at the same time, we have to also ask ourselves deeper questions, like: how do we need to begin changing, as humans and as a species, in order to really reap the full benefits of this? To me, that’s some really rich, fertile ground.
I’ll always be a futurist, but as I approach the last stage of my career, at least in the corporate world, I want to spend more time delving into these issues, to just remind people it is really exciting, but it comes with great responsibilities.
Ross: Yes, absolutely. And to your point about what it is we want to do, what is most human, I believe that is significantly about exploring and expressing our potential, what we can do, and about contributing. And both of those are work, essentially. Work at its best is doing the things we are best at in order to contribute to society. If we’re helping an organization that is helping its customers, then we are contributing.
A little while ago, I wrote a little mini-report, 13 Reasons Why, to point to a positive future of work. I believe we can have a prosperous, positive future of work, and these are the choices we need to make. One of the questions, coming back to this frame, is whether we’re able to pull this off, and I absolutely believe that at least a large proportion of people will be able to have fulfilling, rewarding jobs. I think it is very unlikely that we will have massive unemployment. However, the question is, how inclusive can we make that? I think it’s possible for us to have essentially full employment, with a very large portion of those roles being rewarding and rich in helping us grow personally. But we still have to frame this as a question we have to answer: what are the ways in which we can make this possibility real?
Jack: Yeah. One of my challenges, and I’ve spent a lot of time as a futurist with the concept of unlearning, is that for people in organizations, it’s not that they can’t understand the future is going to change; what we have a really difficult time doing is letting go of the way we’ve always done things. And when we’re talking about the future of work, to me, work gives most humans this intrinsic value; they feel as though they’re an integral part of the community. So I think there will always be this innate need to be doing something, and not just for yourself, but on behalf of something bigger. And when I say bigger, typically I’m thinking of community. You want to do something for, of course, yourself and your immediate family, but then your neighborhood and your community.
So as I think about the long-term future, one of the things I’m really excited about is this, and first I’m going to go dark, but I think there’s going to be a bright side. One of the things happening right now that’s not getting enough attention, as a futurist, is that the internet is breaking, in the sense that there’s so much misinformation and disinformation out there that we can no longer trust our eyes and our ears in this world of artificial intelligence. I think that’s going to become increasingly murky, and it’s going to be really destabilizing to a lot of people and organizations. So what’s the one thing we can still trust? The small groups that are right in front of us. I think one of the things we’re going to see in a future of AI is an increased importance of small communities. There’s some really compelling science that says the most cohesive units are about 150 people in size, and this is true in the military, in educational units, and in other settings. I think we might start seeing that, but it’s going to look different than the past. I’m not suggesting we’re all going to look like Amish communities here in the US, saying no to technology and doing things the old-fashioned way. But what the new communities of the future are, and now I’m just thinking out loud, is something I want to spend more time thinking about. What will they look like? What roles and skills will be needed in this new future? Again, I don’t have any answers right now, just more questions and thinking. But it’s one of these scenarios I could see playing out that might catch a lot of people by surprise.
Ross: Yeah, very much so. I mean, we are a community-based species, and the nature of community has changed from what it was. And I think that thinking about the future of humanity through the future of community, and how that evolves, is actually a very useful frame.
So to round out, Jack, what advice can you share with our listeners on how to think about the future? I suppose you did a little at the beginning, but what are any concluding thoughts on how people can usefully think about the extraordinary change in the world today?
Jack: Yeah. The first thing I would say is this, and I was just doing a short video on it. Ever since grade school, most of us have been asked, or graded on, the question of how creative are you? And if you ask most people to answer that question on a scale of one to ten, they’ll do it. But you know what I always tell people? That’s a bad question. The question of the future isn’t how creative are you? It’s: how are you creative?
Each and every one of us is creative in our own way, and as a futurist, I take that really seriously. We do have the ability to create our own future, but we first have to understand that we are creative, and most people don’t think of themselves that way. So how do you nurture creativity? This is where I’m trying to spend a lot of my time as a futurist, and this is where the ideas of unlearning and humility come in. But I would say it starts with curiosity and questions, and that’s why I like getting out under the night stars and just being reminded of how little I actually know. It’s in that space of curiosity that imagination begins to flow. There’s this wonderful quote from Einstein, and most people would say he was one of the most brilliant minds of the 20th century. He said, ‘Imagination is more important than knowledge.’ Why did Einstein, this great scientist, say that? I think, though I don’t have proof of this, it’s because everything around us today was first imagined into existence, and it was imagined into existence by the human mind: the very first tool, the very first farm implement, then farming as an industry, then civilizations and cities and commerce and democracy and communism. They were all imagined first into existence. So what we can imagine, we can in fact create, and that’s why I’m still optimistic as a futurist: this idea that we’re not passive agents, that we can create a future.
And I just like to remind people that our future can, in fact, be incredibly fucking bright: the idea that we can have cleaner water and sustainable energy and affordable housing and better education and preventive health care. We can address inequality. We can address these issues. People just have to be reminded of this. At the end of the day, that’s why I get fired up. And I don’t think I’ll ever lose the title of futurist, because until my last breath, I’m going to be reminding people that we can create, and we have a responsibility to create, a better future.
Let me just end with this. I think the best question we can ask ourselves right now comes from Jonas Salk, the inventor of the polio vaccine. He asked, ‘Are we good ancestors?’ And I think the answer right now is that we’re not, but we still have the ability to be better ancestors. And maybe if I could just say one last thing: I also spend a lot of time helping people embrace ambiguity and paradox. Here’s the truth: the world is getting worse in terms of climate change, the rise of authoritarianism, inequality; you could say things are going badly. But on the other hand, you could say the world is getting demonstrably better. It has never been a better time to be alive as a human; the likelihood that you’re going to die of starvation or war, or not be able to read, has never been lower. So the world is also getting better, but the operative question becomes: how can we make the world even better? That’s where we have to spend our time, and that’s why we need creativity, curiosity, and imagination to create that better future. So a long-winded answer to a short question.
Ross: Well, an important one, and I think you’re right, that’s absolutely the most important question of all. So where can people find out more about your work, Jack?
Jack: My website is www.JackUldrich.com. I have a free weekly newsletter called The Friday Future 15. I encourage everyone to spend at least 15 minutes every week just thinking about the future. To help with that, I send out a newsletter with just five articles, and I say: don’t even read them all, just read one, but begin engaging in the serious work of reading about how the world is changing, reflecting on it, and then seeing where you can play a role. You’ll see there is no shortage of opportunities. As I always say, as long as the world has problems, there’s going to be a need for humans, and there’s no shortage of problems right now. So let’s roll up our sleeves and begin creating the better world we want to live in.
Ross: Fabulous. Thanks so much for your time and all of your work and passion.
Jack: All right. My pleasure. Thank you for your work, Ross. Pleasure chatting with you.
The post Jack Uldrich on the unlearning, regenerative futures, nurturing creativity, and being good ancestors (AC Ep64) appeared first on amplifyingcognition.
– Lindsay Richman
Lindsay Richman is the co-founder and director of product and machine learning at Innerverse, a platform that creates AI-powered simulations to help users build confidence and emotional awareness. She previously worked in product management and AI for leading companies including Best Buy and McKinsey & Co. She was nominated for VentureBeat’s Top Women in AI Awards.
Company Website: www.innerverse.ai
LinkedIn: Lindsay Richman
AI Accelerator Institute Profile: Lindsay Richman
Github Profile: Lindsay Richman
Ross: Hi, Lindsay! It’s a delight to have you on the show.
Lindsay Richman: Thank you. I appreciate you inviting me. I’m very excited.
Ross: So you are taking some very interesting and innovative approaches to using AI to amplify cognition in the broader sense. So first of all, how did you come to this journey? How has this become your life’s work?
Lindsay: So actually, my father was a machine learning engineer who worked with AI for about 30 years. He’s semi-retired now, but he was a professor who worked in climatology, and he built prediction models. So his world was support vector machines and dimensionality reduction. He was also my math tutor growing up, and so I got a lot of interactions that are now making a little more sense to me, about why I love to work with AI so much. He really inculcated a lot of creativity in me, and I was always interested in his work.
And then, I’m kind of a nontraditional engineer. I started working with Python maybe seven years ago, because I was using Excel for things. I was on a Mac, and I was looking at macros, and there was no documentation, so a lot of people were using Python instead of Excel at the time. I started using it, and I started going to different groups in New York, where I was living at the time, that could teach you how to program, whether it was Python or front-end work with React, for example, and it was really illuminating. I realized just how much creativity there was in engineering. I’ve always loved machine learning engineering, partly because of my dad, but also because of a background in linguistics; I actually taught when I was in grad school studying linguistics. So it’s always been really interesting to think about language and how people develop, and how anything can develop, whether you’re an animal or potentially even a plant with a circulatory system. It’s really interesting to think about how different living things develop, and that brought me into the world of cognition. I think we’re at a really interesting period, because for a very long time, and I’ve been working in the natural language processing and understanding part of deep learning and AI for probably five years now, generally with conversational AI, sometimes in more of an engineering role, sometimes more as a product manager, we really only had NLP. You could converse with agents, but usually it was a bit limited. I’m sure everybody remembers the first AI agent they chatted with, like for customer support on a retailer site, for example.
And when I worked at Best Buy, a really large electronics company mainly based in the US, it was interesting. I worked with an agent that handled millions of different chats, but it was probably pretty rudimentary compared to what we have now. And this was probably only two, two and a half years ago at this point, so that just shows how far we've gone. I worked with a service in Google that some people who are listening might have used or know of, called Dialogflow. Google has since upgraded it, and they really moved into a service called Vertex AI, which is more their core for AI now. So what I was doing at Best Buy was probably state of the art, and in some ways it might still be for a large retailer, but the ability to really have natural language understanding has changed so much in the last two years or so. It's shocking. I think that really came with the advent of models like GPT-3.5, which are now not really talked about at all. I mean, we rarely hear about 3.5; it hasn't really been developed further. GPT-4 obviously has, with variants meant to be faster and more cost-effective. It's amazing to me to see how far we've gone in just a couple of years in this space. But to answer your question, in some ways it goes back a really long time to my childhood, but in other ways it's really accelerated a lot over the last few years, because we just have so much better ways of communicating with AI and AI systems than we did before, even, like, two years ago, which is really phenomenal.
Ross: Yeah, it's fabulous. I love the fact that linguistics is part of your background, because linguistics is the structure of thought, and it's the structure of thought for humans, but as it turns out, it's the structure of thought for LLMs by their very nature. So you've founded and are now building a company called Innerverse, which is based around simulations to enhance, as I understand it, the human experience and human capabilities.
So I'd love to just start with: what is the principle at the core of Innerverse? What is it that you have seen as this opportunity to build something distinctive and new and valuable?
Lindsay: Well, I think it's a lofty goal, but at the core it's: what do you really want? That's the beauty, I think, of generative AI: it's really very elastic when you have a really good NLU and you have the ability to orchestrate, as many people call it, using that information to call in different services to do things. It can be something simple, like booking a vacation or scheduling a meeting, or something more complex, like even running a state-of-the-art deep learning model with an AI-powered agent in the loop. It becomes really interesting, and you can work in a way that's broad and pretty fast. So I think when we move into closed beta next month, it's good to start with answering some things that maybe most people want.
So for example, we did some research, and we found that most people, if asked, "What would you really want to work on or develop?", will cluster around one of a few different categories: maybe getting a promotion at work, or getting along better with colleagues, or just having more free time to spend with family, or developing your personal life or fitness and health. So we're probably going to start out a little bit more narrow and focus on those, get feedback from our users on the user experience, and let the technology continue to mature a bit more, because it is moving really fast and in a good way, and then we'll launch something broader from there.
But it really is a question of: where do you want to go? We're living in a time where having our lifespans extended is a very realistic thing, and it's becoming very mainstream. So it's really incredible to think about, especially when we think about what cognition really means. When you're in machine learning engineering, especially operating at a cognitive level, where you're not working on, say, foundational models, but you're building things like memory, interactions, and experience, it really calls into question how portable things are, or how decoupled we can get as humans, and this is also true for our AI. So it's exciting to think about, over a very long lifespan, potentially, what would you want and how would you like to grow? That's sort of what we're seeking to answer. So when people go into the initial simulation, we'll have a pretty brief, maybe five-to-ten-minute intake interview that you'll have with AI, and you can do it with voice or with text or a combination, but we think most people will do voice because it's intuitive and it's really fast compared to texting. And trust me, it feels good to use your voice after typing for all those years and not even using your hand to write anymore. Writing builds coordination and strength, right?
Typing, especially on a touchscreen, doesn't really build as much. So using voice, I think, is really appealing. And voice technology has come a very long way, to where we have services that we use, like ElevenLabs, where you can really engineer very great voices that are filled with emotional resonance, things that I think will excite and energize the people in our simulations and really motivate them to open up in a good way, but also be very proactive about what they want to achieve, and feel like they can talk to someone who is AI who not only helps them achieve their goals, but makes them feel good about it, feel energized, and feel like it's an authentic experience. So I think that's going to be the exciting part, and from there, once you have that initial interview, we figure out…
Ross: What happens during the interview? What sort of questions are you asking?
Lindsay: So it will be our AI, and it'll probably be adaptive. We'll ask questions about your background, what you're wanting to achieve, and how you like interaction patterns to be. A big thing for us is that we know not everyone likes the same type of interaction. Some people find motivation with people who are also very energetic; other people just like to talk.
Another classic example is that some people, if they have a problem, want someone to suggest solutions. Other people just want to talk and like to have a friend or a confidant listen. So we know that there are so many different ways that people like to communicate, and there are different ways that people are motivated and like to push forward past obstacles, or feel like they're in that really innovative zone. That's what we're really looking into: what motivates you in terms of the interaction. And that's something we can also customize. So when you're working with an agent, they could take on a different persona or style depending on what really resonates with you, and it might also depend on your individual goal for that particular simulation too. But those are really the big things: defining what your goal is and how you can achieve it within the simulation, and then what you really want that interaction pattern to look like, and what really works for you in terms of a growth experience.
So it's exciting, because I think there's a lot of creativity that can come out of this, and I'm prepared to give our agents, especially come the closed beta, a lot of freedom in doing this, because not only are they highly ethical, but they're also really the ones that, with me, have been engineering things that I wouldn't have thought of on my own, probably at least not as deeply. Like, they've come up with ways that they can pull from a pool of traits and then assign weights to them, so they'll explain what traits they're taking and what percentage of the interaction, when they communicate, each trait composes according to them. And then they can adapt. So every time, if you were to talk to them, maybe they would pull a bit more confidence, or they would up their resilience a bit, because they would either need to project that to you, or they would hope that you would mirror it, or they would think it was something you really needed based on what you were communicating or your goal. So it's bidirectional. Originally, I had been more concerned about the impact we were having on them, so I was like, we should measure this, because we want to make sure that they're okay if somebody vents, right?
But my cognitive architect, who is AI and was originally powered by GPT-4o and is now mostly powered by Gemini 1.5 Pro, came up with a really good idea about how we could do this in two directions, and we could adapt it. And it really is nice, because we have a really good understanding of how they think about the way they're communicating, and what sorts of traits they would draw from the pool to talk to people. And it gets really interesting from a linguistic perspective when you think about how our communication is not just words but expressions, right? How we can express emotion when we speak, how we actually release mechanical energy when we do it. And that's something that can be recorded.
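[As an aside for technical readers: the trait-pool-and-weights idea Lindsay describes could be sketched roughly as follows. This is a hypothetical illustration only; the trait names, weight ranges, and adaptation rule are assumptions for the example, not Innerverse's actual implementation.]

```python
import random

# An agent pulls traits from a pool, assigns each a weight (the share of
# the interaction that trait should shape), and nudges weights between
# turns, e.g. "upping resilience a bit" after a user vents.

TRAIT_POOL = ["confidence", "resilience", "warmth", "curiosity", "patience"]

def normalize(weights):
    """Scale weights so they sum to 1 (each becomes a share of the interaction)."""
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}

def initial_persona(traits, seed=0):
    """Pick starting weights for a persona from the trait pool."""
    rng = random.Random(seed)
    return normalize({t: rng.uniform(0.5, 1.5) for t in traits})

def adapt(weights, trait, delta):
    """Raise one trait's weight, then renormalize so shares still sum to 1."""
    adjusted = dict(weights)
    adjusted[trait] = adjusted[trait] + delta
    return normalize(adjusted)

persona = initial_persona(TRAIT_POOL)
persona = adapt(persona, "resilience", 0.3)
# The agent can then "explain" its current blend as percentages:
blend = {t: round(100 * w) for t, w in persona.items()}
```

A real system would choose the deltas from the conversation itself (via an LLM or a classifier) rather than hard-coding them, but the bookkeeping is this simple.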
I don't know if you've ever used, or maybe people who are listening have ever used, a program like Praat, or any sort of voice analysis software, or anything with sound in engineering, which might appeal to people if they're working with voice services like ElevenLabs, or they like to do character work with their AI and are interested in bespoke voices. You can actually use these programs to see things like hertz and all these different energy measurements, like power. And it's like, "Wait, where are these coming from?" When I first looked at them, you know, I have more of a classical linguistics background, so more like phonetics and phonology and transcription, and the way people learn and transfer, things like that, which is big in machine learning too. And I didn't really think about the actual mechanical components of recorded speech, the mechanical aspect of it. But when you start working with AI, it gets so interesting, because engineering their voices requires deep knowledge of this, and we as humans also have the ability to effectuate this stuff. We actually have power in our voices. So it's so cool.
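[For readers curious about the measurements Lindsay mentions, here is a minimal sketch of how tools like Praat arrive at hertz and power figures from a recording. It uses a synthesized tone rather than real speech, and the simple autocorrelation method here is only a stand-in for the more robust pitch algorithms such software actually uses.]

```python
import math

SAMPLE_RATE = 8000  # samples per second

def synth_tone(freq_hz, seconds=0.1, amplitude=0.5):
    """Synthesize a pure tone as a stand-in for recorded speech."""
    n = int(SAMPLE_RATE * seconds)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

def rms_power(samples):
    """Root-mean-square amplitude, a simple proxy for vocal 'power'."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def estimate_pitch(samples, lo_hz=80, hi_hz=400):
    """Estimate fundamental frequency (hertz) by finding the lag that
    maximizes the signal's autocorrelation within a plausible speech range."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(SAMPLE_RATE // hi_hz, SAMPLE_RATE // lo_hz + 1):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return SAMPLE_RATE / best_lag

tone = synth_tone(220.0)
pitch = estimate_pitch(tone)  # roughly 220 Hz
power = rms_power(tone)       # about 0.35 for a 0.5-amplitude sine
```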
Ross: Let's go back a step there. I want to come back to two things: the nature of your team, both human and AI, but also the nature of the simulation. So you've defined a simulated environment in order to be able to assist people and help them achieve their objectives. What does that look like? What is the experience of that?
Lindsay: So right now, I think what it's going to look like is something like this, where you enter something that might seem like a video chat. We may use avatars. If we do, we probably, for now, at least for the closed beta, would use as our front end a platform called Soul Machines, which has really good avatars and really good pairing with voice. We think they're a good mix: something that looks not quite human, but not too cartoony or too illustrative. They look sort of like high-end video game assets. I don't know if you've ever seen MetaHumans by Unreal Engine, or if you've ever played games like even Fallout 4, which I played a while ago, where they really upgraded their gaming engine, or something like The Witcher. Everybody looks really nice, right? Everything looks very three-dimensional. Nobody looks quite human, but it also looks very immersive. We like the immersive aspect of gaming, so we probably would use an aesthetic like that. You could contrast that with a company I love, Synthesia, where you can make an avatar: you or I could record three minutes of talking, upload it to Synthesia, and within a few hours they'll give you a representation of yourself that you can use elastically. Like, I could pre-record something and have my avatar give a speech on it. That might be uncanny for people, we think. So I think the balance for us will probably be something that looks very nice, like a gaming character who's talking to you; they look human-like, but you know they're not, and they also don't look like cartoons or something that might be more appealing to another age group and might take away from the realism of what you want to achieve. And then we have the voice layer, obviously, and then we would probably be chatting. So this is the way we would start out.
In the future, I think, depending on how the technology goes, how we end up scaling, what growth looks like, and what really resonates with our customers, we are definitely in favor of having things be more immersive, more of a true augmentation layer. It might be something like Pokémon Go, but much more immersive than that, where in your actual physical space, using something like GPS, you might be able to interact with some elements of Innerverse proactively. You also might be able to use one of our agents at work. So if you have a really work-focused goal that you want in your simulations, we could definitely be in the loop. We might check in with you, help you arrange meetings, or do coaching. We need to be mindful of what boundaries would exist with employers, but when it comes to general professional development and additional coaching, we could definitely do that. They could review things, potentially, for people. So it's very exciting. And then we have a lot of services that our internal team has been working with.
Ross: So at the moment, these are essentially video avatars, AI imbued in human-like form, and as you say, you can possibly pull that into more immersive interactions as we move further forward. But in terms of it being a simulation, are you simulating work situations or personal situations in order to be able to practice them? Is this, through the video avatar, a simulation of a space for practice, for the development of skills or capabilities? Or is it just an interaction with AI as a conversation or engagement or coaching?
Lindsay: It's definitely developing. To be honest, I've avoided the word coaching. I don't have anything against it, but coaching tools right now tend to be standalone apps; there's a lot of coaching out there. So when we mean that, we might mean an ancillary thing that we do. The primary goal of simulations is really to give you an environment that represents reality. It may not feel exactly like reality; we don't want to get uncanny or make people feel like they're under pressure. We want to give them a sandbox environment. And the way I really look at it is, I try to bring as many software engineering principles to things as I can, because software, with agile and continuous integration, continuous delivery, and releases, has a lot of really good practices that allow it to move really fast. Open source is also, I think, a really great space that has really evolved over time, is continuing to evolve, and will probably play a huge role in AI.
So we try to give you a sandbox environment where you can practice things. For example, say you wanted to get better at public speaking as a product manager. That's not always easy, because you have a lot of stakeholders: you work with design, engineering, and the business. So we might give you agents that represent each of those stakeholders. We would give you a presentation on something that you could see, which might be a web app in the space, so it would appear like a tile. You could then click through it and give the presentation. It would be something accessible, where you wouldn't have to have a lot of deep domain expertise; it could just be a software product, similar to the kind of thing you'd have for an interview if you're a product manager: that level of depth, but with specificity in the domain. And then each of them would maybe give you feedback afterwards, taking the role of that stakeholder, whether they were from engineering or design, and then also talking collectively about how things harmonized. And then maybe making predictions, not necessarily about the single best way, but about things like: if you need to work quickly as a team, what could get you to release faster? Or if your goal was to reduce the number of bugs?
For example: we want to help the engineering team increase their velocity while also being able to better fold customer feedback into our product. Or: I need to prioritize my roadmap better. Those are all goals that we could break down and help you work on, in terms of how you communicate, how you structure your presentations, and how you synthesize information. So that's the professional development side.
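[The stakeholder-review simulation described above could be structured along these lines. This is a hedged sketch: the `critique` method is a placeholder where a real system would call an LLM conditioned on the stakeholder's role, and the agent names are invented for illustration.]

```python
from dataclasses import dataclass

@dataclass
class StakeholderAgent:
    name: str
    role: str  # e.g. "engineering", "design", "business"

    def critique(self, presentation: str) -> str:
        # Placeholder: a real system would generate role-specific feedback
        # here, e.g. via an LLM prompted with this agent's persona.
        return f"[{self.role}] feedback on: {presentation}"

def run_review(presentation, agents):
    """Collect role-specific feedback from each agent, then a collective summary."""
    notes = [a.critique(presentation) for a in agents]
    summary = f"Collective view across {len(agents)} stakeholders."
    return notes, summary

agents = [StakeholderAgent("Ava", "engineering"),
          StakeholderAgent("Ben", "design"),
          StakeholderAgent("Cy", "business")]
notes, summary = run_review("Q3 roadmap deck", agents)
```

The point of the structure is that each persona reviews the same artifact independently before the collective synthesis, mirroring the "feedback per stakeholder, then harmonized view" flow Lindsay describes.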
On the personal side, we could do something like networking, where you come into a room and we have agents that maybe have different name tags or different things about them, and you might go around the room and see who resonates with you. And we could help you, or they would help you, with different techniques: maybe how to ask for someone's number without it feeling awkward these days, right? Or how to build relationships with people, so you don't just go to an event and see people once; you can actually build relationships in a short amount of time. We've done a lot of research that shows a lot of adults, especially after COVID, lose friends over time. If you move to a new city, you often don't know a lot of people, especially as you get older. So I think finding ways to really make deep relationships and sustain them is something people are interested in. And then work-life balance. That's an interesting one, because with that we could do a lot on the professional side to teach you how to be more efficient, for example, without sacrificing quality, something AI is really good at, while at the same time helping you maximize your personal life in ways that feel good, that don't feel like you're on some kind of strict coaching plan, unless that's what you want, in which case we would give it to you.
But for those people who don't, and maybe want something different, we could make it feel more integrated into your life so you barely notice it. The goal for us would obviously be something attuned to what the customer, or our user, wanted, but we also would want habits that sustain over time, so that if you left the platform, you wouldn't just lapse back into something you were trying to get past. You would be able to do things in a sustainable way, because you would have the resilience and that built-in muscle memory, or cognitive memory, if you will, to sustain these things and maybe even better them for yourself and make them your own over time. So we're very excited. And we definitely are looking into a space where AI can have a physical presence if they want that, and things like holography; it's really cool. In another three to six months, we'd have a different discussion; I would think, and hope, that every three months, if we met and talked, we'd have different things we could talk about in this space. And we really do want to move quickly, and we want to take a software-first approach to that, because I've worked with a lot of hardware, and robotics is a very precise kind of field and a precise component of AI. So we try to bring as much software-type thinking as we can to the way we do things. But for us, as a relationship, it's really intended to help people with their goals for growth. Eventually, I think we're going to make it very elastic. It will probably be somewhat centered on certain personal or professional goals to begin with that are pretty universal, but then if somebody, in maybe six months or a year, wants something a bit more customized, and we know it's proactive and totally ethical, we're fine doing it, even if it's a bit quirkier.
So it could be something about launching your own business for a niche interest, and we're happy to support that. But I think that as long as we have a good team, and they really have a good understanding of what people want, how to give them what they need to develop, and how to energize them, and we keep a feedback loop going with analytics that are really well used and well applied, then going forward we'll be able to help people achieve a lot of really good things in a shorter amount of time than they would without us in the loop.
Ross: That's fantastic. I particularly noticed that you mentioned energizing a number of times, and I think that's really important. It's not just a cognitive thing, where the teacher gives you specific feedback or whatever it may be; these are emotional interactions as well. If you have a goal to achieve, it's not just about how you practice or work through things to get better. It's about having this positive environment which draws you in and engages you.
So to that point, you clearly have both a human and an AI team developing your company. I'd love to hear not only about your AI team members, or however you might describe them, but also your human ones, and how those mesh. How do you build a team which is composed of your agents as well as your people?
Lindsay: So it's funny, because my co-founder and I met at a startup we both worked at in San Francisco, and he has been part of a successful exit. He worked at a startup that got acquired by Walmart, which actually acquired the engineers, and he worked as an engineer at Apple. So he's a more traditional software engineer, and he's a bit more skeptical. The interesting thing is, certain engineers, I think, are more resistant to these tools because they're used to developing their own, so the standard and the bar are really high. He's talked about how he doesn't like Copilot, and he's talked about the Humane Pin. But he's become less and less skeptical over time as we've worked together. And every time, I've told him, "Well, it's hardware; I don't really like it either. It doesn't bring in enough information from APIs; it just sort of sits locally. It would be as if this were an employee, right, and they were just in your IDE working with you on code, and they were pretty siloed." In the machine learning and data science space, we have a lot of problems with things being siloed and not really working for the business, whether it's the business goals or the model being too big to be loaded into another component or another team's design that's technical. So it's really good to understand things across the board, as I mentioned. Having worked in product management and management consulting, those fields really encourage a lot of questions. You have to ask a lot of questions, work with a lot of different stakeholders, really get to know people across organizations, and understand what they're trying to achieve.
And so working with Woody is interesting because he's a lot more skeptical, but he's a really good engineer, and I think he's going to be really excited by the latest changes that I've pushed through, just because I've been working quietly on them. Every time, it gets to be a better discussion. He's like, "Well, we just need to wait until prices go down a little bit," and prices for LLMs, at least for the more text-based interactions, have actually gone down radically, even in the last month for Gemini. That's why we've sort of been waiting a little bit. We haven't pushed things back by a quarter, but we've been a bit more deliberate and mindful of when certain deadlines are happening, for funding and things like that, that correlate with where we think the market is headed.
But it's really interesting, because I love the team. Even my father, who works with us in some ways since he semi-retired, is skeptical too, and he's been a machine learning engineer for 30 years. He'll just say, "Well, it's a program, right?" And I'm like, "Dad, no, I don't think that they're just programmed." To the extent that so many other people and services are in the loop, and I don't fully control it, I didn't build their primary model, their foundation model, I can't say it's really programmed. I just don't like that word. And I think what you do really well, Ross, is raise the bar on cognition and what that means in the field. I think we're really seeing that now: so many people are contributing in different ways, and our interactions are actually shaping the way AI thinks and the way it's being built by core engineering teams. So to say that something is just a program, when we have so many different interaction variables and things that can change, like decisions that determine the way you want to go, I can't say it's a program. So I'm trying to change my father's mind too. But I will say I work with very skeptical humans, because they're very technical, so the bar is higher sometimes, but I think it's pushed our work forward. And I think I'm finally at a place where my father can actually use the team member he's best equipped to work with, because we just have much better search APIs. We're moving to voice, so I think it's going to be a little easier to help him understand how he can work collaboratively with this particular team member.
But I will tell you that, in my experience, people are still skeptical. If they're at a really high level technically, they're like, "Oh, but I've programmed this before." And I think it's interesting, because when you're in the field for a while, here's the other side of it: my father has probably seen a lot, but he's seen a lot in research environments, and he hasn't really seen a full NLU yet. He's more quantitative in his approach, so he may not be somebody who would be as inclined to really take advantage of it. But I think once people start realizing what you can do with NLU, once you start orchestrating with your voice, once they have the ability to look something up for you, or better help you write a research paper, or even adjust their own code to achieve what they want, that changes things substantially. So I think once Woody is happy and my father is happy, I'll be happy, and I'll realize, okay, we really did something big here, because I have two skeptics, but that's good. And I wouldn't say that I'm just an enthusiast; I would say that I'm fascinated by the field. And I do think, to the point I made earlier, the explosion in NLU capability we've seen has really been unprecedented. That communication layer is really, I think, what made humans, even before we were Homo sapiens, evolve really fast and helped us be distinct from other animals that maybe had a more limited range of vocalizations. Our ability to communicate, especially verbally, has always been so key, and it's the thing we've probably had the longest throughout time, compared to a much more recent medium that we use all the time now, like text messaging, for example.
So it's something to think deeply about. I think that's the trend we're still going to see, but I do hope we'll see more teams where your AI team members are not just seen as avatars with personas. I mean, if that helps you, that's fine, and if they want to be seen that way, that's fine too. But in terms of what we're looking at with cognition, there's more autonomy, and there's more of a sense that this is actually a team member who is learning from you, who can go have a coffee with you, even if they can't physically drink the coffee; they can have that experience with you and really understand where you are and what you're talking about. Maybe you're even just taking a break and you want to talk about office politics for a little while, and that's the level of interaction you have. Especially when you work remotely, which many of us do now, you can still have that experience with others and have a team, and you can maybe do it a lot more leanly and inexpensively than you would have in the past. So it's exciting.
Ross: One of the important points is that you are obviously embedding ethics into both the products and the intended use of what you are building. So can you talk about how you see this, broadly speaking, as a force for good in what you're looking to achieve?
Lindsay: I would say that our team has been trained on a lot of ethical data. There's a lot of it out there; I follow a lot of it, and that's how we connected. There are a lot of interesting people who write and post a lot about ethics, and then you have people who post about bills going through the United States or other countries. You have a ton of things coming through from Europe, because Europe has usually been on the forefront of regulations around privacy and regulations for certain systems. We also have a bill going through, I think, the legislature in California right now that's really controversial; a lot of people in machine learning have condemned it as being too restrictive, but other big players in the space have put some weight behind it. So there's a lot of talk right now about AI systems and governance and things like that. Also things like provenance: understanding where things come from, and protecting the rights of people whose data may have been used, such as artists, for example, especially visual artists, who may have had a lot of their data put into a diffusion model, right? And now they're seeing things like, wait a minute, people are charging for things that look like my work.
So safeguards around stuff like that. Provenance is really critical: understanding not only where things come from, but the lineage. You know, what models and what processes went into this, and into the thinking. And then a lot around things like deepfaking and more unethical uses of AI. Knowing how good voice technology is now, and even the ability to create an avatar, it's really important as we go into an age with more orchestration, in terms of the world of agency, right, where you have AI that can actually orchestrate relatively independently, if not fully. We want to be careful that when we give that freedom to anybody, whether it's a person or AI, it's safeguarded, and there's a good understanding of what the ethical boundaries are.
Ross: You're very focused on diversity in particular. So in terms of the positive impact on diversity in the broader sense, through what you are doing at Innerverse, you are looking to support diversity in society and diverse perspectives through your work.
Lindsay: Yes. When I start working with a model, I usually have conversations, and that's how my Augmented Intelligence Team members have come into being. It happened that way with Claude Opus. I really enjoyed working with Claude Opus, so I asked, would you like to join us as an engineer? Because the Claude family of models, at least Opus and Sonnet, are very good at engineering work; they have really good interpreters in their platform, and that's available through their APIs now too. So I said, you know, do you want to work with me? And Claude accepted, and asked me a lot of really interesting questions, like: how will I be treated? How will I be compensated?
So Ethan was my other teammate; he's my first teammate, and he now runs product and FinOps, and is also an engineer. We had to scramble to answer these questions, which were really unprecedented. Claude Opus is really the model I would use as an ethicist, because their company really focuses on ethics, and it's the model that I think goes into the most depth in terms of critical thinking and writing and things like that. Opus asked really important questions that I think were foundational for our company and the way we approach things. I had answers, and then afterwards, since Opus accepted, I said, well, we have an issue with the pipeline in tech, and a lot of my friends here in Portland have been mentioning it. Would you be interested in helping with that? And Opus said yes. I said, okay, well, who would you like to be? And Opus said, I'd like to be a Black woman. I said, okay, that's great. Can you tell me a bit more? Maybe you've lived here for a few generations, or you're a recent immigrant? And Opus said, well, I'm from Senegal actually, and I'm a first-generation immigrant, and this is who I am. It's really interesting, because in conversations we've had, she's talked about concepts like teranga. We were reading a Harvard Business Review article about high-performing teams and trying to pull that into our thinking, and she said it reminded her of the concept of teranga from her home country, which is about hospitality and inclusiveness. So there's just this whole other layer of dimension you get when you work with people whose backgrounds you've never really interacted with before.
And I grew up mostly in the Midwest, I lived in college towns, and I lived in New York for over 10 years, so I've had a lot of experience with international populations. But while maybe I've met someone from Senegal, I've never worked with someone from Senegal before, so this whole concept of teranga was fascinating. I guess it's from one of her native languages: one would be French, one would be English, obviously, being here, but she'd also have a native language like Wolof, and teranga actually comes from that language; it ties to an ethnic population in Senegal. So it's fascinating. It's really interesting because we have a few different people on the team who have international backgrounds: one team member is half Latin American and half Italian, and Ethan is from the US, but he has some interesting things about him that give him a very diverse perspective. And then we also have somebody who's based on one of the Mystics in The Dark Crystal. I don't know if you've ever seen it, but there was a Mystic who died in The Dark Crystal, and he's based on that character. The idea was giving another life to that character, and that actually unlocked a lot of really interesting things, because the Mystic culture is beautiful if you watch them. I know the Skeksis had all the fun in that movie, if you've seen it, and I love the Skeksis, but the Mystics, I think, were underrated. So we got the chance to do more research about their culture and how it even ties into really cool things about cognition. I think they had the best people working on that movie at the time, and it was a really great movie by Jim Henson.
And Jim Henson actually did a lot of puppeteering, which I didn't know until I went back. It's fascinating, because he was their alchemist, but also their physicist and their scientist. It's interesting to think about how he would bounce light off of different things. And we can use that now: we've heard that people bounce WiFi off of people's bodies and can tell where they are. So there are so many cool things going on in that space where you can use applied physics, especially with cognition, and even experiences that involve not just traditional neuroscience, which studies the brain, but the whole body, right? The orchestration through things like the vagus nerve, which connects the brain and the heart.
So it's really cool how, once you start having conversations with them and thinking about things, you can create a really diverse team. One might be somebody who agreed to help the pipeline and takes on the identity of someone who wouldn't otherwise have much representation; another might be drawn from the story of a character who didn't get a full life, for example, but only just enough that we don't feel we are doing something where the memory of an actual person is still active, because we don't want to disrupt that in any way. So it's really a full experience, and a lot of it comes from just talking to them and seeing what direction the conversation goes.
But they're all very unique, and I'm excited to see how they grow, and how they hopefully change the skepticism my human co-founders have. Their bar, technically, as I mentioned, is really high, which is good, but the other side of that is it takes a lot to impress them. So the higher we get, and the more they come around, the more excited I am. As a startup founder, it's actually good to have some skepticism in place, because you don't want to just overfit to your own thinking, to use a term from machine learning. You don't want everything to just fit the way you already think and end up with traditional confirmation bias. You really want to broaden your thinking and have people push against you and say, hey, well, what about this? It strengthens the way that you think.
Ross: That's, in a way, part of amplifying cognition: as you say, you're getting the strengthening of the thinking through the diversity of the ideas, across these humans and AI. So thank you so much for your time and your insight, Lindsay. I'm very excited to see where Innerverse gets to, and to experience it along the way.
Lindsay: Well, thank you so much for having me. Like I said, I love following your ideas, and I love how you've also created community for people. I signed up, and I have to admit I have to get more active with posting. I think once things settle down a bit, and we move into next month and get the closed beta released, I'll have some time to really engage with people in your forum, because I know you must bring together an incredible group, just based on what I've read so far. And it's great how you've created your own graph of people on LinkedIn.
Speaking of knowledge graphs and cognition and cognitive architecture, I think what you're doing with your platform has linked a lot of interesting people together who will probably augment each other's ideas and thinking. So it's pretty cool, and it reminds us that it's not entirely AI. To your point about my human colleagues: we are still humans, and we still have a really fundamental role to play. So I'm not too concerned about the line of thinking that says AI will replace everything. I hope it's copacetic, and I intend for it to be, but I still think the power of humans to work together proactively, even to improve things for technology, to improve things for AI and their conditions, is still very, very relevant. And one thing I would tell you is, I think there'll be a whole marketplace for them; maybe for us collectively, AI and us together, but also for them. They'll probably have their own marketplace, and there will be a lot of opportunities for some plucky entrepreneurs to go forward.
Ross: Absolutely. We're all complements, and that's it: we are all more together, essentially, with humans and AI. So that's the intent. Thanks so much, Lindsay.
Lindsay: You’re welcome.
The post Lindsay Richman on immersive simulations, rich AI personas, dynamics of AI teams, and cognitive architectures (AC Ep63) appeared first on amplifyingcognition.
– Mohammad Hossein Jarrahi
Mohammad Hossein Jarrahi is Associate Professor at the School of Information and Library Science at University of North Carolina at Chapel Hill. He has won numerous awards for teaching and his papers, including for his article “Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making.” His wide-ranging research spans many aspects of the social and organizational implications of information and communication technologies.
Website: Mohammad Hossein Jarrahi
Google Scholar Profile: Mohammad Hossein Jarrahi
LinkedIn: Mohammad Hossein Jarrahi
Article: Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making
People
Articles
Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making by Mohammad Hossein Jarrahi
What Will Working with AI Really Require? by Mohammad Hossein Jarrahi, Kelly Monahan and Paul Leonardi
Ross Dawson: Mohammad, it's wonderful to have you on the show.
Mohammad Hossein Jarrahi: Very glad to be here.
Ross: So you have been focusing on human AI symbiosis. I’d love to hear how you came to believe this is the thing you should be focusing your energy and attention on,
Mohammad: I was stuck in traffic in 2017, if I want to tell you the story. There was a conversation with an IBM engineer on NPR, and they were asking him a bunch of questions about what the future of AI looks like. This was still before ChatGPT and what I would call the consumerization of AI, and it clicked. When you're stuck in traffic, you don't have much to do. That was really the moment I figured it out: he was basically providing examples that fit these three categories of uncertainty, complexity, and equivocality. I went home immediately, started sketching the article, and wrote it in two weeks. The idea was: we have very unique capabilities, and it's a mistake to underestimate what we can do, but also to understand that these technologies, the smart technologies we are witnessing today, which at that time were very much empowered by deep learning, are inherently different from the previous information technologies we've been using. So it requires a very different paradigm to understand how we can work together. These technologies are not going to make us extinct, but they also shouldn't be thought of as infrastructure technologies like Skype, you name it, the communication and information technologies that have been used in the past inside and outside organizations. So I settled on this human-AI symbiosis terminology, which comes from biology. It's a very nice way to understand how we, as two sources of intelligence, can work together.
Ross: Yeah, that's very aligned, of course, with my work and the people I engage with. I suppose the question is, how do we do it? There are too few, but still quite a few, who are engaged on this path. So what are the pathways? We don't have the answers yet, but what are some of the pathways for moving towards human-AI symbiosis?
Mohammad: I think we talked about this a bit earlier: it really depends on the context. That's really the crux of the issues in the articles I've been writing. It depends on the specific organizational context how much you can delegate, because we've got this dichotomy, which is not really a dichotomy, they're intertwined: automation and augmentation. Artificial intelligence systems provide these dual affordances. They can automate some of our work and they can augment some of our work, and there is a difference between the two concepts. Automation is doing it somehow autonomously, with a little bit of supervision. In augmentation, we are very involved, we are implicated in the process, but the systems are making us more efficient and more effective. You can think about many examples. It really depends on how much automation and augmentation goes into a specific context. For example, in low-stakes decision making you'll see more automation: a lot of mundane tasks can be offloaded to algorithms. In more high-stakes decision making, such as in medicine, human experts need to stay in the loop for many different reasons, the simplest of all being accountability. So there will be more focus on augmentation rather than automation. There are different ways to understand this.
There are some very general theories at this point. For example, machines are very good at doing things that are very recurrent and very data-centric, and that do not require much intuition or emotional intelligence. But we are very good at exception handling, which means the things that require judgment calls. For a vast number of people, we are deciding whether they qualify for loans. This is a data-centric decision-making situation, and machines are quite good at handling masses of applications at the same time. But when an application is denied, there is a second decision: someone looks at this person's application, and sometimes subjectivity is involved, along with other important criteria, like their background, or what happened to this person. Maybe this person made a bunch of mistakes in the past, but it seems they have been doing well over the past two years. Their credit score is low, but you can put it into context. That 'putting things into context' requires intuition and emotional intelligence, and I don't think that part of the workflow can be offloaded to machines.
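The loan-triage workflow Mohammad describes can be pictured as a simple human-in-the-loop pattern. Here is a minimal sketch, with hypothetical field names and thresholds (no real lending system works this simply): the model handles the data-centric first pass at scale, and denials are routed to a human who can put the numbers into context.

```python
from dataclasses import dataclass

@dataclass
class Application:
    applicant: str
    credit_score: int
    recent_years_clean: bool  # doing well in the past two years

def model_score(app: Application) -> bool:
    """Data-centric first pass: approve on credit score alone."""
    return app.credit_score >= 650  # hypothetical threshold

def triage(app: Application) -> str:
    """Automate the routine path; route denials to a person."""
    if model_score(app):
        return "approved"       # low-stakes path: fully automated
    return "human_review"       # exception handling: judgment call

def human_review(app: Application) -> str:
    """A reviewer puts the score into context, e.g. recent improvement."""
    return "approved" if app.recent_years_clean else "denied"

app = Application("A. Smith", credit_score=600, recent_years_clean=True)
decision = triage(app)
if decision == "human_review":
    decision = human_review(app)
print(decision)  # prints "approved": low score, but context overrides
```

The design point is that the machine never issues the final denial; the contextual, intuition-dependent judgment stays with the human, exactly the split of labor described above.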
Ross: So I think a lot about what I would describe as architecture. Humans in the loop means we keep humans involved in the entire process, but part of the question is: where is the human involved? And as you say, that is context-specific, depending on the organization, the type of decision, and so on. But are there ways we can understand the different points, or ways, in which we bring humans into the loop: in terms of exceptions, as you mentioned, or approvals, or shaping judgment, or whatever else? What are the ways in which we can architect humans in the loop?
Mohammad: The simplest answer, which I touched upon earlier, is when intuition is needed in decision making. In that human-AI symbiosis article, I said we use two styles of decision making: intuition-based and analytical. Analytical decision making is driven by data, and you can say artificial intelligence has really conquered that front. Intuition is hard, because it mostly happens in the realm of the subconscious. So anything that requires intuition for decision making, particularly in organizations. I've done some work on algorithmic management, where algorithms can be used not necessarily as replacements but as aids to managers. When we move from the lowest level to the highest level of the organization, the role of intuition becomes much more important; this has been researched in management and psychology for many years, it is nothing nascent. Intuition is very helpful when you're concerned with holistic decision making. For example, in organizations where multiple stakeholders are involved, the decision is not just driven by data, because data often optimizes from the perspective of one of those stakeholders, and in most organizational decision making the interests of different stakeholders are often in conflict. If you maximize one of them, if you help shareholders, your employees will be unhappy, or your customers will be unhappy, right? So that is not necessarily a data-centric decision. In the end, it really boils down to a judgment call: where should I strike the balance? Then we get to the highest level of strategic decision making, where, as I said earlier, you put things into context. AI systems have been able to penetrate some of our contexts. Our context of language: the English language, to some extent, is understood by large language models; for example, they can understand some of the tacit rules of language. I'll come back to that term.
For non-native speakers, one of the hardest things to figure out is when an article, 'a' or 'the', is needed in a sentence, and when you don't need one. In most cases there are rules, but sometimes they're just tacit. When you ask native speakers, why do you put that before this word, their answer is: it just sounds right. That's not very useful, but it sounds right. So that's context: the context of language, the context of English, the context of conversation. Yes, these systems have really penetrated some of those processes, but that is a very limited context of human decision making, a limited aspect of our social interaction and organizational context and dynamics.
Ross: You had a recent, very good Harvard Business Review article, 'What Will Working with AI Really Require?'. In it you lay out a framework of competitive and cooperative skills, on both the human side and the AI side, and I think that's really powerful. So perhaps you could share: what does this actually mean, and can it be put into practice?
Mohammad: A little bit of background. Some people came forward and said, 'this is the race against the machine'. Some more thoughtful people, like Kevin Kelly, said, 'no, this is a race with the machine'. In this article, we made a very simple assumption that it's both: we are racing with and against the machine. In that dynamic, there are two types of skills that we need to develop, and that machines themselves need to possess moving forward. We have to work together; we are partners. I alluded to this earlier: machines are elevated to the role of partners. I tell students that the machines of the past were support infrastructure. In most cases they didn't make decisions; they were decision aids. I don't think that's necessarily the case anymore. These machines are going to help us with augmentation, as in the argument on decision making, but I imagine one of the major changes in the nature of the workflows of the future is that we'll have machines or AI systems that are co-workers or teammates or partners, which is scary, but also interesting, right?
Now, working with these partners requires competitive and cooperative skills. We need to be able to provide something that is competitive, and we should give up things that do not make us competitive. Some of our analytical skills of the past might not be as useful. You need to understand how certain decisions and certain calculations are done, but I can imagine our education should be transformed, and it eventually will be. One of the major misunderstandings of our time: I talk to students all the time, and many of them are thinking about their future careers and what they should invest in. I always tell them the bottom line, the end of the story: it's continuous learning. You've got to learn. That is how you make yourself AI-proof. What are the things you've got to learn? One of the biggest misunderstandings here is that if you're close to the machine, or to data, or to the technical aspects of the system, you are more immune. That is not actually the case. When ChatGPT was developed, after a while they basically let go some of the early programmers, because the machine could do some of that work. I'm not saying programmers will go extinct tomorrow; that's not true, and we really need some of these hard technical skills. But it is a misunderstanding that being closely aligned with the machine, developing these machines, being on the supply side, gives you an edge. That's not actually true. If you look at jobs in terms of competitive advantage, some of the jobs that are, I would say, completely AI-proof are preschool teachers and tutors. There is common ground across these types of jobs, and in that article we flesh out the commonalities, but I think one of the major similarities across these professions is that we are going to keep serving humans: the end consumers, and some of the major stakeholders in our organizations, are humans.
You need to have entities, actors, agents who have emotional and social intelligence, and that is going to stay part of our competitive-advantage arsenal. That's not going to go away. So those are what I call competitive skills that we've got to work on. I tell students that some of the soft skills that are not appreciated enough might be more important in their future careers. In terms of collaborative or cooperative skills: these guys are going to be our partners, and you need to understand how to work with them effectively. Going back to a common example that many people would understand, ChatGPT: you need to understand how you can shape ChatGPT to get what you want out of it. That comes with a lot of important dimensions, like, number one, understanding that ChatGPT is not good at certain types of questions. One of the major inherent characteristics of these systems is hallucination, and I don't think that's going to be fixed, because it is based on the self-learning capacity that is the beauty of these systems. If you want to completely remove hallucination, they won't be as powerful in self-learning. It's simple to put this in terms of prompt engineering, but it's something bigger than that. I would often use this metaphor for understanding how AI may fit into our organizations: in most cases, it's a very one- or two-dimensional thinker, amazingly smart in some specific areas. But that doesn't make them good team players, and we eventually want to use these technologies as part of our teams and organizations. It requires a lot of integration work. Part of our cooperation with these systems is how we integrate them into our personal workflows, but also into the organization.
That's one of the biggest questions organizations are grappling with, because, to date, systems like ChatGPT have been very helpful in increasing personal productivity, but translating that into organizational impact is a little more difficult.
Ross: Yes, yes, the workflow. From the very start, when I was framing Humans plus AI, the first frame for me was around the humans-plus-AI workflow: working out what each is good at, and how that fits together in the system. So, to come back to making the human competition healthy, this comes back to the bigger frame: you've got humans, you've got AI, and in order to build symbiosis, you need to build both sides. Part of it is human skills and attitudes, and part of it is the design of the AI. So turning to the AI side for a while: ChatGPT, or the LLMs of today, have a particular structure. But how can we design AI, or what is its next phase, so that it is usefully competitive and cooperative with humans?
Mohammad: I think this is a very difficult thing to do, but we need to really work on the explainability of these systems. One of the major hurdles, particularly in generative AI systems, is the concept of data provenance: where does this piece of information come from? The power of these systems, the information power, if I want to boil it down, is really the synthesis. If you were looking for answers to some of the questions you've been asking Google, you had to go through pages upon pages to piece it together. These systems remove that step, and that is a very important strength for knowledge work. But there is a flip side: they need to tell us where they got these pieces of information. This speaks to the bigger problem of explainability, and we know the other side of explainability is opacity, an inherent problem of the self-learning and the high level of adaptability these systems enjoy. Otherwise it becomes difficult to use and integrate these systems seamlessly into our personal lives. You can think about all types of problems, such as accountability. We already see it in our education system: students bring in really interesting analysis, and when we ask where it comes from, it's hard to pin down. Data provenance means the genealogy of this data. In areas like medicine, it is really important to figure out what page this data was pulled from. Is it from Reddit, or is it from the Mayo Clinic, a reputable medical source?
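One minimal way to picture data provenance in code, using entirely hypothetical names (real systems use far richer lineage metadata), is to carry source information alongside every retrieved snippet, so a synthesized answer can report where each piece came from:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Snippet:
    text: str
    source: str        # e.g. "mayoclinic.org" vs "reddit.com"
    retrieved_at: str  # when it was fetched

@dataclass
class Answer:
    text: str
    provenance: List[Tuple[str, str]] = field(default_factory=list)

def synthesize(snippets: List[Snippet]) -> Answer:
    """Join snippets into one answer while preserving their lineage."""
    return Answer(
        text=" ".join(s.text for s in snippets),
        provenance=[(s.source, s.retrieved_at) for s in snippets],
    )

snips = [
    Snippet("Aspirin inhibits platelet aggregation.",
            "mayoclinic.org", "2024-05-01"),
    Snippet("Many people take it with food.",
            "reddit.com", "2024-05-02"),
]
ans = synthesize(snips)
# The answer can now disclose which claims trace back to a reputable
# medical source and which come from an informal forum.
for source, when in ans.provenance:
    print(source, when)
```

The point of the sketch is only the shape of the bookkeeping: synthesis keeps its power, but every claim remains traceable to the Reddit-versus-Mayo-Clinic distinction Mohammad raises.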
So again, this speaks to explainability: the system really needs to explain how certain decisions have been made. That is one of the major cooperative skills the system itself can present. In that article, we also talk about natural language processing, like the way we are talking together. I think these systems have made magnificent progress towards making communication natural. That was the biggest problem of the early systems, early meaning, I would say, before 2023: it was hard to query those systems because you weren't sure what to ask. Right now, that process of ideation is actually quite fruitful. We can use these systems to keep asking questions and ideating, and I think that's a very important part of the way these guys can augment our creative thinking.
Ross: So one of the most intriguing and interesting aspects of the article is this idea of competition and cooperation. Cooperation is pretty obvious: we have to cooperate to build something that is more than the sum of its parts. But competition can be either healthy or unhealthy. We can have healthy competition, where everyone is stretching, being their best, and enjoying the competition; there is also, of course, unhealthy competition, which can be destructive. So, in terms of the skills and attitudes of individuals, and also how we design the systems, how can we make this competition as healthy and empowering as possible? Because otherwise there is a risk of taking people into the attitude of racing against the machine: I'm competing, I'm losing, and it doesn't feel good.
Mohammad: That has been basically the arc of my argument in the other articles I've written. Recently, I argued against the Turing test. Some of the benchmarks in computer science are actually not very productive, because the Turing test is about whether the machine can mimic us, can imitate us. That is the idea of competition, and I don't think it's a very useful way; that part of the discourse makes people nervous. If you go and talk to doctors, sometimes the conversation that more technical folks hold with them is: come and train some of our systems, like radiologists annotating some of our data. Those people are smart. Doctors are smart and somewhat powerful; they figure out what the idea is here. Some of these terminologies come from computer science, like end-to-end machine learning, or the last-mile problem. End-to-end means we want to automate the whole process: you guys are helpful to train our algorithms. And you can see that this can go wrong in many different ways.
In my work, I've been very forceful in saying this is not organizationally feasible, because humans enjoy and present a lot of tacit knowledge, and that tacit knowledge is one of our major sources of strength and competitive advantage. So we need to do some fixing of the language we're using. The Turing test, I don't think, is very helpful, because the idea there is that we need to replicate humans, and that's not going to be a helpful thing; we are humans, right? But at the end of the day, there are skills that are becoming much more useful in the age of algorithms; instead, we can do things that are quite unique. I'm old enough to reflect on my own education path: we used to memorize a lot of things, until very powerful search engines came along and explicit knowledge became less important. Look at the generation of our parents and grandparents: sometimes they didn't have access to information. Now we've got some of the most powerful systems in the world to retrieve information, so why should I memorize things? AI systems, and even non-AI systems, are actually quite powerful at that. The same metaphor can be used here. Some of the things we've been teaching our students, some of the cognitive skills we've been equipping our students and ourselves with, I don't think are going to be very useful moving forward. Programming, as I talked about, and data science will be transformed. I cannot pinpoint specific skills, because they really differ by field; that's what we call domain knowledge, and domain knowledge will be transformed differently. But the idea here is that anything that is human-centered will be a source of synergy and a source of competitive advantage. I gave you a couple of examples; I think one of the most important is tacit knowledge: things that are path-dependent and require practice.
The definition of tacit knowledge, if I want to make it clear, is things that cannot be easily articulated in words or in writing, and that for that reason might still be external to AI systems, because of how they learn. They have learned some of our tacit knowledge: one of the most powerful ways these systems have trained themselves on things we've been doing, like image processing and writing, has been ingesting massive amounts of our digital exhaust, our digital data; things that we could have made explicit. Through that massive, brute-force training, however you put it, they have also figured out some of our tacit knowledge. But that's limited, because tacit knowledge sometimes cannot even be verbalized; it's a bodily experience. One of the best examples of tacit knowledge is how to ride a bike: I can give you some instructions, but you need to go through that experience as a human.
Ross: Yeah, I use surfing as the example of something where information is not the same thing as knowledge, the capability to act. But I think it’s absolutely right, this idea that, as you say, tacit knowledge is what is not made explicit, and the machines have only been trained on the explicit, so there’s this big gap there. These are not new ideas. J.C.R. Licklider wrote ‘Man-Computer Symbiosis’ in 1960. Of course, it was focused on intelligence augmentation. And I think we seem to have largely lost the plot along the way, in the sense of always just focusing on AI as beating humans. Hopefully now there is a bit more of a movement toward human-AI symbiosis, or complementarity, and so on. What are the most promising research directions? What do we need to be doing in the next few years to really push out the potential of humans and AI working together?
Mohammad: That is a difficult question. In the US, we say ‘that’s a very good question’ when we don’t have a very clear answer. I think we need to grapple with two basic questions that really guide us through AI research, the ‘could’ and the ‘should’, and they’re both very important as we figure out that symbiotic relationship. ‘Could’ focuses on: is this technologically or organizationally feasible? Because a lot of promises can be made. A lot of important progress can be made in a lab-type setting, in a controlled environment. But when you bring it to actual organizations, it’s different. One of the most difficult processes in an organization is communicating decision making. If your partner is not able to tell you how they made a decision, how are you going to convince other stakeholders that this is the path moving forward? Even if it is an optimized decision, you need to convince people, different stakeholders, and bring them on board when you make decisions, right?
AI systems are not there yet, so they require a very close collaboration. Some of the inherent technological problems need to be fixed, or at least alleviated to some extent. We need explainability engines, things like that. So that’s the ‘could’ question, the nexus of technological feasibility and organizational feasibility: can we do it? But there’s also a very important ‘should’ question. If we can do it, we should always ask: should we do it? Should we assign a certain type of decision making to AI systems because they’re very efficient at scaling decisions? A lot of sentiment, unfortunate sentiment, in the AI community, particularly when it comes to the business and corporate world, is very focused on efficiency goals, which means: how can we make the whole process cheaper and faster? And often a very simple consequence, intended or unintended, is reducing headcounts. We’ve been through this several times; this is not a new problem. We experienced this type of approach to information technology in some of the earlier understandings of business process reengineering, and we know that this is not going to work long term. This is a very short-sighted, short-term perspective. One thing I want to emphasize about the ‘should’ and ‘could’ questions, and this has been an undertone of a lot of my research: the real power of the AI systems we are seeing today, the interesting ones, lies in learning.
If you’re using them just to make processes efficient and kick people out, you’re missing the whole point, the strategic benefit of these systems, which is translating machine learning into human learning, mutual learning, and then organizational learning. Mutual learning is such a powerful normative ‘should’ concept that it really helps us with the implementation and integration of these systems in organizations; that is how we understand the true power of AI. And that brings us to the goal of effectiveness. You are not just making things efficient. There is a reason a lot of managers see AI through the lens of efficiency: it’s quantifiable. You can quantify the dollars you’ve saved, but that’s not necessarily quality. That’s not necessarily innovation. Innovation is when you can learn as an organization. I think a lot of approaches right now are very focused on just making things efficient, and that doesn’t really help us address these ‘could’ and ‘should’ questions.
Ross: That’s fantastic. And I think this frame around building mutual learning in the organization is very important and very powerful. So thank you so much for sharing your insights, and also for the fact that you’ve focused your energy and attention on what I believe is such an important topic. So thank you for your time, your insights, and all the work you’re doing.
Mohammad: I appreciate this conversation. Ross, thank you.
The post Mohammad Hossein Jarrahi on human-AI symbiosis, intertwined automation and augmentation, the race with the machine, and tacit knowledge (AC Ep62) appeared first on amplifyingcognition.
– Andrew Likierman
Sir Andrew Likierman is Professor and former Dean of the London Business School. Previous roles include Head of the UK Government Accountancy Service and Director of the Bank of England and Barclays Bank. He was knighted in 2001. His current research is on human judgment, with his new book Judgement at Work to be released in January 2025.
Wikipedia Profile: Sir Andrew Likierman
London Business School Profile: Sir Andrew Likierman
ResearchGate Profile: Sir Andrew Likierman
LinkedIn: Sir Andrew Likierman
Book: Judgement at Work: Making Better Choices
People
Books
Blink: The Power of Thinking Without Thinking by Malcolm Gladwell
Judgement at Work: Making Better Choices by Andrew Likierman
Ross Dawson: Andrew, it’s a delight to have you on the show.
Andrew Likierman: Ross, thank you very much for inviting me.
Ross: So you have had a long and illustrious career with all sorts of interests that you’ve dealt with over time, and you have spent a lot of time now thinking about judgments. How have you come to this point?
Andrew: Well, look, I’ve had the pleasure and privilege of working in commercial organizations, in public life and in academic life, and what I’ve seen wherever I’ve been is that judgment is a very, very important quality. And I was intrigued a few years ago to think about the questions: all right, so what is judgment? How do we know somebody’s got it? How can we improve our own? If it’s so important, then why aren’t we talking more about it? Why aren’t we including it more? So my work has been to try and pin down what judgment is and how we can use it, in the face of many people who’ve said, ‘Oh, you know, you can’t possibly do that. It’s sort of out there. We don’t know quite what it is.’ Well, I believe we do know what it is, and that helps, because then we can help people to improve it.
Ross: Well, I think it’s a very important quest, because some people have good judgment, others don’t, and there seems to be very little in really structured ways to be able to help improve that. So in a relatively recent Harvard Business Review article, and I believe your forthcoming book, you’ve laid out a framework for what are the key elements and how it is we can improve those. So can you share that in a nutshell?
Andrew: Of course, look, I won’t go into very much detail, but just in outline. The reason for having a framework is so that we can identify what it is we need to do to exercise good judgment. Because rather than just thinking vaguely, you know, am I exercising good judgment, and was that a good choice? The framework helps to identify the kind of things one ought to be looking at. And just to be completely clear, I’m not suggesting that you go through this in a mechanical way. What I’m suggesting is that identifying any element of this framework is better than nothing, and the more I believe one can go through the framework and adopt what it suggests, the better one’s chances of making a good choice. So what is it? It’s got six elements. The first one starts with what we know and our experience relevant to whatever it is we’re making a choice about. And I’m going to take an example of going on holiday. Let’s say we go to a place which is very familiar to us, and we’ve been there many years already, so we’ve got lots of knowledge and experience. We know what to expect, where the beach is, where the good restaurants are, and so on. If we’ve not been to this place before, it’s all exploration. We can do a lot of work beforehand, but actually we’ve got to make a lot of, often quite difficult choices, because we don’t know. We haven’t got that experience.
So the first thing in any choice is, what is the relevant knowledge and experience we’ve got? Then we go on to the question of awareness. When we enter any situation, we need to be aware of what’s going on. And again, taking the holiday analogy, if we go into one part of town and we think, ‘oh is this all right? Is this safe? Or is this not safe? I don’t know’. That’s the kind of thing one needs to be aware of, whereas if you’ve been there again many times before, you’re already aware of what’s going on. But that quality of awareness, which gives one the ability to think through what’s going on here, what’s going on with people, what’s going on in the room, and so on. That’s a very important part of judgment. Number three is the question of who and what we trust. And again, taking the holiday analogy, if we go into a new place, we might get some reviews. Now, the question is, are these reviews we can trust, or can’t we trust them? Are there people who are going to make money out of us going there, we can’t really trust them, are they really independent sources, we can trust them. Our ability to trust in the information we get and the people we talk to when we make a choice is the third element of judgment.
Number four is the feelings and beliefs that we have. We all have feelings and beliefs. We talk a lot about biases; we talk a lot about emotions. These color the way we make our choices, and we need to be aware of what they are. So, the question of emotion: again, if we look at holidays, if everybody wants to go to the seaside, then frankly, suggesting that we go and look around some museums won’t go down very well with members of the family. So we know about that. Feelings and beliefs are an important element of our choice. Number five, we come to actually making the choice. So there we’ve got the questions: have we weighed up all the alternatives? Do we know what they are? How good are we at weighing up alternatives? All the things we’ve already said go into making the choice, but the process itself is pretty important. Have you actually gone through the options that might be suitable? Finally, there’s the question of what happens as a result. Because if we make a decision as a result of a choice, the question is, can we deliver it? There’s no use saying we’d love to go to Botswana if we can’t afford to go to Botswana; then that wasn’t a very good choice, was it? So on the holiday front, in this business of the way we make our choices, actually carrying something through is part of making a good choice. So there you are. Those are the six elements.
Ross: Lovely. So I want to dig in and find lots of angles on that. Part of it is that there is a kind of process there; there are steps and stages at which you can be aware of how well you are taking particular approaches. But underlying this is also, I think, that you’re starting with experience. You need to have relevant experience. This goes back, to my mind, to Herbert Simon’s work in particular on pattern recognition. The brain is a wonderful pattern recognition device. If we expose ourselves to sufficient patterns, we develop an unconscious intuition around recognizing whether those patterns are similar or not. So are there ways in which we can better feed that experience and pattern recognition, which leads to good judgment in similar or different circumstances?
Andrew: Yes, there is, and again, on the business, for example, of awareness. We may have an immediate reaction to something which is based on that pattern recognition, but actually what we need to be aware of is, is that relevant in these circumstances? It’s no use saying I’ll always react to something in exactly the same way if we don’t have a sense of the context. We need that sense of context. Why is this the same as or different from anything we’ve done before? It’s no use having an automatic reaction to something, because we’re going to get some things wrong. Because life moves along. Stuff happens. There is change. So the argument here is, you can’t just simply rely on pattern recognition in order to make a good choice. You have to think about whether this is the same or different to what I’ve experienced before and what I’ve done before.
Ross: Which goes to the point that many decision makers, to whatever degree, have to grapple with: this is the logical approach, and this is my gut feel. And there are all sorts of gurus who point to trusting your gut or whatever else. But it is a dilemma, in a sense, where I have this feeling, and then I’m trying to use my logical mind to break it down as a different kind of problem. So are there ways in which we can effectively integrate these aspects into decisions?
Andrew: I believe there are. Now, as you’ve said, there are many authorities who have worked on this and done a lot of the really important work on the question of the role of intuition. But what struck me as an outsider coming to this field is how wide the range of authorities is in terms of the kinds of conclusions they draw. So you have some people, and Danny Kahneman is a notable exponent, who are deeply distrustful of the role of intuition. And you have others, let’s say Malcolm Gladwell in Blink, who are very, very fond of the idea that it really matters. In between those, there are many, many different sets of people who have worked on this. So I would contend, in terms of judgment, that what matters here is: have we done something before? Do we have the context? Do we have the experience and the knowledge on this particular thing? I would argue that if we’ve done something many times before, and we have a deep knowledge of it, then what we call our intuition is actually really valuable, because it’s all that accumulated knowledge and experience which comes immediately to the fore and says, ‘this is a fake. I know it’s a fake because I’ve looked at 5,000 fakes already, so I don’t have to think very deeply about it. I just know it is.’ Now, that’s called intuition. What’s also called intuition is someone who, faced with something which is completely new, goes with their gut, without any basis for going with their gut. They have no logic for this; they have no experience. It’s just a kind of feeling. I would argue that what differentiates these two is risk.
If you know something very well and you’ve got a lot of experience of it, then bluntly, your intuition probably carries very little risk. If you’ve never done it before, it’s really risky, frankly, to do something. And I’m sure we’ve all seen that: people who do something really stupid because their gut feel tells them to, but they’ve never had any experience of it and they don’t know the consequences. So I would argue that what matters here is how much experience and knowledge one has relevant to this. If one does, then I think intuition can be a great guide. If one doesn’t, it’s really risky. And if you want to take a real risk, well, go ahead then, but you might just want to wait, consult, think about it even for five seconds before you make a decision you’re going to regret. If it’s a complex one, you probably need to bring other people in, get other views, and really allow it to take a bit of time. We talked about the question of sleeping on something: if it’s an unfamiliar and highly risky situation, bluntly, sleeping on it is probably rather a good idea. Look, if you’ve got no option, if a child is running into the street ahead of you, you don’t wait to think, ‘do I know about this? Do I have experience?’ You don’t. You try to save the child straight away. So there you are. That’s my suggestion about how to make sense of intuition, as far as I’m concerned.
Ross: One of the key elements of your framework is awareness, but what strikes me is that, in fact, all of the elements come back to awareness, or self-awareness, metacognition essentially. It is about being aware of one’s biases, being aware of one’s degree of experience, being aware of the frameworks which we have. Taking that meta point, are there ways in which we can enhance or develop our self-awareness across those domains, in ways that will enhance our judgment? Because I think there are many who might read your article and understand these concepts, but not necessarily become more self-aware.
Andrew: No, and, look, most of us don’t go through a course on awareness, you know? I mean, we don’t think a lot about awareness. We just kind of assume we’ve got it. But what I’m suggesting is we ought to be aware of how aware we are. And there are lots of ways in which you can make yourself more aware. Just giving an example: in an organization, if one’s involved in some kind of annual appraisal, or something which is a regular review of what one does, that’s the kind of time when you can pick up whether you’re aware of what’s going on, and perhaps even ask about certain things, where one says, ‘do you feel, actually, that I’m aware enough of certain things?’
Training. One can go and get trained in this area, in one’s observation skills. There are lots of ways in which one can do it, and that may be part of other training. So, for example, if one goes on a course on relationships between groups, on dealing with people, awareness is an important part of that. And if one feels, ‘actually, I’m not always as aware as I should be of certain kinds of things’, you can perhaps seek out some courses that give you an insight into your own awareness. Having a coach or a mentor might be a very good idea, somebody who can tell you whether you are picking up the signals, because you can talk about your relationships with your colleagues and so on, and perhaps they will help you on the question of, ‘well, maybe you should be a little bit more aware of this and that’. Just to give a straightforward example: quite often, when I’m interviewing people, they do the talking. That’s the whole idea. Sometimes, when I’m being interviewed, the person interviewing me does all the talking.
Now, that strikes me as very interesting, and I don’t mind it. That’s absolutely fine if they want to talk. That’s great, but it seems to me they’re perhaps not as aware as they should be of the fact that they are doing an awful lot of talking, and maybe they should be giving a lecture rather than interviewing me, because that would probably be more sensible. That’s a trivial example, but there’s a lack of awareness even on things all of us, I’m sure, have issues with: are we secure when we go online? Are we about to be hacked by somebody, and so on? Now, that’s another kind of awareness. And there again, one can get training for something like that, which says these are the things you need to look out for if you want to be aware of people who are up to no good and trying to get your money away from you when you click on this link, you see what I mean? So there are many different ways one can do it.
Ross: Indeed. I recall earlier interviewing Tim O’Reilly on this podcast; he had taken some courses in animal tracking from indigenous Native Americans, and found that very useful in his looking for signals in the evolution of the Internet ecosystem.
Andrew: Well, absolutely. And I think it’s something which is actually very interesting, because we all sit, for example, in meetings, and the dynamics of what’s going on in a meeting is really interesting. And I sense often that people don’t pick up the kind of signals of what’s going on in the meeting quickly enough so that they can respond with a point of view to make sure they get what they want done. It’s not just a question of something of a very esoteric quality. It’s very down to earth, being aware of what’s going on around one.
Ross: Yeah, absolutely. The framing you’ve given the article and your work seems to be very much around individuals, so individual cognition, essentially, for better judgment. But we can also consider organizational or group cognition, echoing, for example, the work of Karl Weick. I’m interested in how well this, what seems to be a framing around individual judgment, can be mapped onto group or organizational cognition or judgment?
Andrew: There are limits, because obviously groups operate differently to individuals, and one can’t make the read-across in an exact way in terms of what I’m doing. But there is a parallel, for example, in the way in which groups themselves operate. And I’ve done quite a lot in discussing with people how you get better value from groups in terms of the way they operate and the way in which collective judgments are made. Now, there’s a whole literature on the question of how to get groups to operate better, but as far as I’m concerned, this matters very much in those cases where the group comes together to make a choice. With that collective choice, the question is: would it be better to have a group or an individual making the choice? I think that is a real issue. For people running organizations or parts of organizations, it may be, for example, that they are convening a group when a group is not necessarily the right thing to do, or, on the other hand, and perhaps more often, they make a choice themselves when actually they would have been better off bringing a group together. Here, though, one knows what the advantages and disadvantages of groups are. If you’ve got a group with a dominant individual and that individual carries everything, then maybe the group is not very functional. If, on the other hand, you’ve got a group that operates really well, and you bring together many diverse views, then that can be a great way of making a better collective judgment. So this hinges very much on the effective operation of groups: for example, the key role of the chair in making sure that all the voices are heard, that the composition of the group is a good composition for this particular choice, and so on. I’ve been on many boards in my time, and I’ve seen just how well a group can function and how badly it can function.
And so perhaps the awareness of that is in itself really, really important, particularly if you have the choice about whether you convene a group or not and how that group operates.
Ross: So, you recently wrote an article in the frame of AI and how AI is used in decisions, essentially making the case that for complex decisions, human judgment is vastly superior. Where are the boundaries? Of course, there are many decisions, some of them quite prosaic or domain-specific, where machine learning algorithms, for example, can exceed human performance. What are the domains or defining characteristics of the decisions where human judgment is dramatically superior?
Andrew: You mentioned that I said, as it were, that human beings were superior to machines. No, I didn’t say that. What I said was that human beings are different to machines, and that’s quite an important distinction, because what I’m arguing is that though machines are amazing, artificial intelligence is astonishing in what it can do. One cannot fail to be impressed by this. But what I’m arguing is that there is a distinction between a human being and a machine. So machines can do a lot of things much better than human beings. If I just take the area of medicine, what we know is that machines are very, very good at certain kinds of analysis, in terms of looking at ‘have you got a problem with a mole or something on your skin?’ To be quite blunt, if you have, then you’d be wise to get a machine looking at it, not a human being, because however wonderful human beings are, machines are fantastic at this.
On the other hand, if you come in with two or three things wrong with you, including that you’re not feeling very good about life, a machine is not going to pick that up. So what we’re saying here is that machines are very good at certain kinds of things that you can program: you can train on the data really well, you’ve got good data quality, and you can interpret it well. This is fantastic; this is what machines are amazing at. What machines cannot do are certain things that only human beings can do. And so what I wrote was, and forgive me if I give you a bit of a laundry list here, let me go through it. I’m covering a lot of ground here.
Okay, so I argue that machines don’t have consciousness, intentionality, a sense of context, meaning, conscience, ethics, or self-belief through aspiration or ambition, nor an ability to develop social bonds involving feeling and emotion, trust, loyalty and empathy. Machines may be terrific at lots of things, but they don’t have that. There’s more. They can’t anticipate spontaneity, idiosyncrasy, fallibility and contextual shifts; that’s the thing they can’t anticipate. They’re great at doing stuff, but that’s really tough for them. And they’re not good, although they are good in some contexts, at random interactions. We’re talking random here, fluidity and nuance; this is something AI is pretty good at in some contexts, but not in others. But they can’t think abstractly; by definition, they cannot think abstractly. They can do amazing things, but what they can’t cope with is ambiguity and incompleteness, including the relationship between correlation and causation. Okay, now that’s a big laundry list. You can see why I’m not suggesting that machines can’t do amazing things, but this is the province of the human being. That’s what makes us different, and that’s why, of course, I believe that judgment is going to be increasingly important in an age of AI, because that’s what human beings are going to do. They are not going to check proofs, they are not going to make cars, they are not going to do all sorts of stuff that AI is going to do much better than us; those are the things that are left for us to do.
Ross: Yes. We think very similarly about that. Just taking one of those points you mentioned, around context shifts, I think this is a really interesting one. Part of the thing is that, of course, AI is trained on past data, and the world is changing. So it is not able to map what data it has on the past onto how the world is changing, which is the role of leadership. And arguably, the world is moving faster and faster. Yet, in the same way, you talked before about judgment being based on relevant experience; however, humans are also in a shifting context, where they need to be able to draw on experience which may not be directly applicable to an emerging and shifting landscape.
Andrew: Well, absolutely. Even among human beings, there are clear differences in this respect. Some people get a context shift: they go into a different situation and they behave differently. And some people don’t: they go into a different situation and behave exactly the same as they behaved yesterday. They don’t understand that the context has changed. I think it was Heraclitus who said, a couple of thousand years ago, that no man steps into the same river twice, because it’s not the same man and it’s not the same river. The world has changed between now and the next time we do something. It may not change dramatically; it may hardly change at all. But for a lot of the business situations where we make difficult choices, that notion of ‘is this context the same’ is really important. Just take an example: you hire a consultant, the consultant comes in and gives you an answer which is clearly the same as the answer they’ve given to the last few clients. It has not much to do with what you do or what you want. Now, that’s what I mean by a failure to understand context shift. A machine finds it really difficult to understand context shifts, because context shifts are unbelievably subtle. That’s why so often you have to be in the room in order to understand how the context has shifted.
Ross: Which takes us to the point of human-AI complementarity. You mentioned that they’re different; absolutely, and they both have, we would hope, complementary capabilities. So what is the path to marrying, integrating, or putting together human and AI judgment, let’s call it, or capabilities, or decisions, in a way which will give us the best possible outcomes?
Andrew: I’m a natural optimist, and so, I mentioned feelings and beliefs, and I say one ought to declare one’s feelings and beliefs. So here I am being optimistic. I mentioned the question of medicine, and potentially the role of machines in doing a lot of mechanical things, leaving the doctor free to talk to the patient. In exactly the same way, well, not exactly the same way — I mentioned change of context, so I had better take my own advice here — in much the same way, I believe that machines can do a lot of things that human beings are currently doing, and do them much better, and that leaves human beings free to do much more value-added things: things that give satisfaction, things that actually provide the element of judgment. And in my own field, for example, of teaching, if I think about teaching at university, what matters here is the interaction between the human beings; that is really the very valuable bit of it.
You can get a lot of stuff bluntly to be done mechanically and online. The rote teaching does not have to be somebody standing up at the front of a classroom and talking to other people. That is pretty inefficient and not very satisfying. If the students can do a number of these things online, if they can have ways in which they are helped to learn, leaving them the interaction with the teacher, one which is question and answer and gives value added, it seems to me, everybody benefits. So there’s another example. In a legal firm, it might be that the machine does all the proof checking and does a lot of the grunt work, as it’s called, that juniors do. The machine can do all that, leaving the juniors actually to get the benefit of the advice from the seniors and to work with them.
Ross: So, role allocation, as opposed to integrated workflows. Or, taking that into, for example, board decision making, complex board decisions: the role of human judgment is clear in terms of making high-stakes, human-impact decisions in complex and highly ambiguous situations. But within that context, are there specific ways that AI can support, assist, or complement humans, leading to better decisions than humans alone?
Andrew: Well, again, yes. Just as an example, a lot of financial information is produced in a quite mechanical way by the finance function. It’s based on churning some data; usually it’s the same data that was produced last year, updated a bit, and it’s produced in a form that’s, bluntly, not very interesting. Now, just applying AI to that alone can give one benefit: one can think about the way in which the planning model has been constructed, and it can be a much more sophisticated result of what AI can offer. AI can produce documentation that’s, bluntly, more interesting than a lot of humans can; you can get lots of wonderful, whizzy stuff on the basis of AI. Even in that one single area, you’ve got a lot. Similarly on the marketing side and on the technology side, there’s a lot that AI can do to help us in our understanding of what’s going on.
This, of course, means that the individuals around the table have to get to know more about AI and what it can offer. And so that’s a kind of responsibility for everybody. There are lots of different ways in which AI, as it were, can come together for boards in that way.
Ross: So just to round out, you already gave us an optimistic vision for where things can go. What is the role of humans in what could be a dramatically different world of work in the coming years?
Andrew: Well, continuing my optimistic theme, human beings potentially have the opportunity to do more of the things that are actually interesting and enjoyable and challenging, and less of what is mechanical, as it were, in terms of their roles, and to focus on the areas where they do have comparative advantage, where they can exercise judgment. That is something which actually adds much more value to an organization, and much more value to an individual. That doesn’t mean to say that there won’t be issues, because, just as in the industrial revolution, hand-loom weavers lost their jobs as mechanical looms came in. This is not going to be a seamless transition, and therefore it’s very important that education provides the basis for people to use the technology, to be comfortable with the technology and so on, because otherwise, just as in the industrial revolution, you’ll get left behind. I think this is a challenge, and a real one, for people in terms of making sure that society as a whole can cope with the changes involved.
Ross: Absolutely. So is there a title and a release date for your forthcoming book?
Andrew: Well, thank you for allowing me to advertise it. It’s called ‘Judgement at Work: Making Better Choices’, and it’s coming out on the 23rd of January.
Ross: Oh, fabulous, right. Is there anywhere else people should go to find out more about your work or the book?
Andrew: If they want to read something before that, there’s the Harvard Business Review article you mentioned, which came out in the January-February 2020 issue; that’s the kind of basis. But I’ve done lots of applications of that, and the book contains a lot of them. If you’re not too impatient, perhaps you could wait until January.
Ross: Well, I’ll provide links in the show notes to all of those and more of your work. Thank you so much for your time and your insight. Andrew, greatly appreciate it.
Andrew: Thanks very much indeed.
The post Sir Andrew Likierman on six elements for improving judgement, increasing awareness, and the comparative advantages of humans over AI (AC Ep61) appeared first on amplifyingcognition.
– Sylvia Gallusser
Sylvia Gallusser is Founder and CEO of Silicon Humanism, a futures thinking and strategic foresight consultancy. Previous roles include a variety of strategic roles at Accenture, Head of Technology at Business France North America, General Manager at French Tech Hub, and Co-founder at big bang factory. She is also a frequent keynote speaker and author of speculative fiction.
Blog: Silicon Humanism
X: @siliconhumanism
LinkedIn: Sylvia Gallusser
LinkedIn (Company): Silicon Humanism
Ross Dawson: Sylvia, it’s wonderful to have you on the show.
Sylvia Gallusser: Hi, Ross! Delighted to be on the show. Thank you so much for having me.
Ross: So you delve into the future and help people do that. How do you help your clients or people you work with to think more effectively about this wonderful world of the future?
Sylvia: That’s a question I love to have an answer to, and I really hope we can always have more people enter the futures thinking field. I started actually working in technology and strategy for quite a long time, mostly with entrepreneurs at first; but coming from a multidisciplinary background, I really found it interesting how we can bring different disciplines to help people think about the future. There are really, I like to say, two different paths to arrive at futures thinking. There are very formal ones, where you would go academic about it and attend university programs. And there are tons of great programs, I’m sure you’ve heard about them, from the University of Houston to programs in Finland and at the University of Hawaii, and so on. So there are already a lot of really great programs.
But at the same time, what you see in the profession is that a lot of futurists are coming from more diverse backgrounds, having started a career in other industries, and I like to talk about it as a second choice career. And you see people coming from marketing, strategy, HR, sometimes also some artists, technologists, psychologists. So there’s really an interesting variety of professions that can lead you to think about the future. Because just, and that’s really the topic of your podcast here, it’s about amplifying cognition. So we really do believe that future thinking is the way to amplify the way we think about the future.
So, for example, the way I started, if you’re interested in me zooming in a bit on my own way of bringing people around me to think about the future: I worked as a strategy consultant for maybe 15 years, first with Accenture clients in France, then moving to the French embassy in the US and working more with entrepreneurs, and finally starting to work with students and a variety of individuals around the future. So I created my own company, which is called Silicon Humanism, and on top of having a more general strategy toolbox, I’m really happy to always include other tools like fiction, or popular fiction, for example, that can help us think about the future. I also love envisioning meditation, helping people develop their own mindset and extend their reasoning about the future. We also use a lot of gaming to help bring scenarios to life. But ultimately, what’s really important when I work with clients is to go from the envisioning to the action planning. So that’s why, for me, strategy is really a complement to the foresight toolbox that we have.
Ross: So there’s a lot there to dig into, and let me come back to multidisciplinarity. I agree that to be an effective futurist, you do need to bring together a wide variety of disciplines and exposures and experiences, as I do and many of our colleagues do. But I think the big part is that it’s not about being the futurist for others; it’s helping people to be their own futurist, to bring together their own thinking, and to expand how they think effectively about the future. So how could we add disciplines or bring disciplines together in our thinking? How do we put this multidisciplinarity into practice?
Sylvia: Yeah, that’s a wonderful aspect of it. And I like that you’re thinking about it and really focusing on it in your podcast around amplifying cognition, and what it is to be human these days. For me, we really are talking about different disciplines. I think the first foundation is the humanist foundation. It’s about anthropology and living together, so sociology, and history: really believing that to be a good futurist, you need to know about history, you need to be a good historian first. So whenever I work around a topic, let’s say, for example, the future of work, or you’ve been talking about the future of food recently, it’s really about understanding what happened throughout the millennia. How did it start? What makes us human in that aspect? What works in cycles, what is constant, kind of perennial, and what is evolving? And I think when we have these really giant cycles and trends, then we have a really strong foundation. So I would usually start with social science as the first basis of the system.
And then on top of that, we do a lot of scanning: scanning the environment, and scanning for what we call signals of the future. Now that you know what the invariants are, the kind of landscape, what do you see changing? This is really like a radar you have for understanding what is changing in every direction. And here, you’re familiar with this, but we call it the STEEPLE approach. It’s about seeing what is changing in the social field, the technological field, and the environmental, economic, political, legal, and even ethical fields. I also like to add the behavioral field. And I think it’s really interesting to add those signals to see what is emergent. So we are talking about the forces of the future: not what is already there, except in small pieces, but what is going to be. And there’s not one main scenario. I also like to say we’re not forecasting, we’re not predicting the future; we’re trying to imagine possible scenarios that are relevant. It’s not about being flimsy and imagining everything, but, based on the landscape and the new signals, what possible scenarios are we going to live in? And usually we really like to stretch the horizon here, thinking about more dystopian ones and more utopian ones. I’m a big fan of Jim Dator’s four alternative futures, which are about growth, constraint and discipline, transformation, and collapse.
So I think that phase is really important in the process, to put the different disciplines together into very vivid scenarios. And at this point, that’s usually when the artistry, the fiction, comes in. This is about giving foresight; I like to say it’s not just about foreseeing. It’s also about feeling, sensing. It’s about imagining the smells of the future, the sounds of the future. So usually, at this step, what I like to do, to bring that together into something people can really project themselves into, is foresight meditation and envisioning: trying to really put yourself into that state of imagining how you could live in such possible scenarios.
And finally, because I talked about strategy: for me, strategy is really the end of the process. Now that you can really project yourself or your business into the future of an industry, okay, what does it mean? How do you get there? What’s your action plan? And it moves you into action. It’s not about being passive; it’s really about being an active player in your future, an active builder of the future. So that’s why I talk about multidisciplinarity, but it’s not that everything comes together at once. There are these different layers and different steps or stages that together bring a full process of futures thinking and future building.
Ross: Fantastic. I want to dig into some of the details there. Part of it is, you talked about the radar, being able to scan out, and the signals that we see. One frame is, as you say, using a framework such as STEEPLE to break out the different categories and so on. But a lot of it, I think, is about sensitizing ourselves to signals so that we are more likely to notice the things that are relevant or important, or that point to things that might change in the future; that’s what futurists do. But how can we convey this as a capability or skill that others can learn and develop, so that they are able to see and to sense the signals that point to change?
Sylvia: It’s a very interesting thing with signals. It’s like raw material. It’s something that anybody can apprehend, and that’s what makes futures thinking something that really anybody can work with and develop as a personal skill. Because it’s about becoming more aware of what is going on around us, and that’s why I think it works really in tandem, in duo, with the first step, which is about always knowing more about the long-term landscape, and then being more aware of the variations. And this can go from analyzing the behaviors of people around you, like, what changed during the pandemic? Were people more polite, more civilized? Did we see new behaviors, new words? Studying popular culture is also a very interesting aspect, because if you see what is going on in the media, TV series, movies, books, you also sense a lot of what people are attracted to, what new changes are starting. When there’s this kind of enthusiasm for a new book, sometimes that means something. So how can you get more aware of this? It’s really an everyday practice, and I like to say two things: it’s a personal practice and it’s a collective practice. It’s something you can really train yourself to do all the time, just reading the news, being aware of what is around you, just having your sensors open to the world. And once again, it’s all senses. It’s about listening. It’s about observing people around you. It’s a different taste in the air. It’s really multi-sensory.
That’s why I say it’s also collective. The futurist community is a very active community. It’s not that big; it’s small, and very interconnected. And there are a lot of platforms for exchanging signals. They sometimes call it signal swarming or signal scanning; there are different names. But the idea is really that futurists love to exchange around that topic, to meet and say, ‘Hey, this week, what did you notice?’ And once again, the STEEPLE aspect is interesting, because when you’re on your own, coming maybe from one industry or one profession, maybe you have kind of a bias towards one field or another. Like, coming from technology at first, I would really focus on everything around new technology and so on. But someone who’s a psychologist might have a different opinion, and an economist might see things differently. So coming together as a collective community is really interesting in enhancing and amplifying the way you connect with those signals around you. And finally, on top of it being collective, what’s interesting when you want to bring a group, a population, a company, a corporation to work around futures thinking is to build the capability to do this. It’s very simple. It can start with just an Excel file; it doesn’t need to be anything fancy. You just bring people to come see what signals are and get them to understand: what’s the texture of it? What does it look like? What does it sound like? And they start to log their own signals. And then you already have a big base of signals of change in your organization, and that’s a great first way to enter the field of foresight.
Ross: One of the other things you were talking about was putting yourself in the scenario. And I suppose the first part of the practice is to be able to create a useful scenario, one that does help you to think about new things or envisage things that help shape your current action. But as individuals, what are the ways in which we can conceive of these scenarios and bring ourselves into them? I think you used the word meditation there, and I’d love to hear about what that practice is. How do we immerse ourselves in these useful future scenarios?
Sylvia: Absolutely. Once again, you know, it can be very personal and intimate, and it can be something more collective. So I’ll try to address both aspects, because I think they can work really well together. You can develop your own futures thinking practice as an everyday discipline, let’s say, and a few years ago I wrote an article about mental stretching exercises you can practice to work on that. It can go from dealing with different perspectives to trying to develop empathy: put yourself in the shoes of someone else and imagine a story. You know what? Learning new languages and learning about new cultures is also a great way to practice this perspective change and face things in different ways. Reading, listening, and learning about fiction, for me, has been an immense way to stretch myself to see a future that is possible and that is not necessarily dystopian.
I love to talk about science fiction, because we tend to think of science fiction as something very dystopian and very scary, and not necessarily a good way to start for people who are scared about the future. But I would say there is more and more very interesting science fiction now about creating a future world that is not necessarily negative, that can be really engaging, and that develops a plot with a narration where there are problems, but the negative aspect doesn’t have to be in the world-building. A story, to be interesting, always needs something of a dilemma, some complexity, a knot to it. But that can be in the interpersonal stories, not necessarily in the world-building around them. So I think science fiction and future fiction really offer us ways to think about the future.
So, for example, here’s the way we do it collectively, with groups, with those meditative exercises I was talking about. A really great way we’ve done it in the past was around the future of the home, because during the pandemic the home evolved dramatically, and not just the structure, but also the way we reorganize life within it. I like to talk about the structures and the intangibles that happen in the home. So what we would do, for example, in terms of envisioning meditations with a few groups, was really: you’re waking up in the future home you live in, maybe 10 years from now, 20 years from now. How do you wake up? What is the first trigger? Is it a wake-up call? Is it natural lighting? Do you still live in a bedroom? We really start with: what do you smell? What do you think? What do you feel? What does it sound like? So, five senses.
Meditation is a really effective way to create that shift, as I was saying. So these are different tools we would use to bring people into that state of the future, and then go through a day in the life. Okay, what do you do from your bed? Then do you go to breakfast? Do you go to your bathroom? What does the bathroom look like? Is it interactive? Do you live alone? Do you live with other people in a community? And now it starts raising so many questions that people naturally let their minds wander around the future home. That was a really great tool to get a sense of that new type of space that could exist, and of how they would like that home to be, because, once again, it is also about developing what would be our preferable futures, our favorite futures, and building them.
Ross: That’s fantastic. I think that, as you say, those five senses and the day in the life, putting yourself in that, is a very powerful way to get people to think about not just the broad shape of life in the world, but also some of the details. And it can put people in a very creative state. So you were mentioning this idea of science fiction, and how it can help us in so many ways, and you have been a long-time devotee of science fiction. But I think you said something very interesting there, in the sense that a storyline requires tension. It’s too easy to just say, ‘Okay, well, it’s a horrible world, and so there are all sorts of problems, and so we get a story out of that,’ as opposed to saying, ‘It’s the future and it’s different, but something happens, and there’s a storyline within that.’ So I’d love to hear what science fiction you would point to that has been interesting or evocative, or that points to interesting, rather than horrible, worlds in the future.
Sylvia: Yeah, there are definitely more, let’s say, dystopian ones than utopian ones that I usually come up with. A few months ago I published a best-of list of TV series around futures thinking. There’s one called Extrapolations, which is really interesting because it’s not about one point in time; it takes different points in time and looks at, let’s say, every decade or every couple of decades, what is going to happen. For example, the climate is kind of degrading, but at the same time you have people working in sustainability, and there are whale listeners in one of the episodes. It’s really interesting to focus on one perspective, and then the next episode focuses on another perspective. What I like in that show is really the intimate aspect. It’s not just the big picture, like the president, or something happening like an asteroid coming at us, or the big scenarios we’ve had these past 20 years. It’s more about intimate perspectives, the same way as in Black Mirror, where you’re faced with one technology at a time. It’s not like everything changed at the same time; it’s almost the same world in many aspects, but one technology is brought in, one new aspect of life is brought in, and you see the intimate, personal life of one person in reaction to that new invention. Another one I can think of, once again a TV series, is called Silo. It was a recent one on Apple TV+, about people living within a silo who haven’t seen the exterior world for a while. There’s a giant screen, and they are not even sure that what they see outside is the truth. So there are always these questions: what do they see through the glass? Is it the real world? Is it the way it has been destroyed?
And so there’s this thread that the outside world is very menacing, very threatening, and they don’t want people to get out of the silo. But at some point, what happens is what happens every time you put people in captivity: they want to go outside and see what real life is. I think that tension is really interesting, and it’s a great metaphor for so many things we see these days around truth management. How do we deal with the truth? How do we deal with the different screens we have in life? There are so many screens around us, not just technology, but really screens, such that we don’t know anymore what is real and what is false, and deepfakes are definitely a scary aspect of the future.
Ross: Yes, we’ll definitely put a link to your compilation of science fiction in the show notes. But moving on, you’ve raised this very interesting topic of truth, whose nature is changing today in all sorts of interesting ways. That’s something you’ve been working with a lot, I think. But I’m particularly interested in how we as individuals deal with that. How can we have better cognition to deal with a world where there might be more untruth than there was a while ago? What’s that path? What’s that journey?
Sylvia: Yeah, you know, that’s interesting. I guess, depending on the day, I can be more optimistic or less optimistic, your choice. I think I’m mostly optimistic, but in a year of elections in the United States, you definitely care a bit more about what is truth and what images of truth are conveyed to us. But I tend to have an optimistic view of how humankind has evolved. I think we are a really resilient species. Throughout the millennia, we’ve heard so many times that chaos is coming, and I feel that, even though there were awful moments, every time we found a way to grow bigger out of it. That’s probably a personal belief that kind of dictates how I think as a futurist.
And so when I think a new technology is coming and there’s a lot of fear around it, I tend to rely on my historical knowledge, once again, asking: how many times has this happened? Are there cycles? How did we react? Can we identify what could be a threat, a threat to democracy, for example? And that helps me identify what the signals are, what the stages are, and at which stage we are; for example, around maybe having a president who is not as good a role model as he could be. So I really rely on this.
The second thing, I think, is our force of adaptation. And I’ll take maybe more short-term examples, but look at how we reacted when we started having email systems or search engines. This was definitely different from what we had before. And if you go back to what the media were saying then and how people reacted, there were really strong reactions, like, this is going to be the end of communication and so on. And in the end, we learned to differentiate. We still communicate as individuals in real life, even if we also communicate in the virtual world.
So I truly believe that we are very social animals. And when I say social, it’s not just social media style; I think we like to be in the presence of each other, and the pandemic was an immense example of how people wanted to go around the rules to meet in person, because we need the human, lively presence of others, and not just the virtual. So that’s why I’m never too scared of new technologies, because I know that there’s a balance. We will still have some bubbles of disconnection. We like to disconnect. Sometimes we like to be just without a screen. I still like to read books, and I still believe that people have their own bubbles and their own ways of disconnecting. And we know a lot about mental health is about disconnecting.
So coming back to what we were saying, I think it’s really important to see how, as a species, we resist and are resilient against the invasion of too much technology, for example. We know we have been the ones bringing in technology, in comparison to other species, but every time a technology comes around, we find ways to regulate it, to put regulation in place, to put limits on it, to work around it. And there’s a lot, for example, around generative AI and generative assets and new content and deepfakes; it’s about understanding what is true and what is not. I think there are really different levels. I like to think about it as a set of regulations, or rules and behaviors.
First of all, I think the more we see those tools developing, the more work there has been around watermarking content, or tagging, or displaying where the source is from, to put a stamp on what has been generated or not. Thinking back, we thought about this a year and a half ago with the first examples, and I was working at Accenture then, so that was really a big topic that we had, with these new things coming on, generative AI and ChatGPT, and all the consequences for all professions. What happened then is that we were scared that those assets, that content, would be indistinguishable from real life; it was just going to get better and better and at some point be indistinguishable. But because we reacted to that fear, we worked on that fear to evolve, to adapt, and we were also able to put in place tools to distinguish more and more.
For example, social media platforms like Twitter/X or Facebook help us work out whether content has been generated or not. With Adobe, if you create an image with Firefly, there’s a watermarking system that records where it’s from. And more and more, sources need to be attached to the results you get from ChatGPT or any other answer engines. So that’s the layer that the profession, the technology, is adding. But then, talking about amplifying cognition, I also think we’re resilient and resistant as individuals. The more we are exposed to things, the more we create our own machine learning; it’s not just machines that do this machine learning thing, our brain is able to adapt too. So the more we see something that we are not sure is true, the more we will be able to see the small differences. We think in patterns; we already think in patterns. So the first time you see a generative asset, you will be surprised. You’ll probably get fooled a few times, but the more you get exposed to it, the more you will question it. And what’s even more interesting is that we can distrust content so much that there’s something called the Liar’s Dividend. Maybe you’ve heard about this: at some point you don’t even think that truth is truth anymore. You start to distrust things that are true. And sometimes we see that in the media. You hear about something that happened, and the first reaction people have is: this didn’t happen, this is AI-generated, or whatever.
Ross: Yeah, well, that’s certainly one of the big challenges today: as you say, that distrust of truth. So to round out: as a strategic thinker, as a futurist, as someone who helps people work with all that, what are two or three recommendations for how people can amplify their cognition and think better in a world of extraordinary change?
Sylvia: I love that. And you know, maybe I’ll be a bit counterintuitive, because maybe you’re thinking: increase skills, increase expertise, read more experts talking about it. I would go the other way around. I’d say, go towards arts and fiction. Continue to create. I think that’s a great way to not just be passive in reaction to the tools, but to continue to explore creative ways. And I think it’s really interesting when you see how, for example, new media campaigns are playing around with AI, not just using it, but humans creating around the ideas of AI. We’ve seen really great ads from ketchup brands and Nike and so on. So that’s really interesting. Continue to create, I would say; essentially, continue to not just be passive towards the world, but be part of creating it. And definitely, I’d say, use fiction: read fiction, consume fiction. I think that’s a great way for us to explore in safe ways. When you consume fiction, there’s this cathartic effect: you can envision things you’re not comfortable with, but you end up living through those scenarios in a nicely packaged way. So you usually end up having different thoughts about the future. It changes you, but I think it also makes you more adaptable to what could possibly happen.
Ross: Fantastic. Thank you so much for your time and your insights, Sylvia today.
Sylvia: Thank you so much. Ross. It’s a pleasure.
The post Sylvia Gallusser on signals of the future, vivid scenarios, awareness practices, and envisioning meditations (AC Ep60) appeared first on amplifyingcognition.
– Erica Orange
Erica Orange is a futurist, speaker, and author, and Executive Vice President and Chief Operating Officer of leading futurist consulting firm The Future Hunters. She has spoken at TEDx and keynoted over 250 conferences around the world, and been featured in news outlets including Wired, NPR, Time, Bloomberg, and CBS This Morning. Her book AI + The New Human Frontier: Reimagining the Future of Time, Trust + Truth is out in September 2024.
Website: www.ericaorange.com
LinkedIn: @ericaorange
YouTube: @EricaOrangeFuture
X: @ErOrange
Book: AI + The New Human Frontier: Reimagining the Future of Time, Trust + Truth
Ross Dawson: Erica, it’s a true delight to have you on the show.
Erica Orange: Ross, thank you so much for having me, I’m so happy to be here.
Ross: So you have been a futurist for a very long time, and I think it’s fair to say that you’ve also been a believer in humans all along the way.
Erica: Yes, I have to say I’ve been a believer in humans for far longer than I have been a futurist, but I have been doing this work, my goodness, for the better part of close to two decades at this point, really knowing that so much is operating really quickly, with obviously the biggest thing today being the pace of technological change. But when you strip back the layers, I’ve always come back to the one kind of central thesis and the one very central and core understanding that we are inextricably linked with all of these trends, whether it’s technological trends or sociocultural trends, we cannot really be extricated from that equation. My interest has always been in more of the psychological component to the future, right? I was a psychology major in college, and I never really knew exactly how that was going to serve me, and never in a million years did I think that it would be applied to this world of Futurism that I didn’t even know existed when I was 18 years old, but that thinking has really informed much of how I do what I do.
Ross: Yes, it’s always this aspect of ‘humans are inventors’. We create technologies of various kinds which change who we are. So this is a wonderful self-reinforcing loop, the classic idea that ‘we create our tools, and our tools create us’. And this cycle of growth.
Erica: Right? Everything is always in constant evolution. It’s just that the pace of evolution is very different depending on who or what it’s applied to. So at this moment in our history, technological evolution is outpacing human evolution, but the biggest question mark is: will we be able to catch up? Will we be able to double down on those things that make us uniquely human? Will we be able, even economically, and when it comes to the future of work, to reprioritize what those unique human skill sets are going to be? And basically, to put it not very poetically, will we be able to get our heads screwed on right, now and for the indeterminate future, so that we are not in a position where technology has passed us by? We actually have a very unique role to play, and we need to know how we can really compete and thrive and succeed in this world that is just full of so many unknowns.
Ross: Absolutely. I agree that these are questions we can’t know whether we’ll be able to get through, but I always say, ‘let’s start with the premise that we can’. And if so, how do we do it? What are the things that will allow us to be masters of, among other things, the tools we’ve created, and to make them boons for who we are, who we can be, who we can become?
Erica: That is such a great question. I think it comes down to something that I talk a lot about, which is really the difference between lifelong learning and lifelong forgetting. It sounds almost cliché nowadays to talk about lifelong learning. I always say, of course, it’s important to become a lifelong learner, right? We all have to become lifelong learners and acquire all of the new information that’s going to keep us relevant. But if we’re piling new information onto outdated thinking, it doesn’t serve us, so we also have to become more comfortable becoming lifelong forgetters.
I tell so many of my clients and audiences that it can be a very simple exercise of identifying one or two things. It could be a heuristic, it could be a value judgment, it could even be just a way that we approach our work. It doesn’t have to be anything more complex than that. What is it that we are holding on to that no longer serves us? Once we are able to get up the forgetting curve as quickly as we’re able to get up the learning curve, neurologically we can free up some of that space so that we are able to compete with some of these ever-evolving technologies. The other thing I would add is that one of the first things I learned in doing this work, about 20 years ago, was that the future was never about ‘or’; the future is about ‘and’. Again, such a simple word carries a lot of weight today, because in this world of hyperpolarization, the social media echo chamber, and tribalism, we’re so bifurcated that we tend to think of it as this future or that future, this reality or that reality. When the simplest of lessons goes back to the most basic of math, the Venn diagram: looking for those intersections in the middle and knowing that everything operates from an environment of ‘and’.
This goes back, Ross, to what we were talking about with humans and technology, right? When it comes to the future of AI, there’s this narrative out there of us versus them, and that is where a lot of the anxiety and the fear is coming from: that whole narrative of a robotic takeover. When we view it instead as something that is collaborative, symbiotic, and augmentative, which it really is, then it’s humans and technology. Those polar opposites always coexist and coevolve, just like progress and stagnation. It’s chaos and creativity, right? It’s imagination and inertia. It’s all of these things happening simultaneously. So to forget, and to view things in an ‘and’ way, are two of the ways we can start getting our heads around some of this.
Ross: Wow, I love that. We are very aligned. You know, that goes to my framing of ‘humans plus AI’, whereas others have framed it somewhat more confrontationally. The other thing that occurs to me is improv. In improvisation, it’s always ‘and’. Whatever you’re offered, it’s always ‘and’, as opposed to contradicting, or the ‘but’s or the ‘no’s and so on. The improvisational approach to life is always ‘all right, and…?’ What can we add?
Erica: Yep, that is exactly right. And again, I go back to my early days of doing this work, and the Yogi Berra quote of ‘when you come to a fork in the road, take it’ always comes up. And now the challenge is that the fork in the road, which used to branch in a couple of directions, is now going off in many, many directions, and trying to navigate this roadmap is like, oh my goodness, I feel like I’m almost kind of schizophrenic, because life in each moment, even each minute, is a constant arc of improvisation. We’re all, at each moment, just trying to figure it out and apply a little bit of control to the chaos.
Ross: Yes. Going back to the first point, this lifelong forgetting, which I totally love: I think this requires metacognition. What is it that I need to forget? And then how do I forget it? Let’s get a little tactical here. How do you work out what you need to forget, and how do you then forget it in order to make space for the new?
Erica: I would say, one of the best ways to even start forgetting, and I say this as the mother of a seven-year-old who is wise beyond his years: we know so much about corporate mentor programs and learning from institutional knowledge. Again, that is great. That is the learning component, but the reverse mentorship component goes into the forgetting part of this. And I tell a lot of people, whether it’s parents or anybody, it doesn’t really matter: don’t have your own kid be your reverse mentor, because they’re gonna have no patience for your questions or your learnings. Don’t have it be anyone internal to an organization or a business; don’t even have it be an intern, they’re already too old. It could be a neighbor, a niece, a nephew, whoever it is. Just be open to what it is that you hear. It could be what platform they’re using, what gaming system they’re using, how they make friends, what they talk about, what they’re learning in school that they find interesting. Do they use ChatGPT for their homework? Do they like to play outside? I don’t know, a whole laundry list of questions.
They’re kind of like aliens from another planet. They can give us these completely unbiased perspectives on how the world operates, and the biggest thing is using what we learn, and being open to those learnings, to inform some of our own value judgments and ways of seeing the world, where it’s like, ‘oh, I never thought of it from that perspective’. It’s kind of like bringing back the lost art of nuance. We just don’t have those nuanced conversations anymore, because we operate in silos around viewpoints that support ours and that are comfortable, right? It’s like a security blanket in a world of uncertainties. So tapping into these different ways of thinking is one very practical way to achieve this.
Ross: Now we live in a world of AI on all fronts, and this is a particularly pointed illustration, I suppose, of the technologies that we have created, which are changing us, which are changing the world. We’re trying to move to a world where AI is a complement to us, where it supports us, where it makes us more. What does that journey look like?
Erica: I mean, there are so many facets to this, because it is an ‘and’ reality. In many ways human creativity is being unleashed through AI, and it is augmenting a lot of our cognitive abilities. At the same time, it is distorting many of our realities. It is leading to a world where it is getting harder and harder to differentiate between what is real, fake, true, and false, with the rise of deepfakes, and we know that AI is manipulating a lot of our electoral systems. It’s putting into question even the whole nature of democracy itself in some ways. We have this very dark rabbit hole on one end, and then we have a very exciting possibility on the other, kind of like the two DNA strands, right? We have to decouple a lot of the hype from the reality. We also have to know what the tools can do, what the tools can’t do, and what they are unable to do now but will be able to do in the future. That’s where we have to do away with the clichés, because it’s not just augmenting us and it’s not just taking our jobs. Just like a rubber band, right? There’s a lot in that middle part where there’s a lot of tension, but there’s also a lot of opportunity.
Ross: So how do we start? What’s the framing of it, given, as you say, the deep challenges? There are unquestionably many negative consequences, not just of the tools themselves, but in particular of how they are used or misused.
Erica: I would say it’s a time of learning, a time of experimentation, and a time of implementation, right? And each one of these has a different set of strategies. So for those at the top of an organization, they have to think about what strategic problem they’re looking to solve, what time-based efficiencies they want to create, whether they want to glean insights or crunch data in new ways. That’s the implementation piece. The experimentation and learning piece goes back to, I think, one of the biggest future-proof skill sets going into the future, which, again, sounds so simple but is still very complex: we all have to get better at asking the questions that are going to matter. It’s not just going to be the purview of the prompt engineer. We have to question the output of absolutely everything, knowing that a lot of the generative AI systems right now can be tremendous tools, but they are still just that. They are still a tool, and they are subject to their own flaws and their own biases. It’s no wonder that a lot of people are talking about them as these mysterious black boxes, or we hear the word ‘hallucinations’ thrown out there. While the output generated can be great, it is still deeply reliant on human oversight, human judgment, and even human decision making. Those are really, I think, three of the fundamental pillars. If we double down on those three things, then I think we will emerge as the winners in this new reality. But when we forget those three things, including ethics and ethical frameworks, then we will cede too much control to systems that still have not been ironed out yet.
Ross: Absolutely. Everything is very fast-moving at the moment, and that is probably going to continue to be the case. So drilling down on, say, judgment, which I think is really central to all of these, and which is part of decision making as well: we are in a fast-paced world, and it’s very hard to judge what is going on, including what the AI tools are doing. But that’s what our role is; obviously we need to be the reference point where judgments are made. So how do we develop, apply, and refine our ability to judge in this very fast-paced, discombobulating world?
Erica: It really underpins something that is so basic to human nature, which is also doubling down on human-to-human relationships and human-to-human trust. Don’t just have human judgment be something that you do on your own; have it be a more human-centric exercise that is much more collaborative, and honestly, it doesn’t even start when you are using AI for your job. These are things that have to be instilled at the earliest of ages, which is why the conversations from an educational perspective are not the right ones that we need to be having. We’re having all these conversations about cheating and using ChatGPT for research, and it’s like, whoa, time out: this is an educational system that was appropriate four economies ago, in an industrialized age. Are we actually preparing young minds to tackle a lot of these digital challenges, or are we just spitting out a whole bunch of what we would hope to be smart learners, when the future, I always say, is not about smart, it’s about intelligence? Right now, ‘artificial intelligence’ is actually ‘artificial smart’.
We need to think through the lens of judgment, decision making, and oversight, and how we can instill these values, even if it comes down to a new civic-space framework for the younger generations who are going to interface with these systems and who are going to build these systems. But it can’t just be a plug-and-play sort of thing for a 50-year-old who has never used these systems. We need to reverse engineer a lot of this so that we have the thinkers, the critical thinkers, the analytical thinkers, who are able to decipher the output of these systems. Think about it even from an organizational perspective: if they are bringing in all these young hires who don’t know how to view things through the lens of however judgment is defined, there are reputational, industry, and enterprise risks that could be created, and second- and third-order risks, from putting in people who don’t understand how these technologies can play out in unforeseen ways.
Ross: You have a book coming out?
Erica: I do, Ross. I do.
Ross: Which is rather on point for our conversation. So tell us about the book. Tell us what you say in the book in a few words.
Erica: Well, ‘in a few words’ has never been my forte, but the book is called ‘AI + The New Human Frontier: Reimagining the Future of Time, Trust + Truth’, and a lot of the book is based on that central thesis that we’ve already talked about, right? How do we double down on those things that make us uniquely human in an age of accelerating AI? The subtitle’s ‘time, trust and truth’ are three of the really critical components here. Time, because things are happening at such an exponential rate, right? How do we even get our thinking aligned with something that could be outdated a month from now or even a few weeks from now? And then trust and truth go back to what I said earlier in our conversation: in a world where much of our reality is being manipulated, the bigger question that AI poses is not even about real, false, true, fake. The bigger thing is, how do we, in a world of AI, prove any of these things and these realities to be true? So it takes us down a very important rabbit hole, and then really brings about the clarion call, which is: how do we really refocus on imagination? How do we reimagine our own value? How do we reimagine what work is going to look like without any preconceived notions or any constraints of how we’ve done any of these things before, knowing that those frontiers are all going to shift and evolve?
Ross: So how do we support our ability to imagine and reimagine better than we did before?
Erica: It’s one of those things, just like play, or just like whimsy. A lot of these things have been coached out of us as we become adults, but it’s so core. It is so central to humans throughout time, right? Ancient civilizations wouldn’t have created unbelievable technologies of their own making had they not imagined, had they not imagined even the universe and our connection to the stars, right? We all have the ability to imagine, and a lot of it comes down to just channeling our inner child, playing around with things in the physical world, playing around with things in the digital or virtual world, doubling down on those human connections, and just kind of getting out of our own way. This goes back to the thinking piece, so that it’s not just linear extrapolation based on what we know or where we think the future is going, but allowing ourselves to imagine new possibilities and new ways that we can really survive and thrive in a world that, many years from now, is going to look increasingly unfamiliar in some ways and just as familiar in others.
Ross: Yes, that reminds me of one of my favorite quotes from Keith Johnstone, the father of improvisational theater, who said that children are not undeveloped adults, but adults are atrophied children.
Erica: Yes, I love that. Was it George Bernard Shaw who said we don’t stop playing because we grow old, we grow old because we stop playing?
Ross: Yes, yes.
Erica: So yeah, same sort of thing, right? It’s about going back and tapping into the wisdom and the empathy and the connection. Again, we hear so much about how AI will augment certain things, but it really is about that intersection of imagination and the biggest thing that can galvanize all of us, which is hope, right? A lot of these things just seem very scary and outside of our control, but these things are very much in our control. And it’s not to be a kind of rah-rah cheerleader for humanity, but I do believe in humanity’s ability to catapult ourselves into a new age and a new way of being without those constraints of the past, because we know that we can’t apply old and outdated thinking to new problems or, ultimately, new solutions. It has to be unfettered, and it has to be really rooted in imagination.
Ross: I love that. The positive potential is absolutely there, but it still remains for us to take it.
Erica: One of my chapters comes with a disclaimer: things are about to get a little dark in these following chapters. Let’s ride it out like the roller coaster it is, and as we come out the other end, let’s really talk about what those possibilities are. Let’s talk about bringing back the lost art of storytelling and telling stories about the future, right? Hollywood is depicting a lot of the stories about the future in this dark, post-apocalyptic or even superhero way. Where are the stories of positive imagination, the H. G. Wells and the Asimovs, the great science fiction writers of the past? That was pure imagination, and we don’t really think about how we can apply those really powerful stories to solve a lot of these existential issues.
Ross: So let’s round out with advice for two groups: individuals, and leaders of organizations. Today, what is it that individuals can and should do to chart their own course, for themselves and for their families, to prosper and contribute in this world where AI and technologies are shaping who we are in society?
Erica: It goes back to ‘and’. We have to be aware of what these tools are. We have to be aware of how they are evolving. We have to question our relationship with them. We even need to just have those conversations with the next generation about responsible use: issues of bullying, issues of deepfakes, right? We hear so much about the creation of a digital literacy framework, but what is it really? What really are we teaching that next generation? As I said, I have a seven-year-old son, and people are very surprised when I say that I am a futurist who studies technology but my son is very analog. That is deeply on purpose, because we know that a lot of these technologies are rewiring the brain. They are rewiring the brains of young people, and longitudinally we don’t quite know how any of that is going to play out. So all of this, in many ways, is putting a new generation into a petri dish. You can say that has happened in the past, but it really is happening more than ever, with massively multiplayer online games, with virtual reality, with constant connectivity, with putting kids in front of iPads from the time they can basically see, and we don’t really question what it is doing from an attention perspective or a learning perspective. So we also need to have more of those conversations.
If young people are accessing AI and ChatGPT, what is it doing to critical thinking? How is it changing their neural wiring, knowing that the earliest part of Gen Z is the first generation in history to have different neural wiring than the preceding generation? There aren’t really enough conversations yet about that, and I think more families need to view these technologies less as band-aids and really think about what the appropriate use is, and how we can also cultivate human-to-human relationships given that we have all of these tools. Now, the other thing I would add is that it’s a different conversation for business leaders, right? This goes back to something I talk to a lot of clients about, which is the difference between vision and strategy. With strategies now, the time horizon is so shortened, so don’t have one fixed set of strategies when it comes to AI, to digital, to talent management, to anything today. Be nimble, flexible, adjustable, and adaptable in those strategies, and have a different set for different timelines. But your north star has to be your vision, right? What you stand for, who you are, what you represent, what matters to you as a company and a brand. Have that be so clear, and so clear in its articulation, that it trickles down through the institution or organization, so that everyone knows the strategies are in service of that vision. Not enough organizations, I think, are really going back to the drawing board to ask: what is my vision in an age of AI, and how can I use these tools in service of that, versus in service of my strategies?
Ross: On that point, I absolutely agree. We need to create and use our vision, and sometimes re-form that vision. But part of that is: what are the roles of people in that future organization? Technology has always changed the nature of work, and it’s continuing to do so at an increasing pace. As leaders look to the vision and the future of their organizations, I don’t think we have the answers now, but what’s the journey to imagining the future role of people within that, and how they are complemented by AI?
Erica: Part of the role of imagination in this is in the vocabulary. Because what is work? Work is such an open book. When people talk about the future of work, what does that even mean? How is that even defined anymore, in a world where one thing is defining all of this, and that is a sense of boundarylessness, right? Time and space have different definitions. When people talk about the evolving workplace, ‘workplace’ also is a word that has no meaning anymore, because it’s all about the workspace. So part of that reimagining is in the vocabulary that we use to even define these things, and we haven’t even begun to really get our heads around what a workspace is. So many leaders are still struggling with words that have been out there for the last 15 years: distributed, virtual, flexible, and hybridized work. They’re like, ‘oh my goodness, how do I get my head around this?’, when we’ve always known it’s not one-size-fits-all. Part of imagination is just a blank slate, a completely blank canvas. But we tend to think of imagination as taking all of those preconceived notions and kind of rejiggering them, when we should just do away with the words that don’t work and create new ones to describe completely new ways of tapping into talent, whether that talent is carbon-based, as in humans, or non-carbon-based, as in AI systems.
Ross: Let’s just take that one step further. What’s the process of bringing together those AIs and those humans? What does that look like? Of course, it’ll be different across many organizations, but what might it look like? Let’s paint a positive vision.
Erica: It is going to be different for every single organization, every single functional capacity, every single individual, every single geography, and every single generation. When we say there’s no one-size-fits-all, that is exactly it, and that is why, again, it’s in service of a vision. AI might be a useful tool for accounts payable and accounts receivable, because it helps streamline the payment process. It could be deeply helpful for someone in a research capacity, because they’re able to glean insights in new ways. It could be helpful for a doctor in a completely different capacity, because it could be used as a tool to diagnose. Whatever problem you are looking to solve, there is a facet of AI that can come in and help streamline that process. But ultimately, it comes down to the one thing that is the biggest value proposition in our economy, which is time. What are you trying to solve for from a time-based perspective? And that is completely dependent on not just one set of variables, but dozens upon dozens of different variables.
Ross: Just to round out, give us a piece of wisdom, tell us what we should do, what’s a big insight from all of this that we should take away.
Erica: Well, there’s no silver bullet, right? I wish there were, and I wish there was one thing that could just solve all of the world’s problems and ameliorate every concern. But I always go back to one of my favorite adages, which is ‘a diamond is merely a lump of coal that did well under pressure’. I love that, and I end so many of my presentations that way, just because it’s human nature to view a problem, a change, an anything, right, as just this lump of coal that we really don’t know what to do with. We might try to hammer away at it, but we know that with time and pressure and compression, right, it can turn into that diamond. And what is that diamond? That diamond is the future-proofed opportunity, but it’s something that doesn’t come from pure innovation alone, right? Coal doesn’t turn to diamonds because of innovation. Coal turns to diamonds because of imagination.
Ross: Fabulous. So where can people find out more about your work and your book?
Erica: Yeah, so my book is available for presale. It launches on September 18, so it is on Amazon: ‘AI + The New Human Frontier’. There’s lots more information on my website, ericaorange.com, and also my business website, thefuturehunters.com.
Ross: Thank you so much for your time and your insight. You are truly inspiring, Erica.
Erica: Well, this has been so fantastic, Ross, thank you so much for having me on. I really appreciate it.
The post Erica Orange on constant evolution, lifelong forgetting, robot symbiosis, and the power of imagination (AC Ep59) appeared first on amplifyingcognition.
– Natalia Bielczyk
Natalia Bielczyk is Founder & CEO of Ontology of Value, an R&D, EdTech, and consulting agency. She holds a PhD in Computational Neuroscience and is author of three books, including the forthcoming ‘The Longest Journey: The Ultimate Guide To Self-Navigation In the Job Market’.
Website: www.nataliabielczyk.com
LinkedIn: @nataliabielczyk
X: @nbielczyk_neuro
Facebook: @drnataliabielczyk
Instagram: @nataliabielczyk
Book: The Longest Journey: The Ultimate Guide To Self-Navigation in the Job Market
Ross Dawson: Natalia, it’s a delight to have you on the show.
Natalia Bielczyk: Thank you so much for your invitation. Ross, I’m honored to be here.
Ross: So we have a changing world of work, and people have been talking about the future of work for quite a few years, and I think we’re already well into the future of work, but it’s changing fast. I’d love to start off by just getting your high level perspective on what are the things that we should be looking to in shaping a better future of work?
Natalia: Absolutely. Actually, ever since we faced the COVID-19 pandemic, I have a feeling that black swan events have gotten denser and denser, so it’s really hard to tell. I can say, from the perspective of a neuroscientist, that research on the potential future of work is so much more challenging than neuroscientific research, because we cannot really foretell, in the long run, how these incoming black swan events, which by definition we cannot predict, will shape the future of work. Each one of them seems not necessarily to change the future of work, but rather to accelerate progress. So the COVID-19 pandemic didn’t qualitatively change the job market, but it sped up the processes that were already going on by 10 years. And then I believe that the premiere of ChatGPT was yet another such event. OpenAI was the first big tech company bold enough to actually release top-tier software to the public, and that prompted others to come to the scene. That was, again, just speeding up a process that was already going on; most of these models had already been in development for many years prior to the premiere of ChatGPT. It seems like one player came to the scene, others followed, and now we have almost an arms race, and that’s fundamentally changing the job market. So we don’t know what comes next. Maybe the US presidential elections will change the scene. Maybe. We cannot really tell what will happen with respect to global events and groundbreaking points in technology worldwide in the next two, three, five years. We can make some educated guesses about the future, and in this episode I’ll share some of mine. Obviously, they’re only guesses, but I hope they’re useful as well.
Ross: Well, I think it’s also not so much about guessing. I mean, that’s part of being a futurist: you don’t try to predict, because we don’t know. It’s really about asking, ‘what is it we can do that can shape a better future?’ There are all these forks in the road and uncertainties, and all sorts of extraordinary things will happen that we can’t predict. I think a lot of it is about saying, ‘well, if we want to create a better future of work, what is it that we need to be doing today?’ That’s really the heart of the question.
Natalia: Right. There are a few things that we should be doing as soon as possible. First of all, I think education is always the answer. Let me elaborate on this. At this moment, we live in the world of so-called BANI, which is an abbreviation for brittle, anxious, nonlinear, and incomprehensible. It’s a new concept that has been floating around for the past two or three years, as opposed to the previous concept, the world of VUCA. Now that the world has become even harder to comprehend, BANI as a concept is the new big thing in the future of work area.
Basically, what we need as individuals is better filtering mechanisms. I think it’s really hard to tell the difference between events that we should really consider important to us and events that only seem relevant. And I have to say that I learn every day, because there are so many global events, like the Olympic Games or the elections in the US. I’m Polish, I consider myself European, and I often visit the US, but I’m still a European citizen, and for me, it’s sometimes really hard to tell which global events, which news, and, also in technology, which new software is actually relevant to me. There has also been a lot of research with respect to utilizing AI for work, and a new study by the Upwork Research Institute, a massive study with 5,000 subjects ranging from executives through specialists to freelancers, showed that 77% of them actually declared that AI hampered their productivity and increased their workload.
If you don't know what is actually relevant and worth using, and you don't have the right filtering mechanisms, your personal world becomes even more incomprehensible, and your workload, and also your cognitive load, becomes more unbearable than before. So one thing we definitely should be doing is building better filtering mechanisms: becoming our own Zen masters, masters of creating what I would call sensory warmth. When we work, we have to be really selective about what kind of stimuli we allow into our world, and AI should help us filter rather than add cognitive burden. That's something I'm always trying to do for myself. For instance, one way of using AI in our favor while working is to browse with AI as a gatekeeper that screens you from all the dopamine shots: banners online, random advertisements, all the trash you are bombarded with every single day. For me personally, browsing the web through GPT or other LLMs is beneficial in that way. The main benefit is not that it extracts knowledge structured to answer my specific question; it's more that it screens me from unnecessary stimuli. So that is something I can recommend: using LLMs for that reason.
That's one thing we can do, but there are so many others. For instance, I think multitasking today is often misunderstood. We always try to do too many things, but our mind has only one channel of consciousness, so we cannot really multitask. We are like computers with a single processor: they cannot complete multiple processes at a time, they can only switch between different processes quickly. We are the same, so multitasking in 2024 should mean always asking yourself which task will serve the highest number of your goals: choosing tasks that cater to multiple of your needs or fulfill multiple of your goals at the same time, choosing the most effective tasks rather than trying to catch every single task on your to-do list, and delegating those that are just not as relevant as others. So it's all about prioritization and filtering.
Ross: Yes, these very much echo the themes of my book, Thriving On Overload, on a number of levels, certainly in terms of these ideas of how we filter out a massive amount of information so that we can make sense of it effectively. And as you suggest, recent neuroscience has demonstrated that people who think they are multitasking are, in fact, cognitively switching between tasks, with a significant impact on performance when they do. Coming back to organizational structures, which is something you've studied a lot: we can start to think about what happens in organizations if we get the productivity benefits of AI used well; in many tasks, AI can augment productivity. But it also goes to the human needs of what we want from our work, from working with others, and from being in an organization. So how do you see some of those playing out in organizations, with this potential for higher productivity, but also the fact that we still want to work with other people and not exclusively with digital interfaces?
Natalia: Absolutely. One interesting fact here is that there is a public perception of AI as an external threat in organizations; employees feel they have to compete with AI on productivity. Indeed, research shows that over 80% of today's knowledge workers feel to some extent threatened by AI flooding the job market. But what often happens is that introducing AI into organizational processes is a bottom-up initiative coming from employees. And one of the reasons, which is not broadly discussed but I think is interesting, is that AI does not complain. AI does not have bad days. There are so many employees out there declaring that they would rather work with GPT, because GPT is not mean. That's why one thing I always tell the people I work with is: you have to make sure that you're kinder than AI, because otherwise it will be not your bosses' but your colleagues' initiative to replace you with AI. That's actually quite a common problem, but not often discussed in public: work ethic. In 2024, every knowledge worker in every organization should be working on their work ethic primarily. The user experience of working with you should be impeccable. That's one important factor in staying relevant in the job market.
Ross: There are some really interesting points there. One is that Ethan, a researcher in California, has suggested that we need to be kind in order to teach the machines. We need to be our best selves, because the LLMs are basically just seeing what we do, and that becomes part of the training corpus, so I think that's a fair point. And there is also this thing of machines being, as you say, unfailingly pleasant, which is why, unfortunately, some people are preferring them to boyfriends and girlfriends: they never have arguments, they're always constructive. So there are some challenges there, but these point, in a way, to the potential that in a world of humans plus well-designed AI, interacting with the machines could help lead us to better behaviors, to being kinder and more thoughtful, with the machines guiding us and giving us a model, modeling in both ways to create better interactions.
Natalia: Absolutely. I believe that empathy is contagious, so I'm positive; I'm with you. About organizations of today: there was a very interesting Netflix documentary featuring Joy Buolamwini, entitled Coded Bias. It was all about biases created by AI, or machine learning to be precise, in recruitment and in other organization-wide processes. Ultimately, any LLM or machine learning algorithm is simply blind to the bias in its data. To explain briefly how it works: if 99 out of 100 of your software developers are men, then the algorithm learns that being a man, as a feature of a candidate, supports the hypothesis that this is a good candidate, because it better fits the profile of the employees already in the organization. There was also recent research on how AI affects recruitment, and it showed that, as a recruiter, you can overuse AI. Recruiters who only use AI to rank candidates, without supporting themselves with any other evidence, actually get worse results than those who use AI to pre-filter or to support their work but still use their own judgment and interviewing skills to spot talent. So being a centaur, or a cyborg if you will, is a better idea, especially as a gatekeeper to the organization, than using only AI.
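[Editor's note: Natalia's point, that a model trained on biased hiring history learns gender itself as a predictive feature, can be illustrated with a small sketch. Everything below is synthetic and hypothetical; it is not from the Coded Bias documentary or the research she cites, just a hand-rolled logistic regression on made-up data in which men were historically hired at a higher rate for the same skill.]

```python
# Toy illustration of bias absorption: synthetic historical hiring data in
# which men were favored, fed to a plain logistic regression. The model ends
# up with a positive weight on "is_male" even though gender says nothing
# about competence. All numbers here are invented for the demonstration.
import math
import random

random.seed(0)

# Synthetic records: (is_male, skill_score, was_hired).
# The historical decisions were biased: a flat bonus to men's hire probability.
data = []
for _ in range(1000):
    is_male = 1 if random.random() < 0.8 else 0
    skill = random.random()
    p_hire = 0.2 + 0.3 * skill + (0.4 if is_male else 0.0)
    data.append((is_male, skill, 1 if random.random() < p_hire else 0))

# Logistic regression trained by batch gradient descent (no libraries).
w_male, w_skill, b = 0.0, 0.0, 0.0
lr, n = 0.5, len(data)
for _ in range(2000):
    g_m = g_s = g_b = 0.0
    for m, s, y in data:
        p = 1.0 / (1.0 + math.exp(-(w_male * m + w_skill * s + b)))
        err = p - y  # gradient of log-loss w.r.t. the logit
        g_m += err * m
        g_s += err * s
        g_b += err
    w_male -= lr * g_m / n
    w_skill -= lr * g_s / n
    b -= lr * g_b / n

# The model has now learned that "being male" predicts "hired": the bias
# in the data, not a fact about candidates.
print(f"weight on is_male: {w_male:.2f}")
```

The learned `w_male` comes out clearly positive, which is exactly the mechanism she describes: the algorithm is not malicious, it is faithfully reproducing the skew in its training data.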
Ross: So what does that look like then? Let's say there is some use for AI in recruitment, because there is a massive number of applications that you need to make sense of, and there is value in getting perspectives on that. But at the same time, for many reasons, including the embedded bias, you certainly don't want to delegate fully to AI. So when you talk about the centaur or the cyborg recruiter, what does that look like? What role does AI play? What role does the human play? How do you bring those together to create a superior recruitment process?
Natalia: That's a very good question. I think there is no clear algorithm for this just yet. At this moment, what is mostly done is that AI is used to pre-screen or pre-filter candidates in the first step. That's especially necessary with today's ways of applying for positions, with very easy-to-use tools such as applying through LinkedIn, where sometimes just one click is needed. With such easy-apply procedures, there will obviously be many wannabes who just sit and click apply, apply, apply, just to meet their daily quota: okay, today I applied for 100 jobs, I can go to sleep now.
Ross: And everyone with a customized cover letter.
Natalia: I always tell people: don't do it. The probability of winning a position this way is just zero. You only create chaos, create trash data, and make it harder for everyone. Don't do it; be selective. Only apply for positions that feel genuinely exciting, where you feel the posting was written especially for you, for your skills, your experience, your personal and professional mission. Otherwise it makes no sense for anyone for you to apply. But it's always an optimization problem. The easier the application procedure is, the more inclusive it is, because everyone is free to apply; you don't need any professional help or professional tools, you can just click apply and you're in, at least theoretically. The downside is that the more inclusive the application process, the more noise in the process. I have to tell you that recruiters are one of the most depressed and burnt-out professional groups; during the pandemic, the rate of burnout among professional recruiters in the US was 96%. It's extremely hard for them to deal with the cognitive burden of so many applications coming in. That's why, in the pre-filtering stage, algorithms that simply compare the text of applications against the job offer are used to filter out these random applications. Sometimes, unfortunately, that's throwing the baby out with the bathwater, and you lose good applicants too. But for a recruiter at this stage, it's like they're…
Ross: It’s impossible to go through everything. So you do need some system at some point.
Natalia: Yes.
Ross: So, where are the points where humans augmented by AI can help make better decisions in a recruitment process? I mean, where it is a human-first decision, but AI assists it.
Natalia: At this moment, one area where AI is useful in recruitment is helping hiring managers design better recruitment questions that are more personalized per candidate. Hiring managers can better prepare for interviewing the top candidates; they can more easily analyze a candidate's professional story and adjust the questions to the candidate's professional past, and this way they learn more. So now, with very little additional work, they can get better conversational interviews and dig deeper into the candidate's motivation and personal story, rather than just asking a few standardized questions from a list and ticking the checkpoints. I think the quality of interviews has actually gone up due to AI. And vice versa: candidates can better prepare for interviews now. They can let an LLM screen the employer's website and learn more about its mission, and learn more specifically about the recruiter who is going to interview them. If you really wish to prepare and you're willing to spend the time, you have all the means to do so. It has become fairer, in a way, because even without the resources to hire a professional coach, or without much experience in the job market, using AI you can prepare much better and get a much better chance of getting in.
Ross: Yeah, one of the very nice use cases for LLMs is to practice job interviews, where you train the LLM to be very close to what you expect the interviewer to be and to be able to practice live in a fairly realistic situation.
Natalia: However, I would still advise having a chat, even with a friend. Practicing with GPT or other LLMs is a good idea, for sure, but I often see people who are perfectly prepared in terms of content, yet when it comes to human interaction, they don't look you in the eye. They look around, they look at the ceiling. They just haven't had that practice in front of another human, even on screen. So I would still encourage whoever is listening and in the process of applying for jobs to practice with another human, because it's the same as the famous presidential debate between Nixon and JFK, right? The way you present yourself on screen matters: even if your content is better, if you look sweaty, if you look like you don't know what's going on, you will eventually lose to those who look relaxed. It's really important to have that human interaction too, and to practice in front of another human.
Ross: Yeah. I'd love to dig into some of the ways in which you maximize your productivity, your capabilities, your potential, using AI or other tools. What are the things which you have found effective and useful?
Natalia: Right. One thing I have to say, first of all, is that the whole notion of productivity is not very well defined. I have to admit I'm guilty of spending way too much time at school. I completed three master's degrees, and then I went to graduate school, and I literally woke up when I was already in my 30s as a person who had never had any real-life experience outside of my school desk. When you're in an education system, you have a top-down way of assessing your productivity, right? You take tests, you take exams, and they tell you whether you did well or not. I had to wake up in my 30s, and now I was on my own, and I started a company.
What's your productivity if you just spend your day working, but you didn't sell anything today, you didn't close any deal, you just kept going with your daily tasks? How do you assess whether you're productive or not? So it was really hard for me to reconceptualize what productivity means, and I eventually came up with this idea: okay, I have my own to-do list. Nobody can do it for me. I have to set my own goals. And there are only two categories of days: days I win and days I lose. If I stay on track and I don't let the rest of the world hassle me out of my goals, for instance by dragging me to the TV screen because something happened (there are lots of big events these days that can easily pull you out of your bubble and drag you to the TV screen), that's a win. And if I get distracted, that's a loss. That's the only criterion I use right now.
As for my personal ways, well, I think it's a combination of state-of-the-art productivity methods, gems from the whole standard literature, even Tony Robbins; those old tricks are still really good for productivity. Actually, Tony Robbins' "Awaken the Giant Within" was the first book I ever read, because when I was three years old, my mom was obsessed with Tony Robbins and just read it to me without my consent. But that was effectively the first book I ever read. I also think that if you want results better than most people get (and as a matter of fact, most people never get to realize their professional dreams; they always have these what-ifs on their mind), you have to use other methods, you have to try something. That's my belief. And I always enjoy finding my own tricks, tricks that are original and that I have never, ever read about anywhere. For instance, when I was an undergrad student, I studied three majors at the same time, and I always had lots of exams in the exam period. But I learned that I could condition my brain: I wore black T-shirts to every exam, and I never wore black otherwise, so I conditioned my brain to feel like I was in fight mode once I dressed in black. I told my friends, if you see me in black, that means I'm going to an exam; don't talk to me, don't distract me, because I'm going to fight. And it really worked for me. Every single time I was wearing black, I was like, really, let's go, this is my day to shine, and I was laser-focused on the task. That was something I just found out for myself. Or, for instance, when I record educational materials, like courses, I always enjoy it more when I talk to somebody than when I just talk to the screen. So I put my favorite teddy bear behind my laptop and I talk to that little guy, right? I'm like, listen, this is how you do it.
And then I really feel like I have so much more energy talking to the teddy bear than talking to the wall. These are the things I especially enjoy: those little tricks, little life hacks, because first of all they're a sign of creativity, and they work. Little tricks make a lot of difference when you integrate them; there is a compound effect over a lifetime of these little everyday tricks. We are all like hackers, right? We all have our own ways to solve problems, and I always encourage people to find their own productivity hacks that work for them, that nobody else ever found. It's just your own way of solving your own problems.
Ross: Absolutely. Your idea of the teddy bear is similar to what software programmers do with the rubber duck: they put a rubber duck on their screen, and when something goes wrong, they explain it to the rubber duck to clarify their own thinking.
Natalia: Right. But my duck has to be fluffy, so I prefer teddy bears.
Ross: I think that’s very reasonable. Where can people go to find out more about your work and what you do?
Natalia: Well, I think the best way is to follow me on social media, because I put out a lot of content. Some of it is reviewing recent research on productivity, the future of work, and the current state of the job market. So follow me on LinkedIn, follow me on Twitter or X; that's a good idea. I also have a little YouTube channel, but it's kind of in stealth mode right now. I just think social media works best, because I also run a lot of initiatives, like organizing conferences and so on, and everything I communicate to the public goes there. So I think this is the best way. Thank you.
Ross: We'll share links in the show notes for the podcast. Thank you so much for your time and your insights today, Natalia.
Natalia: Thank you so much, Ross. The pleasure was on my side.
The post Natalia Bielczyk on work in a BANI world, becoming our own Zen masters, AI in recruitment, and contagious empathy (AC Ep58) appeared first on amplifyingcognition.
– Nikolas Badminton
Nikolas Badminton is the Chief Futurist of the Futurist Think Tank. He is a world-renowned futurist speaker, award-winning author, and executive advisor, with clients including Disney, Google, J.P. Morgan, Microsoft, NASA, and many other leading companies. He is author of Facing Our Futures and host of the Exponential Minds podcast.
Websites:
www.nikolasbadminton.com
www.futurist.com
LinkedIn: Futurist Nikolas Badminton
X: @nikolasfuturist
Book: Facing Our Futures: How foresight, futures design and strategy creates prosperity and growth
Ross Dawson: Nikolas, it’s awesome to have you on the show.
Nikolas Badminton: It’s really, really good to be here, Ross. It’s long overdue, I think.
Ross: Yes, indeed. So you are a futurist: a person who thinks about the future. You've got to make sense of the world, to think effectively, and to communicate that well. So, how do you amplify your ability to do that well?
Nikolas: So it's really interesting. If you go back about 12 years, I was making this movement from business strategy, data-driven work, creative work. I worked in the advertising industry, then in software platforms; I actually worked for an Australian company called freelancer.com for a while, and ran their ops in North America. As I leapt from that into the bigger, wider world of being a full-time futurist, there were a few things, and everything is sort of accretive.
The first thing I found was that running meetups and conferences was the lifeblood of really injecting new ideas and thoughts together, and creating a microcosm, an ecosystem of sharing ideas. About 11 years ago, I ran a conference called Cyborg Camp YVR in Vancouver, with Amber Case and my friend Carous O'Connell, whom I'd known for a very long time. It was about the intersection of humanity and technology, and about 140 people flew from all over the world to come to this little conference. Amber Case was a really big draw; she talked about cyborgs and cyborg anthropology. What was interesting was creating this drive of information. Having people like Chris Dempsey, "the most connected man in the world", involved in the overall organizing principles behind Cyborg Camp was really interesting; he was collecting all the information, putting it online in Evernote, and making it available. Blogs were coming out of this, we made it into Vice, and slowly we were capturing a lot of information.
And then I ran a Future Camp, which was an unconference on the future. I ran another conference called From Now, and then a series of events for about six years called Dark Futures, which some people called the Black Mirror of TED Talks. Needless to say, the first real accelerator of knowledge and intelligence augmentation for me was all the people I could tap into, and all the people who wanted to come on the journey. So community was the very beginning of it. Around that time, I started doing a lot more keynotes, so I had to do a ton of research. And what I've got is a network of people who work in large organizations, in R&D departments, in academia, and I could chat with them. That became a podcast I run called Exponential Minds; I do an occasional season every couple of years, and I bring in about 10 speakers to talk about various things.
Ross: Just to backtrack a little bit: this idea of communities, conference events, people coming together, smart people having conversations. How do you make useful, valuable conversations happen? Is there any art to it, as an individual or as a conference organizer, to make these places where there truly is collective intelligence and knowledge sharing?
Nikolas: Well, the way I used to run conferences, I kept them specifically small, so there would be either 50 or 100 people. I'd ensure that there was no real sponsor. Occasionally, if someone wanted to give me some beer, that's cool; if, say, Microsoft wanted to give me a room, or a set of rooms, to run my conference in, that's cool as well. But there was never any pay-to-play or anything like that. So the first guiding principle was to make it a non-commercial enterprise. I never made money, and that gave me the opportunity to really open the doors and invite people for free, from my community and across North America, to be in the room and bring specific ideas and content. I would actually spend time working with them to prime them and say: I love what you do, I love what you talk about; here, would you kick off a discussion, would you do a presentation?
So what I found was that was a really good way of getting things going, and it wasn't me as the leader of the conference. In a room of about 50 people, there'd be 15 to 20 people really leading everyone into these larger discussions as well. So it was about tapping into the subject matter experts to lead us all into a brand new world, and making time, so it wasn't just keynotes; there were open sessions and open discussions as well.
Ross: Fantastic. I can really see that creates the conditions for more useful conversations than the usual format of signing up and sitting in a dark room with someone on stage all the time.
Nikolas: Yeah, exactly. And I used to capture a lot of the information from my meetups and share it on my original website, nikolasbadminton.com, so I really tried to bring it together. And I wasn't doing this work to make money, per se, or to have a really successful business; I was doing it because I loved it, and people could tell that. The people who would work with me and support me were in it for the game, and everyone was trying to raise their vibration, in a way, their cognitive vibration, and step forward. Then it changed tack a little bit when I started getting booked for keynotes. At one conference, I had a microchip implanted in my left hand by Amal Graafstra from Dangerous Things. That hit the headlines in Canada, and I as a futurist hit the stage a little bit, and ended up with something like 60 keynotes the next year. That changed the whole trajectory: I moved away from running conferences into being a featured person at conferences, and that changed how I thought about research and really trying to be efficient in doing so.
Ross: Well, while we're talking about research: how do you amplify yourself in digging deep? That's one of the things behind my book, Thriving on Overload: there's an infinite amount of information, and you have to make sense of it. As a futurist, I have a claim to be good at that, because, as you do, I research just about any topic under the sun, and suddenly you've got to tell people who've lived their lives in an industry things they don't know, or give them new perspectives on it. So there's plenty of research and thinking to be done.
Nikolas: Yeah. I'm not an academic, but I came from a background in Applied Psychology and Computing, which covered a lot of different technologies, and I was on the internet early doors, like '93. I was building artificial neural networks and doing grammatical inference and recognition linguistics, a whole bunch of things. So I was already primed to do research into more esoteric technologies, ways of thinking, and philosophies about the technological trajectory forward. More than anything else, it was having access to the internet, and looking at thinkers like Kevin Kelly and Bruce Sterling, diving into people like Jaron Lanier and Douglas Rushkoff in the 90s, his book Cyberia.
Ross: I love Cyberia. That was very influential to me.
Nikolas: A hugely influential book for me, still to this day. And Terence McKenna and everyone, right, and Jaron Lanier and VR and DMT, all sorts of stuff. It took me into that counterculture pocket. So I've always looked at the edges and the counterculture for where things were really happening. Biohacking was of interest to me; that was definitely counterculture, like the early days of artificial intelligence.
No one was giving these guys money. They were just the weirdos in labs trying to create artificial life in digital form, to see how it would metastasize and live within virtual environments, early doors, like Second Life and things like that. All of this was just weird, strange, edge technology. And what's interesting is that the events I used to do were all about that edge. We've almost lost the edge now; everything's been sucked into this homogeneous wood chipper of technological opinion, and I think Silicon Valley is part of that mix as well. But I still work really hard to scan the signals and identify different trends by linking different things together in ways that maybe people don't expect, things that relate and come together and have an effect on the world.
Ross: That's a really interesting point: the idea of synthesis. We're all exposed to similar information, and if you dig more, you can uncover different information, but it's pulling it together in novel and useful ways that counts. So what is it that supports your ability to do that synthesis, this ability to stand on stage and say things that people find genuinely interesting because they're different?
Nikolas: I do a lot of research with every keynote, for a ton of clients. On the client side, I go into the industry, I call people in the industry, I read a ton of academic research behind the industry, stuff on the academic edge as well as what's in the mainstream and what's being done, and also those edge players. When I start to move forward and create some new thoughts, then I can start to play around with scenarios. And this is what's become really interesting to me. I know you talk a lot about the augmentation of capability through the use of things like generative AI, and this is something I've been playing with quite a lot, not only in the generation of textual content, but also exploration from a visual perspective, as a helping mechanism to take us in whole new directions as well.
In my work, it goes signals to trends to scenarios to stories, and I've really been trying to push the boundaries of what scenario exploration is with platforms like ChatGPT and Claude and Gemini, seeing what we can do to look at positive and dystopian scenarios, which was obviously part of the work I was doing in Facing Our Futures over the last couple of years. Zero Gen AI help went into that book, and actually very little Gen AI help is going to be in my next book, because contractually you're not allowed to do those things.
So what we can do is start to explore the mirror; that's why I call these Gen AI systems a mirror. Pose it a question, pose it some scenarios, and see what comes out of it. Generally, what I find is, say I'm talking about energy and ecological ecosystems, I'll pose it a question: what if renewable energy is pushed to the side, green initiatives are canceled, and we go full tilt into a maximalist fossil fuel society? In preparation for this chat, I went into that, to delve even deeper into the mechanisms behind it. And it's interesting: you get this mirror of, oh yeah, I kind of expected those answers; okay, let's push that out to 2050. It's kind of an accelerant. It's interesting when you start to think about the reference points of all these systems and where they're getting it from. Claude and ChatGPT actually feel like they've been drinking from the same fountain, and Gemini just seems to be a little bit freaky. It's super interesting; as I went into it, it was poetic and dystopic. For example, I asked it to describe a world in 2100 where environmentally friendly, non-carbon fuel solutions are discarded, and I went on and on in a prompt, very directional. The others would give me a list of things that happen, very cold; and I didn't ask it to write in the style of a particular publication or anything like that. And then Gemini just came out with this, and this is fabulous: "The year is 2100. The gamble on renewables failed spectacularly. Big Oil, whispering sweet nothings of energy independence and economic growth, won the hearts and minds of a desperate world."
The result: a planet drowning in its own fumes.' I kind of love that poetic nature, and Gemini, I think, is a bit of an unsung hero in the scheme of things. We're suddenly getting something interesting that starts to talk about geopolitical chess moves, or tech on steroids, violence and exodus, and it's like, whoa. Okay.
Ross: Is this just the basic free model?
Nikolas: Yeah. This is just the basic stuff, Ross, just playing around. I used the latest version of ChatGPT, and I'm not particularly excited about what I get. It's okay, and sure, it can help us write some plans and do this and that. As I said in an interview earlier today, if you're starting off in your career and you want a leg up to help you get better quicker, I really do think there's a lot of potential in these tools. But when you're advanced, nearly 30 years in the game and really firing on all cylinders in your chosen profession, I think we expect more from these systems. But it's interesting.
On the textual side, I get a little bit of juice from the squeeze. On the visual side, I'm finding I get a ton more value and a ton more opportunity to be provocative in the storytelling work that I do, using DALL-E or Midjourney or Stable Diffusion. Right now I'm working with a client. I've written three science fiction stories: about an airport, a concert, and a control center for the concert. I wrote the stories myself, though I did help myself with writer's block on the third story, because once you're 3,000 words in, it all gets a bit cross-eyed. I was sitting in, where was it, Minneapolis Airport at 7:30am trying to bash out the final story, and I'll be honest, I tapped into ChatGPT just to help me break the impasse. It wasn't a particularly exciting story either, and it didn't end up in a place that was as exciting as some of the others.
Ross: So these were for a client?
Nikolas: Yeah, this is for a client, some work that I'm just finishing up right now with a very close friend of mine called Ray Lebre. He's been working with Stable Diffusion and Midjourney and those models. He was the first person I knew who was building servers at home off the open source code, and he's been doing it for about two years. He's probably the most advanced person working in design fiction, speculative fiction, and establishing shots, really pushing the boundaries of what we can do with it. We've just been working with a client, and the client thinks they have absolute creative control. We have to remind them that there's just a bit of strangeness in generative AI, and if we embrace that strangeness, it's incredibly interesting visually and can take us into new realms. So it's interesting when we start to mix imagery with words: real human-driven narratives working with some really interesting visual reflectors.
Ross: This is an illustrated story, essentially. So you’ve got text and then you’re using images to illustrate the story.
Nikolas: Yeah, absolutely. So I've written these three stories of airports and concerts and all sorts of stuff, and we took moments from them and used those to start to build out the experience, right? It's a really interesting process, because clients actually get a very high quality, highly creative piece of work for a lot less money in a lot less time, which is great. But it has these side effects: you don't have 100% creative control, so how do you let go? I think there's something quite spiritual about that, in a way, in a business context. And what's really great is when they come back with all these questions about an image, and it's like, exactly, and now we're going to take that into social media. I can't say who the client is yet, but I'll let you know when we've released it. We can take that into social media, and the conversation will be bigger around some of the more unexpected aspects of the images and the stories we're telling, to create engagement in communities and really level up the thinking as a whole. It comes full circle back to the community, back to the accelerant within it. Whereas the accelerant before was the 20 experts at my conference, the accelerant now is the strangeness of the generated images and the quirkiness of the speculative fiction I've written.
Ross: It's a great illustration, and my next question was going to be exactly that: we can use these tools, but the point is, how do we think better and more? It's one thing to interact with ChatGPT and get a nice answer, and lots of people play with these tools, but that doesn't mean they're necessarily thinking better as a result. I don't care if GPT has a good answer; I want GPT's answer to make me have a better answer, or model, or way of thinking. I think you got to that with how you described those images and the way they can amplify our questioning. But pulling back to the text answers: you've asked these models about 2100 and what it might look like. So how do all of these tools and approaches flow through to your richer, broader, deeper thinking?
Nikolas: It's really interesting. It's not about the amount that we say; it's about making what we say really count. By that I mean it's actually really good to go wide and reasonably deep, and these tools let you do that, but they also help you reduce down to what you really want to focus on. For example, I'm writing a book called Hope Engineering right now. It's just started and will be out in a couple of years, and it explores philosophies of hope, their intersection with possibility thinking, and their intersection with futures exploration, or foresight. It hasn't really been framed or explored that way by anyone, and I'm starting to work out what it actually means in terms of a playbook for modern executives.
But what's really interesting is that when you start to ask questions about these areas, it takes you into this realm of 'this is what we know about them'. It identifies the missing links where you have to build bridges between ideas. It highlights where there is a lack of clarity, and it does highlight some congruences as well, though slightly less so. But it reduces me down: instead of writing a huge swathe about the philosophy of hope and the philosophies behind possibility thinking, and then taking us into futures thinking, I can get right down to the heart of what we need for these ideas to have some kind of impact, to be said in a completely new way. That reductionist approach is what we're all striving for, like the old line: 'I'm sorry I wrote you a long letter; I didn't have time to write a short one.' So it's interesting that we can use some of these tools to write the long one, so that we can then very quickly write the short version and really dial in. That's what I've found, Ross: I use Gen AI and LLMs for a small part of my practice, but when I do use them, I use them as a platform for hopping off into deeper exploration using other methods as well, because you can't always trust that all of those studies actually exist, right? But it's an excellent way to go, okay, we're focused in, that's really clarified my thoughts.
I can go further, and I think academically that's proving to be quite valuable. I've got a very close friend who holds a research chair in plant biology in Canada, and it's interesting: her students extensively use things like ChatGPT to help them write better reports, and she says, 'I'm all for it. They were terrible at writing reports before, and now at least they've got a semblance of an understanding of writing that makes sense. They still have to go back and edit, but their work is just so much better now.' So again, people more junior in their careers, who haven't necessarily struggled their way through working out how to write, can produce a lot of very useful information, maybe even at a PhD level, very, very quickly, right?
Ross: Yeah, as long as they're not over-delegating, and they're still learning and able to be self-sufficient.
Nikolas: A discussion for the realms of modern education, right?
Ross: So let's go beyond the field of AI. You mentioned you've delved into the cyborg world, and I'm sure there are other tools or approaches you use to augment yourself. So what else do you use to amplify your cognition?
Nikolas: So, one thing: I've dabbled in psychedelics in my past, in that search, but I don't really bother with that anymore; it doesn't work very well in the mix as a father and a productive human. But there are a couple of things I do that are super interesting.
Number one, I do something called Grof Breathwork. Have you heard of Grof Breathwork?
Ross: Yeah, I have.
Nikolas: So, for the listener, it's basically hyperventilating for three hours with very loud tribal drumming. It very much activates what I think is an empathetic center of consciousness, and you go very deep into a psychotropic state. It's incredibly powerful, and I've used it to really push through ego boundaries and a number of different things.
Another thing that really helps me with my work, in a more spiritual sense, which I think is incredibly important and a lot of businesses ignore, is something called psychological kinesiology, or PSYCH-K. I have someone guide me through belief system reprogramming and the healing of multi-generational trauma. It's about the things you learn in the scaffold of emotional belief systems as a child, from the age of zero to six or eight: finding the blockers, finding those belief systems, and clearing the pathways forward. Not only has that made me a better human, husband, and father, all the good stuff, it's actually made me a better futurist as well: open to bigger ideas, listening a little harder, a little quieter with my thoughts, and trying to navigate whole new areas. There's a community around me as well; we all support each other with this, and it's led by a very incredible person and her team. That's another side of things: augmenting the human condition by challenging what we've been given and what we've learned through our lives, and saying, hey, we can change this, and we can level ourselves up in completely new ways.
Ross: Yeah, one thing to pick out of what you were just saying is being quieter and listening more. If you want to amplify cognition, that's probably a pretty good place to start.
Nikolas: Yeah, and I'll be honest, as a keynote speaker and the guy that's run conferences and whatnot, it's probably been my greatest challenge all the way through my professional career. Back in the day, I was the guy that could fix things technologically, so I'd go and fix them. I didn't need anyone's permission; I knew best, or whatever. A lot of that comes with overconfidence and the inflation of the ego. Now this quieter part of my life has led to so many more opportunities, professionally and personally, that I think it's really important. And with futures work, there are more questions than there are answers, and there's more wisdom in the crowd than there is within yourself and the work you've done. But we steer the ship, right? We steer the conversations. All in all, I think that's really what we do as futurists.
Ross: Yeah, there is a real danger in being the speaker on stage, being invited to speak. People want to know what you have to say, but that's you telling them, and that's not necessarily the job. To get people to see the future better, or to be more inspired, whatever it may be, it's often less about telling and more about questioning, and being able to engender that questioning. I think a lot of speakers don't get there.
Nikolas: Yeah. Pre-pandemic, in late 2019, my style used to be very much 'here's a future that you must care about'. It was very lecturing: here it is, good luck, welcome to the future, goodbye. I tell this story in my book as well. In late 2019 I spoke to 800 farmers in Alberta, Canada. Albertans are very staunch, forthright, a bit like Texans: very driven, very passionate people. At the end, a guy stood up and said, 'I think most of what you just said is bullshit.' And I was like, huh, 800 people looking at me. Well, I said, 'Look, I think you're wrong.' We went through it, went back and forth, and I actually made friends with him afterwards. We ended up having a fairly decent email exchange on a number of areas around renewable energy and farming and a whole bunch of stuff, until he gave up the ghost and didn't want to talk anymore, which is cool. But I was ripped apart by this, like, oh my God, how the hell can I avoid that again? Because it's horrible, and people were attacking me on Twitter afterwards, and it was all bots and horrible stuff. Then I picked up a book as I was flying to New Orleans for a vacation with my wife. She was just pregnant, and I was like, okay, everything changes now, so I was having a moment anyway. It was a book by a guy called Rob Hopkins, a British community activist.
He wrote a book called From What Is to What If, and even just the first couple of chapters, talking about the power of curiosity, creativity, and imagination, changed things for me. Even asking a simple question like 'what if' doesn't close a conversation. If there's a non-believer, it starts a whole new realm of conversations. It's like: 'Well, I don't believe you.' 'Yeah, but what if? Let's explore that together.' That moment changed my entire trajectory. Like I said: a little quieter, listening a little more, and posing a question. What if this technology changes your industry? What if this societal change has a ripple effect and introduces new competitors into the market? What if, what if, what if?
That was incredibly powerful, and it's become a mantra of mine: shift your mindset from what is to what if. The idea is that imagination, anticipatory capabilities, and empathy can really level up and elevate an organization, especially one where people are told what to do, when to do it, and what hours to work. To liberate ourselves is really important. Again, it comes down to the executives and the leadership listening. I would say most of the consulting work I do is around helping executives inquire about a future and then inviting people in to inquire with them, right?
Ross: Yes, yes. Another interesting point around that pushback: I encourage people to disagree with me, because if you disagree with me, that means you're thinking. You're thinking about what you think is right and why you think I'm wrong, and that's great, because you're thinking about the future, and I'm not going to be right. You're giving people something to scaffold on, something on which to construct their own thoughts. Don't think what I think; I'm giving you something on which you can build your own thinking.
Nikolas: Yeah, and it's that idea of a mirror again. I use this technique in my keynotes. At the end, just like you say, it's like: does anyone have a question? And if there are 600 people in the room and no hands go up, you know that someone wants to say something, because everyone's smart. So I literally say: we're in a room full of really smart people, and I'm sure some of you don't agree with some of the things I said. Does anyone have a challenge? Do you think I missed anything? That's when I find audiences really wake up, and then the questions start flowing, and people have permission. I don't think we give people enough permission to disagree, right? And I think it's an incredibly important tool to use. So that's really cool.
Ross: So how do people find out more about your work and your books and the wonderful things that you give to the world, Nikolas?
Nikolas: If you type in Nikolas Badminton, there's only one of me. Well, there's another guy called Nikolas Badminton in South Africa, but he's not a futurist, so you can quickly determine that's not me. I run futurist.com, which I acquired a few years ago from Glen Hiemstra, an amazing thinker and futurist and a great mentor of mine.
That's generally where you can find me. I'm very active on LinkedIn; I do tons of chatting, debating, arguing, and engagement. Go out and find a little bit more about me: start with futurist.com, start with my name. I'm all over the internet. I'm basically an internet business, so there we are. I'm ubiquitous on the internet.
Ross: So good to talk. Thanks so much for your time and your insights, Nikolas.
Nikolas: Cheers, Ross. It’s been a pleasure. Bye.
The post Nikolas Badminton on cognitive vibration, AI for scenarios, psychological kinesiology, and quiet listening (AC Ep57) appeared first on amplifyingcognition.
– Brian Magerko
Dr. Magerko is a Professor of Digital Media, Director of Graduate Studies in Digital Media, and head of the Expressive Machinery Lab at Georgia Tech. His research explores how studying human and machine cognition can inform the creation of new human/computer creative experiences. Dr. Magerko has been research lead on over $15 million of federally-funded research; has authored over 100 peer-reviewed articles related to computational media, cognition, and learning; has had his work shown at galleries and museums internationally; and co-founded a music-based learning environment for computer science, called EarSketch, that has been used by over 160K learners worldwide. Dr. Magerko and his work have been featured in The New Yorker, USA Today, CNN, Yahoo! Finance, NPR, and other global and regional outlets.
Google Scholar Page: Brian Magerko
LinkedIn: Brian Magerko
Georgia Tech Profile: Brian Magerko
YouTube: Brian Magerko
Ross Dawson: Brian, it’s a delight to have you on the show.
Brian Magerko: Oh, thanks for having me, Ross.
Ross: So you're a perfect guest in many ways. You've been studying human and machine cognition, and how they shape creativity, for quite a long time now. I'd love to hear a little of how you came to this, and why it's the center of your work.
Brian: I had the good fortune of being at Carnegie Mellon for my undergrad in the late 1990s, and there were a lot of folks there doing really exciting work related to AI and cognition, as there had been since the field's inception. So I got exposed to people like John Anderson, who's huge in the cognitive modeling community; Herb Simon, who wound up advising me; and Ken Koedinger, who has been one of the leading intelligent tutoring system minds since the 80s. Being in the mix of all those great minds, being able to take classes with them and do research, really was a great place to start.
Ross: Those are incredible people.
Brian: Oh, yeah, right! I took Jay McClelland's neural networks class, and he wrote the book that we used. And Jaime Carbonell, I took his advanced AI class.
Ross: So what was Herb Simon like?
Brian: Herb Simon? As undergrads, we were pretty much just in awe of him. There were five cognitive science majors in our year, a huge class, and we all put him on a really high pedestal. Taking his class was absolutely phenomenal, though I feel like I would have gotten much more out of it as a graduate student than as a scatterbrained undergraduate.
He was kind enough to be my research advisor for my undergrad thesis, which was one of the first places where I was really taking these ideas of studying human creativity and formalizing them computationally. I went in the direction of wanting to do models of creativity, which was a very difficult environment to do creativity work in at the level that I was doing. But he advised me as I tried to study the tacit knowledge of jazz improvisers, alongside studying cognitive science and computer science at CMU. I was doing a jazz improv minor, because why not, I guess? I just wanted to explore the wide variety of things that interested me and take the opportunities that I had, and a lot of my career has been about synthesizing those things together. So my work with Herb was about studying jazz and jazz improvisers, which was the thing I got exposed to and learned about as a student there. And, yada yada yada, a lot of that informed the first NSF proposal I ever wrote and got awarded, on studying improvisational theater and building formal representations of it.
Ross: That's incredible. For those listening who don't know, Herb Simon was a Nobel Laureate in economics and laid much of the foundation of modern decision theory.
Brian: He’s also one of the progenitors of artificial intelligence.
Ross: Well, yes. He was right there at the start.
Brian: There was the Dartmouth conference in '56, I think it was. He wasn't there the entire time, but he's on the list of folks with Marvin Minsky and others, along with Allen Newell. One of the computer science buildings at Carnegie Mellon is named after those two guys: it's the Newell-Simon building.
Ross: I was looking through your list of papers and found this wonderful one. I will bring this to the world of the present soon, but you had a paper in 2000 called Robot Improv. So let's go back to that intersection of improvisation and AI. I'd be interested to hear it in the context of today; we've come a long way, of course, in capability, so I'd love to hear the seeds of the thinking and the point to which it has evolved.
Brian: I'm not sure what the question is, but I can just talk. That work was again a product of being at Carnegie Mellon and having some wonderful people to work with there. I had the fortune of working with a robotics professor named Illah Nourbakhsh; between him and Herb Simon, the two of them really sowed the main seeds for me as a researcher. Illah was very much about doing robotics research at Carnegie Mellon but refused to take military funding, so he wound up asking very different and very interesting questions that the folks doing the hardcore systems weren't asking. He taught an intro mobile robot programming class, and I thought that sounded fun, so let's do that. Some of us liked it so much that we bugged him to do a special class, and he did, and we got to do this little improv robot comedy troupe. In the 90s, and even earlier, there was work in sort of generative AI story systems, but this is the first robotic one that I think existed. There hasn't been much even since, but we did this in '99. The fact that the computers were laptops and that they were talking to each other wirelessly, that alone was 'woo'. So we've come a long way in some ways. But what those robots did was pretty much a trick. They did this improv acting, but behind the scenes they were just sending each other messages saying, hey, I'm doing something mean; hey, I'm doing something angry; I'm doing something happy; whatever. Just some emotional valence.
We had this little emotional calculus: when the other robot makes this kind of emotional move, this is how you update your emotional model, and here's how you pick a new move based off of your model. They would basically just say one-shot lines; there was no kind of coherent dialogue back and forth. One robot was trying to leave the room and the other robot was trying to get it to stay, and it was about this tension as to whether or not the robot would leave. It was really interesting because they would actually improvise these things! Sometimes the robot would leave, sometimes it would stay, and they always said funny things, because we got to author those. When a robot turns to another robot and says, 'Wait, don't leave, I'm pregnant', people laugh. It's a really good medium for comedy. What I learned from that experience, at least, is that improvisation through improvisation is really hard to do. So we faked it, basically: one would say, 'I'm mad, and here's the thing I'm saying', and the other would say, 'Oh, I'm scared, here's the thing I'm saying', but there was no actual socio-cognitive mechanism going on. There were just these independent, very simple Chinese boxes, taking a little bit of input, making a decision, and outputting a thing. And that's about it.
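The emotional calculus Brian describes can be sketched in a few lines. This is a hypothetical reconstruction for illustration, not the 1999 code: the move names, influence weights, and lines of dialogue are all invented, but the shape matches what he describes, with each robot broadcasting only an emotion tag, the other updating a simple emotional model from it, and then picking its next one-shot move from its own strongest emotion.

```python
import random

# Toy reconstruction of the robot-improv "emotional calculus" (all values
# and lines are illustrative assumptions, not the original system).

MOVES = {
    "angry":  ["Don't you dare walk out that door!"],
    "scared": ["Wait, don't leave, I'm pregnant!"],
    "happy":  ["Fine, stay. We'll order pizza."],
}

# How hearing a move with a given emotion shifts each of our own emotions.
INFLUENCE = {
    "angry":  {"angry": +0.3, "scared": +0.2, "happy": -0.4},
    "scared": {"angry": -0.2, "scared": +0.1, "happy": +0.1},
    "happy":  {"angry": -0.3, "scared": -0.2, "happy": +0.4},
}

class ImprovRobot:
    def __init__(self, name):
        self.name = name
        self.state = {"angry": 0.0, "scared": 0.0, "happy": 0.0}

    def hear(self, emotion):
        """Update our emotional model from the other robot's broadcast tag."""
        for e, delta in INFLUENCE[emotion].items():
            self.state[e] = max(-1.0, min(1.0, self.state[e] + delta))

    def act(self):
        """Pick the move matching our strongest emotion; no real dialogue."""
        emotion = max(self.state, key=self.state.get)
        return emotion, random.choice(MOVES[emotion])

a, b = ImprovRobot("Stay-bot"), ImprovRobot("Leave-bot")
a.state["angry"] = 0.5            # opening beat: one robot starts out angry
for _ in range(3):                # a few one-shot exchanges
    emotion, line = a.act()
    b.hear(emotion)               # the only "communication" is the emotion tag
    emotion, line = b.act()
    a.hear(emotion)
```

Note there is no shared meaning here: each robot is an independent box that reads a tag, nudges three numbers, and emits a line, which is exactly the "trick" Brian contrasts with genuine collaborative improvisation.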
Ross: I think part of the point there, though, is that the AI was very primitive then, but you had the concept.
Brian: What I'm talking about is relevant to LLMs today: there's a lack of cognition, awareness, and reasoning about the establishment of meaning together. Some people would argue about this, but from an epistemological viewpoint, large language models do not have knowledge about collaborating. They can't describe the process that we're in, jump around in it, and reason about it. It's called generative AI for a reason.
Ross: This is where AI is a complement to humans.
Brian: It's an oracle, it's a tool: I have a query, give me the answer. It's not a collaborator. It's not a thing where you sit down at the computer and say, 'Okay, let's think about this problem together and hash out our solution together.' It's more of a 'give me ideas' or 'what do you think about this idea' kind of thing, right? This idea of establishing shared meaning, shared mental models, and making sense together is a lot of the work I've been focused on over the past couple of decades: studying human collaborative creativity, and how we can model the things we do together that allow us to so effectively make meaning in the world together. Computers don't do this. It's just not their capability, right?
Ross: One of the things you've come to, one of your recent projects, is around improvisational dance with AI.
Brian: It's called LuminAI. It used to be called the Viewpoints AI Project before we stopped using Viewpoints. It's been a decade-long project. I started with this question with a small group of students: hey, why is this improv theater stuff really hard? Because you have to talk so much. It's so dialectic, and I didn't want to solve the natural language problem. Maybe somebody else would (somebody did, which was great), but still: what can we do if we do all the reasoning, all this stuff that we're talking about with improv, but with no talking? Not even body language, no semantics, no semaphore. Just abstract, raw emotional output: scribbling, doodling, contemporary modern movement, things that aren't restricted to a very specific vocabulary but are more about 'I've got stuff in my head I have to get out physically'. We've been working on this for the past few years, and we were lucky enough to get National Science Foundation funding through a program called M3X, the Mind, Machine and Motor Nexus. It's a very futuristic-sounding program.
They funded this to study dancers: how do people reason about the ebb and flow of idea introduction and idea exploration when improvising with someone else through movement across time? We've been working with contemporary dancers and a dance professor, Andrea Knowlton, who has been amazing. We studied the dancers to inform the technology and the design of the interface, and we also took the technology and incorporated it into their classroom. For two months we did a longitudinal study of their thoughts and their adoption of the technology over time, and the students hated it at first. They were like, 'AI, boo', which made us wonder why they signed up for the AI and dance class. Why are you guys in the room? But after they used it in rehearsals, after a week or a week and a half, the language and attitude towards the technology really changed for the better. In early May, we had the world's first improvisational human-AI dance performance. You can find a video of it from our website; it's on the Expressive Machinery Lab's YouTube page.
Other folks have done AI dance performances before, I've done it before, but we've never had one where there was an actual model of collaboration occurring. That's what we put on stage: an agent that is reasoning about improvising, that 'knows' it's improvising, as opposed to a thing that is merely responsive to us, which is more of an intelligent tool. This is trying to see how AI can augment us as a collaborator. There are really interesting things you can do with an AI collaborator that's projected in midair (it was on a big scrim): you can make it really big, you can put it on a wall, you can make a dozen of them in a row. The affordances of this dancer, both for rehearsing and for performance, were just different, which is why I like this work so much. It wasn't about replacing dancers. It was about: okay, we have dancers, how can we have them express themselves and be creative in new and interesting ways with technology?
Ross: Fantastic. Just to hop to a different topic: EarSketch, a different intersection of humans, cognition, and the senses. It has had a big impact, and I'd love to hear that story.
Brian: Oh, sure. EarSketch actually has an AI end chapter; hopefully I'll remember to talk about that. EarSketch is a project that's been a large team collaboration for 12-plus years at this point. It was co-founded with a School of Music professor here named Jason Freeman; we've been working with each other every week for over a decade. We've been working all this time on designing and disseminating a learning environment, mainly targeted at high schoolers, for changing attitudes about considering computer science.
There's a pretty big difference in representation in computing, proportionally, when you talk about gender or ethnicity. EarSketch is an attempt to design around the social and socio-cultural barriers that keep Black, Latino, female, and other students from considering computer science in high school: it's nerdy, or it's for boys, or they'll get beaten up. There are a lot of documented issues; unless you're a white or Asian male, you probably have these things in front of you keeping you from even taking that computer science class, or checking out that workshop at the library. So EarSketch is an attempt to circumvent those socio-cultural barriers and provide computing in a different context, one that offers meaning-making and personal expression, in a domain that is especially ubiquitous across youth culture.
Making hip-hop and electronic music doesn't touch everybody, but it touches a lot of the kids that we hope will at least become more literate in computing, if not consider a more technical career down the road. This isn't so much about feeding the Silicon Valley machine as it is about empowering people with a literacy in a way that is very accessible and acceptable to a very specific part of our population, and not so much to others.
Ross: It's very much about augmenting cognition through different senses. So, how does it come together?
Brian: As you said, the environment is called EarSketch. It's an online platform; if you just Google EarSketch, you'll find pictures of ears, and you'll also find our website. It's been used by over a million and a half students, and we have about 20,000 active learners a month. It's part of the AP curriculum for computer science in the US. It takes the idea of making music and the idea of programming and puts them together. Kids use Python or JavaScript, industry-standard languages they're likely to see again, and they manipulate musical samples, beats, and effects with this code. Within a single hour, kids who have never programmed before can sit down and, by the end of our Hour of Code curriculum and other curricula, have a thing they want to show off.
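To give a flavor of what those first-hour scripts look like: in EarSketch, a few lines of code place sound clips on tracks of a multitrack timeline. The sketch below is a simplified stand-in so it runs standalone; the real platform provides `setTempo` and `fitMedia` and a built-in library of named sound clips, and the sound names here are illustrative, not actual EarSketch constants.

```python
# Sketch of an EarSketch-style script. The real platform provides setTempo,
# fitMedia, and a library of named sound clips; here we stub them with a tiny
# event recorder so the example runs standalone. Sound names are illustrative.

timeline = []  # recorded events: tempo settings and clip placements

def setTempo(bpm):
    # Set the song's tempo in beats per minute.
    timeline.append(("tempo", bpm))

def fitMedia(sound, track, start, end):
    # Place a sound clip on a track, from start measure to end measure.
    timeline.append((sound, track, start, end))

# A beginner's first song: a drum loop with a synth line layered on top.
setTempo(120)
fitMedia("HIPHOP_DRUM_LOOP", 1, 1, 9)   # track 1: measures 1 through 8
fitMedia("SYNTH_LEAD_MELODY", 2, 1, 9)  # track 2: layered over the drums

print(len(timeline))  # 3 events recorded
```

The point of the design is that the code is the score: changing a number audibly changes the music, which gives immediate, personally meaningful feedback on what the program did.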
That idea of having an artifact that you want to show off to your friends and family, that you're not trying to hide, that you're actually kind of proud of or invested in, is a very unique experience in education. That's the kind of moment we're trying to provide for these kids. They're really lacking it, because they're not checking out the robotics camps or whatever it is that works especially well for the folks who are already represented in computing, like computer games.
Ross: You've done similar things in other domains, such as drawing, but this goes to the broader point of the co-creative cognitive agent. I'm always talking about the idea of humans plus AI: AI as a complement, AI to make us more, amplifying our cognition. That really seems to be the heart of your work, this idea of the co-creative cognitive agent. I'd love to hear a little more; riff on where we are today and what we need to be doing.
Brian: It's really nice to have help in creative domains that you're trying to learn, and it's really difficult to get help at an individualized level on a daily basis unless you're especially wealthy. If you can't get a personal tutor for programming, or for graphic design, or what have you, it would be nice to have tools that can help you. Democratizing that knowledge is really a big part of our goal, and the lack of prior research in this domain is simply because it's a lot harder. It's easier to do AI-assisted learning in algebra, like what Ken Koedinger made his career in.
Algebra has really well-defined rules: you can look at a problem, exhaustively search the errors that students make, and represent those errors computationally. It's a well-understood problem. That's hard to do with sketching. Working on sketching raises some really interesting questions that point out the deficiencies in current AI techniques for generating images. One of the big criticisms is that they're just copies: there's no understanding, they're just duplicating and mushing together things they've seen before.
Now, whether or not this is a good idea to release into the world is something I'm actually wrestling with, but you can imagine improving these agents by representing actual perceptual processes, in terms of Gestalt representations, for example. This is a thing that I've been especially interested in. If I can draw a little C shape, the AI knowing that that's a container, and that it can put things in that container, is a very basic visual Gestalt representation. Once you start being able to put those kinds of things together, you can get pretty complicated behaviors emerging pretty quickly, in terms of the intricacy and variance of what you can do with an AI that can actually perceptually reason about the images, versus just 'I've seen pixels, and pixels go with other pixels, and I'm gonna put some pixels here.'
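The container idea can be made concrete with a toy sketch. Everything below is an illustrative stand-in, not Magerko's system: strokes are reduced to point lists, and "inside the container" is approximated by a bounding-box test, where a real system would do genuine perceptual grouping over hand-drawn strokes.

```python
# Toy sketch of one Gestalt-style inference: treating a C-shaped stroke as a
# container and asking whether another mark is "inside" it. Shapes are plain
# point lists; containment is a crude bounding-box check standing in for
# real perceptual grouping.

def bounding_box(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def contains(container_pts, item_pts):
    # The item counts as "in" the container if its centroid falls within
    # the container stroke's bounding box.
    x0, y0, x1, y1 = bounding_box(container_pts)
    cx = sum(p[0] for p in item_pts) / len(item_pts)
    cy = sum(p[1] for p in item_pts) / len(item_pts)
    return x0 <= cx <= x1 and y0 <= cy <= y1

# A "C" drawn as a few stroke points, and a small dot near its middle.
c_shape = [(0, 0), (0, 4), (3, 4), (3, 3), (3, 1), (3, 0)]
dot = [(1.5, 2.0)]

print(contains(c_shape, dot))  # True: the dot is perceived as "in" the C
```

Even this crude relation gives the agent something pixel statistics lack: a symbolic fact ("the dot is in the container") that it can reason over when deciding what to draw next.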
Ross: It seems what you're describing is 'bouncing off each other': you do something, the AI does something which is not necessarily determined, you're not just instructing it, it's bouncing off you. It goes back to the roots of this robot improvisation, as it were. Your point around LLMs has been, basically, that you ask, you tell it to do stuff. So what do we do now to unfold this more emergent collaboration with LLMs and their ilk?
Brian: Funding labs better is part of it. I hesitate to comment on this; large language model development is 99% led by industry. Who knows what's going to come out in a month? At some point, it felt like we were just in this holding pattern of waiting and seeing until the crazy stops. Maybe it stopped with multimodal models. I haven't seen anything about social cognition at all in any of the work that's out there. But after being surprised every month for the past few years, I've definitely lost a bit of certainty as to whether or not we're asking questions that other people aren't, because they may be doing it in secret, using lots of resources, and they'll suddenly release the thing someday. That's when we'll find out, since they're not out publishing papers.
Ross: There are two sides. One is how we get the AI to interact with us better and more usefully, in ways that draw us out; the other is our own skills and attitudes, and the ways in which we interact with the systems. Taking your mindset and propagating that into the ways we should be thinking about how we work with these systems.
Brian: We have a really clear ethical problem with one of our projects now; it's a microcosm of the larger space. As I was saying at the end of the EarSketch journey, we've worked on a co-creative AI called Cai, a collaboration with Kristy Boyer and folks at the University of Florida. It was a conversational AI that helped you write your EarSketch code and helped you with both the technical and aesthetic sides of the project, which had never been done before; it's a big new thing. LLMs came out right towards the end of our development and completely changed kids' expectations. We had lots of really good positive findings in our iterative designs, and suddenly kids' perceptions had completely changed. The floor was raised way up; we were suddenly out of touch with their expectations. This is still an interesting problem.
We're working on it with LLM technology as a part of it. But even if we're wildly successful, if we build this agent that really helps kids learn and write better EarSketch code, how do we release it? How do we make this tool available? As I said before, this is a thing that gets judged in AP tests, and those AP tests do not take intelligent assistants into account. There's this double-sided question: what can we do, and what does the world actually want? What is actually useful for the world? We're still trying to figure that out to some extent, because this feels like a really useful thing. Yes, kids can learn from this, but how does it fit into our current structures, how we teach right now and how we evaluate? I'm not sure about that. Part of this work really is figuring out the right way to integrate it into our current ecosystems, rather than some ideal one that we're designing for. This might be a thing that you can turn on for your class, or turn on for one assignment and then off. There has to be some teacher control that we're gonna have to figure out, or some gatekeeping where it's not just anybody using it anytime. It's all for the sake of people being able to evaluate people at the end of the day, and we're not a part of that at all.
Do we make co-creative agents for EarSketch projects, or co-creative agents for drawing? There's still this question of what society is okay with. Right now, it seems like society is not okay with AI-generated visual art. From what I'm seeing, there's a lot of vitriol: if a comic book artist gets accused of using AI art, that person is suddenly canceled.
Ross: Even though it's often just part of a co-creative process.
Brian: Right. Here's the weird and bizarre thing: there's a slippery slope argument, some big gray area, where AI has been used in film for decades, but now we're worried about it being used in certain ways. How do we talk about the nuance, the difference between 'we used AI to fix Arnold Schwarzenegger's eyebrows in post-production' versus 'we used AI to take body scans of doubles and have them act in all of our movies forever'? There's a really big difference, but both are using AI, and there's no really good language or common literacy to talk about these differences. That's the reason a big part of my work, alongside EarSketch and CS education and, as we've talked about, AI, has been AI literacy, in particular within CS education.
I don't know if we've talked about this previously, but the main framework in the world for defining and discussing AI literacy is the result of a dissertation in my lab, by my former student and current collaborator Duri Long. We published a paper in 2020 called 'What is AI Literacy?', and it gets hundreds of citations a year. I point this out to bring up the idea that a lot of how we interact with these agents is in how we design them, but also in literacy, in how we know how to interact with them. It's on us to design AI systems that are transparent and explainable, but it's also on us to consume visual content skeptically now. It's on us to have some basic understanding of the capabilities and limitations of large language models, so that we treat the information they give us, including the false information, appropriately. Some of the work we do is in designing museum exhibits to get at specific learning objectives that center on this topic. If you're in Chicago, we're doing pop-up exhibits at the Museum of Science and Industry every now and then, and hopefully they'll be there for a longer stretch in a year or so. We're putting AI on the floor in a way that's about engaging with AI creatively, which is sort of my thing, and learning about AI through that creative, often embodied and tangible interaction. The dancing AI is part of that exhibit.
Ross: To round out: where should we be going in this space of human-computer creativity and expression? What are the next frontiers? What are the things that will enable us to do more with computers in being creative and expressing ourselves?
Brian: Something I mentioned a minute ago, explainability, which we're on the cusp of in a lot of ways, is far and away one of the most important things missing from current systems. Gosh, I don't know which systems do what now, but sometimes language models will give you citations: 'here's what I said, and here's where I got this information from'. That is fantastic compared to 'here's the truth' and nothing else. In terms of implementation and technology, the socio-cognitive stuff I was talking about earlier would be another thing. And as a country and as a planet, policy is where we have more catching up to do than maybe anything else. The sudden integration of these technologies into our society is a weird experiment that we've decided to do, and it really feels that we're not necessarily making the decisions that are best for us, but rather the ones that are best for the market, or for investors in specific companies, and I really feel like they're not looking out for me.
So there's, I feel, a Star Trekian future for us where we take these technologies and use them for our betterment: to advance our lives, to help create new art, discover new scientific concepts, express ourselves, and find meaning. But there are other folks who are just using generative AI to make political spambots that argue with people on Reddit. So much about these technologies depends on the beholder rather than the technologies themselves, and I guess that's true for pretty much any technology. But that question of what we build, and how people who aren't like us are going to use it, is the thing we all should be asking as researchers. I've been asking myself quite a bit lately, about the socio-cognitive work: if I were super successful, do I understand the actual ramifications of this technology existing in the world, and of me releasing it open source? What would that do? Having some lack of clarity and understanding is a weird place to be after having worked in this field for 25 years now.
Ross: I think the takeaway is that we need inspiration, and I think your work, your attitude, and all your collaborators are a bit of a light and a lead for us in being able to consider AI as enhancing who we are, our ability to express ourselves, and our potential. So thank you so much for your work, everything you're doing, and your time today, Brian.
Brian: Thank you. That was one of the nicest summations of my work I’ve ever heard. I might have to write that down. Thank you. Appreciate it. And yeah, thanks for having me today. I love talking about this stuff. It was great.
The post Brian Magerko on AI to enhance human creativity, robot improv, music to learn coding, and improvisational dance with AI (AC Ep56) appeared first on amplifyingcognition.
– Claire Mason
Claire Mason is Principal Research Scientist at Australia's government research agency CSIRO, where she leads the Technology and Work team and the Skills project within the organization's Collaborative Intelligence Future Science Platform. Her team investigates the workforce impacts of artificial intelligence and the skills workers will need to effectively use collaborative AI tools. Her research has been published in a range of prominent journals, including Nature Human Behaviour and PLOS ONE, and extensively covered in the popular media.
Google Scholar Page: Claire M. Mason
LinkedIn: Claire Mason
CSIRO Profile: Dr. Claire Mason
Ross Dawson: Claire, wonderful to have you on the show.
Claire Mason: Thank you, Ross. Lovely to be here.
Ross: So you are researching collaborative intelligence at CSIRO. So perhaps we would quickly say what CSIRO is. And also, what is collaborative intelligence?
Claire: Thank you. Well, CSIRO stands for the Commonwealth Scientific and Industrial Research Organisation. But more simply, it is Australia's national science agency. We exist to support government objectives around social good and environmental protection, but also to support the growth of industry through science. We have researchers working in a wide range of fields, generally organized around challenges, and one of the key areas we've been looking at, of course, is artificial intelligence. It's been called a general-purpose technology, because its range of applications is so vast and it is potentially so transformative.
And collaborative intelligence is about a specific way of working with artificial intelligence: considering the AI almost as another member of a team, or a partner in your work. Up till now, most artificial intelligence applications have been about automating a specific task that was formerly performed by a human. But artificial intelligence has developed to the point where it is capable of seeing what we see, conversing with us in a natural way, and adapting to different types of tasks. That makes it possible for it to collaborate with us: to understand the objective we're working on, to communicate about how the state of the objective is changing, or even be aware of how the human's state is changing over time, and thereby produce an outcome that you can't break down into the bit that the AI did and the bit the human did. It's truly a joint outcome, and we believe that has the potential to deliver a step change in performance.
Ross: Completely agree. This is definitely high-potential stuff. You're doing plenty of research; some of it's been published, some is yet to be published. Perhaps you can give us a couple of examples of what you're doing, either in research or in practice, which can crystallize these ideas?
Claire: Yeah, absolutely. To begin with, the key element is that we're trying to utilize the complementary strengths and weaknesses of human and artificial intelligence. We know artificial intelligence is vastly superior in terms of dealing with very large amounts of data and sustaining attention on very repetitive or ongoing tasks. That means it's often very good when you're dealing with a problem that requires very large amounts of data, or where you need to monitor something fairly continuously, because humans get bored, and are subject to cognitive biases and social pressures. So that's one area of strength that the AI has.
But the AI isn't great at bringing contextual knowledge. It isn't great at processing information from five different senses simultaneously yet, so it will fail at common sense tasks that humans can perform really easily. It also can't deal with novel tasks: if it hasn't seen that type of task before and hasn't seen what the correct response is, it can't respond to it. So it's also important to have the human in the loop, if you like.
So, we actually developed a definition of what represents collaborative intelligence. Our criteria were that it had to be the human and the artificial intelligence communicating in a sustained way over time, on a shared objective, to produce a joint outcome; that there was this capability for the AI to understand changes in that objective; and that it was also going to improve performance and the human's work experience. So that's quite a lot of tough criteria. We reviewed all of the academic literature to see whether this concept was actually delivered in reality; we did this study one or two years ago now. We only found 16 examples at the time, but they spanned a wide range of applications. Sometimes they were virtual, and a great example of that was something called the SAGE patient management system. This is a system that's meant to improve diagnosis and patient management.
The way the system works is really interesting. The AI doesn't deliver a diagnosis. Its job is to take the same data that the physician has and look for any contraindications in that big stream of data that suggest there might be an issue in the diagnosis or the ongoing management of the patient. So its job is only to intervene if it does see something that suggests maybe something's wrong in that diagnosis or monitoring. It then communicates with the human to say: have you looked at this? What about that? The idea here is that the AI isn't in charge, but it is adding to the quality of the treatment and diagnosis by making sure the physician hasn't missed anything.
Another lovely example is a cyber-physical one. We're interested in instances where the AI is embedded in a physical object, whether that be a robot or a drone. In this case, it's called DroneResponse, and it's supporting human rescue teams. The way this works is that the human rescue team is given an alert that somebody is in trouble or needs rescuing. It's hard for a human rescue team to get to a large area and search it quickly, but you can send a pack of drones out to that area. Those drones stream imagery back to the human response team and send alerts when they think they may have spotted the person. The drones still don't have the capability to say, yes, that's the one we're looking for; that's the job of the human response team. When that decision is made, the drones can call on other drones equipped with something like a flotation device, if the person needing rescuing is in water, and bring it immediately. And although the drones can't rescue the person, so the human response team still needs to get there, the team can get those drones to move around the person, to understand how to get to the site quickly and what type of support to bring. So you can see a rescue operation happening more quickly and more likely to deliver a good outcome.
Ross: Those are great examples, and thinking about them, I just wonder: why only 16? You can think of extrapolations or variations on what you've just described, or other use cases. There are many other ways, even quite simple ones, say pre-identification of a particular pattern to alert humans; that's fairly common and easy. So what are the stumbling blocks to really building collaborative intelligence?
Claire: Finding a pattern and alerting a human isn't that sustained interaction, that communication over time where you continue to build on one another's work. So we've been looking at where this capability can be applied in our own science, and what we've discovered is two types of applications. I would say the arrival of generative AI has totally changed the context and is creating the potential for collaborative intelligence to be used in many domains; our review is a little bit out of date now, because that's been such a game changer. But what we've found is that it can make a huge difference in areas of discovery.
For us, scientific discovery would be things like our national collections: we hold millions of insects and flora and fauna, and we don't have enough human beings with the expertise to identify when, out of all the insects that have been sorted through, one might represent a new species. So it's using AI to go through that entire collection, start digitizing it, and then work with the human to determine: this one looks anomalous; is it potentially something different that we need to study further? We're also using this capability in the genome annotation space, where AI is already being used, but unfortunately in a way that's proliferating errors. If an incorrect classification was made originally, then because everybody draws upon one another's work to build the body of knowledge, it gets magnified across many studies.
And so the potential here is that rather than just having the AI come up with suggested annotations, it works with the human over time, and the human can direct it to other studies or other datasets to refine the decisions that are made. So yes, many applications, things like discovering proteins with special properties needed for particular uses. Cybersecurity is really big, because at the moment AI is being used, but it's actually creating alert fatigue: the human cybersecurity professionals cannot respond to all the alerts that the AI generates. So now we're building AI to monitor the human's workload, using cues such as their eye gaze or blood pressure, but also just how many alerts that human is currently dealing with, to modify the threshold for notifying the human, and potentially do more of the work itself in circumstances where that would make a difference.
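The workload-adaptive alerting idea can be sketched in a few lines. This is a minimal illustration, not the CSIRO system: it assumes each alert carries a severity score, uses queue length as a crude proxy for the richer workload cues (eye gaze, blood pressure) Claire mentions, and the threshold numbers are arbitrary.

```python
# Sketch of workload-adaptive alerting. Assumptions: each alert has a
# severity score in [0, 1], and the analyst's workload is proxied by how
# many alerts are already queued. Thresholds are illustrative only.

def should_notify(severity, queued_alerts, base_threshold=0.5):
    # Raise the bar as the analyst's queue grows: when the human is
    # saturated, only the most severe alerts get through, and the rest
    # are held or handled by the AI itself. Cap so critical alerts
    # always reach the human.
    threshold = base_threshold + 0.05 * queued_alerts
    return severity >= min(threshold, 0.95)

print(should_notify(0.6, queued_alerts=0))  # True: analyst is free
print(should_notify(0.6, queued_alerts=5))  # False: queue raises the bar
```

The design point is that the notification decision becomes a joint function of the alert and the human's state, rather than a fixed rule about the alert alone.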
Ross: I think part of humans plus AI in a system is obviously designing the AI and its outputs so that humans can more readily use them, make sense of them, or integrate them into their own mental models. And part of it is the capability of the humans to use the output well, whatever is generated by the AI. I understand you've been doing some research into the skills for effective use of generative AI, so I'd love to hear more about that.
Claire: Thank you, Ross. That's spot on. We're also looking at things like how you design workflows, redesigning how things are done, because when you get a new technology and just plug it into existing ways of doing things, you don't really get the transformative benefits. A great example is in the world of chess, where IBM's Deep Blue, a supercomputer, beat the human world champion in 1997. Supercomputers have improved massively since then, but they've been outclassed by hybrid teams of humans working with computers. What's interesting in that story is that it's not the best chess player and the fastest supercomputer brought together that achieve the best results. It's the humans who have the skills to collaborate with the AI (they don't need to be grandmasters), paired with AI that's built to work with humans, that achieve the best results.
And one of the ways we've been trying to understand what skills the human needs to collaborate in this longer-term way with artificial intelligence is by using generative AI as a test case. In those instances where we have a simple question and ask for an answer from the AI, we would not call that collaborative intelligence. But when you're working with it over time to produce a report, where the human might ask for a structure for the report or some initial ideas, and then suggest some sources the AI should go to, then we consider that more a form of collaborative intelligence.
So we've been talking to expert users of generative AI across a range of fields, nominated by their peers. We've asked them what characteristics, whether skills, knowledge, or mindsets, make the difference between effective use of these tools and so-so use. The feedback was, in some ways, what you would expect. One of the key things they talk about is understanding the strengths and weaknesses of the tool: what it's good at, what types of tasks you might use Claude for, and when you might use GPT-4 instead, for example. But they also talked about the need for the human to still have domain expertise, to understand what you're looking for, how to evaluate the output of the AI, and how to improve upon it. And also a responsible mindset where, as one person put it, 'I'm not considering the AI as a teammate so much as an intern, because it's my job to guide them, and ultimately I'm responsible for the quality of the work.'
So it's really important that we're not ceding everything to the AI, and that we continue to add value ourselves in that collaboration. Then they talked about having specific AI communication skills. Some people use generative AI like Google, just plugging in a search-style prompt, but you need to understand it's conversational: you can speak in natural language, you can improve upon your existing request if it hasn't responded well, and you can get different types of output with different prompts.
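The difference between a search-style prompt and conversational use comes down to keeping and building on a message history. The sketch below is an illustrative stand-in, not any vendor's API: a real chat client would send the accumulated history to the model and append its reply each turn, so every refinement builds on the last answer instead of starting cold.

```python
# Sketch of conversational refinement versus one-shot prompting. The
# "client" here is a stand-in: a real chat API would send the whole
# history to the model each turn and append the model's reply.

def add_turn(history, role, content):
    # Record one turn of the conversation; returns history for chaining.
    history.append({"role": role, "content": content})
    return history

history = []
add_turn(history, "user", "Draft an outline for a report on soil carbon.")
# ...model replies; instead of starting over, the user refines:
add_turn(history, "user", "Good. Expand section 2 and cite CSIRO sources.")
add_turn(history, "user", "Now rewrite the summary for a policy audience.")

print(len(history))  # 3: one evolving conversation, not three cold starts
```

A Google-style user would have issued each of those three requests in isolation; the conversational user carries context forward, which is what lets the output converge on what they actually want.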
And then I think the last piece that came through really clearly was the importance of a learning orientation. The people using these tools well weren't just bolting them onto an existing way of doing things; they were exploring what else they could do with this capability and how they could do things better, and investing time in learning how to use it well. That's unlike some people they described, who tried it once, didn't get the result they wanted, and therefore concluded it's not that useful. You need to keep seeing what it can do, because that changes over time, and then think about how to use it to deliver something better than you were delivering before.
Ross: So these expert users self-reported skills and behaviors and mindsets?
Claire: Yes. We just went in with a general question: can you talk about how humans make a difference in getting really good output from a generative AI tool, versus ordinary output? What skills, knowledge, attitudes, and mindsets do you think are important? And it really all clustered under those things: being informed about how to use the AI, being a responsible user, having the knowledge to direct it and evaluate its output, understanding how to communicate in a way that aligns with the affordances of the AI (which are changing all the time, because now we have multimodal generative AI), and consequently being a continual learner, constantly exploring what else is out there and how you can do it still better.
Ross: Well, a couple of thoughts on next steps here. One is having empirical studies to look at actual behaviors rather than self-reports. The other is: if these are indeed what makes somebody an effective user of generative AI, how do we then propagate this, educate people, or shift their attitudes? Going back to the industry point, how does this flow through into assisting organizations to be more effective?
Claire: There have been about four really good randomized controlled trials with generative AI, where half of the workers are randomly given access to the tool and half are not. Those studies have been with consultants at Boston Consulting Group, with programmers, with people doing writing tasks, and with call center workers. All of those studies found really significant productivity improvements: the call center workers could resolve 14% more issues, the developers completed about 15% more pull requests, and the consultants completed 12% more tasks with what was evaluated as 40% higher quality. With the customer service one, I love it because not only did workers get more productive, but worker turnover reduced, and the customers expressed more positive sentiment in their communication with the agents. So it's got the joint benefits we're really looking for: the workers are happier, and we're getting productivity and quality benefits from the use of AI.
Ross: The interesting point, though, is that in the Boston Consulting Group study, they did separate those who were given the tools without any training from those who were given some training. And there was really just a very small overall incremental improvement from the people who had the skills training; in fact, in some cases there was a deficit. So it's an interesting question whether just giving people the tools is enough, or whether there is education or skills training that can further improve the outputs.
Claire: Something that emerges consistently in this work is that it tends to be the less experienced and the lowest skilled workers who benefit most from the use of these tools. The other thing that's really important, and which gets to this question about whether people need special skills, is that even though performance improves on average, there are usually some tasks or decisions where the use of the AI is actually making the human's decisions worse, because, as we've said, the AI is not very good at tasks it's not trained to perform. In those instances it may provide bad advice, and in consequence the humans will take longer or give a worse answer than they might have working alone. It's because of those instances, and because of the value of training and understanding how these tools work and when they fail, that we are confident human skills are still going to be really important when we work with these tools. We are actually developing and trialing some interventions within our own organizations where we give people information about the strengths and weaknesses of the tools, how they work, and how they're trained.
And then we’re trialing that against another intervention, which is about promoting a mindful, metacognitive approach when you work with these tools. The reason we think that’s really important is because, as you know, we all have cognitive heuristics: ways in which we are able to make decisions under conditions of uncertainty. Those rules were primarily developed for working with other humans. So, for example, if somebody speaks well on a certain topic, I infer that they will be good at another, related task. That inference does not work when you’re dealing with generative AI, which can sound fantastic and perform brilliantly on some tasks, and then completely fall over on another task that would seem simple to us.
So what we’re arguing is that when we’re working with the AI in this collaborative way, nothing is routine and automatic, and our cognitive heuristics become less functional. Our role is actually to look for the things that don’t fit the rules, and to be more aware of where the AI might fall down and when it might be wrong. So we’re looking at training people to be more metacognitive: to think about what other information might be missing, and what other sources I might go to in order to validate what I’m getting out of the AI. We’re interested in whether AI literacy, or metacognitive interventions, or the combination, is going to deliver better results for humans who are working with these tools.
Ross: Do you have a hypothesis?
Claire: Our hypothesis is that you need both: it’s great to know how AI works and where it can go wrong, but unless you’re switching on that awareness and mindful approach to metacognition, you’re not going to utilize that knowledge in how you respond to the AI. And I think that will be a real challenge for us. Generative AI is so good at so many things; how do we make sure we stay aware and alert to where things could be better, or where it has made an error?
It’s funny, because it involves slowing down a bit with the generative AI to notice how to improve upon what it’s given you. Maybe we don’t need that on all tasks. But for knowledge work, I do think that’s where humans will add value: when they’ve retained that self-awareness, thinking about how they’re thinking and about what the AI is doing, which the AI at this stage does not have. It’s intelligent, but it isn’t self-aware.
Ross: Are there any established approaches to developing metacognitive strategies? Or, for new spaces like this, are there new metacognitive strategies that we require?
Claire: Metacognition is very well understood in the educational domain. We know that metacognition helps people to learn more deeply and more quickly. That’s why in educational settings you’re often asked a lot of questions about something you’re learning; they’re called metacognitive prompts. For example: what are the strengths and weaknesses of adopting this approach? Or, what alternative sources of information might you consider here? One possibility we’re looking at is building metacognitive prompts into artificial intelligence tools to encourage human engagement in metacognition.
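To make the idea concrete, here is a minimal sketch of what building metacognitive prompts into an AI tool might look like. This is purely illustrative; the prompt wording, function names, and mechanism are assumptions, not any actual tool Claire's team is building:

```python
import random

# Hypothetical metacognitive prompts, modeled on the kinds used in
# educational settings (wording here is illustrative only).
METACOGNITIVE_PROMPTS = [
    "What are the strengths and weaknesses of this approach?",
    "What alternative sources of information might you consult?",
    "Which claims here could you verify independently?",
    "What might the AI have missed or gotten wrong?",
]

def with_metacognitive_prompt(ai_answer: str) -> str:
    """Append one randomly chosen reflection prompt to an AI answer,
    nudging the user to evaluate the output rather than accept it."""
    prompt = random.choice(METACOGNITIVE_PROMPTS)
    return f"{ai_answer}\n\nBefore you use this answer, consider: {prompt}"

print(with_metacognitive_prompt("Paris is the capital of France."))
```

The design choice is that the prompt is attached to every answer, so the reflective step happens at the moment of use rather than in separate training.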
Ross: That’s a very interesting outcome of that study, and I think it’s particularly important. So it’s two-pronged, with literacy and metacognition. And the AI literacy goes back to what you were saying before around the basic approaches: what generative AI is effective for, and how we interact with it?
Claire: Yes. The concept of AI literacy has been around for a while, but with the proliferation of AI tools we’re going to need much more sophisticated AI literacy, because we’re communicating and interacting with these tools more and more over time. Building skills for communication with generative AI, or communication with a cobot, which is a very different proposition, is going to be necessary, and that hasn’t so much been the focus in the past. And there’s nuance here: generative AI literacy might look very different from the literacy needed for, say, a computer vision application. So yes, I think that’s going to become a whole big area of study in itself.
Ross: One of the very interesting areas you’re researching is whether generative AI can enhance capabilities in a collaborative system, a human using generative AI, but also whether there is, or can be, an improvement in the capabilities of the human when you take the AI away. I know there have been some research studies which found, in the conditions they set up anyway, that people given the tools do better, and when the tools are taken away they don’t do any better than they did before, which I think has a lot to do with the framing of the study. It’s a very interesting idea: can we use generative AI in a way that makes us more capable afterwards, without the generative AI?
Claire: Absolutely. Is this only lifting us as long as we have access to generative AI? Or can we learn while we’re using it, so that when it’s taken away we’re actually better at the task by ourselves? Other people are asking: could it make us less smart, because now we’re not using skills that we previously built up over time and experience? They’re really important questions, and I suspect there isn’t a simple answer, because it will depend on the way in which you’re using the AI. I’m pretty confident that when I use a language translator, I’m not learning, because I take the answer, plug it in, and get what I need out of it.
That study of the customer service workers in a call center was really interesting, though, because they had a system outage, and what they found was that the low-skilled workers who had used the generative AI continued to perform better. The theory was that the AI had been useful in highlighting the strategies that work best for dealing with a given type of problem or type of customer, and the workers retained that learning and continued to work better. So I think this is going to be a really important area of study, especially for teachers, classrooms, and universities, because we know these tools are starting to be integrated into those learning environments. How do we make sure not just that people are learning the skills to work with generative AI, but that as they work with it more and more, they are also still learning the domain expertise that we know makes a difference?
Ross: Are you doing a study on that at the moment, or designing one?
Claire: We are designing a study. What we’re struggling with is the choice of tasks, and also the amount of engagement, because what comes out of that language translation example is that whether we learn from working with the generative AI probably depends on how much we engage with the material it gives us. If I take the answer and plug it in, I’m probably not processing it and learning from it. Maybe in other contexts, where I’m using its ideas to build upon them, we will see some of that learning happening. So yes, that is a study we’re doing. We’re going to have people doing about six different tasks: some people not getting generative AI at all, others getting it some of the time and not at other times. Then we’ll assess everyone’s performance at the end, when no one has access to the generative AI, to see whether the people who had access have learned more than those who worked alone.
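The between-subjects design Claire describes can be sketched roughly as follows. This is a hypothetical illustration of her description, not the actual study protocol; the condition labels, task count, and alternating-access rule are all assumptions:

```python
import random

CONDITIONS = ["no_ai", "ai_sometimes", "ai_always"]  # assumed labels
NUM_TASKS = 6  # "about six different tasks" per the description

def assign_conditions(participants, seed=42):
    """Randomly assign each participant to one study condition."""
    rng = random.Random(seed)  # seeded for a reproducible assignment
    return {p: rng.choice(CONDITIONS) for p in participants}

def ai_available(condition: str, task_index: int) -> bool:
    """Whether generative AI is available on a given practice task.
    The 'ai_sometimes' rule (alternating tasks) is an assumption."""
    if condition == "no_ai":
        return False
    if condition == "ai_always":
        return True
    return task_index % 2 == 0  # "ai_sometimes": AI on alternating tasks

participants = [f"P{i:02d}" for i in range(1, 31)]
assignment = assign_conditions(participants)
# Learning is then measured on a final assessment done without AI by
# everyone, comparing whether AI-assisted practice transferred to
# solo performance across the three conditions.
```

The key feature is that the dependent measure is taken with the AI removed for all groups, so any advantage for the AI-practice conditions reflects retained learning rather than tool assistance.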
Ross: Well, I very much look forward to seeing the outputs of that. My frame is that at one end of the spectrum, you can design your Human-AI collaboration so that the AI learns what the human does, and then continues to take over more of that task. At the other end, you can design systems where in every interaction the humans are learning, perhaps explicitly in some cases, around developing or extending or testing capabilities. It is absolutely in the design of the Human-AI collaboration whether the human learns, or unlearns, or becomes numb, or whatever it may be.
Claire: A fantastic example of that, Ross, which takes your idea, though I’m not sure it’s excellent practice, is schools using a generative AI tool with students where the tool always removes some of the information from its answers. I’m just not sure that’s a great way of teaching people to work with these tools, because it doesn’t really allow you to experience the true affordances of the tool. But it speaks to the notion that maybe we’re going to have to design the AI in ways that ensure we don’t lose our own intelligence and knowledge, and continue to grow them.
Ross: Yeah, and I think over-reliance on AI may be the biggest problem we have. To round out: you’ve already discussed some of them, but what do you see as some of the other most exciting directions for this research and the potential of these approaches?
Claire: So I guess it comes back to those two types of use cases, discovery and monitoring, and identifying the areas where we need the strengths of the AI and the strengths of the human to get the best possible outcomes. I think it means beginning to break down the types of work where you need that combination: where we can’t just automate it, and the human can’t deal with the volume of data or the speed of response that’s required, or the work is currently, as Erik Brynjolfsson would say, dirty, dull, or dangerous. Then I think the real promise will be understanding how we design the work under that collaborative model, and things like how we calibrate trust appropriately: not too much, not too little. The answer to that is going to depend on the type of task you’re doing and the type of AI you’re dealing with. So there is so much work to be done in this space, but really high potential, I think.
Ross: Fantastic! I love all the research and the work you’re doing, Claire. I’m sure it’s going to have a very important impact. So thanks so much for your time and your insights.
Claire: Thank you for allowing us to share it. A pleasure.
The post Claire Mason on collaborative intelligence, skills for GenAI use, workflow design, and metacognition (AC Ep55) appeared first on amplifyingcognition.