By The Digital Life
The podcast currently has 55 episodes available.
Well, the reality is that automation is now making its way into our space. It has been, in fact, for a long time; we just haven't used the language of automation. We have a design firm here at GoInvo, and for many years one of the core tools for our team has been the Adobe Creative Suite, software that's loaded with automation and that has drastically changed what design means over the last 30 years.
This show is about the fact that automation is coming more quickly, in a way that is woven more into the everyday work lives of me, of you, of people like us, all kinds of people. This is impacting researchers, writers, artists, designers, engineers, and entrepreneurs, among others. It's going to change our work. It's going to change our jobs. Tasks are going to fall to the automation first. Some of that automation will simply take the tasks over; more commonly, it will be augmenting, so there will be tools that help us perform tasks more quickly, giving us more power.
Again, going back to the Adobe Creative Suite example. Those tools will in turn change what our jobs look like. They'll change the skills required, the tasks required, and for folks to be ahead of that, to have it be a tool that is improving our careers, improving our chances, giving us more longevity and more ability to really thrive, not just survive, we've got to be ready for that. We've got to be knowledgeable, we've got to be thinking, we've got to be learning, and Creative Next is about exploring all of that.
It's not what we're reading about and learning elsewhere; it's different. It's more subtle, more integrated into our lives, and it has a more direct and real impact on our work lives in particular, in the short term and in the years ahead. People weren't talking about that. It was still stuff down the artificial general intelligence path, or stuff about goofy robots. I really felt like people were looking in the wrong place, and so for me this is something people need to be aware of. It's a story that needs to be told, and it will help a lot of people, because we're understanding things that are going to really impact the world of work in the years ahead, and it's going to surprise a lot of people.
The people who aren't surprised, the people who are thriving with it, and us, and hopefully our listeners, and hopefully a much broader audience than that, are going to be at an advantage, are going to be protected, are going to be … in the language you're using on the show, future-proofed. For me, the discovery of it surprised me, the learning of it enlightened me, and I found a calling that this was something that needed to be done to be of service to people who I consider my peers, my friends, my colleagues, people I'm sharing community and history with.
There's this long transition, which we are currently experiencing, from a more industrialized economy into more of an information economy, and understanding those changes really sparks a lot of interest for me. I'm interested in this kind of transformation. For me, this podcast Creative Next is … it's a podcast, but it's also a much more focused research project in a lot of ways. We're going to be talking to experts on AI, experts on design, on technology, similar to The Digital Life in that way, but exploring this thesis around what's next for a creative economy. So that's another thing that excites me about the show, just the focus and the research aspect to it as well.
From there, we pivot to looking at how machines learn, and then specifically how learning machines have been participating in, and influencing, games. We get into chess. Chess was the first of the major strategy games that AI defeated, now over 20 years ago. That's given us 20 years to study what happens to a game once a machine dominates it, and what happens to the people who play and compete in that game.
We explore that, and then we move into poker, which is more recent. We look at how humans were able to build a machine that beat the best players, but then at what that has done to the poker community just over the last two years. What impact has that had on strategy, on play? How are poker pros using machines? That was pretty cool, too. That got us through about half of the season, and then we move into learning in the most direct way: a series of five shows that I think are really strong. We start by looking at how learning functions in the corporate world, then talk with a high school principal about how learning functions in high school, then how learning functions in university, then how learning functions for young adults from a student perspective, how they're learning both in and out of the university, and finally we get to online learning and lifelong learning, and how those things are manifesting.
We finish off by taking a look at where AI is headed, where automation is headed: in the years ahead, what are some things that will be changing, and how do we contextualize those for future seasons? Maybe that's a long-winded overview, but that's … Season one is about learning, and that's the journey that we've taken with it.
But then a lot of new blood, a lot of people who will definitely be new to our listeners and new to our shows. Chris Chabris, a fantastically smart author, professor, and columnist for the Wall Street Journal, talking with us about chess. Tobi Bisetti, a senior machine learning engineer, in episode two, and she really gives us a good framework for what we're talking about here when we're talking about AI and machine learning. The real stuff, not the sci-fi stuff. The nuts and bolts, among others. We have 12 guests in this first season, and I think it's a fantastic crew.
The show is then going to pivot in season four to engineering: how we make things work, and how we will automate the way that we make things work. Then season five is going to be on leadership, and that's going to come from a couple of different directions. One is about leadership and management, and how those things will be automated. The other part of leadership is how leaders can implement automation solutions, at scales small and large, into their organizations, whether those organizations are small or large, and really understanding what it is going to look like to be shifting, and to be leading the shift, into automated workplaces.
Season six is going to be called "You." It's going to look at our lives in the most direct way, regardless of whether you're an engineer, or an artist, or a journalist, or a research scientist. How will this impact you, and how can you make the most of it? How can AI and automation be not something that's a little scary, a little uncertain, that feels destabilizing, but something that's empowering, something that really is a tool for good in your life, in the lives of the people who count on you and count on your ability to make an income, and good for the world at large? How can you and those tools be a catalyst for that? That's our plan.
Listeners, remember that while you're listening to this show, you can follow along with the things that we're mentioning here in real time, just head over to TheDigitaLife.com, that's just one L in the digital life, and go to the page for this episode. We've included links to pretty much everything mentioned by everyone, so it's a rich information resource to take advantage of while you're listening, or afterward if you're trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM and Google Play, and if you'd like to follow us outside of the show, you can follow me on Twitter @jonfollett, that's J-O-N, F-O-L-L-E-T-T, and of course the whole show is brought to you by GoInvo, a studio designing the future of healthcare and emerging technologies, which you can check out at goinvo.com. That's G-O-I-N-V-O dot com. Dirk?
So you have all these fantastic technologies, and you don't have enough people to fill the jobs that they require, because they require a different set of skills than what some universities, colleges, and schools might be preparing students for. They're just not meeting the demand. Before we get into that broader topic, there's a second part of this story that I find really fascinating, which is the way in which you can pay for this education from Lambda School. It's a 30-week software engineering course, and you can either pay 20 grand, which is your tuition, as you might if you attended a university, or you can do this thing called an ISA, which stands for Income Share Agreement. It essentially means that you will pay the school 17% of your salary from the job that you get after you complete your coursework, for a period of two years. It caps out at 30 grand, so you're not going to pay more than 30 grand for your education.
And if you don't get a job after five years, you don't owe them anything. So in this way, Lambda School is attaching its success to training you in these skills, taking on some of the risk. It's saying, "These skills we know are in demand, so we're going to enable students who might not otherwise be able to afford this type of education; we're going to make it possible for you." And I thought that was a really fascinating model, and I don't know how I feel about it. In one way, it feels like economically that might work a lot better for people than carrying a load of debt; at the same time, signing over a percentage of your salary seems a little funny. Dirk, what was your impression of this ISA business model?
So Lambda's offering something that is a commodity, that in the market is generally seen as something to be given away, or to be acquired at a very small price, and they're charging tens of thousands of dollars for it. They set their $20,000 price point as an anchor, in order to sort of make you sign up for the more attractive model of paying them even more, significantly more, downstream. So to me, in that way, I don't find it particularly altruistic; I find it particularly capitalistic. Given what they have to pay their instructors to teach this online course and then Slack with the students who reach out to them in some limited way, they're going to be grossly profitable doing this. Good, creative, interesting, has a chance at scale to make an impact. All good, but I definitely see it as self-serving motivation more than serving the public, because of the price model they have. And I'm sure that's why they're getting so much investment and so much attention: there's just the opportunity to make gross amounts of money with it, which is generally what Silicon Valley's all about.
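The ISA terms discussed in this episode reduce to simple arithmetic. Here's a minimal sketch using only the numbers mentioned on the show (17% of salary for two years, a $30,000 cap, nothing owed without a job within five years); the function name and structure are our own illustration, not Lambda School's actual contract language.

```python
# Sketch of the Income Share Agreement (ISA) terms described above.
# Assumption: the 17% applies to annual salary for two full years.

def isa_total_paid(annual_salary: float, employed_within_5_years: bool = True) -> float:
    """Total paid under the ISA, to compare against the $20k upfront tuition."""
    if not employed_within_5_years:
        return 0.0  # the school absorbs the risk entirely
    uncapped = 0.17 * annual_salary * 2  # 17% of salary for two years
    return min(uncapped, 30_000.0)       # payments cap out at $30k

# A $60k job pays back $20,400; a $100k job hits the $30k cap.
print(isa_total_paid(60_000))   # 20400.0
print(isa_total_paid(100_000))  # 30000.0
print(isa_total_paid(80_000, employed_within_5_years=False))  # 0.0
```

Under these assumptions, the crossover with the $20,000 upfront price sits just under a $59,000 salary, which is one way to see why the ISA can be the more lucrative option for the school.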
Which would be the more liberal arts education, focused on writing, reading, understanding, everything from science to literature, getting a broad survey, as opposed to the very specific, job-specific skills that you can use in the marketplace immediately. And I don't know whether these two models will come crashing into each other, but it seems to me like we have these competing forces: very quickly moving technologies, university systems which are extremely expensive, and then the quest to find meaningful and ongoing work, which is only going to change even further as more technologies take shape. Dirk, when you think about a world where continuous education is going to be a prerequisite for being able to compete, what do you see? How do you see the traditional university model and these more technical schools in emerging technology? How does that all come together? Or are there even other …? I'm sure there are other ways that we could approach this realm of education as well.
I feel like all this is headed for an interesting collision course, and that's of course where innovation happens. But it's a struggle for me, because I know what I took away from university, and that's been so valuable for how I learn today; at the same time, I know the price tag of it, and the price tag today is huge, whereas something like Lambda School seems almost … It's extremely affordable in comparison. You're not talking 200 grand, you're talking 20. So I can see the appeal there. And obviously this is a topic that we'll be exploring more as we dig into the future of education.
This really took me back to the Luddites. We use the term Luddite without really knowing or understanding where it came from, but the Luddites were an economic workers' movement in the 1810s, about 200 years ago. At that time, mainly in textiles, automated machines and the companies that ran them were displacing skilled workers, and the reaction was for these skilled workers to form groups and be disruptive. Historically we mostly remember them as off breaking machines. They did break machines, that's true. They also assaulted, and in some cases killed, owners of the businesses that were doing the cheaper textile work and replacing their jobs. The Luddite movement was so significant, and it so overlapped with the Napoleonic Wars, that at one point the English government had more soldiers dealing with domestic Luddite disturbances than it had dealing with Napoleon and the French army. So the scale of it was staggering. What we have happening now in Chandler, Arizona is … let's call it a minor nuisance, for lack of anything better. At the point at which the US Army is having to deploy people en masse, then we'll be dealing with something socially at a level similar to the Luddites 200 years ago.
So, of course the story is disturbing and people behaving in such base and ultimately self-destructive ways, slashing tires and throwing rocks is … You don’t feel good about that, but it certainly is nothing compared to a very similar context 200 years ago and the sort of very organized, much larger scale reaction to approximately similar encroachments.
So, in the virtual world, there's nothing to act upon. You can uninstall. You can decline to buy the stuff, but other people are going to buy it unless everyone turns against it, in which case that service will go away and other services will come to replace it. A lot of the AI-driven change over the next decade certainly will not be as physical. It will be more virtual. It will be things happening in systems where there's nothing to destroy, nothing to attack. Yeah, you can take your laptop and smash it on the ground. Congratulations, you're out $2,000 or $500 or whatever the cost of your laptop is. There isn't this external, corporate-owned, physical thing that we can lash out against. I mean, can we go to their corporate headquarters and start throwing rocks through their windows? Yeah, but that's the fastest path to jail you can possibly imagine. So from my perspective, it's going to be a question of where there are opportunities to physically act out against things; that's where this will show up more. For companies that are more virtual, it just won't be explicit, because people can't really do anything. But the fact is that these technologies will be disrupting older industries. The people in those industries could … it's not like jobs are going away; there are other things that they could be doing and retraining for, but that's not what people want to hear. People want to continue doing what they were doing, what they perceived as safe and part of their identity, and those folks are going to be continually frustrated and discouraged over the next decade as AI and automation encroach on our world.
So, we’re going to do this Creative Next podcast in a slightly different fashion. It’s going to be interview based, so each episode we’re gonna be talking to an innovator about a critical issue related to our creative futures and we’re doing this in six separate seasons which will be released over the next couple of years. Our first season on learning is going to be debuting on February 19th, 2019.
So, The Digital Life is transforming into sort of our next podcast iteration, and we invite all of you, all of our friends and listeners who have enjoyed the show over these past seven, eight years now, to come along with us on this next journey, which really builds on all of the work that we've done here on The Digital Life. It's sort of the next instantiation, which is Creative Next. So, if you're interested in taking this journey with us, please go to CreativeNext.org and sign up for our mailing list, and we'll be sure to let you know all the whens and wherefores when the first episode drops. As a special bonus, we've got a sort of prototype first episode out there for you, Creative Next number one, that you can sample and listen to and see where we're headed. We would love it if you join us.
And so what this says to me is, number one, that the technical aspects of artificial intelligence are going to be impenetrable, I think, for many designers, myself included. Having a visual interface that reveals the system, how the connections are made, how the rules are set, and how things interact is going to be important to getting more, call it non-technical, people involved in the creation of AI systems.
I found this completely fascinating because it felt like a step toward making it more accessible for folks who might also be interested in the user experience side of things, which, of course, as a user experience studio, we care very much about. So to me, that's a positive development and something I think we're going to see more of in 2019. Your thoughts?
Yeah, they might be able to make simple little websites work, but beyond that, more complex, more interesting, more powerful things are not able to be composed or created by a designer. It still requires a true programmer, a true software engineer. So the notion that suddenly, for artificial intelligence, there's this great, beautiful, plug-and-play tool any creative professional can use, here's my AI software: I'm super skeptical about that. There's just no track record in software in general of graphical user interfaces totally disintermediating the engineering component and allowing us to plug and play complex things. It just isn't real. Cool, great, if they could make it work with magic, awesome. But it's just a concept at this point.
I can remember sort of early, in the days of the nascent web, you had tools like Dreamweaver from Macromedia, right? Originally before Adobe bought them. And the idea was that you weren’t going to hand-code things, you were going to assemble things visually. And so the feedback from the engineering usually was, “Hey, this code is …
I know I'm not a capable coder in any sense of the term. So from a prototyping standpoint, maybe Dreamweaver was an interesting product, right? So maybe these AIs that are assembled using a visual interface to code aren't production grade, or what have you. But even for idea generation, prototyping, and lightweight testing, bringing these to a broader audience has value, I think. As we move forward, the need to make this technology accessible to a broader range of people is going to be really important for a number of reasons.
Number one is that the top thing in 2018 is the same as the top thing in 2019. In 2019, they call it machine learning and AI; in 2018, they were calling it machine learning and deep neural networks. So it's interesting to see how their language evolves and changes over time around what they think is important. But it really underscores the fact that AI and machine learning are really dominant right now in terms of the emerging technologies, the trends, the cutting-edge stuff, year over year. That was interesting to me.
The second one: number two on the list was wearable electronics, and that's interesting from a few perspectives. Number one, last year they called it smart watches, so that's a big evolution from a specific device to a very broad category, where they're seeing the broader application of the things that make a smartwatch interesting across a whole variety of wearable technology. That expansion really speaks to the market. Second is the rise in rank: in 2018 it was ninth on the list, and this year it's second. So that's one to really watch, from a Lux Research perspective. I found that interesting.
And then also new to the list, at number six, so not even one of the top 18 from last year but now all the way up to number six, is battery fast charging, which is interesting. I know there are certainly technologies behind it, but from a consumer perspective that's more of a feature, right? My battery can charge quickly. That's a feature, but it has much broader applications, particularly on the B2B side, on the industrial and corporate side. For that one to just show up, it's sort of raising a signal flare that hey, this is something that might be important. So those were a few things that stood out to me, Jon.
But generally speaking, I've always felt that wearables were a transitional technology. Definitely an emerging technology, but one that would give way to, perhaps, an embedded type of technology, or even to using cameras to discover some of the same information. There are algorithms that can tell you your heart rate from a facial scan, because they can detect the small capillaries pulsing at a certain level. I've felt wearables were a transitional technology, and that could just be my bias, because I'm not really a huge fan. But, Dirk, you've worn wearables, and you don't wear them every day now.
And whether that's good for the technology, probably not, but it is going to be getting a lot of additional scrutiny from governments and organizations. There are going to be a lot of ethical questions asked about CRISPR technology in 2019. So I don't know whether this is going to be a net positive for gene editing in 2019, but it's going to be big.
And I think in the past year we've seen the debut of some amazing metal 3D printing. Printing parts, say for motorcycles, that are extra light because they've got these very interesting honeycomb interiors, which are strong and yet a lot lighter than a solid metal part. I've seen some demos of this, and I think it's really underappreciated how much this is going to transform manufacturing.
Now, over the course of 2019, I think we are going to see more production systems come online, moving from the prototyping that is very popular right now with 3D printing into much more of the production space. I know some companies are making it so the prototype systems can serve as production systems: you can have multiples of your prototyping machine, which then serve as production. So you may have one of these machines in your research and design facility and then 100 of them on a factory floor in a warehouse somewhere. That's one methodology I've seen for rolling this out to production capacity. I think American manufacturing, with these flexible lines that can produce different kinds of parts for different kinds of products and then swiftly retool to produce something else, is part of the future of manufacturing. I think that's pretty exciting and something we can watch for in 2019.
The other factor, which won't hit immediately but will at some point: we're not going to have giant container ships going over the ocean full of so many products. Due to global warming, there'll be some kind of legislation or tariff, something that's either a cost and a pain for the people who want to ship, or outright limits that don't allow global trade to happen at that scale, just in order to keep the planet okay. We're so backwards right now that it's a while away. But it's when those things start to happen that bringing it to the US will really start to take off.
But my background is as a digital product designer. I've worked in government, startups, dot-coms. I spent three years heading up design at Twitter UK. Since then, I have focused pretty much exclusively on the ethics of technology and the ethics of design, and, as you mentioned at the start, recently released a book that is a sort of combination of my work in that field. I'm now trying to see how I take that to the world: how I help companies make better ethical decisions and avoid some of the harms that have sadly become all too apparent, I think, in our field.
So I think there are some focal points within ethics that are understandable, that may be too narrow. So we see a lot of people within this field, say looking at the ethics of attention, and you know, all this panic about addictive technologies and devices that are consuming all our free time. Now, that’s an important issue. But it’s not the only issue. There are plenty of other ethical issues.
So I’m keen not to be too boxed into a specific section, if you like, a specific problem, or indeed a specific approach. For me, it’s really about challenging these ideologies and the assumptions that have for too long gone unchecked, I suppose, in our field. And entering into a proper discussion about how we change things for the better. I don’t think we’re at the stage yet where we can simply just take an ethical design process and imprint it upon technology teams. I don’t think we have that level of maturity in the discussion yet. So it’s my job, hopefully, to stimulate some of that conversation.
And so, when I see another one of these being projected, I try to view it charitably, but I don't think it's going to really change anything. If a previous 50 didn't work, what use is another one going to be? I think there is a danger with approaches like codes of ethics that we get this checklist approach, that ethics almost ends up like what's happened with accessibility.
Accessibility on the web, since the release of the WCAG guidelines, they've helped and they've hindered. They've helped raise the profile of the issue, but they've also made accessibility appear to be a downstream development issue. You know, tick some boxes at the end, check your contrast ratio, and … now you're double-A compliant, job done, accessibility finished, let's move on.
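For context, the contrast-ratio check mentioned here really is a few lines of arithmetic, which is part of why it lends itself to box-ticking. This sketch follows the WCAG 2.x definitions of relative luminance and contrast ratio; it's an illustration of the kind of check being described, not a full accessibility audit.

```python
# WCAG 2.x contrast ratio: compute relative luminance for each color,
# then take (L_lighter + 0.05) / (L_darker + 0.05).

def relative_luminance(rgb):
    """Relative luminance of an sRGB color given as 0-255 components."""
    def channel(c):
        c = c / 255.0
        # Linearize the gamma-encoded sRGB channel value.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background is 21:1, far past the 4.5:1
# AA threshold for normal text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

Passing this one number is necessary but nowhere near sufficient, which is exactly the "checklist" trap the conversation goes on to describe.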
And I don't think it would be beneficial to have ethics as a checklist exercise at the end of the existing design process, the existing product development process, because it's that process itself that we need to examine, rather than just tacking on a code at the end and saying, "Well, did we comply with everything that we said we were going to?"
So I can understand the impulse to do that kind of thing. And there may still be a place for some kind of codification, but we’ve got to have those hard conversations first, rather than just throw that up as a one size fits all answer.
Very entertaining, perhaps, but not necessarily grappling with the real ethical issues that matter now or in the future. As someone who spends a lot of time thinking about these things, what are the ethical issues that really should matter to us today and going forward?
But on a more, I suppose you’d say, a more proximate scale, things that are more readily apparent harms that are happening right now, we obviously have a lot of harms around use of data and the effects of algorithms, often opaque algorithms. You know, the classic black box complaint that goes with a lot of say machine learning systems that we don’t know why they take the decisions that they do.
And I'm fairly familiar with the idea that they replicate the biases not just of the teams that create them, but also of the societies that created the historic data that feeds and trains these algorithms. So they can essentially exacerbate and concretize these existing biases in ways that look objective and completely neutral.
I'm particularly interested in the effects of persuasive systems, persuasive algorithms. Karen Yeung, who's a legal scholar here in London, talks about the advent of an era of hypernudge: taking the idea of nudging systems to the extreme, where they're networked and dynamic and highly personalized. They could be irresistible manipulators, and we essentially won't know these systems are there until it's too late.
We've already started to see, of course, in the political sphere, the power of bots and of human networks of trolls working in collaboration to try to change mindsets. What if we took that kind of persuasive power, dialed it up, amplified its capabilities, and put it in the hands of more and more people? That could have phenomenally challenging implications for society, and even for free will.
I am also interested in how technology can be weaponized, and I mean that in two senses. I mean it in terms of how it can be misused by bad actors: hackers, trolls, et cetera. And to an extent, some governments are now using technology as a means of force to compel certain behaviors, or to take advantage of weaknesses in systems to their own advantage and to the disadvantage of others.
And then, of course, there is, I suppose, what you’d call more visible and above the line weaponization of technology, which is still fraught with ethical difficulties. We look at what’s happened, say, in Google with their Project Maven program, which caused all sorts of internal friction. And then, I think it was yesterday that Microsoft announced that they had just won a large defense contract to provide HoloLens technology to the US Army.
And so, the weaponization of these technologies may not have been intended. We may be playing with things that we think have fascinating implications. And we want to see where that technology takes us. And then we find later, oh, actually this could be used for significant harm, but we didn’t plan for it, or we didn’t have an opportunity for the people working on that technology to object and say “Well, I’m not actually comfortable working on a military project, for instance.”
So it’s all these unintended consequences of technologies and the externalities of technologies that fall on people that we just didn’t consider. I think that’s where some of the more pressing and slightly less far fetched perhaps ethical challenges lie.
There’s not a wide precedent for it. I’m sure it’s happened, but there certainly isn’t a standard that I’m familiar with, and I suspect most people aren’t familiar with one either. I mean, is this a function that should be like a lawyer? You know, that’s generally sort of an outsider, specialized thing that’s coming in, in expert situations? Or is it more like a designer-researcher that’s sort of part of a team on an ongoing basis? How do we structurally make ethics the appropriate part of the things that we’re doing in our organizations?
So some people think, well, maybe that’s a model that we take, and we transfer to large tech companies. I’m not entirely convinced. There may be some cases in which that works. But I think tech industry ideologies are just so resistant to anything that looks like a committee, anything that feels like academia and its sort of heavy, burdensome processes.
So I think, in reality, we have to tread more lightly to begin with, unless there are really significant harms that could result. I’d say, if you’re working on weapon systems, you probably need an IRB, right? You need a proper committee to validate the decisions, the ethical choices in front of you. But for everyday tech work, I think there is certainly benefit in having, yep, legal on board. You know, there will absolutely be lots of lawyers, general counsel, and so on, who have an interest in this, in both senses of that word.
But most of the change really has to come, I think, from inside the company. Now, I may be able to … And we’ll find out whether this is true, I may be able to stimulate some of that and to help guide those companies. But ultimately, I think a failure state for ethics is to appoint a single person as the ethical oracle. And say “Well, let’s get this person in, then they give their binding view on whether this is a moral act or not.” It doesn’t scale. And it also could be quite a technocratic way of tackling what should be more of a democratic, more of a public-orientated decision.
So I think we have to find a way to approach ethics as an ethos, a mindset that we bring to the whole design process, the whole product development process, so that it raises questions throughout our work, rather than, as I say, just a checklist at the end or a legal compliance issue.
As for the structures of that specifically, like do we need an onsite ethicist within the team? Or do we train designers in this? I think designers make for good vectors for this kind of work. I think they’re very attuned to the idea of the end user having certain sorts of rights, for example. But I have only just begun to see the patterns that different companies are trying.
And what I’m seeing at the moment is there is very little in common. You have some companies setting up entire teams, some leading it from product, some leading it from design, some trying to hire ethicists out of university faculties. And I don’t yet have the data to know which of those approaches works. I’m glad they’re trying all these approaches because hopefully in a year, we’ll have a better idea of which of those have been the most successful.
But my hunch is it’s going to be much more meaningful to have some kind of, you know, like a retainer relationship, or something where someone like myself can come in and start off some initiatives, and then equip the team with some of the skills they need to make those changes. But then come in and check for progress. Because I can tell you from experience that pushing for ethical change is difficult work. You’re swimming against a very heavy tide a lot of the time.
So you have to have persistence. You can’t be too dissuaded if your grand plans don’t work. So I think a kind of longitudinal interaction, maybe over the course of three, six, 12 months is where I’m trying to head. For me, there’s obviously, you know, I’ve got to position that appropriately and convince people that there’s value in that. But, you know, ethics is for life, not just for Christmas, all these sorts of things. I don’t want to have a situation in 12-18 months where we’re saying “Oh, we’re still talking about that ethics thing?” It has to be a bit more drawn into the way that we approach these problems.
And for me, the ethical harms of emergent technology ramp up quite sharply because over the next 10 to 20 years, we’re going to be demanding, as an industry, we’re going to be demanding a huge amount of trust from our users. We’ll ask them to trust us with the safety of their vehicles and their homes and even their families. And I don’t think we’ve yet earned the trust that we’re going to request. So my focus is trying to illuminate some of the potential ethical challenges within that territory, within those emerging fields. But then to interlace that with what we already know about ethics.
I think the tech industry has this sometimes useful, but often infuriating belief that we’re the first people on any new shore. That we are beta testing this unique future. And therefore, we have to solve things from first principles. But of course, ethics as a field of inquiry has been around for a couple of millennia. Even the philosophy of technology, science and technology studies, these fields have been around for decades. And the industry really hasn’t paid them the attention that perhaps it should.
So I see my job as trying to introduce some of the maybe theoretical ideas, but introducing them in a way that’s practical to designers and product managers and technologists, so they can actually start to have those discussions and make those changes within their own companies. So I’m trying to, if you like, translate between those two worlds. So if I have to say there’s a particular focus of the book, it’s that.
But I have structured the work in a way that it also is sort of somewhat chronological, working from the most readily apparent harms, such as, as I mentioned before, data and digital redlining, as it’s known, bias, things like that, through to perhaps some of the larger but further away threats, such as the risks to the economy, the risks of autonomous war, and so on. Those sorts of things tend to appear in later chapters, partly because I decided you need to build upon some of the knowledge we introduced earlier in the book to get to that point.
You know, pivoting to capitalism. So capitalism is under increased scrutiny and critique in ways that overlap with issues of technology and of course, ethics. A specific example recently is how the ad-funded business model’s being blamed for ethical lapses. And Cennydd, I know you have a different take on this. I’d love to hear about it.
I think tracking or targeting, that’s really where the ethical risk lies. Now, advertising can be seen as a promise, you know, a value exchange that we agree to. You know, I get some valuable technology and in exchange, I give up, you know, my attention. I expect, I believe that I’m going to see some adverts on my device, or in my podcast, or whatever it might be. I think if we reject that outright as a business model, which some people do, then really the only business model that leaves us is the consumer-funded technology model. And that has a lot going for it. But it is also potentially highly discriminatory.
One of the great things the advertising model has brought us is that it’s put technology in the hands of billions for free. And I don’t want us to lose that. I think it would be a deeply regressive step to conclude that the only ethical technology is that which is funded by the end user because, of course, then you’re excluding the poor, developing nations, those without credit, and so on. So I would hate for us to throw the baby out with the bathwater.
I do think, as I say though, we have to think more carefully about tracking. And tracking definitely does have some ethical challenges. Sometimes people make the inference then. They say “Well, okay, but the tracking comes from the need to advertise. You know, you have to track people so you can advertise more accurately to them and get better return for that.”
My counter to that is the value of tracking has now gone beyond the advertising case. Everyone sees value in tracking. So tracking helps any company, whether it’s ad-funded or not, helps us generate analytics about the success of our product, see what’s working or what isn’t in the market. And also, it’s particularly useful for generating training data. We want to understand user behavior so that we can train machine learning systems, AI systems upon that data to create new products and services.
So tracking now has value to pretty much any company, regardless of the funding model. So this cliche of if you’re not paying for the product, you are the product being sold, I would take to an even slightly more dystopian perspective and say you are always the product. It doesn’t matter who’s paying for it. And so, we’re trying to make a change that isn’t focusing, I think, on the right issues, which is how do we combat some of these ideologies of datafication, of over quantification, and the exploitations that might lurk within that. I think that’s where the real ethical focus needs to go, rather than on the advertising case itself.
But on the other, you know, I remember feeling true, shocking outrage when there was a concept video for a Boston Dynamics robot that was shaped like an animal. This was maybe three years ago, and they had the engineers in this concept video beating it up, pushing it down, doing things that I would consider inhumane. And they were doing it to this robot, and I was upset at them and made sort of character judgments about the company and the people participating in the video based on those behaviors, sort of surprisingly so perhaps. Robot rights. Talk a little about that.
You can look at something like Sophia, this robot that you’ve almost certainly seen. It’s this kind of rubber-faced marionette, essentially. It’s a puppet. It has almost no real robotic qualities or AI qualities. But it’s now been given citizenship of the Kingdom of Saudi Arabia. Some people pointed out that that actually afforded it certain rights that women in that nation didn’t have.
And things like that frustrate me because that thing should absolutely not have any rights. It has nothing approaching what we might call consciousness. And consciousness is probably the point at which these issues really start to come to the fore. At some point, we might have a machine that has something approaching consciousness. And if that happens, then yes, maybe we do have to give this thing some legal personhood, or even moral personhood, which would then maybe suggest certain rights. You know, we have the Declaration of Human Rights, maybe a lot of those would have to apply, maybe with some modification in that situation.
So we have, for instance, rights against ownership of persons. If we get to a point where a machine has demonstrated sufficient levels of consciousness or something comparable that we say it deserves personhood, then we can’t own those things anymore. That’s called slavery. We have directives against that kind of thing. We probably have to consider can we actually make this thing do the work that we built, essentially, this future of robotics on? Maybe suddenly it has to have a say and opportunity to say “I won’t do that work.”
Now, it’s tempting to say the way around this is, well, we just won’t make machines that have any kind of consciousness, right? We won’t program in consciousness subroutines. But a friend of mine, Damien Williams, who’s a philosopher and a science and technology studies academic, makes a very good point that consciousness may emerge accidentally. It may not be something that we simply excise and say “Well, we won’t put that particular module into the system.” It may be emergent. It may be very hard for us to recognize because that consciousness is probably going to manifest in a different manner to human or animal consciousness.
So there’s a great risk that we actually start infringing upon what might be rights of that entity without even realizing that this is happening. So it’s a really thorny and controversial topic, and one that I’m very glad there are properly credentialed philosophers looking at. I’ve done obviously plenty of research into this, but they’re far ahead of me, and I’m very glad that folks are working on it.
Just with respect to your point about the big dog, I think it was the Boston Dynamics robot. Yes, I mean, that’s fascinating and I think there is … Maybe I have a view that’s a bit more sentimental than most. Some people would say, well, it’s fine. It’s not sentient. It’s not conscious. It’s not actually suffering in any way. But I think it’s still a mistake to maltreat advanced robots like that. Even things like Alexa or Siri. I think it feels morally correct to me to at least be somewhat polite to them and to not swear at them and harass them. At some point, there’ll be some hybrid entity anyway, where these things are combined with humans, some intelligence combination there. And if you insult one, you’ll insult the other. So that feels like something we shouldn’t do.
But I also think we should treat these things so that we don’t brutalize ourselves, if you see what I mean. If we start to desensitize ourselves to doing harm to other entities, be they robots or be they animals, whatever it is, that line between the artificial and real life may start to blur. And if we lose sight of the violence of violence, then I think that starts to say worrying things about our society. I would say not everyone agrees with that. Perhaps that’s my sentimental view on that topic.
Now back in Bentham’s time, that was the view that he was challenging. Back in the 1700s, that didn’t really seem to be accepted that animals could suffer in the same way. But clearly, they exhibit preferences for certain states, certain behaviors, certain treatments, and you could argue that suffering results from acting against those preferences.
You’re absolutely right to point out a fierce contradiction in a lot of ethics in the way we think about how we want to treat these emerging artificial intelligences and the way that we already treat living sentient species, such as animals. And I think anyone who’s interested in this area owes it to themself to consider their views on say animal ethics, and whether actually that’s an industry that they feel able to support.
Now, that’s not an easy decision to take, and I’m not saying, for instance, that anyone who claims to be interested in robot ethics by logical extension has to become a vegan, for instance. But we owe it to ourselves to recognize as you point out, there are significant contradictions in those mentalities. And we have to try to find a way to resolve those.
You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM, and Google Play. And if you’d like to follow us outside of the show, you can follow me on twitter at @jonfollett. That’s J-O-N F-O-L-L-E-T-T. And of course, the whole show is brought to you by GoInvo, a studio designing the future of healthcare and emerging technologies, which you can check out at GoInvo.com. That’s G-O-I-N-V-O.com. Dirk?
Then Uber and similar companies sort of rolled out their platforms, and that was sort of the future for the gig economy. But this time for moving people physically. Transportation. There are all sorts of platforms today where you of course can go and find great contractors; UpWork is one of those. There are all sorts of platforms where you can go find people to code software. In whatever industry you’re in, there are platforms that enable buyers and sellers to sort of come together.
So all of this is fine and good, except that, well, number one, the gig economy is just that. It’s piecemeal. So if there are a lot of buyers and very few sellers, and you’re on the selling side, and your skills are in demand, then life is good. But if you’re trying to piece a bunch of things together, that’s where people can get very anxious about where their next paycheck will come from. And of course this is the anxiety of the gig economy. And I think it’s experienced by all gig workers, because you are subject to the whims of the market, and the circumstances can change on a dime, really.
And so, the author of the article then proceeded to talk a little bit about this idea of co-owning platforms. Enabling digital workers to own a piece of the platform, and that maybe that would be a possible future where we don’t all become digital serfs for the oligarchs of the biggest tech companies. So, without preamble Dirk, I’d love to know some of your thoughts about this. I know you think about these issues a lot, and I’d love to get your take on it.
So yeah, rock on, right? We need solutions for the future that don’t result in the masses being left behind. Even in the present, the masses are being left behind. But more and more of us will be left behind the way the future is progressing currently. And we need solutions around that, that not only allow you and me and people like us to keep having a path forward to safety, security, and wellness. But to broaden that, and bring more people up who are currently left behind and shouldn’t be. And these are things that need to be addressed. The idea of these sort of networks, these sort of platforms being a key to that is a really good idea. But there’s a lot of complexity in the way.
You know, one of the things that will work against it too is technology. We mentioned that ride sharing platform. Ride sharing type of technology is something that is really likely to be further disrupted by self-driving stuff, right? The people who are creating the self-driving vehicles themselves are going to be good candidates for creating platforms that again are just rewarding the owners, the people at the top of the pyramid. Both from the standpoint of the speed of technology leaving these existing platforms behind, but then the manifestation of the technology in these heavy, capital-intensive contexts also creates an opportunity to disenfranchise those who are trying to move forward via a platform.
It’s easy to share the idea, and it sounds great and it’s inspiring and it’s kind of focused on a real problem. Boy, there’s a lot between good idea and something that actually could work in a repeatable way, instead of just in one little micro community or another.
Where does that money come from? You don’t have a big fat VC, the whole point is to push those people out of the picture. You don’t have the Daddy Warbucks there to just burn money so that the world can find out that you’ve done it. So, a paradox, one of the frustrating things about capitalism, about sort of generational, and cross generational wealth is that the people who have the money are the ones who can make more money. They’re the ones who can make the future platforms, they’ve got the money to burn, to waste, to spend to make that happen. So you know, the sort of communist drivers of the world unite model, there’s a lot of boundaries between getting them to unite, to having something that actually is a credible competitor. And then to take the article’s point, do that across many industries, it just gets harder and harder.
So sure, it might be possible. But I think the people who are talking about these ideas such as the particular article we’ve talked about here, I’m not seeing any path to viability. It’s just a lot of hand waving and smoke, and good ideas. And we need those. I do a lot of hand waving and smoke and good ideas of my own. But it’s a long way from that moment to it being a real thing. And there are huge barriers, in this case, and overcoming those barriers all seem to drag us back to the same old, Daddy Warbucks, the rich get richer model.
So there are, and sort of the whole opensource movement is based on that. You have Linux, which is sort of the go-to example, right, of opensource spreading. So I’m a little less skeptical, but certainly all the difficulties are there. The one thing that strikes me, there’s a profound need for there to be worker owned assets. So you see, in the industrial revolution you get unionization, right? And so the asset there was the labor, right? So collective labor really was what people were able to come together in a union and then use that as a bargaining chip. Because it’s not just the one guy, it’s the many guys, but it’s their labor.
So in our digital transformation, that revolution, there really hasn’t been that consolidation of labor in the same way. There hasn’t been a digital workers union. None of that exists right now. And I do think that one route there is this idea of the participant owned platform. I’m not saying that that’s what’s going to necessarily take hold. But there is a profound need for there to be a counterweight to capital in this. Because over the long term, you are just not going to have a healthy economy as money works its way to the top and stays there. For this system to be able to continue on in any sort of recognizable form that doesn’t get turned into a complete cluster screw, you need counter balances. And right now, all the weight is moving in one direction.
So, I do see the profound need and the possibility, right?
Again, even with the successful opensource yay, good, rah rah solutions, they often are burning the people who really can’t afford to be burned.
It is causing a tremendous uproar in the scientific community at the moment. The sort of story is unfolding right now, but there’s been much objection as to the way in which the science was done, how it proceeded, how it wasn’t transparent, and sort of the rather dangerous consequences and precedents that this experiment has set. In fact, this morning, it was noted that there could be a third baby also, a third CRISPR altered human being coming into the world potentially. So, this experiment continues, and the world is just reacting to it at this point.
Obviously, these types of edits can be conducted in sort of any kind of living thing, whether you’re talking about plants and animals all the way up to human beings. The progression from, like I said, the plants and animals stage to now living human beings has been surprisingly fast. Dirk, I would be interested in your thoughts as to the pace of this change, which, to me, is kind of scary. How are you looking at it?
I mean, CRISPR has actually, from a conceptual standpoint, been around for 30 years. The sort of technology stack that’s led us to where we are today is something called Cas9, which is just from this decade essentially. So, like you mentioned, it’s really new. Yeah, I’m just not the least bit surprised it’s happened, not the least bit surprised it’s from China. I’m a little uncomfortable that it’s here.
I think even the university where the scientist is working is surprised and instigating their investigation as well. I think it’s something that was bound to happen just given the level of importance of this technology. It was bound to happen, and I think the human hubris, sort of this desire for being first, I don’t know what it is, this combination of things-
There’s historical arguments in both directions about that, but it certainly was a contentious decision and this is very similar. When you talk about the dropping of the atomic bomb, there’s a part of it that you can’t explain away by saying, “Oh, it’s about Nazi Germany. It’s about Imperial Japan.” That’s about power. That’s about whipping it out and throwing it on the table and saying, “Look how big we are.” It started a whole new reality. Thankfully, since then, atomic bombs haven’t been dropped outside of testing contexts, but they’re there.
We now have a world full of atomic weapons that could create a situation that is catastrophic at any time. Here we have the Chinese who are intent on becoming the preeminent world power and, over a course of decades, have a strategic plan and have very successfully executed it. Going back to World War II again, there was the project in the United States called Operation Paperclip bringing scientists over from Germany to gain an advantage over their antagonists. This, what’s done with CRISPR, came out of a similar project in China where China is luring back the scientists that-
So, China’s going through the motions of shock and outrage, but the reality is they are bringing back cats like He to China with promises of being able to do exactly this kind of thing. In the United States and in the scientific community in general, if you’re participating in that community, that’s not going to be allowed. That’s not going to be accepted. Because China isn’t the sort of international hub of the bureaucracy and the leadership of these kind of things, they’re the upstart, they’re trying to bring people back and incentivize them with the opportunity to conduct research such as this that is on the fringes or outside the bounds of what the international scientific community would allow or advocate.
We are watching the playbook play out step-by-step. This is just the beginning. There’s not a lot of stories … I mean, CRISPR/Cas9 was so monumental that when they did the X-Files reboot, they were talking about it on the X-Files reboot. So, that tells you there’s something here that is sort of so profound that it’s permeating into stupid popular culture as almost a meme. There’s not a lot of moments like this. It’s not like we’re going to have just shocking reality out of shocking reality coming out of China, but there’s just no denying the fact that they’re pulling in great minds, really talented ambitious people who in some cases, like the case here with Professor He, want to go beyond the bounds of what the scientific community will allow.
Again, going back to the atomic bomb, it’s just sort of the biggest example of if we can do it, we will do it. It might be sooner, it might be later, but it will happen. It will likely happen as part of an assertion of power, an attempted expansion of power. Going back to when Jason Grant was on the show and talking about human development models, until we develop a little bit more and get out of this nationalistic, tribalistic, power acquisition mindset, which was necessary when we had to fight bears to survive but is not necessary in the 21st Century, then the advances that we have in science, such as CRISPR/Cas9, such as atomic power and energy, will be perverted to their extreme and ultimate consequence.
So, the moment we have right now is saying … In reality, what Professor He is doing is a tiny step. He’s not doing with the technology some of the things that we might find most alarming, such as trying to create, let’s say, going to another sci-fi meme, trying to create super soldiers. Professor He, as far as we know, is not in the lab engineering the future super soldiers of China to take over the world. He’s playing with just one little modification, aimed at blocking the HIV virus in particular. Although, it has other positive impacts on preventative health as well.
So, this is just teeny, but we inevitably will get to the point where someone is creating the super soldier. That might be happening by the stewardship of the Chinese Government, of another government. Certainly the United States is not above bad behavior, so I don’t just want to put a scarlet letter on China here even though I do think China, given the geopolitics, is going to be sort of driving and spearheading a lot of the dark stuff. More is on the way.
Yeah, I guess that was a lot, but I’m not at all surprised. I think history let us know that this was going to happen. It’s going to continue to happen. There will be more. The more will start to alarm us and get into the boundaries of where … Whereas we can say, if we can be genetically modified to never get HIV, that’s just sort of a good thing, forgetting the fact that of course it will be limited to the wealthy, the class issues that we continue to struggle with and are foolish about.
In theory, the idea that we could block that disease is a good thing. We’re just going to careen, though, into more contentious and ambiguous moral grounds in the years ahead, and there’s just no stopping it. This moment and the fact that the scientific community reacted so strongly will slow it down. It will certainly push it farther underground, but it sure as hell isn’t going to stop it, Jon.
Now, we are living with the leftover unintended consequences of the Industrial Age. We’re steeped in it. Our climate is changing massively because of the unintended consequences of the Industrial Age. In fact, we may have sent the planet into some awful scenario that we can’t recover from, and that is from something perhaps much more simple, which is the internal combustion engine, which we all have in our garages.
So, to think about the way in which we’re moving into this biotech age is sort of this recreating of the same types of mistakes that we started with during the Industrial Age, which is this pursuit of the technology and implementation without very much thought to the consequences. So, I don’t know that there’s … Far be it from me to understand what kinds of speed bumps need to be in the way. Clearly, the scientific community didn’t have enough of those barriers or speed bumps in place.
If you think about it from like 1820 or the 1820s, how far has technology come since then? We were still on horse and buggy. The idea of flying was pure science fiction. I mean, computers, give me a break. The technology was so far behind where we are now, but the President of the United States was a thug and an ignoramus, similar to the President of the United States today. We have not evolved. We have not developed. We have created this technology that’s incredibly powerful, but we, in terms of our development as a social species, are very little better collectively than we were in the 1820s. We just haven’t progressed.
In order to keep up with the technology, we need to be progressing. We need to be developing so that we are more self-confident, that we are more self-possessed, that we are not tribalistic in how we’re structured and how we frame and think about the world. We need to be more holistic thinkers and see ourselves as part of cooperative social systems.
We’re not there. We’re not close to there. I mean, in the United States, the word socialism, to some majority, remains like the third rail. We aren’t developing, and we need to because it’s the only speed bump. The only speed bump is that we get smarter collectively, not an elite but collectively, the masses, the group of us. We’re so far away from that as to be ridiculous. I love the idea of we need speed bumps. They’re not going to happen until the scientists, the people that access the technology themselves are self-possessed enough to say, “There’s just no need to do this. There’s no point. The gains are gains that don’t matter, and the downsides are downsides that would be horrific.”
Right now the gains do matter. They matter big. It's big stakes. We're still caught in these weird, old … I used 1820 just because I like the Andrew Jackson to Donald Trump parallel, but we're still mucking around in the same bullshit that they were in the Roman Empire. I mean, we're still in those days from the standpoint of power and structure. I mean, Putin marching around and doing the things that he's doing. We have not advanced. We have not developed, become collectively more mature, collectively wise. We're sort of the same ignoramuses that we were even thousands of years ago. It could end up being our undoing, because the technology is hurtling forward at such a fast rate, and we aren't keeping up with it, Jon.
You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM and Google Play. If you’d like to follow us outside of the show, you can follow me on Twitter at jonfollett, that’s J-O-N F-O-L-L-E-T-T. Of course, the whole show is brought to you by GoInvo, a studio designing the future of healthcare and emerging technologies, which you can check out at goinvo.com, that’s G-O-I-N-V-O.com. Dirk?
So, the article presents this so-called productivity paradox, which essentially maps the boom in technology, all the fantastic digital technologies that we talk about regularly on the show, against a strange result: slowing productivity growth in major economies across the world. So what is the reason for this increase in technology and the subsequent flatlining of productivity? That's what the article digs into, and it suggests some policy tweaks, or full-out changes in some areas, that I tend to agree with.
What's funny is the premise itself, the productivity paradox. I find it kind of funny because it's this idea that this one thing we're measuring, which is essentially how effectively and efficiently we can create value, is this really important metric. And I understand, yes, from an economic perspective that may really be a critical metric. I think it's also interesting, or important, to consider that efficiency and productivity are not the sole important metrics of our day-to-day lives and economy. But we won't dig into that argument too much on the show.
So let's dig into that question, right? What's interesting is we can take a look at pretty much any emerging, or even what we'd now call standard, technology and see how poorly it's being used. Not to pick on the Internet of Things, but there was obviously this huge hype cycle in 2016 and 2017 in which pretty much everything was going to be connected to the Internet of Things. That hype cycle has since moved on to artificial intelligence. Now everything's going to have artificial intelligence in it. Thanks very much, technology press.
But even though the Internet of Things sort of chugs along, and we're seeing more and more sensor-laden products, buildings, cities, et cetera, slowly coming online, the truth is that this is a multi-year process for these products, and even longer for things like smart cities. And after that, you've got this sea of data, some of which may be useful and some of which may not be, and it takes years to pore through it. And then you figure out how you're going to automate things around that data, which means you need to recognize the patterns in the data, make tweaks, and then see how those adjustments work out.
And that's a very realistic scenario, and it doesn't even take into account all the operation and maintenance, things failing, projects not getting financing or never getting off the ground. This is not what we talk about a lot in the technology industry, but it's the very un-sexy adoption of technology over time. And if you look at graphs and charts of the 20th century showing how long it took electricity, cars, electric lights, telephones, all these things to achieve market penetration and become useful to people, you'll see that it takes decades for this to happen.
So sorry if that busts the hype cycle for folks, but it wouldn't be much of a sales pitch to say, hey, let's get your smart city online, and a decade later you might see something out of it.
Now, what's interesting, though, is that with software we see much faster evolution, and with personal consumer technology in particular today, we see much faster evolution. An analog might be televisions and radios; those technologies also moved more slowly back then. But the limitations weren't infrastructure-based. They were technology-based.
Today technology is developing at a much more rapid pace. So we see, for example, the evolution from the iPod to the iPhone in less than a decade. And that's massive. I mean, that's revolutionary. So a lot of it is about the context and what the physical constraints are. The bigger the thing, again, when it's at the level of a home or a city, the more those constraints dominate. It doesn't matter where the tech is; you're just going to hit that wall like a fricking hammer, because people don't have the money. The country doesn't have the money. We can't just re-implement everything.
Today, there are companies that, for better or worse, are entirely virtual. They don't have headquarters anymore. They work from a combination of shared office space and people working at home, and then convene in rented space when they need to hold large events or meetings. So that's a generational change. It took a solid 20 years for the idea of the virtual company to become, pardon the phrase, the cultural norm and expectation for maybe the younger set, the millennial set, right? And I'm sure there were some early adopters of it. But that was not the case when, as a Gen Xer at one of my earlier places of employment, I was thinking, hey, it'd be nice to work from home one day a week. And they were like, nope, you gotta be here. You gotta be in the office.
My boss's boss was an old-school gentleman and really wanted everybody in the office. Flex time was considered revolutionary. The fact that I wasn't there at like eight o'clock in the morning, that I came in at like nine-thirty, that was nuts. That was unheard of.
But those differences are marginal. Once everybody was using email professionally, which is basically 20 years ago now, we had the tools we needed, along with old-school telephones and mobile phones, to work remotely. But the cultural gravity well of there being this other way that things are done, based on all of these beliefs, assumptions, values, and frames of the world, it took a long time to overcome that.
So there are all of these factors. It would be interesting to read something, and maybe somebody has already written it, I haven't come across it, that breaks down all of the factors that block implementation, that stand between a technology or concept and its manifestation in the world. Because it's super nuanced.
And I think where there's enough capital to force those things through, in very select areas, we'll see some successes, though hampered a little bit by all of the problems that early adopters experience, of course.
To me, if somebody wanted to leap forward and say, look what's possible, that would be the kind of situation in which to do it.
So this article sort of concludes with some recommendations around policy, which I think are useful. And one of those, of course, is this idea about pace of change for the worker and the ideas around what do we have in place to allow people to adapt to new industries, to change in their industry, to maybe even taking on a whole new set of skills that they never thought they would need to learn. We’ve dealt with this topic a bit around the idea of AI automation and I think you and I are fairly adaptable in adopting skills. But certainly on a large scale I could see tremendous need for this ongoing education and re-skilling of workers.
So I did think that policy recommendation, though of course broad and not including a lot of specifics, is in the right direction. We're really not talking about that too much as a country yet. I would almost see an additional layer to the education system as required. We have our public schools, and we have public and private colleges. I think there's another layer of education that needs to happen in order for modern economies to continue to be productive and compete in the future.
I'd love to have my own repo of education, whether it's virtualized or not, and just be able to track what I'm learning over time and continually learn. Whether it's to show credits, to show that I'm learning, or simply for our own ability to track these things, I think owning our educational records in some shape or form, with the student as the center of education, might be an interesting model for the future as well.
Listeners, remember that while you're listening to the show you can follow along with the things that we're mentioning here in real time. Just head over to thedigitalife.com, that's just one L in The Digital Life, and go to the page for this episode. We've included links to pretty much everything mentioned by everyone, so it's a rich information resource to take advantage of while you're listening, or afterward if you're trying to remember something that you liked. Dirk?
So, this is just one corner of that discussion around virtual reality and empathy, but I think it applies to the larger idea of ourselves in the digital age, and how we look at that. So, let’s dig in a little bit. I must note that this was inspired by the article in Aeon, which is at aeon.co. It’s a great online magazine, and the name of the article is, It’s Dangerous To Think Virtual Reality Is An Empathy Machine.
So, one of the starting points for this is that a few years ago, researchers at the Virtual Human Interaction Lab at Stanford University created a VR simulation of a slaughterhouse. The idea was that people would put on VR goggles, walk around on all fours, and experience what it's like to be a cow: being fed and then eventually being brought to the slaughterhouse, which sounds like a pretty horrific experience to me. The concept was that it would give people an idea of what it's like to be an animal, and lead us to think about whether we're being cruel to animals and what the ultimate morality around that is. That's the idea, anyway.
And I find the experiment really intriguing, as well as the hypothesis of the article itself. The claim is that we're creating sympathy for animals, but the author pushes back: you're not really experiencing this; you can't be a cow. The author didn't agree with that notion, and I tend to fall on that side of the argument as well. Dirk, when you were reading about this experiment, which side did you fall on?
You know, it's silly, though, in that they're saying it's giving us empathy for the animals, or creating sympathy for the animals, forgetting the fact that these animals are in very specific conditions over a long period of time, fully physically ensconced in them, whereas we can put on our little virtual reality goggles having just had an ice cream sundae, crawling around on the floor, eating a cheeseburger ironically enough, without real cattle prods zapping us in our sides. It's sort of phony baloney. It's skin deep.
So, the idea is interesting. There’s some impact, some value I’m sure in participating in it, but to translate that to sympathy or empathy in any real way is just silliness.
Now, to me, those were sort of closer to the mark in terms of eliciting a response, sort of in keeping with the fact that these are human experiences and these are perspectives that you may not be exposed to at all, whether it’s being homeless or experiencing racism. You know, some folks will just not experience those things. And so taking the virtual walk in other people’s shoes may be valuable. Dirk, how did those two experiments strike you?
For the people who experience them, assuming they're accurate and correct, those people are going to say, "Yes, that's right." For the second group, the people who don't experience these things but believe they are a part of life, there's something there. You can sort of experientially get it. It's both reaffirming something you already think and giving you some experiential context to imagine. So there's some value there. Then you've got the people who don't believe in it, and I think, much more likely than not, they're not gonna believe in it afterward either, right? Because somebody put that together. The reason people wouldn't believe in a political agenda like that is that they don't wanna believe in it. And going through this app and experiencing those things, they're just gonna say, "Oh well, anybody can take these clips and put them together and make it seem that way." Yeah, well, anybody can.
There's nothing in the experiential aspect of it that is going to take someone from being dug in and doubting it to, "Oh my God, if only I had known." Because it's not a real-life thing. It's a carefully curated thing. And then you have the people who are completely ignorant of it, and I think, depending on their political persuasion and beliefs to begin with, they will either find something there or not. So, I think it's in very narrow bands that it has impact. I think it can have good impact within those bands, but the framing of it as having this transformative, universal power, I just think is really Trumpian. It's overhype and overstatement.
And I think we had sort of the same discussions about video games, in particular I remember lots of discussions around first person shooter games. Whether or not those would influence people. Or even just what we’re communicating through games for education and for development of learning, etc. So I think virtual reality and augmented reality are the sort of natural inheritors of a lot of the purposes for what we’re using gaming for now. I mean, very specifically around video gaming. And I do think in the future, the sort of, I don’t know if I’d go so far as to call it scary, but you put your finger on it Dirk that there’s a particular narrative that’s being pushed or crafted here, to allow you to think in a certain way, which you may agree or disagree with, depending on your perspective. But to sort of think about how, whatever is done in that way in which you agree, you know someone can take those same tools and create a whole bunch of stuff that you would disagree with vehemently, right?
So, for example, showing certain groups of people. You have the racism app. I could see someone developing an application showing bad behaviors of certain groups that people didn’t like, right?
For me, if I was given a big budget and told, "What you need to do is have the populace of our country all have sympathy and some degree of empathetic understanding for racist microaggressions," what I would do is get a team together and say, "We're gonna make this science fiction game. We're just gonna focus on making the coolest game possible, and it puts the player in the role of a minority species on this planet." And you're doing the things you do in science fiction, whatever the genre of science fiction might be. But the whole thing comes from the place of a species, in that case basically a racial minority, being culturally penalized, and you just build that into an amazing fricking game. That is going to get the lesson through.
If you make the right game that’s a smash hit and people want to play, then people are gonna really get it and then they’re gonna identify with it. Because they’re gonna say, “Yeah, I was the Gorn character, and boy the Gorn had it hard.” Whatever that looks like. Suddenly it’s in there. You’ve got the sympathy, you’ve got the empathy, because you’re coming at it, not from this political, “Oh yeah, be a cow and see how bad cows have it.” That’s not gonna get you there. But if you take someone there through their own interests, their own desires, their own excitement, because we’re selfish, stupid, MFs here, right? We aren’t the brightest species in the world, so we can’t be taken there through these direct routes.
We have to be taken there, we have to be tricked into it, by being taken into something that we perceive as just fun for us. All about us. Not about somebody else’s agenda. But just out pushing our pleasure buttons, left, right, forward and backward. That’s the way to do it, and at the end of that game that everybody’s played and is a big smash hit, you’ve suddenly educated a bulk of the nation on these crucial things that have to do with the social welfare and equality for all. You can’t do it directly. You have to go in through the back door, and eventually we’ll get there. Right now, we’re just kind of seeing the clumsy first steps of, “Oh yeah, let’s do this social justice thing.” That most people are just gonna reject out of hand.
So first, I don't think we need to go too far to see the hysteria of it all, right? It's fun and it probably gets a lot of clicks if you can talk about how a particular subsection of the economy is going to be completely wiped out by automation, whether robotic or AI or some combination. Usually, I think with any of these predictions, if you dig a little deeper, you can reveal some of the lazy thinking. Some of the questions probably worth asking are, "Hey, what's automated already, how easy is it to automate, how many jobs are there, and how likely is this to happen?" But Rodney Brooks gives us some rules of thumb, which I think are very useful.
Let's get started. We're not going to take these in any particular order; we'll just start with the generally interesting ones. I found this one pretty fascinating, and we've talked about it before: the idea that purpose-built AI is just not adaptable. Be wary anytime a prediction takes a very specific purpose-built AI and uses it as a pointer to future change, whether apocalyptic or utopic or somewhere in between. A perfect example is that we've spent a lot of time talking about AI and poker, AI and Go, AI and chess, and the idea that here are these amazing games that humans invented and now we're not even the best at them anymore. We've been bested by machines. Uh-oh, where is this going? Dirk, this is a great rule of thumb, I think, and it leads to the massive difference between narrow AI and more general AI, which people conflate. Do you want to tackle that one a little bit?
Now, in order to achieve those ends, they will need to start over again from the standpoint of the AI's learning. However, with the basic structure of the AI, the programming of it, I don't know to what degree they're just learning from what they've done, leveraging assets from what they've done, or taking the whole engine whole cloth and just re-teaching it. He didn't go into those details and I can't speak to that, but there is now this thoughtfulness around wanting to make something that solves and addresses particular problems. Each instantiation will need to be specific to one problem, but in working on one we can easily work on adjacent ones, and the next, and the next. That's where you start to get some really interesting things happening. But it remains in the territory of narrow AI, where each one is just kerchunking away at that one thing. It's a whole lot of hammers. Jon.
For example, EHRs got a ton of money injected into the industry by the Federal government because they wanted to digitize health records and reap all of the benefits of a digitized system. Now, we won't get into the fact that we haven't yet realized the success of a lot of this deployment, right?
There are no open standards, so people can't share data with each other. Patients don't own their data. Patients can't even really transfer their data from one hospital to another provider or another hospital. There are all sorts of practical problems with the deployment of this technology, and this is a fairly unremarkable technology. Let's face it, digitizing the health record doesn't seem like it would need to be magical.
If you’ve got a system that works and you’ve got sort of incremental improvement from whatever the software is going to be, it’s also just going to take time for that to be absorbed. For many reasons, deployment is the unforeseen monster in the closet for any technology. It’s like, “Okay, great, this stuff works.” It may even work in the small prototype or a rollout. But once you start talking about enterprise-grade rollouts of things the stakes get a lot higher and the timelines get a lot longer, for sure.
The health room that we innovated at GoInvo is an example of that. But these houses are made of certain materials. They are physical, expensive spaces. Changing those physical materials and completely metamorphosing, I think that's the right word, the environment, is just beyond the bounds of what 80 or 90% of people can pay for. The more interesting and exciting, say magical, solutions around smart homes, that ain't going to happen, because of existing infrastructure if absolutely nothing else. So as you're making your own predictions and trying to sort out what the future looks like, think about infrastructure, because if you're dealing with something that has existing infrastructure, that's a huge boulder in the way of exciting new ideas becoming reality.
Okay, so that's not necessarily true. These advances that we're making, whether in genomics or around chips and computers, have physical and economic limits. At a certain point it doesn't make any sense to keep making things smaller if people just don't need the power anymore for whatever they're doing. I probably don't actually need all the power that's in my MacBook Pro right now. I could probably survive with something somewhere in between what I had in college and what I have now. But there are economic limits, and then there is the actual physical limit of how small you can make something. Right?
And so these will come into play in different ways, but it's worth considering that it's not really exponential, or at least it's not going to be exponential forever in many of these cases. That's not to say there won't be some interesting quantum computing discovery that, who knows, may deliver some crazy fast computing we can't even imagine yet. But barring that, and considering the laws as we understand them now, there are these upper limits. Frankly, you never hear about limitations when miraculous predictions are made.
As we're thinking about how to manage our own prognostications for the trajectory of the future, be mindful that specific time horizons are probably wrong. Trying to figure out what's happening in 10 years, or whatever that chalk line is, is going to be a failed exercise in terms of the conclusions you come to. So think less about timeframes and think more about possibilities and, more generally speaking, what is going to happen. Timeframes are going to prove inaccurate and are the wrong things to focus on.
Technology that is designed for one thing inevitably ends up in another industry, being used for something that its inventors could never have imagined. So when we're making predictions about AI, they're based on our understanding at the moment, with all of our biases around the industries we have knowledge of, and these technologies very well could end up in completely separate areas doing something we never imagined.
And so the example from the article that I love is how GPS was basically for targeting munitions, right? For dropping bombs. And now we’re using it for tracking our runs right down to the foot basically where I’m running around the park.
So there isn’t a lot of discernment because we’re being entertained, but at the same time it’s sort of laying some of the groundwork for our thinking around AI and, of course, we’ve all seen the Terminator films where the artificial general intelligence Skynet sort of takes over things and destroys humanity. And so all of the assumptions that lead up to this very interesting dystopian future, all of these assumptions that make the story feel so exciting, you accept as part of the fantasy. But once you exit the movie, these ideas remain with you and shape the way you think about AI, whether or not you’re conscious of it, in a day-to-day way.
To your point about the silliness of the killer death robots. I mean, where did we get the killer death robots idea in the first place? I suspect it was as a kid growing up in the 80s. Oh, I loved the Terminator so much. I don't know if that was the 80s or early 90s. Feels like the 80s.