Welcome to Ever Not Quite—essays about technology and humanism.
This essay attempts to name something for which—or so it seems to me—we have lacked a satisfactory term. I have argued before against the reasoning which attempts to defend human preeminence by insisting that “no technology could ever do ‘X’”, however sophisticated or impressive the ‘X’ in question might be. This, it seems to me, is a weak position from which to secure humanistic values, for reasons which I’ll explain below. Here, I offer an alternative basis on which to think about what distinguishes us from even the most powerful simulations of human competencies. I am calling this the “anthropological aura”.
These reflections, of course, don’t exhaust all we might wish to say about what is at stake in the present unfolding of artificial intelligence, but I hope they provide a category that may be of some use in your own thinking about these issues.