By Francis X. Maier.
Earlier this month, in its regular Artificial Intelligence section, the Wall Street Journal ran a 2,000-word feature - for a newspaper, that's serious ink - on the AI program "FutureYou." Developed by the Massachusetts Institute of Technology (MIT), FutureYou allows you to talk with your 80-year-old self. It also (regrettably) projects what you'll look like. The idea, according to the WSJ author, "is that if people can see and talk to their older selves, they will be able to think about them more concretely, and make changes now that will help them achieve the future they hope for."
Thanks to FutureYou, the author discovered that she'd write a book, have six grandchildren, live in the suburbs, take up gardening, survive a health scare, make a solo visit to Japan, and take a family trip to Thailand. In the years ahead, her FutureSelf said, she'd regret not starting a business. She'd also need to jettison her doubts and fears. And she would always work for positive change...however she might define that. Immersed in an ongoing, engaging, intimate chat with herself, the author gradually achieved what the program's creators call "future self continuity" - a strong identification with her online octogenarian avatar.
Some 60,000 people in 190 countries currently use FutureYou. With that kind of endorsement, what could go wrong? So I signed up myself. Registration was free. I duly answered a series of questionnaires with a wide range of personal data, concerns, and aspirations, and submitted a photo snapped by my computer - all of which, per MIT, will remain anonymous.
Alas, it turns out that 80-year-old Fran is tediously familiar. He's not a riveting chat partner. And that's no surprise. I'm already in my mid-70s, and at 80 (assuming I'm still around) I'm likely to be more of the same old me. FutureYou seems geared to those with a longer takeoff ramp: people in the 30-to-50 age cohort. So I won't be joining Elon on Mars or writing the sequel to Dostoyevsky's The Devils. Dashed dreams are bitter.
On the bright side, MIT's developers describe FutureYou as an "imagination aid," not a fortuneteller. It offers possibilities, not prophecies. It doesn't give medical or financial counsel, or predict outcomes. It's designed to help people think more clearly about the person they might become. It's simply another self-help tool, and similar tools can be very useful. I use Google's Gemini chatbot for quick research every day. The results, while not perfect, are nonetheless impressive.
So why bother telling you this?
The Wall Street Journal is easily (I'd argue) the finest newspaper in the land. But it has a bias toward shiny new tools if they suggest a profit downstream. And that bias, that subtle boosterism, frames its treatment of AI. The Journal does caution readers about AI's various dangers, with stories highlighting Anthropic's anti-Doomsday Frontier Red Team, or the hapless and very real guy who fell in love with "Charlie," his female chatbot, or the problem of AI "hallucinations." But if progress is good for business, then - the reasoning goes - so are the tools that drive it.
And it's true: in practice, tools like AI are often very "good" for advancing improvements in medicine, communications, and education. The trouble, as the cultural critic Neil Postman warned, is that human tools tend to reshape and master the humans who make them, with unhappy results. To put it in biblical terms, humans have an instinct to worship, and Golden Calves come in all shapes and sizes. This accounts for how easily we anthropomorphize the AI voices on our phones.
The advertising for Google's Gemini tool promotes exactly that delusion. I've had conversations with the personality on my Gemini app that were astonishingly relaxed, informative, and real. Except they weren't. It's hard to be skeptical when you have a warm and fruitful relationship with the algorithm in a microchip. Or your 80-year-old "self" online.
Simply put, AI is the m...