Wired
EVERY DECEMBER, ADAM Savage—star of the TV show MythBusters—releases a video reviewing his “favorite things” from the previous year. In 2018, one of his highlights was a set of Magic Leap augmented reality goggles. After duly noting the hype and backlash that have dogged the product, Savage describes an epiphany he had while trying on the headset at home, upstairs in his office. “I turned it on and I could hear a whale,” he says, “but I couldn’t see it. I’m looking around my office for it. And then it swims by my windows—on the outside of my building! So the glasses scanned my room and it knew that my windows were portals and it rendered the whale as if it were swimming down my street. I actually got choked up.” What Savage encountered on the other side of the glasses was a glimpse of the mirrorworld.
THE MIRRORWORLD DOESN’T yet fully exist, but it is coming. Someday soon, every place and thing in the real world—every street, lamppost, building, and room—will have its full-size digital twin in the mirrorworld. For now, only tiny patches of the mirrorworld are visible through AR headsets. Piece by piece, these virtual fragments are being stitched together to form a shared, persistent place that will parallel the real world. The author Jorge Luis Borges imagined a map exactly the same size as the territory it represented. “In time,” Borges wrote, “the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it.” We are now building such a 1:1 map of almost unimaginable scope, and this world will become the next great digital platform.
Google Earth has long offered a hint of what this mirrorworld will look like. My friend Daniel Suarez is a best-selling science fiction author. In one sequence of his most recent book, Change Agent, a fugitive escapes along the coast of Malaysia. His descriptions of the roadside eateries and the landscape matched exactly what I had seen when I drove there recently, so I asked him when he’d made the trip. “Oh, I’ve never been to Malaysia,” he smiled sheepishly. “I have a computer with a set of three linked monitors, and I opened up Google Earth. Over several evenings I ‘drove’ along Malaysian highway AH18 in Street View.” Suarez—like Savage—was seeing a crude version of the mirrorworld.
It is already under construction. Deep in the research labs of tech companies around the world, scientists and engineers are racing to construct virtual places that overlay actual places. Crucially, these emerging digital landscapes will feel real; they’ll exhibit what landscape architects call placeness. The Street View images in Google Maps are just facades, flat images hinged together. But in the mirrorworld, a virtual building will have volume, a virtual chair will exhibit chairness, and a virtual street will have layers of textures, gaps, and intrusions that all convey a sense of “street.”
The mirrorworld—a term first popularized by Yale computer scientist David Gelernter—will reflect not just what something looks like but its context, meaning, and function. We will interact with it, manipulate it, and experience it like we do the real world.
At first, the mirrorworld will appear to us as a high-resolution stratum of information overlaying the real world. We might see a virtual name tag hovering in front of people we’ve previously met. Perhaps a blue arrow showing us the right place to turn a corner. Or helpful annotations anchored to places of interest. (Unlike the dark, closed goggles of VR, AR glasses use see-through technology to insert virtual apparitions into the real world.)
Eventually we’ll be able to search physical space as we might search a text—“find me all the places where a park bench faces sunrise along a river.” We will hyperlink objects into a network of the physical, just as the web hyperlinked words, producing marvelous benefits and new products.
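A query like “find me all the places where a park bench faces sunrise along a river” is, under the hood, a search over semantically tagged objects with spatial attributes. Here is a purely illustrative sketch of that idea; every name, tag, and coordinate below is invented, not drawn from any real mirrorworld API:

```python
import math
from dataclasses import dataclass

@dataclass
class PlaceObject:
    kind: str          # semantic label, e.g. "park_bench"
    lat: float
    lon: float
    facing_deg: float  # compass heading the object faces (0 = north, 90 = east)
    near_river: bool   # a precomputed proximity tag

def faces_sunrise(obj: PlaceObject, tolerance_deg: float = 45.0) -> bool:
    # Treat "sunrise" as roughly due east (90 degrees), within a tolerance.
    diff = abs((obj.facing_deg - 90.0 + 180.0) % 360.0 - 180.0)
    return diff <= tolerance_deg

def query(objects, kind, predicate):
    # "Find me all the places where a park bench faces sunrise along a river."
    return [o for o in objects if o.kind == kind and predicate(o)]

catalog = [
    PlaceObject("park_bench", 37.80, -122.27, 85.0, True),   # east-facing, riverside
    PlaceObject("park_bench", 37.81, -122.26, 270.0, True),  # west-facing
    PlaceObject("lamppost",   37.80, -122.27, 0.0, False),
]

hits = query(catalog, "park_bench", lambda o: o.near_river and faces_sunrise(o))
```

The point of the sketch is that once the physical world is machine-readable, spatial questions become filters over structured data, just as web search turned questions into filters over text.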
The mirrorworld will have its own quirks and surprises. Its curious dual nature, melding the real and the virtual, will enable now-unthinkable games and entertainment. Pokémon Go gives just a hint of this platform’s nearly unlimited capability for exploration.
These examples are trivial and elementary, equivalent to our earliest, lame guesses of what the internet would be, just after it was born—fledgling CompuServe, early AOL. The real value of this work will emerge from the trillion unexpected combinations of all these primitive elements.
The first big technology platform was the web, which digitized information, subjecting knowledge to the power of algorithms; it came to be dominated by Google. The second great platform was social media, running primarily on mobile phones. It digitized people and subjected human behavior and relationships to the power of algorithms, and it is ruled by Facebook and WeChat.
We are now at the dawn of the third platform, which will digitize the rest of the world. On this platform, all things and places will be machine-readable, subject to the power of algorithms. Whoever dominates this grand third platform will become among the wealthiest and most powerful people and companies in history, just as those who now dominate the first two platforms have. Also, like its predecessors, this new platform will unleash the prosperity of thousands more companies in its ecosystem, and a million new ideas—and problems—that weren’t possible before machines could read the world.
GLIMPSES OF THE mirrorworld are all around us. Perhaps nothing has proved that the marriage of the virtual and the physical is irresistible better than Pokémon Go, a game that immerses obviously virtual characters in the toe-stubbing reality of the outdoors. When it launched in 2016, there was an almost audible “Aha, I get it!” as the entire world signed up to chase cartoon characters in their local parks.
Pokémon Go’s alpha version of a mirrorworld has been embraced by hundreds of millions of players, in at least 153 countries. Niantic, the company that created Pokémon Go, was founded by John Hanke, who led the precursor to Google Earth. Today Niantic’s headquarters are housed on the second floor of the Ferry Building, along the piers in San Francisco. Wide floor-to-ceiling windows look out on the bay and to distant hills. The offices are overflowing with toys and puzzles, including an elaborate boat-themed escape room.
Hanke says that despite the many other new possibilities being opened up by AR, Niantic will continue to focus on games and maps as the best way to harness this new technology. Gaming is where technology goes to incubate: “If you can solve a problem for a gamer, you can solve it for everyone else,” Hanke adds.
But gaming isn’t the only context where shards of the mirrorworld are emerging. Microsoft, the other big contender in AR besides Magic Leap, has been producing its HoloLens AR devices since 2016. The HoloLens is a see-through visor mounted to a head strap. Once turned on and booted up, the HoloLens maps the room you’re in. You then use your hands to maneuver menus floating in front of you, choosing which apps or experiences to load. One choice is to hang virtual screens—as in laptop or TV screens—in front of you.
Microsoft’s vision for the HoloLens is simple: It’s the office of the future. Wherever you are, you can insert as many of your screens as you want and work from there. According to the venture capital firm Emergence, “80 percent of the global workforce doesn’t have desks.” Some of these deskless workers are now wearing HoloLenses in warehouses and factories, building 3D models and receiving training. Recently Tesla filed for two patents for using AR in factory production. The logistics company Trimble makes a safety-certified hard hat with the HoloLens built in.
In 2018 the US Army announced it was purchasing up to 100,000 upgraded models of the HoloLens headsets for a very nondesk job: to stay one step ahead of enemies on the battlefield and “increase lethality.” In fact, you are likely to put on AR glasses at work long before you put them on at home. (Even the much-maligned Google Glass headset is making quiet inroads in factories.)
In the mirrorworld, everything will have a paired twin. NASA engineers pioneered this concept in the 1960s. By keeping a duplicate of any machine they sent into space, they could troubleshoot a malfunctioning component while its counterpart was thousands of miles away. These twins evolved into computer simulations—digital twins.
General Electric, one of the world’s largest companies, manufactures hugely complex machines that can kill people if they fail: electric power generators, nuclear submarine reactors, refinery control systems, jet turbines. To design, build, and operate these vast contraptions, GE borrowed NASA’s trick: It started creating a digital twin of each machine. Jet turbine serial number E174, for example, could have a corresponding E174 doppelgänger. Each of its parts can be spatially represented in three dimensions and arranged in its corresponding virtual location. In the near future, such digital twins could essentially become dynamic digital simulations of the engine. But this full-size, 3D digital twin is more than a spreadsheet. Embodied with volume, size, and texture, it acts like an avatar.
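At minimum, a digital twin like the hypothetical E174 doppelgänger is a structured model in which every part carries an identity, a spatial pose, and a live status fed by telemetry from the physical machine. This minimal sketch assumes invented part names and coordinates; it is not GE’s actual system:

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    position: tuple       # (x, y, z) in meters, relative to the machine frame
    status: str = "ok"    # updated from sensor telemetry on the physical twin

@dataclass
class DigitalTwin:
    serial: str
    parts: dict = field(default_factory=dict)

    def add_part(self, part: Part):
        self.parts[part.name] = part

    def report_fault(self, part_name: str):
        # Sensor data from the real machine flips the twin's state, so a
        # technician can see the faulty part highlighted in its real location.
        self.parts[part_name].status = "fault"

    def faulty_parts(self):
        return [p.name for p in self.parts.values() if p.status == "fault"]

twin = DigitalTwin(serial="E174")
twin.add_part(Part("fan_blade_12", (0.4, 1.1, 0.0)))
twin.add_part(Part("fuel_nozzle_3", (1.8, 0.2, 0.3)))
twin.report_fault("fuel_nozzle_3")
```

Because each part knows where it sits in space, the same data structure can drive both a maintenance dashboard and the AR overlay described next.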
In 2016, GE recast itself as a “digital industrial company,” which it defines as “the merging of the physical and digital worlds.” Which is another way of saying it is building the mirrorworld. Digital twins already have improved the reliability of industrial processes that use GE’s machines, like refining oil or manufacturing appliances.
Microsoft, for its part, has expanded the notion of digital twins from objects to whole systems. The company is using AI “to build an immersive virtual replica of what is happening across the entire factory floor.” What better way to troubleshoot a giant six-axis robotic mill than by overlaying the machine with its same-sized virtual twin, visible with AR gear? The repair technician sees the virtual ghost shimmer over the real. She studies the virtual overlay to see the likely faulty parts highlighted on the actual parts. An expert back at HQ can share the repair technician’s views in AR and guide her hands as she works on the real parts.
Eventually, everything will have a digital twin. This is happening faster than you may think. The home goods retailer Wayfair displays many millions of products in its online home-furnishing catalog, but not all of the pictures are taken in a photo studio. Instead, Wayfair found it was cheaper to create a three-dimensional, photo-realistic computer model for each item. You have to look very closely at an image of a kitchen mixer on Wayfair’s site to discern its actual virtualness. When you flick through the company’s website today, you are getting a peek into the mirrorworld.
Wayfair is now setting these digital objects loose in the wild. “We want you to shop for your home, from your home,” says Wayfair cofounder Steve Conine. It has released an AR app that uses a phone’s camera to create a digital version of an interior. The app can then place a 3D object in a room and keep it anchored even as you move. With one eye on your phone, you can walk around virtual furniture, creating the illusion of a three-dimensional setting. You can then place a virtual sofa in your den, try it out in different spots in the room, and swap fabric patterns. What you see is very close to what you get.
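The anchoring trick behind such apps is that the virtual sofa is stored once in fixed world coordinates, and on every frame it is re-projected into the moving camera’s local frame. A toy two-dimensional version of that transform, with hypothetical poses and plain Python math (real AR frameworks do this in 3D with full pose matrices):

```python
import math

def world_to_camera(point, cam_pos, cam_heading_rad):
    # Express a fixed world-space point in the camera's local frame:
    # translate by the camera position, then rotate by the inverse heading.
    dx = point[0] - cam_pos[0]
    dy = point[1] - cam_pos[1]
    c, s = math.cos(-cam_heading_rad), math.sin(-cam_heading_rad)
    return (c * dx - s * dy, s * dx + c * dy)

# The virtual sofa is anchored once, in world coordinates.
sofa_world = (2.0, 3.0)

# As the user walks around, the camera pose changes every frame...
frame1 = world_to_camera(sofa_world, cam_pos=(0.0, 0.0), cam_heading_rad=0.0)
frame2 = world_to_camera(sofa_world, cam_pos=(1.0, 1.0), cam_heading_rad=math.pi / 2)

# ...but the anchor's world position never does, which is what makes
# the sofa appear to stay put in the room.
```

The sofa’s camera-relative coordinates change with every step the shopper takes, yet its world-space anchor is constant; that invariance is the whole illusion.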
When shoppers try such a service at home, they are “11 times more likely to buy,” according to Sally Huang, the lead of Houzz’s similar AR app. This is what Ori Inbar, a VC investor in AR, calls “moving the internet off screens into the real world.”
For the mirrorworld to come fully online, we don’t just need everything to have a digital twin; we also need to build a 3D model of physical reality in which to place those twins. Consumers will largely do this themselves: When someone gazes at a scene through a device, particularly wearable glasses, tiny embedded cameras looking out will map what they see. The cameras only capture sheets of pixels, which don’t mean much. But artificial intelligence—embedded in the device, in the cloud, or both—will make sense of those pixels; it will pinpoint where you are in a place, at the very same time that it’s assessing what is in that place. The technical term for this is SLAM—simultaneous localization and mapping—and it’s happening now.
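The core idea of SLAM can be shown with a deliberately tiny one-dimensional toy: the device estimates its own position from motion (localization) while simultaneously estimating where a landmark sits (mapping), and each range observation splits its “surprise” between the two estimates. This is a conceptual sketch only, not a real SLAM implementation, and all the numbers are invented:

```python
def slam_1d_step(pose_est, landmark_est, odometry, measured_range, gain=0.5):
    # 1. Localization: dead-reckon the new pose from odometry.
    pose_pred = pose_est + odometry
    # 2. Mapping: predict the range to the landmark we believe exists.
    predicted_range = landmark_est - pose_pred
    # 3. Correction: split the innovation (measurement surprise)
    #    between the pose estimate and the map estimate.
    innovation = measured_range - predicted_range
    pose_new = pose_pred - gain * innovation / 2
    landmark_new = landmark_est + gain * innovation / 2
    return pose_new, landmark_new

# True world (unknown to the algorithm): device starts at 0, landmark at 10 m.
true_pose, true_landmark = 0.0, 10.0
pose_est, landmark_est = 0.0, 8.0  # the initial map guess is 2 m off

for _ in range(30):
    true_pose += 0.2                       # the device walks 0.2 m per step
    measured = true_landmark - true_pose   # an idealized, noiseless range reading
    pose_est, landmark_est = slam_1d_step(pose_est, landmark_est, 0.2, measured)

# In this toy, the *relative* geometry (landmark minus pose) converges to the
# truth; real SLAM fuses many landmarks and noisy sensors in 3D.
```

Even in this stripped-down form, the defining feature of SLAM is visible: the map and the position are solved together, each observation refining both at once.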