00:00:00 - Speaker 1: With spatial computing, there’s a level of trust
that the user is placing in you as a developer that most software
developers have not had to handle. On a phone, if the app crashes or
freezes, it’s annoying, but it’s not going to make you sick. It’s not
going to viscerally affect the central nervous system. Whereas in the
case of any immersive software, it will. You’re going to directly put
their brain in a state that is uncomfortable or even harmful.
00:00:33 - Speaker 2: Hello and welcome to Metamuse. Muse is a tool for deep work on iPad and Mac, but this podcast isn't about Muse the product, it's about the small team and the big ideas behind it. I'm Adam Wiggins here with my colleague Mark McGranaghan.
00:00:46 - Speaker 1: Hey.
00:00:47 - Speaker 2: Joined today by our guest, Yiliu Shen-Burke of SoftSpace.
00:00:51 - Speaker 1: Hey, Adam, hey Mark.
00:00:53 - Speaker 2: And Yiliu, I understand that you've been doing a little bit of breath work recently.
00:00:58 - Speaker 1: Yeah, so I was just sharing with you some of my learnings on the importance of breathing, which I feel like a lot of people maybe figured out way before I came across the topic. I started trying some Wim Hof breathing before some of my cafe work sessions, which is equal parts effective and embarrassing: it's actually very invigorating, I find it helps me focus, and it also makes me feel like a complete weirdo sitting in public, staring out the window and breathing really intensely. So I recommend it to people who are looking for ways to quickly get in the zone and focus when they're a bit distracted. And if you have any tips on different resources, I'm very open. I'm very curious about it.
00:01:39 - Speaker 3: What does this breathing technique entail? What
are we signing up for here?
00:01:42 - Speaker 1: So, I mean, Wim Hof breathing specifically is this
cycle of very intense breath in, breath out. There’s nothing too
technically complicated about it, it’s more just about sticking to a
certain rhythm and at the end of, I think like 20 or 30 breaths, you
hold your breath for about a minute. There’s a very helpful Spotify
podcast episode that’s like 5 minutes long, that just guides you
through it. There's all this drumming, and Wim Hof is kind of there motivating you through the whole thing. So I find that after I do this breath work, I am indeed able to really get in the zone, and whether it's for writing or cracking some other tough cognitive problem, I'm definitely more focused afterward.
00:02:30 - Speaker 2: It feels a little bit adjacent to meditation somehow. I don't know about this specific technique, but breath work generally, I've known people in the psychedelic community who basically say you can reach unbelievable altered states with it. The example you're giving here is, yeah, greater focus or something like that, and you wouldn't believe it because breathing is so fundamental, it's literally automatic. What is there to it? It seems so simple, and yet there's some incredible potential there to affect ourselves. I never dabbled myself, but I'm curious.
00:03:04 - Speaker 1: Yeah, I mean, one discipline I came across is holotropic breathing, I believe it's called, where you can breathe yourself into a very altered state that's akin to a chemically altered one.
00:03:17 - Speaker 2: I'll have to give that a whirl. Tell us first what SoftSpace is, and then we'd love to hear about your journey and how you got started.
00:03:25 - Speaker 1: Sure, so I am the founder of a software company called SoftSpace, and we're building a product called SoftSpace, which is a spatial canvas for thinking. It's a 3D augmented reality app that lets you organize and make sense of the ideas, the images, the websites, the PDFs that you're working with in your creative, personal, or professional projects.
And the way we frame the value proposition is that SoftSpace shows you the true shape of your ideas. There's a lot of research that has been done over the years into the immense, almost superpower-like abilities we have around spatial memory and spatial reasoning, and up until very recently, which we're going to talk about in this episode, we didn't have the technology to really tap into them.
And so the best that we had was like a larger display, a computer
display for, you know, showing you more windows at the same time, but
that’s only scratching the surface when it comes to the brain’s
ability to make sense of and to remember and to think about objects in
space, which we have evolved over millions of years to do very, very
well. And so I started building this company in 2017, way before, you
know, the current crop of hardware, standalone headsets was really even
on the horizon with this kind of, I guess, expectation and faith that
eventually the technology would catch up to this idea, and I think that
it’s starting to, and that feels really good.
00:05:06 - Speaker 2: My first introduction to your product was when we met in a cafe in Berlin last year and you handed me what, at the time, would have been the latest version of the Oculus, which I think has really been on the forefront of this for the last 10 years. And, you know, it has this element where I can still kind of see the environment, so I'm not completely zoned out in a public space, but I'm also seeing notes and other ideas floating in space, and indeed I can interact with them. How viable it is relative to the Hollywood version of virtual reality we've been seeing for ages is a huge question, and for sure an app developer like yourself who chooses not only a particular platform but the technology in general is making a bet that the amount of time you're going to be working on it will overlap with its eventual viability for your particular use case.
00:06:01 - Speaker 1: Correct, yeah, and I mean, I would say one of our
investors said it’s still early, but it’s no longer too early, and I
think that’s getting more and more true all the time.
I mean, even with, of course, the very big news of Apple finally entering this space, I think we're still a little ways out from really mainstream adoption of computers you wear over your eyes. But if it were ever going to happen, this is the path that I think the industry needs to take to get there. And I think one of my personal motivations for continuing to work on SoftSpace is to offer a vision for what our augmented reality, spatial computing future could look like that I think we want to want, right? So, I think up until very recently, the
overwhelming popular imagination when it came to VR, for example, was at
best like a little bit goofy and at worst kind of dystopian and not
something you would necessarily want the next generation of humans on
the planet to be living and working in because it felt very
disconnected, it felt very escapist perhaps. And I think that this technology is so much more than what we've been able to imagine up until this point. Like, we've been able to imagine a lot with essentially nothing, right? Fictional depictions of, you know, the metaverse, or fictional depictions of very futuristic holographic UIs, but those have really only been fictional, and now we're finally facing the reality of it. And I think that there are many possible paths this
technology can take, and the underlying power of it has nothing to do
with the computers or the chips or the lenses.
The underlying power of this is the fact that the human brain and body
are inherently spatial, right? We are spatial organisms. And so whatever
positive outcomes or whatever negative outcomes come from this
technology will be rooted in that reality. And so I’m both optimistic
and also, now that the reality is finally here and we see Apple making a big move, a little bit trepidatious about where this could all go. I mean, we've seen other technologies that people had very optimistic visions for turn out maybe not completely positively. So I think this has at least that risk, if not a greater one, given how directly it works on the body and brain.
00:08:32 - Speaker 2: Yeah, and we’ll definitely get on to all the
present and future here, but can you tell us a little bit about your
background? What led you to, you know, that moment in 2017 when you said, this is what I want to be doing?
00:08:44 - Speaker 1: Yeah, absolutely. So, I was in architecture school
and I was halfway through my second year, and I took a summer job at a
design and art studio here in Berlin called Studio Olafur Eliasson.
They had just bought the Oculus DK2, the Development Kit 2 VR headset.
It made quite a splash. A lot of people who are excited about technology
had gotten their hands on one. I really wanted to check one out. The
studio got one thinking it would be like any other piece of consumer
tech, you could boot it up and try stuff out, but it really was a
development kit. There was nothing that you could do with it if you
didn’t code something up yourself. And so luckily I got a job as the
research resident, poking around with this thing, trying to figure out
both how it could be used as a medium for artworks, as well as a tool for the production of artworks that maybe weren't digital or virtual in and of themselves, but would benefit from some sort of virtual visualization or other tooling around that.
00:09:48 - Speaker 2: I mean, architecture is certainly a place where
use cases spring to mind very readily. Let's walk a client through a design that we made, you know, in some CAD tool, or let's do some design work there. So presumably those are the sorts of things you were exploring.
00:10:04 - Speaker 1: Yes, and I would say much more than that as well
because this studio is very much an art studio first and foremost, and
one with a history of being interested in the body, the human body, how
we relate to ourselves and to others and what different spaces and
different spatial effects like lights, acoustics, atmospheric effects
can do to our sense of ourselves and others.
And so this is actually maybe where the most exciting promise of the technology came in; at the time it was only VR, virtual reality. You could create effects that would be physically either very difficult or impossible to achieve. So one of my favorite demos that we built
was this non-Euclidean, sort of like castle that you walked through. So
it was back in the era of like really long cables that connected you to
a PC. We had the PC in the middle of an open area. The user would put on
the goggles at one edge of the open area and walk in a circle. And as
they walked, they would walk through doors, and around each door was a
new room with an artwork in the center, and as they walked, at some
point, you know, they would realize, wait, I should be back where I
started, but I’m not. I’m actually somewhere else. I’ve actually
entered yet another larger room that shouldn’t physically be able to
have fit into this floor plan. These were the kinds of experiments that
we were doing, and during this period of experimentation, um, I came to
two formative realizations. So the first was that the physical building
that the studio was in, it had about 110 people at the time, and it was
in this old beer brewery in the middle of Berlin. The physical studio
itself was an incredibly important part of the creative and production
process. You walked around and there were models everywhere, images pinned up on boards, books, libraries all over the place, half-finished sketches lying around at people's desks, and this
physical space was in and of itself a framework on which the creative
process hung. And that was something incredible to see, and also, you
know, this is quite a successful studio, and I felt that having that space was a major asset for the studio to be able to do its work.
And the second realization, as I was working with VR was that many of
the same qualities of that physical space actually don’t have to be
physical in and of themselves. So the images that you had pinned up, the
notes that you had laying around, these were actually at the end of the
day, just media for holding information, right, for conveying
information, and you could do something very similar with a purely
virtual environment. You know, you can't completely recreate it, but not everybody has access to a giant beer brewery or even a very large
room, right, to lay out all of their thoughts and their ideas. Maybe
this technology could democratize access to space for thinking, space
for doing your best work.
And once that idea kind of sparked in my mind, I couldn't stop thinking about it, and, sort of stereotypically, I was lying awake at night dreaming, you know, oh, if you could also make this multi-user,
then you can like meet with people from anywhere in the world. And so at
some point I thought, OK, this has been great, but I need to go see if I
can build this thing, and I didn’t really know what I was doing at the
time. But apparently I was starting a tech startup, a software startup,
so we got a bit of funding. I was very lucky that we had a wonderful
investor Boost VC make a bet on us, and they flew us out to San
Francisco and we learned, you know, like, what’s a product, what’s a
market, and we’re still around, we’re still around chugging away.
00:13:45 - Speaker 2: I like the serendipity of that story, which, you know, is often a big part of any kind of creative spark. You had this opportunity to work with this cutting-edge technology for a different purpose, obviously; they wanted to create art or explore the spatial environments they were working on. And then, through that exact same opportunity, you had access to this physical space full of information, and made the leap of: can we put information in a virtual space?
00:14:18 - Speaker 1: Very interesting, right? And, you know, I was in architecture school at the time; I ended up dropping out to keep running with this idea. But because of my background in architecture, and because the tech at the time was only VR, meaning everything the user was seeing had to be digitally rendered, SoftSpace started with a much heavier focus on the design of the virtual environment. Because I believed then, and I still believe now, that the environment is a critical factor in getting you into a certain kind of headspace, letting you think through certain problems that you just need the right kind of environment for.
But over the years of working on the various versions of SoftSpace, of
course, we also then started doing a lot more design and development
work around information architecture and user interface design. And now that we finally have the possibility of pass-through augmented reality, there's almost no virtual environment design anymore.
I’m not directly thinking about what the digital environment of our app
should look like, although I have some ideas about what the ideal space
you should be in, maybe when you’re trying to get focused on some work,
but we're now grappling much more directly with problems around, yeah, information architecture, the right primitives that the user should be working with to help them work directly with their ideas and with the information they're trying to make sense of, and the right UI paradigm and language to express these elements in.
00:15:57 - Speaker 2: And maybe we can briefly define terms. By virtual reality, you're referring to something that is 100% immersive, where you have no awareness of your surroundings. And then, I don't know, are augmented reality and mixed reality kind of the same, two words for the same thing? At least as I understand it, it's something where there's some combination: you still see the world around you, but you have these additional digital things sort of superimposed, you might say. And I know there are even different technologies for that, which include goggles you actually see through, where it's projected on your retina or something, versus you're still looking at screens, with external-facing cameras that bring what you would see if you were looking in that direction into the space you're in. So interesting, I hadn't even thought about how mixed reality or augmented reality actually greatly reduces the amount of, I guess, just stuff that you need to be rendering or thinking about or designing, which is maybe a good feature.
00:16:55 - Speaker 1: Correct, yeah, I think by this point, my sense is
that VR is pretty clearly defined. I think most people would give you a
pretty coherent, similar definition of VR. I think between augmented
reality, mixed reality, extended reality, I think the definitions there
are, you know, you’ll have as many different definitions as people you
ask. I would say that within that spectrum of taking something that is virtual and then also showing you the physical space you're in, there's also a spectrum of how aware that virtual information is of your physical environment. So I guess some people would say true augmented reality has to engage very thoroughly with your physical surroundings.
00:17:41 - Speaker 2: So you would have a file, some representation of a file, and there's a version where it just floats in the air in some basically random place, and there's another version where it can kind of detect that my desk is here, so it sort of puts it on my desk.
00:17:55 - Speaker 1: Yeah, I mean, there are merits and demerits of how
much the virtual system can be aware of or should be aware of your
physical environment, but I guess, you know, it’s in the term augmented
reality that some AR purists would say it’s not augmented reality if
the virtual is not literally adding to your physical environment.
00:18:17 - Speaker 2: So mixed reality is a little more neutral in a way. It could be somehow adding to or interacting with the environment you're in, but it could also just be, like, a heads-up display overlaid on top of what you're seeing, correct?
00:18:29 - Speaker 1: Yeah. So there's a term that encapsulates all of these different categories, which I'm a personal fan of: spatial computing. And spatial computing, as far as I know, as a really concrete concept was coined by Simon Greenwold at the MIT Media Lab in 2003, and he was talking about digital systems, computer systems, that maintained and used references to physical objects in physical space, or to parts of the user's body. It was very broad, but over the years, and very, very recently, I think
it’s been taken up by some members, some participants in the XR
ecosystem to mean this sort of very general idea of a computer or
computing system that engages very directly with the fact that you are a human being in space. And I like this because it places the emphasis not on the technical capabilities of a system, or on the specific UI design decisions the developers might have made, but really focuses attention on the underlying material we're designing with, which is three-dimensional space. I mean, some
people would say 4D space time, but it’s the idea that you can place
things, you can work with information that has this intrinsic quality to
it, of like being somewhere specific relative to the human being, and
that this poses both great opportunities and new and, you know,
previously unencountered challenges.
00:20:13 - Speaker 2: Well, you teed up our topic today, which is spatial computing, and which certainly encompasses these. I like the perspective of VR and AR as means to an end: they are ways of accomplishing the goal of making computing more spatial, whether we bring it into our space or whether we make it tap the spatial capabilities of our minds. I
think starting with the human centered or starting with the benefit or
starting with the user’s mental model is a better way to talk about
really any technology here.
00:20:41 - Speaker 1: I agree, and I think that's maybe an angle to this technology that has been undercommunicated, and I hope the community of developers, the big players and the small players, finds its way back to that foundation of any successful product or industry, right? Like, what is the actual value of this? Beyond the novelty, beyond the technical wizardry, beyond even, I would say, the hedonic qualities; maybe it is just really nice, right, to have this massive surround screen that you can watch your NFL games on. But beyond those, why do we need this? What will this unlock? What does this add to our lives and to our work that we would be poorer for if we didn't have it, as opposed to, oh, if it wasn't this, we'd still be playing games on our phones instead, and it would be all kind of the same?
00:21:41 - Speaker 2: So what are some of your answers to that in terms
of what you’re trying to bake into your product or influences you’ve
had from academia, or other thinkers who have been pondering this topic?
00:21:53 - Speaker 1: Yeah, I spoke earlier about the fact that our brains and our bodies have these spatial superpowers that are not fully, or even really well, used by existing 2D user interfaces and displays. A very telling quantitative metric: from the original 1984 Macintosh to the 2020 iMac Pro (I'm using an older model, and by now Apple's latest and greatest are much faster than the iMac Pro), the computing power increased by something like 10 million times, if you count, you know, the CPU and the GPU, while the display area increased by only a small fraction of that. And it's still a rectangle, right, that you click around on with a mouse. Now, there's nothing inherently wrong with that. I mean, clearly the iMac Pro was a very successful product and helps you do a lot of amazing things that you wouldn't be able to imagine using the original Macintosh to do. But, you know, you have to wonder what this massive discrepancy means. And I think now that we see at least two, and hopefully soon more, of the big platform players looking at that question seriously and proposing answers to it, I think we'll start to see what computers might have been able to help us do all along, or already had the computing power to help us do all along, right, but simply didn't have the display technology to make possible.
Very concretely, I know that training, any sort of scenario where human users need to learn something that's very experiential, these are use cases that are already very valuable. So pilot training: a physical simulator, apparently, is in short supply, very expensive to run, and takes, you know, months to book, and a lot of these are being replaced now with VR systems, which makes a lot of sense to me. There are pilot programs running with VR surgery or VR surgery-planning use cases. So these are very high-value, very intrinsically spatial use cases where, you know, we had all the computing power necessary to do these things before, and now we have the display technology as well.
What I am personally motivated by in building SoftSpace is the belief that there's tremendous value in working with 2D information in a 3D context. And I think that for a lot of the 3D use cases, in architecture, with
manufacturing, with surgery, you know, A, there are people who are far
more knowledgeable about those specific domains than myself, who can
work on those problems, and B, I think those problems are very well
served because there’s such an obvious connection between, you know, a
3D display and the 3D model or something.
What I think is relatively underexplored, but has the potential to impact a lot more people directly, is giving people a way to work better with information that's intrinsically two-dimensional, or best represented two-dimensionally, but in a spatial context. And if you look at Apple's marketing materials and the imagination they're offering for what spatial computing looks like, this is actually their vision, right? There's maybe one 3D model in all of their hours of marketing material. Most of the time they're showing you documents, they're showing you photos, they're showing you app windows or web browsers, but in this 3D context. And so I would like to think that the
design minds at Apple are pursuing a very similar thesis that there is
tremendous value in letting people work with 2D information, which has
the advantage of being portable to all the other devices that, you know,
we already have. You can print 2D information out on a piece of paper
and mark it up, so it’s a lot more flexible and a lot more universal,
but there’s a lot of value in letting you work with that in a 3D
context, and that is essentially what SoftSpace is.
00:26:20 - Speaker 2: Yeah, well, we'll certainly come to talking about Apple's entry into this space, and I'm sure folks are curious to hear your take on that. But since we're talking about use cases here: it's often the case for any new technology that you figure out something new and impressive you can do with computing, or some other technology, first, and then you figure out how that can be used, and often we're surprised by the use cases that end up coming out. You know, I don't know that the people who invented TCP/IP predicted e-commerce, for example; often that has to be discovered once the technology exists and is in the hands of a lot of developers and end users.
And I do think that's one where, to me, it feels like VR and AR have been pretty impressive for quite a while.
You mentioned using the Oculus dev kit. I think I tried it first around
2013. A friend of mine had it and yeah, you know, very much long cable
connected to a PC, you know, pretty limited, but it had a little, you
know, demo of someone riding down a roller coaster and it basically
became a party trick for him to essentially put this on people who had
never experienced it before and everyone else would stand around and
watch them react to that. So that was fun.
But it doesn't become a thing that's deeply integrated into your life. And certainly my dabblings in the past, which are not as extensive as yours, suggest that games and immersive experiences, maybe sort of interactive movies or something like that, are kind of a good place to start: partially because of the immersiveness of the environment, partially because, I don't know, games are always a good place to start with new technology. Indeed, if I were to try to name a killer application off the top of my head for VR, probably Beat Saber is the first thing that comes to mind.
Then you go from there to, yeah, of course, those domain verticals like surgery training or pilot training or architecture design or walking a client through a space or something.
But then there's this whole world of collaboration, right? We're moving to a remote-first world, we want to have meetings, we miss our whiteboards, we miss the body-language side of it. And then you have just productivity software, and that's something that feels like it's gotten the least attention.
And maybe that's because when you think of productivity software, a word processor, a spreadsheet, a video editor, a design tool, coding, yeah, it's very much about those 2D rectangles. I'm not even sure if 2D rectangles are the perfect or most pure form of representation; it's just something we've carried from paper and scrolls, and then books, and then up to computer monitors and even phones. Obviously, writing is also a big part of all of that, that's the format we're used to. So then you can bring that into this 3D environment, but in the end it
just happens to be a rectangle that’s sort of like floating or you can
make bigger or you’re sort of mapping the same two dimensional window
metaphor into that environment. One way to interpret that is, well, if you're going to bring productivity software into some kind of spatial computing environment, OK, let's just make it a floating 2D window, and that's really kind of uninspired, in the sense that it's just a very direct mapping. But it sounds like you think there's actually more promise to it than that: that there's a reason why so many of these past iterations of our information technologies tend to revolve around writing and kind of one-dimensional or two-dimensional squares or rectangles of some kind, and that there's value to bringing that to a virtual spatial computing environment.
00:29:54 - Speaker 1: Yeah, I do. And I would distinguish between a 2D
UI paradigm, like a window or a grid for that matter, and information
content that is inherently 2D or is best represented in two dimensions
like text or images or a PDF page.
So, one of the big shifts that I've made in my own thinking about how to design for spatial computing happened when I came across Roam Research, and at the same time I started using Notion myself.
I never actually got into Roam so much, but I read a lot about the thinking behind its design, and in both cases, Roam and Notion, these are block-based note-taking tools or productivity apps: the conceptual, technical and, you know, UI primitive is the block of content, the block of information. This paradigm in both cases works within one app, so the app has control over what its UI elements are, and it's decided that, OK, it's going to be a block of text or a block of an image. But there are others who have been speculating about what an entire computing environment or entire operating system that revolved around these, what would currently be considered subunits of computing information, might be like, and what advantages it would have over our current paradigms.
And once I really wrapped my mind around what a block was, I essentially shifted my own development model toward working with blocks, because blocks, to me, map so much better to the underlying material of thought and creativity than, you know, a Word doc or an Excel file. And so for me, one of the promises of spatial computing is to give you more powerful ways of displaying information that is roughly a block in size, and of displaying the relationships between those items. Because for Roam, a big part of its appeal to a certain kind of user was the ability to represent explicitly the links between blocks, right? So backlinking, and being able to explicitly construct arguments drawing from pieces of evidence or pieces of information that are elsewhere in your database, in your notebook. And on a 2D display there are just all these limitations around how much other information you can show and how you represent these links; in an infinite spatial canvas, or an infinite 3D spatial canvas, you have many more options.
At the same time, you know, that sounds great and it sounds powerful, so why don't we all already work in this A Beautiful Mind kind of memory palace? Well, there are also real constraints on our ability to process that much visual information, and you do pretty quickly hit a point where it's overwhelming. You know, there are times when you do prefer to just have one piece of text in front of you that you're focused on and thinking about, and to have a few other relevant or supporting materials close at hand, but not to have everything you've ever thought about, every topic, visible at once. And so a lot of the design work and research that we've done has been around trying to probe the edges and map the landscape of not only what's technically possible, but what, from a human user's point of view, is desirable, and at which moments.
You know, it’s a lot of fun, it’s very exciting, and sometimes I’m
like, should we be doing this? You know, shouldn’t some large tech
company with billions of dollars be doing this research? I hope they
are, but, you know, we may very well be one of a few groups of people
who are doing this research, because these questions couldn’t be asked
even a few years ago. There was no hardware platform for which these
questions even mattered. And so now that we do have the hardware
foundation to start answering these questions, and now that we need to
develop software for which having good answers to these questions, you
know, is important, we’re doing the work and trying to map it out.
00:34:26 - Speaker 2: And I’m glad you are, but I still think it is a
niche within a niche, right? The kind of interest in not just
productivity software, but specifically thinking, idea-oriented tools on
this new platform.
I think the big companies are thinking about the hardware, the operating
system, the much more mainstream use cases.
Can I watch something, or shop, or do other kinds of things that are
more common operations? And I think you mentioned this in the beginning,
that you see it as something that is potentially very widely
distributed, in the same way that note taking is widely distributed or
email is widely distributed, but I think that’s quite a number of steps
away. So it sort of makes sense to me that maybe only smaller players
are interested in this right at the moment.
And you mentioned coming across Roam and Notion after you had started
this company and were already working in this space, so it’s quite
interesting because you now mentioned two things. One is the shift from
VR to, yeah, some kind of pass-through AR, where I can see part of my
environment, and how that changed your application. And then, yeah,
tools for thought appearing presumably, I don’t know, made you feel
more like you had a home or a community of people that were thinking
about the same
thing, even though obviously, as far as I know, you’re one of the few
who’s thinking about this specific kind of environment and hardware
platform, but in terms of like how do we use computers for thinking and
ideas specifically, suddenly now there’s a thing happening there.
00:35:52 - Speaker 1: Absolutely, I was thrilled to discover that the
tools for thought community existed, mostly on Twitter, so, you know,
you can tap into it from wherever. Because, I mean, people who are
really into, you know, their personal knowledge management, into these
tools, it’s never going to be the vast majority of the population or of
the user base, but I think that these people are maybe very impactful,
you know, they might be working in fields like investment or in tech, or
running product teams, where the decisions they make and the knowledge
they have access to or can make sense of reverberates beyond just their
personal life and work into, you know, organizations that they’re a
part of, into the markets that they are selling to. And so there’s
leverage there, you know, to make an impact. And it’s also a larger, you
know, market, or a larger group of people, than I would have thought
before I came across the tools for thought ecosystem. It was certainly
large enough to support at least a few pretty successful venture backed
software companies, and there was a path, you know, you can see a path,
for example, for Notion, to go from more of an enthusiast user base to a
larger, broader, maybe more enterprise-focused market, once they got
the primitives right, or once they sort of better understood who would
be the power users and who would benefit from the power users’ work,
but who didn’t, you know, themselves need to be crafting the Notion
wiki for eight hours a day. So, I think that,
yeah, me coming across that community and then also that community being
very open and very excited about some of the demos that we were showing,
with these sort of 3D force-directed graphs of linked concepts. We got
a really good response from that community as well, and that was a
really important source of feedback, and an important source of just
engagement, to motivate us to keep going and also to provide really
good signals as to, like, OK, which features might matter more, which
use cases might matter more and which not. Of course,
the thing that’s happened since Tools for Thought summer was AI and
specifically large language models. AI has upended everything about
everything, but it’s, you know, definitely upended our working
assumptions about what knowledge work was, what the tools would be, what
the roles would be, what the objectives of knowledge work would be, and
I think everyone building software in this space, you know, we all have
to have our own theory of change around what impact AI is gonna have and
how our projects will stay relevant in a drastically transformed future.
One such theory of change is that maybe tools for thought will become
unnecessary in the future, because we won’t be thinking for ourselves
anymore, right? We’ll just have this sort of all-knowing AI oracle that
will be able to pull out the right answer, the best answer, you know, at
the moment that we need it, and the answer will be fed to us through
our super thin Apple Vision Pro 10, you know, glasses. That’s one
version of the future. Another might be that humans do stay in the loop
because, you know, there are still experiences and values and judgments
that we make that you can never by definition replace with an automated
system, and that there is still value in having better tools for
thinking, for having better processes for making sense of new
information that’s coming in. And that AI can lower the barriers to
using those tools because, you know, maintaining a sort of up-to-date
Roam notebook is, you know, at least a half-time job, and not many people
have the bandwidth to be doing that, but maybe if some of those friction
points and some of those barriers could be lowered, then we could have
tools that could, on their own, be making a lot of the connections that
previously had to be done manually, but still, you would be the one
sort of gardening this knowledge garden. You would be the one shaping
it and deciding what’s important and what’s not important, and in
drawing from it, you would be the one harvesting its fruits and using
them in your day-to-day life.
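The 3D force-directed graphs of linked concepts mentioned earlier rest on a simple mechanism: linked notes attract each other, and all notes repel. Below is a toy Fruchterman-Reingold-style sketch in pure Python, purely illustrative and not the product's actual layout code; the graph, constants, and cooling schedule are all made up for the example.

```python
import math
import random

def force_layout_3d(nodes, edges, iterations=200, k=1.0, seed=42):
    """Toy 3D force-directed layout: linked notes attract, all notes repel."""
    rng = random.Random(seed)
    pos = {n: [rng.uniform(-1, 1) for _ in range(3)] for n in nodes}
    temp = 0.1  # max movement per step; cools over time so the layout settles
    for _ in range(iterations):
        disp = {n: [0.0, 0.0, 0.0] for n in nodes}
        # Repulsion between every pair keeps unrelated notes apart.
        for i, a in enumerate(nodes):
            for b in nodes[i + 1:]:
                d = [pa - pb for pa, pb in zip(pos[a], pos[b])]
                dist = max(math.sqrt(sum(c * c for c in d)), 1e-6)
                f = k * k / dist
                for axis in range(3):
                    disp[a][axis] += d[axis] / dist * f
                    disp[b][axis] -= d[axis] / dist * f
        # Attraction along edges pulls linked notes together.
        for a, b in edges:
            d = [pa - pb for pa, pb in zip(pos[a], pos[b])]
            dist = max(math.sqrt(sum(c * c for c in d)), 1e-6)
            f = dist * dist / k
            for axis in range(3):
                disp[a][axis] -= d[axis] / dist * f
                disp[b][axis] += d[axis] / dist * f
        # Cap each node's movement by the cooling temperature.
        for n in nodes:
            length = max(math.sqrt(sum(c * c for c in disp[n])), 1e-6)
            for axis in range(3):
                pos[n][axis] += disp[n][axis] / length * min(length, temp)
        temp *= 0.97
    return pos

# Two hypothetical clusters of linked notes: {A,B,C} and {D,E,F}.
nodes = ["A", "B", "C", "D", "E", "F"]
edges = [("A", "B"), ("B", "C"), ("A", "C"),
         ("D", "E"), ("E", "F"), ("D", "F")]
pos = force_layout_3d(nodes, edges)
```

After a few hundred iterations, densely linked blocks end up spatially clustered while unlinked ones drift apart, which is exactly the visual effect that made these demos legible.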
00:40:23 - Speaker 2: For sure, a lot of, yeah, productivity systems,
note taking systems, Zettelkasten and GTD, etc.
They do attract folks who maybe just get satisfaction from investing in
those systems: the transcribing of the notes, the capturing of them,
the gardening of them, the finding of connections between them. And
many people certainly get huge value from that, me included, and I
think that long predates the current tools for thought summer, as you
said. You know, I think of someone like Steven Johnson, a very prolific
author, who wrote some time back about using DEVONthink, which is a
super old school app where, you know, you type in a bunch of notes and
it has a very rudimentary algorithm for finding connections between
them, and how that helps him have new ideas and get value from them.
But yeah, he is someone who is willing to take that time and invest in a
system, and I feel like the vast majority of people just find that way
too tedious. But maybe there’s some way these advancements in large
language models can help us with the tedious parts, where you can still
get the benefit of the end result.
While you’re not just fully outsourcing the decision making or the
sense making or the judgment calls or the aesthetic calls to the
computer, you’re getting it to fill in some of the more tedious parts
that not everyone has patience for, but in the end, you’re still the
one that, you know, is making the calls.
00:41:50 - Speaker 1: So, there are so many interesting threads in this
conversation that we’ve had so far, and I think there are also many
interesting ways in which these threads unexpectedly overlap and connect.
So earlier you had talked about some of the earliest use cases for VR
that you had experienced, as a party trick, for gaming, you know.
Actually, one of my favorites is fitness. I personally do not use VR
for fitness,
but I’m very impressed by the apps and by the stories of people who
have found a way to achieve previously very, you know, difficult goals,
fitness goals through virtual reality and through some of these fitness
apps like Supernatural. And I really like this model for how spatial
computing can fit into our lives and work, or actually any technology
for that matter, can fit into our lives and work, that it’s this really
time boxed and place boxed use case, you know when you begin and you
know when you end, but then, even when you’re not using this app, you
are enjoying the benefits of having that practice of having that in your
life, you know, in this particular case you’re feeling physically
healthier. And, you know, you’re able to hit these goals that you had,
but maybe had difficulty achieving in other ways, like going to the gym
or going for a run, and that’s very much a model I would like to adopt
for our own product, whatever we build, you know, the idea that we make
something that makes you, let’s say, smarter, or makes you more
creative, or makes you talk more coherently, you know, about ideas that
are important to you, even when you’re not in the headset, even when
you, you know, you step out and you’re just grabbing a coffee with a
friend or you’re going for a hike, that somehow we find a way to tap
into the parts of your brain that remember complex information, that
make sense of it in a way that your laptop screen doesn’t, and that
therefore makes you like a more interesting conversation partner even
when nobody has any gadgets on them, right? I mean, there’s definitely,
it’s almost like an aesthetic preference of mine, that, like, I would
like the future we live in to still have room for unaugmented and
unmediated, you know, human-to-human interactions. There’s another
future where we all just have these tiny AI earpieces, and
they’re telling us what to say and what to think all the time. Sure,
but I prefer a world where our technology is helping us to achieve goals
that we have for ourselves, you know, whether it’s mental health or
physical health, or creativity, or productivity, or just being an
interesting conversation partner, but then can also get out of the way,
right? The tools do the work, and then we step away a little bit closer
to the ideal versions of ourselves, but we’re not dependent on a
continuous subscription to, like, you know, the software product to stay
that way. So that ties back to VR fitness. Another interesting tie-in
here is that there has been some research recently that suggests our
brains use, or creatively misuse, spatial navigation neural circuitry to
keep track of concepts and memories. And this I found fascinating
because, you know, I’d always kind of thought of the idea of conceptual
space as a helpful metaphor, a useful one, because we can’t otherwise
visualize, you know, what it means for this idea to be close to this
one but far from that one. But it seems like there is some evidence
that this is actually what’s happening, you know, in our brains. And if
that is the case, well, a lot of this research actually came out of
interpretability research in AI, like computer
scientists trying to understand what’s going on inside a large language
model, what is a latent space, you know, like, what makes one word
closer to another word in this like, super high dimensional space. And
then realizing that there are actually some mappings back to how human
brains work and how human language works and how human beings express
ideas through language, etc. So I’m not a neuroscientist or computer
scientist, so this could all well be just my sort of fanciful
misinterpretation of all this. But, you know, if indeed there is some
concrete underlying mechanism that ties space and ideas together, then I
would say that’s an even stronger argument to investigate what spatial
user interface displays for working with information could be, and how
that could help us come up with designs that better address the
underlying sort of requirements of the user, or come up with theories
that better synthesize the different pieces of evidence that we’re
trying to fit together, etc. So, it could be that this is
not only a metaphorical connection between, you know, a semantic space
and like mapping out ideas on the big wall and the actual ideas
themselves, there could literally be a real phenomenon going on here.
There are papers that point to evidence that this is what’s going on.
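The idea of one word being "closer" to another in a high-dimensional latent space can be made concrete with cosine similarity between embedding vectors. A toy sketch in Python, using made-up 4-dimensional vectors standing in for the hundreds or thousands of learned dimensions in a real model:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return dot / (norm_u * norm_v)

# Toy "embeddings", invented for illustration only; a real language
# model learns these coordinates from text.
embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "queen": [0.9, 0.7, 0.2, 0.3],
    "apple": [0.1, 0.2, 0.9, 0.8],
}

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # ≈ 0.99, near neighbors
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # ≈ 0.33, far apart
```

Semantically related words sit at small angles to each other, which is what "close in latent space" means in the interpretability work discussed here.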
00:47:08 - Speaker 2: And you’ve got a couple of links here you’ve
shared with us that outline some of these explorations and discoveries,
so I’ll put those in the show notes and listeners can follow through
and read those to make their own judgment.
Yeah, well, so far I like that we haven’t talked too much about the
technology and really focused on the user and the big ideas here and
your unique take on this.
But with that said, now let’s talk about the hardware and the
technology and you know, I was interested to go read about the history
of it. I found an interesting link I’ll put in the show notes, but
going back to even the 60s and 70s, people strapping these ridiculous
contraptions to their heads and trying to figure out head tracking and
all this kind of stuff. I feel like there was some kind of awareness
that, OK, the hardware miniaturization has happened with mobile
computing and the internet and all this sort of thing, and lots of big
companies and lots of investment dollars went into many platforms, most
of which have not panned out, but which nevertheless produced some real
progress.
We already talked about that early Oculus demo or kind of dev kit that
we both had access to. One that to me was a really, I don’t know, wow
moment was the Google Glass concept video from, yeah, I think it was
around that same time, 2012, 2013, something like that. And yeah, I
remember people that I knew, not even in the technology world saw that
and just were floored and just said, you know, this is amazing, this is
something I want to have. Now, of course, the reality didn’t live up to
what was in this concept video. Microsoft’s got the HoloLens. Magic
Leap is one that, yeah, was the secretive project, and billions of
dollars of investment were going into it. I think they did develop some
genuinely impressive hardware, but in the end, yeah, too early,
couldn’t get there, couldn’t get the two-sided market of developers
and users, too expensive, too weird, that sort of thing. And then
obviously you’re choosing to build on Oculus, which is now owned by
Meta and has been through many iterations here. So what’s your take on
the kind of currently available hardware? What made you choose this
platform that you’re on now, and how do you see it? Good enough is a
weird thing to talk about, because there are so many different aspects,
head tracking and input mechanisms and that sort of thing, but I think
it also depends a lot on the application. It’s clearly been good enough
for certain kinds of games for quite a while, but maybe that’s
different than what you need for, for example, a more precise kind of
text manipulation oriented productivity. Yeah, how do you think about
the recent history of hardware platforms?
00:49:42 - Speaker 1: Yeah, that’s a great question. The way I’m
thinking about it is that it’s only been, I would say about a year or
just over a year, that there has existed a hardware and operating system
platform that just barely got over the line of like good enough for a
general purpose computing tool like ours. And I think there’s a strong
case that could be made that it’s not even over the line and we’re
only just now seeing where that line is, which may be quite a bit
further out than what everyone had hoped it would be because to hit that
line today, it’s very expensive. And so, I think that the challenge
with spatial computing, in my humble opinion, has been that the minimum
viable product is actually not minimum at all. It’s actually a very,
very, very high bar when it comes to the visual acuity, the pixel
density, the motion-to-photon time, you know, how quickly the system
responds to the user’s head movements and hand movements. We’ve gotten
used to technology that can be quite buggy and not work so well, but as
long as it delivers like that modicum of value and that value is like,
you know, higher than the friction or the cost of using the thing,
then there’s a path to that tool taking off.
00:51:09 - Speaker 2: Do you think that bar is so high for this
technology specifically because, yeah, for example, we’re trying to
like basically trick your brain into something? Because another way to
think of it might be, well, the bar is higher because computers in
general can do so much more.
We’ve got mobile devices that are amazing, we’ve got computers that
are so powerful, you know, if you go back in time. To, I don’t know,
you know, something like early personal computers where the minimum
viable product was toggle switches and LEDs and like manually, you know,
keying in programs or whatever, but there just wasn’t that much to
compete with. So here we’re trying to compete with all these other
really developed platforms. But it seems like you think it’s the first
thing, that it really is more about the specific problem that humans
have such a strong sense of, well, maybe spatiality isn’t the right
word, and so digitizing that is just a very, very hard thing.
00:52:02 - Speaker 1: Yes, I think there are actually probably 3
headwinds. The first, and I would think the greatest, is that you’re
dealing with the human nervous system, right? And it’s almost like,
thank goodness our nervous system is actually laggy enough, thank
goodness it’s not harder to trick, thank goodness it has these buffers
of, like, OK, if you update the display within, like, 14
milliseconds or whatever the number is that Apple thinks it is, your
brain does accept it, right? Conceivably we could have a much lower
number. I think there’s been research done on like insects that have,
you know, like super low thresholds, right? And if that were the case,
then all the technology would be, you know, even further away before it
got good enough. So I think that’s absolutely the greatest factor in
terms of headwinds for getting this technology good enough. The second
is that we can’t dismiss the fact that everything else in the consumer
tech space has gotten so good, right? The iPhone is this beautiful slab
of glass that can basically, you know, do anything you ask of it, if it
has an internet connection especially, and the competition for spatial
computing is therefore that much greater. I think the third factor is
the market’s expectation of what success looks like here has also
gotten so much greater, right? Back in the days of punch cards, if, I
don’t know, every computer science department at all 8 Ivy League
universities adopted your system, that was like a smashing success,
right? So like 8 purchasing decisions had to, you know, come through.
Now, if it’s not like a 2 billion user addressable market, then you’re
not even getting a coffee meeting, right? So I think all these forces have
been headwinds to this space, and it’s only through the sort of
unilateral, multibillion dollar, very long term investments that
individual companies have made that the technology has even progressed
as far as it has. And it’s going to take many more billions of dollars
of investment, made in the face of very skeptical shareholders and
press and markets, probably, to get to anything that we consider
mainstream or a success compared to even the iPad or Apple Watch.
So yeah, I mean, you’re asking about hardware, you’re asking about the
choice of platform. So, the Quest devices. What Meta has done really,
really well is getting the price right for this technology, and getting
the sort of absolute minimum acceptable quality at that price,
and I do see that they are calibrating the price upward a little bit
from the very, very low cost of the Quest 2 for their next generation of
devices, in order to maybe meet users a little bit more in the middle
when it comes to the quality, and that’s the range that they’re
exploring right now, but from a developer’s point of view, from my
point of view, it’s moving in the right direction, and I think that
what we have right now, the Quest 2, is, yeah, sort of just on the
line of what a productivity app would need the user to have access to,
to, you know, be usable for, let’s say, 30 minutes or 60 minutes, and for
the user to feel like, OK, that was worthwhile.
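The motion-to-photon budget discussed above maps directly onto display refresh rates. As a rough worked example (the ~14 ms figure quoted earlier is in the ballpark of one 72 Hz frame; the true perceptual threshold is debated and varies by person and task):

```python
# Frame-time budget at common headset refresh rates: the renderer must
# produce a new image within roughly one frame of a head movement, or
# the user perceives lag between motion and display.
def frame_budget_ms(refresh_hz):
    return 1000.0 / refresh_hz

for hz in (60, 72, 90, 120):
    print(f"{hz:>3} Hz -> {frame_budget_ms(hz):.2f} ms per frame")
# 72 Hz allows ~13.89 ms per frame, close to the ~14 ms figure above;
# 120 Hz tightens the budget to ~8.33 ms.
```

The higher the refresh rate, the less time the whole pipeline (tracking, rendering, display scan-out) has per frame, which is part of why "good enough" hardware for comfortable spatial computing is such a high bar.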
00:55:28 - Speaker 2: What are some of the dimensions actually for that?
Because there’s obviously a lot of different things here.
You mentioned, like, the need to be tethered to a bunch of cables,
which I think was, you know, one of the problems that various VR
headsets have essentially tackled and solved in the recent past, but
there are also things like, yeah, display latency, or, yeah, pixel
density, you know, text legibility. You mentioned operating systems, so
presumably there’s, I don’t know, files, copy paste, all these things
that maybe aren’t important for games but are important for
productivity. What are the dimensions that have advanced forward to,
yeah, be kind of across that line, and where is it still weak, either,
yeah, the Quest specifically slash the larger Oculus platform, or just
all the platforms generally?
00:56:11 - Speaker 1: In a word, comfort. Hm. I’m using this word very broadly.
So physical comfort, the ergonomics of the device on your head, having
it be standalone, so there’s not a cable coming off of it, which
impedes movement and is uncomfortable, getting the weight distribution
right on the head, making it light enough so there’s not as much weight
to have to distribute in the first place.
The visual comfort of having good lenses and a good display with the
right range of like contrast and brightness and darkness, and the pixel
density not being so low that it’s really straining to look at the display.
And then there’s social comfort. When Oculus finally opened up the
pass-through SDK on their VR devices.
00:56:58 - Speaker 2: Is this essentially where there’s an external
camera that’s sort of taking pictures of your surroundings, and then
you can bring that into, yeah, yeah.
00:57:06 - Speaker 1: So they had, you know, originally focused on
making VR devices. The cameras on the outside of the device were never
intended to create an image for a human to look at. They were for
tracking purposes, right? They were for positional tracking purposes to
supplement the inertial tracking data. And to their credit, they
realized, oh wait, augmented reality might actually be the future. We
had been sort of talking about the metaverse and VR and this full
immersion future, but maybe people want AR. And instead of going
through another multi-year cycle of developing totally new hardware
before we can even test this hypothesis, what can we do today
to start understanding the parameters of this? Well, we can take the
really, really, really terrible, grainy infrared camera feed from our
tracking cameras and stitch together this binocular pass-through feed,
which is so terrible on the Quest 2. It’s this muddy impressionist
painting of what’s going on around you, more than any kind of actual
image of what’s going on around you. But they took a big leap in
opening that up to developers, and it made this really important point,
which is: even a really muddy and terrible view of what’s going on
around you physically is infinitely better than none. I’m someone who
spends a lot of time in the headset, and before I
was able to experience a pass through in the headset, I always had this
low level visceral discomfort going into VR, which I was not even aware
of. I think I was sort of in denial about it, because, you know,
accepting it would have torpedoed sort of my whole faith and motivation
in building our product. But once I could experience spatial computing
without that discomfort, I could never go back. It was night and day,
right? And so that sense of social comfort, and of just visceral,
animalistic comfort, is another comfort factor that Quest, through
purely software, just by switching the camera feeds on and doing some,
you know, remapping and stitching, was able to alleviate. And so, yeah,
in answer to your question, like, OK, what is it specifically about this
hardware that’s finally kind of like good enough or barely good enough
for our kind of use case? I would say it is that comfort. With gaming,
with fitness, those comfort factors are, I mean, they’re still of
course, like tremendously important, but they’re not gonna be as
critical. Well, maybe I’m, you know, underestimating the importance of
those factors in those other use cases. I won’t speak to them, but
especially in productivity and focus and deep work, you’re not going to
be able to crack the toughest problems or write the best, you know,
piece of writing ever if there’s just something gnawing at you, if
there’s, like, something on your face that doesn’t feel good, or this
sense that, like, someone could be sneaking up behind me. And once you
kind of get over that line, then you can suddenly imagine using this
device in all these other ways. I would say that Apple’s approach,
they’re coming in from completely the other end of the spectrum.
They’re saying the minimum bar for visual acuity, for latency of the
pass-through video feed, for the feel of the materials and industrial
design of the headset itself, the necessary minimum bar, is really,
really high, because I guess they think humans have a very high
standard when it comes to the visual information that’s coming in,
right? And they’re unwilling to compromise on those standards, and
would rather compromise on maybe the accessibility or the affordability
of the first generation of the device, hence the almost comical price,
right, of their first headset. And I’m very excited to see whether
their thesis is correct, or more correct than Meta’s. So, we’ll find out.
01:01:15 - Speaker 2: Yeah, well, I guess now is the right time to be a
little more future facing and to react to Apple’s recent announcement
of the Vision Pro, which is their long awaited entry into this space.
All these other ones we mentioned so far are either defunct platforms
like Google Glass or current platforms like Meta Quest or Meta Oculus,
I’m not quite sure of the right naming there.
But now Apple said they’re going to do it. They’ve shown kind of their
vision for things and let people try the demo, and now they’re
basically, I think, trying to get developers excited to build for it.
So certainly, as a person who’s been working in this space for a long
time, I want to hear your reaction to their approach generally, the
hardware, the software, etc. But I’d also like to know just how it
affects your business, or how you think about it. Certainly it’s good
news to have the largest technology company in the world getting
heavily into this space, but what do you expect in the near future for
your business? Do you feel invigorated by this?
Does it bring new attention to what you’re doing?
01:02:14 - Speaker 1: Yes, this is only good news. The fact that Apple
has entered in this way, that feels like it’s very central and very core
to their plans for the future of Apple.
It’s not a peripheral device, it’s not a new pair of headphones. It
feels like something that they want to turn into a pillar of the
company, you know, going forward.
That is all very exciting, and that is all very positive for our business.
I mean, my reaction to the actual unveiling of the device, it’s
complicated, it’s not unequivocally positive or celebratory. I think
that a lot of people, myself included, had been hoping that Apple would
pull a rabbit out of a hat.
That they would be able to circumvent the laws of physics in some way
that, you know, no one else had thought of or figured out, or that
they would make some really radical design decision where they would
throw away something everyone thought was absolutely critical to this
category of device.
And thereby, you know, make this huge step change in some of the
tradeoffs that other companies had to make in order to retain this
thing, whatever the thing was, right? So Apple famously is always
getting rid of features that everyone else is not ready to give up yet,
like the CD-ROM drive, right, or the iPhone having no physical keyboard, and so on.
01:03:40 - Speaker 1: Yeah. And so, one unfair characterization, but
one that somewhat captures my initial feeling when I saw the headset,
was that it kind of felt like, back when Apple released the iPhone,
they had instead released a BlackBerry, but with a retina display. That
with the
current headset, it feels a little bit like they decided we’re going to
take essentially the same paradigm that everyone else has been working
with, and just crank the knobs up on every single quantitative
characteristic all the way up as far as the existing supply chains will
allow us to. And that’s their strategy.
I mean, to be fair, they got rid of the physical hand controllers.
They are going all in on an eye-tracked input system, and, like,
there’s absolutely a quality that comes from quantity, right? Like, if
you make something so fast and smooth and reliable, and it feels so
good, you can get a step change out of it. But I don’t know what it
would have been that Apple could have done drastically differently,
which is the whole point. Like, I don’t work at Apple, I’m not Steve
Jobs or Jony Ive, but now we know, OK, they decided not to take that
route, or they couldn’t figure out a way to take that route.
And so, I think this is incredibly validating for all the existing
players. I think this is very validating for Meta, right? It means that
Meta can proceed with their hardware roadmap, whatever, you know, it was
gonna be for the next couple of years, and they don’t have to throw all
that away because Apple came out with something that, like, made all
that obsolete.
Yeah, so, like I said a bit earlier, I’m very curious to see what the
actual impact, for user adoption, for the market response, of these
qualitative improvements that Apple has made will be. And the initial
reviews from, you know, tech journalists, from the media, have been very
positive, people saying that it essentially looks like you’re looking
through maybe like a thick pair of safety goggles. It doesn’t feel like
you’re looking at a digital display at all, which is incredible, you
know, if that’s in fact what Apple has been able to accomplish, that in
itself is like a really incredible achievement. Now what we’re gonna
01:06:04 - Speaker 2: And as you said, that’s really a continuation of
that same insight that the Quest had with, like, let’s try to repurpose
these external cameras and see if some kind of pass-through will work.
And there have been other augmented reality approaches that were based
more on projection, but then you always have the problem that the
digital part of the display is just too hard to see, especially in
environments with a lot of ambient light.
And so in a way, this is, yeah, something Meta had already figured out
and was investing in, and Apple took that idea. I don’t know the timing
here exactly, but they came out with something that wasn’t, let’s throw
away what you’ve figured out,
but actually let’s take what you have already figured out and build on
it a lot and make it better, as you said, crank the knobs up, apply
their industrial design and their supply chains and their willingness
to spend, and all that sort of thing, to see how far we can take it.
That seems good, but yeah, in a way, for all those other platforms I
listed, it seems to validate the work they put into it. In some cases
what seemed like a huge, you know, money loser, well, it was a huge
money loser for all those investors in Magic Leap or whatever, but a
lot of exactly what they were doing there is what Apple is now doing,
with the benefit that, you know, technology has moved on a little bit.
And they can, yeah, try to take it to the next level and see if
they can finally be the one to get it over the hump into something that
becomes a must-have device for some demographic of people.
01:07:30 - Speaker 1: I mean, that's on the hardware side. The software
side, I think, is much more radical, the direction that Apple is
taking, and that's also, as a developer, an area that I think has not
received the amount of attention that it needs for the technology to
succeed. And I hope that Apple is bringing to this area the investment
and the attention that it needs.
On the software side, what Apple is doing is, we talked about this
before, they're really doubling down on 2D content. I mean, as an
initial set of use cases, right, they're really doubling down on
watching movies, on running iPad apps, on web browsing, on maybe
photos. They're even baking into their core conceptual model for what a
spatial app is the concept of a floating window, like a floating 2D
window, right, through their windows and immersive spaces in the
framework. And so that's a big departure from what, for example, Meta
has been pushing for. Meta has been pushing for developers and for
users to go all the way on day one to this fully immersive, fully 3D,
yes, we have to use the word, metaverse; that kind of captures what
their ambition is. Whereas Apple has made this incredibly powerful and
incredibly high-resolution 3D, you know, spatial computer for you to
look at flat things on, right? And that is, I think, a very interesting
decision that they've made. People have noticed this: like, if you
search for the term virtual reality on Apple.com, nothing comes up.
It's a forbidden word, and instead the term spatial computing is, like,
in every paragraph.
And so Apple is, in my mind, trying to make the smallest possible
conceptual leap for their very large and very enthusiastic and very
well-trained user base of iPhone users, iPad users, MacBook users, the
smallest possible conceptual leap to this new paradigm. And my hope is
that that is just the first step we will have to take, the first step
of many, which then eventually leads to a much more fully spatial, a
much more natively spatial operating system and ecosystem of apps. But
this is where they're starting, and it's a distinct strategy from
Meta's.
01:10:11 - Speaker 3: That's a very interesting observation. I wonder
if it's in part motivated by Apple's desire to have a controlled and
therefore consistent framework. They basically don't want app
developers YOLOing their own things, for a variety of reasons, whether
that's user interfaces or payment rails or whatever, right? And maybe
it was just too much of a bite to develop a fully general-purpose,
fully 3D application framework, not just from, like, an implementation
and rendering perspective, but just, like: you ask how it should work,
how it should look, making it suitable to all the use cases. Whereas
they're in a very good position to transliterate their 2D UI skills
over to a 2.5D-type experience.
01:10:54 - Speaker 1: Absolutely. I've been watching some of the
developer videos from WWDC, and Apple has many and very specific
opinions on what your spatial app should look like, down to the number
of points: like, 60 points is the minimum size for an interactive
element like a button. And even being able to say that, to make a
statement like that, doesn't make sense in a, like, super-open-world
spatial canvas app development environment, cause, like, what would
that even mean? But in Apple's framework, like, they can say that,
because they expect your app to always face the user, and to scale
dynamically as the window moves away from the user so that the apparent
angular size of elements remains essentially the same. And they're
going to make it very easy for you to follow those guidelines, and, I
guess, more difficult to break them, or at least you'd better have a
very strong and well-worked-out case for breaking them.
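As an aside for developers, the dynamic-scaling behavior described here can be sketched with a little trigonometry. This is a minimal illustration, not Apple's actual implementation; the function names and the linear-scaling rule are assumptions.

```python
import math

def apparent_angle(size: float, distance: float) -> float:
    """Visual angle (radians) subtended by an element of a given size
    viewed from a given distance (same length units for both)."""
    return 2 * math.atan(size / (2 * distance))

def scaled_size(base_size: float, base_distance: float, new_distance: float) -> float:
    """Scale an element linearly with distance so its visual angle
    stays exactly constant as the window moves away from the user."""
    return base_size * (new_distance / base_distance)

# A button sized 60 units at 1 m, moved out to 2 m: doubling its
# rendered size preserves the apparent angular size exactly.
grown = scaled_size(60.0, 1.0, 2.0)  # 120.0
```

Because the visual angle depends only on the ratio of size to distance, scaling linearly with distance keeps that ratio, and hence the on-retina size, fixed, which is what makes a blanket "60 points minimum" rule meaningful at all.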
At the same time, they are partnering with Unity to make it easy for
existing 3D software developers to port their experiences over into
what Apple is calling fully immersive mode, where your third-party app
can take over the entire virtual space. For games or for existing
experiences, you know, they're going to have to accommodate those kinds
of experiences. But yeah, the sort of default stance that Apple has
toward developers is: we have figured out the exact perfect framework
for what spatial software is, and it's very specific, and we think you
should really consider following it.
01:12:31 - Speaker 3: I guess they might be thinking, we will figure it
out, and in the meantime, we don’t want you trying anything weird. So
you will build the apps like this, and then eventually we will release
an updated framework and then you’ll build your apps like that.
01:12:44 - Speaker 1: I think you’re right. I think you’re right. I
mean, you have much more experience working with Apple and developing
for Apple, and sort of the pros and cons of Apple’s particular approach
to building a platform, right, and building an ecosystem.
Apple has done a great job. You know, it might be frustrating in
certain moments for developers or for users, but overall it works, and
so I would err on the side of, like, Apple is taking the right approach
here, even if that comes at the cost of more experimental ideas, or at
the cost of certain paradigms that people who are really enthusiastic
about the past and the future of computing hope could come into
existence or become more broadly adopted. I would say that first and
foremost, let's make something comfortable and good, even if it is not
as exciting or as wide-ranging in its capabilities as we might hope,
because it's still to be proven in this space that a product like that
can be made, right? And that it can do well in the market. And in
parallel, we do have Meta, we do have the Quest devices, and an
ecosystem in which you can develop more adventurous apps and games and
experiences, and, you know, you can very easily take the things that
you learned from those experiments on the Meta platform and bring them
over to Apple. If you show them, like, hey, this works, it breaks all
of your guidelines, but it works, right? I'm hoping, I'm thinking that
at some point Apple would say, OK, that is valid and we can accommodate
that.
01:14:25 - Speaker 3: Yeah, and to be clear, I don't come at this too
much from the ought angle, in the sense of is versus ought. You know,
if I had my way, Apple would spend whatever, $10 billion, developing a
platform and then allow me to do whatever I want with it, right? But
the is is: in order to develop a platform of that expense, they need,
you know, to have reasonable economics around it, and that implies
other things and so on and so forth, right? I think it's just important
to understand, and it's a key lesson from the mobile era, that this is
as much an economic system as a software system.
Because of the reasons that I just mentioned, it's now understood to be
an incredible economic opportunity to control a platform like this. We
kind of got the desktop and open internet stuff by accident, before
people really figured out what was going on. And at that point, the cat
was out of the bag and you couldn't come in and say, you know,
whatever, you can only publish Mac apps through the App Store or
something. But for these new platforms, the economic situation is very
different, and I think that implies differences in how the system
works. I think it's just useful to be aware of it.
And on the topic of hardware and performance, listeners will know that
one of my big hobby horses is performance, and especially latency. So
in some ways, the AR/VR systems are a vindication of that interest,
because, as you alluded to earlier, they really have to be very low
latency. Another kind of question I had, around the human eye and how
it relates to these systems, is, like, vision health. I don't actually
know how exactly these work. Like, is this equivalent to looking at a
piece of paper that's, like, an inch away from your face all the time,
or is it more like looking off into the distance, and how does that
implicate eye health?
01:16:00 - Speaker 1: So, OK, I'll start with the eye health question.
Yeah. So the experience of looking through a headset is kind of like
looking at something in the middle to far distance. I think that's kind
of how the fixed optics are set up, and the challenge is actually that
you cannot focus on objects which are a meter or a meter and a half
from you. Your eyes try to, both through accommodation, through the
lenses, and through vergence, which is the fact that your eyes point
inward to focus on the same point as it gets closer to your face, but
the hardware does not accommodate this, at least the existing
generation of displays and lenses. And so, I mean, this is bad: your
body is trying to do something, right, that in reality would result in
a clear image, and in this case results in a fuzzy image or in
misaligned images. And so there have been experiments, Meta has
published prototype hardware devices that would address some of these
challenges, but these are really far away from any kind of production
readiness. So, I don't know, and I think this is a really important
area for, you know, platform owners to research and to study and to
understand, because our sense of sight is quite important. If you're
gonna get younger people to use this, it could have developmental
implications. These are all really tricky and, like, important topics
to get right.
On the question of performance: I recently read about the fact that
visionOS is a real-time operating system. This term has a very specific
meaning. I was kind of new to this, but as a developer in this space,
as soon as I read the definition, it made a lot of sense to me, and I
was shocked, like, all this time I've been developing not on a
real-time operating system. So there are these very hard constraints on
how long a cycle of processing can take, right? Like, you cannot have
processes that maybe take 10 milliseconds on average, but then
sometimes will stretch out to 100 or even longer. And Apple has put so
much into this, cause Apple can do this: they have the resources and
the technical expertise to essentially write, you know, a whole new
operating system paradigm from scratch in order to address latency and
responsiveness. That's something that I know Meta tried to do. There
were internal memos that came out that said, you know, Mark Zuckerberg
really wanted to get away from Android, but they weren't able to, so
far, at least. And so, even beyond the hardware performance, which of
course is superlative, right? Apple's custom silicon is ridiculously
fast and power-efficient compared to everything else. They've crammed a
laptop-class processor in here, alongside a dedicated sort of
sensor-processing chip, the R1. But on top of all that, if your
operating system doesn't make it easy to render at a very steady frame
rate, then it doesn't matter how good your hardware is; the hardware is
not the only factor here. So, Apple has put a lot of energy into making
sure that this thing is smooth and comfortable.
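To make the vergence point above concrete, here is a small back-of-the-envelope sketch, not from the episode: the inward rotation of the eyes depends on the distance of the point they converge on, so a virtual object rendered nearer than the headset's fixed focal plane demands a vergence angle that the fixed optics never account for. The interpupillary distance and focal-plane values are assumed, typical numbers.

```python
import math

IPD_M = 0.063          # assumed typical interpupillary distance (~63 mm)
FOCAL_PLANE_M = 1.3    # assumed fixed focal distance of the headset optics

def vergence_angle_deg(distance_m: float) -> float:
    """Total inward rotation (degrees) of the two eyes when they
    converge on a point at the given distance."""
    return math.degrees(2 * math.atan(IPD_M / (2 * distance_m)))

# An object rendered at 0.5 m asks the eyes to converge several degrees
# harder than they do at the focal plane, while the lenses still force
# focus at the fixed plane: that gap is the vergence-accommodation conflict.
mismatch_deg = vergence_angle_deg(0.5) - vergence_angle_deg(FOCAL_PLANE_M)
```

With these assumed numbers the mismatch is a few degrees, which is exactly the signal the visual system expects to be accompanied by a matching change in focus and never gets.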
01:19:11 - Speaker 3: Yeah, the Zuckerberg story is very interesting,
because I've tried for, like, basically my whole career to write
high-performance software, and I found that when you're working on
large projects, it's basically impossible to have really fast software
unless you have one of two conditions. One is you have basically a
dictator who's absolutely fanatical about it. The example that comes to
mind is Steve Jobs putting the iPhone prototype in the fish tank and
seeing the bubbles and saying that, you know, it's too big because
there's air inside; like, that level of attention to detail and
insistence that it happens. Or you have a system in place that enforces
it. So one of the ideas, or maybe fantasies, I've had about software
development frameworks is a framework where, to use the VR example,
your program gets 10 milliseconds to render the frame, and if it takes
11, it just gets killed, and that's game over. You see, like, a game
over screen, and you have to start again. And you can imagine, if you
spent your entire time developing software in that environment, that
might be enough to actually make it fast. But otherwise, my experience
is, despite the incredible performance enhancements we've had on the
hardware side, software just ends up being kind of slow. You know,
there's, like, this equilibrium process where that's the natural
resting place, and slowness just accumulates. So I think the only way
to fight it is to have, you know, basically a framework-level or
system-level enforcement of it happening. Obviously, in addition, you
need the hardware to support it, and you need the programming
framework, programming language, and library support for it, but I
think you need the additional, like, social layer too.
So I'm curious if they try to do something like that. A simple version
would be: they have App Store review, and a hard criterion is, you
know, you can't have any frame misses. That's not a criterion on iOS;
they don't like it, but they'll still publish your app. But maybe here
they say, you know, in this case you're gonna make people sick, they're
gonna get motion sickness, you can't have it, you know: rejected. I
don't know, we'll see.
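The hard-deadline enforcement sketched above could look something like this in code. This is a toy illustration of the "miss the budget and it's game over" idea, not any real platform API; the names and the 10 ms budget are assumptions taken from the conversation.

```python
import time

FRAME_BUDGET_S = 0.010  # the hypothetical 10 ms hard deadline per frame

class FrameDeadlineExceeded(Exception):
    """Raised when a single frame blows the hard rendering budget."""

def run_frames(render, n_frames: int) -> None:
    """Call the render function once per frame, enforcing the budget:
    one late frame and the program is 'killed' (here, an exception)."""
    for frame in range(n_frames):
        start = time.perf_counter()
        render(frame)
        elapsed = time.perf_counter() - start
        if elapsed > FRAME_BUDGET_S:
            raise FrameDeadlineExceeded(
                f"frame {frame} took {elapsed * 1000:.1f} ms, game over"
            )
```

A framework built this way would make missed frames impossible to ship rather than merely discouraged, which is the system-level enforcement being described, as opposed to the advisory profiling tools most platforms offer today.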
01:21:05 - Speaker 1: I think they will have to have that requirement,
you know, cause I don't know of a way that you could enforce that on
the software side, right? You could, like, force the app to render at a
lower resolution maybe, like, uh, you know, on a systems level. But
certainly Meta has that as one of their non-negotiables for admission
to their app store, and I'm all for it.
In general with spatial computing, there's a level of trust that the
user is placing in you as a developer and as the platform owner that
most software developers have not had to handle, because on a phone, if
the app crashes or freezes, it's fine. I mean, it's annoying, but it's
not going to make us sick. It's not going to viscerally affect the
central nervous system, whereas in the case of any immersive software,
you're going to directly put their brain in a state that is
uncomfortable or even harmful, right? And so there's a kind of
responsibility, beyond all the others, like data privacy or just pure
functionality, that you take on: first for the basic comfort and
well-being of the user, but then also for the design, the aesthetics,
the qualities of the virtual environment, and how those things make the
user feel, because you're taking over their entire field of view. And I
think that's something that Apple is gonna come in and just really
raise the bar on. That's something I'm looking forward to.
I think that this space has adopted a lot of the standards from video
games, because the underlying technologies are so similar. Like with 2D
video games, a lot of the sort of design standards have also been
ported over: you can have these, like, really low-quality environments,
you can have, like, you know, missed frames or whatever. But actually,
you have a very different relationship, as the developer of an
immersive software title, to your users than, like, Fruit Ninja on the
iPhone does to theirs.
01:23:01 - Speaker 3: Yeah, great point.
01:23:03 - Speaker 2: Maybe that's a good bridge to one of the last
things I was really curious to ask you about, which is the social side.
I think you touched on this, yeah, talking about comfort, which is, I
think we don't really know yet, but basically people often feel
uncomfortable wearing headsets, not because of, I don't know, it's
heavy or there's wires or something, but because they know they're in
an immersive environment that's cutting them off a little bit from
their surroundings. Even if you're alone in your home, that's very
weird, it's a very strange thing. I think somehow, at some low primal
level, we're not that comfortable with that. And throw in now the
social side of it, which is putting on one of these headsets in a
setting like an office, or on a plane, or anywhere else, and the
reaction of others around you, because they sort of know that it's
signaling you are cut off from your environment. And it's a very
challenging thing.
I’ll link out to an HCI paper from a few years back, which was
essentially looking at the social acceptability of VR on a plane, which
you would think, OK, this is a really simple case, right? Everyone’s in
their seats, it’s a long international flight, people are just watching
movies, they’re zoning out, they’re sleeping, who really cares? But in
this study, they determined that someone watching a movie in VR on a
plane was extremely creepy to everyone around them.
And this was even though the VR software had some kind of cameras or
something that would give you a little bit of peripheral vision, so if
someone wants to get your attention, they can do that relatively
easily. But that didn't matter; everyone was just super weirded out by
it. So I feel like these things are a bigger hurdle than we might
think. Or maybe, because we've now hit the baseline of input devices
and displays, and, you know, motion tracking works well enough that you
don't get sick, now this other piece of comfort becomes a central
thing. What's your experience of that, since you have been putting on
these headsets as part of your daily work for a long time, and where do
you see the path forward?
01:25:08 - Speaker 1: I think this is actually a very, very critical
question for the technology. What is the future that we want, where
head-mounted displays are a thing, where smart glasses are a thing?
Cause I think, you know, now is the time to start asking that question
and coming up with answers.
I would say, I think that there are important values and there's
important information embedded in those social norms and social
reactions. Like, I don't approach this question as, how do we get
people over the hump, necessarily. Like, they don't like it right now,
what can we do to, like, make them like it? You know, I don't think
that's the right framing. It might turn out that there are certain
irrational biases, or some, you know, reasons that don't reflect any
intrinsic underlying values and are just about familiarity. Like
umbrellas, for example: apparently they were, like, reviled when they
were first introduced in London. Obviously that wasn't a valuable
social taboo to keep around. But I think there could very well be, you
know, an important signal in people's discomfort with head-mounted
displays that we should respect and understand, and I wouldn't
necessarily want to try to, like, work around or hack it or whatever.
So first of all, there's a wide range of how comfortable or
uncomfortable both the users and the people around them feel with
someone wearing a headset. That's just something to be said, because
I've had test users who very happily sat on a plane, you know, wearing
a giant VR headset and were excitedly sending me, like, test notes
about their experience and taking selfies, and others who would, like,
never in a million years do that. I think that the future I would want
to live in, and to have future generations live in, where smart glasses
are a thing, would be one where we don't wear them all the time; we
wear them at very well-understood and well-defined times and contexts
and places, to do things that, like, we care about: to accomplish
tasks, or to, you know, do work, or to have meetings, to communicate,
to collaborate. And then we can take them off and, like, also enjoy our
life in an unmediated fashion. And I think that so far the technology
industry has not been great at making tech that, like, encourages you
to then stop using it after the appropriate amount of time has passed
or the task has been completed. So that's something that we should
think about and work on. I also think it's very interesting, even the
distinction between a laptop and a tablet in a meeting.
01:28:04 - Speaker 2: Yeah, we even get that feedback from new users,
where they say, yeah, I just get a different reaction when I'm in a
meeting and I'm scribbling in Muse versus typing on my laptop, right?
01:28:14 - Speaker 1: And I think there's something to be learned from
that. Like, what is it about the tablet that others around you feel so
much more comfortable with, compared to a laptop? Is it as simple as
the fact that, because the tablet lies down flat on the table, others
can see what you're seeing as well? You know, is it as simple as that?
01:28:35 - Speaker 3: So my intuition is that it's basically being able
to understand, and therefore approve or disapprove of, what the person
is doing. Because with a tablet, A, as you said, it's flat and you can
see it, and B, a tablet is less powerful, it's less general-purpose,
including not having a keyboard. With a keyboard you can, like, type
really fast, and people basically can't see what you're typing, which
is not the case with handwriting.
And so that theory, to my mind, explains why, when you put on this
really powerful headset where you can do whatever you want and people
can't see it at all, that's when they get the most nervous. And then on
the other side, with something like a book, you basically know exactly
what the person is doing; there's very little that's mysterious, there
are no hidden powers. That's my kind of theory for how people react.
01:29:20 - Speaker 1: That makes a lot of sense to me, that, like,
there's some irrational but visceral nervousness in the people around
you that you could be up to something nefarious, right? If you have a
keyboard and, you know, you have this really powerful computer in front
of you, and you're kind of, like, in it, but also within their
proximity.
01:29:39 - Speaker 2: Right, are you typing into a chat? Oh my God, you
wouldn’t believe this idiot in this meeting I’m in right now.
01:29:45 - Speaker 1: Exactly, yeah. I don’t fully understand that
distinction, or I don’t understand that. I very much feel it, but I
don’t know, you know, how to explain it.
01:29:53 - Speaker 2: Well, I guess that's something for the industry
to figure out as it goes. Again, once you get past these baseline
concerns of can you see, and can you point to things and click on them,
OK, now we need to move on to these next steps.
And I do wonder, yeah, talking about the taking it off or the using it
in the right context, you know, certainly there you could talk about,
you know, how an office might differ from a plane, how that might differ
from a park or something.
But maybe there's also an element of, you know, I think of something
like over-ear headphones, with which you are immersing yourself in a
world and cutting yourself off from the world around you, especially if
you throw in the noise canceling, and there are probably times and
places where that's rude. Nothing seems weird about someone wearing
noise-canceling or over-ear headphones while they're taking a run.
People figure they're listening to music, they're listening to a
podcast; it makes sense for them to be not that connected, on an audio
basis, to their environment. Whereas there might be other settings
where, you know, it might be more rude or disconcerting or weird if
someone suddenly put on some headphones.
01:30:55 - Speaker 1: Yeah. I mean, again, Apple is really taking this
question very seriously. They sacrificed a lot of weight and comfort,
and a bunch of ergonomic sort of trade-offs were made, to have that
outward-facing eye display on the headset that shows the people around
you your face. And this is another situation where I think we just have
to wait and see what effect this has, what difference it makes to the
comfort of the people around you. Because I find, and I think I'm not
alone in this, people wearing sunglasses as they're speaking to me a
little bit disconcerting at times, right, compared to glasses, or even,
like, tinted glasses, right? You can't see where their eyes are
looking. And so, I mean, we've just never had a pair of VR goggles,
like, the situation where someone's talking to you, or is in the same
space as you, and you can see their eyes, but you also know that
they're seeing other things that you cannot see. Like, it's too weird
and new. We have to kind of just see where the chips fall on this one.
I mean, people inside Apple presumably know, but we don't yet.
01:32:04 - Speaker 3: Yeah, maybe we'll see some experiments with using
that forward-facing display in other ways. The physical analogy that
comes to mind is, imagine if a collaborator walked up to you holding a
piece of paper, like their project, vertically, so that you couldn't
see it, and then started talking to you about it, you know, look at
this document, and meanwhile your collaborator can see it perfectly,
but you can't see it at all. That's kind of the physical analogy of
what I'm seeing with these VR displays, but that could be solved,
potentially, using the forward-facing display. You could render
translucent, even just, like, outlines of windows, so that if the
collaborator, you know, control-tabs over to the chat screen to
complain about you, you can basically see that without it getting, you
know, too in your face. You know, we'll see.
01:32:45 - Speaker 2: It's true, actually: in many of the settings
where I've had the chance to try VR, which often are art projects or,
yeah, something like basically an arcade where you can go to play
games, or some kind of demo setup, there's very often a monitor nearby,
maybe a big one right next to you, so that everyone can see what you're
doing, what's on your screen. And that seems to improve things somehow.
Now, it's a very artificial setting in a way, because that's not how
you would do regular work. Most people don't want to see my note-taking
or whatever it is that I'm doing, nor do I specifically want to display
that to them. But yeah, you wonder if there's some variation on that:
some frosted-glass version where you kind of have a vague sense of what
they're up to, in the same way that you can kind of peek at the corner
of someone's screen without, like, sort of fully seeing what they're
doing.
Well, before we go, I'd love to hear: what does the future hold for
SoftSpace?
01:33:39 - Speaker 1: Good things, big things. So we've been working on
SoftSpace for quite a while now, always with the idea in mind that
eventually the hardware would catch up to our ambitions, and I think
we're there. I think especially with this new generation of hardware
coming out, not only from Apple but also from Meta, we are finally
going to be in a place to put our product to the test.
And so we'll be launching on the Meta Quest Store this summer,
mid-August, just in time for back to school. And following that, we'll
be talking to as many users as we can, getting people in the headset,
in the app, exploring their important use cases, letting them work on
the projects that they care about, and learning from them how this
brave new world of spatial computing can improve how they create, how
they communicate, and how they do the work that matters most to them.
01:34:35 - Speaker 2: Very exciting. We'll wrap it there. Thanks,
everyone, for listening. You can join us in Discord to discuss this
episode with me, Mark, and our community; the link's in the show notes.
And Elio, thanks for seeing the promise in this technology and working
on it for so long, even as maybe the rest of us are only starting to
realize what you saw early on.
01:34:55 - Speaker 1: Thanks, Adam. Thanks, Mark. It was really fun
talking through all this with you. I've been having such interesting
conversations with people like yourselves and others who, up until very
recently, maybe wouldn't have thought too much about spatial computing
but now are finding themselves intrigued, or intrigued again, and I
look forward to having many more conversations as we learn more about
where this is all headed.