Prototyping for an augmented future
I’ve spent more quality time in the past few months with Unreal Engine, Leap Motion, and the Oculus Rift DK2. After prototyping a few interaction concepts, it’s even clearer to me that VR is just a logical stop along the way to a fully augmented world. The writing is on the wall, and if you can see past all the fragmented buzz about virtual reality, IoT, machine learning, and conversational interfaces, a picture emerges where it all comes together. There is plenty of work ahead for designers worried about their jobs.
VR as the ultimate prototyping platform
Don’t get me wrong, VR is here to stay and will happily co-exist with AR. For many applications, the ability to step into another world is simply too compelling to ignore. We’ll hopefully soon see the release of a device that brings AR and VR together in an elegant and usable way. Let’s see what Magic Leap comes up with. We could have a really nice visual switch between the two modes, building on some of the initial thinking in the sketch below.
The other thing that has become evident to me is what a perfect prototyping platform VR provides for exploring AR concepts. You are not constrained by the technical limitations of your AR hardware, you can access the virtual environment from anywhere, and you can easily manipulate the content and the environment to suit your needs. This is extremely valuable when you are designing applications for specific environments and need to test different scenarios and concepts. Obviously, VR already provides some of these benefits today for architecture, environmental, and automotive design, but I’m convinced we’ve only scratched the surface. Have you seen some of those Magic Leap videos? You can already build and experience a lot of that using VR.
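To make that concrete, here is a minimal sketch of what such a prototype could look like in the browser, assuming three.js with WebXR support: a stand-in room plays the role of the real world, and a floating panel simulates the AR overlay you want to evaluate. All of the scene content is purely illustrative.

```typescript
import * as THREE from 'three';
import { VRButton } from 'three/examples/jsm/webxr/VRButton.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 0.1, 100);
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.xr.enabled = true;
document.body.appendChild(renderer.domElement);
document.body.appendChild(VRButton.createButton(renderer));

// A stand-in "real world": a simple room you can look around in.
const room = new THREE.Mesh(
  new THREE.BoxGeometry(6, 3, 6),
  new THREE.MeshBasicMaterial({ color: 0x404040, side: THREE.BackSide })
);
scene.add(room);

// A simulated AR overlay: a translucent panel floating at roughly eye height.
const overlay = new THREE.Mesh(
  new THREE.PlaneGeometry(0.6, 0.4),
  new THREE.MeshBasicMaterial({ color: 0x00ccff, transparent: true, opacity: 0.6 })
);
overlay.position.set(0, 1.6, -1.5);
scene.add(overlay);

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```

Swapping the placeholder room for a scanned or modelled version of a real space is what makes this useful as an AR testbed: you can try the same overlay concept in a kitchen, a factory floor, or a car interior without touching any AR hardware.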
Putting the pieces together
What strikes me when I read today’s tech and design news is the one-dimensional coverage of certain trends. Not many people appear to be talking about how the different technologies will come together, the bigger picture. I don’t doubt for a minute that people are actively working on it, but I have not seen many examples yet. More importantly, I have not seen a lot of concrete thinking on what this would mean for the user experience.
What is actually possible when you bring machine learning, spatial awareness, IoT, conversational UI, and augmented reality together? I am starting from the assumption that, hardware-wise, we will have something very similar to what Microsoft’s HoloLens offers and what Google is working on with Project Tango. With that and the above capabilities supporting it, we will have something that never really existed before: ‘hyper-contextual’ interactions. I’m not talking about GPS-based location awareness or NFC; I am talking about the ability to start an interaction with any object simply by looking at it. You can be presented with audiovisual information or content related to the object, or you can bring up a user interface to interact with it.
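As a rough illustration of the “look at it to interact with it” idea, here is how a gaze check could work inside a VR prototype like the one above, again assuming three.js. The registry and the InfoCard shape are hypothetical placeholders for whatever content service would sit behind a real system.

```typescript
import * as THREE from 'three';

// Hypothetical contextual content attached to an object.
interface InfoCard {
  title: string;
  actions: string[]; // e.g. ["play", "ask a question", "dismiss"]
}

// Hypothetical registry mapping scene objects to their contextual content.
const registry = new Map<THREE.Object3D, InfoCard>();

const raycaster = new THREE.Raycaster();

// Returns the contextual content of the first registered object the user is looking at.
function objectUnderGaze(camera: THREE.Camera, scene: THREE.Scene): InfoCard | null {
  // Cast a ray straight out of the centre of the viewer's field of view.
  raycaster.setFromCamera(new THREE.Vector2(0, 0), camera);
  const hits = raycaster.intersectObjects(scene.children, true);
  for (const hit of hits) {
    const card = registry.get(hit.object);
    if (card) return card;
  }
  return null;
}
```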
While the ability to overlay objects with information and content is an extremely tempting one, we’ll need to design for this with care. We don’t want to end up in a situation where our direct surroundings are plastered with random objects, banners, menus, and moving imagery, even though it would be an advertiser’s dream.
I’m looking for a scalable approach, where we decide when and how to engage with our surroundings. The above sketch shows two random ideas: the ability to ‘summon’ the user interface of a music player, and the ability to ask questions about an object while looking at it. Kind of like a ‘Siri on steroids’, which might even help Apple’s creation become a little smarter. While contextualising content and interactions is definitely not the answer to everything, it does have the potential to provide invaluable benefits in specific problems and situations.
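Sketching this as data makes the idea a bit more tangible: a ‘hyper-contextual’ request is essentially a voice query paired with the object you were looking at when you asked it. The types and names below are hypothetical, and speech recognition and the answering service are deliberately left out.

```typescript
// The object the user is currently looking at, as resolved by the gaze check above.
interface GazeTarget {
  id: string;   // e.g. "living-room.speaker"
  kind: string; // e.g. "music-player", "appliance", "artwork"
}

// A voice query bound to its gaze context at the moment it was spoken.
interface ContextualQuery {
  utterance: string;
  target: GazeTarget;
  timestamp: number;
}

function buildQuery(utterance: string, target: GazeTarget): ContextualQuery {
  return { utterance, target, timestamp: Date.now() };
}

// "Summoning" the music player: look at the speaker and ask about it.
const query = buildQuery('what am I listening to?', {
  id: 'living-room.speaker',
  kind: 'music-player',
});

// A real assistant would route this to the object's own service;
// here we just log the structured request it would receive.
console.log(query);
```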
The rules of engagement
I’d like to arrive at a design language with a common set of rules of engagement for how we interact with the world around us. This will cover the different interaction modalities we’ll have at our disposal, an object classification framework, the possible object responses (visual, audio, haptic, etc.), and a design system to help us quickly prototype the different interactions. The voice assistant sketch below shows a basic example of such an interaction.
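As a starting point, such a framework could be expressed as a small set of typed rules, pairing object classes with the triggers and response modalities they are allowed to use. The classes, triggers, and fields below are purely illustrative, not an existing standard.

```typescript
// Illustrative object classification and response modalities.
type ObjectClass = 'appliance' | 'media' | 'signage' | 'person' | 'environment';
type Modality = 'visual' | 'audio' | 'haptic';

// One rule of engagement: how an object of a given class may respond, and to what.
interface EngagementRule {
  objectClass: ObjectClass;
  trigger: 'gaze' | 'voice' | 'gesture' | 'proximity';
  allowedResponses: Modality[];
  maxVisualFootprint?: number; // fraction of the field of view an overlay may cover
}

const rules: EngagementRule[] = [
  { objectClass: 'appliance', trigger: 'gaze', allowedResponses: ['visual', 'haptic'], maxVisualFootprint: 0.1 },
  { objectClass: 'signage', trigger: 'proximity', allowedResponses: ['visual'], maxVisualFootprint: 0.05 },
  { objectClass: 'media', trigger: 'voice', allowedResponses: ['audio', 'visual'] },
];
```

The point of writing the rules down in this form is that they become something a prototype can enforce directly, which is exactly what keeps our surroundings from being plastered with uninvited content.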
I am looking at this mainly from a design point of view, and everything is a work in progress. We have a massive task ahead of us identifying where and how this will solve specific problems, and how it will improve the way we interact with the world around us. One of my next steps is to get the design community more involved and get the conversation started there.
Let’s start looking at the bigger picture. This will change everything.