It’s been almost a year now since I got my Oculus DK2 VR headset, so I wanted to reflect on the months since: experimenting, giving demos to friends and family, updating software and trying out a host of supported games. This is also a long-overdue follow-up to this previous post. With the commercial release of the Oculus Rift and the HTC Vive, and a growing number of supported games, things have become rather interesting.
It is evident to me that this technology has a lot of promise if you can see beyond the current hype. What is also clear is that it is by no means mainstream yet; it feels a lot like the early days of the web. The hardware in particular has a long way to go, and in my opinion needs an Apple-like player to come in and really push the boundaries in terms of materials and engineering. Half a kilo of plastic and electronics strapped to your face for more than an hour is just not a great experience, no matter how you put it.
My primary interest is in computer-generated VR spaces. Despite the push from content creators, I’m not (yet) a believer in VR video. This may partly be due to the lack of great content and the pace at which creators are adapting to the medium. I feel sorry for the many people whose only VR experience has been a Samsung Gear VR headset playing a poorly shot video from a drone equipped with a 360° camera rig. Aside from the sometimes sickening sensation it can generate, it’s just not the same as being able to lean in, move sideways and feel that you are part of the virtual space around you, with objects that appear to float in front of you.
Building on that, what has excited me most so far is Leap Motion’s Orion platform and what it can do in terms of building an entirely new interaction language and paradigm. Their impressive “Blocks” demo is a testament to that. The GIF shown above is something I mocked up in about an hour using Unreal Engine and the Getnamo Leap Motion blueprint plugin.
There is something magical about it, and at the same time it provides a degree of comfort: you can see the natural motion of your hands in front of you. In my opinion this beats the HTC Vive and Oculus Touch controllers, both of which augment the human body with a physical device, by a long way. I’m not sure we can draw a direct parallel with touchscreens and the stylus (remember Steve Jobs’ quote?), but there’s certainly something to it. We should start with the most basic, natural form of user input and explore it to its full potential, before adding any accelerometers, gyroscopes, buttons or trackpads.
Some excellent write-ups are starting to pop up here and there, such as this Medium post from Jonathan Ravasz on design practices, the blog of Eugene Chung, one of the people behind “The Rose and I”, and this New Yorker piece by Andrew Marantz on the challenges facing cinematographers and storytellers in VR.
To summarise: we’ve only scratched the surface. I’m excited.