Developing for HoloLens

Mar 30, 2016 | 8ninths | Augmented Reality | Design Patterns | Mixed Reality | Virtual Reality

Some Behind the Lens Info from our Development Team at 8ninths

VR/MR platform-independent tips:

  • Don’t child any visual object transforms under your HMD camera. The head-locked motion feels terrible. We sidestep this issue by placing an invisible transform locked to our camera that tracks the true cursor position, then having a separate visual cursor in world space smoothly lerp toward that position. Even when the difference is imperceptible, it helps with VR sickness.
  • VR/MR platforms are truly 3D in a way that Super Mario 64 just isn’t. More than ever, prototyping/greyboxing with actual literal physical objects is effective and logical. We’ve got a pile of taped-together cardboard to prove our commitment to this idea.
    • Some major issues this work can give you early red flags on: perspective, parallax, occlusion, relative scale, text legibility, spatial aesthetics.
    • Even thinking through your most basic interactions in physical space can be helpful—while you’re reinterpreting cursors three dimensionally, also be thinking about tried-and-true physical paradigms like hinges, knobs, buttons, hand tools. With HoloLens, Vive, and Oculus, we’re using our hands spatially again.
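The smoothed-cursor trick from the first tip can be sketched as a small Unity component. This is our illustrative sketch, not 8ninths’ actual code: the class name, field names, and smoothing value are all assumptions. It assumes an invisible “target” transform parented under the camera that is snapped to the raw cursor position each frame, while this visible cursor lives in world space and eases toward it.

```csharp
using UnityEngine;

// Sketch of the smoothed world-space cursor described above (names and
// values are illustrative). The visible cursor is NOT parented under the
// camera; it chases an invisible head-locked target instead, so head
// motion never drags it around rigidly.
public class SmoothedCursor : MonoBehaviour
{
    public Transform target;      // invisible transform tracking the raw cursor position
    public float smoothing = 10f; // higher = snappier; tune to taste

    void LateUpdate()
    {
        // Frame-rate-independent exponential ease toward the true position.
        // Even a subtle lag reads as "world-anchored" rather than "glued to your face."
        float t = 1f - Mathf.Exp(-smoothing * Time.deltaTime);
        transform.position = Vector3.Lerp(transform.position, target.position, t);
        transform.rotation = Quaternion.Slerp(transform.rotation, target.rotation, t);
    }
}
```

The `Mathf.Exp` form keeps the ease consistent across variable frame rates, which matters when editor and device frame times differ wildly.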

HoloLens specific details:

  • HoloLens uses an additive display, which means there’s no true black—black renders transparent. The lower the RGB values, the less visible the color. And importantly, the space your app is used in can dramatically affect the visibility of different colors. A dark space will make your holograms read much better than they would in a brightly lit one.
  • The FOV on the HoloLens is not equal to your eye’s natural FOV, and the difference between the holograms’ edges and your natural vision can be off-putting to first-time users. However, with well-designed apps, this issue all but disappears. Design with the FOV in mind. The rule of thumb we’ve seen thrown around is that if an object is ~1m away, it should be about the size of a shoebox. In practice, though, large objects seem to work just fine, as long as there is enough interior detail for the user to understand the context of what they’re seeing. (A screen full of a single color is confusing and disorienting, but a screen filled with a detailed scene, like in the HoloTours demo, looks and feels just fine.)
  • To situate and integrate holographic images/objects with real objects, a lot of lighting tricks can be applied to reinforce the illusion. Several are principles borrowed from painting. For instance:
    • Using shadow to indicate there’s contact between the holograms and “ground.” Note that because the display is additive, shadows must be reinforced with a pool of brightness around the shadow edges.
    • Using specularity to indicate the directionality of the light.
    • A pool of light at the foot of the object, or a soft gradient grid can reinforce the illusion of the holograms contacting the physical surface.


The gap between in-editor and on-device has never been so expansive, both in terms of how long it takes to cross that gap in build time, and, more importantly, in how different your designers’ initial conceptual models will be from what their designs look like through a Lens.

Your job as an engineer in this field must be to aid your designers as they work through this new and treacherous design space.

To this end, some practical advice for your day-to-day work:

  • Focus on building a robust prototyping ‘sandbox’ for your designers to work in. There are tools they’ll want that won’t be there. Their work is important, and if you can put a multiplier on their time, you should prioritize that.
  • Get your build process together. It can be complex and waste a lot of time. Organize it, optimize it, and document it. Teach it to as many people as could conceivably use it, so that artists and designers can see their work through a Lens as soon as possible and without distracting anyone else. Tighten up the design-test feedback loop that is such a crucial part of the creative process.
  • Get as much testable functionality into your editor as possible. Use #if UNITY_EDITOR precompiler directives to replace HoloLens-specific input with keyboard input in the editor so you can test the majority of your code without having to wait several minutes for a deploy.
  • Get as much editor-like functionality into your HoloLens build as possible to make repeat testing easy without a redeploy. Use obscure voice commands to enable useful developer tools like axis-aligned object movement, or scene resetting. Wish you could visualize your world’s invisible collision mesh to understand what’s causing that weird inconsistency? Take a minute to bind a keyword to toggle surface mesh visibility. Just do it. You’ll thank yourself later, and repeatedly.
  • You can debug on the device using Debug.Log and the Visual Studio debug mode. It’s not convenient, so get as much of your code tested in the editor as possible. That way you’ll know that your problem is isolated to your device-specific code.
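The #if UNITY_EDITOR swap mentioned above can look something like the following. The class and event names are our own for illustration; the pattern is simply to compile a keyboard stand-in in the editor and the HoloLens-specific input path everywhere else.

```csharp
using UnityEngine;

// Sketch of the editor-input swap described above (names are illustrative).
// Game logic subscribes to OnSelect and never cares where the event came from.
public class InputSource : MonoBehaviour
{
    public event System.Action OnSelect; // air-tap on device, space bar in editor

    void Update()
    {
#if UNITY_EDITOR
        // Editor stand-in: the space bar plays the role of the air-tap gesture,
        // so most code is testable without a multi-minute deploy.
        if (Input.GetKeyDown(KeyCode.Space) && OnSelect != null)
            OnSelect();
#else
        // Device path: hook up the real gesture source here
        // (e.g. a GestureRecognizer tap event on HoloLens).
#endif
    }
}
```

Keeping all the platform branching inside one input class means the rest of the codebase compiles and runs identically in both places.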
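The voice-command developer tools suggested above can be wired up with Unity’s KeywordRecognizer. The phrases, the `SpatialMapping` object name, and the actions below are our own placeholder examples, not a prescribed setup.

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech; // KeywordRecognizer (Windows/UWP only)
using System.Collections.Generic;

// Sketch of binding obscure voice commands to debug tools, as suggested above.
// All phrases and target names here are illustrative.
public class DebugVoiceCommands : MonoBehaviour
{
    KeywordRecognizer recognizer;
    readonly Dictionary<string, System.Action> commands =
        new Dictionary<string, System.Action>();

    void Start()
    {
        commands["debug show mesh"] = () => SetSurfaceMeshVisible(true);
        commands["debug hide mesh"] = () => SetSurfaceMeshVisible(false);
        commands["debug reset scene"] = () =>
            UnityEngine.SceneManagement.SceneManager.LoadScene(0);

        recognizer = new KeywordRecognizer(new List<string>(commands.Keys).ToArray());
        recognizer.OnPhraseRecognized += args => commands[args.text]();
        recognizer.Start();
    }

    void SetSurfaceMeshVisible(bool visible)
    {
        // "SpatialMapping" is a placeholder for whatever object holds
        // your spatial-mapping surface mesh renderers.
        var mapping = GameObject.Find("SpatialMapping");
        if (mapping == null) return;
        foreach (var r in mapping.GetComponentsInChildren<MeshRenderer>())
            r.enabled = visible;
    }
}
```

Phrases like “debug show mesh” are unlikely to be spoken accidentally during a demo, which is exactly why obscure wording works well here.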

– Kevin Maxon, Lead Unity Developer



© 2019 8ninths, Inc.