With today’s prevalence of smartphones and tablets, we’ve started seeing some pretty cool web functionality developed specifically for these portable devices. Take, for example, this 360° video the New York Times posted, “36 Hours in Michigan’s Upper Peninsula”, which they created for their virtual reality app.
Taking advantage of these devices’ motion-sensing capabilities, a user can shift the frame of the video by moving their device from side to side. While this and other motion-actuated functionality, such as shaking a device to undo an action or tilting one to advance to the next page, can offer users efficient and even thrilling new ways to interact with content, such functionality can pose difficulties for people with certain physical or motor limitations. Some people may not be able to perform the gestures required to activate a particular function. Others keep their devices on a fixed mount and so don’t have the option to move or tilt them.
Accordingly, any functionality that is triggered by moving a device or by gesturing toward it (so that the device’s sensors, such as its camera, can pick up and interpret the user’s gestures) must also be operable through conventional user interface components. In addition, users should be able to turn off the motion-sensitive controls (note that this part of the rule may be satisfied by supporting operating systems that allow users to disable motion detection at a system-wide level).
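To make the idea concrete, here is a minimal sketch of what honoring both parts of the rule might look like in code. Everything in it is an assumption for illustration: the `looksLikeShake` heuristic, its threshold values, and the `undoLastAction` handler are hypothetical, not part of any standard API — in a real browser you would feed the detector from `devicemotion` events. The key point is simply that the motion path checks a user-controllable preference, and a plain button triggers the very same action.

```typescript
// Hypothetical accelerometer sample; in a browser this would come from
// a `devicemotion` event's accelerationIncludingGravity values.
interface AccelSample {
  x: number;
  y: number;
  z: number;
}

// Crude, illustrative shake heuristic (threshold and minJolts are assumed
// values, not standardized): count how often consecutive samples differ
// by more than `threshold` m/s², and call it a shake after `minJolts` jolts.
function looksLikeShake(
  samples: AccelSample[],
  threshold = 15,
  minJolts = 3
): boolean {
  let jolts = 0;
  for (let i = 1; i < samples.length; i++) {
    const dx = samples[i].x - samples[i - 1].x;
    const dy = samples[i].y - samples[i - 1].y;
    const dz = samples[i].z - samples[i - 1].z;
    if (Math.hypot(dx, dy, dz) > threshold) jolts++;
  }
  return jolts >= minJolts;
}

// A user-settable preference: the off switch the rule asks for.
let motionControlsEnabled = true;

// The application-specific action both input paths share (hypothetical).
function undoLastAction(): void {
  /* …undo the user's last action… */
}

// Motion path: only fires when the user has not opted out.
function onMotionSamples(samples: AccelSample[]): void {
  if (motionControlsEnabled && looksLikeShake(samples)) {
    undoLastAction();
  }
}

// Conventional path: a plain button wired to the same handler, e.g.
// document.querySelector('#undo-button')
//   ?.addEventListener('click', undoLastAction);
```

Because both paths end in the same `undoLastAction` call, someone who cannot (or prefers not to) shake their device loses nothing: the button does exactly what the gesture does.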
Take another look at that video from before. You’ll notice that in the top left-hand corner of the video frame, there are arrow buttons the user can operate in lieu of moving their device from side to side to shift the frame. By including this standard interface component as an alternative, the New York Times has ensured that everyone can enjoy a virtual tour of Michigan (FINALLY).
There are a number of noteworthy cases in which this rule does not apply.