You’re at home playing a virtual reality (VR) game on the Oculus Rift, dodging zombies like a pro. But then you step too far back or look behind you, and suddenly you’re frozen in space, as the system’s infrared cameras can no longer see the lights on your goggles and it loses track of you. Instant brain food. Now, researchers have come up with a way to spare you such a frustrating end by using standard Wi-Fi technology to enhance VR’s tracking abilities. In addition to improving VR, the technology could also help track robots or drones and streamline motion capture for movies.
VR enables a user to move through a virtual 3D world projected through the video screens in the system’s headset. To track the user’s movement, the Rift uses one or more infrared cameras in a room, often on tripods. The headset has accelerometers to measure tilt, and it has infrared lights that the cameras use to track movement forward, back, or sideways. Another VR system, the HTC Vive, tracks movement by projecting infrared light from devices in the corners of the room that are detected by sensors on the headset. A related technology, called augmented reality (AR), maps virtual features onto the wearer’s view of the real world, so a user’s living room may be inhabited by virtual monsters. Microsoft’s HoloLens AR system uses several outward-facing cameras on the headset to track the user’s movement in relation to the environment.
Such systems have their limitations, however. For VR games to work without glitches, users often need to stay within a few square meters, and the infrared sightlines can’t be blocked by furniture, by other people, or by the user turning away. Microsoft’s AR system doesn’t work in all lighting conditions, it can be confused by blank walls or windows, and it can’t track your hands if they move out of view.
A team of researchers from Stanford University in Palo Alto, California, wanted a simpler, cheaper, more robust system, so they turned to the common radio technology Wi-Fi. Wi-Fi has been used to localize people and objects in space before, but only with an accuracy of tens of centimeters, says Manikanta Kotaru, a computer scientist at Stanford. He and his colleagues thought they could do better.
Their solution, which they call WiCapture, requires two parts: a standard Wi-Fi chip, such as the one you might find in your phone, and at least two Wi-Fi “access points,” which are transmitters such as the ones found in home routers. Communication between the chip and a transmitter travels as high-frequency radio waves. To track a Wi-Fi signal source with millimeter-level accuracy, one needs to measure the time it takes a signal to travel from the chip to the transmitter with picosecond-level accuracy. However, the chip and transmitter run on different clocks, and no two clocks in Wi-Fi devices are perfectly synchronized.
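A back-of-the-envelope calculation shows why the timing requirement is so brutal: radio waves travel at the speed of light, so any timing error translates directly into a ranging error. This short sketch (illustrative arithmetic, not code from the paper) makes the scale concrete:

```python
# Illustrative arithmetic: how timing error maps to distance error
# for a signal traveling at the speed of light.
C = 299_792_458.0  # speed of light, m/s

def timing_error_to_range_error(dt_seconds):
    """Distance error implied by a timing error of dt_seconds."""
    return C * dt_seconds

# A 1-picosecond timing error corresponds to ~0.3 mm of range error...
mm_per_ps = timing_error_to_range_error(1e-12) * 1000
# ...while a mere 1-nanosecond clock offset already means ~30 cm.
cm_per_ns = timing_error_to_range_error(1e-9) * 100
print(f"1 ps -> {mm_per_ps:.2f} mm, 1 ns -> {cm_per_ns:.1f} cm")
```

In other words, even a nanosecond of unsynchronized clock drift, tiny by everyday standards, would swamp millimeter-level tracking, which is why the researchers needed a way to cancel the clocks out entirely.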
To get around this problem, the researchers took advantage of the fact that signals reach the transmitter through many paths. Some radio waves travel directly to the receiver to create the main signal, whereas others bounce off walls to create echoes. Kotaru wrote an algorithm that looks at signals from two different paths, identified by triangulating among the transmitter’s multiple antennas. Those signals are equally affected by clock asynchrony, so the algorithm can simply compare their relative change as the chip moves and ignore any drift between the clocks. Still, this method measures distance to only one transmitter; using two or more transmitters in combination allows the algorithm to use triangulation to track motion in two dimensions. (The researchers will eventually expand WiCapture to track motion in three dimensions.)
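The key observation, that a shared clock error drops out when two paths are compared, can be shown with a toy model. The sketch below is not WiCapture’s actual algorithm; it just demonstrates, with assumed distances and a random per-packet clock offset, that the *relative* phase between a direct path and a wall echo stays constant no matter how the clocks drift:

```python
# Toy illustration: a clock offset adds the same unknown phase to every
# propagation path, so the difference between two paths' phases is
# immune to it. Distances and wavelength below are assumptions.
import numpy as np

rng = np.random.default_rng(0)
WAVELENGTH = 0.06  # meters, roughly a 5 GHz Wi-Fi carrier

def path_phase(distance, clock_offset_phase):
    """Measured phase of one path: true propagation phase + clock term."""
    return (2 * np.pi * distance / WAVELENGTH + clock_offset_phase) % (2 * np.pi)

d_direct, d_echo = 4.0, 7.5  # direct path and a wall echo, meters
for trial in range(3):
    offset = rng.uniform(0, 2 * np.pi)        # clock drift varies per packet
    p_direct = path_phase(d_direct, offset)
    p_echo = path_phase(d_echo, offset)
    rel = (p_echo - p_direct) % (2 * np.pi)   # clock offset cancels here
    print(f"trial {trial}: relative phase = {rel:.4f} rad")
```

Because the relative phase depends only on the path-length difference, it changes in a predictable way as the chip moves, which is the quantity the algorithm can actually trust.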
To test the idea, scientists placed the Wi-Fi chip on a mechanical device that could move it with high accuracy in an office 5 meters by 6 meters with four Wi-Fi transmitters in the corners. As they moved the chip around in various patterns, WiCapture tracked its position to within a centimeter. Next, the researchers tried an office in which all the Wi-Fi transmitters were occluded by furniture or walls. As long as two were in the same room as the chip, WiCapture’s median error was still only 1.5 centimeters. Outside, the median error was again less than a centimeter, the team will report this month at the Conference on Computer Vision and Pattern Recognition in Honolulu.
“It was really nice to bridge work in the wireless community with work in the virtual reality community,” says Dina Katabi, a computer scientist at the Massachusetts Institute of Technology in Cambridge who was not involved in the experiment. Yuval Boger, a physicist and the CEO of Sensics, a VR hardware and software company in Columbia, Maryland, says “the need is real” for a robust, high-resolution position tracker. He notes that 1 centimeter is not accurate enough for head tracking, but it would work for hand tracking. In a fighting game, “I’m not sure I’m going to do any small delicate movements with a sword.”
The authors acknowledge that WiCapture still has a slower reaction time and lower accuracy than infrared cameras, but they think they can improve both by combining it with an accelerometer to add another source of data and fill in the gaps. In any case, Kotaru says, the technology is basically ready to use.
This article was originally published by Science.