In the Spotlight: Mobile Sensors Define Context
Author: Derek Kerton, Principal Analyst, The Kerton Group
Expanding on the Telecom Council’s “Context, Sensors, & Personalization” Meeting last Thursday (4/3), industry analyst Derek Kerton explains that developers need to be less reliant on GPS and should build apps that leverage the sensors, platforms, and APIs built into mobile devices.
Isn’t it about time we got past GPS? While it is true that location is probably the strongest signal developers have for identifying a user’s context, that very strength has made it something of a crutch.
Just because it is one of the best signals, and can now be easily polled, does not mean that it should be the only signal. Developers can build much better apps by leveraging the increasing number of sensors, platforms, and APIs built into recent and upcoming mobile devices.
An example of using additional sensors to improve an app is a “safe driving” app that blocks SMS and app use while driving. GPS can determine that the phone is moving at vehicular speed along roadways, but it cannot reveal whether the user is in a bus or a car. Intelligent use of the microphone and the accelerometer, however, could tell the developer whether the sounds and g-forces more closely match a bus or a car. The SMS lock could then be lifted when the context is determined to be a bus or other public transit. This example illustrates that there is more to context than GPS can reveal.
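As a toy illustration of that idea (the feature choices and thresholds below are invented for this sketch, not taken from any shipping app), a classifier might combine accelerometer variance with cabin noise level:

```python
def variance(samples):
    """Plain variance of a list of sensor readings."""
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

def classify_ride(accel_g, mic_rms_db):
    """Guess whether the rider is in a bus or a car.

    accel_g:    recent vertical accelerometer samples, in g
    mic_rms_db: average microphone loudness, in dB

    The heuristic: buses tend to sway more and have louder,
    diesel-heavy cabins than passenger cars. Both thresholds
    are purely illustrative.
    """
    sway = variance(accel_g)
    if sway > 0.05 and mic_rms_db > 70:
        return "bus"
    return "car"
```

A jittery, loud ride classifies as a bus (so the SMS lock could be lifted); a smooth, quiet one as a car. A real system would use many more features and a trained model, but the shape of the decision is the same.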
On a wearables panel held by the Telecom Council of Silicon Valley, Sam Massih, Director of Business Development at InvenSense – a maker of MEMS sensors for mobile devices – explained how sensor platforms in mobile devices are evolving. The company has moved from standalone sensors like gyroscopes to multiple sensors on a single chip: gyroscope, accelerometer, and compass. But Mr. Massih explained that the real innovation is that the latest silicon includes on-chip logic and algorithms that determine (OK, best-guess) not just g-forces and direction, but what the user is doing, based on the company’s R&D work mapping sensor data to activities.
For example, such a single-chip solution could not only confirm that the user is in motion, but could also estimate, from patterns in that motion, whether the user is in a vehicle, walking, running, cycling, or skiing. In the near future, developers will be able to tap not only into raw sensor data through the phone’s APIs, but also into sensor platforms that report the user’s most likely activity at the moment. Even better, these platforms are designed to run non-stop on their own low-power processors, without pulling the power-hungry main processor out of sleep just to monitor a sensor.
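To make the idea concrete, here is a minimal sketch of the kind of mapping such a platform performs. The function name, inputs, and rules are all hypothetical – vendors ship tuned, proprietary models on-chip – but the input-features-to-activity-label shape is the point:

```python
def guess_activity(speed_mps, step_hz):
    """Map coarse motion features to a likely activity label.

    speed_mps: estimated ground speed in metres/second
    step_hz:   frequency of rhythmic accelerometer peaks
               (footfalls or pedal strokes), 0 if none detected

    Illustrative rules only: fast motion with no rhythmic
    signature suggests a vehicle; rhythm plus speed separates
    walking, running, and cycling.
    """
    if step_hz == 0:
        return "in_vehicle" if speed_mps > 5 else "still"
    if speed_mps > 4:
        return "running" if step_hz > 2.2 else "cycling"
    return "walking"
```

An app would poll a platform API like this a few times a minute and adapt its behavior, instead of streaming raw accelerometer data through the main processor itself.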
When Apple released the iPhone 5s in September 2013, it included the M7 motion co-processor, which is also inside the most recent iPad devices. The M7, much like the InvenSense chip described above, includes a gyroscope, an accelerometer, and a compass, as well as processing intelligence to derive contextual cues from the motions of the device. The M7 can run constantly (even when the phone is asleep) and offloads sensor work from the more powerful main processor, allowing the main A7 chip to spend more time asleep and greatly reducing the power cost of constant contextual sensing. Apple exposes this chip through the Core Motion APIs of iOS 7. Developers can use it to detect the user’s current and past activity, and for dead reckoning in indoor (weak-GPS) location and mapping scenarios. Currently, this type of sensor is driving the exploding market for wearable accessories like the Fitbit, but with these low-power chips, that capability can become part of the smartphone platform itself.
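The dead-reckoning use case is worth unpacking. A minimal sketch of pedestrian dead reckoning, assuming the co-processor has already detected individual steps and the compass supplies a heading per step (stride length and the step-detection itself are taken as given here):

```python
import math

def dead_reckon(start, step_headings_deg, stride_m=0.7):
    """Advance an indoor position estimate from step events.

    start:             (x, y) position in metres (x east, y north)
    step_headings_deg: compass heading in degrees for each detected
                       step (0 = north, 90 = east)
    stride_m:          assumed stride length; a real system
                       calibrates this per user

    Each accelerometer-detected step moves the estimate one stride
    along the compass heading -- no GPS fix required.
    """
    x, y = start
    for heading in step_headings_deg:
        rad = math.radians(heading)
        x += stride_m * math.sin(rad)  # east component
        y += stride_m * math.cos(rad)  # north component
    return (x, y)
```

Ten steps due north from the origin land the estimate about seven metres north. Drift accumulates, which is why real implementations fuse this with map matching or occasional location fixes.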
Mobile R&D company InterDigital is proposing solutions like “user adaptive video streaming”, which adapts the resolution and bandwidth of streaming video based on the user’s context. This has already been done based on the device (we don’t bother sending 4K video to a 480 x 320 screen). But InterDigital makes greater use of the sensors, concluding, for example, that there is no point sending a 1080p stream – even to a 1080p phone – if that phone is shaking around in a walking person’s hand, as indicated by the accelerometers and sensor platform. A user who is walking while watching would probably not notice a shift down to a lower resolution, which saves bandwidth and buffering pauses.
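A rough sketch of that decision logic (the resolution ladder and shake threshold are illustrative assumptions, not details of InterDigital’s actual system):

```python
def pick_resolution(screen_height_px, shake_var):
    """Choose a streaming rung from device capability plus motion context.

    screen_height_px: the display's vertical resolution
    shake_var:        variance of recent accelerometer samples;
                      large values mean the phone is bouncing in a
                      walking hand (threshold is illustrative)
    """
    ladder = [480, 720, 1080]
    # Device-based adaptation: never exceed what the screen can show.
    best = max(r for r in ladder if r <= screen_height_px)
    # Context-based adaptation: step down one rung while shaking.
    if shake_var > 0.02:
        idx = ladder.index(best)
        return ladder[max(0, idx - 1)]
    return best
```

A still 1080p phone gets the full 1080p stream; the same phone in a walking hand gets 720p, which the viewer is unlikely to notice.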
The release of these types of sensors, sensor platforms, and APIs makes 2014 a fairly clear line in the sand. From here onward, apps and services will be able to take advantage of much more granular, persistent, and intelligent context awareness. Enter the era of micro-context, powered by your friendly neighborhood sensor.