Intel Free Press reports on moves to harness smartphone sensors to help apps better personalize their services based on context…
By Intel Free Press
The 2013 film “Her” featured an operating system that could personalize itself to its user to the extent that the intelligence appeared anything but artificial. By taking cues from user data and its environment, the OS was able to respond to the user’s needs, even on an emotional level. While “Her” was science fiction, progress in contextual computing is bringing such intelligent systems one step closer to science fact.
From GPS sensors to accelerometers to gyroscopes, smartphones have long been capturing and using sensor data to enrich the user’s experience. Services such as Google Now combine user data with location to provide information on nearby attractions and travel times for calendar appointments, but much more can be done with smarter sensors.
For example, an ambient audio sensor combined with calendar and location data could give a mobile device the contextual awareness to decide whether to alert you with an audible cue or, if you are in a meeting or a movie theater, with a subtle vibration instead.
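As a rough illustration of that idea, the sketch below uses standard Android APIs (CalendarContract and AudioManager) rather than any particular contextual-sensing SDK. The MeetingAwareRinger class name is purely illustrative, and a real implementation would also weigh location and ambient audio before touching the ringer.

// Hypothetical sketch: silence the ringer when the calendar shows a meeting
// in progress. Requires the READ_CALENDAR permission.
import android.content.Context;
import android.database.Cursor;
import android.media.AudioManager;
import android.provider.CalendarContract;

public class MeetingAwareRinger {
    /** Switch to vibrate if any calendar event overlaps the current moment. */
    public static void applyContext(Context context) {
        long now = System.currentTimeMillis();
        Cursor events = CalendarContract.Instances.query(
                context.getContentResolver(),
                new String[] {CalendarContract.Instances.TITLE},
                now, now);  // instances that overlap "right now"
        boolean inMeeting = events != null && events.getCount() > 0;
        if (events != null) events.close();

        AudioManager audio =
                (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        audio.setRingerMode(inMeeting
                ? AudioManager.RINGER_MODE_VIBRATE
                : AudioManager.RINGER_MODE_NORMAL);
    }
}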
Sensor hubs
In 2012, Intel introduced a sensor hub, a low-power hardware solution dedicated to gathering data from multiple sensors. Other industry players recognized the value of a dedicated sensor hub and developed their own solutions. In 2013, Apple integrated the sensor data-collecting M7 chip into the iPhone 5S, and Qualcomm repurposed its Hexagon DSP to handle sensor data.
Intel has since incorporated sensor hubs into mobile-focused Atom chips such as Merrifield, Moorefield, and, most recently, Cherry Trail.
“The demand for the sensor hub is the awakening of contextual sensing where always-on sensing is required without [the smartphone] being engaged,” says Claire Jackoski, sensor planner within the client components group at Intel.
Analytics firm IHS estimated that 658.4 million sensor hubs shipped in 2014 and forecast that shipments will reach 1.3 billion units by 2017.
But what good is sensor hardware without software that knows what to do with it? To make sense of a collection of sensor data, developers need to put it into context.
Sensors that know what you’re doing
“Humans are very contextual by nature,” says Lama Nachman, principal engineer for User Experience Research, who runs the Anticipatory Computing Lab at Intel. “It’s very hard to come into somebody’s world without understanding the context.”
To this end, Nachman believes that for contextual sensing technology to be adopted, it has to be able to learn proper behavior from its users. She likens this to a child’s learning: as children make mistakes, “you have to teach them and then over time…you can see [them] evolve.”
To learn the proper behavior for an individual user, a device must be taught and trained, and it needs contextual awareness to make that training possible. Intel is giving developers the tools to accomplish this with a contextual sensing software development kit (SDK) and the underlying hardware.
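One plain-Java way to picture that kind of teaching, with entirely hypothetical names and nothing drawn from Intel's SDK, is a small store of user corrections that gets consulted the next time the same context comes up:

// Hypothetical sketch of learn-from-correction behavior.
import java.util.HashMap;
import java.util.Map;

public class AlertPreferenceLearner {
    public enum Alert { SOUND, VIBRATE, SILENT }

    // context key (e.g. "in_meeting", "driving") -> alert style the user taught us
    private final Map<String, Alert> learned = new HashMap<>();

    /** Called when the user manually overrides the alert style in some context. */
    public void recordCorrection(String contextKey, Alert userChoice) {
        learned.put(contextKey, userChoice);
    }

    /** Next time the same context is detected, honor the learned preference. */
    public Alert alertFor(String contextKey, Alert defaultChoice) {
        return learned.getOrDefault(contextKey, defaultChoice);
    }
}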
Ned Hayes, product manager for the context sensing SDK and context service at Intel, explains that context also needs to pass between devices so the system knows which device is active, such as when a user switches from laptop to smartphone, and how it is being used, allowing better interpretation and prediction of activities. Software analyzes and extrapolates the data coming from the hardware (think of it as big data for small devices), and developers can programmatically present actions or outputs based on the intersection and understanding of various sensor data.
“If a developer wants to know everything that a user is doing, [they] need to know the user’s context and create a narrative of the user’s day,” says Hayes. “Our system allows developers to have a holistic view of this user’s behavior.”
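A stripped-down sketch of what such a “narrative of the user’s day” could look like in code, again with hypothetical names rather than the SDK’s actual types, is simply a timeline of timestamped context snapshots that an application can query:

// Hypothetical sketch: a queryable timeline of context snapshots.
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public class DayNarrative {
    public record ContextSnapshot(Instant when, String activity, String place) {}

    private final List<ContextSnapshot> timeline = new ArrayList<>();

    public void record(String activity, String place) {
        timeline.add(new ContextSnapshot(Instant.now(), activity, place));
    }

    /** e.g. how many snapshots recorded the user as "walking" today? */
    public long countSnapshots(String activity) {
        return timeline.stream()
                .filter(s -> s.activity().equals(activity))
                .count();
    }
}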
Intel has taken numerous algorithms believed to be helpful for understanding a user’s behavior and created on-device rules and context engines that operate within the sensor hub.
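In spirit, a rule engine of the sort described here pairs a condition over the current context with an action to run when it matches. The sketch below is a generic illustration of that pattern, not Intel’s implementation; in practice such rules would be evaluated inside the sensor hub rather than in application Java code.

// Hypothetical sketch of a rule-style context engine.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class ContextRuleEngine {
    // A rule is a condition over the fused context plus an action to run.
    public record Rule(Predicate<Map<String, String>> condition, Runnable action) {}

    private final List<Rule> rules = new ArrayList<>();

    public void addRule(Rule rule) { rules.add(rule); }

    /** Evaluate every registered rule against the latest context state. */
    public void onContextUpdate(Map<String, String> context) {
        for (Rule rule : rules) {
            if (rule.condition().test(context)) rule.action().run();
        }
    }
}

A developer might register a rule such as “if activity is running and the place is a gym, hold notifications,” and the engine fires it on every context update.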
“Our intention is to make developers more productive and allow them to write an application that can run anywhere,” explains Hayes.
Contextual sensor hubs
Specialized always-on sensing chips are more efficient at their tasks than a general-purpose CPU, yielding overall power savings and freeing other parts of the system to perform at greater capacity. But taking advantage of dedicated sensor hubs requires tweaks in software.
“All too often, these algorithms are not optimized to run in the sensor hub,” says Hayes. “So if you are trying to do a pedometer, many of the systems out there haven’t actually done the work to run it in the sensor hub as a separate call, so it is actually running in the CPU which runs down the battery and which means that depending on your connectivity and battery life, your phone life might not be as long, and the responsiveness might not be as good.”
All is not lost should a device lack a sensor hub; a developer using the context sensing SDK can gracefully fall back to executing the code on the CPU. This makes a developer’s work a bit easier, as the context engine runs the appropriate code for the hardware environment at hand.
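The same fallback idea can be seen with ordinary Android sensor APIs, shown below as a hedged sketch rather than the SDK’s actual mechanism: prefer the hardware step counter, which on many devices is serviced by a low-power sensor hub or co-processor, and otherwise register for raw accelerometer events and do the counting on the CPU.

// Hypothetical sketch; TYPE_STEP_COUNTER needs the ACTIVITY_RECOGNITION
// permission on Android 10 and later.
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class Pedometer implements SensorEventListener {
    private final SensorManager sensors;
    private float steps;

    public Pedometer(Context context) {
        sensors = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        Sensor stepCounter = sensors.getDefaultSensor(Sensor.TYPE_STEP_COUNTER);
        if (stepCounter != null) {
            // Hardware-backed path: batched, low power, keeps counting while the CPU sleeps.
            sensors.registerListener(this, stepCounter, SensorManager.SENSOR_DELAY_NORMAL);
        } else {
            // Fallback path: raw accelerometer processed on the CPU, at a higher power cost.
            Sensor accel = sensors.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
            sensors.registerListener(this, accel, SensorManager.SENSOR_DELAY_UI);
        }
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_STEP_COUNTER) {
            steps = event.values[0];  // cumulative steps since boot
        } else {
            // A real fallback would run a step-detection algorithm on the raw samples here.
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {}

    public float stepCount() { return steps; }
}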
The Atom-powered Asus ZenFone 2 is one of the first mobile devices to include the contextual sensor hub and associated software, allowing it to respond to gestures through ZenMotion. Speaktoit, a “personal assistant” for smartphones and tablets, uses the context sensing SDK to go beyond the stock features of Apple’s Siri, Google Now and Microsoft’s Cortana, allowing customized commands, remembering places and services, and offering functions that match a user’s location and schedule.
With the latest sensors now listening, learning where we are and what we are doing, and guessing what our next action or activity will be, it might not be much longer before we have intelligent conversations with our smart devices.
Tom Foremski is the Editor and Founder of the popular and top-ranked news site Silicon Valley Watcher, which reports on the business and culture of innovation. A former Financial Times journalist, in 2004 he became the first journalist from a leading newspaper to resign and become a full-time journalist blogger.
Tom has been reporting on Silicon Valley and the US tech industry since 1984 and has been named one of the top 50 (#28) most influential bloggers in Silicon Valley. His current focus is the convergence of media and technology and the making of a new era for Silicon Valley. He also writes a column at ZDNet.