The Push, Sponsored by Mutual Mobile


Is Motorola’s Moto X the future of contextual computing?

By Evan Wade / September 26, 2013


In the world of smart devices, a new “next big thing” is always lurking around the corner. With its recently released Moto X, Motorola wants to turn contextual computing (the ability to operate your phone without touching it, even when the screen is off) into this year’s must-have feature. Now that voice-assistant software like Siri has gone from cutting-edge tech to expected addition, hands-free operation is the logical next step in the evolution of smart devices.

No touching!

If you’ve used hands-free mobile accessories before (like BlueAnt’s line of audio devices), you know how the Moto X works. Simply say “OK Google Now” and the Android voice assistant appears on screen, allowing you to perform a number of tasks you’d normally have to tap an app for. A preview on SlashGear’s YouTube channel shows the concept at work.

“Great technology gets out of the way of its user,” says Android developer Brian Tsai. “Contextual computing is a natural evolution towards that goal.”

Motorola has accomplished all of this on a relatively low-spec phone; the Moto X runs on a 1.7 GHz dual-core Snapdragon CPU. The secret to its success is a dedicated voice-recognition chip built into the processor itself.

This, Tsai says, is device engineering done right. Low-power sensors and lower-cost chips, like those found in the Moto X, let hardware run constantly without a big impact on battery life. And because dedicated silicon handles the always-listening work, manufacturers can commit less powerful hardware, like the X’s Snapdragon CPU, to the rest of the device’s tasks without sacrificing performance.

Though dedicating a chip to a single feature is itself a sign of Google’s investment in the future of contextual computing, Tsai says Mountain View has plenty more ideas in the pipeline. Google Glass, a device largely controlled by voice commands, makes heavy use of similar technology with apps like Gmail and YouTube. The result, Tsai says, is a company trying to be the first to implement an exciting new technology, something it has wanted to do ever since it revolutionized the idea of the search engine.

No developer support — for now

While the idea is certainly cool and well implemented, the Moto X’s contextual computing capabilities are for Google’s use only, at least for the foreseeable future. The Wall Street Journal quotes Motorola as saying there are “no immediate plans” to grant developers access to the feature (or the chip that runs it). That gives Google and its stable of developers a competitive advantage with a potentially game-changing feature.

Of course, Google and Motorola are far from the only phone/OS makers with an interest in contextual computing. Apple’s iPhone 5s, as reported in a recent article, uses a coprocessor called the M7 to track user movement for motion-based apps even while the device sleeps, and unlike Motorola, Apple lets developers access some of the M7’s capabilities through its Core Motion API.
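To give a concrete sense of what that developer access looks like, here is a minimal Swift sketch of reading the activity data an M7-class coprocessor records while the main CPU sleeps. It uses Core Motion’s current method names (which postdate 2013), and the specific activity checks and queue choices are illustrative assumptions rather than Apple’s recommended pattern.

```swift
import CoreMotion
import Foundation

// Sketch: read motion-activity data logged by the M7-class coprocessor.
// The availability check matters because devices without the dedicated
// motion hardware return no activity data.
let activityManager = CMMotionActivityManager()

if CMMotionActivityManager.isActivityAvailable() {
    // Live updates describing what the user is doing right now.
    activityManager.startActivityUpdates(to: .main) { activity in
        guard let activity = activity else { return }
        if activity.walking {
            print("Walking (confidence: \(activity.confidence.rawValue))")
        } else if activity.stationary {
            print("Stationary")
        }
    }

    // The coprocessor also keeps a short history, so an app can ask what
    // happened while it wasn't running.
    let oneDayAgo = Date(timeIntervalSinceNow: -24 * 60 * 60)
    activityManager.queryActivityStarting(from: oneDayAgo, to: Date(), to: .main) { activities, _ in
        print("Recorded \(activities?.count ?? 0) activity samples in the last day")
    }
}
```

In current iOS versions the app also needs the user’s Motion & Fitness permission before any of this data flows, but the contrast the article draws still holds: Apple exposes its coprocessor through a public API, while Motorola’s equivalent stays closed.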

According to Newsday, Samsung has plans for contextual computing too, led by its lineup of smartphone chips and upcoming wearables like the Galaxy Gear smartwatch. Tsai says gadgets like Google Glass and the Galaxy Gear in particular hold a lot of promise for the technology, given their non-traditional form factors: it’s a lot easier to talk to a high-tech watch or pair of glasses than to navigate a tiny touch screen on either.

If always-on capability becomes the next must-have smartphone feature, you can bet some enterprising company will give developers the access they need to make use of it. In an industry where app support can make or break entire companies, as BlackBerry is learning, it makes sense to open the tech to developers.

Is contextual computing the future?

Until more phone/OS makers go the Apple route and open their contextual computing hardware to developers, users will be stuck with whatever the big names choose to offer. If the feature takes off in the Moto X or another contextually capable smart device, you can bet companies will give developers access to their own dedicated hardware and APIs.

With the next round of more powerful phones perpetually on the horizon, it’s only a matter of time before developers are granted full access to the hardware and software that make contextual computing what it is. And with use cases reaching far beyond voice recognition, it will be exciting to see, and work with, whatever comes next from Google, Samsung and other leading innovators.