Quick Blog – NUI Tech Demo from Microsoft Research

Here is some really cool tech demoed by Microsoft Research.

We are very quickly coming upon the era of the natural user interface (NUI) and non-peripheral-based computer interaction.

Not only that, but all of this will go a long way toward robotics, allowing robots to interact with us in a more natural way.

What do you think? Is this an early version of a viable evolution in computer interaction, or is it an interesting dead end?

3 thoughts on “Quick Blog – NUI Tech Demo from Microsoft Research”

  1. I’m going to make an uninformed comment, as I had to watch the video with the sound off. I love these demos of new input concepts, but I don’t think their slow uptake is due only to inertia. The biggest barrier is the lack of direct feedback inside the UI. When you move a mouse, you see a direct correlation with the movement of the cursor inside the UI. With a touch interface, you manipulate the UI directly with a stylus or your finger. A gesture-based input system (which, as far as I know, is every system except the aforementioned) abstracts out from the UI into a set of coded feature gestures

    • [Accidentally submitted that comment early. Ah, the joys of writing on a phone.]

      … into a set of coded gestures. This is actually very similar to keyboard input, trading an alphabetic command vocabulary for a gestural one. While it may reduce the learning curve, it still fails to be truly intuitive in the way that working within the UI is. For an example, look at how a new user of a touch-based UI will tap the zoom in and out buttons rather than pinch-zooming until they have been taught the new gesture (and even then will sometimes need time before making it a habit).

      The new interface that overcomes this barrier will be the one that replaces our current input paradigms.

      • Your points are well taken, Jason. It does no good to encode UI instructions in any arbitrary form, be it key commands, gestures, or even voice commands.

        The key to a successful transition to a NUI is that it be natural. How do users want to convey their instructions to the computer? Different people are going to want to do the same task in different ways. Even the same person may change their interaction behavior in different situations. A NUI will rely on solid interface design even more than a GUI does.

        Even a robust initial interface design will be lacking, though. It will never truly be natural if we must still adapt our behavior to what the computer understands. The interface must also be adaptable: it must be able to learn, or at least be influenced by the user.

        Why can’t the user define their own gestures? That would be natural. The user could define multiple commands for the same task to cover the different use cases they may find themselves in. And different users could define different interface commands, because the system recognizes each user individually.

        Once the computer is learning that different commands mean the same thing, and that the same command from different users means different things, it is being taught about nuance and context. And that is what something like a robot needs to be successful when immersed in a human-dominated environment. (A rough sketch of what such per-user gesture bindings might look like follows at the end of this comment.)

        And none of what I’ve suggested is really that far beyond what we are already doing.
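
        To make that concrete, here is a minimal sketch of what I mean, written in Python with hypothetical names. It is only an illustration of a per-user registry that maps user-defined gestures to commands, not any particular system’s API:

            from collections import defaultdict

            class GestureRegistry:
                """Per-user mapping of user-defined gestures to commands."""

                def __init__(self):
                    # user -> {gesture name -> command name}
                    self._bindings = defaultdict(dict)

                def bind(self, user, gesture, command):
                    # Each user defines (or redefines) their own gesture for a task.
                    self._bindings[user][gesture] = command

                def resolve(self, user, gesture):
                    # Return what this particular user means by this gesture, if anything.
                    return self._bindings[user].get(gesture)

            registry = GestureRegistry()
            # Different users can bind different gestures to the same command...
            registry.bind("alice", "swipe up", "zoom in")
            registry.bind("bob", "spread fingers", "zoom in")
            # ...and the same gesture can mean different things to different users.
            registry.bind("bob", "swipe up", "scroll")

            print(registry.resolve("alice", "swipe up"))  # zoom in
            print(registry.resolve("bob", "swipe up"))    # scroll

        The point is simply that once the system knows who is gesturing, the same gesture can carry different meanings, which is the nuance and context I described above.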
