Introduction

Microsoft Surface touch table

I had been hearing about Microsoft Surface for the last few months, but could not find much information. Recently, I saw a PDC 2008 session by Robert Levy and Brad Carpenter with a video demonstrating the capabilities of Microsoft Surface. They walk through the unique attributes of Surface computing, with a dive into vision-based object recognition and core controls like ScatterView, and explain how the Surface SDK aligns with the multi-touch developer roadmap for Windows 7 and WPF.

The software giant has built a new touch-screen computer: a coffee table that could change the world. Forget the keyboard and mouse; the next generation of computer interfaces will be the NUI (Natural User Interface), which uses natural motions, hand gestures, and real-world physical objects. Surface is capable of object recognition, object/finger orientation recognition and tracking, and it is both multi-touch and multi-user. Users interact with the machine by touching or dragging their fingertips and objects, such as paintbrushes, across the screen, or by placing and moving physical objects on it. Surface has been optimized to respond to 52 touches at a time.

Microsoft Surface applications can be written in Windows Presentation Foundation (WPF) and .NET languages like C# or VB.NET. Development is nicely integrated with Visual Studio 2008 and follows the same programming paradigms .NET developers are used to. However, the SDK adds custom WPF controls for the unique interface of Surface, so developers already proficient in WPF can use it to write Surface applications.
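To give a feel for the programming model, here is a minimal sketch of what a Surface application might look like, built around the ScatterView control shown in the PDC session. Since the SDK is not public yet, treat the namespace and class names (SurfaceWindow, ScatterView, ScatterViewItem) as assumptions drawn from the demo rather than verified API:

```csharp
// A minimal sketch, assuming the Surface SDK assemblies and the
// ScatterView control demonstrated at PDC 2008 (unverified API).
using System.Windows.Controls;
using System.Windows.Media;
using Microsoft.Surface.Presentation.Controls;

public class PhotoWindow : SurfaceWindow
{
    public PhotoWindow()
    {
        // ScatterView hosts items that users can drag, rotate and
        // resize with their fingers; the control handles the
        // multi-touch manipulation for us.
        var scatter = new ScatterView();

        foreach (var caption in new[] { "Photo 1", "Photo 2", "Photo 3" })
        {
            scatter.Items.Add(new ScatterViewItem
            {
                Content = new TextBlock { Text = caption, FontSize = 24 },
                Background = Brushes.SteelBlue
            });
        }

        Content = scatter;
    }
}
```

The thing to notice is that everything here is ordinary WPF; ScatterView takes care of the drag, rotate, and resize gestures without any extra code from us.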

Object recognition
Multi-user application

Microsoft Surface SDK 1.0

Right now, the Microsoft Surface SDK is not available to the public. It has been distributed only to some Microsoft partners. I am eagerly waiting to get my hands on the SDK once it launches. Since I have already worked with Silverlight and WPF, I think it will be quite easy to pick up Microsoft Surface, and the same should be true for every developer who has worked with WPF and XAML. However, there is a small catch ... Developing for Surface requires more creativity, and unlearning the GUI we are used to. The new interface requires us to think about the UI from an entirely new perspective ... a 360-degree, multi-touch perspective.

I am sure developers deep in love with .NET programming, with high respect for Microsoft technologies, will be very excited after seeing the PDC 2008 video session. The features demonstrated, the ease of programming, and the seamless learning curve will make you sit up straight in your seat and open Google to find out whether you can get hold of the Microsoft Surface SDK. You might be wondering, after watching the video, whether you need the touch-screen device itself, which costs $10K-15K, to practice Microsoft Surface development. The answer is no! Microsoft has a simulator for Surface that lets you deploy and play around with your Surface code. Unfortunately, that too is not available to the public right now. So you might have to wait a little longer, unless of course you work for a company on Microsoft's partner list, which is currently allowed to download and use the Microsoft Surface SDK and Simulator.

It Is Not Just Another Glorified And Hyped Touch Screen Computer

Many have the wrong impression that Microsoft Surface is just another glorified, hyped touch-screen computer. Touch-screen computers let users do away with the keyboard and mouse: they can navigate a menu by touching various options until they reach the logical end of viewing data or printing. And there the comparison ends. The Microsoft Surface device, also known as the Microsoft tabletop, can do many more things, which you might think are not possible!

Three main capabilities differentiate it from a regular touch-screen device: multi-touch contact, direct interaction, and object recognition.

  • Multi-touch contact: Users are not limited to a single finger for manipulating digital content. If a user dips five fingers into five different colors on an on-screen paint palette and draws with all five at once, they get five lines in five colors.
  • Direct interaction: The ability to manipulate digital information with hand gestures and touch instead of a mouse and keyboard.
  • Object recognition: This technology allows non-digital objects to be used as input devices; the input source does not have to be digital in form (see the sketch after this list).
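For example, the PDC session shows physical objects carrying printed byte tags that the table's cameras can identify. Here is a speculative sketch of how that might be wired up; since the SDK is not public, the class names (TagVisualizer, ByteTagVisualizationDefinition) and their properties are my assumptions based on the demo, not verified API:

```csharp
// A speculative sketch of object recognition with byte tags.
// TagVisualizer / ByteTagVisualizationDefinition are assumed
// names based on the PDC 2008 session; the shipped SDK may differ.
using System;
using Microsoft.Surface.Presentation.Controls;

public class TaggedObjectWindow : SurfaceWindow
{
    public TaggedObjectWindow()
    {
        var visualizer = new TagVisualizer();

        // When a physical object carrying byte tag 0x05 (say, a
        // tagged paint pot) is placed on the table, Surface shows
        // the visualization defined here underneath it.
        visualizer.Definitions.Add(new ByteTagVisualizationDefinition
        {
            Value = 0x05,
            Source = new Uri("PaintPotVisualization.xaml", UriKind.Relative),
            LostTagTimeout = 1000.0 // keep the UI for 1s after lift-off
        });

        Content = visualizer;
    }
}
```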

This Is The Future

Companies are putting millions of dollars into NUI research. There is a nice video, "TED: Sixth Sense", from Pattie Maes' lab at MIT, spearheaded by Pranav Mistry. It demonstrates a wearable device that responds to natural hand gestures read by a camera, with the UI projected onto the environment by a small projector fitted into the same device, paving the way for profound interaction with our surroundings. If you have seen movies like "Minority Report", these technologies are bringing those imaginations within reach.

NUI makes interaction with digital content immensely richer and simpler. Nobody has to practice with a mouse first. And it is no longer a single-user interface: two users can open two different menus at the same time (right now, your application opens one menu at a time). It is a 360-degree interaction, with each person using the same application independently. Surely, we need to think about and see the UI very differently now; you are bound only by your imagination.
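To make the independence of simultaneous touches concrete, here is a small sketch of the five-fingers, five-colors paint idea from earlier, written against the multi-touch events planned for WPF on Windows 7 (the TouchDown/TouchMove/TouchUp events). Each touch device carries its own id, so every finger, from any user around the table, gets its own stroke:

```csharp
// A sketch of the "five fingers, five colors" idea using the
// multi-touch events planned for WPF on Windows 7. Each touch
// device has its own id, so each finger draws an independent
// stroke in its own color.
using System.Collections.Generic;
using System.Windows.Controls;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Shapes;

public class MultiTouchPaintCanvas : Canvas
{
    private static readonly Brush[] Palette =
        { Brushes.Red, Brushes.Green, Brushes.Blue, Brushes.Orange, Brushes.Purple };

    // One polyline per active finger, keyed by touch-device id.
    private readonly Dictionary<int, Polyline> strokes = new Dictionary<int, Polyline>();

    protected override void OnTouchDown(TouchEventArgs e)
    {
        var stroke = new Polyline
        {
            Stroke = Palette[strokes.Count % Palette.Length],
            StrokeThickness = 4
        };
        strokes[e.TouchDevice.Id] = stroke;
        Children.Add(stroke);
        e.TouchDevice.Capture(this); // keep receiving this finger's moves
        e.Handled = true;
    }

    protected override void OnTouchMove(TouchEventArgs e)
    {
        Polyline stroke;
        if (strokes.TryGetValue(e.TouchDevice.Id, out stroke))
        {
            stroke.Points.Add(e.GetTouchPoint(this).Position);
            e.Handled = true;
        }
    }

    protected override void OnTouchUp(TouchEventArgs e)
    {
        strokes.Remove(e.TouchDevice.Id);
        e.Handled = true;
    }
}
```

Notice there is no notion of "the" cursor anywhere: each contact is tracked on its own, which is exactly what makes simultaneous, multi-user interaction possible.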

So friends, start unlearning the GUI you are accustomed to, and buckle your seat belts for the next gen -- NUI.

Manish