Since Windows 8 entered the market last October, the number of devices equipped with the new operating system has been growing rapidly: 60 million licenses as of January 2013 (source: ZDNet).
Multitouch devices are arriving on the market at a parallel pace: from small devices such as Microsoft Surface, through laptops like the ThinkPad Twist, PCs like the Sony Vaio L Series, and multitouch monitors of every size up to the giant 84-inch multitouch screen by 3M, and then the tables. Oh, how many touch tables are currently available? I don't know exactly; I can only tell you that in two weeks I discovered two new tables: the Lenovo IdeaCentre Horizon and the Oqtopus by Displax.
Picture 1: Lenovo IdeaCentre Horizon
Picture 2: Oqtopus by Displax
Unfortunately, even though Windows 8 and its apps are designed to scale content based on resolution (see Building Windows 8: Scaling to different screens, and Guidelines for scaling to screens (Windows Store apps)), the problems of poor user experience on large-format touch surfaces are well known to all. Just do a little research to find thousands of results: articles, blog posts, and pages that discuss this topic.
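To give an idea of how that scaling model works: Windows 8 Store apps snap to one of three fixed scale plateaus (100%, 140%, 180%) rather than scaling continuously, which is why a large, low-DPI table ends up at 100% with relatively small touch targets. The plateau values come from the guidelines linked above; the DPI thresholds and function below are my own illustrative assumptions, not the exact values Windows uses:

```typescript
// Windows 8 Store apps snap to one of three fixed scale plateaus.
type ScalePlateau = 100 | 140 | 180;

// Assumed (illustrative) DPI thresholds -- not the real Windows internals.
function choosePlateau(dpi: number): ScalePlateau {
  if (dpi >= 240) return 180; // very high pixel density (small high-res tablets)
  if (dpi >= 174) return 140; // high density
  return 100;                 // typical monitors, including large tables
}

// A 30-inch table running at 1920x1080 has a low pixel density,
// so it stays at 100% and everything stays physically large but sparse:
const tableDpi = Math.hypot(1920, 1080) / 30; // ~73 dpi
console.log(choosePlateau(tableDpi)); // 100
```

The point is that the plateau system was designed around the tablet-to-desktop range; an 84-inch screen simply falls off the bottom of the density scale.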
So why doesn't Windows 8 "work" well on large multitouch surfaces? Simple: the Windows 8 user experience is single-user, with a unidirectional design. It is meant to be used by one person at a time: bottom, top, left, and right are well defined. Just think of the fact that swiping a finger up from the bottom edge of the surface always triggers something (the app bar), and the same goes for swiping from the right edge toward the center (the system commands), from the left edge toward the center (app switching), and from top to bottom (closing an app).
Therefore, while you are using Windows 8 on a portable device or on a vertically mounted monitor, everything "works"; but when it is used on a large-format display placed horizontally (starting from about 30 degrees of tilt), perhaps in collaboration with other people, well, then this beautiful UX has something wrong with it.
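To make the "unidirectional" point concrete, here is a minimal sketch of the problem. The classifier below is hypothetical (the function name and the 20-pixel edge threshold are my assumptions, not a real Windows API), but it captures how edge swipes are interpreted relative to one fixed orientation of the screen:

```typescript
// Hypothetical edge-swipe classifier: Windows 8 interprets edge
// gestures relative to ONE fixed orientation of the display.
type Edge = "bottom" | "top" | "left" | "right" | "none";

interface Screen { width: number; height: number; }
interface Point { x: number; y: number; }

const EDGE = 20; // px from the border that counts as an edge swipe (assumed)

function classifyEdgeSwipe(start: Point, screen: Screen): Edge {
  if (start.y >= screen.height - EDGE) return "bottom"; // app bar
  if (start.y <= EDGE) return "top";                    // close app
  if (start.x >= screen.width - EDGE) return "right";   // system commands
  if (start.x <= EDGE) return "left";                   // app switching
  return "none";
}

const table: Screen = { width: 1920, height: 1080 };

// User A, standing at the "official" bottom edge of the table:
console.log(classifyEdgeSwipe({ x: 960, y: 1075 }, table)); // "bottom"

// User B, standing on the opposite side, swipes up from *their* bottom,
// which is the screen's top edge -- and triggers the top-edge gesture:
console.log(classifyEdgeSwipe({ x: 960, y: 5 }, table)); // "top"
```

On a table used by people on all four sides, every user except one is permanently facing the "wrong" edges.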
I still remember Mike Angiulo's presentation of Surface, when he approached a large monitor running Windows 8, and how many times I thought: why didn't you incorporate the controls of the Surface SDK into Windows 8?
Picture 3: Mike Angiulo and Steven Sinofsky presenting Windows 8 on a large multitouch device
Just think of the design guidelines for Windows Store apps, and then think for a moment of the Microsoft Surface 2.0 Design and Interaction Guide. My question is:
Why can't these be merged, with the Surface 2.0 design guidelines becoming a chapter of the Windows Store apps guidelines entitled "Windows Store apps design guidelines for large-format multitouch devices"?
I think everyone could benefit from this:
Microsoft would solve the interaction problems of Windows Store apps on large-format displays.
Developers would gain the official distribution channel (the Store). Targeting applications at the operating system that ships on all of today's computers, including those driving every table currently in production, would be a big benefit, because we could develop applications deployable on a large scale.
End users would enjoy applications made specifically for large-format devices, with epic user interfaces designed for multitouch and multiuser usage.
It's true, it is not that simple; there are a few 'little' problems:
First of all, the Surface SDK does not work on Windows 8. Unfortunately, Windows 8 changed the OS to restrict how you can read HID usages (the method the Microsoft Surface 2.0 SDK uses to read touch metadata). (If you are interested in the topic, a good starting point is Introduction to HID Concepts in the Windows Drivers documentation.)
That means any compiled Surface 2.0 application that uses the Surface SDK's input subsystem will not work. If you are writing a WPF application and can recompile it, the workaround is to use a standard WPF Window instead of a SurfaceWindow, which bypasses the Surface SDK input subsystem. The drawback is that you will not be able to read any PixelSense data such as tags, blob size, etc., just touch points.
As noted, you cannot (at least in the short term) use the classes and extensions in the Microsoft.Surface.Presentation.Input namespace, or anything that depends on them: no tags, no finger orientation (though this only matters on our beloved PixelSense device; it does not apply to the other touch tables on the market).
Another problem is not being able to rotate elements by simply rotating one finger, but here the Gestures, manipulations, and interactions (Windows Store apps) guidelines help: they expressly state that rotation must be performed with two fingers.
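The two-finger rotation those guidelines describe boils down to tracking the angle of the segment between the two touch points from frame to frame. A minimal sketch (the helper names are mine, not a platform API):

```typescript
interface Point { x: number; y: number; }

// Angle (in degrees) of the segment from finger 1 to finger 2.
function angleBetween(p1: Point, p2: Point): number {
  return (Math.atan2(p2.y - p1.y, p2.x - p1.x) * 180) / Math.PI;
}

// Rotation delta between two frames of the same two-finger contact.
function rotationDelta(prev: [Point, Point], curr: [Point, Point]): number {
  let delta = angleBetween(curr[0], curr[1]) - angleBetween(prev[0], prev[1]);
  // Normalize to (-180, 180] so crossing the +/-180 boundary doesn't jump.
  if (delta > 180) delta -= 360;
  if (delta <= -180) delta += 360;
  return delta;
}

// Fingers start horizontal; the second finger then sweeps a quarter turn:
const before: [Point, Point] = [{ x: 0, y: 0 }, { x: 100, y: 0 }];
const after: [Point, Point] = [{ x: 0, y: 0 }, { x: 0, y: 100 }];
console.log(rotationDelta(before, after)); // 90
```

With one finger you have a single point and no segment, hence no angle to measure: that is exactly why single-finger rotation needs the extra contact metadata (finger orientation) that only PixelSense exposes.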
The Surface Shell could be another problem, but I think using Windows Embedded and a custom launcher could solve the issues related to "kiosk" mode.
Picture 4: the PixelSense Shell
These are just some of the problems one would face, but think of what you would gain: all the controls (and the core libraries behind them) that have made, and will continue to make, the multitouch, multiuser UX of Surface so great.
Ultimately, I think we have to start somewhere, and a partial port of the Surface SDK to Windows RT would be something really great and useful for everyone: Microsoft, developers, and, not least, end users.
On the Surface Application Design and Development MSDN forum there is a discussion where I'd like to read your comments and discuss this topic with you all.