When the Web started some 20 years ago, it brought a platform
for distributing and accessing text, with the added dimension
of links: hypertext. Because it was free and could be
deployed everywhere easily, it was a revolution.
For the past few years, we've seen this additional dimension
brought to media content, building hypermedia: SVG and canvas make
it possible to build graphics that integrate or link to content
from various sources, and the addition of the audio and video tags in
HTML5 is the starting point for making audio and video content as
integrated into the Web as images have been. The Popcorn.js project
illustrates, for instance, how video content can benefit from
hyperlinking (much in the same way I had been exploring two years ago
with my presentation viewer).
Because these technologies are free and can be deployed everywhere easily,
I expect this will increasingly revolutionize our consumption of
media.
I believe we're now starting to see a new trend of that sort,
with the emergence of what I would call
hyperdevices.
As more and more devices (mobile, obviously, but also tablets,
TVs, cars, lightbulbs and many more) get connected, they more often
than not get shipped with Web capabilities.
As the Web gains more and more access to the specific
capabilities of these devices (touch interactions, geolocation,
accelerometer, and many more), not only does it become a platform
of choice to develop applications targeted at these devices (as
we're exploring in the MobiWebApp project), but it also creates new
forms of interactions across these devices that were not possible
previously.
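Concretely, that access goes through standard browser APIs (Touch Events, Geolocation, DeviceOrientation events). As a minimal sketch only, and bearing in mind that support varies widely across browsers and devices, feature-detecting and using them looks roughly like this:

```js
// Touch interactions (Touch Events)
if ('ontouchstart' in window) {
  document.addEventListener('touchmove', function (e) {
    // e.touches[0].clientX / clientY give the position of the first finger
  });
}

// Geolocation
if (navigator.geolocation) {
  navigator.geolocation.getCurrentPosition(function (pos) {
    console.log(pos.coords.latitude, pos.coords.longitude);
  });
}

// Accelerometer / gyroscope readings (DeviceOrientation events)
if (window.DeviceOrientationEvent) {
  window.addEventListener('deviceorientation', function (e) {
    // alpha, beta, gamma describe the device's orientation in degrees
    console.log(e.alpha, e.beta, e.gamma);
  });
}
```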
To illustrate that point, I've built a couple of very simple
demos:
- the remote whiteboard lets you use a touch-screen device to draw
on another device's screen, via the browser; in other words, it gives
touch capabilities to devices that don't have touch screens (e.g.
most desktops or TV sets) — see the video of the demo in action, and
the rough sketch right after this list
- the 3D explorer lets you use a device with an accelerometer to
rotate a 3D graphic (in this case, the HTML5 logo) on a separate
screen (see a video of the demo in action, and a second sketch
further below); it thus gives the equivalent of a 3D mouse to any
Web-enabled device; whilst there probably are some 3D mice out
there, they are by far not as ubiquitous as mobile devices that come
with an accelerometer
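Purely as an illustration of the general idea behind the remote whiteboard (this is not the actual demo code), here is a minimal sketch; it assumes a hypothetical relay WebSocket server at ws://example.org/whiteboard that simply forwards each message from one connected browser to the other:

```js
// On the touch-screen device: forward touch positions to the relay.
var ws = new WebSocket('ws://example.org/whiteboard'); // hypothetical relay
document.addEventListener('touchmove', function (e) {
  e.preventDefault();
  var t = e.touches[0];
  if (ws.readyState === WebSocket.OPEN) {
    ws.send(JSON.stringify({ x: t.clientX, y: t.clientY }));
  }
});

// On the device without a touch screen: draw the received positions
// on a <canvas id="board"> element.
var ctx = document.getElementById('board').getContext('2d');
var receiver = new WebSocket('ws://example.org/whiteboard');
receiver.onmessage = function (msg) {
  var p = JSON.parse(msg.data);
  ctx.fillRect(p.x, p.y, 2, 2); // crude "pen": one 2x2 dot per touch event
};
```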
Transcript of the video:
In this demonstration, I'm using the browser on my mobile phone
with accelerometer support to manipulate a 3D object viewed in the
browser on my desktop computer. As I turn the phone, the rendering
of the 3D object turns in synchronization.
This demo was built by Dominique Hazael-Massieux, only with W3C
Web technologies. Find out more at
github.com/dontcallmedom.
Note: If your browser does not support HTML5, you may still be able
to view the video directly: mp4 version, webm version.
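The 3D explorer follows the same pattern: it boils down to forwarding deviceorientation readings from the phone and applying them as a CSS 3D transform on the other screen. Again a rough sketch only (not the actual demo code), with the same kind of hypothetical relay server and an element with id "logo" standing in for the HTML5 logo:

```js
// On the phone: send orientation readings to the relay.
var ws = new WebSocket('ws://example.org/rotate'); // hypothetical relay
window.addEventListener('deviceorientation', function (e) {
  if (ws.readyState === WebSocket.OPEN) {
    ws.send(JSON.stringify({ alpha: e.alpha, beta: e.beta, gamma: e.gamma }));
  }
});

// On the desktop: rotate the element showing the logo accordingly
// (older browsers may need a vendor-prefixed transform property).
var logo = document.getElementById('logo');
var receiver = new WebSocket('ws://example.org/rotate');
receiver.onmessage = function (msg) {
  var o = JSON.parse(msg.data);
  logo.style.transform = 'rotateZ(' + o.alpha + 'deg)' +
    ' rotateX(' + o.beta + 'deg) rotateY(' + o.gamma + 'deg)';
};
```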
We're still in the very early days of this wave, but there is
growing interest around it. Device and service discovery (see
recent discussions on Web Intents) will play a role there
without a doubt, and the work done as part of the
webinos project (where I'm involved) will hopefully also inform the
technical challenges that are still to be overcome. We will also
need plenty of creativity to invent user interactions that match
these new capabilities.
But for sure, this is where we are going.
Follow me on Twitter:
@dontcallmedom.