
Multimodal Interaction Activity Statement

The Multimodal Interaction Activity seeks to extend the Web so that users can dynamically select the mode of interaction best suited to their current needs, including any disabilities, and so that Web application developers can provide an effective user interface for whichever modes the user selects. With multimodal Web applications, users can provide input via speech, handwriting, and keystrokes, with output presented via displays, pre-recorded and synthetic speech, audio, and tactile mechanisms such as mobile phone vibrators and Braille strips.

The goal of the Multimodal Interaction Activity is to define clearly how to author concrete multimodal Web applications, for example by coupling a local GUI (e.g., an HTML user agent) with a remote speech interface (e.g., a VoiceXML user agent). The Multimodal Interaction Working Group serves as a central point of coordination within W3C for multimodal work, and it collaborates with related Working Groups such as Voice Browser, Scalable Vector Graphics, Compound Document Formats, Web Applications, and Ubiquitous Web Applications.

Highlights Since the Previous Advisory Committee Meeting

Ink Markup Language (InkML) was published as a W3C Recommendation on 20 September 2011.
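
For illustration, a minimal InkML document capturing a single pen stroke might look like the following sketch (the coordinate values are invented for this example):

    <ink xmlns="http://www.w3.org/2003/InkML">
      <!-- one pen stroke: a comma-separated sequence of X Y points -->
      <trace>
        10 0, 9 14, 8 28, 7 42, 6 56
      </trace>
    </ink>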

The second Last Call Working Draft of Multimodal Architecture and Interfaces was also published, on 6 September 2011. The main normative change from the previous draft is the removal of the 'immediate' field from certain Life Cycle Events, as illustrated by the event sketch below.
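
As background, Life Cycle Events are the XML messages exchanged between the Interaction Manager and the modality components in the MMI Architecture. The following is a minimal sketch of a StartRequest event in the general syntax of the Working Draft; the URIs and identifiers here are hypothetical:

    <mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
      <!-- the Interaction Manager asks a modality component to start -->
      <mmi:StartRequest Source="im-uri" Target="voice-component-uri"
                        Context="context-1" RequestID="request-1">
        <mmi:ContentURL URL="dialog.vxml"/>
      </mmi:StartRequest>
    </mmi:mmi>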

We have finalized the Implementation Report Plan document and the Disposition of Comments document, and are preparing for the Candidate Recommendation.

The process of extending the Multimodal Interaction Working Group charter is under way, and the new charter is expected to be reviewed by the W3C Advisory Committee Representatives shortly. The charter for the next period calls for stronger integration of Web technology with consumer electronics (CE) devices across the various user interface modalities.

Upcoming Activity Highlights

The group is expected to be re-chartered shortly.

The group is preparing for the Candidate Recommendations of the MMI Architecture and the Emotion Markup Language (EmotionML).
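
For illustration, a minimal EmotionML annotation might look like the following sketch, using the "big six" category vocabulary from the companion vocabularies document; the choice of vocabulary and category here is illustrative:

    <emotionml xmlns="http://www.w3.org/2009/10/emotionml"
               category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
      <!-- annotate one emotion with a single category label -->
      <emotion>
        <category name="happiness"/>
      </emotion>
    </emotionml>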

In parallel, the group continues to discuss potential extensions to EMMA (Extensible MultiModal Annotation) to support new features for multimodal applications in various environments.
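
As a reminder of what EMMA represents, the following is a minimal sketch of an EMMA 1.0 document annotating one interpretation of a spoken utterance; the application payload element (destination) and the attribute values are hypothetical:

    <emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
      <!-- one interpretation of user input, with modality and
           confidence annotations; <destination> is application-specific -->
      <emma:interpretation id="int1"
          emma:medium="acoustic" emma:mode="voice"
          emma:confidence="0.75"
          emma:tokens="flights to boston">
        <destination>Boston</destination>
      </emma:interpretation>
    </emma:emma>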

The group will hold its next face-to-face meeting on October 31 and November 1, 2011, in Santa Clara, California, USA, during TPAC 2011.

Summary of Activity Structure

Group: Multimodal Interaction Working Group (participants)
Chair: Deborah Dahl
Team Contact: Kazuyuki Ashimura
Charter: Chartered until 31 July 2013

This Activity Statement was prepared for TPAC 2011 per section 5 of the W3C Process Document.

Kazuyuki Ashimura, Multimodal Interaction Activity Lead
