|
2011-03-01
|
This document describes Modality Components in the MMI Architecture, which
are responsible for controlling the various input and output modalities
on various devices, and provides guidelines and suggestions for designing
Modality Components. It also presents several possible examples of
Modality Components: (1) face identification, (2) form filling using
handwriting recognition, and (3) video display.
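As a rough illustration of how such a Modality Component might be driven, the sketch below assumes the MMI life-cycle events (StartRequest, StartResponse, DoneNotification) exchanged with an Interaction Manager; the TypeScript types, field names, transport, and the face-identification payload are simplified assumptions for illustration, not the normative interface.

    // Hedged sketch: the event names follow the MMI life-cycle events, but the
    // shapes below are illustrative assumptions, not a normative definition.
    interface LifeCycleEvent {
      type: "StartRequest" | "StartResponse" | "DoneNotification";
      context: string;    // identifies the interaction context
      requestId: string;  // correlates a response with its request
      source: string;     // URI of the sender
      target: string;     // URI of the receiver
      status?: "Success" | "Failure";
      data?: unknown;     // modality-specific payload, e.g. a recognition result
    }

    // A face-identification Modality Component reacting to the Interaction Manager.
    function handleEvent(event: LifeCycleEvent,
                         send: (e: LifeCycleEvent) => void): void {
      if (event.type === "StartRequest") {
        // Acknowledge the request...
        send({
          type: "StartResponse",
          context: event.context,
          requestId: event.requestId,
          source: event.target,
          target: event.source,
          status: "Success",
        });
        // ...then run the face-identification task and report completion.
        send({
          type: "DoneNotification",
          context: event.context,
          requestId: event.requestId,
          source: event.target,
          target: event.source,
          status: "Success",
          data: { userId: "alice" }, // hypothetical result payload
        });
      }
    }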
|
|
2009-12-15
|
This is a sample short description for this specification;
over time we will replace this description with a real one.
|
|
2008-07-02
|
This document describes a multimodal system which implements
the W3C Multimodal Architecture and gives an example of a
simple multimodal application authored using various W3C markup
languages, including SCXML, CCXML, VoiceXML 2.1 and HTML.
|
|
2006-09-11
|
This document is based on the accumulated experience of several
years of developing multimodal applications. It provides a
collection of common-sense advice for developers of multimodal
user interfaces.
|
|
2006-04-14
|
Several years of multimodal application development in
various business areas and on various device platforms have
given developers enough experience to provide detailed
feedback about what they like, dislike, and want to see
improved and continued. This experience is provided here as
input to the specifications under development in the W3C
Multimodal Interaction and Voice Browser Activities.
|
|
2004-05-10
|
This document describes the DOM capabilities needed to support a
heterogeneous multimodal environment and the current state of DOM
interfaces supporting those capabilities. These DOM interfaces are
used between modality components and their host environment in the
W3C Multimodal
Interaction Framework as proposed by the W3C Multimodal Interaction
Activity.
The Multimodal Interaction Framework separates multimodal
systems into a set of functional units, including Input and Output
components, an Interaction Manager, Session Components, System and
Environment, and Application Functions. In order for those
functional components to interact with each other to form an
application interpreter, the browser implementation must allow for
communication and coordination between those components. This
document identifies the DOM APIs used to communicate and
coordinate at the browser implementation level. Multimodal browsers
can be stand-alone or distributed systems.
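As a concrete, deliberately simplified illustration of such coordination, the sketch below couples a voice input component and an HTML output component hosted in the same browser document through the standard DOM event interfaces; the event name "voice.result", its payload shape, and the "#city" form field are hypothetical, and the code sketches the general mechanism rather than the specific APIs identified in the document.

    // Sketch: coordination between two modality components via DOM events.
    // The event name and payload are assumptions made for illustration.

    // The voice input component publishes a recognition result on the document.
    function publishVoiceResult(utterance: string, confidence: number): void {
      document.dispatchEvent(
        new CustomEvent("voice.result", { detail: { utterance, confidence } })
      );
    }

    // The visual (HTML) component listens and fills a form field accordingly.
    document.addEventListener("voice.result", (e: Event) => {
      const { utterance } = (e as CustomEvent<{ utterance: string }>).detail;
      const field = document.querySelector<HTMLInputElement>("#city");
      if (field) {
        field.value = utterance; // e.g. fill the "city" field from spoken input
      }
    });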
|
|
2003-05-06
|
This document introduces the W3C Multimodal Interaction
Framework, and identifies the major components for multimodal
systems. Each component represents a set of related functions. The
framework identifies the markup languages used to describe
information required by components and for data flowing among
components. The W3C Multimodal Interaction Framework describes
input and output modes widely used today and can be extended to
include additional modes of user input and output as they become
available.
|
|
2003-01-13
|
This document describes requirements for the Extensible
MultiModal Annotation language (EMMA) specification under
development in the W3C Multimodal Interaction
Activity. EMMA is intended as a data format for the interface
between input processors and interaction management systems. It will
define the means for recognizers to annotate application-specific
data with information such as confidence scores, time stamps, input
mode (e.g., keystrokes, speech, or pen), alternative recognition
hypotheses, and partial recognition results. EMMA is a target
data format for the semantic interpretation specification being
developed in the Voice Browser Activity, which describes
annotations to speech grammars for extracting application-specific
data as a result of speech recognition. EMMA supersedes earlier
work on the natural language semantics markup language in the
Voice Browser Activity.
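For illustration only, the sketch below models the kinds of annotations listed above as a TypeScript data structure; EMMA itself is being defined as an XML vocabulary, so the field names and example values here are assumptions intended to show the information involved, not EMMA syntax.

    // Sketch of the information an EMMA-annotated input might carry; the
    // property names and values are illustrative assumptions, not EMMA.
    interface InterpretationHypothesis {
      confidence: number;       // recognizer confidence score, 0.0 to 1.0
      tokens: string;           // the recognized input, e.g. "flights to boston"
      interpretation: unknown;  // application-specific semantics, e.g. { dest: "BOS" }
    }

    interface AnnotatedInput {
      mode: "voice" | "ink" | "keys";    // input mode: speech, pen, or keystrokes
      start: number;                     // timestamp (ms) when the input began
      end: number;                       // timestamp (ms) when the input ended
      nBest: InterpretationHypothesis[]; // alternative recognition hypotheses
      partial?: boolean;                 // true if this is a partial result
    }

    // Example: a speech recognizer handing two competing hypotheses
    // to the interaction manager.
    const result: AnnotatedInput = {
      mode: "voice",
      start: 1041443420000,
      end: 1041443422000,
      nBest: [
        { confidence: 0.82, tokens: "flights to boston", interpretation: { dest: "BOS" } },
        { confidence: 0.41, tokens: "flights to austin", interpretation: { dest: "AUS" } },
      ],
    };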
|
|
2003-01-08
|
This document describes fundamental requirements for the
specifications under development in the W3C Multimodal Interaction
Activity. These requirements were derived from use case studies
as discussed in Appendix A. They have been
developed for use by the Multimodal Interaction
Working Group (W3C Members
only), but may also be relevant to other W3C working groups and
related external standards activities.
The requirements cover general issues, inputs, outputs,
architecture, integration, synchronization points, runtimes and
deployments, but this document does not address application or
deployment conformance rules.
|
|
2002-12-04
|
The W3C Multimodal
Interaction Activity is developing specifications as a basis
for a new breed of Web applications in which users can interact
through multiple modes, for instance speech, handwriting, and key
presses for input, and spoken prompts, audio, and visual displays
for output. This document describes several use
cases for multimodal interaction and presents them in terms of
varying device capabilities and the events needed by each use case
to couple different components of a multimodal application.
|