Over a thousand people attended the Social Business Jam, including visionary innovators such as Tim Berners-Lee and Alex "Sandy" Pentland, in discussion with business technologists such as Evan Prodromou (CEO of Status.Net) and Angel Diaz (IBM's Vice President of Software Standards). The discussion ranged from identity management to social metrics, and a number of innovative concepts were discussed in detail, such as the ability to "prioritize" messages in activity streams and how to make e-mail a first-class social technology. In particular, the further development of standards around the federated social web is crucial to most business use cases.
The final report of the Social Business Jam details a number of recommendations for standardization and use cases. The main recommendation is to form a Social Business Community Group to develop customer-driven use cases for social business and to mature standards around them - so join the Social Business Community Group!
First, understand that the basis for determining conformance to WCAG 2.0 is the success criteria from the WCAG 2.0 standard — not the techniques. The Techniques document provides guidance that is "informative". You do not have to use the sufficient techniques to meet WCAG. Web content can use other ways to meet the WCAG success criteria. Web content could even fail a particular technique test, yet still meet WCAG a different way. Also, content that uses the published techniques does not necessarily meet all WCAG success criteria.
To learn more about the techniques, please see:
About this Update
The updated documents published today include more coverage of non-W3C technologies (Flash, PDF, Silverlight), which will help developers who are using those technologies make their content more accessible. However, publication of techniques for a specific technology does not imply that the technology can be used in all cases to create accessible content that meets WCAG 2.0. (For example, the Flash Techniques for WCAG 2.0 say: "Flash accessibility support for assistive technology relies on use in Windows operating systems, using Internet Explorer 6 or later (with Flash Player 6 or later) or Mozilla Firefox 3 or later (with Flash Player 9 or later).") Developers need to be aware of the limitations of specific technologies and ensure that they create content in a way that is accessible to all their potential users.
Changes in this update are highlighted in diff-marked versions at: Techniques for WCAG 2.0 (Diff), Understanding WCAG (Diff).
(Note: The first links above go to the latest version of the documents. The "dated" versions of this update are: Techniques for WCAG 2.0 (dated URI), Understanding WCAG (dated URI). The difference between these links is explained in Referencing and Linking to WAI Guidelines and Technical Documents.)
Help Develop Techniques
Updating and expanding these WCAG supporting documents is on-going work, and we welcome your contributions.
And finally, a big thanks to the WCAG Working Group and everyone who is contributing to providing updated WCAG 2.0 Techniques!
URIs were originally used primarily to identify documents on the Web, or with the use of fragment identifiers, portions of those documents. As Web content has evolved to include JavaScript and similar applications that have extensive client-side logic, a need has arisen to use URIs to identify states of such applications, to provide for bookmarking and linking those states, etc. This finding sets out some of the challenges of using URIs to identify application states, and recommends some best practices. A more formal introduction to the Finding and its scope can be found in its abstract.
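The idea can be sketched with a small illustrative helper. The key/value encoding below is a hypothetical convention, not something the finding prescribes: an application serializes its state into the fragment identifier so the resulting URI can be bookmarked or shared, and parses it back when the URI is dereferenced.

```javascript
// Serialize a flat state object into a fragment identifier.
// (Illustrative convention only; the finding does not mandate a format.)
function stateToFragment(state) {
  var pairs = [];
  for (var key in state) {
    if (Object.prototype.hasOwnProperty.call(state, key)) {
      pairs.push(encodeURIComponent(key) + "=" + encodeURIComponent(state[key]));
    }
  }
  return "#" + pairs.join("&");
}

// Parse a fragment identifier back into a state object.
function fragmentToState(fragment) {
  var state = {};
  var pairs = fragment.replace(/^#/, "").split("&");
  for (var i = 0; i < pairs.length; i++) {
    if (!pairs[i]) continue;
    var parts = pairs[i].split("=");
    state[decodeURIComponent(parts[0])] =
      parts.length > 1 ? decodeURIComponent(parts[1]) : "";
  }
  return state;
}
```

In a browser, an application would typically write the encoded state to location.hash and listen for the hashchange event to restore it, so that back/forward navigation and bookmarks work with application states.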
The W3C TAG would like to thank Ashok Malhotra, who did much of the analysis and editing for this work, and also former TAG member T.V. Raman, who first brought this issue to the TAG's attention, and who wrote earlier drafts on which this finding is based.
Alistair MacDonald, the initial chair of the Audio Working Group and the Audio Incubator Group, explains the focus of the group’s work:
What we are working on here is completely overhauling the way we handle audio on the web. We are moving closer every week to a solution that goes far beyond what Flash and Java can offer us. In standardizing an advanced JavaScript audio API in the web browser, we can offer a level playing field for all developers; a solution that is a better fit for the way users engage with today’s “application-web”, not forcing users to deal with the often frustrating experience of downloading new plug-ins to get to interactive content, or excluding them from a growing mobile device market.
It has been really interesting to watch clients’ interest in these features at our web-development company SignedOn in Boston. We work with several media and music companies, as well as a Grammy award winning music producer, yet we get interest from all kinds of organizations. Streaming radio, game development studios, you name it! Everyone we talk to is extremely excited to hear about the developments in the Audio Working Group and wants to know when they can start building things with this API and roll it out cross-browser. The potential of the Audio Processing API for education, music software development, streaming services and game development is quite staggering. It is an untapped industry, and we see W3C as the place to get the work done and deployed.
The Audio Working Group recently added Olivier Thereaux from the BBC as a co-chair of the group. Having worked at the W3C for many years, Olivier brings great experience in the standardization process to the group, which greatly complements co-chair Alistair MacDonald’s knowledge of existing audio software APIs, and experience in music and TV production studios.
Olivier explains the uniqueness of the Audio WG's standardization challenges:
The BBC has long been at the forefront of Research and Development in Audio —exemplified as early as the 1920s with our radio operations and as recently by the launch of the Audio Research Partnership— and we are enthusiastic that this expertise can be brought to the Open Web Platform thanks to the W3C Audio Working Group.
The group grew out of the exploratory W3C Audio Incubator group, in which the many participants, among them my colleague Chris Lowis, developed use cases and requirements to enable great experiences from games to music. Other requirements come from our collaboration with other working groups, including the WebRTC WG, the W3C effort on real-time communication, to bring great audio capabilities to future online communication channels.
And all this is not just for audio consumption, but a platform for amateur and professional audio processing applications on the web, connected and accessible, so we have to meet the needs not only of the traditional web developer, but also audio engineers and musicians.
To solve these requirements, the group was presented with two separate proposed specifications in various stages of implementation, and as co-chairs, Alistair and I are tasked with helping build consensus around a single approach going forward, taking the best from both approaches.
It's an exciting challenge, which will require participation from many sides: as of today, we have browser vendors including Google, Mozilla, and Opera, and content and app developers like SignedOn and Noteflight, and we also invite feedback and participation from other stakeholders.
Even though the Audio Processing API is relatively new, early release implementations from Mozilla and Google have caused a considerable buzz, with developers everywhere already creating demos, games, music applications and all kinds of interactive web applications using these new features.
If you are using an up-to-date version of Chrome or Firefox, you can try some of these out for yourself.
If you have not had a chance to see this video by Dave Humphrey at Mozilla, this is a great opportunity to get a good idea of some of the things that are possible with an Audio Processing API.
The interactive slides from this demo are available for you to test in Firefox.
Additional Firefox demos and API tutorials are also available.
Here is a selection of amazing demos and games using the new audio features in Chrome.
Here is a tutorial on how to get started with Audio Processing in Chrome.
Some basic examples of what users can do with an Audio Processing API.
If you are interested in the development of audio APIs on the web, we want to hear from you. Please try out the demos, and create content yourself; if you know of other demos, or have made a cool demo yourself, please let us know, and we will post links to them from the Audio WG page.
But most importantly, please review the specifications and give us concrete feedback. This is the stage where you can have a real influence on how these technologies are developed and deployed in browsers and authoring tools. There is an introductory document, the Audio Processing API, which serves as a landing page for the technical specifications, the Web Audio API and MediaStream Processing API. Let us know what you think on the Audio WG mailing list, public-audio@w3.org.
Ian: Thank you for speaking with me. I'm interested in how the Open Web Platform is transforming publishing.
Arun: There is a lot of interest in creating content that can be refactored for different devices automatically. One aspect of that is how "responsive design" can increase versatility and efficiency.
Ian: I recently spoke with the Filament Group about "progressive enhancement"; sounds like people are very interested in this topic.
Arun: We are also looking to move content from Flash to HTML5. The latest Flash platform is now being presented as an environment suited to gaming and high performance applications. Additionally, HTML content is versatile in that it can be either accessed directly through the web browser or packaged into an app format on all the major mobile platforms.
Chris: A major issue we see in publishing in HTML5 is that there is not yet an easy migration for users of (Adobe) InDesign. It's a big jump for users. What's missing for them is the right HTML5 authoring tool. There is still some confusion in the marketplace and I think it will be a while until we get there.
Ian: Can you summarize the key motivators for Pearson to move to HTML5?
Chris: One reason is publish once, reach all devices.
Diana: The Open Web Platform lets us play in an open field. As the world's leading learning company, we don't want to be tied to just one vendor. We are still working on device-specific apps (in particular iOS today but Android increasingly), but investing in the Open Web Platform is safer than putting all our eggs in one basket.
Chris: As a publisher, we want to maintain and grow customer relationships. App stores have drained some of that business.
Ian: What main business decisions are driving your technology choices?
Dan: One activity to reach new markets is our "plug and play" platform, which we launched in August. We provide developers with APIs that give them access to curated content. It's free below a certain threshold, with a tiered fee structure for higher usage. The market is still in the early stages for this sort of thing. Similarly, Google Maps now charges for access to maps above a certain threshold.
Ian: How has the platform worked so far?
Dan: It is still pretty early, but we are slowly increasing the content we provide. It's a very interesting market and reminds me of the "open data" movement when it started: we want to let people innovate using our content. For instance, DK Eyewitness Guides offer travelers respected information. Someone developing a transportation ticketing service, for example, could make use of that information and provide a richer experience to customers.
Ian: Are you making the data available using semantic web technology?
Dan: For the moment only through APIs. But if we see a demand for a more semantic approach for the APIs we'd be open to it.
Ian: Any semantic web use internally?
Dan: There are some groups using it internally, for example a taxonomy project.
Ian: So how do you represent the metadata in the Plug-and-Play platform?
Dan: The APIs let you search and query and get back metadata, but it's not linked data. But through this work we have recognized the value of linked data, of being able to link our information with other people's data. I'm interested in this and think that's a good path to go down in 2012.
Ian: Beyond the authoring tool issue, how is it going with app development using HTML5 and other technologies, for instance in terms of interoperability?
Chris: This is familiar territory for me. I'm used to working with platforms and frameworks, and am seeing the same needs arise with HTML5. The list of frameworks you need to know about is always evolving. Recently, though, it has settled down a bit to jQuery and PhoneGap. I've been surprised at how well cross-platform development has worked, including in Internet Explorer. But in terms of development, it's still a very manual process.
Ian: What are the keys to compatibility?
Chris: jQuery and PhoneGap. PhoneGap has a service where you upload your HTML5 app and they compile it into local binaries. This is way easier than before - we used to have people working full-time to create binaries for different platforms. PhoneGap has lowered costs and saved us time.
Ian: Any issues with Web App performance?
Chris: Performance is an issue, but bigger than that are behavior and control. Developers want fine control of threading and priorities. I think this is the most common issue that comes up in HTML5 apps. In iOS you can have someone click something, update the UI, and make a network call in the background. You can't do that yet in HTML5, or it's hard. In JavaScript you can be more specific. There are also issues of user perception of performance (e.g., activity indicators in the UI).
Ian: What about access to device capabilities?
Dan: You get more with native apps. And you know what you are getting. When you are writing HTML5 you know less about what you are coding to.
Ian: Would you say that overall HTML5 is the path to pursue?
Chris: I am more optimistic about HTML5 all the time. Last year people were asking whether they should do HTML5. But more and more the signal keeps getting stronger, such as the Financial Times and Facebook moving in that direction. Sony Ericsson released a WebGL phone, which is huge. There's a lot of momentum right now.
Diana: HTML5 is not the solution for everything. Developers need to think about features, consumers, and choose the right technology. We are moving more toward HTML5 but currently it is still a conscious decision.
Ian: What are your criteria for going with HTML5?
Diana: If you don't need to access specific features of a phone, or if you don't need immersive graphics. We want to do more HTML5 in the future because it will save time and be more efficient to reach multiple devices.
Chris: HTML5 is good for rapid prototyping. You test, tweak, and then "burn" your native apps. Also, it's very convenient to test some things on the server. You can deploy five different versions of a banner graphic and iterate on the server side before shipping any native code.
Ian: What else would you like to see W3C work on?
Chris: Here's what would have a lot of value: there's room for a standard layer above HTML5 that would give you a framework for different well-known application scenarios and make development easier. When we think about a mobile app, we don't want to have to turn to vendors to fill the gaps. I want to be able to use a tag that tells the browser a context I'm working in, for instance "a window with a menu bar" or "render that in the native OS widgets." As a designer I want to focus on functionality and let the browser interpret my description of a high-level task, rendering appropriately according to platform standards. There are SDKs that take care of this sort of thing, but I'd like things to be pushed up into standards that people can use without going to a third party.
Ian: Thank you for the conversation!
The specification has been modified to allow two syntaxes for the time element: you may write a time with a T or a single space separating the date and the time.
<time>2011-12-24T23:59</time>
<time>2011-12-24 23:59</time>
XML documents have a UTF-8 default encoding. Kornel Lesiński asked if it would be possible to do that for documents with an HTML5 doctype. Henri Sivonen (Mozilla), who is also developing the HTML5 parser for Firefox, rejected the suggestion. It would introduce more incompatibilities and more specific behaviors than the already existing explicit mechanisms.
Sometimes Web developers need to extend their content with richer semantics by adding simple data structures to their markup. A first Working Draft of RDFa Lite 1.1 has been published. For example, to specify that this column is written by a human and not a cow:
<p vocab="http://schema.org/"
resource="#karl"
typeof="Person">
This blog post is written by
<span property="name">Karl Dubost</span>.</p>
The purpose of this group is to develop a common specification in OWL for structured and unstructured annotations on Web documents, based on prior work developed by the Annotation Ontology (http://code.google.com/p/annotation-ontology/) and Open Annotation Collaboration (http://www.openannotation.org/) efforts.
You are invited to support the creation of this group: http://www.w3.org/community/groups/proposed#annotation
The WebVTT format (Web Video Text Tracks) is a format intended for marking up external text track resources. WebVTT has escaped HTML5 to be developed by the Web Media Text Tracks Community Group. They also have a Twitter account. Anne van Kesteren has created a WebVTT Validator and published the source code on Bitbucket. The format is a very simple text file.
WEBVTT
00:11.000 --> 00:13.000
<v Roger Bingham>We are in New York City
00:13.000 --> 00:16.000
<v Roger Bingham>We're actually at the Lucern Hotel, just down the street
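As a rough illustration of how simple the cue timings are, here is a hypothetical helper (not part of the validator) that converts a WebVTT timestamp such as 00:13.000 into seconds:

```javascript
// Parse a WebVTT cue timestamp (mm:ss.ttt, with an optional leading
// hh: component) into a number of seconds.
function parseCueTime(timestamp) {
  var parts = timestamp.split(":");
  var seconds = parseFloat(parts.pop());         // ss.ttt
  seconds += parseInt(parts.pop(), 10) * 60;     // mm
  if (parts.length) {
    seconds += parseInt(parts.pop(), 10) * 3600; // optional hh
  }
  return seconds;
}
```

With the cues above, parseCueTime("00:13.000") yields 13, the start time in seconds of the second cue.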
If humanity had an UndoManager API, we might have been able to fix a lot of mistakes. Ryosuke Niwa (WebKit) is working on such an API for the Web and is asking for feedback. A long list of use cases has been outlined to better understand what we need to solve.
Dominique Hazaël-Massieux (W3C) has been maintaining a summary of Standards for Web Applications on Mobile, and has published an update for November 2011.
The new methods for append, prepend, … that we mentioned a few weeks ago have been added to the DOM 4 specification in the mutation methods section. This triggered a new syntax requirement for WebIDL, which has not yet been completely defined. Anne van Kesteren (Opera) has also started to define Mutation observers.
An update has been published for CSS Image Values and Replaced Content, and there is a new editor's draft for CSS3 Grid Layout. As a kind reminder, these are drafts and thus not stable. If the implementations change or drop these features, you will have to eat your own hat :)
A tendency emerged in Web development a little while ago: Web developers started to put hash signs in their URIs not to identify an anchor in the document but the state of an application. The W3C Technical Architecture Group has summarized best practices for handling hash-sign URIs.
The W3C TAG is working on a few topics in parallel. You can participate constructively in the discussions by subscribing to the www-tag mailing list.
You can now buffer this number, RFC 6455, in your memory lane: the WebSocket Protocol has been accepted. Be careful, though, because there might still be a bit of breakage depending on whether your browser has shipped an implementation that is disabled by default. Check your preferences.
In the discussion about extending HTTP status code, Roy Fielding (Adobe) gave an interesting rule for knowing how/when to extend the list of codes.
When extending HTTP status codes, the question that needs to be asked is “how will a client process this response differently than any of the existing status codes?”
This column is written by Karl Dubost, working in the Developer Relations team at Opera Software.
This document is a collective work of the members of the Home Network TF of the IG and lists the design goals and requirements that potential W3C recommendations should support in order to enable access to services and content provided by home network devices, including the discovery and playback of content available to those devices, both from services such as traditional broadcast media and internet based services but also from the other services running on another home network device.
This input document has been submitted to the Device APIs and Policies Working Group (as previously mentioned on this blog) and to the new Web Intents group.
Launched in February 2011, the Web and TV Interest Group is a forum for Web and TV technical discussions, aimed at reviewing existing services and technologies as well as new scenarios, and identifying gaps in the web platform that would prevent these services from being deployed in an effective and interoperable way across devices.
Gap analysis is the first, yet important, step in creating new standards. The IG Task Forces are in charge of reviewing use cases from the TV community and bringing the identified (potential) gaps as input to one or more W3C Working Groups. TF members follow up on the discussion in the WGs and make sure that the identified use cases are accepted and addressed by the WGs (or brought back to the IG for refinement, if needed).
The Web&TV IG is a pretty young group, but it has grown quite a lot in recent months, reaching 123 participants from 44 organizations. At the moment another TF is actively discussing the impact on the HTML5 media interfaces of requirements posed by media formats commonly used by TV services.
The IG chairs are planning to start a few new activities. More news soon on this blog and on the IG home page.
Yehuda Katz and a few others have started a discussion on Restoring PUT and DELETE in HTML5 forms (Issue 1067). The Ruby on Rails Web framework currently uses a hack to simulate PUT and DELETE.
The Web is an amazing big pile of history. The border attribute on table elements didn't have any units, yet people had a tendency to add them, as in the incorrect <table border="5px">. So browsers repaired the value automagically, taking into account only the beginning of the string and ignoring any trailing characters. Sylvain Galineau (Microsoft) raised an issue because he thought this would create problems for microdata values. Ian Hickson clarified that such values are not valid, but are repaired by the browser when wrong.
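The repair behavior can be sketched as a small hypothetical function (not the actual parsing algorithm from the HTML specification): keep the leading digits and ignore whatever trails them.

```javascript
// Sketch of legacy attribute-value repair: read the leading integer
// and discard any trailing garbage such as a stray "px" unit.
// Returns null when no leading digits are found.
function repairDimensionValue(value) {
  var match = /^\s*(\d+)/.exec(value);
  return match ? parseInt(match[1], 10) : null;
}
```

So border="5px" ends up being treated as 5, while a value with no leading digits is simply dropped.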
INS and DEL elements, which are used to track insertion and deletion of content in HTML, have a very simple model. So simple that, according to Daniel Glazman (Disruptive Innovations), it is not easily implementable in any useful way in authoring tools.
A Community Group has been proposed to discuss ideas around the future of HTML and associated features.
WebKit has a proposed patch for the Network Information API. This is an interesting API because it allows developers to create apps that behave differently depending on whether the network is 3G, Wi-Fi, etc. For example, imagine a responsive Web design where images of appropriate sizes are sent depending on the type of network, which gives a good hint about the available bandwidth.
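As an illustrative sketch only: the connection type would come from the proposed (and still draft) API, and the image file names below are entirely made up.

```javascript
// Hypothetical responsive-image selection keyed on connection type.
// In a browser supporting the draft API, the type would come from
// something like navigator.connection; here it is just a string.
function pickImageVariant(connectionType) {
  switch (connectionType) {
    case "wifi":
    case "ethernet":
      return "photo-large.jpg";   // fast connection: full-size image
    case "3g":
      return "photo-medium.jpg";  // moderate bandwidth
    default:
      return "photo-small.jpg";   // 2g, unknown, etc.
  }
}
```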
Rich Tibbett (Opera) has proposed a model for the Web intents work.
Simon Pieters (Opera) wanted an API to queue a task. After some discussion about whether such an API is needed, Glenn Maynard proposed a piece of code that Simon then extended:
var queueTask = function(task) {
    var mc = new MessageChannel();
    var args = [].slice.call(arguments, 1);
    mc.port1.onmessage = function() {
        this.onmessage = null;
        task.apply(task, args);
    };
    mc.port2.postMessage(null);
};
queueTask(function(arg) { console.log(arg, this) }, "test");
Dimitri Glazkov (Chromium team) has proposed a high level overview of Web Components for Web Developers.
The setAttributeNS() method is implemented differently in IE, Firefox, WebKit and Opera. The discussion, which started on the mutability of attributes, led to a discussion about simplifying the platform for HTML documents by removing the namespacing of attributes. According to Jonas Sicking (Mozilla), that would also improve browser performance. Namespacing would still be needed for the XMLDocument interface.
When exchanging data between client and server, there are a few techniques. One of them is XMLHttpRequest, which helps inject data into the page without reloading the full context. People often use it to transfer JSON-packaged data. Anne van Kesteren (Opera) has added a json response type to XMLHttpRequest.
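A sketch of the convenience this brings: with responseType set to "json", the response is the parsed object, or null when the body is not valid JSON. The helper below only mimics that described behavior for illustration; it is not the browser implementation.

```javascript
// Mimic the described semantics of responseType = "json": return the
// parsed value, or null (rather than throwing) for an invalid body.
function jsonResponse(bodyText) {
  try {
    return JSON.parse(bodyText);
  } catch (e) {
    return null;
  }
}
```

Without the json response type, every caller ends up writing this try/catch around JSON.parse on responseText by hand.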
When a client and a server interact on the Web, the server answers client requests with three-digit codes. These have very specific meanings. For example, 200 means that the server has successfully answered the client's request. It happens quite often that Web developers (specifically those developing Web APIs) lack an HTTP status code for richer interactions between the client and the server. Mark Nottingham has been working for a while on new HTTP status codes.
HTTP/1.1 allows many types of characters. This has a tendency to create security issues when these characters are translated, for example in CGI/1.1, into UNIX environment variables. Some of them are not valid and/or parseable characters. Yutaka Oiwa brought up the subject on the HTTPbis mailing list.
This week, the theme of Anne van Kesteren’s report is encoding woes and WebVTT.
This column is written by Karl Dubost, working in the Developer Relations team at Opera Software.
To my surprise and delight people took notice, including some of my heroes. Fast forward 8 months, I released "HTML5 for developers", a specification that was built to remove the "unimportant implementor details", enhance readability, be searchable, and function even when offline.
Since then, I've had a number of enlightening conversations about the future of the web with developers, browser implementors and working groups, but now more than ever it's clear that we need to improve communication as a community, by using design.
The W3C agrees, and has invited me to join the CSS Working Group to continue my work in a more collaborative, consultative manner.
I'll be working with the likes of Vincent Hardy, Divya Manian and Elika Etemad. Vincent is the author of the "CSS Shaders" specification (probably the most exciting specification in the works), and Divya and Elika have both worked on improving the CSS Working Group’s visual design.
Firstly, we'll experiment with the design of the CSS Shaders draft and do some R&D on less stylistic features (like offline, search, inline bug tracking, and many others). We'll endeavour to share our working process with the community at large.
Hopefully this is the start of a successful emergence of communication for all of us — as web makers.
Our hope is to get feedback from this early work and then find a way to make it available on a wider set of specifications, beyond the initial work we are doing on the CSS Working Group technical documents.
For now, please leave your questions and commentary on the www-style mailing list, or get in touch with me directly.
Ben Schwarz
germanforblack.com, @benschwarz on twitter, google+ and github.
On the eve of his retirement, I spoke with Roger Cutler, longtime W3C participant from Chevron.
IJ: Roger, since you have been participating in W3C for some time, can you describe how Chevron's interests have changed?
RC: In 2000 our focus was on XML and Web Services. As those areas matured, we turned our interests to Semantic Web technology. As end users of Web technology we are often insulated from standards through our dependency on vendors. For example, although we think HTML5 will be important to us, nonetheless we don't have a particular axe to grind since vendors will come between us and HTML5 much of the time. But in the case of the Semantic Web we are trying to access the technology more directly and much of the expertise in the field can be found within the W3C.
IJ: How is Chevron using Semantic Web technology?
RC: In one project we sought to exploit the technical strengths of Semantic Web technology such as the expressiveness and reasoning achievable with OWL. While our efforts in that project have been a success as far as the technology goes, we have not yet seen a significant business benefit.
A second effort focused on challenging integration problems that involve information about equipment in major capital projects such as an oil rig or platform. These capital projects involve tens of thousands of objects: flanges, pumps, blowout preventers, sub-assemblies, and so on. All the pieces of equipment come with documents (for safety and regulatory reasons, engineering drawings, etc.) and manufacturer’s specifications (e.g., temperatures at which the components function). The equipment data associated with these projects are both valuable and complex. We tried to exploit the expressiveness of OWL to create an ontology that puts this together and deals with the complexity. This has been technically successful but again we haven’t yet deployed the solution so we have not yet derived any business benefit from it. Obviously we’re at the stage of learning and experimenting with the technology.
IJ: Can you describe where the data comes from and how it is managed?
RC: All this information about equipment lives in different forms in a number of different systems and is handled separately by different organizations with different data models. For example, the people who build facilities and determine what equipment they will use have data about the equipment. The people doing the maintenance of the equipment have much the same data, only structured differently, and with additional information specific to their needs. Then there are the production people running the equipment, turning on valves and seeking to maximize production on a daily basis. They have their own systems and equipment. Still others are modeling the characteristics of reservoirs. The list goes on. Each party optimizes information for their own needs, and all of the systems have evolved independently. And yet, much of the time they are dealing with the same information.
As an example of what we need, suppose that someone replaces a pump or reroutes some pipes. We need to propagate information about these changes into all these different systems, which is a time-consuming, manually intensive, and fragile process. Some sort of communication among systems is necessary on an ongoing basis. It is very important that it be done right. We would like to use the Semantic Web to help us do it right.
The integration can also help us learn more from the data we have. For instance, we might want to combine information about production scheduling and maintenance scheduling to optimize them simultaneously. If you want to do these things, the systems need to talk to each other, but, as I said, it is difficult.
IJ: What approaches can you take to address this?
RC: People use point-to-point solutions or big data warehouses, but neither approach scales gracefully. Point-to-point solutions become very complex and hard to maintain. Data warehouses create replication issues and tend to be fragile. So, the possibility of a smarter, more agile, more cost-effective way of dealing with integration would have a great deal of value to us. The Semantic Web is not guaranteed to be the solution, but it looks plausible and we’d like to see if it lives up to its promise in practice.
IJ: Earlier you said that you were successful with the technology but not sure you would deploy it. Why not?
RC: Quite simply, it's hard. We are slowly learning how to apply it to our world. The big target --- the thing that would make this investment in technology worthwhile --- is integration. But to integrate things you need more than one thing to integrate! So if we start by building an ontology for equipment that attempts to exploit the expressivity and flexibility of OWL, then later we may be able to build another for maintenance and link them. It may be, in fact, that this stepwise approach has caused us to try to be more aggressive in using the advanced features of OWL than might be optimal for integration purposes – but I guess we’ll find out whether that’s the case as we proceed.
IJ: Has reuse of existing vocabularies proved valuable?
RC: Not in the project I've just described. We do think that using an "upper ontology" -- one that defines very general concepts like units of measure or geographical concepts -- to structure class relationships is probably a good way to go. But reuse has not been our primary motivation. Rather, it has been to integrate the information in our internal systems.
IJ: How does this work?
RC: I'm fond of telling people within Chevron who ask about Semantic Web technology that anything you can do with the Semantic Web you can do with relational databases – if you’re willing to write enough code, which can lead to higher cost and complexity. In fact, we have demonstrated a case in which similar objectives were achieved in an ontology with about fifteen lines of readily comprehensible rules and in a relational database with over 1000 lines of pretty complex code. So in the equipment catalog project, there is a solution in a relational database, but it involves a bunch of obscure pointers in the tables and associated code. That system tries to maintain some relationships of interest to us, but it doesn't handle all of them. It is not only incomplete; it would be complicated to make it complete. So we wanted to get the data out and do better. We wrote programs to generate OWL from the schemas in our databases. The declarative techniques serve as a framework that lets us express the complex relationships in a way that is more maintainable and scalable. I think we've demonstrated that. The result is that we have reproduced our internal system in OWL, and the OWL version should be more maintainable and scalable, as well as more complete.
IJ: What have you learned from this project?
RC: One thing that intimidates us is OWL reasoning. It is very daunting to figure out how to gain the organizational capability to support a technology that is so difficult to understand and use effectively.
IJ: What makes it so challenging?
RC: For one, the way reasoning works under the open-world model. An innocent-looking statement can cause unexpected results, and it can be challenging to understand why. We are also making extensive use of OWL restriction classes, which can be tricky.
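To illustrate the kind of construct he means (the names below are hypothetical, not Chevron's actual ontology), a minimal OWL restriction class in Turtle might look like:

```turtle
@prefix ex:  <http://example.com/equipment#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

# An inspected pump is defined as anything with at least one recorded inspection.
ex:InspectedPump a owl:Class ;
    owl:equivalentClass [
        a owl:Restriction ;
        owl:onProperty ex:hasInspection ;
        owl:someValuesFrom ex:Inspection
    ] .

ex:pump42 a ex:Pump .   # no inspection asserted for this pump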
IJ: Does participation in the Working Groups give you the opportunity to address some of these issues?
RC: To some extent, yes. It is valuable to us to have the opportunity to influence what gets standardized. Sharing use cases with Working Groups can also be valuable. I have contributed use cases and sample data directly to Working Groups in which I was participating. Doing so makes it more likely they will create something that works for us. Being aware of that activity may give Chevron a leg up, or may benefit our entire industry. For example, I contributed use cases and data to the Efficient XML Interchange (EXI) work.
However, the strongest motivator by far for W3C Membership is the experience, knowledge and flow of information through personal contacts. Participation leads to relationships with world-class experts in a wide variety of fields. The conversations at membership meetings and the online discussions in which we freely express our opinions generate trust. I come back from these meetings with knowledge about industry direction and technology development. There are a lot of people in the W3C community who can help us learn about topics of interest, not only related to the Semantic Web but also many other technologies. This has enabled me to do a much better job of advising Chevron on these technologies -- where we should play a leadership role and why, and possible solutions to specific problems. We also learn from observing the W3C process, an extensive consensus-based process that has both formal aspects and informal traditions. Chevron learns from observing how this consensus-driven organization does its business.
IJ: Are there other areas of W3C work you are watching?
RC: There's a good deal of interest in Chevron in HTML5. Other hot tickets for us include mobile, social networking, cloud computing, and big data. We're glad to see the standards work in these areas but don't have a particular outcome in mind. For the cloud, security and policy are important. We want to avoid vendor lock-in of services. We want the protocols of cloud solutions to be standardized so that we can change suppliers if necessary.
IJ: What would you like to see W3C do differently?
RC: It is my perception that historically the W3C has been much more concerned and knowledgeable about the public Web than about how Web technologies are used in corporate intranets. The enterprise environment is different in fundamental ways from the public Web, and the issues and concerns are not the same. For example, there is no anonymity, and access control needs are different. In the Oil and Gas industry we also get into federation issues because there are a lot of joint ventures between highly controlled environments. Certainly many of the W3C Members that market to companies like ours have a good understanding of those issues, but I think the W3C leadership should do more to understand that world.
W3C also needs to increase investment in authoring tools. It is a big issue for us if authoring tools create output that doesn’t conform to specifications, is not accessible, is inefficient, or is hard to maintain. I would like to see more attention paid to authoring tools and to testing that ensures they conform.
Lastly, most W3C Members are technology vendors, universities, and government agencies. Chevron, on the other hand, is an end user, and I think we would all benefit if there were more Members like that in W3C. Just as we gain insight and information from participation, W3C can benefit from the insights and views of end user companies. I have been involved in the creation of W3C's first Business Group, and I see Business Groups as another mechanism to help bring more end user companies into the W3C community.
IJ: Though you are retiring, will you continue to participate?
RC: Perhaps! From a personal perspective, I have really valued the friendships and working relationships I have formed with people in W3C. It's been a wonderful thing.
IJ: And we have loved having you. Thank you, Roger, and best wishes in your retirement!
The Open Web Platform weekly summary also mentions the find and findAll methods, Web architecture, and the Web Apps WG hosting new work.
The old HTML4 abbr attribute has been deprecated in the HTML5 specification. The role of the attribute was to give a short form of table cell content, meant to help users get the content of these cells quickly. A Firefox patch has just been proposed to implement it.
Frank Olivier (Microsoft) said:
Text editing is certainly a fool’s errand in canvas.
and indeed there have been previous attempts to recreate a text editor entirely in canvas. Some of these projects have since been abandoned. That said, the group is struggling to find solutions for raising accessibility in canvas to an acceptable level. One solution being explored is to add primitives for Path.
There are proposals for
The work on Component Models that I have mentioned a few times under the label shadow DOM is moving to the Web Apps WG, led by Dimitri Glazkov (Chromium team).
Darin Fisher (Google) proposes that the Pointer Lock (formerly known as Mouse Lock) spec and the Gamepad spec be added to the Web Applications WG’s charter.
Some fundamental features are missing in JavaScript. For example, the absence of startsWith and endsWith on strings annoys me a lot. There is a proposal for evolving ECMAScript on the IE blog.
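To make the gap concrete, here is a minimal sketch of the kind of helper developers write while waiting for the language to catch up. The method names mirror the proposed ones, and the guards defer to native implementations once browsers ship them:

```javascript
// Sketch of startsWith/endsWith helpers, guarded so they defer to
// native implementations when available.
if (!String.prototype.startsWith) {
  String.prototype.startsWith = function (search) {
    // lastIndexOf with fromIndex 0 only matches at position 0.
    return this.lastIndexOf(search, 0) === 0;
  };
}
if (!String.prototype.endsWith) {
  String.prototype.endsWith = function (search) {
    var end = this.length - search.length;
    return end >= 0 && this.indexOf(search, end) === end;
  };
}

console.log("index.html".startsWith("index")); // true
console.log("index.html".endsWith(".html"));   // true
```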
There has been a gigantic thread (with a lot of misunderstandings and rebuttals) about allowing XPath in the find/findAll APIs we were talking about last week. The discussion follows the common permathread about CSS selectors versus XPath as ways to select nodes in a DOM. Though they have similar goals, they address different problem spaces and do not have exactly the same set of features. Some people argue it is not worth the cost to add XPath for selecting nodes. Eventually, people will reach an agreement. We are not there yet.
matchesSelector is verbose, and people have started to look at ways to make it shorter for Web developers. Two proposals have been made by Tab Atkins (Google): .matches() and .is(). Dimitri Glazkov said, though, that he wishes to use .is() for components. It would be used like
elt.matches("div span")
Jake Archibald (Lanyrd) is not satisfied with the Shadow DOM and scoped stylesheets we have mentioned a few times in this column.
There is experimentation with new styles for CSS specifications; the CSS Shaders proposal is currently using the new style.
URIs are one of the cornerstones of the Web architecture. There is a specification clearly defining URI syntax and meaning, but as usual with the human Web, things get deployed with errors in a distributed way. What happens when you get something that looks like a URI but is not really one? User agents have long implemented techniques to cope with the common URI misspellings found on the Web. Mike Smith will start working on a document on how browsers process URIs, following a proposal made at the HTML WG face-to-face meeting during TPAC 2011. It has been suggested that this should be part of the URL API document.
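As a rough illustration of the fixing-up involved (the sample inputs are made up, and this uses the WHATWG-style lenient URL parser that this browser behavior was eventually written down in), a browser-style parser turns common author sloppiness into a canonical form:

```javascript
// Sketch: how a lenient, browser-style URL parser normalizes
// common author mistakes into a canonical URL.
const examples = [
  "HTTP://Example.COM:80/a/../b",  // mixed case, default port, dot segments
  "http:/example.com/page",        // missing slash after the scheme
];

for (const input of examples) {
  console.log(input, "->", new URL(input).href);
}
// HTTP://Example.COM:80/a/../b -> http://example.com/b
// http:/example.com/page -> http://example.com/page
```

The parser lowercases the scheme and host, drops the default port, resolves `..` segments, and forgives a missing authority slash for special schemes, which is exactly the kind of repair this document would need to describe.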
The Referer HTTP header has long been a concern in terms of security and privacy. Adam Barth is proposing to add a referrer attribute in HTML (on the meta element) for suppressing its value from HTTP requests.
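The proposal took roughly this shape (the value names were still under discussion at the time; the later Referrer Policy specification renamed `never` to `no-referrer`):

```html
<!-- Suppress the Referer header for requests made by this page -->
<meta name="referrer" content="never">
```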
This week, the theme of Anne van Kesteren's report is XMLHttpRequest.
This column is written by Karl Dubost, working in the Developer Relations team at Opera Software.
An example may be the schema.org example I used in another blog post lately:
<div vocab="http://schema.org/" typeof="Product">
  <img property="image" src="dell-30in-lcd.jpg" />
  <span property="name">Dell UltraSharp 30" LCD Monitor</span>
  <div property="aggregateRating" typeof="AggregateRating">
    <span property="ratingValue">87</span>
    out of <span property="bestRating">100</span>
    based on <span property="ratingCount">24</span> user ratings
  </div>
  <div property="offers" typeof="AggregateOffer">
    <span property="lowPrice">$1250</span>
    to <span property="highPrice">$1495</span>
    from <span property="offerCount">8</span> sellers
  </div>
  Sellers:
  <div property="offers" typeof="Offer">
    <a property="url" href="save-a-lot-monitors.com/dell-30.html">
      Save A Lot Monitors - $1250</a>
  </div>
  <div property="offers" typeof="Offer">
    <a property="url" href="jondoe-gadgets.com/dell-30.html">
      Jon Doe's Gadgets - $1350</a>
  </div>
  ...
</div>
Running this through the distiller, one gets the following JSON output:
{
  "@context": {
    "@vocab": "http://schema.org/",
    "@coerce": {
      "@iri": [
        "http://schema.org/image",
        "http://schema.org/offers",
        "http://schema.org/url",
        "http://schema.org/aggregateRating"
      ]
    }
  },
  "@type": "Product",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingCount": "24",
    "ratingValue": "87",
    "bestRating": "100"
  },
  "offers": [
    {
      "@type": "Offer",
      "url": "http://www.example.org/save-a-lot-monitors.com/dell-30.html"
    },
    {
      "@type": "AggregateOffer",
      "lowPrice": "$1250",
      "highPrice": "$1495",
      "offerCount": "8"
    }
  ],
  "name": "Dell UltraSharp 30\" LCD Monitor",
  "image": "http://www.example.org/dell-30in-lcd.jpg"
}
Of course, the generated JSON may be a bit more complex, e.g., if the original page contains other RDFa attributes generating other triples. But it still looks pretty readable to me…
Tidy, the useful piece of code that helps you fix your broken XHTML and HTML, had not evolved. Björn Höhrmann published a patch to fix it, and Dominique Hazaël-Massieux (W3C) decided to create an HTML5 Tidy GitHub project.
There are many ways of storing information on the client side of the Web. Cookies were one of the first, but since then AppCache and Web Storage have been developed. There is a wiki page on client-side database solutions documenting the relations between the different technologies.
There is a lot of work going on to enable a Fullscreen API.
I mentioned Web Intents last week. An introduction about Web Intents has been written by the priceless timeless.
There is a proposal for a new findAll method. Jonas Sicking (Mozilla) is asking what type findAll should return.
Anne van Kesteren (Opera) had opened a bug in the WebKit bug reporting system about the deprecated document.width and document.height properties. It is fixed! The two properties have been removed from the WebKit source code. He also started a new round of discussions on how to improve the DOM.
Rafael Weinstein (Chromium team) is proposing to have a fragment of DOM that is inert but present inside the page for future use; dynamic Web pages could activate it later during user interaction. The proposal is a declarative inert DOM (a <template> element).
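The markup being discussed looked roughly like this (the content and ids here are an illustrative sketch, not taken from the proposal):

```html
<!-- Inert markup: not rendered, scripts not run, images not fetched -->
<template id="comment-template">
  <li class="comment">
    <img class="avatar" src="avatar.png" alt="">
    <p class="text"></p>
  </li>
</template>
```

A script can later clone the template's content and fill it in at the moment the user interaction actually happens.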
Fantasai (Mozilla) is explaining how the CSS Working Group is working.
scoped stylesheets have been introduced to define a mechanism whereby a stylesheet applies only in a precise context. scoped is proving more complicated to implement than initially thought.
RDFa is the Swiss Army knife for injecting rich data into your Web pages. Initially designed for XHTML, it is in the process of being evolved by the group for HTML. There is a lot of discussion on how to make it easy for developers and compatible with the current Web. Sebastian Heath proposed a small change to RDFa consumption to take the id attribute into account. Currently, a Web author needs to declare an about attribute:
<p id="item1" typeof="ex:item" about="#item1">
<span property="item_name">An interesting item (1)</span>
</p>
His proposal is to reduce it to:
<p id="item1" typeof="ex:item">
<span property="rdfs:label">An interesting item (1)</span>
</p>
which would produce these triples of information:
<http://example.org/document1#item1> rdf:type <http://example.org/ns/item> .
<http://example.org/document1#item1> rdfs:label "An interesting item (1)" .
There are discussions at a regular pace about vendor extensions in CSS. In my daily job, I have to contact Web sites whose improper use of CSS makes it hard for users to have a good Web experience. So I have written a mail to explain why I dislike them: CSS vendor extension issues. Henri Sivonen (Mozilla) extended this into a more general discussion of how Vendor Prefixes Are Hurting the Web. Daniel Glazman (CSS WG co-chair) doesn’t completely agree, or maybe he does, for certain parts, and decided to write an answer to Henri Sivonen, which triggered another supportive post by Alex Russell (Google) on why Vendor Prefixes Are A Rousing Success.
Conclusion? The discussion is going on.
I had missed it last July, but it seems that there is a “new” quarterly publication about SVG. The first issue has 3 articles covering DOM Helper, D3.js, and sensorial expressions and emotion.
Henri Sivonen (Mozilla) landed support for HTML parsing in XMLHttpRequest in the Firefox engine (Gecko). He is giving details on how he implemented it. His implementation provides direct feedback for changing the XMLHttpRequest specification. In the same way, Julian Reschke has implemented and tested a part of the specification about content-type rewriting. These are two of the many ways you can help specification development.
This week, the theme of Anne van Kesteren’s report is <time> and findAll. I sense in Anne’s report a certain fatigue and a desire to know whether this work is useful. He is asking for feedback.
For the past few years, we've seen this additional dimension brought to media content, building hypermedia: SVG and canvas make it possible to build graphics that integrate or link to content from various sources, and the addition of the audio and video tags in HTML5 is the starting point for making audio-video content as integrated into the Web as images have been. The Popcorn.js project illustrates, for instance, how video content can benefit from hyperlinking (much in the same way I had been exploring 2 years ago with my presentation viewer). Because these technologies can be deployed everywhere easily, I expect this will increasingly revolutionize our consumption of media.
I believe we're now starting to see a new trend of that sort, with the emergence of what I would call hyperdevices.
As more and more devices (mobile obviously, but also tablets, TVs, cars, lightbulbs and many more) get connected, they more often than not get shipped with Web capabilities.
As the Web gains more and more access to the specific capabilities of these devices (touch interactions, geolocation, accelerometer, and many more), not only does it become a platform of choice for developing applications targeted at these devices (as we’re exploring in the MobiWebApp project), but it also creates new forms of interaction across these devices that were not possible previously.
To illustrate that point, I've built a couple of very simple demos:
We're still in the very early days of this wave, but there is growing interest around it. Device and service discovery (see the recent discussions on Web Intents) will play a role there without a doubt, and the work done as part of the webinos project (where I'm involved) will hopefully also inform the technical challenges that are still to be overcome. We will also need plenty of creativity to invent user interactions that match these new capabilities.
But for sure, this is where we are going.
Follow me on Twitter: @dontcallemdom.
His pioneering work on developing private communications between social networks in Diaspora has had a great impact and we will sorely miss his participation in their further development. Ilya's vision of preserving user-control and privacy on the social web will continue to influence open standards for the Web in the years to come.
We extend our sympathies to his family, friends and colleagues.