W3C

Web Architecture

Web Architecture focuses on the foundation technologies and principles which sustain the Web, including URIs and HTTP.

Architecture Principles

Web Architecture principles help to design technologies by providing guidance and articulating the issues around some specific choices.

Identifiers

We share things by their names. URLs, URIs, and IRIs are the ways we name and manipulate things on the Web. Additional addressing needs in the Web Services stack have motivated further layers.

Protocols

Protocols are the vehicle for exchanging our ideas. HTTP is the core protocol of the Web. W3C is also working on XML Protocols and SOAP in relation to Web Services.

Meta Formats

XML, the Extensible Markup Language, is used to build new formats at low cost (due to widely available tools to manipulate content in those new formats). RDF and OWL allow people to define vocabularies (“ontologies”) of terms as part of the Semantic Web.

Protocol and Meta Format Considerations

Documents on the Web are loosely joined pieces, connected by identifiers. This creates a maze of rich interactions between protocols and formats.

Internationalization

W3C has worked with the community on the internationalization of identifiers (IRIs) and a general character model for the Web.

News

The W3C TAG is pleased to announce the publication of a new TAG Finding, "Identifying Application State."

URIs were originally used primarily to identify documents on the Web, or with the use of fragment identifiers, portions of those documents. As Web content has evolved to include Javascript and similar applications that have extensive client-side logic, a need has arisen to use URIs to identify states of such applications, to provide for bookmarking and linking those states, etc. This finding sets out some of the challenges of using URIs to identify application states, and recommends some best practices. A more formal introduction to the Finding and its scope can be found in its abstract.

The W3C TAG would like to thank Ashok Malhotra, who did much of the analysis and editing for this work, and also former TAG member T.V. Raman, who first brought this issue to the TAG's attention, and who wrote earlier drafts on which this finding is based.

Video recordings of speakers at the MultilingualWeb workshop in Limerick are now available, in addition to the previously uploaded slides and IRC notes.

Entitled “A Local Focus for the Multilingual Web”, the workshop surveyed and shared information about currently available best practices and standards that can help content creators and localizers address the needs of the multilingual Web, including the Semantic Web. Attendees also heard about gaps that need to be addressed, and enjoyed opportunities to network and share information between the various different communities involved in enabling the multilingual Web. The second day was given over to an Open Space discussion with breakouts.

Work is under way on a summary report for the workshop, which will be announced in due course.

Building on the success of the Madrid, Pisa and Limerick workshops, preparations have now begun for the next workshop, to be held in Luxembourg, at the European Commission, in March 2012. See the Call for Participation.

Thanks to VideoLectures for hosting the videos.

15 – 16 March 2012, Luxembourg. Co-located with the European Commission’s Language Technology Showcase Days, and hosted by the Directorate-General for Translation (DGT) of the European Commission.

The MultilingualWeb project is looking at best practices and standards related to all aspects of creating, localizing and deploying the Web multilingually. The project aims to raise the visibility of existing best practices and standards and identify gaps. The core vehicle for this is a series of four events planned over two years.

After three highly successful workshops in Madrid, Pisa, and Limerick, this final workshop in the series will continue to investigate currently available best practices and standards aimed at helping content creators, localizers, tools developers, and others meet the challenges of the multilingual Web.

Participation is free. We welcome participation from both speakers and non-speaking attendees. For more information, see the Call for Participation.

The MultilingualWeb Workshop in Limerick was once more a success, thanks to the efforts of the excellent speakers and the local organizers, but also thanks this time to the participants themselves, who enthusiastically took part in the Open Space discussion organized by TAUS. This will hopefully lead to some longer-term initiatives, and most groups are already planning to continue their discussions in Luxembourg next spring. We had around 90 attendees.

The program page has now been updated to point to speakers’ slides and to the relevant parts of the IRC logs. Links to video recordings will follow shortly.

There will also be a page pointing to social media reports, such as blog posts, tweets and photos, related to the workshop. If you have any blog posts, photos, etc. online, please let Richard Ishida know (ishida@w3.org) so that we can link to them from this page.

A summary report of the workshop will follow a little later.

Register now if you want to ensure that you get a place.

Participation in the workshop is free, but spaces are limited. We have another great program in place.

The keynote speaker will be Daniel Glazman, of Disruptive Innovations, and co-chair of the W3C CSS Working Group. He will be followed by a strong line-up in sessions entitled Developers, Creators, Localizers, Machines, Users, and Policy. On the morning of the second day, Jaap van der Meer of TAUS will facilitate “Open Space” style discussion sessions, where workshop participants themselves will choose topics to discuss in several breakout groups.

There will be a dinner reception on the evening of 21 September (free of charge, workshop registrants only).

The MultilingualWeb workshops, funded by the European Commission and coordinated by the W3C, look at best practices and standards related to all aspects of creating, localizing and deploying the multilingual Web. The workshops have been successful because they attract a wide range of participants from fields such as localization, language technology, browser development, content authoring and tool development, creating a holistic view of the interoperability needs of the multilingual Web.

This workshop is co-located with the 16th Annual LRC Conference, and hosted by the LRC (Language Research Centre) and the University of Limerick.

We look forward to seeing you in Limerick!

An initial program has been published for the upcoming W3C MultilingualWeb workshop in Limerick, Ireland, 21-22 September 2011. (Co-located with the 16th Annual LRC Conference.)

The keynote speaker will be Daniel Glazman, of Disruptive Innovations, and co-chair of the W3C CSS Working Group. He will be followed by a strong line-up in sessions entitled Developers, Creators, Localizers, Machines, Users, and Policy.

See the Call for Participation for details about how to register for the workshop. Participation in the workshop is free.

The MultilingualWeb workshops, funded by the European Commission and coordinated by the W3C, look at best practices and standards related to all aspects of creating, localizing and deploying the multilingual Web. The workshops have been successful because they attract a wide range of participants from fields such as localization, language technology, browser development, content authoring and tool development, creating a holistic view of the interoperability needs of the multilingual Web.

We look forward to seeing you in Limerick!

An updated version of Working with Time Zones has just been published as a Working Group Note.

Date and time values can be complex and the relationship between computer and human timekeeping systems can lead to problems. The working group has updated this version to contain more comprehensive guidelines and best practices for working with time and time zones in applications and document formats. Use cases are provided to help choose an approach that ensures that geographically distributed applications work well. This document also aims to provide a basic understanding and vocabulary for talking about time and time handling in software.

Editor: Addison Phillips, Lab126.

This is a reminder to register for the upcoming W3C MultilingualWeb workshop in Limerick, Ireland, 21-22 September 2011. (Co-located with the 16th Annual LRC Conference.)

See the Call for Participation for details about how to register for the workshop and propose a talk. Participation in the workshop is free.

The MultilingualWeb workshops, funded by the European Commission and coordinated by the W3C, look at best practices and standards related to all aspects of creating, localizing and deploying the multilingual Web. The workshops have been successful because they attract a wide range of participants from fields such as localization, language technology, browser development, content authoring and tool development, creating a holistic view of the interoperability needs of the multilingual Web.

We look forward to seeing you in Limerick!

Hash URIs

13 May 2011

from W3C TAG Lines

Note: This was initially posted at http://www.jenitennison.com/blog/node/154.

There’s been quite a bit of discussion recently about the use of hash-bang URIs following their adoption by Gawker, and the ensuing downtime of that site.

Gawker have redesigned their sites, including lifehacker and various others, such that all URIs look like http://{domain}#!{path-to-content}; the #! is the hash-bang. The home page on the domain serves up a static HTML page that pulls in Javascript that interprets the path-to-content and requests that content through AJAX, which it then slots into the page. The sites all suffered an outage when, for whatever reason, the Javascript couldn’t load: without working Javascript you couldn’t actually view any of the content on the site.
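The mechanics described above can be summarised in a short, purely illustrative sketch (the "content" element id and the /content/ endpoint are invented for the example, not Gawker’s actual code): the static wrapper page reads the fragment after the #!, fetches the corresponding content over AJAX and slots it into the page.

    // Hypothetical sketch of hash-bang transclusion; element id and endpoint are invented.
    async function loadFromHashBang(): Promise<void> {
      const hash = window.location.hash;                  // e.g. "#!5770791/top-10-tips..."
      if (!hash.startsWith("#!")) return;                 // no hash-bang, nothing to transclude
      const path = hash.slice(2);                         // strip the "#!"
      const response = await fetch("/content/" + path);   // assumed AJAX endpoint
      const target = document.getElementById("content");
      if (!target) return;
      target.innerHTML = response.ok ? await response.text() : "Content not found";
    }

    window.addEventListener("DOMContentLoaded", loadFromHashBang);
    window.addEventListener("hashchange", loadFromHashBang);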

This provoked a massive cry of #FAIL (or perhaps that should be #!FAIL) and a lot of puns along the lines of making a hash of a website and it going bang. For analysis and opinions on both sides, see:

While all this has been going on, the TAG at the W3C have been drafting a document on Repurposing the Hash Sign for the New Web (originally named Usage Patterns For Client-Side URI parameters in April 2009) which takes a rather wider view than just the hash-bang issue, and on which they are seeking comments.

All matters of design involve weighing different choices against some criteria that you decide on implicitly or explicitly: there is no single right way of doing things on the web. Here, I explore the choices that are available to web developers around hash URIs and discuss how to mitigate the negative aspects of adopting the hash-bang pattern.

Background

The semantics of hash URIs have changed over time. Look back at RFC 1738: Uniform Resource Locators (URL) from December 1994 and fragments are hardly mentioned; when they are, they are termed “fragment/anchor identifiers”, reflecting their original use, which was to jump to an anchor within an HTML page (indicated by an <a> element with a name attribute; those were the days).

Skip to RFC 2396: Uniform Resource Identifiers (URI): Generic Syntax from August 1998 and fragment identifiers have their own section, where it says:

When a URI reference is used to perform a retrieval action on the identified resource, the optional fragment identifier, separated from the URI by a crosshatch (“#”) character, consists of additional reference information to be interpreted by the user agent after the retrieval action has been successfully completed. As such, it is not part of a URI, but is often used in conjunction with a URI.

At this point, the fragment identifier:

  • is not part of the URI
  • should be interpreted in different ways based on the mime type of the representation you get when you retrieve the URI
  • is only meaningful when the URI is actually retrieved and you know the mime type of the representation

Forward to RFC 3986: Uniform Resource Identifier (URI): Generic Syntax from January 2005 and fragment identifiers are defined as part of the URI itself:

The fragment identifier component of a URI allows indirect identification of a secondary resource by reference to a primary resource and additional identifying information. The identified secondary resource may be some portion or subset of the primary resource, some view on representations of the primary resource, or some other resource defined or described by those representations.

This breaks away from the tight coupling between a fragment identifier and a representation retrieved from the web and purposefully allows the use of hash URIs to define abstract or real-world things, addressing TAG Issue 37: Definition of abstract components with namespace names and frag ids and supporting the use of hash URIs in the semantic web.

Around the same time, we have the growth of AJAX, where a single page interface is used to access a wide set of content which is dynamically retrieved using Javascript. The AJAX experience could be frustrating for end users, because the back button no longer worked (to let them go back to previous states of their interface) and they couldn’t bookmark or share state. And so applications started to use hash URIs to track AJAX state (that article is from June 2005, if you’re following the timeline).

And so we get to hash-bangs. These were proposed by Google in October 2009 as a mechanism to distinguish between cases where hash URIs are being used as anchor identifiers, to describe views, or to identify real-world things, and those cases where they are being used to capture important AJAX state. What Google proposed is for pages where the content of the page is determined by a fragment identifier and some Javascript to also be accessible by combining the base URI with a query parameter (_escaped_fragment_={fragment}). To distinguish this use of hash URIs from the more mundane kinds, Google proposed starting the fragment identifier with #! (hash-bang). Hash-bang URIs are therefore associated with the practice of transcluding content into a wrapper page.
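As a rough sketch of that mapping (simplified: Google’s proposal only escapes a specific set of characters, whereas this example percent-encodes the whole fragment), a hash-bang URI can be rewritten to its crawlable equivalent like this:

    // Illustrative mapping from a hash-bang URI to its _escaped_fragment_ equivalent.
    function toEscapedFragmentUri(hashBangUri: string): string {
      const url = new URL(hashBangUri);
      if (!url.hash.startsWith("#!")) return hashBangUri;   // not a hash-bang URI
      const fragment = url.hash.slice(2);                    // text after the "#!"
      const separator = url.search ? "&" : "?";
      return url.origin + url.pathname + url.search + separator +
             "_escaped_fragment_=" + encodeURIComponent(fragment);
    }

    // toEscapedFragmentUri("http://lifehacker.com/#!5770791/top-10-tips")
    //   => "http://lifehacker.com/?_escaped_fragment_=5770791%2Ftop-10-tips"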

To summarise, hash URIs are now being used in three distinct ways:

  1. to identify parts of a retrieved document
  2. to identify an abstract or real-world thing (that the document says something about)
  3. to capture the state of client-side web applications

Hash-bang URIs are a particular form of the third of these. By using them, the website indicates that the page uses client-side transclusion to give the true content of the page. If it follows Google’s proposal, the website also commits to making that content available through an equivalent base URI with an _escaped_fragment_ parameter.

Hash-bang URIs in practice

Let’s have a look at how hash-bang URIs are used in a couple of sites.

Lifehacker

First, we’ll look at lifehacker, which is one of Gawker’s sites whose switch to hash-bangs triggered the recent spate of comments. What happens if I link to the article http://lifehacker.com/#!5770791/top-10-tips-and-tricks-for-making-your-work-life-better ?

The exact response to this request seems to depend on some cookies (it didn’t work the first time I accessed it in Firefox, having pasted the link from another browser). If it works as expected, in a browser that supports Javascript, the browser gets the page at the base URI http://lifehacker.com/ , which includes (amongst a lot of other things) a script that POSTs to http://lifehacker.com/index.php?_actn_=ajax_post a request with the data:

op=ajax_post
refId=5770791
formToken=d26bd943151005152e6e0991764e6c09

The response to this POST is a 53kB JSON document that contains a bit of metadata about the post and then its escaped HTML content. This gets inserted into the page by the script, to display the post. As this isn’t a GETtable resource, I’ve attached this file to this post so you can see what it looks like.

(Honestly, I could hardly bring myself to describe this: a POST to get some data? a .php URL? a query parameter set to ajax_post? massive amounts of escaped HTML in a JSON response? Geesh. Anyway, focus… hash-bang URIs…)

A browser that doesn’t support Javascript simply gets the base URI and is none the wiser about the actual content that was linked to.

What about the _escaped_fragment_ equivalent URI, http://lifehacker.com/?_escaped_fragment_=5770791/top-10-tips-and-tricks-for-making-your-work-life-better ? If you request this, you get back a 200 OK response which is an HTML page with the content embedded in it. It looks just the same as the original page with the embedded content.

What if you make up some rubbish URI, which in normal circumstances you would expect to give a 404 Not Found response? Naturally, a request to the base URI of http://lifehacker.com/ is always going to give a 200 OK response, although if you try http://lifehacker.com/#!1234/made-up-page you get page furniture with no content in the page. A request to http://lifehacker.com/?_escaped_fragment_=1234/made-up-page results in a 301 Moved Permanently to the hash-bang URI http://lifehacker.com/#!1234 rather than the 404 Not Found that we’d want.

Twitter

Now let’s look at Twitter. What happens if I link to the tweet http://twitter.com/#!/JeniT/status/35634274132561921 ? Although it’s not indicated in the Vary header, Twitter determines what to do about any requests to this hashless URI based on whether I’m logged in or not (based on a cookie).

If I am logged on, I get the new home page. This home page GETs (through various iframes and Javascript obfuscation) several small JSON files through Twitter’s API:

This JSON gets converted into HTML and embedded within the page using Javascript. All the links within the page are to hash-bang URIs and there is no way of identifying the hashless URI (unless you know the very simple pattern: remove the #! to get a static page).

If I’m not logged on but am using a browser that understands Javascript, the browser GETs http://twitter.com/; the script in the returned page picks out the fragment identifier and redirects (using Javascript) to http://twitter.com/JeniT/status/35634274132561921 .

If, on the other hand, I’m using curl or a browser without Javascript activated, I just get the home page and have no idea that the original hash-bang URI was supposed to give me anything different.

The response to the hashless URI http://twitter.com/JeniT/status/35634274132561921 also varies based on whether I’m logged in or not. If I am, the response is a 302 Found to the hash-bang URI http://twitter.com/#!/JeniT/status/35634274132561921 . If I’m not, for example using curl, Twitter just returns a normal HTML page that contains information about the tweet that I’ve just requested.

Finally, if I request the _escaped_fragment_ version of the hash-bang URI http://twitter.com/?_escaped_fragment_=/JeniT/status/35634274132561921 the result is a 301 Moved Permanently redirection to the hashless URI http://twitter.com/JeniT/status/35634274132561921 which can be retrieved as above.

Requesting a status that doesn’t exist such as http://twitter.com/#!/JeniT/status/1 in the browser results in a page that at least tells you the content doesn’t exist. Requesting the equivalent _escaped_fragment_ URI redirects to the hashless URI http://twitter.com/JeniT/status/1 . Requesting this results in a 404 Not Found response, as you would expect.

Advantages of Hash URIs

Why are these sites using hash-bang URIs? Well, hash URIs in general have four features which make them useful to client-side applications: they provide addresses for application states; they give caching (and therefore performance) boosts; they enable web applications to draw data from separate servers; and they may have SEO benefits.

Addressing

Interacting with the web is all about moving from one state to another, through clicking on links, submitting forms, and otherwise taking action on a page.

Backend databases on web servers, cookies, and other forms of local storage provide methods of capturing application state, but on the web we’ve found that having addresses for states is essential for a whole bunch of things that we find useful:

  • being able to use the back button to return to previous states
  • being able to bookmark states that we want to return to in the future
  • being able to share states with other people by linking to them

On the web, the only addressing method that meets these goals is the URI. Addresses that involve more than a URI, such as “search http://example.com/ with the keyword X and click on the third link” or “access http://example.org/ with cookie X set to Y” or “access http://example.net with the HTTP header X set to Y” simply don’t work. You can’t bookmark them or link to them or put them on the side of a bus.

Application state is complex and multi-faceted. As a web developer, you have to work out which parts of the application state need to be addressable through URIs, which can be stored on the client and which on a server. States can be classified into four rough categories, according to whether they are associated with:

  1. having particular content in the page, such as having a particular thread open in a webmail application
  2. viewing a particular part of the content, such as a particular message within a thread that is being shown in the page
  3. having a particular view of the content, such as which folders in a navigational folder list are collapsed or expanded
  4. a user-interface feature, such as whether a drop-down menu is open or closed

States that have different content almost certainly need to have different URIs so that it’s possible to link to that content (the web being nothing without links). At the other extreme, it’s very unlikely that the state of a drop-down menu would need to be captured at all. In between is a large grey area, where a web developer might decide not to capture state at all, to capture it in the client, in the server, or to make it addressable by giving it a URI.

If a web developer chooses to make a state addressable through a URI, they again have choices to make about which part of the URI to use: should different states have different domains? different paths? different query parameters? different fragment identifiers? Hash URIs make states addressable that developers might otherwise leave unaddressable.

To give some examples, on legislation.gov.uk we have decided to:

The last of these states would probably have gone un-addressed if we couldn’t use a hash URI for it. The only changes that it makes to the normal page are currently to the links to other legislation content, so that you can go (back) to a highlighted table of contents (though we hope to expand it to provide in-section highlighting). Given that we rely heavily on caching to provide the performance that we want and that there’s an infinite variety of free-text search terms, it’s simply not worth the performance cost of having a separate base URI for those views.

Caching and Parallelisation

Fragment identifiers are currently the only part of a URI that can be changed without causing a browser to refresh the page (though see the note below). Moving to a different base URI — changing its domain, path or query — means making a new request on the server. Having a new request for a small change in state makes for greater load on the server and a worse user experience due both to the latency inherent in making new requests and the large amount of repeated material that has to be sent across the wire.

Note: HTML5 introduces the pushState() and replaceState() methods in its history API that enable a script to add new URIs to the browser’s history without the browser actually navigating to that page. This is new functionality, at the time of writing only supported in Chrome, Safari and Firefox (and not completely in any of them) and unlikely to be included in IE9. When this functionality is more widely adopted, it will be possible to change state to a new base URI without causing a page load.
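A minimal sketch of how a page might use this API, falling back to a hash URI where it is unavailable (the URL layout and the shape of the state object are assumptions for illustration):

    // Record a state change, preferring the history API over a hash-bang fragment.
    function recordState(path: string, state: object): void {
      if (typeof history.pushState === "function") {
        history.pushState(state, "", path);   // new history entry, no page reload
      } else {
        window.location.hash = "#!" + path;   // fallback: encode state in the fragment
      }
    }

    // e.g. recordState("/JeniT/status/35634274132561921", { view: "status" });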

When a change of state involves simply viewing a different part of existing content, or viewing it in a different way, a hash URI is often a reasonable solution. It supports addressability without requiring an extra request.

Things become fuzzier when the same base URI is used to support different content, where transclusion is used. In these cases, the page that you get when you request the base URI itself gets content from the server as one or more separate AJAX requests based on the fragment identifier. Whether this ends up giving better performance depends on a variety of factors, such as:

  • How large are the static portions of the page (served directly) compared to the dynamic parts (served using AJAX)? If the majority of the content is static as a user moves through the site, you’re going to benefit from only loading the dynamic parts as state changes.
  • Can different portions of the page be requested in parallel? These days, making many small requests may lead to better performance than one large one.
  • Can the different portions of the page be cached locally or in a CDN? You can make best use of caches if the rapidly changing parts of a page are requested separately from the slowly changing parts.

Distributed Applications

Hash URIs can also be very useful in distributed web applications, where the code that is used to provide an interface pulls in data from a separate, unconnected source. Simple examples are mashups that use data provided by different sources, requested using AJAX, and combine that data to create a new visualisation.

But more advanced applications are beginning to emerge, particularly as a reaction to silo sites such as Google and Facebook, which lock us in to their applications by controlling our data. From the unhosted manifesto:

To be unhosted, a website’s code will need to be very ajaxy first, so that all the servers do is store and serve json data. No server-side processing. This is because we need to switch from transport-layer encryption to client-side payload encryption (we no longer necessarily trust the server we’re talking to). From within the app’s source code, that should run entirely in JavaScript and HTML5, json-objects can be stored, retrieved, sent, and received. The user will have the same experience (we even managed to avoid needing a plugin), but the website is unhosted in the sense that the servers you talk to only see encrypted data and don’t even know which application you are running.

The aim of unhosted is to separate application code from user data. This divides servers (at least functionally) into those that store and make available user data, and those that host applications and any supporting code, images and so on. The important feature of these sites is that user data never passes through the web application’s server. This frees users to move to different applications without losing their data.

This doesn’t necessarily stop the application server from doing any processing, including URI-based processing; it is only that the processing cannot be based on user data — the content of the site. Since this content is going to be accessed through AJAX anyway, there’s little motivation for unhosted applications to use anything other than local storage and hash URIs to encode state.
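Purely as an illustration of that pattern (the storage keys and the #!doc/ fragment layout are invented, and real unhosted applications encrypt payloads before they ever leave the client), user data can live entirely in local storage while the fragment identifier carries the addressable state:

    // Store an (already encrypted) document locally; the application server never sees it.
    function saveDocument(id: string, encryptedPayload: string): void {
      localStorage.setItem("doc:" + id, encryptedPayload);
    }

    // Make the current document addressable via the fragment and load it from local storage.
    function openDocument(id: string): string | null {
      window.location.hash = "#!doc/" + id;
      return localStorage.getItem("doc:" + id);
    }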

SEO

A final reason for using hash URIs that I’ve seen cited is that it increases the page rank for the base URI, because as far as a search engine is concerned, more links will point to the same base URI (even if in fact they are pointing to a different hash URI). Of course this doesn’t apply to hash-bang URIs, since the point of them is precisely to enable search engines to distinguish between (and access content from) URIs whose base URI is the same.

Disadvantages of Hash URIs

So hash-bangs can give a performance improvement (and hence a usability improvement), and enable us to build new kinds of web applications. So what are the arguments against using them?

Restricted Access

The main disadvantages of using hash URIs generally to support AJAX state arise due to them having to be interpreted by Javascript. This immediately causes problems for:

  • users who have chosen to turn off Javascript because:
    • they have bandwidth limitations
    • they have security concerns
    • they want a calmer browser experience
  • clients that don’t support Javascript at all such as:
    • search engines
    • screen scrapers
  • clients that have buggy Javascript implementations that you might not have accounted for such as:
    • older browsers
    • some mobile clients

The most recent statistic I could find, about access to the Yahoo home page, indicates that up to 2% of accesses are from users without Javascript (they excluded search engines). According to a recent survey, about the same percentage of screen reader users have Javascript turned off.

This is a low percentage, but if you have large numbers of visitors it adds up. The site that I care most about, legislation.gov.uk, has over 60,000 human visitors a day, which means that about 1,200 of them will be visiting without Javascript. If our content were completely inaccessible to them we’d be inconveniencing a large number of users.

Brittleness

Depending on hash-bang URIs to serve content is also brittle, as Gawker found. If the Javascript that interprets the fragment identifier is temporarily inaccessible or unable to run in a particular browser, any portions of a page that rely on Javascript also become inaccessible.

Replacing HTTP

There are other, less obvious, impacts which occur when you use a hash-bang URI.

The URI held in the HTTP Referer header “MUST NOT include a fragment”. As Mike Davies noted, this prevents such URIs from showing up in server logs, and stops people from working out which of your pages are linking to theirs. (Of course, this might be a good thing in some circumstances; there might be aspects of the state of a page that you’d rather a referenced server not know about.)

You should also consider the impact on the future proofing of your site. When a server knows the entirety of a URI, it can use HTTP mechanisms to indicate when pages have moved, gone, or never existed. With hash URIs, if you change the URIs you use on your site, the Javascript that interprets the fragment identifier needs to be able to recognise and support any redirections, missing, or never existing pages. The HTTP status code for the wrapper page will always be 200 OK, but it will be meaningless.

Even if your site structure doesn’t change, if you use hash-bang URIs as your primary set of URIs, you’re likely to find it harder to make a change back to using hashless URIs in the future. Again, you will be reliant in perpetuity on Javascript routing to decipher the hash-bang URI and redirect it to a hashless URI.

Lack of Differentiation

A final factor is that fragment identifiers can become overcrowded with state information. In a purely hash-URI-based site, what if you wanted to jump to a particular place within particular content, shown with a particular view? The hash URI has to encode all three of these pieces of information. Once you start using hash-bang URIs, there is no way to indicate within the URI (for search engines, for example) that a particular piece of the URI can be ignored when checking for equivalence. With normal hash URIs, there is an assumption that the fragment identifier can basically be ignored; with hash-bang URIs that is no longer true.
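To make the overcrowding concrete, here is a small, purely illustrative encoding in which one fragment identifier has to carry the content, the part, and the view at once (the key names and the comma-separated layout are invented for the example):

    interface PageState { content: string; part?: string; view?: string; }

    // Pack every facet of state into a single hash-bang fragment.
    function encodeState(state: PageState): string {
      const pairs: string[] = [];
      for (const [key, value] of Object.entries(state)) {
        if (value !== undefined) pairs.push(key + "=" + encodeURIComponent(value));
      }
      return "#!" + pairs.join(",");
    }

    // Unpack it again; a consumer has no standard way to know which facets matter.
    function decodeState(hash: string): Partial<PageState> {
      const state: Record<string, string> = {};
      for (const pair of hash.replace(/^#!/, "").split(",")) {
        const [key, value] = pair.split("=");
        if (key && value !== undefined) state[key] = decodeURIComponent(value);
      }
      return state as Partial<PageState>;
    }

    // encodeState({ content: "5770791", part: "comments", view: "expanded" })
    //   => "#!content=5770791,part=comments,view=expanded"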

Good Practice

Having looked at the advantages and disadvantages, I would echo what seems to be the general sentiment around traditional server-based websites that use hash-bang URIs: pages that give different content should have different base URIs, not just different fragment identifiers. In particular, if you’re serving large amounts of document-oriented content through hash-bang URIs, consider swapping things around and having hashless URIs for the content that then transclude in the large headers, footers and side bars that form the static part of your site.

However, if you are running a server-based, data-driven web application and your primary goal is a smooth user experience, it’s understandable why you might want to offer hash URIs for your pages to the 98% of people who can benefit from it, even for transcluded content. In these cases I’d argue that you should practice progressive enhancement:

  1. support hashless URIs which do not simply redirect to a hash URI, and design your site around those
  2. use hash-bang URIs as suggested by Google rather than simple hash URIs
  3. provide an easy way to get the sharable, hashless URI for a particular page when it is accessed with a hash-bang URI
  4. use hashless URIs within links; these can be overridden with onclick listeners for those people with Javascript; using the hashless URI ensures that ‘Copy Link Location’ will give a sharable URI
  5. use the HTML5 history API where you can to add or replace the relevant hashless URI in the browser history as state changes
  6. ensure that only those visitors that both have Javascript enabled and do not have support for HTML5’s history API have access to the hash-bang URIs by using Javascript to, for example:
    • redirect to a hash-bang URI
    • rewrite URIs within pages to hash-bang URIs
    • attach onclick listeners to links
  7. support the _escaped_fragment_ query parameter, the result of which should be a redirection to the appropriate hashless URI

This is roughly what Twitter has done, except that it doesn’t make it easy to get the hashless URI from a page or from links within the page. Of course the mapping in Twitter’s case is the straightforward removal of the #! from the URI, but as a human it’s frustrating to have to do this by hand.

The above measures ensure that your site will remain as accessible as possible to all users and provide a clear migration path as the HTML5 history API gains acceptance. The slight disadvantage is that encouraging people to use hashless URIs for links means that you can no longer depend quite so much on caching, as the first page that people access in a session might be any page (whereas with a pure hash-bang scheme everyone goes to the same initial page).
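A minimal sketch of steps 4 to 6 in the list above, under some assumptions (the data-enhance selector and the loadContent() transclusion helper are invented for the example): links carry hashless URIs, and script upgrades them only where it can genuinely help.

    declare function loadContent(path: string): void;    // assumed AJAX transclusion helper

    function enhanceLinks(): void {
      const supportsHistory = typeof history.pushState === "function";
      document.querySelectorAll<HTMLAnchorElement>("a[data-enhance]").forEach(link => {
        link.addEventListener("click", event => {
          event.preventDefault();
          const hashlessPath = link.pathname;             // 'Copy Link Location' still shares this
          if (supportsHistory) {
            history.pushState(null, "", hashlessPath);    // keep the hashless URI in the location bar
          } else {
            window.location.hash = "#!" + hashlessPath;   // only here do hash-bang URIs appear
          }
          loadContent(hashlessPath);
        });
      });
    }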

Distributed, client-based websites can take the same measures — the application’s server can send back the same HTML page regardless of the URI used to access it; Javascript can pull information from a URI’s path as easily as it can from a fragment identifier. The biggest difficulty is supporting the static page through the _escaped_fragment_ convention without passing user data through the application server. I suspect we might find a third class of service arise: trusted third-party proxies using headless browsers to construct static versions of pages without storing either data or application logic. Time will tell.

The Deeper Questions

There are some deeper issues here regarding web architecture. In the traditional web, there is a one-to-one correspondence between the representation of a resource that you get in response to a request from a server, and the content that you see on the page (or a search engine retrieves). With a traditional hash URI for a fragment, the HTTP headers you retrieve for the page are applicable to the hash URI as well. In a web application that uses transclusion, this is not the case.

Note: It’s also impossible to get metadata about hash URIs used for real-world or abstract things using HTTP; in these cases, the metadata about the thing can only be retrieved through interpreting the data within the page (eg an RDF document). Whereas with the 303 See Other pattern for publishing linked data, it’s possible to use a 404 Not Found response to indicate a thing that does not exist, there is no equivalent with hash URIs. Perhaps this is what lies at the root of my feeling of unease about them.

With hash-bang URIs, there are in fact three (or more) URIs in play: the hash-bang URI (which identifies a wrapper page with particular content transcluded within it), a base URI (which identifies the wrapper HTML page) and one or more content URIs (against which AJAX requests are made to retrieve the relevant content). Requests to the base URI and the content URIs provide us with HTTP status codes and headers that describe those particular representations. The only way of discovering similar metadata about the hash-bang URI itself is through the _escaped_fragment_ query parameter convention, which maps the hash-bang URI into a hashless URI that can be requested.

Does this matter? Do hash-bang URIs “break the web”? Well, to me, “breaking the web” is about breaking the implicit socio-technical contract that we enter into when we publish websites. At the social level, sites break the web when they withdraw support for URIs that are widely referenced elsewhere, hide content behind register- or pay-walls, or discriminate against those who suffer from disabilities or low bandwidth. At the technical level, it’s when sites lie in HTTP. It’s when they serve up pages with the title “Not Found” with the HTTP status code 200 OK. It’s when they serve non-well-formed HTML as application/xhtml+xml.

These things matter because we base our own behaviour on the contract being kept. If we cannot trust major websites to continue to support the URIs that they have coined, how can we link to them? If we cannot trust websites to provide accurate metadata about the content that they serve, how can we write applications that cache or display or otherwise use that information? On their own, pages that use Javascript-based transclusion break both the social side (in that they limit access to those with Javascript) and the technical side (in that they cannot properly use HTTP) of the contract.

But contracts do get rewritten over time. The web is constantly evolving and we have to revise the contract as new behaviours and new technologies gain adoption. The _escaped_fragment_ convention gives a lifeline: a method of programmatically discovering how to access the version of a page without Javascript, and of discovering metadata about it through HTTP. It is not a pretty pattern (I would much prefer that the server returned a header containing a URI template that described how to create the hashless equivalent of a hash-bang URI, and to have some rules about the parsing of a hash-bang fragment identifier so that it could include other fragment identifiers) but it has the benefit of adoption.
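Just to illustrate the hypothetical header-based alternative mentioned in passing above (nothing here is standardized; the Fragment-Mapping header name and the {fragment} template syntax are both invented for this sketch), a client could expand such an advertised template like this:

    // Hypothetical: a server advertises how to derive the hashless equivalent of a
    // hash-bang URI, and the client expands the advertised template.
    function hashlessEquivalent(hashBangUri: string, responseHeaders: Headers): string | null {
      const template = responseHeaders.get("Fragment-Mapping");   // e.g. "/{fragment}"
      const url = new URL(hashBangUri);
      if (!template || !url.hash.startsWith("#!")) return null;
      return url.origin + template.replace("{fragment}", url.hash.slice(2));
    }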

In short, hash-bang URIs are an important pattern that will be around for several years because they offer many benefits compared to their alternatives, and because HTML5’s history API is still a little way off general support. Rather than banging the drum against hash-bang URIs, we need to try to make them work as well as they can by:

  • berating sites that use plain hash URIs for transcluded content
  • encouraging sites that use hash-bang URIs to follow some good practices such as those I outlined above
  • encouraging applications, such as browsers and search engines, to automatically map hash-bang URIs into the _escaped_fragment_ pattern when they do not have Javascript available

We also need to keep a close eye on emerging patterns in distributed web applications to ensure that these efforts are supported in the standards on which the web is built.

21-22 September 2011, Limerick, Ireland. Co-located with the 16th Annual LRC Conference and hosted by the University of Limerick.

The MultilingualWeb project is looking at best practices and standards related to all aspects of creating, localizing and deploying the Web multilingually. The project aims to raise the visibility of existing best practices and standards and identify gaps. The core vehicle for this is a series of four events planned for the coming two years.

After two highly successful workshops in Madrid and Pisa, this workshop will continue to investigate currently available best practices and standards aimed at helping content creators, localizers, tools developers, and others meet the challenges of the multilingual Web.

Participation is free. We welcome participation from both speakers and non-speaking attendees. For more information, see the Call for Participation.