November 4, 2013

Forking Standards and Document Licensing

I thought I would post here a pointer to the Adobe Standards Blog on "Forking Standards and Document Licensing" that Dave McAllister and I wrote in reaction to some of the controversy around the document license issue in W3C. Amazingly, this doesn't seem to be as much of an issue in IETF.

September 10, 2013

HTTP/2.0 worries

I tried to explain HTTP/2.0 in my previous post. This post notes some nagging worries about HTTP/2.0 going forward. Maybe these are nonsense, but ... tell me why I'm wrong ....

Faster is better, but faster for whom?

It should be no surprise that using software is more pleasant when it responds more quickly. But the effect is pronounced: responsiveness can be the difference between "usable" and "just frustrating". For the web, the critical time is between when the user clicks on a link and when the results are legible and useful. Studies (and there are others) show that improving page load time has a significant effect on the use of web sites. And a primary component of web speed is the network: not just the bandwidth but, for the web, the latency. Much of the world doesn't have high-speed Internet access, and there the web is often close to unusable.

The problem is -- faster for whom? In general, when optimizing something, one makes changes that speed up common cases, even if that makes uncommon cases more expensive. Unfortunately, different communities can disagree about what is "common", depending on their perspective.

Clearly, connection multiplexing helps sites that host all of their data at a single server more than it helps sites that open connections to multiple systems.

It should be a good thing that the protocol designers are basing optimizations on measurements of real web sites and real data. But the data being used risks bias: so far, little of it has been published, and few of the results have been reproduced. Decisions in the working group are being made on limited data, and often are not reproducible or auditable.

Flow control at multiple layers can interfere

This isn't the first time there's been an attempt to revise HTTP/1.1; the HTTP-NG effort also tried. One of the difficulties with HTTP-NG was that there was some interaction between TCP flow control and the framing of messages at the application layer, resulting in latency spikes. And those working with SPDY report that SPDY isn't effective without server "prioritization", which I understand to mean predicting which resources the client will need first and sending their content chunks with higher priority. While some servers have added such facilities for prioritization and prediction, those mechanisms are unreported and proprietary.

Forking  

While HTTP/2.0 started with SPDY, SPDY development continues independently of HTTP/2.0. While the intention is to roll good ideas from SPDY into HTTP/2.0, there still remains the risk that the projects will fork. Whether the possibility of forking is positive or negative is itself controversial, but I think the bar should be higher.

Encryption everywhere 

There is a long-running and still unresolved debate about the guidelines for using, mandating, or requiring the implementation of encryption, in both HTTP/1.1 and HTTP/2.0. It's clear that HTTP/2.0 significantly changes the cost of multiple encrypted connections to the same host, thus reducing the overhead of using encryption everywhere: normally, setting up an encrypted channel is relatively slow, requiring more network round trips to establish. With multiplexing, the setup cost only happens once, so encrypting everything is less of a problem.
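
To get a feel for the cost being amortized, here is a small, hand-wavy sketch (example.com is just a stand-in host, and the numbers will vary wildly with network latency): several separate encrypted connections pay the TLS handshake price each time, while a single multiplexed connection pays it once.

    import socket
    import ssl
    import time

    HOST = "example.com"   # stand-in host; any HTTPS server would do

    def timed_tls_setup(host):
        """Open and close one TLS connection, returning the setup time."""
        start = time.time()
        ctx = ssl.create_default_context()
        with socket.create_connection((host, 443)) as raw:
            with ctx.wrap_socket(raw, server_hostname=host):
                pass                     # the handshake happens inside wrap_socket
        return time.time() - start

    # Six resources over six encrypted connections pay the handshake six times;
    # a single multiplexed connection pays it once.
    print("6 separate connections:", sum(timed_tls_setup(HOST) for _ in range(6)))
    print("1 shared connection:   ", timed_tls_setup(HOST))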

But there are a few reasons why that might not actually be ideal. For example, there is a large market for devices which monitor, adjust, redirect or otherwise interact with unencrypted HTTP traffic; a company might scan and block some kinds of information on its corporate net. Encryption everywhere will have a serious impact on sites that use these interception devices, for better or worse. And adding encryption where the traffic is already protected is less than ideal, adding unnecessary overhead.

In any case, encryption everywhere might be more feasible with HTTP/2.0 than HTTP/1.1 because of the lower overhead, but it doesn't promise any significant advantage for privacy per se.

Need realistic measurement data

To ensure that HTTP/2.0 is good enough to completely replace HTTP/1.1, it's necessary to ensure that HTTP/2.0 is better in all cases. We do not have agreed-upon or reproducible ways of measuring performance and impact across a wide variety of realistic configurations of bandwidth and latency. Measurement is crucial, lest we introduce changes which make things worse in unanticipated situations, or wind up with protocol changes that only help the use cases important to those who attend the meetings regularly, and not the unrepresented.

Why HTTP/2.0? A Perspective

When setting up for the HTTP meeting in Hamburg, I was asked, reasonably enough, what the group is doing, why it is important, and my prognosis for its success. It was hard to explain, so I thought I'd try to write up my take on "why HTTP/2.0?" Corrections and additions welcome.

HTTP Started Simple

The HyperText Transfer Protocol, when first proposed, was a very simple network protocol, much simpler than FTP (File Transfer Protocol), and quite similar to Gopher. Basically, the protocol is layered on the Transmission Control Protocol (TCP), which sets up bi-directional, reliable streams of data. HTTP/0.9 expected one TCP connection per user click to get a new document. When the user clicks a link, the client takes the URL of the link (which contains the host, port, and path) and:
  1. using DNS, the client gets the IP address of the server named in the URL
  2. the client opens a TCP connection to that address, on the port named in the URL
  3. the client writes "GET" and the path of the URL onto the connection
  4. the server responds with the HTML for the page
  5. the client reads the HTML and displays it
  6. the connection is closed
Judged by latency and bandwidth, simple HTTP was adequate: the overhead of HTTP/0.9 was minimal, just the time to look up the DNS name and set up the TCP connection.
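
For the curious, the whole exchange fits in a few lines of code. This is a minimal sketch in Python (against a hypothetical server that still answers the old one-line form), following the numbered steps above:

    import socket

    def http09_get(host, port, path):
        sock = socket.create_connection((host, port))           # steps 1-2: DNS lookup, TCP connect
        sock.sendall(("GET " + path + "\r\n").encode("ascii"))   # step 3: "GET" plus the path
        chunks = []
        while True:                                              # step 4: the server sends back HTML
            data = sock.recv(4096)
            if not data:                                         # step 6: the server closes the connection
                break
            chunks.append(data)
        sock.close()
        return b"".join(chunks).decode("latin-1")                # step 5: the client displays the HTML

    # html = http09_get("example.com", 80, "/")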

Growing Complexity

HTTP got lots more complicated; changes were reflected in a series of specifications, initially with HTTP/1.0, and subsequently HTTP/1.1. Evolution has been lengthy, painstaking work; a second edition of the HTTP/1.1 specification (in six parts, only now nearing completion) has been under development for 8 years. 

Adding Headers

HTTP/1.0 added headers to the request and response (steps 3 and 4 above): fields and values that modify the meaning of requests and responses. Headers were added to support a wide variety of additional use cases, e.g., a "Content-Type" header to allow images and other kinds of content, a "Content-Encoding" header and others to allow optional compression, quite a number of headers to support caching and cache maintenance, and a "DNT" header to express user privacy preferences.

While each header has its uses and justification, and many are optional, headers add both size and complexity to every HTTP request. When HTTP headers get big, there is more chance of delay (e.g., the request no longer fits in a single packet), and the same header information gets repeated on every request.
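
For a sense of scale, here is a hypothetical HTTP/1.1 request for one small image; every line after the first is header overhead, and most of it (Host, User-Agent, Accept, Cookie) is repeated verbatim on each of the page's requests:

    GET /images/logo.png HTTP/1.1
    Host: www.example.com
    User-Agent: Mozilla/5.0 (X11; Linux x86_64) ExampleBrowser/1.0
    Accept: image/png,image/*;q=0.8,*/*;q=0.5
    Accept-Language: en-US,en;q=0.5
    Accept-Encoding: gzip, deflate
    Cookie: session=abc123; prefs=compact
    Referer: http://www.example.com/index.html
    Connection: keep-alive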

Many More Requests per Web Page

The use of HTTP changed as web expressiveness increased. Initially, NCSA Mosaic led by supporting embedded images in web pages, using a separate URL and HTTP request for each image. Over time, more page elements have been set up as separate cacheable resources, such as style sheets, JavaScript and fonts. Presently, the average popular web home page makes over 40 HTTP requests.

HTTP is stateless

Neither client nor server needs to allocate memory or remember anything from one request/response to the next. This is an important characteristic of the web that allows highly popular web sites to serve many independent clients simultaneously, because the server need not allocate and manage memory for each client. The cost is that headers must be sent repeatedly, to maintain the stateless nature of the protocol.

Congestion and Flow Control

Flow control in TCP, like traffic metering lights, throttles a sender's output to match the receiver's capacity to read. Using many simultaneous connections does not work well: the streams pass through the same routers and links, which must handle them all, but the TCP flow control algorithms do not, and cannot, take into account the traffic on the other connections. Also, setting up a new connection involves additional latency, and opening encrypted connections is even slower, since it requires more round trips of communication.

Starting HTTP/2.0

While these problems were well recognized quite a while ago, work on optimizing HTTP labeled "HTTP-NG" (next generation) foundered. But more recent work (and deployment) by Google on a protocol called SPDY shows that, at least in some circumstances, HTTP can be replaced with something that improves page load time. SPDY is already widely deployed, but there is an advantage in making it a standard, at least to get review by those using HTTP for other applications. The IETF working group finishing the HTTP/1.1 second edition ("HTTPbis") has been rechartered to develop an HTTP/2.0 that addresses these performance problems. The group decided to start with (a subset of) SPDY and make changes from there.

HTTP/2.0 builds on HTTP/1.1; for the most part, it does not reduce the complexity of HTTP, but rather adds new features, primarily for performance.

Header Compression

The obvious thing to do to reduce the size of something is to try to compress it, and HTTP headers compress well. But the goal is not just to speed transmission; it's also to reduce the time spent parsing headers. The header compression method is still undergoing significant changes.
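
As a rough illustration of why this works (this is not the HTTP/2.0 algorithm, which is in flux; SPDY compressed headers with a shared zlib stream per connection, roughly as sketched here with made-up header values):

    import zlib

    # Two nearly identical header blocks, compressed through one shared stream.
    req1 = (b"GET /index.html HTTP/1.1\r\n"
            b"Host: www.example.com\r\n"
            b"User-Agent: ExampleBrowser/1.0\r\n"
            b"Accept: text/html\r\n"
            b"Cookie: session=abc123\r\n\r\n")
    req2 = req1.replace(b"/index.html", b"/style.css")

    compressor = zlib.compressobj()
    first = compressor.compress(req1) + compressor.flush(zlib.Z_SYNC_FLUSH)
    second = compressor.compress(req2) + compressor.flush(zlib.Z_SYNC_FLUSH)

    print(len(req1), "->", len(first))    # the first block pays most of the cost
    print(len(req2), "->", len(second))   # the repeated headers nearly vanish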

Connection multiplexing

One way to ensure coordinated flow control and avoid causing network congestion is to "multiplex" a single connection. That is, rather than open 40 connections, open only one per destination. A site that serves all of its images and style sheets and JavaScript libraries from the same host can send the data for the page over the same connection. The main issue is how to interleave independent requests and responses, which can be produced and consumed in chunks.
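
To make the idea concrete, here is a toy sketch (the frame layout is invented for illustration and is not the HTTP/2.0 framing): each response is cut into chunks, each chunk is tagged with the stream it belongs to, and the chunks are interleaved onto one connection.

    import struct

    def frames(stream_id, payload, chunk_size=4):
        """Cut a payload into chunks, each prefixed with a stream id and length."""
        for i in range(0, len(payload), chunk_size):
            part = payload[i:i + chunk_size]
            yield struct.pack("!HH", stream_id, len(part)) + part

    def interleave(*streams):
        """Round-robin the frames of several streams onto one byte sequence."""
        generators = [frames(sid, data) for sid, data in streams]
        while generators:
            for gen in list(generators):
                try:
                    yield next(gen)
                except StopIteration:
                    generators.remove(gen)

    # Two "responses" (a page and its style sheet) sharing one connection.
    wire = b"".join(interleave((1, b"<html>...</html>"), (3, b"body { ... }")))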

Push vs. Pull

A "push" is when the server sends a response that hadn't been asked for. HTTP semantics are strictly request followed by response, and one of the reasons why HTTP was considered OK to let out through a firewall that filtered out incoming requests.  When the server can "push" some content to clients even when the client didn't explicitly request it, it is "server push".  Push in HTTP/2.0 uses a promise "A is what you would get if you asked for B", that is, a promise of the result of a potential pull. The HTTP/2.0 semantics are developed in such a way that these "push" requests look like they are responses to requests not made yet, so it is called a "push promise".  Making use of this capability requires redesigning the web site and server to make proper use of this capability.

With this background, I can now talk about some of the ways HTTP/2.0 can go wrong. Coming up!

September 6, 2013

HTTP meeting in Hamburg

I was going to do a trip report about the HTTPbis meeting August 5-7 at the Adobe Hamburg office, but wound up writing up a longer essay about HTTP/2.0 (which I will post soon, promise.) So, to post the photo:

It was great to have so many knowledgeable implementors working on live interoperability: 30 people from around the industry and around the world came, including participants from Adobe, Akamai, Canon, Google, Microsoft, Mozilla, Twitter, and many others representing browsers, servers, proxies and other intermediaries.
It's good that the standard's development is being driven by implementation and testing. While testing across the Internet is feasible, meeting face-to-face helped with establishing coordination on the standard.
I do have some concerns about things that might go wrong, which I'll also post soon.

July 21, 2013

Linking and the Law

Ashok Malhotra and I (with help from a few friends) wrote a short blog post, "Linking and the Law", as a follow-on to the W3C TAG note Publishing and Linking on the Web (which Ashok and I helped with, after the original work by Jeni Tennison and Dan Appelquist).

Now, we wanted to make this a joint publication, but ... where to host it? Here, Ashok's personal blog, Adobe's, the W3C?

Well, rather than including the post here (copying the material) and in lieu of real transclusion, I'm linking to Ashok's blog: see "Linking and the Law".

Following this: the problems identified in Governance and Web Architecture are visible here:
  1. Regulation doesn't match technology
  2. Regulations conflict because of technology mis-match
  3. Jurisdiction is local, the Internet is global
These problems reflect the difficulties ahead for Internet governance. The debates on managing and regulating the Internet are getting more heated. The most serious difficulty for Internet regulation is the risk that the regulation won't actually make sense given the technology (as we're seeing with Do Not Track).
The second most serious problem is that standards for what is or isn't OK to do will vary so widely across communities that user-created content cannot be reasonably vetted for general distribution.

April 2, 2013

Safe and Secure Internet

The Orlando IETF meeting was sponsored by Comcast/NBC Universal. IETF sponsors get to give a talk on Thursday afternoon of IETF week, and the talk was a panel, "A Safe, Secure, Scalable Internet".

What I thought was interesting was the scope of the speakers' definitions of "Safe" and "Secure", and the mismatch with the technologies and methods being considered. "Safety" included "letting my kids surf the web without coming across pornography or being subject to bullying", while the methods they were talking about were things like site blocking by IP address or routing.

This seems like a complete mismatch. If bullying happens because harassers post nasty pictures on Facebook labeled with the victim's name, the problem cannot be addressed by IP-address blocking. It's "looking in the wrong end of the telescope."

I'm not sure there's a single right answer, but we have to define the question correctly.

March 25, 2013

Standardizing JSON

Update 4/2/2013: in an email to the IETF JSON mailing list, Barry Leiba (Applications Area director in IETF) noted that discussions had started with ECMA and ECMA TC 39 to reach agreement on where JSON will be standardized, before continuing with the chartering of an IETF working group.

JSON (JavaScript Object Notation) is a text representation for data interchange, derived from the way the JavaScript scripting language represents data structures and arrays. Although derived from JavaScript, it is language-independent, with parsers available for many programming languages.

JSON is often used for serializing and transmitting structured data over a network connection. It is commonly used to transmit data between a server and a web application, serving as an alternative to XML.
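
A minimal sketch in Python (any language with a JSON parser would do): a record is serialized to the JSON text that actually travels over the network, and parsed back on the other side.

    import json

    record = {"name": "example", "count": 3, "tags": ["a", "b"]}
    text = json.dumps(record)           # the text form sent over the wire
    assert json.loads(text) == record   # any conforming parser recovers the same data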

JSON was originally specified by Doug Crockford in RFC 4627, an "Informational" RFC.  IETF specifications known as RFCs come in lots of flavors: an "Informational" RFC isn't a standard that has gone through careful review, while a "standards track" RFC is.

An increasing number of other IETF documents need to reference JSON, and the IETF rules generally require references to documents at the same or a higher level of stability. For this reason and a few others, the IETF is starting a JSON working group (mailing list) to update RFC 4627.

The JavaScript language itself is standardized by a different committee (TC39) in a different standards organization (ECMA). For various reasons, the standard is called "ECMAScript" rather than JavaScript. TC39 published ECMAScript 5.1 and is working on ECMAScript 6, planned to be done in roughly the same time frame as the IETF work.

The W3C is also developing standards that use JSON and need a stable specification.

Risk of divergence

Unfortunately, there is a possibility of (minor) divergence between the two specifications without coordination, either formally (organizational liaison) or informally, e.g., by making sure there are participants who work in both committees.

There is a formal liaison between IETF and W3C. There is currently no formal liaison between W3C and ECMA (though there is a mailing list, public-script-coord@w3.org). There is no formal liaison between TC39/ECMA and IETF.

Having multiple conflicting specifications for JSON would be bad. While some want to avoid the overhead of a formal liaison, there needs to be explicit assignment of responsibility. I'm in favor of a formal liaison as well as informal coordination. I think it makes sense for IETF to specify the "normative" definition of JSON, while ECMA TC-39's ECMAScript 6.0 and W3C specs should all point to the new IETF spec.

JSON vs. XML

JSON is often considered as an alternative to XML as a way of passing language-independent data structures as part of network protocols.

In the IETF, BCP 70 (also known as RFC 3470, "Guidelines for the Use of Extensible Markup Language (XML) within IETF Protocols") gives guidelines for the use of XML in network protocols. However, it was published in 2003. (I was a co-author, with Marshall Rose and Scott Hollenbeck.)

But of course these guidelines don't answer the question many have: when people want to pass data structures between applications in network protocols, should they use XML or JSON, and when? What is the rough consensus of the community? Is it a choice? What are the alternatives and considerations? (Fashion? Deployment? Expressiveness? Extensibility?)
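
To make the comparison concrete, here is the same trivial (made-up) record in both serializations; real protocols of course carry richer structure, namespaces, schemas, and so on.

    import json
    import xml.etree.ElementTree as ET

    person = {"name": "Alice", "email": "alice@example.org"}

    as_json = json.dumps(person)
    # {"name": "Alice", "email": "alice@example.org"}

    root = ET.Element("person")
    for key, value in person.items():
        ET.SubElement(root, key).text = value
    as_xml = ET.tostring(root, encoding="unicode")
    # <person><name>Alice</name><email>alice@example.org</email></person>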

This is a critical bit of web architecture that needs attention. The community needs guidelines for understanding the competing benefits and costs of XML vs. JSON.  If there's interest, I'd like to see an update to BCP 70 which covers JSON as well as XML.

December 30, 2012

Reinventing the W3C TAG

This is the fourth in a series of blog posts about my personal priorities for Web standards and the W3C TAG, as part of the ongoing TAG election.

The Mission of the W3C TAG has three aspects:

  1. to document and build consensus around principles of Web architecture and to interpret and clarify these principles when necessary;
  2. to resolve issues involving general Web architecture brought to the TAG; and
  3. to help coordinate cross-technology architecture developments inside and outside W3C.

Success has been elusive:

  1. After the publication of Architecture of the World Wide Web in 2004, attempts to update it, extend it, or even clarify it have foundered.
  2. Issues involving general Web architecture are rarely brought to the TAG, either by Working Group chairs, W3C staff, or the W3C Director, and those issues that have been raised have rarely been dealt with promptly or decisively.
  3. The TAG's efforts in coordinating cross-technology architectural developments within W3C (XHTML/HTML and RDFa/Microdata) have had mixed results. Coordinating cross-technology architecture developments outside W3C would require far more architectural liaison, primarily with IETF's Internet Architecture Board but also with ECMAScript TC39.

Building consensus around principles of Web architecture

I have long argued that the TAG practice of issuing Findings is not within the TAG charter, and does not build consensus. In the W3C, the issuing of a Recommendation is the stamp of consensus. There may be a few cases where the TAG is so far in advance of the community that achieving sufficient consensus for Recommendation is impossible, but those cases should be extremely rare.

  • Recommendation: Review TAG Findings and triage; either (a) update and bring the Finding to Recommendation, (b) obsolete and withdraw, or (c) hand off to a working group or task force.

To build consensus, the TAG's technical focus should match more closely the interest of the Web community.

  • Recommendation: Encourage and elect new TAG members with proven leadership skills as well as interest and experience in the architectural topics of most interest to W3C members.
  • Recommendation: The TAG should focus its efforts on the "Web of Applications", shedding work on the semantic web and pushing ISSUE-57 and related topics to a working group or task force.

Updating AWWW to cover Web applications, Web security and other architectural components of the modern Web is a massive task, and those most qualified to document the architecture are also likely to be inhibited by the overhead and legacy of the TAG.

  • Recommendation: Charter a task force or working group to update AWWW.

Resolving issues involving general Web architecture brought to the TAG

To resolve an issue requires addressing it quickly, decisively, and in a way that is accepted by the parties involved. The infamous ISSUE-57 has been unresolved for over five years. The community has, for the most part, moved on.

  • Recommendation: encourage Working Group chairs and staff to bring current architectural issues to the TAG.
  • Recommendation: drop issues which have not been resolved within a year of being raised.

Coordinate cross-technology architectural developments inside and outside W3C

Within W3C, one contentious set of issues involves differing perspectives on the role of standards.

  • Recommendation: The TAG should define the W3C's perspective on the Irreconcilable Differences I've identified as disagreements on the role of standards.

For coordination with standards outside of W3C:

  • Recommendation: The TAG should meet at least annually with the IETF IAB, review their documents, and ask the IAB to review relevant TAG documents. The TAG should periodically review the status of liaison with other standards groups, most notably ECMA TC39.

On the current TAG election

An influx of new enthusiastic voices to the TAG may well help bring the TAG to more productivity than it's had in the past years, so I am reluctant to discourage those who have newly volunteered to participate, even though their prior interaction with the TAG has been minimal or (in most cases) non-existent. I agree the TAG needs reform, but the platforms offered have not specifically addressed the roadblocks to the TAG accomplishing its Mission.

In these blog posts, I've offered some insights into my personal perspectives and priorities, and recommended concrete steps the TAG could take.

If you're participating in W3C:

  • Review carefully the current output and priorities of the TAG and give feedback.
  • When voting, consider the record of leadership and thinking, as well as expertise and platform.
  • Hold elected TAG members accountable for campaign promises made, and their commitment to participate fully in the TAG.

Being on the TAG is an honor and a responsibility I take seriously. Good luck to all.

December 29, 2012

W3C and IETF coordination

This is the third of a series of posts about my personal priorities for Web standards, and the relationship to the W3C TAG.

Internet Applications = Web Applications

For better or worse, the Web is becoming the universal Internet application platform. Traditionally, the Web was considered just one of many Internet applications. But the rise of Web applications and the enhancements of the Web platform to accommodate them (HyBi, RTCWeb, SysApps) have further blurred the line between Web and non-Web.

Correspondingly, the line between IETF and W3C, always somewhat fuzzy, has further blurred, and made difficult the assignment of responsibility for developing standards, interoperability testing, performance measurement and other aspects.

Unfortunately, while there is some cooperation in a few areas, coordination over application standards between IETF and W3C is poor, even for the standards that are central to the existing web: HTTP, URL/URI/IRI, MIME, encodings.

W3C TAG and IETF coordination

One of the primary aspects of the TAG mission is to coordinate with other standards organizations at an architectural level. In practice, the few efforts the TAG has made have been only narrowly successful.

An overall framework for how the Web is becoming a universal Internet application platform is missing from AWWW. The outline of architectural topics the TAG did generate was a bit of a mish-mash, and then was not followed up.

The current TAG document, Best Practices for Fragment Identifiers and Media Type Definitions, is narrow; its first public working draft came too late to affect the primary IETF document that should have referenced it, and it is unlikely to be read by those to whom it is directed.

There cannot be a separate "architecture of the Internet" and "architecture of the Web". The TAG should be coordinating more closely with the IETF Internet Architecture Board and applications area directorate.

Web Standards and Security

This is the second in a series of posts about my personal priorities for the W3C Technical Architecture Group.

Computer security is a complex topic, and it is easy to get lost in the detailed accounts of threats and counter-measures. It is hard to get to the general architectural principles. But fundamentally, computer security can be thought of as an arms race:  new threats are continually being invented, and counter-measures come along eventually to counter the threats. In the battle between threats and defense of Internet and Web systems, my fear is that the "bad guys" (those who threaten the value of the shared Internet and Web) are winning. My reasoning is simple:  as the Internet and the Web become more central to society, the value of attacks on Internet infrastructure and users increases, attracting organized crime and threats of cyber-warfare.

Further, most reasoning about computer security is "anti-architectural":  the exploits of security threats cut across the traditional means of architecting scalable systems—modularity, layering, information hiding. In the Web, many security threats depend on unanticipated information flows through the layer boundaries. (Consider the recently discovered "CRIME" exploit.) Traditional computer security analysis consists of analyzing the attack surface of a system to discover the security threats and provide for mitigation of those threats.

New Features Mean New Threats

Much of the standards community is focused on inventing and standardizing new features. Because security threats are often based on unanticipated consequences of minor details of the use of new features, security analysis cannot easily be completed early in the development process. As new features are added to the Web platform, more ways to attack the web are created. Although the focus of the computer security community is not on standards, we cannot continue to add new features to the Web platform without sufficient regard to security, or to treat security as an implementation issue.

Governance and Security

In many ways, every area of governance is also an area where violation of the governance objectives has increasing value to an attacker. Even without the addition of new features, deployment of existing features in new social and economic applications grows the attack surface. While traditional security analysis was primarily focused on access control, the growth of social networking and novel features increases the ways in which the Web can be misused.

The W3C TAG and Security

The original architecture of the Web did not account for security, and the W3C TAG has so far had insufficient expertise and energy to focus on security. While individual security issues may be best addressed in working groups or outside the W3C, the architecture of the Web also needs a security architecture, which gives a better model for trust, authentication, certificates, confidentiality, and other security properties.

Governance and Web Standards

I promised I would write more about my personal priorities for W3C and the W3C TAG in a series of posts. This is the first. Please note that, as usual, these are my personal opinions. Comments, discussion, disagreements welcome.

A large and growing percentage of the world depends on the Internet as a critical shared resource for commerce, communication, and community. The primary value of the Internet is that it is common: there is one Internet, one Web, and everyone on the planet can communicate with everyone else. But whenever there is a shared resource, opportunities for conflict arise—different individuals, groups, companies, nations, want different things and act in ways that threaten this primary value. There are endless tussles in cyberspace, including conflicts over economics, social policy, technology, and intellectual property. While some of the conflicts are related to "whose technology wins," many are related to social policy, e.g., whether Internet use can be anonymous, private, promote or allow or censor prohibited speech, protect or allow use of copyrighted material.

Shared resources in conflict, unregulated, are ultimately unsustainable. The choices for sustainability are between voluntary community action and enforced government action; if community action fails, governments may step in; but government action is often slow to move and adapt to changes.

As the recent kerfuffle over ITU vs. "multi-stakeholder" governance of the Internet shows, increased Internet regulation is looming. If the Internet community does not govern itself or provide modes of governance, varying national regulations will be imposed, which will threaten the economic and social value of a common Internet. Resolving conflict between the stakeholders will require direct attention and dedicated resources.

Governance and W3C

Standards and community organizations are a logical venue for addressing most Internet governance conflicts. This is primarily because "code is law":  the technical functioning of the Internet determines how governance can work, and separating governance from technology is usually impossible. Further, the community that gathers at IETF and W3C (whether members or not) is the most affected.

I think W3C needs increased effort and collaboration with ISOC and others to bring "governance" and "Web architecture for governance" to the forefront.

Governance and the W3C TAG

The recent TAG first public working draft, "Publishing and Linking on the Web" is an initial foray of the W3C TAG in this space. While some may argue that this work exceeds the charter of the TAG, I think it's valuable work that currently has no other venue, and should continue in the TAG.

December 13, 2012

I Invented the W3C TAG :)

As a few of you know, W3C TAG elections are upon us. While this is usually a pretty boring event, this year it's been livened by electioneering.  I don't have a long platform document prepared ("stand on my record"), but I'll write some things about where I think web standards need to go.... But first a bit of history:

I invented the W3C TAG. At least more than Al Gore invented the Internet. I was Xerox' AC representative when I started on the W3C Advisory Board, and it was in 2000 that Steve Zilles and I edited the initial TAG charter. I think a lot of the details (size, scope, term limits, election method) were fairly arbitrarily arrived at, based on the judgment of a group speculating about the long-term needs of the community. I prioritized a focus on architecture, not design; stability as well as progress; responsibility to the community; a role in dispute resolution. The TAG has no power: it's a leadership responsibility; there is no authority.

And the main concern then, as now, is finding qualified volunteers who can actually put in the work needed to get "leadership" done.

In a few future blog posts I'll outline what I think some of the problems for the Web, W3C, and the TAG might be. I'll write more on

1. Governance. Architectural impact of legislative, regulatory requirements.
2. Security. In the arms race, the bad guys are winning.
3. Coordination with other standards activities (mainly IETF Applications area), fuzziness of the boundary of the "web".

Questions? Please ask (here, twitter, www-tag@w3.org)

Update 12/16/2012 ... I didn't invent the TAG alone 

Doing a little more research:

It's easy to find earlier writings and talks about Web Architecture. At the May 2000 W3C advisory committee meeting, I was part of the discussion of whether Architecture needed a special kind of group or could be handled by an ordinary working group. I think the main concern was long-term maintenance.
By the 6/9/2000 Advisory Board meeting, the notion of an "Architecture Board" was part of the discussion. An initial charter was sent out by Jean-Francois Abramatic to the Advisory Board 8/11/2000 6:02 AM PST.

Steve Zilles sent a second proposed charter (forwarded to the AB 8/14/2000 08:35PST) with cover note:
The attached draft charter is modelled on the structure of the Hypertext CG charter. This was done for completeness. Much of the content is based on notes that I took during the discussion with Larry Masinter refered to above, but the words are all mine. The Background section is my creation.  The mission is based on our joint notes. The Scope is mostly my creation, but, I belive consistent with    our discussion. The Participants section has most of what we discussed.  I tried to capture the intent of what Jean-Francios wrote, but I did not borrow any of the words because I was using a different outline. My apologies if I failed in that respect.
While I contributed to the definition of the TAG and many of the ideas in the TAG charter, others get "invention" credit as well.

An Architecture Working Group... 

Reading the discussions about the TAG made me wonder if it's time to reconsider an "architecture working group" whose sole responsibility is to develop AWWW2. There's a lot of enthusiasm for an AWWW2; can we capture the energy without politicizing it? Given the poor history of the TAG in maintaining AWWW, perhaps that work should be moved out to a more focused group (with TAG participation encouraged).


May 20, 2012

Are homepages on the way out?

Is the idea of a home page on the way out?  I've had a "home page" since at least 1996. But I'm wondering if it is declining. What with things like Facebook and LinkedIn and so on, there are too many places to look for "identity". But it's really just a social convention that people and organizations might have a "home page" which represents them, and which you might sign an email with.
 
When I sign my email I use http://larry.masinter.net alone. Why include a larger signature block when I can sum it all up in one URL? But I'm doing it less and less. People can find me; just search.
 
But I wonder -- is the notion of a "home page" underlying the semantic web's use of a URL to stand for some thing, person or group in the real world?

For example, you might say that, for a URL U and a thing X, there is a relationship between:

*  how well the page at U serves as a "home page" for X
* how appropriate U is as a URI for the concept X in RDF
 
 (I talked about this on Google+, but blog is better)

December 14, 2011

HTTP Status Cat: 418 - I'm a teapot

418 - I'm a teapot, a photo by GirlieMac on Flickr.
In the W3C TAG, I'm working on bringing together a set of threads around the evolution of the web, the use of registries and extension points, and MIME in web standards.

A delightful collection of HTTP Status Cats includes the above cat-in-teapot, illustrating status code 418 from HTCPCP, "The HyperText Coffee Pot Control Protocol" [RFC 2324].

Each April 1st, the IETF also publishes humorous specifications (as "Informational" documents), perhaps to make the point that "not all RFCs are standards", but also to provide humorous fodder for technical debates.
The target of HTCPCP was the wave of proposals we were seeing in the HTTP working group (which I had chaired) for extensions to HTTP to support what seemed to me to be cockeyed, inappropriate applications.

I set out in RFC 2324 to misuse as many of the HTTP extensibility points as I could.

But one of the issues facing registries of codes, values, identifiers is what to do with submissions that are not "serious". Should 418 be in the IANA registry of HTTP status codes? Should the many (not actually valid) URI schemes in it (coffee: in 12 languages) be listed as registered URI schemes?

August 15, 2011

Expert System Scalability and the Semantic Web

In the late 80s, we saw the fall of AI and Expert Systems as a "hot" technology -- the "AI winter". The methodology, in brief: build a representation system (a way of talking about facts about the world) and an inference engine (a way of making logical inferences from a set of facts). Get experts to tell you facts about the world. Grind the inference engine, and get new facts. Voila!

I always felt that the problem with the methodology was the failure of model theory to scale: the more people and time involved in developing the "facts" about the world, the more likely it is that the terminology in the representation system would get fuzzy -- that different people involved in entering and maintaining the "knowledge base" would disagree about what the terms in the representation system stood for.

The "semantic web" chose to use URIs as the terminology for grounding abstract assertions and creating a model where those assertions were presumed to be about the real world.

This exacerbates the scalability problem. URIs are intrinsically ambiguous and were not designed to be precise denotation terms. The semantic web terminology of "definition" and "assignment" of URIs reflects a point of view I fundamentally disagree with.  URIs don't "denote". People may use them to denote, but it is a communication act; the fact that I say by "http://larry.masinter.net" I mean *me* does not imbue that URI with any intrinsic semantics.

I've been trying to get at these issues around ambiguity with the "duri" and "tdb" URI schemes, for example, but I think the fundamental disagreement still simmers.

August 7, 2011

Internet Privacy: TELLING a friend may mean telling THE ENEMY

In the Quebec maritime museum, a photo by Lar4ry on Flickr.

After the recent IETF in Quebec, I found this poster in a maritime museum.

The problem with most of the Internet privacy initiatives is that they don't seem to start with a threat analysis: who are your friends (those with web sites you want to visit) and who are your enemies (those who would use your personal information for purposes you don't want), and how do you tell things to friends without those things getting into the hands of your enemies. It's counter-intuitive to have to treat your friends as if they're a channel to your enemies, but ... information leaks.

Via Flickr:
TELLING a friend may mean telling THE ENEMY

July 16, 2011

Leadership: getting others to follow

Often people talk about something "leading" as whether it is newer, faster, better, more exciting, having more new features, etc.

But fundamentally, leadership only occurs if others follow: a leading product is imitated by its competitors and has a following of customers, and a leading standard is widely implemented.

Where does leadership come from? Can it come from a committee? Not really .... in the end, invention and leadership come from individuals. 

In the area of standards and technology, leadership and innovation come from individuals and groups .... they make proposals, get feedback, adoption, agreement, and then get others to follow. A working group, committee, or mailing list can only review, suggest improvements, and push back on alternatives.

It is foolish to desire that "leadership" in a technology area will only come from one segment, one group, one committee... and impossible to mandate, even if it were desirable.

Industry prospers when those who innovate find ways to get others to follow. The web needs innovation from outside the standards organizations; those innovations then can be brought in, reviewed, updated, modified to meet additional requirements discovered or added during the review and "standardization" process.

June 26, 2011

Irreconcilable differences


I've been meaning to post some version of this forever, and it's been getting in the way of me blogging more. So ... here goes... incomplete and warty as this post is.

I've come to think that many of these differences might stem from an "implementation" vs. "specification" view, but I'll have to say more about that later....

The ongoing battle for future control over HTML is dominated not only by the usual forces ("whose technology wins?") but also by some very polarized views of what standards are, what they should be, how standards should work, and so forth. The debate over these principles has really slowed down the development of web standards. Many of these issues were also presented at the November 2010 W3C Technical Plenary in the "HTML Next" session.

I've written down some of these polarized viewpoints, as an extreme position and a counterposition.

Matching Reality:

  • Standards should be written to "match reality": the standard should follow what (some, all, most, the important, the open source) systems have implemented (or are willing to implement in the very near future.)
  • Standards should try to "lead reality": The standard should try to move things in directions that improve modularity, reliability, and other values.

Of course, having standards that do not "match reality" in the long run is not a good situation, but the question is whether backward compatibility with (admittedly buggy) implementations should dominate the discussion of "where standards should go". If new standards always match the "reality" of existing content and systems, then you could never add any features at all. But if you're willing to add new features, why not also try to 'fix' things that are misimplemented or done badly? There does need to be a transition plan (how to make changes in a way that doesn't break existing content or viewers), but that's often feasible.

Precision:

  • Standards should precisely specify behavior, and give sufficient detail for how to implement something "compatible" with what is currently deployed, sufficient that no user will complain that some implementation doesn't work "the same". Such behavior MUST be mandated by the standard.
  • Standards should minimize the compliance requirements to allow widest possible range of implementations; "interoperability" doesn't necessarily mean that even badly written web pages must be supported. Conformance ("MUST") should be used very sparingly.

Personally, I'm more on the second side: the more precisely behavior is specified, the narrower the applicability of the standard. There's a tradeoff, but it seems better to err on the side of under- rather than over-specifying, if a standard is going to have long-term value. If a subset of implementations want a more precise guideline, that could go in a separate implementation guide or profile.

Leading:

  • Standards should lead the community and add exciting new features. New features should ideally appear first in the standard.
  • Standards should follow innovative practice only after wide experience with the technology. Sample implementations should be widely reviewed and tested; only after wide experience should a technology be added to the standard.

In general, standards should follow innovation. Refinements during the standardization phase might be seen as "leading", in order to satisfy the broader requirements brought to bear as the standard gets reviewed. There's a compromise, but looking for innovation from a committee.... well, we all know about "design by committee".

Extensibility:

  • Non-standard extensions should be avoided. Ideally, we should eliminate any non-standard extensions; everyone's experience should be the same.
  • Non-standard extensions are valuable. Innovations have (and will continue to) come from competing (non-standard) extensions, including plugins. Not all plugins are universally deployed; sites can choose to use non-standard extensions if they want.

In the past, plugins and other non-standard extensions have fueled new features; why should this trend stop? There are trade-offs, but moves to eliminate non-standard extensions or make them less viable are counter-productive.

Modularity:

  • Modularity is disruptive. Independent evolution of components leads to divergence and confusion. Independent committees go their own way. Subsets just mean unwanted choices and chaos.
  • Modularity is valuable. Specifying technology into smaller separate parts is beneficial: the ability to choose subsets extends the range of applications; modules can evolve independently.

Modularity is important, but it has to be done "right". Architecture recapitulates organizational structure; separate committees with independent specs require a great deal of good-faith effort to coordinate, and there's not a lot of "good faith" going around.

Timely:

  • Standards take too long, move faster. Implementing and shipping the latest proposal is a good way to validate proposed standards and get technology in the hands of users. Standards that take years aren't interesting.
  • Encouraging users to deploy experimental extensions before they are completed will cause fragmentation, because not all experiments succeed.

The community can see innovation pretty quickly, but good standards take time. I'd rather see experimental features labeled as "proposals" than misleadingly passed around as "the standard".

Web Content Authors Ignore Standards:

  • Web authors don't care about standards. Most individual authors, designers, developers and content providers ignore standards anyway, so efforts that assume authors will change aren't helpful.
  • Influencing authors is possible. Authors can and will adopt standards if popular browsers tie new features to standard-conforming content.

I'm not convinced that influencing content authors is impossible. Doing so requires some agreement from "leading implementors" to give authors sufficient feedback to make them care, but that can be done. It's happened with other standards when it was important.

Versionless Standards and Always On Committee:

  • Standards committees should be chartered to work forever, because the technology needs to evolve continuously. A stable "standard" is just a meaningless snapshot. Standard committees should be "always on", to allow for rapid evolution. The notion of "version numbers" for standards is obsolete in a world where there are continual improvements.
  • Standards should be stable. Continual innovation is good for technology suppliers, but bad for standards; evolution should be handled by allowing individual technology providers to innovate, and then to bring these innovations into standards in specific versions.

We shouldn't guarantee "lifetime employment for standards writers". A stable document should have a long lifetime, not subject to constant revision. If we're not ready to settle on a feature, it should likely move into a separate document and be designed as a (perhaps proprietary) extension. An "always on" committee is more likely to concentrate power in the few who can afford to commit resources, independently of how deeply they are affected by changes.

Open Source:

  • Standards should always have an open source implementation. Allowing any company or software developer to provide their own private extensions is harmful; a content standard should be managed by the group of major (or major open source) implementors, so that any "standard" extension is available to all.
  • Open source is useful but unnecessary. Proprietary extensions and capabilities (originally from a single source or a consortium) have benefited the web in the past and will continue to be sources of innovation. While "open source" may be beneficial, not everything will or can be open source.

Working on open source implementations can go hand in hand with working on standards. However, a standard is very different from open source software. In the end, users care about compatibility across a wide variety of implementations.

The "Web" is defined by "What Browsers Do":

  • The web is first and foremost “what browsers do”, and secondly a source of "web applications" technology (browser technology used for installable applications).
  • Other needs can dominate browser needs. Web technologies extend to the widest range of Internet applications, including email, instant messaging, news distribution, syndication and aggregation, help systems, and electronic publishing; the requirements of these applications should have equal weight, even when those requirements are meaningless for what “browsers” are used for.

Royalty Free:

  • Avoid all patented technology. Every component of a browser MUST be implementable without any restriction based on patents or copyright (although creation tools, search engines, analysis, translation gateways, traffic analysis may not be)
  • Patented technology has a place. In some cases, patented technology cannot be avoided, or is so widespread that “royalty free” is just one more requirement among many tradeoffs.

Forking:

  • Forking a spec allows innovation. Having multiple specifications which offer different definitions of the same thing (such as HTML) allows leading features to be widely known and implemented, and allows groups to work around organizational bottlenecks.
  • Forking a spec is harmful. Multiple specifications claiming to define the same thing are a power trip and cause confusion.

Accessibility:

  • Accessibility is just one of many requirements. It is an important requirement for the web platform, but only one of many sets of requirements, to be traded off against the requirements of other user communities when developing standards.
  • Accessibility is not an option. Ensuring that those who deploy products implementing W3C standards allow building accessible content is necessary before W3C can endorse or recommend that standard.

Architecture:

  • Architecture is mainly theoretical. It is not a very useful concern; rather, invoking "architecture" is mainly a way of adding requirements that aren’t very useful.
  • Architecture and consistency are crucial. Consistency between components of the web architecture, and guidelines for consistency and orthogonality, are important enough that existing work should slow down to ensure architectural consistency.

And a few other topics I ran out of time to elaborate:

Digital Rights Management: DRM is Evil? DRM is an Important feature?

Privacy: Up to browsers? Mandated in specs?

Voice: Integrated? Separate spec?

Applications: Great? Misuse: use Browser?

JavaScript: Essential, stable? Fundamentally broken?

October 22, 2010

Another take on 'persistence' and 'indirection'

I've noodled on the questions of persistence of identifiers, what a "resource" is, and so on for a while; http://www.ietf.org/id/draft-masinter-dated-uri-07.txt is the latest edition of a "thought experiment". If a 'data:' URI is an immediate address, is a "tdb" URI an indirect one?

June 5, 2010

MIME and the Web

I originally wrote this as a blog post and made updates; it is now available as an IETF Internet Draft, for discussion on www-tag@w3.org.

Origins of MIME

MIME was originally invented for email, based on general principles of ‘messaging’ as a foundational architecture. The role of MIME was to extend Internet messaging beyond ASCII-only plain text, to other character sets, images, rich documents, etc. The basic architecture of complex content messaging is:
  • Message sent from A to B.
  • Message includes some data. Sender A includes standard ‘headers’ telling recipient B enough information that recipient B knows how sender  A intends the message to be interpreted.
  • Recipient B gets the message, interprets the ‘headers’ for the data and uses it as information on how to interpret the data.
MIME is a “tagging and bagging” specification:
  •  tagging: how to label content so the intent of how the content should be interpreted is known
  •  bagging: how to wrap the content so the label is clear, or, if there are multiple parts to a single message, how to combine them.
“MIME types” (renamed “Internet Media Types”) were part of the labeling: the name space of kinds of things. The MIME type registry (the “Internet Media Type registry”) is where someone can tell the world what a particular label means, as far as the sender’s intent is concerned.
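
A small sketch of tagging and bagging using Python's standard email library (the addresses and content are made up): each part carries a Content-Type label (the tag), and the multipart container wraps the parts together (the bag).

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "a@example.org"
    msg["To"] = "b@example.org"
    msg["Subject"] = "tagging and bagging"
    msg.set_content("plain text body")                     # a text/plain part
    msg.add_attachment(b"\x89PNG\r\n\x1a\n...", maintype="image",
                       subtype="png", filename="pic.png")  # a part tagged image/png

    print(msg["Content-Type"])   # multipart/mixed; boundary="..."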

Introducing MIME into the Web

The original World Wide Web  didn’t have MIME tagging and bagging. Everything transferred was HTML.
At the time ('92), other distributed information access systems, including Gopher (a distributed menu system) and WAIS (remote access to document databases), were adding capabilities for accessing many things other than text and hypertext, and the WWW folks were considering type tagging.
It was agreed that HTTP should use MIME as the vocabulary for talking about file types and character sets.
The result was that HTTP 1.0 added the “content-type” header, following (more or less) MIME. Later, for content negotiation, additional uses of this technology (in ‘Accept’ headers) were also added.
The differences between Mail MIME and Web MIME were minor (default charset, requirement for CRLF in plain text). These minor differences have caused a lot of trouble, but that’s another story.
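
For example (using example.com as a stand-in host), the label arrives as an HTTP header alongside the content:

    import http.client

    conn = http.client.HTTPConnection("example.com")
    conn.request("GET", "/")
    response = conn.getresponse()
    # The server's label for what it sent, in MIME vocabulary:
    print(response.status, response.getheader("Content-Type"))  # e.g. 200 text/html; charset=UTF-8
    conn.close()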

Distributed Extensibility

The real advantage of using MIME to label content was that the web was no longer restricted to a single format. This one addition meant expanding from Global Hypertext to Global Hypermedia:
The Internet currently serves as the backbone for a global hypertext. FTP and email provided a good start, and the gopher, WWW, or WAIS clients and servers make wide area information browsing simple. These systems even interoperate, with email servers talking to FTP servers, WWW clients talking to gopher servers, on and on.
This currently works quite well for text.  But what should WWW clients do as Gopher and WAIS servers begin to serve up pictures, sounds, movies, spreadsheet templates, postscript files, etc.? It would be a shame for each to adopt its own multimedia typing system.
If they all adopt the MIME typing system (and as many other features from MIME as are appropriate), we can step from global hypertext to global hypermedia that much easier.
The fact that HTTP could reliably transport images of different formats allowed NCSA to add <img> to HTML. MIME allowed other document formats (Word, PDF, Postscript) and other kinds of hypermedia, as well as other applications, to be part of the web. MIME was arguably the most important extensibility mechanism in the web.

Not a perfect match

Unfortunately, while the use of MIME for the web added incredible power,  things didn't quite match, because the web isn’t quite messaging:
  • web "messages" are generally HTTP responses to a specific request; this means you know more about the data before you receive it. In particular, the data really does have a ‘name’ (mainly, the URL used to access the data), while in messaging, the messages were anonymous.
  • You would like to know more about the content before you retrieve it. The "tagging" of MIME is often not sufficient to know, for example, "can I interpret this if I retrieve it", because of versioning, capabilities, or dependencies on things like screen size or interaction capabilities of the recipient.
  • Some content isn’t delivered over HTTP (files on the local file system), or there is no opportunity for tagging (data delivered over FTP), and in those cases some other way is needed to determine the file type.
Operating systems used, and continued to evolve, different systems to determine the ‘type’ of something, different from MIME tagging and bagging:
  • ‘magic numbers’: in many contexts, file types can be guessed pretty reliably by looking for well-known byte patterns at the start of the content.
  • Originally, Mac OS had a 4-character ‘file type’ and another 4-character ‘creator code’ for file types.
  • Windows evolved to use the “file extension”: 3 letters (and later more) at the end of the file name.
Information about these other ways of determining type (rather than by the label) was gathered for the MIME registry; those registering MIME types are encouraged to also describe ‘magic numbers’, Mac file types, and common file extensions. However, since there was no formal use of that information, the quality of that information in the registry is haphazard.
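As an illustration of "type by inspection" rather than by label, here is a rough sketch that checks a handful of well-known magic numbers (the list is illustrative, not a registry) and then falls back to the file-name extension via Python's standard mimetypes module.

    import mimetypes

    # A few well-known magic numbers (illustrative only, not a complete list).
    MAGIC = [
        (b"\x89PNG\r\n\x1a\n", "image/png"),
        (b"\xff\xd8\xff",      "image/jpeg"),
        (b"%PDF-",             "application/pdf"),
        (b"GIF87a",            "image/gif"),
        (b"GIF89a",            "image/gif"),
    ]

    def guess_type(path):
        with open(path, "rb") as f:
            head = f.read(16)
        for signature, media_type in MAGIC:
            if head.startswith(signature):
                return media_type
        # No recognizable header: fall back to the file-name extension.
        guessed, _encoding = mimetypes.guess_type(path)
        return guessed or "application/octet-stream"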
Finally, while tagging and bagging might be OK for unilateral one-way messaging, on the web you often want to know whether you can handle the data before retrieving and interpreting it, and the MIME types alone weren’t enough to tell.

The Rules Weren’t Quite Followed

  • Lots of file types aren’t registered (no entry in the IANA registry for them)
  • For those that are, the registration is often incomplete or incorrect (the people doing the registration didn’t understand ‘magic numbers’)

A Few Bad Things happened

  1. Browser implementors would be liberal in what they accepted, and use file extension and/or magic number or other ‘sniffing’ techniques to decide file type, without assuming the content-type label was authoritative. This was necessary anyway for files that weren’t delivered by HTTP.
  2. HTTP server implementors and administrators didn’t supply ways of easily associating the ‘intended’ file type label with the file, resulting in files frequently being delivered with a label other than the one they would have chosen if they’d thought about it, and if browsers *had* assumed content-type was authoritative.  Some popular servers had default configuration files that treated any unknown type as "text/plain" (plain text in ASCII). Since it didn't matter (the browsers worked anyway), it was hard to get this fixed.
Incorrect senders coupled with liberal readers wind up feeding a vicious cycle driven by the robustness principle.
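Here is a small sketch of how a mislabeling server comes about: the label is chosen purely from a small extension map, and anything unknown falls back to a default. The map and the default below are illustrative, but "unknown means text/plain" is exactly the kind of configuration described above.

    # Hypothetical server-side mapping from file extension to Content-Type.
    EXTENSION_MAP = {
        ".html": "text/html",
        ".gif":  "image/gif",
        ".txt":  "text/plain",
    }
    DEFAULT_TYPE = "text/plain"   # the problematic catch-all default

    def content_type_for(path):
        for ext, media_type in EXTENSION_MAP.items():
            if path.lower().endswith(ext):
                return media_type
        return DEFAULT_TYPE

    # A PDF served through this map gets labeled text/plain; a sniffing
    # browser "fixes" it, so nobody notices the misconfiguration.
    print(content_type_for("/files/report.pdf"))   # -> text/plain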

Consequences

The result, alas, is that the web is unreliable, in that
  • servers sending responses to browsers don’t have a good guarantee that the browser won’t “sniff” the content and decide to do something other than treat it as it is labeled,
  • browsers receiving content don’t have a good guarantee that the content isn’t mis-labeled, and
  • intermediaries -- gateways, proxies, caches, and other pieces of the web infrastructure -- don’t have a good way of telling what the conversation means.
This ambiguity and ‘sniffing’ also apply to packaged content in webapps (‘bagging’, but using ZIP rather than MIME multipart).

The Down Side of Extensibility

Extensibility adds great power, and allows the web to evolve without committee approval of every extension. For some (those who want to extend, and their clients who want those extensions), this is power! For others (those who are building web components or infrastructure), extensibility is a drawback -- it adds to the unreliability and variability of the web experience. When senders use extensions that recipients aren’t aware of, or implement incorrectly or incompletely, communication often fails.  With messaging, this is a serious problem, although most ‘rich text’ documents are still delivered in multiple forms (using multipart/alternative).
If your job is to support users of a popular browser, however, where each user has installed a different configuration of MIME handlers and extensibility mechanisms, MIME may appear to add unnecessary complexity and variable experience for users of all but the most popular MIME types.

The MIME story applies to charsets

MIME includes provisions not only for file 'types', but also, importantly, for the "character encoding" used by text types: simple US-ASCII, Western European ISO-8859-1, Unicode UTF-8. A similar vicious cycle also happened with character set labels: mislabeled content happily processed correctly by liberal browsers encouraged more and more sites to proliferate text with mis-labeled character sets, to the point where browsers feel they *have* to guess, because the label is so often wrong.
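As a small sketch of the charset half of the problem (the strings and labels here are made up for illustration): the bytes are UTF-8, the label claims ISO-8859-1, and a receiver that trusts the label quietly produces mojibake, while a receiver that guesses trains senders never to fix the label.

    payload = "café".encode("utf-8")   # what the author actually wrote
    declared = "iso-8859-1"            # what the Content-Type label claims

    trusting = payload.decode(declared)   # -> 'cafÃ©' (mojibake, but "correct" per the label)
    guessing = payload.decode("utf-8")    # -> 'café' (works, and the label never gets fixed)
    print(trusting, guessing)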

Embedded, downloaded, or launched in an independent application

MIME is used not only for entire documents ("HTML" vs. "Word" vs. "PDF"), but also for embedded components of documents ("JPEG image" vs. "PNG image"). However, the use cases, requirements, and likely operational impact of MIME handling are different for each of these.

Additional Use Cases: Polyglot and Multiview

There are some interesting additional use cases which add to the design requirements:
  •  "Polyglot" documents:  A ‘polyglot’ document is one which is some data which can be treated as two different Internet Media Types, in the case where the meaning of the data is the same. This is part of a transition strategy to allow content providers (senders) to manage, produce, store, deliver the same data, but with two different labels, and have it work equivalently with two different kinds of receivers (one of which knows one Internet Media Type, and another which knows a second one.) This use case was part of the transition strategy from HTML to an XML-based XHTML, and also as a way of a single service offering both HTML-based and XML-based processing (e.g., same content useful for news articles and web pages.
  • "Multiview” documents: This use case seems similar but it’s quite different. In this case, the same data has very different meaning when served as two different content-types, but that difference is intentional; for example, the same data served as text/html is a document, and served as an RDFa type is some specific data.

Versioning

Formats and their specifications evolve over time. Sometimes compatibly, sometimes not. It is part of the responsibility of the designer of a new version of a file type to try to ensure both forward and backward compatibility: new documents work reasonably (with some fallback) with old viewers; old documents work reasonably with new viewers. In some cases this is accomplished, in others not; in some cases, "works reasonably" is softened to "either works reasonably or gives a clear warning about the nature of the problem (version mismatch)."
In MIME, the 'tag', the Internet Media Type, corresponds to the versioned series: it does not identify a particular version of a file format. The notion of an "Internet Media Type" is deliberately coarse-grained. The general idea is that the MIME type identifies the family, and the type's registration says how to find version information on a per-format basis. Many (most) file formats have an internal version indicator embedded in the content itself, with the idea that you only need a new MIME type to designate a completely incompatible format. That is, the message itself contains the further information needed to determine more precisely how the data is to be interpreted.
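For example, here is a deliberately simplistic sketch of "the version lives inside the content": the label says only application/pdf, while the header line of the file itself carries the format version.

    def pdf_version(path):
        # The label ("application/pdf") names the family; the first bytes
        # of the file carry the version, e.g. b"%PDF-1.7".
        with open(path, "rb") as f:
            head = f.read(16)
        if not head.startswith(b"%PDF-"):
            return None
        return head[5:8].decode("ascii", errors="replace")   # -> "1.7"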
Unfortunately, a lot has gone wrong in this scenario as well: processors ignore version indicators, which encourages content creators not to be careful about supplying correct version indicators, leading to lots of content with wrong version indicators.
Those updating an existing MIME type registration to account for new versions are admonished not to make previously conforming documents non-conforming. This is harder to enforce than it would seem, because the previous specifications do not always accurately describe how the MIME type was used in practice.

Content Negotiation

The general idea of content negotiation is that when party A communicates with party B, and the message can be delivered in more than one format (or version, or configuration), there should be some way to negotiate: some way for A to communicate to B the available options, and for B to accept them or indicate preferences.
Content negotiation happens all over. When one fax machine twirps to another when initially connecting, they are negotiating resolution, compression methods and so forth. In Internet mail, which is a one-way communication, the "negotiation" consists of the sender preparing and sending multiple versions of the message in a single multipart/alternative body (one in text/plain and one in text/html, for example, in increasing order of preference); the recipient then renders the richest version it can understand.
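A minimal sketch of that mail-style "negotiation by enumeration", using Python's standard email library: the sender simply includes both representations in one multipart/alternative message.

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["Subject"] = "negotiation by enumeration"
    msg.set_content("plain-text version of the body")           # text/plain part
    msg.add_alternative("<p>HTML version of the body</p>",
                        subtype="html")                          # preferred text/html part
    print(msg.get_content_type())   # -> multipart/alternative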
HTTP added "Accept" and "Accept-language" to allow content negotiation in HTTP GET, based on MIME types, and there are other methods explained in the HTTP spec.

Fragment identifiers

 The web added the notion of being able to address part of a piece of content, and not the whole thing, by adding a ‘fragment identifier’ to the URL that addressed the data. Of course, this originally made sense for the original web with just HTML, but how would it apply to other content? The URL spec glibly noted that “the definition of the fragment identifier meaning depends on the MIME type”, but unfortunately, few of the MIME type definitions included this information, and practices diverged greatly.
If the interpretation of fragment identifiers depends on the MIME type, though, it sits awkwardly with content negotiation: the same URL (and fragment) may be served as different types, each of which could define the fragment identifier differently.

Where we need to go

 Many people are confused about the purpose of MIME in the web, its uses, and the meaning of MIME types. Many W3C specifications, TAG findings, and MIME type registrations make what are (IMHO) incorrect assumptions about the meaning and purpose of a MIME type registration.
We need a clear direction on how to make the web more reliable, not less. We need a realistic transition plan from the unreliable web to the more reliable one. Part of this is to encourage senders (web servers) to mean what they say, and encourage recipients (browsers) to give preference to what the senders are sending.
We should try to create specifications for protocols and best practices that will lead the web to more reliable and secure communication. To this end, we give an overall architectural approach to the use of MIME, and then specific specifications for HTTP clients and servers, web browsers in general, and proxies and intermediaries, which encourage behavior that continues to work with the already deployed infrastructure (of servers, browsers, and intermediaries), but which, if followed, also improves the interoperability, reliability, and security of the web.

Specific recommendations

(I think I want to see if we can get agreement on the background, problem statement, and requirements before sending out any more about possible solutions; however, the following is a partial list of documents that should be reviewed and updated, or new documents written.)

  • update MIME / Internet Media Type registration process (IETF BCP)
    • Allow commenting or easier update; not all MIME type owners need or have all the information the internet needs
    • Be clearer about relationship of 'magic numbers' to sniffing; review MIME types already registered & update.
    • Be clearer about requiring Security Considerations to address risks of sniffing
    • require definition of fragment identifier applicability
    • Perhaps ask the ‘applications that use this type’ section to be clearer about whether the file type is suitable for embedding (plug-in), or as a separate document with auto-launch (MIME handler), or should always be downloaded.
    • Be clearer about file extension use & relationship of file extensions to MIME handlers
  • FTP specifications
    • Should FTP clients also change their rules about guessing file types based on the OS of the FTP server?
  • update Tag finding on authoritative metadata
    • is it possible to remove 'authority'
  • new:  MIME and Internet Media Type section to WebArch
    • based on this memo
  • New: Add a W3C web architecture material on MIME in HTML to W3C web site
    • based on this memo
  • update mimesniff / HTML spec on sniffing, versioning, MIME types, charset sniffing
    • Sniffing uses MIME registry
    • allow sniffing only to ‘upgrade’ (refine) the labeled type
    • discourage sniffing unless there is no type label (a sketch of this policy follows this list)
      • malformed content-type: error
      • no knowledge that given content-type isn't better than guessed content-type
  • update WEBAPPS specs (which ones?)
  • Reconsider other extensibility mechanisms (namespaces, for example): should they use MIME or something like it?
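As a sketch of the conservative receiver policy suggested in the mimesniff item above (trust a well-formed label, treat a malformed label as an error, and sniff only when there is no label at all): the sniff_bytes parameter stands in for whatever detection the receiver has available (magic numbers, registry data), and the syntax check is a simplification of the real grammar.

    import re

    # Rough type/subtype syntax check (a simplification, for illustration).
    TYPE_SYNTAX = re.compile(r"^[!#$%&'*+.^_`|~0-9A-Za-z-]+/[!#$%&'*+.^_`|~0-9A-Za-z-]+")

    def effective_type(content_type, body, sniff_bytes):
        if content_type is None:
            return sniff_bytes(body)          # no label at all: guessing is all we have
        if not TYPE_SYNTAX.match(content_type):
            raise ValueError("malformed Content-Type: %r" % content_type)
        return content_type.split(";")[0].strip().lower()   # otherwise the label wins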

http://lists.w3.org/Archives/Public/www-talk/1992SepOct/0035.html
Re: misconceptions about MIME [long]
Larry Masinter (masinter@parc.xerox.com)
Tue, 27 Oct 1992 14:38:18 PST

"If I wish to retrieve the document, say to view it, I might want to choose the available representation that is most appropriate for my purpose. Imagine my dismay to retrieve a 50 megabyte postscript file from an anonymous FTP archive, only to discover that it is in the newly announced Postscript level 4 format, or to try to edit it only to discover that it is in the (upwardly compatible but not parsable by my client) version 44 of Rich Text. In each case, the appropriateness of alternate sources and representations of a document would depend on information that is currently only available in-band.
I believe that MIME was developed in the context of electronic mail, but that the usage patterns in space and time of archives, database services and the like require more careful attention (a) to out-of-band information about format versions, so that you might know, before you retrieve a representation, whether you have the capability of coping with it, and (b) some restriction on those formats which might otherwise be uncontrollable."
http://lists.w3.org/Archives/Public/www-talk/1992SepOct/0056.html
Re: misconceptions about MIME [long]
Larry Masinter (masinter@parc.xerox.com)
Fri, 30 Oct 1992 15:54:56 PST
I propose (once again) that instead of saying 'application/postscript' it say, at a minimum, 'application/postscript 1985' vs 'application/postscript 1994' or whatever you would like to designate as a way to uniquely identify which edition of the Postscript reference manual you are talking about; instead of being identified as 'image/tiff' the files be identified as 'image/tiff 5.0 Class F' vs 'image/tiff 7.0 class QXB'.