October 22, 2010

Another take on 'persistence' and 'indirection'

I've noodled on the questions of persistence of identifiers, what a "resource" is, and so on for a while; http://www.ietf.org/id/draft-masinter-dated-uri-07.txt is the latest edition of a "thought experiment". If a 'data:' URI is an immediate address, is a "tdb" URI an indirect one?

June 5, 2010

MIME and the Web

I originally wrote this as a blog post and made updates; it is now available as an IETF Internet Draft, for discussion on www-tag@w3.org.

Origins of MIME

MIME was originally invented for email, based on the general principles of 'messaging', a foundational architecture. The role of MIME was to extend Internet messaging beyond ASCII-only plain text, to other character sets, images, rich documents, and so on. The basic architecture of messaging with complex content is:
  • Message sent from A to B.
  • Message includes some data. Sender A includes standard ‘headers’ telling recipient B enough information that recipient B knows how sender  A intends the message to be interpreted.
  • Recipient B gets the message, interprets the ‘headers’ for the data and uses it as information on how to interpret the data.
MIME is a “tagging and bagging” specification:
  •  tagging: how to label content so the intent of how the content should be interpreted is known
  •  bagging: how to wrap the content so the label is clear, or, if there are multiple parts to a single message, how to combine them.
"MIME types" (renamed "Internet Media Types") were part of the labeling: the name space of kinds of things. The MIME type registry (the "Internet Media Type registry") is where someone can tell the world what a particular label means, as far as the sender's intent is concerned.
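As a concrete illustration (a minimal sketch using Python's standard email package, added here; the addresses and file name are hypothetical), tagging and bagging in miniature: each part carries a Content-Type label, and a multipart wrapper bags the parts together.

  from email.mime.multipart import MIMEMultipart
  from email.mime.text import MIMEText
  from email.mime.image import MIMEImage

  # "Bagging": a multipart container that combines several parts into one message.
  msg = MIMEMultipart("mixed")
  msg["From"] = "sender@example.org"        # hypothetical addresses
  msg["To"] = "recipient@example.org"
  msg["Subject"] = "A message with labeled parts"

  # "Tagging": each part is labeled so the recipient knows the sender's intent.
  msg.attach(MIMEText("Plain text body", "plain", "us-ascii"))    # text/plain
  msg.attach(MIMEText("<p>Rich text body</p>", "html", "utf-8"))  # text/html
  with open("photo.png", "rb") as f:                              # hypothetical file
      msg.attach(MIMEImage(f.read(), _subtype="png"))             # image/png

  print(msg.as_string()[:400])   # the headers show the Content-Type labels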

Introducing MIME into the Web

The original World Wide Web  didn’t have MIME tagging and bagging. Everything transferred was HTML.
At the time ('92), other distributed information access systems, including Gopher (a distributed menu system) and WAIS (remote access to document databases), were adding capabilities for accessing many things other than text and hypertext, and the WWW folks were considering type tagging.
It was agreed that HTTP should use MIME as the vocabulary for talking about file types and character sets.
The result was that HTTP 1.0 added the “content-type” header, following (more or less) MIME. Later, for content negotiation, additional uses of this technology (in ‘Accept’ headers) were also added.
The differences between Mail MIME and Web MIME were minor (default charset, requirement for CRLF in plain text). These minor differences have caused a lot of trouble, but that’s another story.
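For example (a minimal sketch using Python's standard library; the URL is only a placeholder), on the web the label arrives as a Content-Type header on the HTTP response:

  import urllib.request

  with urllib.request.urlopen("http://example.com/") as resp:
      # The web's MIME label: the Content-Type header of the HTTP response.
      print(resp.headers.get("Content-Type"))     # e.g. "text/html; charset=UTF-8"
      print(resp.headers.get_content_type())      # just the media type: "text/html"
      print(resp.headers.get_content_charset())   # the charset parameter, if any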

Distributed Extensibility

The real advantage of using MIME to label content was that the web was no longer restricted to a single format. This one addition meant expanding from Global Hypertext to Global Hypermedia:
The Internet currently serves as the backbone for a global hypertext. FTP and email provided a good start, and the gopher, WWW, or WAIS clients and servers make wide area information browsing simple. These systems even interoperate, with email servers talking to FTP servers, WWW clients talking to gopher servers, on and on.
This currently works quite well for text.  But what should WWW clients do as Gopher and WAIS servers begin to serve up pictures, sounds, movies, spreadsheet templates, postscript files, etc.? It would be a shame for each to adopt its own multimedia typing system.
If they all adopt the MIME typing system (and as many other features from MIME as are appropriate), we can step from global hypertext to global hypermedia that much easier.
The fact that HTTP could reliably transport images of different formats allowed NCSA to add <img> to HTML. MIME allowed other document formats (Word, PDF, Postscript) and other kinds of hypermedia, as well as other applications, to be part of the web. MIME was arguably the most important extensibility mechanism in the web.

Not a perfect match

Unfortunately, while the use of MIME for the web added incredible power,  things didn't quite match, because the web isn’t quite messaging:
  • web "messages" are generally HTTP responses to a specific request; this means you know more about the data before you receive it. In particular, the data really does have a ‘name’ (mainly, the URL used to access the data), while in messaging, the messages were anonymous.
  • You would like to know more about the content before you retrieve it. The "tagging" of MIME is often not sufficient to know, for example, "can I interpret this if I retrieve it", because of versioning, capabilities, or dependencies on things like screen size or interaction capabilities of the recipient.
  • Some content isn't delivered over HTTP (files on a local file system), or there is no opportunity for tagging (data delivered over FTP); in those cases, some other way of determining file type is needed.
Operating systems used, and have continued to evolve, different systems for determining the 'type' of something, different from MIME tagging and bagging:
  • 'magic numbers': in many contexts, file types could be guessed pretty reliably by looking for known signatures at the start of the content.
  • Originally, Mac OS had a 4-character 'file type' and another 4-character 'creator code' for each file.
  • Windows evolved to use the "file extension": 3 letters (and then more) at the end of the file name.
Information about these other ways of determining type (other than by the label) was gathered for the MIME registry; those registering MIME types are encouraged to also describe 'magic numbers', Mac file types, and common file extensions. However, since there was no formal use of that information, its quality in the registry is haphazard.
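For illustration, here is a minimal sketch (added here, not from the original post) of the two non-MIME mechanisms above: guessing by file extension with Python's mimetypes module, and guessing by 'magic number' against a few well-known signatures.

  import mimetypes

  # A few well-known file signatures ('magic numbers'); an illustrative subset only.
  MAGIC = [
      (b"\x89PNG\r\n\x1a\n", "image/png"),
      (b"\xff\xd8\xff",      "image/jpeg"),
      (b"GIF87a",            "image/gif"),
      (b"GIF89a",            "image/gif"),
      (b"%PDF-",             "application/pdf"),
  ]

  def guess_by_extension(filename):
      mime_type, _encoding = mimetypes.guess_type(filename)
      return mime_type                       # e.g. "application/pdf" for "report.pdf"

  def guess_by_magic(first_bytes):
      for signature, mime_type in MAGIC:
          if first_bytes.startswith(signature):
              return mime_type
      return None

  print(guess_by_extension("report.pdf"))              # application/pdf
  print(guess_by_magic(b"\x89PNG\r\n\x1a\n\x00\x00"))  # image/png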
Finally, there was the fact that while tagging and bagging might be OK for unilateral one-way messaging, on the web you might want to know whether you could handle the data before retrieving and interpreting it, and the MIME type alone wasn't enough to tell.

The Rules Weren’t Quite Followed

  • Lots of file types aren’t registered (no entry in IANA for file types)
  • For those that are, the registration is often incomplete or incorrect (the people doing the registration didn't understand 'magic numbers', for example).

A Few Bad Things happened

  1. Browser implementors would be liberal in what they accepted, and use file extension and/or magic number or other ‘sniffing’ techniques to decide file type, without assuming content-label was authoritative. This was necessary anyway for files that weren’t delivered by HTTP.
  2. HTTP server implementors and administrators didn't supply ways of easily associating the 'intended' file type label with the file, resulting in files frequently being delivered with a label other than the one they would have chosen if they'd thought about it, and if browsers *had* assumed content-type was authoritative. Some popular servers had default configuration files that treated any unknown type as "text/plain" (plain text in ASCII). Since it didn't matter (the browsers worked anyway), it was hard to get this fixed.
Incorrect senders coupled with liberal readers wind up feeding a vicious cycle based on the robustness principle.

Consequences

The result, alas, is that the web is unreliable, in that
  • servers sending responses to browsers don’t have a good guarantee that the browser won’t “sniff” the content and decide to do something other than treat it as it is labeled, and
  • browsers receiving content don’t have a good guarantee that the content isn’t mis-labeled
  • intermediaries -- gateways, proxies, caches, and other pieces of the web infrastructure -- don’t have a good way of telling what the conversation means. 
This ambiguity and ‘sniffing’ also applies to packaged content in webapps (‘bagging’ but using ZIP rather than MIME multipart).

The Down Side of Extensibility

Extensibility adds great power, and allows the web to evolve without committee approval of every extension. For some (those who want to extend, and their clients who want those extensions), this is power! For others (those who are building web components or infrastructure), extensibility is a drawback: it adds to the unreliability and variability of the web experience. When senders use extensions that recipients aren't aware of, or implement incorrectly or incompletely, communication often fails. With messaging, this is a serious problem, although most 'rich text' documents are still delivered in multiple forms (using multipart/alternative).
If your job is to support users of a popular browser, however, where each user has installed a different configuration of MIME handlers and extensibility mechanisms, MIME may appear to add unnecessary complexity and variable experience for users of all but the most popular MIME types.

The MIME story applies to charsets

MIME includes provisions not only for file 'types', but also, importantly, for the "character encoding" used by text types: simple US-ASCII, Western European ISO-8859-1, Unicode UTF-8. A similar vicious cycle happened with character set labels: mislabeled content happily processed by liberal browsers encouraged more and more sites to proliferate text with mis-labeled character sets, to the point where browsers feel they *have* to second-guess the label.
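To see why the label matters, a small sketch (added for illustration): the same bytes decode to different text depending on which charset label is believed.

  data = "café".encode("utf-8")       # the bytes b'caf\xc3\xa9'

  print(data.decode("utf-8"))         # 'café'  -- what the sender intended
  print(data.decode("iso-8859-1"))    # 'cafÃ©' -- what a wrong label produces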

Embedded, downloaded, or launched in an independent application

MIME is used not only for entire documents ("HTML" vs "Word" vs "PDF"), but also for embedded components of documents ("JPEG image" vs. "PNG image"). However, the use cases, requirements, and likely operational impact of MIME handling differ across those uses.

Additional Use Cases: Polyglot and Multiview

There are some interesting additional use cases which add to the design requirements:
  • "Polyglot" documents: A 'polyglot' document is data that can be treated as either of two different Internet Media Types, in the case where the meaning of the data is the same under both. This is part of a transition strategy to allow content providers (senders) to manage, produce, store, and deliver the same data with two different labels, and have it work equivalently with two different kinds of receivers (one of which knows one Internet Media Type, and another which knows the second one). This use case was part of the transition strategy from HTML to an XML-based XHTML, and also a way for a single service to offer both HTML-based and XML-based processing (e.g., the same content useful for news articles and web pages).
  • "Multiview" documents: This use case seems similar, but it's quite different. In this case, the same data has a very different meaning when served as two different content-types, but that difference is intentional; for example, the same data served as text/html is a document, and served as an RDFa type is some specific data.

Versioning

Formats and their specifications evolve over time. Sometimes compatibly, sometimes not. It is part of the responsibility of the designer of a new version of a file type to try to ensure both forward and backward compatibility: new documents work reasonably (with some fallback) with old viewers; old documents work reasonably with new viewers. In some cases this is accomplished, in others not; in some cases, "works reasonably" is softened to "either works reasonably or gives a clear warning about the nature of the problem (version mismatch)".
In MIME, the 'tag', the Internet Media Type, corresponds to a versioned series: Internet Media Types do not identify a particular version of a file format. Rather, the general idea is that the MIME type identifies the family, along with how you're supposed to find version information on a per-format basis. Many (most) file formats have an internal version indicator, with the idea that you only need a new MIME type to designate a completely incompatible format. The notion of an "Internet Media Type" is very coarse-grained; the general approach has been that the Media Type includes provisions for version indicator(s) embedded in the content itself to determine more precisely how the data is to be interpreted. That is, the message itself contains further information.
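PDF is a handy example (a small sketch added here; the file name is hypothetical): the label is application/pdf regardless of version, and the version indicator lives in the first line of the file itself.

  def pdf_version(path):
      """Read the in-band version indicator of a PDF file, e.g. b'%PDF-1.7'."""
      with open(path, "rb") as f:
          header = f.readline().strip()
      if not header.startswith(b"%PDF-"):
          raise ValueError("not a PDF header: %r" % header)
      return header[len(b"%PDF-"):].decode("ascii")    # "1.4", "1.7", ...

  # print(pdf_version("example.pdf"))   # hypothetical file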
Unfortunately, lots has gone wrong in this scenario as well: processors ignore version indicators, which encourages content creators not to be careful about supplying correct version indicators, leading to lots of content with wrong version indicators.
Those updating an existing MIME type registration to account for new versions are admonished not to make previously conforming documents non-conforming. This is harder to enforce than it would seem, because the previous specifications are not always accurate about how the MIME type was used in practice.

Content Negotiation

The general idea of content negotiation is that when party A communicates with party B, and the message can be delivered in more than one format (or version, or configuration), there is some way of negotiating: some way for A to communicate to B the available options, and for B to accept or indicate preferences.
Content negotiation happens all over. When one fax machine twirps to another when initially connecting, they are negotiating resolution, compression methods and so forth. In Internet mail, which is a one-way communication, the "negotiation" consists of the sender preparing and sending multiple versions of the message, one in text/html, one in text/plain, for example, in sender-preference order. The recipient then chooses the first version it can understand.
HTTP added "Accept" and "Accept-Language" headers to allow content negotiation in HTTP GET, based on MIME types; there are other methods explained in the HTTP spec.
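A minimal sketch of server-side negotiation against an Accept header (added for illustration; it parses q-values but deliberately ignores wildcards and the rest of the HTTP rules):

  def parse_accept(header):
      """Return [(media_range, q), ...] from an Accept header value."""
      ranges = []
      for item in header.split(","):
          parts = [p.strip() for p in item.split(";")]
          q = 1.0
          for p in parts[1:]:
              if p.startswith("q="):
                  q = float(p[2:])
          ranges.append((parts[0], q))
      return ranges

  def choose(accept_header, available):
      """Pick the available type the client prefers (highest q)."""
      prefs = dict(parse_accept(accept_header))
      best = max(available, key=lambda t: prefs.get(t, 0.0))
      return best if prefs.get(best, 0.0) > 0 else None

  accept = "text/html,application/xhtml+xml;q=0.9,*/*;q=0.8"
  print(choose(accept, ["application/xhtml+xml", "text/plain"]))   # application/xhtml+xml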

Fragment identifiers

The web added the notion of being able to address part of the content rather than the whole, by adding a 'fragment identifier' to the URL that addressed the data. Of course, this originally made sense for the original web with just HTML, but how would it apply to other content? The URL spec glibly noted that "the definition of the fragment identifier meaning depends on the MIME type", but unfortunately, few of the MIME type definitions included this information, and practices diverged greatly.
If the interpretation of fragment identifiers depends on the MIME type, though, this really crimps the style of using fragment identifiers differently if content negotiation is wanted.
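For reference, a small sketch (added here): the fragment identifier is kept by the client, never sent to the server, and its interpretation is supposed to depend on the media type of whatever comes back.

  from urllib.parse import urldefrag

  url, fragment = urldefrag("http://example.com/page.html#section2")
  print(url)        # http://example.com/page.html  -- what is actually dereferenced
  print(fragment)   # section2  -- interpreted by the client, per the media type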

Where we need to go

Many people are confused about the purpose of MIME in the web, its uses, and the meaning of MIME types. Many W3C specifications, TAG findings, and MIME type registrations make what are (IMHO) incorrect assumptions about the meaning and purposes of a MIME type registration.
We need a clear direction on how to make the web more reliable, not less. We need a realistic transition plan from the unreliable web to the more reliable one. Part of this is to encourage senders (web servers) to mean what they say, and encourage recipients (browsers) to give preference to what the senders are sending.
We should try to create specifications for protocols and best practices that will lead the web to more reliable and secure communication. To this end, we give an overall architectural approach to the use of MIME, and then specific specifications for HTTP clients and servers, web browsers in general, and proxies and intermediaries, which encourage behavior that, on the one hand, continues to work with the already deployed infrastructure (of servers, browsers, and intermediaries), but which, if followed, also improves the operability, reliability, and security of the web.

Specific recommendations

(I think I want to see if we can get agreement on the background, problem statement, and requirements before sending out any more about possible solutions; however, the following is a partial list of documents that should be reviewed and updated, or new documents written.)

  • update MIME / Internet Media Type registration process (IETF BCP)
    • Allow commenting or easier update; not all MIME type owners need or have all the information the internet needs
    • Be clearer about relationship of 'magic numbers' to sniffing; review MIME types already registered & update.
    • Be clearer about requiring Security Considerations to address risks of sniffing
    • require definition of fragment identifier applicability
    • Perhaps ask the 'applications that use this type' section to be clearer about whether the file type is suitable for embedding (plug-in) or as a separate document with auto-launch (MIME handler), or should always be downloaded.
    • Be clearer about file extension use & relationship of file extensions to MIME handlers
  • FTP specifications
    • Do FTP clients also need to change their rules about guessing file types based on the OS of the FTP server?
  • update Tag finding on authoritative metadata
    • Is it possible to remove 'authority'?
  • new:  MIME and Internet Media Type section to WebArch
    • based on this memo
  • New: Add a W3C web architecture material on MIME in HTML to W3C web site
    • based on this memo
  • update mimesniff / HTML spec on sniffing, versioning, MIME types, charset sniffing
    • Sniffing uses MIME registry
    • all sniffing can a upgrade
    • discourage sniffing unless there is no type label (a sketch of such a policy follows this list)
      • malformed content-type: error
      • no knowledge that given content-type isn't better than guessed content-type
  • update WEBAPPS specs (which ones?)
  • Reconsider other extensibility mechanisms (namespaces, for example): should they use MIME or something like it?
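As a sketch of the sniffing policy suggested in the mimesniff item above (illustrative only; the real rules belong in the mimesniff/HTML specs): believe the sender's label when there is a usable one, and guess from the content only when the label is missing or malformed.

  def effective_type(content_type_header, first_bytes, guess_by_magic):
      """Prefer the declared label; sniff only when there is no usable label."""
      declared = (content_type_header or "").strip()
      if declared and "/" in declared:
          # A label was supplied and is well-formed: believe the sender.
          return declared.split(";")[0].strip()
      # No label, or a malformed one: fall back to guessing from the content.
      return guess_by_magic(first_bytes) or "application/octet-stream"

Here guess_by_magic could be the signature-based guess from the earlier sketch.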

http://lists.w3.org/Archives/Public/www-talk/1992SepOct/0035.html
Re: misconceptions about MIME [long]
Larry Masinter (masinter@parc.xerox.com)
Tue, 27 Oct 1992 14:38:18 PST

"If I wish to retrieve the document, say to view it, I might want to choose the available representation that is most appropriate for my purpose. Imagine my dismay to retrieve a 50 megabyte postscript file from an anonymous FTP archive, only to discover that it is in the newly announced Postscript level 4 format, or to try to edit it only to discover that it is in the (upwardly compatible but not parsable by my client) version 44 of Rich Text. In each case, the appropriateness of alternate sources and representations of a document would depend on information that is currently only available in-band.
I believe that MIME was developed in the context of electronic mail, but that the usage patterns in space and time of archives, database services and the like require more careful attention (a) to out-of-band information about format versions, so that you might know, before you retrieve a representation, whether you have the capability of coping with it, and (b) some restriction on those formats which might otherwise be uncontrollable.
http://lists.w3.org/Archives/Public/www-talk/1992SepOct/0056.html
Re: misconceptions about MIME [long]
Larry Masinter (masinter@parc.xerox.com)
Fri, 30 Oct 1992 15:54:56 PST
I propose (once again) that instead of saying 'application/postscript' it say, at a minimum, 'application/postscript 1985' vs 'application/postscript 1994' or whatever you would like to designate as a way to uniquely identify which edition of the Postscript reference manual you are talking about; instead of being identified as 'image/tiff' the files be identified as 'image/tiff 5.0 Class F' vs 'image/tiff 7.0 class QXB'.

March 24, 2010

Ozymandias' URI


I met a traveller from an antique land
Who said: Two vast and trunkless legs of stone
Stand in the desert. Near them, on the sand,
Half sunk, a shattered visage lies, whose frown
And wrinkled lip, and sneer of cold command
Tell that its sculptor well those passions read
Which yet survive, stamped on these lifeless things,
The hand that mocked them and the heart that fed.
And on the pedestal these words appear:
My name is Ozymandias, king of kings:
Look on my works, ye Mighty, and despair!"
And Here is my URI, http://ozymandias.perm.
Nothing beside remains. Round the decay
Of that colossal wreck, boundless and bare
The lone and level sands stretch far away.

(prompted by renewed TAG discussion on persistent URIs, cf. Problems URLs don't solve: stuff goes away)

March 22, 2010

Browsers are rails, web sites are trains (authoring conformance requirements)


There's some ongoing debate about whether or how the HTML specification should or shouldn't contain "authoring requirements".  I thought I'd write about the general need.

In general, a standard is a (specification of an) interface, across which multiple kinds of agents can communicate. The usual reason for "conformance classes" in protocol specifications is to improve interoperability between agents of the various classes, as an extension of the Robustness principle (or Postel's Law): Be conservative in what you do; be liberal in what you accept from others.

As an engineering principle, robustness predates software by a long shot. There is a standard for railroad tracks, and a separate standard for rail car wheels. The railroad track standard says how wide a track should be, and gives some tolerances which should be kept within. Similarly, a rail car wheel standard would define how far apart the rail car wheels should be, and give restrictions and guidelines on the center of gravity of the car, guidelines on how the flanges of a rail car wheel should be constructed and some tolerances of those. The goal of the standard and the tolerances is to avoid derailment. The rules for the rails and the rules for the rail car wheels go together.

How does this map to the current HTML debate? The current HTML spec contains requirements for HTML interpreting agents (browsers, or the rails); the discussion is around the standards for HTML documents (rail cars) and for the behavior of document producers as well.

The goal of interoperability is to keep the trains on the rails -- to ensure that web pages presented are received and interpreted as expected. Following the robustness principle, it is reasonable to have a conservative standard for HTML documents (and authoring tools), and for that standard to be more conservative than would be required if every implementation (rail system) exactly followed all of the standard. Document conformance, recommended document practice, and backward-compatible advisory components are all reasonable targets for specification.

Of course, designing a good set of conservative guidelines involves negotiating a set of tradeoffs. Is there only one "conservative" guideline? Are there several, depending on the importance of backward compatibility with older browsers? Should the fact that some content was previously conforming have weight? Those are still the open questions.

But conservative guidelines for content and authoring tools are important, and not (as originally noted in the bug report cited above) "ideology" or "loyalty".

March 13, 2010

Resources are Angels; URLs are Pins


Everyone knows what a URL is, right? It's just a simple thing -- a kind of pointer to something. You'd think a standard around URLs would be easy, compared to something as complicated as a document description and application environment.

Except that every little bit of what URLs are, how they work, how to use them and find them, has endless complexities and is a source of disagreement! No wonder web standards are hard.

Trying to understand all of the issues around URLs winds up pushing into philosophy of language, knowledge representation and AI systems, security and spoofing, metadata, the way in which academic citations are made, the requirements for gracefully moving to IPv6, the upgrade of the domain name system to allow non-English domain names (that use letters other than A-Z), and so on and so forth.

This post starts to explore a few of the issues, why they're hard, and why we're still arguing about them. I'll expand it or blog more over time; comments appreciated.

Sometimes, when you're dealing with a hard subject in a committee, you wind up with "bike shed" discussions (see Parkinson's Law of Triviality). In the case of URLs, I think the "bike shed" is and has been the terminology -- people argue about what to call things, rather than about the harder things to talk about, like how it works and what the requirements are. "Just rename the spec to be X, and it's fine; I'm OK if you call it an X spec, I just don't like you calling it a Y."

"Universal" vs. "Uniform"

Perhaps it was hubris, envisioning a world all connected by links, that led to a vision of the World Wide Web where there was not just an identifier, a resource identifier, but a Universal resource identifier. Universal: it could be used for any situation where you wanted to identify something. Now, something that is universal actually needs to work in all of the situations that might need an identifier (whatever that is) for identifying a resource (whatever that is).

But in the history of computer science, there have been lots and lots of different kinds of identifiers used for reference; in the history of language, you might think that any "noun phrase" is a kind of identifier for something. So the requirements for a Universal resource identifier are pretty extensive, and hard to sort out. The group chartered with standardizing these URI things -- bringing them from a vision to a standard -- (which I chaired) first tried to tackle the problem of not really wanting to solve that universal problem. How could you identify anything? So, in the great bike shed tradition, after hundreds (perhaps thousands) of emails on the subject, "Universal" was changed to "Uniform". Something that is "Uniform" just has to work the same wherever it's used, but if it's "Universal", it would have to be good for *every* kind of resource identification, which is ambitious but (alas) impossible.

"Identifiers" become "Names" and "Locators"

One of the great advances in distributed system design was the introduction of a kind of network design that separated naming, addressing, and routing (host name, IP address, which gateway to send it to.)   Here at the application layer (that is, the web resting on top of the internet), this distinction has been recapitulated: is a URL the "name" of something, which gives you a hint as to how you get at it, is it really the "address" of something, telling you where on the Internet it is, but it's up to you to figure out how to get there, or is it really a "route" to something, telling you what steps you have to take to get there?

And as we all know, these separations are somewhat artificial. Will the post office deliver a letter addressed to "Hixie, Google"? Don't we use names as addresses and routes as names? Since one can be used for the other, are they really different?

They're certainly different in the library and information management world: knowing what a document IS (in caps) is a lot different from knowing where you could find it.

We spent a lot of time on this topic, finally coming up with the decision: the space of Uniform Resource Identifiers (URIs) would be split: there would be Uniform Resource Names ( URNs) and the rest. The rest of these things would be called URLs, "Uniform Resource Locators". URNs have their own requirements [RFC1737], registration procedures, and syntax, but would fit inside the overall URI space: URNs started with "urn:" and every other kind of URI didn't.

Given the ambiguity of the URN/URL divide (URNs can be used as locators, via the Dynamic Delegation Discovery System, and URLs can be and are used as names without any "resolution" protocol in mind), maybe trying to call them different things was hopeless. URL, URI, IRI: I'm not a stickler, at least in prose. (In formal specifications, that's another story.)
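For what it's worth, both kinds still parse as URIs with a scheme up front; a small sketch with Python's urllib.parse (the ISBN URN is just an example value):

  from urllib.parse import urlsplit

  for uri in ("urn:isbn:0451450523", "http://larry.masinter.net/"):
      parts = urlsplit(uri)
      kind = "a URN (a name)" if parts.scheme == "urn" else "a locator-style URI"
      print(uri, "->", parts.scheme, "->", kind)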

Do Resources Exist?

Through all of this, the idea of a "resource" is still one of the Great Mysteries of the web. What is a resource? How can you tell whether two resources are the same resource? Are there different kinds of resources (Information Resources vs. non-Information Resources)?

This Great Mystery has eluded being pinned down since the beginning, to the point where, well, I thought it was fitting to think of Resources as Angels and URIs as Pins.

The W3C TAG (without me) spent quite a bit of time discussing this and related issues. TAG issue httpRange-14 addressed "what is the range of the HTTP function". Mathematically, a function f is a mapping from one set (the domain) into another set (the range). That is, if you write f(x) = y, x is in the domain, y is in the range, and f is the mapping. Presumably "f" stands for the act of "locating" done by the HTTP protocol, the domain (what x is chosen from) is the set of all URIs, and the range is the set(?) of all "resources".

Of course, this kind of assumes that you know what these things are. But what exactly is a "resource" and what does it mean to "identify" one?

It's interesting to look at the philosophy of language; if you go back to Frege and the problem of reference, you might ask: if you used "the morning star" and "the evening star" as two addresses for Venus, would it make sense to ask whether these "locators" identified the "same" resource? There's only one Venus, right? So you could have many Pins in a single Angel.

Related questions come up when you ask what is the Resource identified by a single URI, just thinking about the boundaries: is "http://larry.masinter.net" an identifier for me or for my web page? Does it refer to the page at the current moment, or whenever you get around to looking at it? Does it identify just the HTML of that page, the HTML and its images, that page and everything on the whole site I might reference?  The answer to most of these questions might be "yes"; that is, any kind of reference is naturally ambiguous. There are many Resources that might be identified by a single URI, and many Angels can dance on the point [!] of a single Pin.

"Resource" vs. "Representation"

Let's restrict the range of things we're talking about at the moment to URLs that start with "http:" or "ftp:" or a few others, and even ones that are typically used on the web in hyperlinked applications.

I click on a link, and my computer asks some other computer something and gets something back. What do I get? Is it "the resource" that was identified by that URL? It usually has something to do with that resource, but of course, when you press hard, there are URLs that don't return anything (you just POST to them, for example) and URLs that can return more than one thing, depending on the situation (using content negotiation or knowing something about it.)

In another bike shed moment, the decision was to try to separate out the notion of "resource" and "representation".

In a lot of situations, the difference between a "file" and "the name of the file plus the location of the file plus the contents of the file plus any file system data about the file" .... well, they're the same. So if you think of URLs like file names and clicking on a link like accessing a file, the "resource" and the "representation" don't really matter.

But for those for whom the distinction does matter, the idea of just tossing the distinction, well, it's annoying.

"Hyperlinks vs. Ontologies"

URLs are used extensively in hyperlinked applications--not just HTML, but other formats and protocols too (Flash, PDF, Silverlight, SVG, etc.). Hyperlinked applications such as web browsers use URLs to identify which network resource should be contacted during the interaction with the hypertext and how that contact is to be made.

But URLs have also been used for other purposes. For example,

  • XML applications use URIs in xmlns attributes to identify namespaces (see the sketch after this list).
  • Semantic Web applications use URIs to denote concepts. I use "denote" as a term related to, but different from, "identify".
  • Metadata applications use URLs to identify the data that the metadata is about.
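A minimal sketch of the first case (added here; the namespace URI is made up): an xmlns URI is used purely as a name for a vocabulary, and nothing is ever fetched from it.

  import xml.etree.ElementTree as ET

  doc = """<r:root xmlns:r="http://example.org/ns/report">
             <r:total>42</r:total>
           </r:root>"""
  root = ET.fromstring(doc)
  # ElementTree expands the prefix into the namespace URI itself:
  print(root.tag)                                               # {http://example.org/ns/report}root
  print(root.find("{http://example.org/ns/report}total").text)  # 42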

At one time, I was trying to argue that the notion of a "concept" doesn't really form a "set" in the mathematical sense, because there wasn't a clear idea of "equality".

(more to come.... about model theory, internationalization and the discussions around IRI, the difference between "the presentation of an IRI" and "an IRI", the process of registering new URI schemes, other systems such as XRI and DOI and Handle systems, and...

... a truly universal "tdb" scheme, which can be used to identify anything. Well, anything you can think of. Well, anything you can think of and write down what you're thinking.)

February 27, 2010

Masinter and Web Standards, Oh My


There's been so much hooey flung around that it's hard to know where to start. This is a little haphazard, so I might reorganize some of the topics later.

Not me

Just in case you weren't watching, there was an enormous flap, started by a completely astounding blog post claiming that somehow "Adobe is blocking HTML5!", with reference to some mystery, when nothing had ever been blocked. If you want to read about this and the nonsense around it, it's easy to find. I'm not sure how much of the muckraking I need to repeat here. If you don't know what I'm talking about, let me know and I can point you to it.

Secret mailing lists

In the great tradition of conspiracy theories, it was claimed that there was a secret mailing list on which objections were being discussed. What secret mailing list was everyone talking about? Who subscribes to it? What is its charter? What topics are discussed on it? Who is the chair? How do decisions get made in it? (Post answers in a blog comment! T-shirt to the first to send correct answers.)

Transparency in Standards

Is it OK with everyone that the working group charter says one thing and the truth is something else? What I was asking for was transparency. I said they were not in scope for the current charter. And now we have an updated charter, at least. The W3C HTML WG charter still needs "annotation" to make it truthful: clearer about communication (how many mailing lists and bugzilla databases do you need to watch to know what's going on?), participation (how many invited experts or members from a single company are there, in "good standing", even though they never actually participate?), and decision policy (there's a separate HTML-WG-only Decision Policy, with its own bugzilla database, in a Nomic twist).

Updating the charter to say these things were in scope was always a way of bringing them into scope. That W3C wanted to save face and call it an "annotation" and skip the normal W3C rechartering process -- well, so many exceptions were made for HTML in the first place, that's fine.

What makes a "standard" process "open" isn't just "everyone can read the mailing list". Trying to follow the HTML standard requires withstanding a "denial of service" attack: thousands of emails, messages, posts, and edits to track every single month. No one person can really follow what's going on.

Organizational scope and modularity

Well, this is a longer topic, and there are a lot of others who are blogging or discussing this, and I've blogged about it too. Short answer is: big committee, big spec, big mess. Splitting things up to minimize interactions between pieces is what distinguishes "architecture" from "hack". Anyone with serious experience building large systems should know this.

Masinter as Adobe's representative

I do work for Adobe. I get paid by them. And I do work on Standards now as my job. So you might think that, well, if I say something, Adobe is saying it?  I suppose I need to start putting more disclaimers on my email.

My work at Adobe has been mainly in the Advanced Technology Group working on a pretty wide variety of topics (content management, archiving, layout) and then in the Creative Solutions Business Unit (focus on workflow and metadata for video and timed media.) I kept up my standards affiliations because I thought it was important and I had some history, but really only started doing standards full-time in late 2008.

Other people and their affiliations

The idea that I am Adobe's Larry Masinter, but Hixie is Hixie and diveintomark (or just 'Mark' on some blogs) aren't somehow representing Google... or that Maciej isn't representing Apple? I guess I'm just jealous. Why do they get a free pass? I'm going back to using LMM@acm.org as my posting address, not to hide my affiliation but to make it clearer when I'm speaking for myself. Which leads me to try to explain why you should believe that.

My history in standards

I've worked on web standards but also the processes around developing them for long before I joined Adobe; I chaired the URI working group, and the HTTP working group for many years, and also participated significantly in the original development of HTML. Back in '93, I convinced everyone to use MIME in HTTP to allow extensibility (HTML wasn't the only document format then, and isn't now) and interoperability between the web and other Internet applications. I spent many years trying to bring stability, extensibility, architecture, internationalization, and security into the web platform. I not only worked on the web technically, but also worked on the standards processes. Chairing the URI working group and the HTTP working group while the browser vendors were trying to kill each other in the marketplace was a great career success for me: to (usually) get people to leave their partisan bickering outside the process. I was on the W3C Advisory Board and quite active in developing the W3C process document (including the little-known provision that the W3C Director does *not* have final authority; almost all Director decisions can be appealed, at least if people know about them.)

I had brought to IETF and W3C previous experience in standards work in ANSI X3J13. I had spent a lot of time in that standards effort developing the essentials of an "issue" as a resolution process (documented in a 1988 paper), and drove HTTP using it; as a process, it spread through much of IETF and W3C. Then, as now, the separation of understanding of the 'problem' from the proposed 'solution' is crucial.

I've worked on scores (more than 20 and probably more than 40) different working group charters. I think I really understand why working groups have charters, how the words in a charter are chosen carefully, and why it's important to keep things in scope.  Is that being nit-picky? Yes, but that's what I do. And, I claim, it's what makes good standards: follow process, pay attention to details.

My perspective is that what makes a good standard is very different than what makes a good implementation guide. Many otherwise good software engineers (even those with a great deal of experience) really don't see it, since their experience and intuitions have served them well in building complex software systems, or leading open source projects.

In the late 90s, I was on the tutorial circuit focusing on Web Standards, and put together several tutorials about the web, the web standards process, how to get involved, and what was and wasn't important about standards. These are linked from my home page, but I'll drag some of them out and blog about what I think is still true and what's changed in the intervening decade.

Tweets and Blogs Oh My

OK, so, there's still a lot of trash talk around the Internet. One poster said:

"Adobe's Larry Masinter should get out of the game. He did some great things in his career, but now he is just a well-paid Stanford-PhD-bearing obstructionist. Please retire and let the world move on. HTML5 is an amazing step forward as an open platform for the web, barring M$ and Adobe FUD-slinging."

Thanks for the "great things"! I try to be modest about my accomplishments, really! And I don't think credentials are worth much, although I do value knowledge and understanding, and I think there really is a "science" to "computer science" and not just a bunch of hacks. If I retired, though, I'd miss all this fun! Frankly, though, I don't think I was the one slinging any FUD at all, you know? Look at it.

More?

This is pretty haphazard, but it will do for now. I'll expand on things if you ask, honest. Comment here or by email; LMM@acm.org.

February 23, 2010

Users and Standards

In a comment on an earlier blog post, I was asked about users and the effect of different kinds of standards on users -- I had talked about the effect of standards on implementors but not about end users. The commenter made some claims about what end users wanted, with reference (presumably) to users of browsers.

This is hard, so I might take a couple of whacks at it, but here's one pass....

What Users Want


My main point is that what users want varies a lot, and that making a standard that was good for "users" involves making tradeoffs.
  • Even a single user wants different and somewhat contradictory things. A single user wants both innovation and stability -- for new features, but also for old features to continue to work somehow. Just optimizing that alone involves a lot of tradeoffs.
  • Second, there are different categories or roles of users. I'm a writer-user publishing this blog, and you (dear reader) are a reader-user. Although both roles may desire communication, sometimes the desires of the writer and the reader conflict. For example, writers often want readers to also see ads, while readers would just as soon skip the ads. Benefiting the needs and wants of both roles requires some trade-offs.
  • Third, even within a single role, there are variations. For example, among writer-users, some care about ads, some care about exact presentation, some care only about readers who have particular technology, or are running the latest software. Some care about users with accessibility needs and, unfortunately, others don't.
  • Fourth, there are other applications, software, and work-flows those users might engage in (for example, copying web pages into email, reposting them in news feeds) and the user needs and wants may vary based on interoperability needs outside of the context of the user role in this individual action.
  • Fifth, there are vendors who have different subsets of users who are important to them, based on their business model, and on what other activities those vendors also support. As the business model varies (e.g., between selling advertising linked with uniquely provided services, closed hardware platforms combined with content monetization, or tooling and infrastructure), this influences which users, workflows, roles, and associated compatibility issues are important to the vendors who are implementing the standards. And vendors don't want impossible implementation goals in order to meet their users' needs. While you might not care about the desires of the vendors, motivating them is what causes standards to be implemented.
  • Finally, it's important to think not only about needs now but also in the future. Do the standards allow sufficient evolution that new reader technology will work well with old content, while old readers continue to work with new content? Has the standard provided for a clean path for future extensibility?
What makes good standards for those users?

The standards, the specifications, and the discussions around them require a very difficult balance between all of these ultimately user-driven factors.

Given all of this, a standard for the web should try to meet all these needs, not just some. While some end users indeed want "a browser that works on any web site they pick", that would also mean supporting ActiveX, Java, QuickTime, and the five-letter F word.

Those who are designing and coding web sites have complex needs (some motivated by "money"), but the link to technical requirements is not very simple.

A publisher can't depend on anything being broadly implemented just because some spec says a browser MUST do something. A MUST in a specification isn't a law; it provides no push. The only role a MUST in a standard actually has is to provide a check-box for implementations: if the vendor of an implementation says "I implement standard X", they mean, among other things, "I follow every MUST in the spec, and I also follow every SHOULD except when I have a good reason not to, which I can explain". That's it. That's all the standard really does: give you something to measure against.


A good standard is one where MUST and SHOULD are used as sparingly as possible; only those things that are really necessary to accomplish interoperability in the simple sense that the roles (readers and writers, intermediaries, editors, transformers) can function with expected results. Not necessarily the same results, but within the range of expectation.

February 12, 2010

HTML5 "experiment" and W3C Priorities

In yet another astounding revelation, here's another post, again to the W3C secret Member-Only w3c-ac-forum cabal's list. Since this is my blog, I can repost all this historical stuff, right? Of course, this was a while ago, in the context of W3C budget discussions, and before there was even the (buggy!) HTML working group "Decision Policy".


From: Larry Masinter <masinter@adobe.com>
Date: Thu, 25 Sep 2008 15:18:29 -0700
To: "w3c-ac-forum@w3.org"

My personal observation is that the current HTML5 process combines the worst elements of the IETF process and the W3C process. From the IETF, there is the chaos of an open mailing list, wide ranging comments and free participation, but without the "adult supervision" that the IETF supplies in the form of the Internet Engineering Steering Group (IESG) and the area directors. From the W3C, there is the overhead and cost of W3C activity management and staff, and a process that assumes that voting members actually have a say in the final specification, but without any actual responsibility or response to normal W3C safeguards and cross-checks.

Recall the original history of HTML standardization: it started in the IETF, but succumbed there to an irresolvable debate between "inline font tags" vs "style sheets". The move to W3C was to provide a more structured environment, and strong enough management to resolve the ongoing positioning by Netscape and Microsoft.

At this point, the players have changed and there are more of them, and the jockeying for position is ongoing, but W3C seems to have abdicated any responsibility for ensuring that comments get answered, that questions are addressed on their technical merits, and that reasonable voices are considered and points of view addressed, even when they differ from the point of view of the current editor of the document.

The W3C process document was built to safeguard the process. If there is an 'experiment' and the experiment is going awry, then the experiment shouldn't continue in its current form.

My basic premise for W3C prioritization is that the expansion of W3C activities outside the core of the web, in an attempt to gather more members, has basically been counter-productive, because activities have a lifetime beyond the interest of the few incremental members attracted. The basic direction I would suggest is to trim staff and expenses, and refocus the remaining staff on the core fundamentals of the web. Yes, some special interests may leave if their projects are no longer being developed by W3C, but the W3C process has a heavy overhead which was designed for a much narrower scope. Even if the peripheral activities seem like they're self-funding, every new topic stretches the ability of staff, TAG, Advisory Board, and ultimately the Director to understand and manage the process.

HTML5 and W3C Focus (10/2008)


I sent the email below to the W3C members-only mailing list w3c-ac-forum@w3.org on 14 October 2008, as part of a discussion about HTML management. It might be a little dated, but maybe it will explain a bit where I'm coming from.

Note added 2/28/2010:

By "the expansion of W3C activities outside the core of the web" I did not mean style sheets, video, 2D or 3D graphics, HTTP, URLs and all the various URL schemes, or web security, which are all at the core of the web. Web applications too! But of the W3C list of activities and publications, some are pretty far away from the "core of the web" as most know "the web"; even if relevant and visionary, they're not on the trajectory of "the web" as it is being deployed.

"Leadership is the skill of getting others to follow": get out in front, but not so far in front they can't see you, and certainly not chasing after them with waggling fingers.


From: Larry Masinter
To: w3c-ac-forum@w3.org
Subject: HTML5 and W3C Focus
Date: Tue, 14 Oct 2008 23:47:15 -0700

To make some progress on the HTML5 situation, I think it's important to look at the general principles of standards-making, and not get too focused on the specifics of the individuals and their personalities. Certainly, just changing the players without addressing the process issues isn't constructive.

Good standards are hard and take time. Many people are frustrated with the standards process and would like to short-circuit the procedures, especially when the official process yields a specification which doesn't address their needs (as happened with XHTML). Many standards efforts do not succeed because the process of developing the specification ignored the needs of critical constituents in the effort to get "something" soon.

The web has many constituents, not just browser implementors. These include end users who depend on the web functioning smoothly and interoperably, but also authors, makers of authoring tools, dynamic web content generation tools, search engines, machine translation services, content management systems, advertisers, IT managers concerned with web security, publishers, government agencies wanting to publish information on the web that is universally accessible, publishers supporting multi-lingual information, gateways and translators to and from web content to print, etc., etc.

The design of HTML matters to each of those constituents, and changes that favor one category may adversely impact another; for example, the more expressive something is, the harder it is to analyze or translate.

Any single individual (editor or chair) may be committed to giving everyone a fair hearing, but it is impossible for any one person to comprehend all of the requirements and the impact that any change or addition might have to the web on all of those constituents. It is inevitable that they internalize the requirements of those they talk to the most, depending on their organization and place in it. With a complex, highly structured, and interdependent technology, it is unreasonable to expect that a small set of individuals who mainly represent a subset of one constituent ("browser implementors primarily concerned with new mobile platforms") to fully carry out the analysis and design work without substantial additional review.

Making changes or additions to HTML, and encouraging content creators to use additional features where universal acceptance is not likely is irresponsible, because it fragments the web, and encourages creation of content which only works for some people and not others. The chaos of the browser wars between Microsoft and Netscape was very damaging, and recovery is not yet complete; we should work to avoid a repeat.

Allowing a small specialized design committee to proceed without substantial oversight loses sight of all of the rest for whom the W3C is supposed to serve as guardian of the Web. Additionally, others who are not of the inner circle and who come to the process are at a disadvantage, because it's often difficult to articulate their requirements in a context that is understood by the design committee. Often proposed solutions don't entirely fit and are easy to ignore because the motivation is poorly understood.

Standards processes and standards organizations need staff and mechanisms to ensure that other voices from other constituents are heard clearly, that requirements are exposed and considered, that designs are not just reviewed after the fact, but that review comments are given sufficient discussion to expose the underlying requirements and to understand more clearly whether there is some way of meeting them, even if the offered solution isn't accepted.

In my experience, one of the most challenging tasks in standards-making was to push "proposed solutions" back to "requirements". Very often, someone would make a suggestion for some fix or patch or change they wanted to the specification. They were often unable (or, in some cases, for proprietary reasons, unwilling) to articulate the requirements or needs they were trying to satisfy by the proposed feature. Much of the difficult work was to distill these requirements and prioritize them, make them explicit, and explore alternative solutions.

Transparency in the standards process (e.g., a chart of suggestions ignored by the HTML5 editor) might help a little, but what's needed is a more thorough analysis of the requirements and an evaluation of the specification against those requirements in a neutral way. Doing so is a distinguishing characteristic that would differentiate an open, consensus-driven standards organization from a consortium or a company producing proprietary solutions.

As W3C examines its budget priorities, I believe that the only proper response to both the W3C financial crisis (as well as the dissatisfaction with prominent ongoing activities such as HTML5) is to look harder at the organization's priorities. An effective W3C is one in which W3C can provide technical management. Technical management of a standard involves more than just making sure the meetings are held on time and that there are minutes. The W3C organization needs to provide leadership that reflects its mission statement: Leading the Web to Its Full Potential. The implicit assumption is that it is the full potential for all, not a small subset. The management team, staff, TAG and other important controls of the W3C process should be sufficiently focused not only on the technical aspects of the specifications being developed, but also on the impact the specification has on the broader range of constituencies, even when those constituencies are not directly engaged in the process. The structure of W3C process and technical oversight - director, TAG, staff - have intrinsic limits as to the scope of activities that can be effectively managed.

Many of the other core specifications which underlie the web (beyond HTML) are also in need of maintenance. In the IETF, the HTTP specification is being updated to correct errors and clarify ambiguities, yet this effort has received scant W3C attention. The URI specifications have several areas which need clarification and further standardization work (the "file:" URI comes to mind), yet there is no motivation or effort to fix the potholes - everyone is focused on building bridges, with the hope that some day they might lead somewhere.

I'm calling for more focus of W3C staff on the core specifications which are of critical interest to the broadest constituencies. To do this and stay within budget implies a drastic retrench of the current W3C activities, even at the expense of driving a number of activities outside of the W3C.

January 27, 2010

Over-specification is anti-competitive

If there are 300 implementations of a specification, all different, and you take the 4 "important implementations" and write a specification that is precise enough to cover exactly what those 4 "important implementations" do, and normatively require ("MUST") that behavior, then you inevitably wind up making many of the remaining 296 implementations non-conforming, because the MUST requirements are too stringent. The process then favors the 4 "important implementations" over the 296 others, and makes it harder for any of the others to be offered as compliant implementations. This is an example of "structural bias", as I wrote about earlier.

This problem is widespread in the HTML specification, and unfortunately really difficult to eliminate.
The example where I explored this in depth was the calculation of "image.width" and "image.height", where a precise algorithm requires a state transition from [image not available, not loaded] to [image available, being loaded], and then to EITHER [image available, completely loaded] OR [image not available, load failed]. HTML5 requires that if the image is "available" (whether "being loaded" or "completely loaded"), both image.width and image.height are non-zero. This behavior, I was assured, was necessary because there was some (how much? how often? still deployed?) JavaScript code that relied on exactly this state transition behavior. Another implementation, which did not cache image width and height, or which let the cached image.width and height expire, and thus would allow [image available, being loaded] to transition back to [image not available, not loaded], would be non-compliant with the HTML spec.
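To make the states concrete, here is a minimal sketch (in Python rather than JavaScript, added for illustration; the state names paraphrase the post, not the spec's exact terms):

  from enum import Enum, auto

  class ImageState(Enum):
      UNAVAILABLE_NOT_LOADED = auto()       # image not available, not loaded
      AVAILABLE_BEING_LOADED = auto()       # image available, being loaded
      AVAILABLE_COMPLETELY_LOADED = auto()  # image available, completely loaded
      UNAVAILABLE_LOAD_FAILED = auto()      # image not available, load failed

  # Transitions permitted by the algorithm as described above:
  ALLOWED = {
      ImageState.UNAVAILABLE_NOT_LOADED: {ImageState.AVAILABLE_BEING_LOADED},
      ImageState.AVAILABLE_BEING_LOADED: {ImageState.AVAILABLE_COMPLETELY_LOADED,
                                          ImageState.UNAVAILABLE_LOAD_FAILED},
  }

  def reported_dimensions(state, width, height):
      """While 'available', width and height are required to be non-zero."""
      if state in (ImageState.AVAILABLE_BEING_LOADED,
                   ImageState.AVAILABLE_COMPLETELY_LOADED):
          assert width > 0 and height > 0
          return width, height
      return 0, 0

  # An implementation that expires its cache and lets AVAILABLE_BEING_LOADED fall
  # back to UNAVAILABLE_NOT_LOADED performs a transition missing from ALLOWED,
  # and is therefore non-conforming, even though nothing practical breaks.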
This non-compliance is not justified by significant interoperability considerations. It's hard to imagine any reasonable programmer making any such assumptions, and much more likely that the requirement is imaginary. Putting "compatibility" with a few rare occurrences of badly written software, which only works with a few browsers, as the primary objective of HTML5 makes the result an impenetrable mess.
The same can be said for most of the current HTML spec. It is overly precise, in a way that is anti-competitive, due to the process by which it was written; however, it is not in the business interests of the sponsors of the self-selected "WHATWG" steering committee to change the priorities.
Much was written about the cost of reverse engineering and how somehow this precise definition increased competition by giving other implementors precise guidelines for what to implement, but those arguments don't hold water. The cause of "reverse engineering" is and always has been the willingness of implementors to ignore specifications and add new, proprietary and novel extensions, or to modify behavior in a way that is "embrace and pollute" rather than "embrace and extend". This was the behavior during Browser Wars 1.0 and the behavior continues today.
None of the current implementations of HTML technology were written by first consulting the current specification (because the spec was written following the implementations rather than vice versa) so we have no assurance whatsoever that the current specification is useful for any implementation purpose other than proving that a competitive technology is "non-compliant."

Medley Interlisp Project, by Larry Masinter et al.

I haven't been blogging -- most of my focus has been on Medley Interlisp. Tell me what you think!