("MUST") that behavior, then you inevitably wind up of making many of the remaining 296 implementations non-conforming, because the MUST requirements are too stringent.
The process then favors the 4 "important implementations" over the 296 other ones, and makes it harder for any of them to be offered as compliant implementations.
This is an example of "structural bias", as I wrote about earlier.
This problem is widespread in the HTML specification, and unfortunately really difficult to eliminate.
The example where I explored this in depth was the calculation of "image.width" and "image.height", where a precise algorithm requires a state transition from:
[image not available, not loaded] to [image available, being loaded]
and then to EITHER [image available, completely loaded] OR [image not available, load failed].
HTML5 requires that if the image is "available" (whether "being loaded" or "completely loaded"), both image.width and image.height be non-zero.
This behavior, I was assured, was necessary because there was some (how much? how often? still deployed?) JavaScript code that relied on exactly this state transition behavior.
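For concreteness, here is a rough sketch of the kind of script I mean (hypothetical: the element ids and the overlay logic are invented, not taken from any real site). It only works if width and height never drop back to zero once they have been reported as non-zero:

  // Hypothetical page script: size an overlay from an image's reported dimensions.
  var img = document.getElementById("banner");      // invented id
  var overlay = document.getElementById("overlay"); // invented id

  function positionOverlay() {
    // Implicit assumption: once img.width is non-zero, the image only moves
    // forward to [available, completely loaded], never back to [not available].
    if (img.width > 0) {
      overlay.style.width = img.width + "px";
      overlay.style.height = img.height + "px";
    } else {
      // Dimensions not reported yet; poll again shortly.
      setTimeout(positionOverlay, 50);
    }
  }
  positionOverlay();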
An implementation that did not cache image width and height, or that let the cached image.width and image.height expire, and thus would allow [image available, being loaded] to transition back to [image not available, not loaded], would be non-compliant with the HTML spec.
This non-compliance is not justified by significant interoperability considerations. It's hard to imagine any reasonable programmer making such assumptions; it is much more likely that the requirement is imaginary. Making "compatibility" with a few rare occurrences of badly written software, which only works in a few browsers, the primary objective of HTML5 produces an impenetrable mess.
The same can be said for most of the current HTML spec. It is overly precise, in a way that is anti-competitive, due to the process by which it was written; however, it is not in the business interests of the sponsors of the self-selected "WHATWG" steering committee to change the priorities.
Much was written about the cost of reverse engineering and how somehow this precise definition increased competition by giving other implementors precise guidelines for what to implement, but those arguments don't hold water. The cause of "reverse engineering" is and always has been the willingness of implementors to ignore specifications and add new, proprietary and novel extensions, or to modify behavior in a way that is "embrace and pollute" rather than "embrace and extend". This was the behavior during Browser Wars 1.0 and the behavior continues today.
None of the current implementations of HTML technology were written by first consulting the current specification (because the spec was written following the implementations rather than vice versa) so we have no assurance whatsoever that the current specification is useful for any implementation purpose other than proving that a competitive technology is "non-compliant."
If there are 300 implementations of a specification, all different, but you take the 4 "important implementations" and write a specification that is precise enough to cover what those 4 "important implementations" do, exactly, precisely, and normatively require
ReplyDelete("MUST") that behavior, then you inevitably wind up of making many of the remaining 296 implementations non-conforming, because the MUST requirements are too stringent.
If they're not already following that algorithm, then there's a clear interop problem. Fixing interop is more important than making people feel good with a "conforming" stamp. We authors don't care whether or not something is conforming, except as a tool to get things interoperable.
@JackalMage, your response begs the very question at issue: how do you draw the line about what to specify and what not to?
Personally I think that Firefox having Tools/Options on Windows and File/Preferences on Linux is a clear interoperability problem that HTML5 must remedy.
<<<
The cause of "reverse engineering" is and always has been the willingness of implementors to ignore specifications and add new, proprietary and novel extensions, or to modify behavior in a way that is "embrace and pollute" rather than "embrace and extend". This was the behavior during Browser Wars 1.0 and the behavior continues today. None of the current implementations of current HTML technology were ever written by first consulting the specification,
>>>
That's false. Ever since I've been involved with Gecko (ten years), every feature with a specification has been implemented by first consulting the specification, and then adding hacks to make existing Web pages display correctly if necessary.
Following specs is easier than reverse engineering. No browser implementer enjoys reverse engineering. The only reason to do it is to get existing content to work correctly. If you really wanted to "embrace and pollute", you wouldn't reverse-engineer, you'd just do whatever you wanted.
<<<
It's hard to imagine any reasonable programmer making any such assumptions
>>>
You need to accept the fact that Web content is created by tens of millions of unreasonable programmers.
These programmers don't read specs, they just write code and test it in two or three browsers (if you're lucky), and tweak it until it works in those browsers. They've been doing it this way for years, and they're not going to change.
So if you want to view their pages as intended and follow a spec at the same time, the spec needs to be consistent with the behavior of those browsers.
I don't think anyone is happy about this, but wishful thinking won't help those 296 other implementations of HTML.
Whether a particular issue like image width/height is truly necessary for interop isn't really worth speculating about without data. But in general, I think it's better to make the spec over-specific.
If an implementer thinks some part of the spec is not needed for interop, and doing it some other way is much easier, they can just ignore that part of the spec. Then if it really doesn't matter to authors, there is no problem. If it turns out to matter to authors, then the spec makes it clear what the correct behavior is and what the implementer needs to fix. On the other hand, if the implementer can implement the spec behavior just as easily as any other behavior, then we get interop, which is always good.
@robert gives reasons why some want the spec we have, but doesn't address the main point, which is that over-specification is anti-competitive.
More on other points (possibly in new blog posts).
In my last paragraph I explained why over-specification is harmless.
On the other hand, under-specification is *severely* anti-competitive because the market leaders don't have to do anything, while the small players have to do expensive reverse-engineering work, assuming their primary goal is to handle Web content (rather than be spec-compliant).
Perhaps the underlying disagreement is that you think that spec compliance is everyone's primary goal and handling Web content is an afterthought (perhaps even optional). But that's not realistic.
@robert: The anti-competitive nature of over-specification is harmless because implementors can "just ignore" any part of the spec?
I'll blog separately about the "reverse engineering" myth. Promise.
@Leigh: It does indeed raise that question, but there's a fairly reasonable guideline for where that line should be drawn: will differences in implementation cause authors (sufficiently large) problems?
The menubar of a browser pretty clearly falls below that line.
In other words, you don't need to snark to make your point.
(It's theoretically possible that menubars could be useful to be specified uniformly, but that's nothing to do with webpages or the authors of such, and thus far outside of HTML5.)
@JackalMage: Snarking is allowed, but only from people I agree with. And the selection of which "authors" are important, or what constitutes a "sufficiently large" problem for them, is basically also anti-competitive. The "major" customers of the "major browsers" are given more weight. For example, the problems of authors who previously followed accessibility guidelines aren't being given a lot of weight.
Any time judgment is needed, and some vendors have more voice than others, there is a risk of bias.
In Twitter-land, @rigow reminds me that anti-competitive might have a narrower meaning. Certainly over-specification creates a structural bias, even if the market bias doesn't reach some legal threshold.
Thanks for an interesting post. Funny you should pick the IMG width/height example, because I've seen exactly this cause problems in practice. A while ago, I was investigating why a DHTML menu on Brother Corporation's website was rendered wrong the first time Opera loaded the site. Turned out the script relied on reading dimensions from an image and on first load would usually do so before the image had finished loading. I think Opera was returning 0 in that situation.
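The script was roughly along these lines (reconstructed from memory; the element id and the buildMenu function are invented here for illustration):

  var img = document.getElementById("menu-background"); // invented id
  // Runs as soon as the script executes, so on a first, uncached load the
  // image may still be loading and width/height may be reported as 0.
  buildMenu(img.width, img.height); // invented function

  // The page could have waited for the image to finish loading, e.g.:
  if (img.complete && img.width > 0) {
    buildMenu(img.width, img.height);
  } else {
    img.onload = function () { buildMenu(img.width, img.height); };
  }
  // ...but we could only change the browser, not the page.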
This problem obviously had me tearing my hair out - because how, exactly, would we fix that? Delay running JavaScript if images were loading? Pause the script when it tried to read the dimensions of a loading image? (How messy would that be?) Emulate Internet Explorer's timing for image loading and script execution, whatever that meant?
Had HTML5 been available back then, I would have rejoiced at finding an algorithm that attempted to answer my questions.
I'm absolutely certain that many developers working on the "other" 296 implementations will feel the same way whenever they encounter an interoperability problem that HTML5 has addressed.
@Hallvord, you (and others) may want such an algorithm and might even rejoice when you find one. Sure, lots of people would like all of this to be spelled out precisely.
But this begs the question of whether what's written is the correct algorithm (because some browsers implemented something like it and it seems to fit some web sites?) or whether specifying appropriate temporal event constraints as a normative algorithm is even feasible (the web is an asynchronous environment; some web sites would stop working if they moved their images to a much slower server.)
That some other browser developers might like to find an algorithmic description in this particular situation doesn't mean either that they would like this algorithm (depending on whether it matches their implementation model), or that the spec isn't biased against the rest of the HTML developers, which was the main point of this blog post.
I'll write more about the impossibility of precise specification text (as opposed to, say, a reference implementation or virtual machine) later.
When I mentioned this post on Twitter, Mike Shaver, VP of engineering at Mozilla responded:
http://twitter.com/shaver/statuses/9011038539
"is he at all familiar with how Adobe basically got the SVG WG to stamp their implementation's behaviour, because it had shipped?"
To what extent do you think the SVG case he mentions is parallel or different to what you're objecting to with regard to HTML?
--Stephen Shankland, CNET News
http://news.cnet.com/deep-tech
"doesn't mean either that they would like this algorithm (...) or that the spec isn't biased against the rest of the HTML developers"
Almost sounds like you're saying one should never specify anything because some group of target users might not like what's specified :)
Those of us who need even certain timing-sensitive implementation details described have a legitimate need for that (namely that we need to present web content smoothly and nicely to end users). It's as legitimate as the accessibility people's need for a compulsory ALT to make authors describe their images ;)
As far as algorithms in the spec go, there should be some language somewhere saying you can implement things the way you want as long as your implementation ends up doing the same thing as the specified algorithm in the end.
Regarding the "anti-competitive" statement - sure, we'll have lots of work to do aligning to HTML5. It will actually be an expensive voyage, even for browser vendors. I can see that developers with less resources than browser vendors might find it hard to cope with the spec.
That's still a lot better than the anti-competitive effects of a spec that doesn't match reality. If we give them a spec that isn't telling them what they need to know to handle real content, the other 296 implementors will have a lot more time-consuming and expensive work to do. I know, because I've spent nearly ten years doing exactly that sort of work.
Standards need to balance carefully between backward compatibility and leadership. "Match reality" is a nifty catch-phrase, but if it's backward looking, then you will never make progress. Certainly new features don't "match reality". Standards can only make progress when the implementors (all of them, not a small subset) can reach some agreement about what they will implement, not about what was implemented. (I'm still working on a blog post about the tyranny of legacy.)
The problem with timing-sensitive information is that when the timing ambiguity is intrinsic to the architecture, no amount of retrospective spec wrangling can make it better. It's a fact of the processing model deployed. Some things are sometimes fast and sometimes slow, and we have to deal with that. If we want content that can present smoothly and nicely to end users under a variety of conditions, we might need more than JavaScript and the DOM.
Writing down the timing expectations might help a little with staying consistent with current content, and I don't have any objection to it as a suggestion, or even, if really needed, as a "SHOULD". But making it a MUST, requiring that conforming implementations follow a specific algorithm even in situations where that isn't necessary, doesn't really help anyone.
Frankly, I think once we get to actually trying to test things, we'll realize that you can't really test an algorithmic description, and that the test plan ("how do I know this is doing the right thing") will lead to a much better specification.
You might disagree about what "standards" are for, but I think I've been pretty consistent; see The Future of Web Technology and Standards (given in 2000 when I was working for AT&T).
@Stephen: Interesting question, and I don't know the history. On the other hand, the SVG specification hardly has the kind of (frequently untestable) nit-picky 'precision' found throughout the HTML specification.
ReplyDeleteThere are really two dimensions: precision, and direction. Precision varies from under-specified to over-specified. Specs that are under-specified are bad because they allow different implementations that aren't interoperable. Specs that are over-specified (which I've argued against here) needlessly block otherwise legitimate implementations that *would* interoperate reasonably (with of course a big argument about what 'reasonably' means.)
The other dimension is whether the spec is forward looking or backward looking: do you write the spec to match the implementations, or do you write a spec and try to get the implementations to match? (I agree completely you want implementations and specs to match, of course.)
I don't know the history, but it sounds like some forward-looking stuff in SVG got changed in favor of backward-looking at Adobe's implementation. Whether that's bad or good depends on where you're standing, I suppose.
But they're different dimensions, and I think they're orthogonal: you can have all combinations.
From all of this discussion I don't see much mention of the end user. What does the end user want? Most of all they want something useful. They want something that does what they want and does it anywhere. Sites that just work on any web browser any place they pick. They need a lot of MUSTs in the spec to ensure compatibility.
Then there are those in the middle, designing and coding sites. Mostly they are motivated by money. They need a simplified spec that is easy to follow and use to implement cool things with. They need a lot of MUSTs in there so they can depend on things working across multiple browsers rather than saying you have to use browser xyz to use this site. They need the specs to move forward in a reasonable time to be able to implement cool new things that always work.
Then there are the implementers. Mostly they are motivated by money. They want to be able to grab market share in the name of being competitive. When they have market share they want to add in little special things that won't work on browser xyz and ignore parts of the spec to maintain market share. They want to innovate and grab more market share. Sometimes they want to drag their feet about implementing the spec to keep market share.
Over-specification and implementers that follow it is the best thing for the end user. The process is horrible but much better than no process.
@bill d:
I'll reply in another blog post, but the short outline is:
* There are lots of kinds of users with different desires, different roles, different suppliers. You just talked about some, and even then, I don't agree with your summary of requirements.
* A "standard" is just something to measure against. "MUST" in a standard doesn't cause anything to happen, except that products that say "I implement standard X" are promising to implement every MUST and having a good reason for every SHOULD they don't implement. Understanding that standards have no other force than that leads to very different uses of MUST and SHOULD and normative than what you talked about.
@Bill D: I blogged about this more under Users and Standards.