November 20, 2014

Ambiguity, Semantic web, speech acts, truth and beauty


(I think this post is pretty academic for the web dev crowd, oh well)

When talking about URLs and URNs, the semantic web, or linked data, I keep returning to one topic. Carl Hewitt gave me a paper about inconsistency, and this post is partly a reaction to it.

The traditional AI model of semantics and meaning doesn't work well for the web.
Maybe this is old hat somewhere, but if you know of any writings on this topic, send me references.

In the traditional model (from Bobrow's essay in Representation and Understanding), the real world has objects and people and places and facts; there is a Knowledge Representation Language (KRL) in which statements about the world are written, using terms that refer to the objects in the real world. Experts use their expertise to write additional statements about the world, and an "Inference Engine" processes those statements together to derive new statements of fact.

This is like classic deduction ("Socrates is a man, all men are mortal, thus Socrates is mortal") or arithmetic: compute 37+53 by adding 7+3, writing 0 and carrying 1, then adding 1+3+5 and writing 9, giving 90.

And to a first approximation, the semantic web was based on the idea of using URLs as the terms that refer to real-world objects and relationships, with RDF as an underlying KRL in which statements are triples.
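
To make that concrete, here's a toy sketch in plain Python (not any particular RDF toolkit's API; the example.org URLs are made up for illustration) of statements as triples of URLs, plus one simple-minded inference rule:

    # Statements about the world, written as (subject, predicate, object)
    # triples whose terms are URLs. The example.org URLs are hypothetical.
    EX = "http://example.org/"

    statements = {
        (EX + "socrates", EX + "isA", EX + "man"),
        (EX + "man", EX + "subclassOf", EX + "mortal"),
    }

    def infer(triples):
        """One trivial rule: if ?x isA ?c and ?c subclassOf ?d, then ?x isA ?d."""
        derived = set(triples)
        for (x, p1, c) in triples:
            for (c2, p2, d) in triples:
                if p1 == EX + "isA" and p2 == EX + "subclassOf" and c == c2:
                    derived.add((x, EX + "isA", d))
        return derived

    # Derives: socrates isA mortal
    print(infer(statements) - statements)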

Now we get to the great and horrible debate over "what is the range of the http function", which rests on so many untenable presumptions that it's almost impossible to discuss: that the question makes sense;
that you can talk about two resources being "the same"; that URLs are 'unambiguous enough', and the only remaining work is to deal with some niggling ambiguity problems, with a proposal for new HTTP result codes.

So does http://larry.masinter.net refer to me or to my web page? To my web page now or for all of history? To just the HTML of the home page, or does it include the images it loads, or maybe the whole site?

"http://larry.masinter.net" "looks" "good".

So I keep on coming back to the fundamental assumption, the model for the model.

This is coupled with my concern that we're struggling with identity (what is a customer? what is a visitor?) in every field, and with phishing and fraud on another front.

Another influence has been thinking about "speech acts". It's one thing to say "Socrates is a man" and a completely different thing to say "Wow!". "Wow!" isn't an assertion (by itself), so what is it? It's a "speech act", and you distinguish among assertions, questions, and speech acts.

A different model for models, with some different properties:

Every utterance is a speech act.

      There are no separate categories of assertion, question, and speech act. Each message passed is just a message intended to cause a reaction on receipt. And information theory applies: you can't supply more than the bits sent will carry. "http://larry.masinter.net" doesn't intrinsically carry any more than the entropy of the string can hold. You can't tell by any process whether it was intended to refer to me or to my web page.
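
One crude way to put a number on "the bits sent": here's a small Python sketch (my own, purely illustrative) that computes the character-level Shannon entropy of the string; for "http://larry.masinter.net" it comes out to a bit over 90 bits, and none of those bits say whether the string refers to me or to my page.

    import math
    from collections import Counter

    def entropy_bits(s):
        """Total Shannon entropy of the characters in s, in bits."""
        n = len(s)
        counts = Counter(s)
        per_char = -sum((c / n) * math.log2(c / n) for c in counts.values())
        return per_char * n

    print(entropy_bits("http://larry.masinter.net"))  # about 93 bits for this 25-character string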

Truth is too simple; make belief fundamental.

   So in this model, individuals do not 'know' assertions; they only 'believe' them to a degree. Some things are believed so strongly that they are treated as if they were known. Some things we don't believe at all. A speech act accomplishes its mission if the belief of the recipient changes in the way the sender wanted. Trust is a measure of influence: your speech acts that look like statements influence my beliefs about the world insofar as I trust you. The web page telling me my account balance influences my beliefs about how much I owe.
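
One naive way to make this concrete (my own formulation, not anything from the paper): treat belief as a number between 0 and 1 and trust as a weight on how far an assertion moves it.

    def receive(belief, trust, asserted):
        """Naive trust-weighted belief update: move the receiver's degree of
        belief toward what the sender asserted, in proportion to trust.
        All values are in [0, 1]; this is an illustration, not a theory."""
        return belief + trust * (asserted - belief)

    # A sender I barely trust hardly moves my belief;
    # a sender I trust completely replaces it.
    print(receive(belief=0.5, trust=0.1, asserted=1.0))  # 0.55
    print(receive(belief=0.5, trust=1.0, asserted=1.0))  # 1.0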

Changing the model helps us think about security

Part of the problem with security and authorization is that we don't have a good model for reasoning about them. Usually we divide the world into "good guys" and "bad guys": good guys make true statements ("this web page comes from bank trustme") while bad guys lie. (Let's block the bad guys.) By putting trust and ambiguity at the base of the model, and not as an after-patch, we have a much better way of describing what we're trying to accomplish.

Inference, induction, and intuition are just different kinds of processing

   In this model, you would like the influence on belief to resemble logic in the cases where there is trust and those communicating have some agreement about what the terms used refer to. But inference is subject to its own flaws ("Which Socrates? What do you mean by 'mortal'? Or by 'all men'?").

Every identifier is intrinsically ambiguous

Among all of the meanings the speaker might have meant, there is no in-band right way to disambiguate. Other context, out of band, might give the receiver of a message containing a URL more information about what the sender might have meant. But part of the inference, part of the assessment of trust, has to take into account beliefs about the sender's model, that is, about what the sender might have meant. Precision of terms is not absolute.

URNs are not 'permanent' or 'unambiguous'; they're just terms with a registrar

I've written more on this, which I'll expand elsewhere. But URNs aren't exempt from ambiguity; they're generally just URLs with a different set of organizations assigned to disambiguate them if called on.

Metadata and linked data are speech acts too.

When you look in or around an object on the net, you can often find additional data trying to tell you things about the object. This is the metadata. But it isn't "truth"; metadata is also a communication act, just one where one of the terms used is the object itself.

There's more but I think I'll stop here. What do you think?







September 14, 2014

Living Standards: "Who Needs IANA?"

I'm reading about two tussles, which seem completely disconnected, although they are about the same thing, and I'm puzzled why there isn't a connection.

This is about the IANA protocol parameter registries. Over in ianaplan@ietf.org, people are worrying about preserving the IANA function and the relationship between IETF and IANA, because it is working well and shouldn't be disturbed (by misplaced US political maneuvering claiming that the long-planned transition away from NTIA somehow means the administration is giving something away).

Meanwhile, over in www-international@w3.org, there's a discussion of the Encodings document, which is being copied from WHATWG's document of that name into a W3C Recommendation. See the thread (started by me) about the "false statement".

Living Standards don't need or want registries for most of the things the web uses registries for now: encodings, MIME types, URL schemes. A Living Standard has an exhaustive list, and if you want to add a new entry or change one, you just change the standard. Who needs IANA with its fussy separate set of rules? Who needs any registry, really?

So that's the contradiction: why doesn't the web need registries while other applications do? Or is IANAPLAN deluded?

September 9, 2014

The multipart/form-data mess

OK, this is only a tiny mess, in comparison with the URL mess,  and I have more hope for this one.

Way back when (1995), I spec'ed a way of doing "file upload" in RFC 1867. I got into this because some Xerox printing product in the 90s wanted it, and enough other folks in the web community seemed to want it too. I was happy to find something that a Xerox product actually wanted from Xerox research.

It seemed natural, if you were sending files, to use MIME's methods for doing so, in the hope that the design constraints were similar and that implementors would already be familiar with email MIME implementations. The original file upload spec was done in the IETF because, at the time, all of the web, including HTML, was being standardized in the IETF. RFC 1867 was "experimental," which in the IETF used to be one way of floating a proposal for new stuff without having to declare it ready.

After some experimentation we wanted to move the spec toward standardization. Part of the process of making the proposal standard was to modularize the specification, so that it wasn't just about uploading files in web pages. All the stuff about extending forms and the names of form fields and so forth went with HTML, while the container, the holder of "form data" -- independent of what kind of form you had or whether it had any files at all -- went into the definition of multipart/form-data (in RFC 2388). Now, I don't know if it was "theoretical purity" or just some sense of building general-purpose pieces to allow unintended mash-ups, but RFC 2388 was pretty general, and HTML 3.2 and HTML 4.0 were being developed by people who were more interested in specifying a markup language than a form-processing application. So there was a specification gap between RFC 2388 and HTML 4.0 about when and how and what browsers were supposed to do to process a form and produce multipart/form-data.
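
For the record, here's roughly what a browser sends for a form with one text field and one file field (the boundary, field names, and file contents here are made up for illustration):

    Content-Type: multipart/form-data; boundary=AaB03x

    --AaB03x
    Content-Disposition: form-data; name="user"

    larry
    --AaB03x
    Content-Disposition: form-data; name="upload"; filename="hello.txt"
    Content-Type: text/plain

    Hello, world.
    --AaB03x--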

In February of last year (2013) I got a request to find someone to update RFC 2388. After many months of trying to find another volunteer (most declined because of lack of time to deal with the politics), I went ahead and started work: update the spec, investigate what browsers did, make some known changes. See the GitHub repo for multipart/form-data and the latest Internet-Draft spec.

Now, I admit I got distracted trying to build a test framework for a "test the web forward" kind of automated test, and spent way too much time building what wound up being a fairly arcane system. But I've updated the document and recommended its "working group last call". The only problem is that I just made stuff up based on some unvalidated guesswork reported second hand ... there is no working group of people willing to do work. No browser implementor has reviewed the latest drafts, as far as I can tell.

I'm not sure what it takes to get technical reviewers who will actually read the document and compare it to one or more implementations to justify the changes in the draft.

Go to it! Review the spec! Make concrete suggestions for change, send comments, or, even better, send GitHub pull requests!



September 7, 2014

The URL mess

(updated 9/8/14)

One of the main inventions of the Web was the URL.  And I've gotten stuck trying to help fix up the standards so that they actually work.

The standards around URLs, though, have gotten into an organizational and political quandary: like many other situations, a polarized power struggle is keeping the right thing from happening.

Here's an update to an earlier description of the situation:

URLs were originally defined as ASCII only. Although it was quickly determined that it was desirable to allow non-ASCII characters, shoehorning UTF-8 into ASCII-only systems was unacceptable; at the time, Unicode was not so widely deployed, and there were other issues. The tack taken was to leave "URI" alone and define a new protocol element, "IRI"; RFC 3987 was published in 2005 (in sync with the RFC 3986 update to the URI definition). (This is a very compressed history of what really happened.)

The IRI-to-URI transformation specified in RFC 3987 had options; it wasn't a deterministic path. The URI-to-IRI transformation was also heuristic, since there was no guarantee that %xx-encoded bytes in the URI were actually meant to be the percent-hex-encoded bytes of a UTF-8 encoding of a Unicode string.
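
As a sketch of the two directions (using Python's urllib; the path "/wiki/Ré" is a made-up example): going IRI to URI, the non-ASCII character is UTF-8-encoded and then percent-encoded; going back the other way is only a heuristic, since %C3%A9 might not have been meant as UTF-8 at all.

    from urllib.parse import quote, unquote

    iri_path = "/wiki/Ré"
    uri_path = quote(iri_path, safe="/")   # IRI -> URI: '/wiki/R%C3%A9'
    print(uri_path)
    # URI -> IRI: only correct if the %xx bytes really were UTF-8
    print(unquote(uri_path))               # '/wiki/Ré'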

To address these issues and to fix URLs for HTML5, a new working group (the IRI working group) was established in the IETF in 2009. Despite years of development, the group didn't get the attention of those active in the WHATWG, W3C, or Unicode Consortium, and the IRI working group was closed in 2014, with the consolation that the documents being developed there could be updated as individual submissions or within the "applications area" working group. In particular, one of the IRI working group items was to update the "scheme guidelines and registration process", which is currently under development in the IETF's applications area.

Independently, the HTML5 specs in WHATWG/W3C defined "Web Address", in an attempt to match what some of the browsers were doing. This definition (mainly a published parsing algorithm) was moved out into a separate WHATWG document called "URL".

The world has also moved on. ICANN has approved non-ASCII top-level domains, and IDNA 2003 and IDNA 2008 didn't really address IRI encoding. The Unicode Consortium is working on UTS #46.

The big issue is to make the IRI-to-URI transformation unambiguous and stable. But I don't know what to do about non-domain-name, non-ASCII 'authority' fields. There is some evidence that some processors are %xx-hex-encoding the UTF-8 of domain names in some circumstances.
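
For example (a sketch in Python; "bücher.example" is just an illustration, and the built-in idna codec implements IDNA 2003), the same non-ASCII host can end up either Punycode-encoded or percent-encoded, and the two results are not interchangeable:

    from urllib.parse import quote

    host = "bücher.example"
    print(host.encode("idna"))   # b'xn--bcher-kva.example' -- IDNA ToASCII (Punycode)
    print(quote(host))           # 'b%C3%BCcher.example'    -- %xx-encoding the UTF-8 bytes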

There are four umbrella organizations (IETF, W3C, WHATWG, Unicode Consortium) and multiple documents, and it's unclear whether there's a trajectory to make them consistent:

IETF

Dave Thaler (mainly) has updated http://tools.ietf.org/html/draft-ietf-appsawg-uri-scheme-reg, which needs community review.

The IRI working group closed, but work can continue in the APPS area working group. The documents still needing update, abandoned for now, are three drafts (iri-3987bis, iri-comparison, iri-bidi-guidelines) originally intended to obsolete RFC 3987.

Other relevant IETF work, which I'm not as familiar with, is the IDN/IDNA work for internationalizing domain names, since the rules for canonicalization, equivalence, encoding, parsing, and display of domain names need to be compatible with the rules for doing those things to URLs that contain domain names.

In addition, there's quite a bit of activity around URNs and library identifiers in the URN working group, work that is ignored by other organizations.

W3C

The W3C has many existing Recommendations which reference the IETF URI/IRI specs in various ways (for example, XML has its own restricted/expanded allowed syntax for URL-like things). The HTML5 spec references something, the TAG seems to be involved, and so is the sysapps working group, I believe. I haven't tracked what's happened in the last few months.

WHATWG

The WHATWG spec is http://url.spec.whatwg.org/ (Anne, Leif). This fits in with the WHATWG principle of focusing on specifying what is important for browsers, so it leaves out many of the topics in the IETF specs. I don't think there is any reference to registration, and (when I last checked) it had a fixed set of "relative schemes" (ftp, file, gopher (a mistake?), http, https, ws, wss), used IDNA 2003 rather than 2008, and was (perhaps, perhaps not) at odds with the IETF specs.

Unicode consortium

Early versions of UTS #46 (and I think other documents) recommend translating with ToASCII and back using Punycode, but they weren't specific about which schemes.

Conclusion


From a user or developer point of view, it makes no sense for there to be a proliferation of definitions of URL, or a large variety of URL syntax categories. Yes, currently there is a proliferation of slightly incompatible implementations, but this shouldn't be a competitive feature. Yet the organizations involved have little incentive to incur the overhead of cooperation, especially since there is an ongoing power struggle for legitimacy and control. The same dynamic applies to the Encoding spec and, to a lesser degree, to the handling of MIME types (sniffing) and multipart/form-data.

September 6, 2014

On blogging, tweeting, facebooking, emailing

I wanted to try all the social media, just to keep an understanding of how things really work, I say.

And my curiosity satisfied, I 'get' blogging, tweeting, facebook posting, linking in, although I haven't tried pinning and instagramming. And I'm not sure what about.me is about, really, and quora sends me annoying spam which tempts me to read.

Meanwhile, I'm hardly blogging at all, even though I have lots of topics with something to say. Carol (my wife) is blogging about a trip; I supply photo captions and Internet support.

So I'm going to follow suit and try to blog daily: Blogspot for technical posts, Facebook for personal ones, tweets to announce, and a LinkedIn notice when there's more to read. I want to update my site, too; more on that later.

Medley Interlisp Project, by Larry Masinter et al.

I haven't been blogging -- most of my focus has been on Medley Interlisp. Tell me what you think!