December 29, 2012

Web Standards and Security

This is the second in a series of posts about my personal priorities for the W3C Technical Architecture Group.

Computer security is a complex topic, and it is easy to get lost in detailed accounts of threats and counter-measures and hard to get to the general architectural principles. But fundamentally, computer security can be thought of as an arms race: new threats are continually invented, and defenses eventually emerge to counter them. In the battle between attackers and defenders of Internet and Web systems, my fear is that the "bad guys" (those who threaten the value of the shared Internet and Web) are winning. My reasoning is simple: as the Internet and the Web become more central to society, the value of attacking Internet infrastructure and users increases, attracting organized crime and threats of cyber-warfare.

Further, most reasoning about computer security is "anti-architectural": exploits cut across the traditional means of architecting scalable systems (modularity, layering, information hiding). In the Web, many security threats depend on unanticipated information flows across layer boundaries. Consider the recently discovered "CRIME" exploit, in which TLS-level compression leaks application-level secrets: because compression makes redundant data shorter, an attacker who can inject guesses alongside a secret such as a session cookie can watch the length of the encrypted message to tell when a guess matches. Traditional computer security analysis consists of analyzing the attack surface of a system to discover the security threats and provide for mitigation of those threats.
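
To make the layer-crossing concrete, here is a minimal Python sketch of the compression-oracle idea behind CRIME. The secret value and the setup are hypothetical, and a real attack needs further tricks (padding, repeated trials) to resolve ties in the byte-granular output; this only shows the core signal.

```python
import zlib

# Hypothetical secret that rides inside every compressed request,
# e.g. a session cookie header (illustrative value).
SECRET = b"Cookie: session=7f3a9c1e"

def observed_length(attacker_bytes: bytes) -> int:
    # The attacker controls part of the payload; the secret is compressed
    # along with it. That shared compression context is the unanticipated
    # information flow across the layer boundary.
    return len(zlib.compress(SECRET + b"\n" + attacker_bytes))

# A guess that matches the secret compresses better, because deflate
# replaces the repetition with a short back-reference. The length
# difference survives encryption, since encryption hides content,
# not size.
print(observed_length(b"Cookie: session=7f3a9c1e"))  # matching guess: shorter
print(observed_length(b"Cookie: session=00000000"))  # wrong guess: longer
```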

New Features Mean New Threats

Much of the standards community is focused on inventing and standardizing new features. Because security threats often arise from unanticipated consequences of minor details in how new features are used, security analysis cannot easily be completed early in the development process. Every feature added to the Web platform creates new ways to attack the Web. Although the computer security community does not focus on standards, we cannot continue to add new features to the Web platform without sufficient regard for security, or continue to treat security as a mere implementation issue.

Governance and Security

In many ways, every area of governance is also an area where violating the governance objectives has growing value to an attacker. Even without the addition of new features, deployment of existing features in new social and economic applications grows the attack surface. While traditional security analysis focused primarily on access control, the growth of social networking and other novel applications multiplies the ways in which the Web can be misused.

The W3C TAG and Security

The original architecture of the Web did not account for security, and the W3C TAG has so far had insufficient expertise and energy to focus on it. While individual security issues may be best addressed in working groups or outside the W3C, the architecture of the Web also needs a security architecture: one that gives a better model for trust, authentication, certificates, confidentiality, and other security properties.

3 comments:

  1. 1. Incorrect programs often can be coerced into attacker-controlled behavior
    2. Program correctness is too hard
    3. Therefore....

    (rpg may have the last laugh in the battle with New Jersey; the consequences of #1 are a catastrophe in C.)

    As you say, standardization is different from writing software. The bottleneck of reaching common agreement on protocols may make them more amenable to analysis than programs.

    I am not sure where to place the yellow out-of-scope tape for IETF or W3C activities.

    I don't think the disaster of black-and-white trust is going away until the naive ontology of X.509 is discredited. As the Semantic Web itself becomes less binary, there may be some useful convergence on trust management.

  2. On scope for IETF and W3C: I think it's best to think of the "development community" as those who are developing the Internet and the Web, and of IETF, W3C, and WHATWG as venues -- like conference halls: they provide meeting rooms and badge-checking guards, and they publish the proceedings, but the real work is done by the participants. The "scope" of each doesn't matter much as long as there's a decision! The main problem with the URL mess is that there are 3 different organizations and 7 committees, none of which takes responsibility for resolving the conflicts.

  3. Reading and understanding the specification, if available, is one of the first things any security researcher does when trying to break software. When testing an implementation of TCP, HTTP, URL, HTML, Shadow DOM, or whatever, the spec is the starting point. During security assessments, test cases can be derived directly from the spec: testing its explicit security assumptions, and also thinking creatively about abuse cases its authors may not have imagined (a small illustration follows these comments). Simple and clearly-written specs help the cause, but you're right, it's very much an arms race, especially as releases rot and implementations drift over time.

    I see security not so much as anti-architectural, but as anti-workflow. Security requirements are often seen as speed bumps and greeted with grunts and groans by all but the most pragmatic and security-conscious engineers.

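To illustrate the third commenter's point about deriving test cases from specs, here is a minimal Python sketch using urllib.parse against a few URL-parsing expectations drawn from RFC 3986. The specific inputs and checks are illustrative assumptions, not findings about any particular implementation:

```python
from urllib.parse import urlsplit

# Spec-derived test cases for a URL parser (RFC 3986 terms). A real
# assessment would enumerate many more inputs than these.

# Explicit assumption in the spec: userinfo ends at "@", the host follows.
# Abuse case: a URL crafted so a careless reader sees "trusted.example".
u = urlsplit("http://trusted.example@evil.example/login")
assert u.hostname == "evil.example"   # the real host is the attacker's

# Explicit assumption: the fragment is never sent to the server.
u = urlsplit("https://example.org/page#/../admin")
assert u.path == "/page"              # traversal text stays in the fragment

# Abuse case: percent-encoding that survives one decode but not two.
u = urlsplit("https://example.org/a%2F..%2Fb")
assert "%2F" in u.path                # parsing alone must not decode it

print("all spec-derived checks passed")
```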
