On February 25, 1993, Marc Andreessen wrote:
I'd like to propose a new, optional HTML tag: IMG. Required argument is SRC="url".
This names a bitmap or pixmap file for the browser to attempt to pull over the network and interpret as an image, to be embedded in the text at the point of the tag's occurrence.
An example is: <IMG SRC="file://foobar.com/foo/bar/blargh.xbm">
(There is no closing tag; this is just a standalone tag.)
This tag can be embedded in an anchor like anything else; when that happens, it becomes an icon that's sensitive to activation just like a regular text anchor.
Browsers should be afforded flexibility as to which image formats they support. Xbm and Xpm are good ones to support, for example. If a browser cannot interpret a given format, it can do whatever it wants instead (X Mosaic will pop up a default bitmap as a placeholder).
This is required functionality for X Mosaic; we have this working, and we'll at least be using it internally. I'm certainly open to suggestions as to how this should be handled within HTML; if you have a better idea than what I'm presenting now, please let me know. I know this is hazy wrt image format, but I don't see an alternative than to just say ``let the browser do what it can'' and wait for the perfect solution to come along (MIME, someday, maybe).
“Mosaic” was one of the earliest web browsers. ("X Mosaic" was the version that ran on Unix systems.) When he wrote this message in early 1993, Marc Andreessen had not yet founded the company that made him famous, Mosaic Communications Corporation, nor had he started work on that company's flagship product, “Mosaic Netscape.” (You may know them better by their later names, "Netscape Corporation" and “Netscape Navigator.”)
“MIME, someday, maybe” is a reference to content negotiation, a feature of HTTP where a client (like a web browser) tells the server (like a web server) what types of resources it supports (like image/jpeg) so the server can return something in the client's preferred format. The Original HTTP as defined in 1991 (the only version that was implemented in February 1993) did not have a way for clients to tell servers what kind of images they supported, hence the design dilemma that Marc faced.
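To make the dilemma concrete, here is a toy sketch of what header-based content negotiation looks like once a client can send an Accept header. The function and its inputs are invented for illustration; real negotiation (RFC 2616, section 14.1) also honors q-values and wildcards like image/*.

```python
def negotiate(accept_header, available):
    """Return the first media type from the client's Accept header
    that the server can actually produce, or None.

    Deliberately tiny sketch: ignores q-values and wildcards.
    """
    accepted = [item.split(";")[0].strip() for item in accept_header.split(",")]
    for media_type in accepted:
        if media_type in available:
            return media_type
    return None

# A 1993-era client that could only render XBM and XPM might have sent:
preferred = negotiate("image/x-xbitmap, image/x-xpixmap",
                      {"image/jpeg", "image/x-xbitmap"})
```

With no Accept header at all, as in the 1991 protocol, the server has nothing to inspect, which is exactly the hole Marc was working around.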
A few hours later, Tony Johnson replied:
I have something very similar in Midas 2.0 (in use here at SLAC, and due for public release any week now), except that all the names are different, and it has an extra argument NAME="name". It has almost exactly the same functionality as your proposed IMG tag:
<ICON name="NoEntry" href="http://note/foo/bar/NoEntry.xbm">
The idea of the name parameter was to allow the browser to have a set of "built in" images. If the name matches a "built in" image it would use that instead of having to go out and fetch the image. The name could also act as a hint for "line mode" browsers as to what kind of a symbol to put in place of the image.
I don't much care about the parameter or tag names, but it would be sensible if we used the same things. I don't much care for abbreviations, ie why not SOURCE=. I somewhat prefer ICON since it imlies that the IMAGE should be smallish, but maybe ICON is an overloaded word?
Midas was another early web browser, a contemporary of X Mosaic. It was cross-platform; it ran on both Unix and VMS. “SLAC” refers to the Stanford Linear Accelerator Center (now the SLAC National Accelerator Laboratory). SLAC hosted the first web server in the United States (in fact the first web server outside Europe). When Tony wrote this message, SLAC was an old-timer on the WWW, having hosted five pages on their web server for a whopping 441 days.
While we are on the subject of new tags, I have another, somewhat similar tag, which I would like to support in Midas 2.0. In principle it is: <INCLUDE HREF="...">
The intention here would be that the second document is to be included into the first document at the place where the tag occured. In principle the referenced document could be anything, but the main purpose was to allow images (in this case arbitrary sized) to be embedded into documents. Again the intention would be that when HTTP2 comes along the format of the included document would be up for separate negotiation.
“HTTP2” is a reference to Basic HTTP as defined in 1992. At this point in early 1993, it was still largely unimplemented. The draft known as “HTTP2” evolved and was eventually standardized as “HTTP 1.0” (albeit not for another three years). HTTP 1.0 did include request headers for content negotiation, a.k.a. “MIME, someday, maybe.”
An alternative I was considering was:
<A HREF="..." INCLUDE>See photo</A>
I don't much like adding more functionality to the <A> tag, but the idea here is to maintain compatibility with browsers that can not honour the INCLUDE parameter. The intention is that browsers which do understand INCLUDE replace the anchor text (in this case "See photo") with the included document (picture), while older or dumber browsers ignore the INCLUDE parameter completely.
This proposal was never implemented, although the idea of text-if-an-image-is-missing is an important accessibility technique which was missing from Marc’s initial <IMG> proposal. Many years later, this feature was bolted on as the <img alt> attribute, which Netscape promptly broke by erroneously treating it as a tooltip.
A few hours after that, Tim Berners-Lee responded:
I had imagined that figues would be reprented as
<a name=fig1 href="fghjkdfghj" REL="EMBED, PRESENT">Figure </a>
where the relationship values mean:
EMBED: Embed this here when presenting it
PRESENT: Present this whenever the source document is presented
Note that you can have various combinations of these, and if the browser doesn't support either one, it doesn't break.
[I] see that using this as a method for selectable icons means nesting anchors. Hmmm. But I hadn't wanted a special tag.
This proposal was never implemented, but the rel attribute is still around.
Jim Davis added:
It would be nice if there was a way to specify the content type, e.g.
<IMG HREF="http://nsa.gov/pub/sounds/gorby.au" CONTENT-TYPE=audio/basic>
But I am completely willing to live with the requirement that I specify the content type by file extension.
This proposal was never implemented, but Netscape did later add arbitrary embedding of media objects with the <embed> element.
Jay C. Weber said:
While images are at the top of my list of desired medium types in a WWW browser, I don't think we should add idiosyncratic hooks for media one at a time. Whatever happened to the enthusiasm for using the MIME typing mechanism?
Marc Andreessen replied:
This isn't a substitute for the upcoming use of MIME as a standard document mechanism; this provides a necessary and simple implementation of functionality that's needed independently from MIME.
Jay Weber responded:
Let's temporarily forget about MIME, if it clouds the issue. My objection was to the discussion of "how are we going to support embedded images" rather than "how are we going to support embedded objects in various media".
Otherwise, next week someone is going to suggest 'lets put in a new tag <AUD SRC="file://foobar.com/foo/bar/blargh.snd">' for audio.
There shouldn't be much cost in going with something that generalizes.
Responding to Jay’s original message, Dave Raggett said:
True indeed! I want to consider a whole range of possible image/line art types, along with the possibility of format negotiation. Tim's note on supporting clickable areas within images is also important.
Later in 1993, Dave Raggett proposed HTML+ as an evolution of the HTML standard. The proposal was never implemented, and it was superseded by HTML 2.0. HTML 2.0 was a “retro-spec,” which means it formalized features already in common use. “This specification brings together, clarifies, and formalizes a set of features that roughly corresponds to the capabilities of HTML in common use prior to June 1994.”
Dave later wrote HTML 3.0, based on his earlier HTML+ draft. HTML 3.0 was also never implemented (outside of the W3C’s own reference implementation, Arena), and it was superseded by HTML 3.2. HTML 3.2 was also a “retro-spec” — “HTML 3.2 adds widely deployed features such as tables, applets and text flow around images, while providing full backwards compatibility with the existing standard HTML 2.0.”
Getting back to 1993, Marc replied to Dave:
Actually, maybe we should think about a general-purpose procedural graphics language within which we can embed arbitrary hyperlinks attached to icons, images, or text, or anything. Has anyone else seen Intermedia's capabilities wrt this?
The idea of a “general-purpose procedural graphics language” did eventually catch on. Modern browsers support both SVG (declarative markup with embedded scripting) and <canvas> (a procedural immediate-mode graphics API), although the latter started as a proprietary extension before being “retro-specced” by the WHATWG.
Other systems to look at which have this (fairly valuable) notion are Andrew and Slate. Andrew is built with _insets_, each of which has some interesting type, such as text, bitmap, drawing, animation, message, spreadsheet, etc. The notion of arbitrary recursive embedding is present, so that an inset of any kind can be embedded in any other kind which supports embedding. For example, an inset can be embedded at any point in the text of the text widget, or in any rectangular area in the drawing widget, or in any cell of the spreadsheet.
Meanwhile, Thomas Fine had a different idea:
Here's my opinion. The best way to do images in WWW is by using MIME. I'm sure postscript is already a supported subtype in MIME, and it deals very nicely with mixing text and graphics.
But it isn't clickable, you say? Yes your right. I suspect there is already an answer to this in display postscript. Even if there isn't the addition to standard postscript is trivial. Define an anchor command which specifies the URL and uses the current path as a closed region for the button. Since postscript deals so well with paths, this makes arbitrary button shapes trivial.
Display PostScript was an on-screen rendering technology co-developed by Adobe and NeXT.
This proposal was never implemented, but the idea that the best way to fix HTML is to replace it with something else altogether still pops up from time to time.
Tim Berners-Lee replied:
HTTP2 allows a document to contain any type which the user has said he can handle, not just registered MIME types. So one can experiment. Yes I think there is a case for postscript with hypertext. I don't know whether display postcript has enough. I know Adobe are trying to establish their own postscript-based "PDF" which will have links, and be readable by their proprietory brand of viewers.
I thought that a generic overlaying language for anchors (Hytime based?) would allow the hypertext and the graphics/video standards to evolve separately, which would help both.
INCLUDE and let it refer to an arbitrary document type. Or
INCLUDE sounds like a cpp include which people will expect to provide SGML source code to be parsed inline -- not what was intended.
HyTime was an early, SGML-based hypertext document system. It loomed large in many early discussions of HTML, and later XML.
Tim’s proposal for an <INCLUDE> tag was never implemented, although you can see echoes of it in <embed> and the <object> element.
Finally, on March 12, 1993, Marc Andreessen revisited the thread:
Back to the inlined image thread again -- I'm getting close to releasing Mosaic v0.10, which will support inlined GIF and XBM images/bitmaps, as mentioned previously. ...
We're not prepared to support EMBED at this point. ... So we're probably going to go with IMG (instead of ICON, since not all inlined images can be meaningfully called icons). For the time being, inlined images won't be explicitly content-type'd; down the road, we plan to support that (along with the general adaptation of MIME). Actually, the image reading routines we're currently using figure out the image format on the fly, so the filename extension won't even be significant.
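Figuring out the image format on the fly means magic-byte sniffing: reading the first few bytes of the file instead of trusting its name. The signatures below are real, but the function itself is my own illustrative sketch, not Mosaic's actual code (and PNG, included for completeness, didn't exist yet in 1993).

```python
def sniff_image_format(data: bytes):
    """Identify an image format from its leading bytes, ignoring the
    filename extension entirely."""
    if data[:6] in (b"GIF87a", b"GIF89a"):
        return "gif"
    if data[:8] == b"\x89PNG\r\n\x1a\n":
        return "png"
    if data[:2] == b"\xff\xd8":
        return "jpeg"
    # XBM files are literally C source code, starting with #define lines.
    if data.lstrip().startswith(b"#define"):
        return "xbm"
    return None
```

Sixteen years on, this same trick (generalized into browser content sniffing) is the source of the security headaches mentioned at the end of this article.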
I don’t really know why I wrote this. It wasn’t what I set out to write. That happens. But I am extraordinarily fascinated with all aspects of this almost-17-year-old conversation. Consider:
But none of this answers the original question: why do we have an <img> element? Why not an <icon> element? Or an <include> element? Why not a hyperlink with an include attribute, or some combination of rel values? Why an <img> element? Quite simply, because Marc Andreessen shipped one, and shipping code wins.
That’s not to say that all shipping code wins; after all, Andrew and Intermedia and HyTime shipped code too. Code is necessary but not sufficient for success. And I certainly don’t mean to say that shipping code before a standard will produce the best solution. Marc’s <img> element didn’t mandate a common graphics format; it didn’t define how text flowed around it; it didn’t support text alternatives or fallback content for older browsers. And 16, almost 17 years later, we’re still struggling with content sniffing, and it’s still a source of crazy security vulnerabilities. And you can trace that all the way back, 17 years, through the Great Browser Wars, all the way back to February 25, 1993, when Marc Andreessen offhandedly remarked, “MIME, someday, maybe,” and then shipped his code anyway.
The ones that win are the ones that ship.
Again, this is more for my benefit than for yours. If I don't write this down, I'll forget it.
Dive Into Python 3 was commissioned in January 2009 by Apress, who published the original Dive Into Python in 2004. Upon agreeing to contract terms, I registered a ten-year lease on diveintopython3.org and immediately published a draft table of contents.
The original DiP was written in DocBook XML. As I've mentioned before, I chose DocBook XML because I wanted to learn XML and XSL, and DocBook seemed to be Just The Thing for technical documentation. There was also a bit of self-grandeur involved. I was writing a book For The Ages, so it was important that it be in a Format Of Forever. And in the short term, I could transform The Format Of Forever into useful (but lowly) Output Formats, so I could do unimportant things like publish it online.
For The Ages turned out to be about 10 years. The Format Of Forever is still going strong, but Python itself changed so quickly that it didn't matter.
Oh, and there was one other little thing that happened between 2000 and 2009: search stopped sucking and took over the web. Kids today may not remember, but it used to be hard to find stuff on the web. Once you found it, you wanted to download it so you could read it offline.
Remember being "offline"?
Anyway, I now realize that there were some hidden assumptions behind my design decisions in 2000. Some of those assumptions turned out to be wrong, or at least not-completely-right. Sure, a lot of people downloaded DiP, but it still pales in comparison to the number of visitors I got from search traffic. In 2000, I fretted about my "home page" and my "navigation aids." Nobody cares about any of that anymore, and I have nine years of access logs to prove it.
So, I am writing DiP3 in pure HTML and, modulo some lossless minimizations, publishing exactly what I write. This makes the proofreading feedback cycle faster -- instead of "building" the HTML output, I just hit Ctrl-R. I expected it to make some things more complicated, but they turn out not to matter very much.
Furthermore, I am no longer under the illusion that this book will be useful forever. Python will either continue to evolve or it will die; either way, static documentation has a shelf life. Today's cutting edge code is tomorrow's mainstream code is next year's legacy code. DiP's shelf life was about 10 years. I am supremely confident that the HTML I'm writing today will still be readable 10 years from now, and after that it won't matter because I'll have to rewrite the whole damn book anyway.
See you in 2020 for Dive Into Python 4!
Section 12.4.1 of the HTML specification defines how to find the base URI of an HTML document.
I feel oddly compelled to explain this to you. It is insanely complicated.
If the HEAD element of the HTML document contains a BASE element, the base URI is given in its href attribute, which must be an absolute URI.
Section 14.14 of RFC 2616 defines the Content-Location: HTTP header. If an HTML document is served without a BASE element but with a Content-Location: HTTP header, then that is the base URI (test page). Just to make this more interesting, Content-Location: may itself be a relative URI, in which case it is resolved according to RFC 2396, with the URI of the HTML document as its base URI. The resolved URI then serves as the base URI for other relative URIs within the HTML document.
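This two-step resolution can be sketched with Python's urllib (which implements RFC 2396's successor, RFC 3986, so a few edge cases may resolve differently); the URLs here are made up for illustration.

```python
from urllib.parse import urljoin

document_uri = "http://example.com/articles/deep/page.html"  # hypothetical
content_location = "../canonical/page.html"                  # relative header value

# Step 1: the relative Content-Location: resolves against the document URI.
base_uri = urljoin(document_uri, content_location)

# Step 2: relative URIs inside the document then resolve against that base.
stylesheet = urljoin(base_uri, "style/main.css")
```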
Neither IE 6 SP1 nor Mozilla 1.6 Beta support the Content-Location: header, mainly because Microsoft web servers are so buggy that respecting the Content-Location: header would cause about 10% of IIS-powered sites to break horribly.
Finally, Mozilla does support the Content-Base: header, which appeared in an early revision of HTTP 1.1 (RFC 2068) but was dropped from the final HTTP 1.1 specification due to the lack of interoperable implementations. The IETF requires at least two interoperable implementations before a draft can become a standard. Interoperating only with yourself is just a standards-compliant form of masturbation.
The following HTML attributes may be relative URIs:
Section 12.4 of the HTML specification states that “When present, the BASE element must appear in the HEAD section of an HTML document, before any element that refers to an external source.” What if you have, say, a LINK element with a relative URI before the BASE element? In this situation, Mozilla resolves the URI relative to the document URI (there was no Content-Location: HTTP header), but once it sees the BASE element, it resolves all further URIs relative to the URI given in the href of the BASE element. I am not entirely convinced that this behavior is correct, but it seems reasonable, and I have codified this interpretation in my autodiscovery tests.
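Mozilla's before-and-after behavior can be sketched in a few lines; the document URI and BASE href below are hypothetical.

```python
from urllib.parse import urljoin

document_uri = "http://example.com/feeds/index.html"  # hypothetical document
base_href = "http://cdn.example.com/static/"          # hypothetical BASE href

# A LINK that appears before the BASE element resolves against the document URI...
before_base = urljoin(document_uri, "rss.xml")

# ...while the same relative URI after the BASE element resolves against its href.
after_base = urljoin(base_href, "rss.xml")
```

The same relative URI thus resolves to two different absolute URIs depending only on where it sits relative to BASE, which is why the behavior feels reasonable but not obviously correct.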
While recently discussing the XHTML Friends Network with Tantek, I learned that the profile attribute of the HEAD element may actually contain multiple URIs, separated by spaces. Section 7.4.1 of the HTML specification confirms this. Presumably all of the profile URIs should be considered potentially relative, and resolved according to the Content-Location: HTTP header, or failing that, the document URI. They can't be resolved relative to the href attribute of the BASE element, since by definition, the profile attribute of the HEAD element always precedes the BASE element within the HEAD element. Since I know of no software that does anything at all with the profile attribute, I can't test how real-world implementations actually deal with this.
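In the absence of real-world implementations, the presumed resolution rule is at least easy to state in code. A sketch, assuming the document URI serves as the base (no Content-Location: header) and a made-up relative profile URI alongside the real XFN one:

```python
from urllib.parse import urljoin

document_uri = "http://example.com/blog/post.html"   # hypothetical
profile_attr = "http://gmpg.org/xfn/11 vocab/hcard"  # space-separated, one relative

# Split on whitespace, then resolve each URI against the base.
profiles = [urljoin(document_uri, uri) for uri in profile_attr.split()]
```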
Stuff like this drives me nuts. People ask me why my markup category is named “those that tremble as if they were mad.” This is why.
Given enough good code, I should always be able to Do The Right Thing with your markup.
First of all, I apologize to those of you who subscribe to my RSS feed and use web-based or browser-based news aggregators. If you checked your news page in the last 12 hours, you no doubt saw my little prank: an entire screen full of platypuses. (Please, let's not turn this into a discussion of proper pluralization. Try to stay with me.) They're gone from my feed now, although depending on your software you may need to delete the post in question from your local news page as well.
Now that the contrition is out of the way, let's face facts: if this prank affected you, your software is dangerously broken. It accepts arbitrary HTML from potentially 100s of sources and blindly republishes it all on a single page on your own web server (or desktop web server). This is fundamentally dangerous.
Now, the current situation is not entirely your software's fault. RSS, by design, is difficult to consume safely. The RSS specification allows for description elements to contain arbitrary entity-encoded HTML. While this is great for RSS publishers (who can just throw stuff together and make an RSS feed), it makes writing a safe and effective RSS consumer application exceedingly difficult. And now that RSS is moving into the mainstream, the design decisions that got it there are becoming more and more of a problem.
HTML is nasty. Arbitrary HTML can carry nasty payloads: scripts, ActiveX objects, remote image web bugs, and arbitrary CSS styles that (as you saw with my platypus prank) can take over the entire screen. Browsers protect against the worst of these payloads by having different rules for different zones. For example, pages in the general Internet are marked untrusted and may not have privileges to run ActiveX objects, but pages on your own machine or within your own intranet can. Unfortunately, the practice of republishing remote HTML locally eliminates even this minimal safeguard.
Still, dealing with arbitrary HTML is not impossible. Web-based mail systems like Hotmail and Yahoo allow users to send and receive HTML mail, and they take great pains to display it safely. It's a lot of work, and there have been several high-profile failures over the years, but they're coping.
Let me be clear: by design, RSS forces every single consumer application to cope with this problem.
So, to anyone who wants to write a safe RSS aggregator (or who has already written an unsafe one), I offer this advice:
Strip script tags. This almost goes without saying. Want to see the prank I didn't pull? More seriously, script tags can be used by unscrupulous publishers to insert pop-up ads onto your news page. Think it won't happen? Some larger commercial publishers are already inserting text ads and banner ads into their feeds.
Strip meta tags, which can be used to hijack a page and redirect it to a remote URL.
Strip link tags, which can be used to import additional style definitions.
Strip style tags, for the same reason.
Strip style attributes from every single remaining tag. My platypus prank was based entirely on a single rogue style attribute.
Alternatively, you can simply strip all but a known subset of tags. Many comment systems work this way. You'll still need to strip style attributes though, even from the known good tags.
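The whitelist approach can be sketched with the standard library's HTMLParser. The tag and attribute lists below are arbitrary examples, and a production sanitizer needs considerably more care (attribute-value escaping, URL scheme checks on href, entity handling), so treat this as an illustration of the idea, not a drop-in defense.

```python
from html.parser import HTMLParser

ALLOWED_TAGS = {"p", "a", "b", "i", "em", "strong", "ul", "ol", "li", "blockquote"}
ALLOWED_ATTRS = {"href", "title"}  # deliberately no style, no on* handlers

class Sanitizer(HTMLParser):
    """Drop every tag and attribute not on the whitelist, and drop the
    contents of script/style elements entirely."""
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self.skip = 0  # depth inside script/style elements

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1
        elif tag in ALLOWED_TAGS:
            safe = " ".join(f'{name}="{value}"' for name, value in attrs
                            if name in ALLOWED_ATTRS and value is not None)
            self.out.append(f"<{tag} {safe}>" if safe else f"<{tag}>")

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = max(0, self.skip - 1)
        elif tag in ALLOWED_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.skip:
            self.out.append(data)

def sanitize(html):
    parser = Sanitizer()
    parser.feed(html)
    parser.close()
    return "".join(parser.out)
```

Note that merely deleting script tags is not enough: their text content must be dropped too, which is what the skip counter does.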
There is one scenario I see play out again and again on Web Design-L, css-discuss, and countless other forums. Newbie Designer posts a link to a test page, asking for help because it doesn't behave as expected in this or that browser. Guru Designer replies, telling Newbie Designer that their page doesn't validate, and that they should go validate their page before asking such questions. There is no further discussion; no further replies are posted; no one else is willing to help.
Why does this happen? Why won't we help you?
The short, smart-alec, Zen-like answer is that we are helping you, you just don't realize it yet. The full answer goes like this:
Validation may reveal your problem. Many cases of “it works in one browser but not another” are caused by silly author errors. Typos like missing attribute values can cause browsers to crash; validation catches these typos. Simple errors like missing end tags (such as </div>) or missing elements (such as <tr>) can cause different problems in different browsers. Small mistakes like this are difficult for you to spot in your own code, but the validator pinpoints them immediately.
I am not claiming that your page, once validated, will automatically render flawlessly in every browser; it may not. I am also not claiming that there aren't talented designers who can create old-style Tag Soup pages that do work flawlessly in every browser; there certainly are. But the validator is an automated tool that can highlight small but important errors that are difficult to track down by hand. If you create valid markup most of the time, you can take advantage of this automation to catch your occasional mistakes. But if your markup is nowhere near valid, you'll be flying blind when something goes wrong. The validator will spit out dozens or even hundreds of errors on your page, and finding the one that is actually causing your problem will be like finding a needle in a haystack.
Validation may solve your problem. HTML is not “anything goes”; it has rules about how elements can be used and combined. Browsers are written to understand these rules and render your page accordingly. Browsers also have special-case logic to deal with various types of invalid markup, including vendor-specific tags and attributes, illegal combinations of block-level and inline elements, and overlapping elements. Different browsers create different internal representations of this so-called Tag Soup markup, which can lead to unexpectedly varying results when they go to apply styles or execute script on your page.
Ian Hickson illustrates these differences. Dave Hyatt, one of the developers of Apple's Safari browser, talks about the residual style problem caused by improperly nested elements. As Dave's example shows, this doesn't just affect CSS-based pages; it affects pure-HTML pages too.
I am not claiming that validation is a magic bullet that will automatically solve all your web design problems; it is not. Designers still cope with lots of cross-browser and cross-platform compatibility problems with valid markup. But validating your pages eliminates a vast array of potential incompatibilities, leaving a manageable subset of actual incompatibilities to work with. Which leads me to my next point...
Valid markup is hard enough to debug already. Debugging Tag Soup is an order of magnitude harder. It's also not terribly rewarding. Some of us are good at it; many of us have been around long enough to have dealt with it at one point or another. But it's not where we like to focus our energies. There's nothing aesthetically pleasing or intellectually satisfying about helping a hack-and-slash coder tweak their shitty markup and bludgeon a few browsers into submission. We know it'll only break again next week; we've been there, we know what happens next week. We know you're just coding on borrowed time.
And did I mention that debugging this stuff is hard? There's a lot to keep track of, even when you do everything right. There are bugs in Windows browsers, bugs in Mac browsers, bugs in browsers old and new, bugs in Opera, bugs in Netscape, bugs in MSIE too. Dr. Seuss could make great poetry out of all the bugs we cope with in our valid, standards-compliant pages. And on top of that, you want us to keep track of the near-infinite variety of bugs that could be triggered by your Tag Soup? We don't have that kind of time, and the time we do have is better spent elsewhere. Which leads me to my final point...
Validation is an indicator of cluefulness. There are a lot of people who need our help, and there are relatively few of us who have the combination of time, expertise, and inclination to debug the work of strangers for free. It's those pesky power laws at work again: we simply can't help everyone who asks. Like a Human Resources department that gets 500 resumes for every open position, we have to filter on something, and validation has proven to be a good filter. It is possible -- in fact, it is almost inevitable -- that this will keep us from interacting with otherwise talented designers who would have turned out to be great friends or professional associates later in life, but that's the way it goes. It might also be the case that, out of 500 applicants, the perfect candidate for that open position is the one with 5 spelling mistakes on their resume. But you can't interview everyone. You have to filter on something.
Why is validation a good filter? Because nobody makes valid pages by accident. If you come to us and say, “Hey, I have this page, it's valid XHTML and CSS, and I'm having this specific problem,” well OK, you've obviously put some work into it, you've met us more than halfway, let's see what we can do. But if you come to us and say, “Hey, I slapped together this page and it works in X browser but now my client says it doesn't work in W, Y, and Z, they must be buggy pieces of shit,” well... you catch more flies with honey, and you get more help with valid markup.