On June 3, 2009, Janina Sajka, Chair of the Protocols and Formats Working Group of the W3C Web Accessibility Initiative (WAI), wrote:

The following consensus was reached by Protocols and Formats Working Group during its teleconference of Wednesday, 3 June 2009 ...

We note that summary is often used as a technique for accessibility support where governmental regulations require governmental web sites to be accessible. ... [link to Guide to the Section 508 Standards for Electronic and Information Technology, Subpart B - Technical Standards, Web-based Intranet and Internet Information and Applications (1194.22), Section (g) "Data Table"]

If summary is removed [from HTML 5], U.S. Government web sites might find it more difficult to conform to HTML 5. We further note that Section 508 regulations apply to U.S. state and local governments, and that similar accessibility requirements are emerging in Canada, the U.K., the E.U., Australia, and elsewhere.

Guide to the Section 508 Standards for Electronic and Information Technology, Subpart B - Technical Standards, Web-based Intranet and Internet Information and Applications (1194.22), Section (g) "Data Table":

Web developers who are interested in summarizing their tables should consider placing their descriptions either adjacent to their tables or in the body of the table, using such tags as the CAPTION tag.

On June 4, 2009, Ian Hickson, editor of the HTML 5 specification, replied:

As far as I can tell this concern is unfounded; the <caption> element is in fact encouraged by the very same government (as quoted above) to be used exactly as HTML5 recommends in a manner consistent with the goals of the summary="" attribute.

HTML 5, Editor's Draft as of this writing:

For tables that consist of more than just a grid of cells with headers in the first row and headers in the first column, and for any table in general where the reader might have difficulty understanding the content, authors should include explanatory information introducing the table. This information is useful for all users, but is especially useful for users who cannot see the table, e.g. users of screen readers.

Such explanatory information should introduce the purpose of the table, outline its basic cell structure, highlight any trends or patterns, and generally teach the user how to use the table.

There are a variety of ways to include this information, such as:

  • In prose, surrounding the table
  • In the table's caption
  • In the table's caption, in a details element
  • Next to the table, in the same figure
  • Next to the table, in a figure's legend

Authors may also use other techniques, or combinations of the above techniques, as appropriate.

If a table element has a summary attribute, the user agent may report the contents of that attribute to the user.

On July 7, 2009, Janina Sajka, Chair of the Protocols and Formats Working Group of the W3C Web Accessibility Initiative (WAI), wrote:

PF responded on these questions formally. We would appreciate the basic human courtesy of acknowledgment. If you don't like what we said, please speak to that. But kindly don't simply ignore us. [link to the June 3, 2009 message announcing the consensus of the Protocols and Formats Working Group]

On July 7, 2009, Ian Hickson, editor of the HTML 5 specification, replied:

That e-mail received a reply some weeks ago: [link to Ian Hickson's message of June 4, 2009]

Is there a formal reply to that e-mail?

On July 7, 2009, Janina Sajka, Chair of the Protocols and Formats Working Group of the W3C Web Accessibility Initiative (WAI), replied:

No, we don't make formal replies to individuals.

On August 2, 2009, John Foliot wrote:

I maintain that it is not the role of the HTML WG, and the editor in particular, to be offering this guidance, especially when it contradicts the consensus position of the W3C Group chartered to speak on web accessibility issues. Simply put, you are messing in somebody else's yard, and it is against W3C process to be doing so. If HTML WG feel that they have compelling evidence and data that suggests that the WCAG guidance needs to be reviewed and revised, there is a process for that.

On August 3, 2009, Ian Hickson responded to John Foliot:

I didn't want to be the one to have to explain this to you, but nobody else is doing so, so here goes: The W3C process doesn't actually require that working groups agree, or not contradict each other. The WAI's mission is not binding on other working groups.

On August 3, 2009, at approximately 7:51 pm, Roy Fielding wrote:

I have no opinion on the value of @summary other than noting the likelihood that its support will be required for some FIPS or government statute for accessibility, and therefore deprecating it within HTML5 will just make HTML5 look stupid.

Guide to the Section 508 Standards for Electronic and Information Technology, Subpart B - Technical Standards, Web-based Intranet and Internet Information and Applications (1194.22), Section (g) "Data Table":

Web developers who are interested in summarizing their tables should consider placing their descriptions either adjacent to their tables or in the body of the table, using such tags as the CAPTION tag.

On August 3, 2009, at approximately 7:59 pm, Roy Fielding wrote:

[A]uthors are clearly not served by a specification that tells them caption and summary are the same and all such information must be relegated to caption. As an implementor of content management systems used by government agencies in several different countries, I will not conform to any HTML specification that deprecates or fails to define @summary.

Guide to the Section 508 Standards for Electronic and Information Technology, Subpart B - Technical Standards, Web-based Intranet and Internet Information and Applications (1194.22), Section (g) "Data Table":

Web developers who are interested in summarizing their tables should consider placing their descriptions either adjacent to their tables or in the body of the table, using such tags as the CAPTION tag.

On May 23, 2006, Joe Clark wrote:

The Web Content Accessibility Guidelines Working Group [part of the W3C Web Accessibility Initiative (WAI)] is the worst committee, group, company, or organization I've ever worked with. Several of my friends and I were variously ignored; threatened with ejection from the group or actually ejected; and actively harassed.

In response to Joe Clark's article, John Foliot wrote:

I can attest to knowing a regular participant to the [Web Content Accessibility Guidelines] WG discussion list who has been shut down and ignored on more than one occasion, and I personally have been dismissed by other working groups within the W3C. ... So the behavior and treatment described by Joe is not unknown any time you strongly voice an opinion counter to the internal W3C herd.

On August 2, 2009, John Foliot wrote:

I have submitted an alternative [HTML 5] Draft document for consideration; one which I believe rightly returns the role of author guidance for creating accessible content to the W3C WAI - the group officially chartered by the W3C to speak to these matters. It is a question of respect.

On August 4, 2009, Ian Hickson asked:

Are you saying that for you, it is more important that HTML5 not contradict other W3C specifications than that HTML5 address accessibility problems with the HTML language?

On August 4, 2009, John Foliot responded to Ian Hickson:

You need to stop contradicting WAI, even if you have proof that WAI might need to update their guidance. ...

Contradictory information *harms* the overall outreach aspect of teaching people how to create accessible web content, and I speak from the position of one who actually does that for a living, and have been doing so for close to a decade. THE MESSAGE WE SEND TO THE WORLD'S WEB DEVELOPERS MUST BE CONSISTENT!

Guide to the Section 508 Standards for Electronic and Information Technology, Subpart B - Technical Standards, Web-based Intranet and Internet Information and Applications (1194.22), Section (g) "Data Table":

Web developers who are interested in summarizing their tables should consider placing their descriptions either adjacent to their tables or in the body of the table, using such tags as the CAPTION tag.

On August 3, 2009, Roy Fielding wrote:

John [Foliot]'s point is that the W3C has a group specifically tasked to make accessibility recommendations.

On August 3, 2009, David Baron responded to Roy Fielding:

Has that group weighed in in this debate, in response to the evidence presented? Or is it just that an out-of-date (i.e., not updated in response to newer evidence) recommendation of that group is being cited?

On August 3, 2009, Roy Fielding responded to David Baron:

I don't know -- it isn't a relevant question. The group exists [link to W3C Web Accessibility Initiative (WAI)] and seems to be open for your input.


Sam Ruby:

This blog entry has an [inline SVG] image with a text alternative. Who does it benefit?

Short answer: no one, but you have to do it anyway.

Long answer: As far as I know, none of the commercially available screenreaders support SVG in any way, much less reading the title of an SVG image included inline in an XHTML page (as opposed to, say, linked from the src attribute of an <img> element, or embedded in an <object> element). Nonetheless, you have provided a text alternative for the image, and theoretically, that could be presented to a user in place of (or in addition to) the image. You have therefore fulfilled your moral duty, even though no one actually benefits from it. Welcome to the wacky world of access enablement.

The concept of access enablement is not complicated. In the physical world, it works like this: I build the ramp, you bring the wheelchair. I don't have to provide you with a wheelchair; it's up to you to procure one. Nor do I have to teach you how to use your wheelchair to get up my ramp. Nor do I have to push you up the ramp when you arrive. If your wheelchair happens to break at the bottom of my ramp, you can't sue me for being inaccessible. I did my part: I built the ramp; everything else is Somebody Else's Problem.

For better or for worse, this concept got translated directly into the virtual world of software. Just as there are standards that define the minimum width and maximum slope of wheelchair-accessible ramps, so too there are standards for building accessible software and authoring accessible content. In the desktop software world, priority #1 is to keep track of the focus. In the web authoring world, it's to provide text alternatives for any non-text content. The exact techniques vary by medium. For the HTML <img> element, the guidelines say you must provide an alt attribute and (potentially) a longdesc attribute. For SVG, they mandate a <title> child element and (potentially) a <desc> element.
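A sketch of what those two requirements look like in markup (the image content and file names here are invented for illustration):

```html
<!-- HTML: alt is required; longdesc points to a separate long description -->
<img src="chart.png" alt="Q3 sales chart" longdesc="chart-description.html">

<!-- SVG: title and desc are child elements of the svg element -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <title>Q3 sales chart</title>
  <desc>Sales rose from 10 units in July to 30 units in September.</desc>
  <rect x="10" y="60" width="20" height="30"/>
</svg>
```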

The interesting part is not what the guidelines say, but what they do not say.

  1. No thought of complexity for authors. What are the chances that the author is even qualified to write a long description for a complex graph? Or captions for a video? And if they're not, how much would it cost to pay someone else to do it? How long would it take for them (or someone they hire) to implement it? Would that delay cause other problems or present other opportunity costs?
  2. No thought of implementation cost for authoring tool vendors. Is it reasonable to expect authors to provide text alternatives to a photo they take with their cellphone and upload to a photo sharing site? Is it reasonable to expect tools to enforce this?
  3. No mention of implementation cost for client software. Can screenreader vendors justify the cost of implementing and maintaining support for rarely-used features, e.g. reading the title of an inline SVG image when only one site in the world actually does that? What about the cost of implementing workarounds for bogus data for popular-but-misused features?
  4. No mention of how end users would learn about the feature.

So here's the crux of the problem: nowhere in the process of defining an accessibility feature is there any consideration for how often it would be used, how often it would be used correctly, what would happen if it were used incorrectly, how much it would cost to implement it, or how users would learn about the feature. In short, there is no cost-benefit analysis.

Now, some features are simple and easy and popular, so these questions never come up. If enough authors use them and tool vendors implement them and end users learn about them, then everything works. But not every feature is simple or easy or popular; a lot of them are waaaaay down the "long tail" of usage + implementation + education. So far down that, in any other field, you would start talking about the law of diminishing returns. But in accessibility, there is no such limit.

Some concrete examples: most browsers don't expose information about the access keys available on a page, and most authors don't define access keys in their pages, and those that do often conflict with other browser, AT, or OS-level shortcuts. Most images aren't complex enough to warrant a long description, and most authors who try to offer a long description get it wrong. But it is just assumed that users who would benefit from them will somehow learn of their existence and be motivated to find software that supports them (assuming they can ever find a page that uses them).

The accessibility orthodoxy does not permit people to question the value of features that are rarely useful and rarely used.

When this orthodoxy collides with reality, the results are both humorous and sad. When I was an accessibility architect at IBM, I assisted in the final stages of ensuring that Eclipse's Graphical Editing Framework was fully accessible to blind people. This involved ensuring that all possible objects were focusable, all possible actions were keyboard-accessible (including drag-and-drop), and all possible information about nodes and connectors was exposed to third-party assistive technologies via MSAA. It was mind-numbing work, full of strange edge cases and bizarre hypothetical situations, not unlike the one Sam is struggling to understand. During one particularly difficult teleconference, an Eclipse developer muttered something like, "You realize no one is ever actually going to do any of this, right?" There was an awkward silence as the people who had spent their lives in the trenches of access enablement contemplated the very real possibility that no one would ever benefit from their work.

Back to Sam's question. Few authors publish in true XHTML mode, fewer still include inline SVG images in their XHTML, and fewer still include titles or descriptions in those images. But in theory, you can imagine a situation where a web author publishes in true XHTML mode, and the author includes an inline SVG image within an XHTML page, and an end user is using a browser that supports true XHTML, and that user is using a hypothetical screenreader-of-the-future that implements support for the <title> and <desc> elements within inline SVG images within XHTML pages, and that user stumbles across that page. It's theoretically possible, therefore you have to do it. Period. End of discussion.

Now go retrofit text alternatives into every SVG image you've ever published, or an accessibility advocacy group who has never visited your site will sue you on behalf of all the users you've been disenfranchising. All zero of them.


On the topic of <canvas> accessibility, John Foliot writes:

Finally, I propose that any instance of <canvas> that lacks at a minimum the 2 proposed mandatory values be non-conformant and not render on screen.

When pressed for an explanation, John continues:

Actually, yes, I have proposed this form of draconian response before.

It's about consequences: until such time as there are real consequences for slack developers/tools that allow content to exist that is incomplete, then there will be content that is incomplete - it's as simple as that. Why would <img src="path..." /> be any more complete than <img alt="Photo of a leprechaun" />? I mean, clearly, anyone processing that info in their user-agent will 'get' the intent of the author, right? Yet today, the first example will render in the browser, the second delivers a 'fail'. Ergo (to me) there is a problem of inequity here that must be addressed - if it fails for some, it should fail for all.

If it fails for some, it should fail for all.

John also believes that Flickr is (or at least should be) illegal because it allows people to publish inaccessible content. I'll pause for a moment and let that sink in.


It's important to understand just how extreme these views are, even within the accessibility community. I was a professional accessibility architect for several years; before that, I took an intense interest in web accessibility; long before that, I was a relay operator for AT&T. A few years ago, I had the pleasure of attending the annual CSUN accessibility conference, where I helped staff the Mozilla booth and talked about Firefox's accessibility features to anyone who would listen. So I have had the opportunity to speak to a great many people who cared about accessibility, authored accessible content, wrote accessible software, designed accessible hardware, and provided accessibility services.

In all that time, in all those conversations, with all those people, I have never heard anyone say, "Seriously, you know what we should do to make the world more accessible? Fuck over all the sighted people."

I know that most of the people who care about accessibility do not have the time or resources to follow the daily machinations of the HTML 5 working groups. That's fine; standards take a long time and require a lot of attention, and most people have day jobs somewhere else. But I have observed a kind of "conventional wisdom" taking hold in this wider community -- that the HTML working group doesn't care about accessibility, that any and all proposals are rejected, that the views of "experts" are simply dismissed out of hand.

I think it would be wise for people who truly care about accessibility to take a closer look at the so-called "experts" who are participating on their behalf, and to understand exactly what these people are proposing. It's true that some of their proposals have not been adopted, but it's not because some cartoonishly monocled villain enjoys being mean to them. It's because the proposals are insane.


[Part of an ongoing series.]

The first thing you need to know about captions and subtitles is that captions and subtitles are different. The second thing you need to know about captions and subtitles is that you can safely ignore the differences unless you're creating your own from scratch. I'm going to use the terms interchangeably throughout this article, which will probably drive you crazy if you happen to know and care about the difference.

Historically, captioning has been driven by the needs of deaf and hearing impaired consumers, and captioning technology has been designed around the technical quirks of broadcast television. In the United States, so-called "closed captions" are embedded into a part of the NTSC video source ("Line 21") that is normally outside the viewing area on televisions. In Europe, they use a completely different system that is embeddable in the PAL video source. Over time, each new medium (VHS, DVD, and now online digital video) has dealt a blow to the accessibility gains of the previous medium. For example:

  • PAL VHS tapes did not have enough bandwidth to store closed captions at all.
  • DVDs have the technical capability, but producers often manage to screw it up anyway; e.g. DVDs of low-budget television shows are often released without the closed captions that accompanied the original broadcast.
  • HDMI cables drop "Line 21" closed captions altogether. If you play an NTSC DVD on an HDTV over HDMI, you'll never see the closed captions, even if the DVD has them.

And accessible online video is just fucking hopeless. (And no, it won't change unless new regulation forces it to change. When it comes to captioning, Joe Clark has been right longer than many of you have been alive.)

So even in broadcast television, captioning technology was fractured by different broadcast technologies in different countries. Digital video had the capability of unifying the technologies and learning from their mistakes. Of course, exactly the opposite happened. Early caption formats split along company lines; each major video software platform (RealPlayer, QuickTime, Windows Media, Adobe Flash) implemented captioning in their own way, with levels of adoption ranging from nil to zilch.

At the same time, an entire subculture developed around "fan-subbing," i.e. using captioning technology to provide translations of foreign language videos. For example, non-Japanese-speaking consumers wanted to watch Japanese anime films, so amateur translators stepped up to publish their own English captions that could be overlaid onto the original film. In the 1980s, fansubbers would actually take VHS tapes and overlay the English captions onto a new tape, which they would then (illegally) distribute. Nowadays, translators can simply publish their work on the Internet as a standalone file. English-speaking consumers can have their DVDs shipped directly from Japan, and they use software players that can overlay standalone English caption files while playing their Japanese-only DVDs. The legality of distributing these unofficial translations (even separately, in the form of standalone caption files) has been disputed in recent years, but the fansubbing community persists.

Technically, there is a lot of variation in captioning formats. At their core, captions are a combination of text to display, start and end times to display it, information about where to position the text on a screen, fonts, styling, alignment, and so on. Some captions roll up from the bottom of the screen, others simply appear and disappear at the appropriate time. Some caption formats mandate where each caption should be placed and how it should be styled; others merely suggest position and styling; others leave all display attributes entirely up to the player. Almost every conceivable combination of these variables has been tried. Some forms of media try multiple combinations at once. DVDs, for example, can have two entirely distinct forms of captioning -- closed captioning (as used in NTSC broadcast television) embedded in the video stream, and one or more subtitle tracks. DVD subtitle tracks are used for many different things, including subtitles (just the words being spoken, in the same language as the audio), captions for the hearing impaired (which include extra notations of background noises and such), translations into other languages, and director's commentary. Oh, and they're stored on the DVD as images, not text, so the end user has no control over fonts or font size.

Beyond DVDs, most caption formats store the captions as text, which inevitably raises the issue of character encoding. Some caption formats explicitly specify the character encoding, others only allow UTF-8, others don't specify any encoding at all. On the player side, most players respect the character encoding if present (but may only support specific encodings); in its absence, some players assume UTF-8, some guess the encoding, and some allow the user to override the encoding. Obviously standalone caption files can be in any format, but if you want to embed your captions as a track within a video container, your choices are limited to the caption formats that the video container supports.
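One common player strategy can be sketched in a few lines of Python. This is an illustration, not any particular player's actual algorithm: try strict UTF-8 first (invalid UTF-8 byte sequences raise an error), then fall back to Windows-1252, the historical default for .srt files.

```python
def decode_captions(raw: bytes) -> str:
    """Decode the bytes of a caption file.

    Valid UTF-8 decodes cleanly; anything else raises UnicodeDecodeError,
    at which point we retry with Windows-1252 as a legacy fallback.
    """
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        return raw.decode("windows-1252")
```

A real player would also need to honor an explicit encoding declaration when the format provides one, and offer a manual override for the cases where guessing fails.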

And remember when I said that there were a metric fuck-ton of audio codecs? Forget that. There are an imperial fuck-ton of caption formats (i.e. multiply by 9/5 and add 32). Here is a partial list of caption formats, taken from the list of formats supported by Subtitle Workshop, which I used to caption my short-lived video podcast series:

Adobe Encore DVD, Advanced SubStation Alpha, AQTitle, Captions 32, Captions DAT, Captions DAT Text, Captions Inc., Cheetah, CPC-600, DKS Subtitle Format, DVD Junior, DVD Studio Pro, DVD Subtitle System, DVDSubtitle, FAB Subtitler, IAuthor Script, Inscriber CG, JACOSub 2.7+, Karaoke Lyrics LRC, Karaoke Lyrics VKT, KoalaPlayer, MacSUB, MicroDVD, MPlayer, MPlayer2, MPSub, OVR Script, Panimator, Philips SVCD Designer, Phoenix Japanimation Society, Pinnacle Impression, PowerDivX, PowerPixel, QuickTime Text, RealTime, SAMI Captioning, Sasami Script, SBT, Sofni, Softitler RTF, SonicDVD Creator, Sonic Scenarist, Spruce DVDMaestro, Spruce Subtitle File, Stream SubText Player, Stream SubText Script, SubCreator 1.x, SubRip, SubSonic, SubStation Alpha, SubViewer 1.0, SubViewer 2.0, TMPlayer, Turbo Titler, Ulead DVD Workshop 2.0, ViPlay Subtitle File, ZeroG.

Which of these formats are important? The answer will depend on whom you ask, and more specifically, how you're planning to distribute your video. This series is primarily focused on videos delivered as files to be played on PCs or other computing devices, so my choices here will reflect that. These are some of the most well-supported caption formats:


SubRip

SubRip is the AVI of caption formats, in the sense that its basic functionality is supported everywhere but various people have tried to extend it in mostly incompatible ways and the result is a huge mess. As a standalone file, SubRip captions are most commonly seen with a .srt extension. SubRip is a text-based format which can include font, size, and position information, as well as a limited set of HTML formatting tags, although most of these features are poorly supported. Its "official" specification is a doom9 forum post from 2004. Most players assume that .srt files are encoded in Windows-1252 (what Windows programs frequently call "ANSI"), although some can detect and switch to UTF-8 encoding automatically.
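A minimal (invented) two-cue .srt file looks like this: each cue is a sequence number, a start and end time separated by an arrow (with comma-separated milliseconds), one or more lines of text, and a blank line:

```
1
00:00:01,000 --> 00:00:04,500
Welcome to the show.

2
00:00:05,000 --> 00:00:08,000
<i>Ominous music plays</i>
```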

Because .srt files are so often published separately from the video files they describe, the most common use case is to put your .srt file in the same directory as your video file and give them the same name (up to the file extensions). But it is also possible to embed SubRip captions directly into AVI files with AVI-Mux GUI, into MKV files with mkvmerge, and into MP4 files with MP4Box.

You can play SubRip captions in Windows Media Player or other DirectShow-based video players after installing VSFilter; in QuickTime after installing Perian; on Linux, both mplayer and VLC support it natively.

SubStation Alpha

SubStation Alpha and its successor, Advanced SubStation Alpha, are the preferred caption formats of the fansubbing community. As standalone files, they are commonly seen with .ssa or .ass extensions. They have a spec longer than three paragraphs. They are actually miniature scripting languages. A .ass file contains a series of commands to control position, scrolling, animation, font, size, scaling, letter spacing, borders, text outline, text shadow, alignment, and so on; and a series of time-coded events for displaying text given the current styling parameters. It has support for multiple character encodings.
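A minimal sketch of an .ass file, trimmed for illustration (real files declare many more fields in each Format: line, and the styling override codes like \pos are far more extensive):

```
[Script Info]
Title: Example subtitles
ScriptType: v4.00+

[V4+ Styles]
Format: Name, Fontname, Fontsize, PrimaryColour
Style: Default,Arial,20,&H00FFFFFF

[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
Dialogue: 0,0:00:01.00,0:00:04.50,Default,,0,0,0,,{\pos(320,240)}Welcome to the show.
```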

The playing requirements for SubStation Alpha captions are almost identical to SubRip. The same plugins are required for Windows and Mac OS X. On Linux, mplayer prides itself on having the most complete SSA/ASS implementation.

MPEG-4 Timed Text

a.k.a. "MPEG-4 Part 17," a.k.a. ISO 14496-17, MPEG-4 Timed Text (hereafter "MP4TT") is the one and only caption format for the MP4 container. It is not a file format; it is only defined in terms of a track within an MP4 container. As such, it can not be embedded in any other video container, and it can not exist as a separate file. (Note: the last sentence was a lie; the MPEG-4 Timed Text format is really the 3GPP Timed Text format, and it can very much be embedded in a 3GPP container. What I meant to say is that the format can not be embedded in any of the other popular video container formats like AVI, MKV, or OGG. I could go on about the subtle differences between MPEG-4 Timed Text in an MP4 container and 3GPP Timed Text in a 3GPP container, but it would just make you cry, and besides, technical accuracy is for pussies.)

MP4TT defines detailed information on text positioning, fonts, styles, scrolling, and text justification. These details are encoded into the track at authoring time, and can not be changed by the end user's video player. The most readable description of its features is actually the documentation for GPAC, an open source implementation of much of the MPEG-4 specification (including MP4TT). Since MP4TT doesn't define a text-based serialization, GPAC invented one for their own use; since their format is designed to capture all the possible information in an MP4TT track, it turns out to be an easy way to read about all of MP4TT's features.

MP4Box, part of the GPAC project, can take an .srt file and convert it into a MPEG-4 Timed Text track and embed it in an existing MP4 file. It can also reverse the process -- extract a Timed Text track from an MP4 file and output a .srt file.
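The round trip described above looks something like this (the file names are invented, and the exact options may vary by MP4Box version; check `MP4Box -h` before relying on them):

```sh
# Embed captions.srt as a Timed Text track in video.mp4
MP4Box -add captions.srt video.mp4

# Extract a Timed Text track (here, track 3) back out as an .srt file
MP4Box -srt 3 video.mp4
```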

On Mac OS X, QuickTime supports MP4TT tracks within an MP4 container, but only if you rename the file from .mp4 to .3gp or .m4v. I shit you not. (On the plus side, changing the file extension will allow you to sync compatible video to an iPod or iPhone, which will actually display the captions. Still not kidding.) On Windows, any DirectShow-based video player (such as Windows Media Player or Media Player Classic) supports MP4TT tracks once you install Haali Media Splitter. On Linux, VLC has supported MP4TT tracks for several years.


SAMI

SAMI was Microsoft's first attempt to create a captioning format for PC video files (as opposed to broadcast television or DVDs). As such, it is natively supported by Microsoft video players, including Windows Media Player, without the need for third-party plugins. It has a specification on MSDN. It is a text-based format that supports a large subset of HTML formatting tags. SAMI captions are almost always embedded in an ASF container, along with Windows Media video and Windows Media audio.

Don't use SAMI for new projects; it has been superseded by SMIL. For historical purposes, you may enjoy reading about creating SAMI captions and embedding them in an ASF container, as long as you promise to never, ever try it at home.


SMIL

SMIL (Synchronized Multimedia Integration Language) is not actually a captioning format. It is "an XML-based language that allows authors to write interactive multimedia presentations." It also happens to have a timing and synchronization module that can, in theory, be used to display text on a series of moving pictures. That is to say, if you think of SMIL as a way to provide captions for a video, you're doing it wrong. You need to invert your thinking -- your video and your captions are each merely components of a SMIL presentation. SMIL captions are not embedded into a video container; the video and its captions are referenced from a SMIL document.

SMIL is a W3C standard; the most recent revision, SMIL 3.0, was just published in December 2008. If you printed out the SMIL 3.0 specification on US-Letter-sized paper, it would weigh in at 395 pages. So don't do that.

QuickTime supports a subset of SMIL 1.0. WebAIM provides a nice tutorial on using SMIL to add captions to a QuickTime movie.


As far as I can tell, the only thing that leading accessibility experts agree on is that nobody listens to leading accessibility experts, especially not the microformats cabal, which has never cared about accessibility, has never bothered to test it, and has never acknowledged those who have tested it. In fact, the BBC recently removed one microformat from their site because one piece of it may be confusing to some screen reader users with a certain non-default configuration. This proves what leading accessibility experts have been saying all along, that all microformats are inaccessible, and we should all just use RDF.

Meanwhile, the devilish cabal is secretly solving the problem on their public wiki page, their public mailing list, and their public IRC channel. But will it be enough for the BBC? Be sure to tune in next week, when we'll drown a leading accessibility expert to see if she's a witch.