
If you don't already know what Google Rich Snippets or schema.org are, this post is probably not for you.

Google Rich Snippets are not deprecated! The FAQ states, "If you have already done markup and it is already being used by Google, Microsoft, or Yahoo!, the markup format will continue to be supported. Changing to the new markup format could be helpful over time because you will be switching to a standard that is accepted across all three companies, but you don't have to do it."

This guide is for people who want to do it anyway.

Disclaimer: despite my working for Google, this guide has no official status. It represents nothing beyond my own personal interpretation, based on my experience writing about microdata and Google Rich Snippets in my free HTML5 book which you should totally pay money for because this is the way I want the world to work.

Some notes:

  • Google Rich Snippets supported microdata, microformats, and RDFa. schema.org only supports microdata. If you've been using microformats or RDFa to mark up your Google Rich Snippets, sorry, you backed the wrong horse.
  • Microdata is valid HTML5. Take two seconds to upgrade your DOCTYPE and get on with your life.
  • The official mailing list is your best bet if you have questions. My blog is where good feedback goes to die.
  • There is currently no way of verifying that this metadata, once migrated, will be interpreted correctly by search engines. i.e. There is no testing tool or validator. Somebody should get on that.
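To make that DOCTYPE point concrete, here is a minimal HTML5 shell (the title and content are placeholders, obviously) in which microdata attributes are valid:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>My page</title>
</head>
<body>
  <!-- itemscope, itemtype, and itemprop attributes are valid HTML5 here -->
</body>
</html>
```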

Table of Contents

  • Address changes
  • Geo changes
  • Organization changes
  • Person changes
  • Event changes
  • Product changes
  • Review changes
  • Offer changes

Address changes

Address is now PostalAddress. Some properties have new names.

Old itemprop → New itemprop
street-address → streetAddress
locality → addressLocality
region → addressRegion
postal-code → postalCode
country-name → addressCountry
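To illustrate (the address itself is invented), here is what the migration might look like in markup:

```html
<!-- old: data-vocabulary.org -->
<div itemscope itemtype="http://data-vocabulary.org/Address">
  <span itemprop="street-address">100 Main Street</span>,
  <span itemprop="locality">Anytown</span>,
  <span itemprop="region">NC</span>
  <span itemprop="postal-code">27501</span>
</div>

<!-- new: schema.org -->
<div itemscope itemtype="http://schema.org/PostalAddress">
  <span itemprop="streetAddress">100 Main Street</span>,
  <span itemprop="addressLocality">Anytown</span>,
  <span itemprop="addressRegion">NC</span>
  <span itemprop="postalCode">27501</span>
</div>
```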


Geo changes

Geo is now GeoCoordinates. There are no changes to property names or semantics.

Several properties in the old schema were of type Geo. In the schema.org vocabulary, many of these now use a location property of type Place. The new Place schema contains a geo property of type GeoCoordinates.
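For a standalone item, the rename is purely mechanical (coordinates invented):

```html
<!-- old: data-vocabulary.org Geo -->
<div itemscope itemtype="http://data-vocabulary.org/Geo">
  <meta itemprop="latitude" content="35.7719">
  <meta itemprop="longitude" content="-78.6389">
</div>

<!-- new: same property names, new type name -->
<div itemscope itemtype="http://schema.org/GeoCoordinates">
  <meta itemprop="latitude" content="35.7719">
  <meta itemprop="longitude" content="-78.6389">
</div>
```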


Organization changes

Organization is now Organization. The new schema has a number of more specific types like Corporation, NGO, and SportsTeam. Use the most specific type that is appropriate.

Some properties have new names or are expressed in different ways.

Old itemprop → New itemprop
geo → moved to location property
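An invented example of the new nesting — an organization whose geo data hangs off a location property of type Place:

```html
<div itemscope itemtype="http://schema.org/Corporation">
  <span itemprop="name">Example Widgets, Inc.</span>
  <div itemprop="location" itemscope itemtype="http://schema.org/Place">
    <!-- geo now lives inside the Place, not on the organization itself -->
    <div itemprop="geo" itemscope itemtype="http://schema.org/GeoCoordinates">
      <meta itemprop="latitude" content="35.7719">
      <meta itemprop="longitude" content="-78.6389">
    </div>
  </div>
</div>
```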


Person changes

Person is now Person. Some properties have new names or are expressed in different ways.

Old itemprop → New itemprop
friend, contact, acquaintance → obsolete; use knows, follows, or colleagues

In addition, the affiliation property used to be a plain text property. It now takes an Organization. (See also: Organization changes)
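An invented example, showing affiliation as a nested Organization and the new knows property:

```html
<div itemscope itemtype="http://schema.org/Person">
  <span itemprop="name">Jane Doe</span> works at
  <!-- affiliation is now an Organization item, not plain text -->
  <span itemprop="affiliation" itemscope itemtype="http://schema.org/Organization">
    <span itemprop="name">Example Widgets, Inc.</span>
  </span>
  and knows
  <span itemprop="knows" itemscope itemtype="http://schema.org/Person">
    <span itemprop="name">John Roe</span>
  </span>.
</div>
```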


Event changes

Event is now Event. The new schema has a number of more specific types like BusinessEvent, SocialEvent, or Festival. Use the most specific type that is appropriate.

Some properties have new names or are expressed in different ways.

Old itemprop → New itemprop
eventType → obsolete; use a specific event type if available
geo → moved to location property

In addition, the location property used to be plain text or an Organization or an Address; now it must be either a Place or a PostalAddress. (See also: Address changes)
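An invented example, using a specific event type and a Place for the location:

```html
<div itemscope itemtype="http://schema.org/Festival">
  <span itemprop="name">Summer Pickle Festival</span>
  <meta itemprop="startDate" content="2011-07-09">
  <!-- location must now be a Place (or a PostalAddress) -->
  <div itemprop="location" itemscope itemtype="http://schema.org/Place">
    <span itemprop="name">Town Square</span>
    <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
      <span itemprop="addressLocality">Anytown</span>,
      <span itemprop="addressRegion">NC</span>
    </div>
  </div>
</div>
```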


Product changes

Product is now Product. Some properties have new names or are expressed in different ways.

Old itemprop → New itemprop
review → reviews (type Review)
offerdetails → offers (type Offer)

The brand property used to be plain text, but it now takes an Organization.
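An invented example, with brand as an Organization and the renamed reviews and offers properties:

```html
<div itemscope itemtype="http://schema.org/Product">
  <span itemprop="name">Left-Handed Widget</span>
  <!-- brand is now an Organization item, not plain text -->
  <div itemprop="brand" itemscope itemtype="http://schema.org/Organization">
    <meta itemprop="name" content="Example Widgets, Inc.">
  </div>
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    <span itemprop="price">$19.99</span>
  </div>
  <div itemprop="reviews" itemscope itemtype="http://schema.org/Review">
    <span itemprop="reviewBody">Best widget I ever owned.</span>
  </div>
</div>
```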


Review changes

Review is now Review. Some properties have new names or are expressed in different ways.

Old itemprop → New itemprop
itemreviewed → itemReviewed (type Thing)
reviewer → author (type Person or Organization)
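An invented example, with the renamed itemReviewed and author properties:

```html
<div itemscope itemtype="http://schema.org/Review">
  <div itemprop="itemReviewed" itemscope itemtype="http://schema.org/Thing">
    <span itemprop="name">Left-Handed Widget</span>
  </div>
  <span itemprop="author" itemscope itemtype="http://schema.org/Person">
    <span itemprop="name">Jane Doe</span>
  </span>
  <span itemprop="reviewBody">Best widget I ever owned.</span>
</div>
```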


Offer changes

Offer is now Offer. Some properties have new names or are expressed in different ways.

Old itemprop → New itemprop
condition → itemCondition (type OfferItemCondition)
identifier → moved to productID property of itemOffered

The availability property used to be an enumerated attribute, but it now takes an ItemAvailability.
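An invented example, using link elements to point at ItemAvailability and OfferItemCondition values:

```html
<div itemscope itemtype="http://schema.org/Offer">
  <span itemprop="price">$19.99</span>
  <!-- availability and itemCondition now take schema.org enumeration values -->
  <link itemprop="availability" href="http://schema.org/InStock">
  <link itemprop="itemCondition" href="http://schema.org/NewCondition">
  <div itemprop="itemOffered" itemscope itemtype="http://schema.org/Product">
    <span itemprop="name">Left-Handed Widget</span>
    <!-- the old identifier property moves here, onto the product -->
    <meta itemprop="productID" content="widget-0042">
  </div>
</div>
```

(The product name, price, and product ID are made up; the enumeration URLs are real schema.org values.)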


Most of my recent writing has happened elsewhere.

That last article came about during the creation of mimesniff, my open source Python 3 library that implements the HTML5 Content-Type detection and character encoding detection algorithms.

If none of that is your cup of tea, here is a picture of my dog Beauregard, enjoying the beautiful North Carolina summer weather.

Beauregard on deck


Time to resurface a few good comments I made at Tim's place last year:

> if an electronic-trading system receives an XML message for a transaction valued at €2,000,000, and there's a problem with a missing end tag, you do not want the system guessing what the message meant

You [Tim] have used this example, or variations of it, since 1997. I think I can finally express why it irritates me so much: you are conflating "non-draconian error handling" with "non-deterministic error handling". It is true that there are some non-draconian formats which do not define an error handling mechanism, and it is true that this leads to non-interoperable implementations, but it is not true that non-draconian error handling implies "the system has to guess." It is possible to specify a deterministic algorithm for graceful (non-draconian) error handling; this is one of the primary things WHATWG is attempting to do for HTML 5.

If any format (including an as-yet-unspecified format named "XML 2.0") allows the creation of a document that two clients can parse into incompatible representations, and both clients have an equal footing for claiming that their way is correct, then that format has a serious bug. Draconian error handling is one way to solve such a bug, but it is not the only way, and for 10 years you've been using an overly simplistic example that misleadingly claims otherwise.

And, in the same thread but on a different note:

I would posit that, for the vast majority of feed producers, the validator *is* RSS (and Atom). People only read the relevant specs when they want to argue that the validator has a false positive (which has happened, and results in a new test) or a false negative (which has also happened, and also results in a new test). Around the time that RFC 4287 was published, Sam rearranged the tests by spec section. This is why specs matter. The validator service lets morons be efficient morons, and the tests behind it let the assholes be efficient assholes. More on this in a minute.

> A simpler specification would require a smaller and finite amount of test cases.

The only thing with a "finite amount of test cases" is a dead fish wrapped in yesterday's newspaper.

On October 2, 2002, the service that is now hosted at feedvalidator.org came bundled with 262 tests. Today it has 1707. That ain't all Atom. To a large extent, the increase in tests parallels an increase in understanding of feed formats and feed delivery mechanisms. The world understands more about feeds in 2007 than it did in 2002, and much of that knowledge is embodied in the validator service.

If a group of people want to define an XML-ish format with robust, deterministic error handling, then they will charge ahead and do so. Some in that group will charge ahead to write tests and a validator, which (one would hope) will be available when the spec finally ships. And then they will spend the next 5-10 years refining the validator, and its tests, based on the world's collective understanding. It will take this long to refine the tests into something bordering on comprehensive *regardless of how simple the spec is* in the first place.

In short, you're asking the wrong question: "How can we reduce the number of tests that we would need to ship with the spec in order to feel like we had complete coverage?" That's a pernicious form of premature optimization. The tests you will actually need (and, hopefully, will actually *have*, 5 years from now) bear no relationship to the tests you can dream up now. True "simplicity" emerges over time, as the world's understanding grows and the format proves that it won't drown you in "gotchas" and unexpected interactions. XML is over 10 years old now. How many XML parsers still don't support RFC 3023? How many do support it if you only count the parts where XML is served as "application/xml"?

I was *really proud* of those 262 validator tests in 2002. But if you'd forked the validator on October 3rd, 2002, and never synced it, you'd have something less than worthless today. Did the tests rot? No; the world just got smarter.

On a somewhat related note, I've cobbled together a firehose which tracks comments (like these) that I make on other selected sites. Many thanks to Sam for teaching me about Venus filters, which make it all possible. If you've been thinking "Gee, I just can't get enough of that Pilgrim guy, I wish there were a way that I could stalk him without being overly creepy about it," then this firehose is for you.


Even the experts can't get it right 100% of the time.

[screenshot of an XML error]

Screenshot taken at 10:29 PM on March 8, 2008.

For the record, my site is valid HTML 5, except the parts that aren't. My therapist says I shouldn't rely so much on external validation.


In the midst of a discussion between the only four people in the world who care about such things, Asbjørn Ulsberg writes:

I think the issue of getting plugin developers to author well-formed plugins is solved by getting the core of WordPress to support and enforce XHTML. Getting this right is, as I see it, a two-part enforcement and encouragement battle. First, WordPress should be outputting everything as application/xhtml+xml (including the admin pages) to supporting browsers. [emphasis added]

This is, quite simply, the worst idea ever. Many, many years ago, I explained in great detail why it was the worst idea ever. This is not just a theoretical problem:

When he [inserted an invalid Unicode character into my system], I ran into the very problem that Mark mentioned years ago and I had to poke my WordPress database option to switch back to text/html for the WordPress admin panel so I could correct Jacques' invalid character.

WordPress constitutes a logical system. If errors are introduced into that system, the administration panel is the only place within the system where you can correct those errors. If the administration panel itself does not tolerate the very errors it seeks to correct, then you cannot fix the errors without jumping out of the system. This is Logical Systems 101. Go read Gödel, Escher, Bach, then come back and argue in favor of enforcing draconian error handling everywhere.

Sam does not have this problem, because he doesn't have to fix errors from within his system. (In fact, his weblog has no administration panel at all. All of his administration is already done from outside the system.) But your average WordPress user doesn't have that luxury; they're constrained by either a lack of knowledge (don't know how to jump out of the system) or a lack of privileges (not allowed to jump outside the system). In my original thought experiment, Nick had the latter problem, because he was running on Typepad and had no access to anything outside his own administration pages.

(Interesting postscript: Thought Experiment led to the only documented case in the history of my blog where I changed someone's mind through reasoned argument. Possibly the only documented case in the history of the internet, though I would be interested in other examples if you have them.)