W3C's CSS validation service

Created 29th January, 2006 17:04 (UTC), last edited 5th December, 2007 13:32 (UTC)

In designing this site I wanted to make sure that I got my XHTML and CSS absolutely correct, and I have to admit that I'm pleased that I get to put the buttons on the left that show that the site is valid (although the pages aren't being served with the proper XHTML MIME type, because Internet Explorer can't handle it).

Unfortunately the CSS doesn't validate from every page on the site. This isn't because of anything I've done, but rather because of the URLs I use.

I wanted the URLs to be proper names that could be easily read and understood rather than some cryptic code that only made sense to the server. Because of this nearly every URL has a space in it, and many contain some very odd characters because I also use Unicode.

FOST.3™ is a tool meant to be able to handle sites and systems in a variety of languages (which it does very well) so it would seem idiotic to limit the URLs it allowed to ASCII.
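Spaces and Unicode are perfectly legitimate in a URL's readable form, but they have to be percent-encoded when the URL goes over the wire; non-ASCII characters are encoded as their UTF-8 bytes. A small sketch (the page names are hypothetical, and Python's standard library stands in for whatever the server actually uses):

```python
from urllib.parse import quote

# A space becomes %20.
print(quote("Wet buttercup"))   # Wet%20buttercup

# A Unicode character becomes the percent-encoded bytes of its
# UTF-8 representation, one %XX escape per byte.
print(quote("Ζωή"))             # %CE%96%CF%89%CE%AE
```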

If you click on the CSS icon from this page you'll get an error. You won't get an error in the CSS (there aren't any, although there are some warnings); the error is caused by a bug that isn't even in the validator itself, but in the page that I have to link to, which in turn takes you to the validator. I'm going to use a link to another page, Wet buttercup, as the example here, since having the W3C's validator give error messages about a URL that mentions the validator itself is just confusing.

The error message that it generates is:

Target: http://www.kirit.com/Wet buttercup
I/O Error: http://www.kirit.com/Wet buttercup: Bad Request

The page that the button links to creates a URL for the validator to check, but this redirect doesn't correctly encode the URL it passes on.

The URL it generates is below* [*I've allowed it to split across lines to make it easier to read—there is no space after the question mark. The link takes you to the correct URL]:

This looks like the URL is encoded properly, but the validator must decode the URL it receives as part of the query string. This means that although the first page creates a query string that looks encoded, the URL embedded in it must nonetheless be encoded again: the space should arrive as %2520, not %20, so that it is still %20 after the validator decodes it once. What it should be generating is below:
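The fix, sketched in Python (the validator address is the W3C's, but the exact parameter handling is my assumption about what the linking page should do): encode the target URL once for the path itself, then encode that result again before embedding it in the query string, since the validator will decode it once on receipt.

```python
from urllib.parse import quote

target = "http://www.kirit.com/Wet buttercup"

# First pass: make the URL itself valid -- the space becomes %20.
once = quote(target, safe=":/")

# Second pass: encode it again for embedding in a query string,
# so %20 becomes %2520 and the scheme/slashes are escaped too.
twice = quote(once, safe="")

validator_link = "http://jigsaw.w3.org/css-validator/validator?uri=" + twice
print(validator_link)
```

After the validator strips one layer of encoding from the query string, it is left with the correctly single-encoded URL and can fetch the page.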

I could write a script to create URLs of the correct format, but I think the problem is at the other end and if both ends fix it then my end will be broken again.

This problem of not encoding or decoding in all the right places crops up everywhere. It's such a common failing of all sorts of web systems that most people simply give up on making their URLs readable and resort to cryptic sequences of letters and numbers.