Such has been the flood of information since Aaron Gustafson broke the news of Microsoft’s radical new plans for Internet Explorer that I’ve mostly sat back and tried to absorb it all, waiting before contributing anything.
For those who haven’t been following the developments, Microsoft have said that future versions of Internet Explorer will support a new HTTP header and/or meta-tag which will indicate to the browser which version of IE the page is designed for. Unless the page specifies otherwise, all future versions of Internet Explorer will render it just like IE7 would. If you want IE8 to actually use the new features it brings with it, such as (we hope) improved standards support, you will need to explicitly ask it to do so.
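As I understand the proposal, the opt-in would look something like the following. Treat this as a sketch of the idea rather than final markup, since the exact syntax may well change before IE8 ships:

```html
<!-- Proposed meta-tag form: ask IE8 to use its newest rendering engine.
     Without this (or the equivalent HTTP header), IE8 would render the
     page as IE7 did. -->
<meta http-equiv="X-UA-Compatible" content="IE=8" />
```

The same hint could reportedly be sent as an HTTP header instead (`X-UA-Compatible: IE=8`), which would let you opt a whole site in at the server level without touching individual pages.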
There’s been a lot of response and people have fairly quickly become quite polarised. David Emery has a good list but a few I particularly noted were:
- Eric Meyer and Jeffrey Zeldman explaining their support;
- Jeremy Keith suggesting that at the very least this is the wrong way round (the default should be the latest and greatest rendering engine);
- Drew McLellan (Web Standards Group Lead) pointing out that while members of the WaSP Microsoft Task Force had been involved in the initiative, this is not (currently) a WaSP-endorsed idea;
- comments from Ian Hickson;
- and Sam Ruby and his commenting crowd considering the technical implications.
Reading all the debate it can be hard to separate feelings about this specific idea from a basic resentment against Microsoft that is harboured by most web developers I know. The failure of Internet Explorer to keep up with web standards has cost many of us, in aggregate, months of work, and our clients lots of money. The time we’ve spent supporting broken browsers could have been time spent improving the user experience or developing exciting new uses for the web. It feels very much as if the only way Microsoft can see to fix the mess they’ve created with their lacklustre browsers (and some very poor authoring tools) is to throw us a new type of confusion.
Regardless of it being Microsoft, though, even after reading all the debate I can’t help but feel that this is a very bad idea. Even if we reach the day when Internet Explorer 11 dominates and IE10 is the only other version with a significant number of users, there would still be sites out there coded not just to older standards, but to one of three or four older rendering engines, each with its own unique set of bugs. That’s too much information for anyone to handle.
And then, of course, there’s the question of other web browsers. Sure, we can add a note that a site is designed for “Safari 3, Firefox 2.0, IE7 and Opera 9”, but that’s not a complete list even now. And we’re already seeing an explosion in the number of mobile devices, app-specific browsers, screen scrapers and other means of accessing the web. I find it hard to see this as anything other than a short-sighted form of browser lock-in.
So what should we do? Clearly the standards process is moving slowly and it’s taking browser developers several generations of their software to fully support the standards we do have. There’s already a lot of discussion of what should happen to the standards process to change that, but in the meantime I quite like the line of thought in David’s blog entry mentioned above suggesting we should have a way to test for support of various CSS properties. I’m not sure about the precise implementation details, but object detection got us a long way when we wanted to escape browser sniffing before, and maybe that’s still a fruitful line of enquiry?
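To make that line of thought concrete, here is a minimal sketch of the kind of CSS “object detection” David’s idea hints at. The function name is hypothetical; the technique is simply checking whether a property exists on an element’s style object rather than sniffing the browser’s name and version:

```javascript
// A sketch of object detection applied to CSS: if the browser
// recognises a property, it appears on an element's style object
// (usually as an empty string until a value is set).
function supportsCssProperty(style, property) {
  return typeof style[property] !== "undefined";
}

// In a browser you might use it like this:
//   var style = document.createElement("div").style;
//   if (supportsCssProperty(style, "opacity")) {
//     // safe to rely on opacity here
//   }
```

The obvious caveat is that a browser recognising a property is not the same as it implementing that property correctly — IE’s history is full of properties that parse but render wrongly — so this could only ever be part of the answer.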