Nice work by Rafael Rivera, who noticed that Microsoft was stacking the deck in its IE9 performance tests:
More specifically, Microsoft performed a comparison of its Internet Explorer 9 browser technology – currently in developmental stages – to stale builds of Mozilla’s Firefox, Apple’s Safari, Google’s Chrome, and … that browser no one cares about (sadly) – Opera.
Sounds like a valid argument to me. I decided to re-test using builds of Mozilla Firefox “Namoroka” (188.8.131.52pre), Google Chromium (6.0.397.0/46552), and Apple Safari w/ a newer WebKit engine (r58804) that matched release dates with Internet Explorer 9 (May 5, 2010). After clicking around the site a hundred or so times in each browser, the results… changed. Each browser made noticeable improvements in areas like CSS3 and DOM; Chrome proved to be the most volatile in its changes, while Firefox proved to be quite… glacial.
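For readers unfamiliar with how suites like the IE9 Test Center measure "performance," they generally boil down to timing many repetitions of a small operation. Here is a minimal sketch of that pattern in plain JavaScript; the `benchmark` helper and the string-building workload are hypothetical stand-ins, not the actual Test Center code:

```javascript
// Minimal micro-benchmark sketch (hypothetical helper, not the real
// IE9 Test Center code): time how long `iterations` runs of `fn` take.
function benchmark(fn, iterations) {
  var start = Date.now();
  for (var i = 0; i < iterations; i++) {
    fn();
  }
  return Date.now() - start; // elapsed milliseconds
}

// Example workload: string building as a stand-in for a DOM or CSS test.
var elapsed = benchmark(function () {
  var s = "";
  for (var i = 0; i < 1000; i++) {
    s += "x";
  }
}, 100);
console.log("100 runs took " + elapsed + " ms");
```

The catch, of course, is that whoever writes the workload decides what gets measured, which is exactly the objection raised in the comments below.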
Here’s the revised performance chart for Chrome, which looks like the winner in the "most improved" category.
Note that the baseline for comparison is a solid green column with all 100% scores for the IE9 developer preview. So it’s still a fairly big win for Microsoft.
Source: Browsers re-tested in the IE9 Testing Center, different results surface (WithinWindows.com)
4 thoughts on “IE9 outperforms the competition, even in a fair fight”
But c’mon, it’s still a rigged test. There’s no question that IE9 is missing SOME functionality, or that Chrome/Firefox can do things IE9 can’t. But since Microsoft specifically designed this test to focus on the things that they do that other browsers don’t, it makes them look like heroes.
I’m sure it’ll be a nice browser, and vastly better than IE8, but you can’t tell how well it works from Microsoft’s hand-picked tests.
For fairness’s sake, I’m using Chrome for page development and FF for casual browsing, but I installed the IE9 technology preview and plan on giving the full version a spin when it debuts as well.
Serdar, was that issue in the first few months after IE8 was released? If so, it was a known issue that was fixed fairly quickly.
Mike, I think you have it backwards. They almost certainly designed the tests as targets, and then made IE pass all of them. That’s not uncommon behaviour. The other browsers are probably targeting different test suites and criteria before release. We should be encouraging all the browser makers to publish, and continue to publish, their test cases like this. Every such test case will naturally be skewed toward the publishing vendor at first, even in an environment of perfect honesty and fair play (which, honestly, is not the environment any vendor plays in entirely).