The Betanews Comprehensive Relative Performance Index: How it works and why

After several months of intense research, helped along by literally hundreds of reader suggestions, Betanews has revised and updated its testing suite for Windows-based Web browser performance. The result is the Comprehensive Relative Performance Index (CRPI). If the acronym sounds like "creepy" to you, that's fine.

We've kept one very important element of our testing from the very beginning: We take a slow Web browser that you might not be using much anymore, and we pick on its sorry self as our test subject. We base our index on the assessed speed of Microsoft Internet Explorer 7 on Windows Vista SP2 -- the slowest browser still in common use. For every test in the suite, we give IE7 a 1.0 score. Then we combine the test scores to derive a CRPI index number that, in our estimate, best represents the relative performance of each browser compared to IE7. So for example, if a browser gets a score of 6.5, we believe that once you take every important factor into account, that browser provides 650% the performance of IE7.
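
To make that arithmetic concrete, here is a minimal sketch of how a relative score and a combined index might be computed. The article doesn't spell out exactly how the individual test scores are combined, so the simple averaging below, along with every function and variable name, is our illustrative assumption, not Betanews' actual code.

    // Hypothetical sketch only: scoring a browser relative to an IE7 baseline.
    // For timed tests, lower is better, so relative score = IE7 time / browser time.
    function relativeScore(ie7Ms, browserMs) {
        return ie7Ms / browserMs;   // IE7 itself always lands at 1.0
    }

    // Assumed combining step: a plain average of the per-battery scores.
    function crpi(batteryScores) {
        var sum = 0;
        for (var i = 0; i < batteryScores.length; i++) {
            sum += batteryScores[i];
        }
        return sum / batteryScores.length;
    }

    // Example with fictional battery scores: a result of 6.5 would mean
    // 650% the performance of IE7.
    var index = crpi([7.1, 5.8, 6.6]);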

As you'll see, we believe that "performance" means doing the complete job of providing rendering and functionality the way you expect, and the way Web developers expect. So we combine speed, computational efficiency, and standards compliance tests. This way, a browser with a 6.5 score can be thought of as doing the job more than five times faster and better.

Here now are the eight batteries we use for our suite, and how we've modified them where necessary to suit our purposes:

  • The Nontroppo CSS rendering test. Up to now, we've been using a modified version of a rendering test from HowToCreate.co.uk, whose two purposes are to time how long it takes to re-render the contents of multiple arrays of <DIV> elements, and to time the loading of the page that contains those elements. We modified the page because the JavaScript onLoad event fires at different times in different browsers -- despite its documented purpose, it doesn't necessarily mean the page is "loaded." There's a real-world reason for these variations: In Apple Safari, for instance, some page contents can be styled the moment they're available, before the complete page is rendered, so firing the event early enables the browser to do its job faster -- in other words, Apple doesn't do this just to cheat. But the test's creators at nontroppo.org did a better job of compensating for these variations than we did: the new version tests to see when the browser is first capable of accessing that initial <DIV> element, even if (and especially when) the page is still loading.

    Here's how we developed our new score for this test. There are three loading events: one for Document Object Model (DOM) availability, one for first element access, and a third for the conventional onLoad event. We counted DOM load as one sixth, first access as two sixths, and onLoad as three sixths of the rendering score. We then adjusted the re-rendering part of the test so that it iterates 50 times instead of just five, because some browsers do not count milliseconds properly on some platforms -- which is why Opera mysteriously misreported its own speed on Windows XP as slower than it actually was. (Opera users everywhere...you were right, and we thank you for your persistence.) By running ten iterations in each of five loops, we get a more accurate estimate of the average time per iteration, because the millisecond timer has a chance to update between readings. The element loading and re-rendering scores are averaged together for a new and revised cumulative score -- one which readers will discover is much fairer to both Opera and Safari than our previous version. (A rough sketch of this timing approach appears after this list.)

  • Celtic Kane's JavaScript suite. The independent developer who calls himself Celtic Kane is known for a battery of simplified tests, which first drew notice for demonstrating Opera's advantage over its competition at the time, Mozilla Firefox and IE7. What impresses us about CK is its ability to produce a "signature" of eight integer scores that distinguishes just about every version of every browser we test -- whereas many other tests are susceptible to variations in the environment, CK is comparatively stable. As before, each of the eight tests in the CK battery (array handling, date and timing object manipulation, error handling, math objects, regular expressions, string objects, DOM manipulation, and AJAX declarations) is judged for relative performance against IE7, and the results are averaged into a cumulative score.
  • SunSpider JavaScript benchmark. Perhaps the most respected general benchmark suite in the field, SunSpider focuses on computational JavaScript performance rather than rendering -- the raw ability of the browser's underlying JavaScript engine. Though it comes from the folks who produce WebKit, the open source rendering engine most closely tied to Safari but also used elsewhere, we've found SunSpider's results to be fair and realistic, not weighted toward WebKit-based browsers. There are nine categories of real-world computational tests (3D geometry, memory access, bitwise operations, complex program control flow, cryptography, date objects, math objects, regular expressions, and string manipulation), some of which overlap with Celtic Kane's, although we feel the categories that do overlap deserve the extra weight anyway. All nine categories are scored and averaged relative to IE7 on Vista SP2. (The scoring arithmetic for these batteries is sketched below.)
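
For readers who want to picture the Nontroppo timing described above, here is a minimal sketch of the three load measurements and the five-loops-of-ten re-rendering timer. It is our own reconstruction under stated assumptions, not nontroppo.org's actual code: the event handling, the polling for the first <DIV>, and the way the one-sixth/two-sixths/three-sixths weights are applied are all illustrative.

    // Hypothetical reconstruction of the three load timestamps.
    var t0 = new Date().getTime();
    var times = {};

    // 1. DOM availability (one sixth of the rendering score).
    document.addEventListener('DOMContentLoaded', function () {
        times.dom = new Date().getTime() - t0;
    }, false);

    // 2. First element access (two sixths): poll until the first <DIV> is reachable.
    (function pollFirstDiv() {
        if (document.getElementsByTagName('div')[0]) {
            times.first = new Date().getTime() - t0;
        } else {
            setTimeout(pollFirstDiv, 10);
        }
    })();

    // 3. The conventional onLoad event (three sixths).
    window.onload = function () {
        times.onload = new Date().getTime() - t0;
        // Weighted load time (lower is better); converted to a score
        // relative to IE7 afterwards.
        var weightedLoad = times.dom / 6 + times.first * 2 / 6 + times.onload * 3 / 6;
    };

    // Re-rendering: ten iterations in each of five loops (50 total), so the
    // millisecond timer has a chance to tick between readings.
    function timeRerender(restyleOnce) {
        var perIteration = [];
        for (var loop = 0; loop < 5; loop++) {
            var start = new Date().getTime();
            for (var i = 0; i < 10; i++) {
                restyleOnce();   // re-style the arrays of <DIV> elements
            }
            perIteration.push((new Date().getTime() - start) / 10);
        }
        return perIteration;     // averaged with the load score afterwards
    }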
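
The per-category arithmetic for the Celtic Kane and SunSpider batteries is the same idea repeated: convert each category's time into a score relative to IE7 on Vista SP2, then average the categories into a single battery score. The sketch below, including every number in it, is purely illustrative.

    // Hypothetical sketch: per-category relative scores, averaged per battery.
    // All times are fictional; lower is better, so relative = IE7 / browser.
    var ie7Times     = { math: 420, regexp: 310, strings: 980 };   // milliseconds
    var browserTimes = { math:  60, regexp:  45, strings: 150 };

    var total = 0, count = 0;
    for (var category in ie7Times) {
        total += ie7Times[category] / browserTimes[category];
        count++;
    }
    var batteryScore = total / count;   // this battery's contribution to the CRPI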

Next: The additions and changes we've made...
