Decoding battery life for laptops

Last Updated : 30 June 2009, 16:40 IST

This is a story of truth, greed and the American Way. Oh, and also laptop battery-life benchmarks. Two things about battery-life measurements for laptops: First, they usually bear little relationship to reality. I don’t know about you, but my “five-hour” battery often dies halfway between JFK and LAX. Second, laptop ads always use that essential tool of wiggle-roomers everywhere, “Up to.” As in, “Up to five hours.”

Folks, “up to” is one of the greatest cop-outs in the English language. You know what? I’ve got a laptop that gets “up to” 1,000 hours on a charge! Because “up to” just means “something below this number.”

Well, so what, right? Why pick on laptop makers? Every industry does it, right?

Wrong.

In 2003, the digital camera industry had a similar problem. Every company was advertising its cameras’ battery life in overblown terms. Each had its own testing protocol, none representative of real life. Pretty soon, consumers realised that the battery statistics were basically meaningless.

Eventually, CIPA (the Camera and Imaging Products Association), a camera-industry trade group, took action. It developed a standardised battery-life test. You take one photo every 30 seconds—half with the flash on, half with the flash off. You zoom all the way in or all the way out before every shot. You leave the screen on all the time. After every 10 shots, you turn off the camera for a while. And so on.

In other words, you test the camera pretty much the way people would use it in the real world, erring on the side of conservatism. Nowadays, all cameras are tested and advertised this way. And CIPA ratings now match up with reality.
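To see how a recipe like that collapses into one comparable number, here is a minimal simulation sketch in Python. It is not CIPA’s actual procedure; every energy figure below is an invented placeholder, there only to show the mechanics of a standardised shot-count test.

```python
# A toy model of a standardised shot-count test. All figures are
# invented placeholders, not CIPA's real parameters.

BATTERY_WH = 4.0          # hypothetical battery capacity, watt-hours

# Hypothetical per-action energy costs, in joules (watt-seconds)
SHOT_COST = 6.0           # focus + capture + write to card
FLASH_COST = 8.0          # extra cost when the flash fires (every 2nd shot)
ZOOM_COST = 3.0           # full zoom in or out before every shot
SCREEN_WATTS = 0.9        # the screen stays on the whole time
POWER_CYCLE_COST = 5.0    # camera switched off and on after every 10 shots
SECONDS_PER_SHOT = 30     # one photo every 30 seconds

def cipa_style_shot_count() -> int:
    """Count shots until the simulated battery is exhausted."""
    remaining = BATTERY_WH * 3600.0   # watt-hours -> joules
    shots = 0
    while True:
        cost = SHOT_COST + ZOOM_COST + SCREEN_WATTS * SECONDS_PER_SHOT
        if shots % 2 == 0:            # flash fires on half of all shots
            cost += FLASH_COST
        if shots and shots % 10 == 0: # periodic off/on cycle
            cost += POWER_CYCLE_COST
        if remaining < cost:
            return shots
        remaining -= cost
        shots += 1

print(f"Estimated shot-count rating: {cipa_style_shot_count()} shots")
```

The point is not the numbers; it is that once the recipe is fixed, every manufacturer’s rating is produced the same way, so the ratings can be compared.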

But laptops are more complicated, right? Many more factors determine battery life: what you’re doing, how bright the screen is, what wireless features are turned on, and so on. Yet other industries have faced this problem, too. Cellphones, for example: The battery dies a lot faster when you’re making calls than when you’re just carrying the thing in your pocket. Cars: You generally get much better mileage on the highway than in the city. Even iPods: You get much better battery life when you’re playing music rather than video.

So their manufacturers do the only logical thing — they give you the worst-case/best-case numbers.

When you shop for a cellphone, you see, “4 hours talk time/300 hours standby.” When you shop for a car, you see “26 mpg city/32 highway.” When looking over an iPod, you see “24 hours of music playback/6 hours of video.” And everybody’s happy.

But with laptops, what do we get? “Up to five hours.” This is important, because battery life has become a huge selling point. People have finally managed to unlearn the Megahertz Myth (hallelujah!), so they’re looking at battery life as a crucial buying factor.

Why doesn’t the computer industry invent a standard battery test?

Actually, they have. Those “up to” numbers are the results of a test suite called MobileMark 2007. There are a few problems with the MobileMark test. One of them is the identity of its inventor. It’s Bapco (Business Applications Performance Corporation), a trade group led by Intel and composed primarily of laptop and chip manufacturers.

Let’s see: a benchmark developed by precisely the companies who profit if battery life looks good. Isn’t that like putting the foxes in charge of henhouse inspections?

Another problem: Unlike CIPA’s camera tests, the MobileMark test protocol doesn’t reflect real-world use. For example, the screen. It’s the most power-hungry component of a laptop, so specifying how bright it is during your test is extremely important.

Well, the MobileMark test specifies that you have the screen set to 60 nits (a brightness measurement). The screens on modern laptops put out 250 to 300 nits. The MobileMark test, in other words, specifies setting the screen at a fraction of full brightness — a setting that few people use in the real world. (Advanced Micro Devices says that 60 nits is about 20 per cent brightness on most laptops. Intel says it’s closer to 50 per cent. Either way, it’s too low.)
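The arithmetic behind that disagreement is easy to check against the figures quoted above; a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope check on the 60-nit test setting, using the
# panel brightness range quoted above (250 to 300 nits).
test_nits = 60
for max_nits in (250, 300):
    print(f"{test_nits} nits on a {max_nits}-nit panel = "
          f"{test_nits / max_nits:.0%} of full brightness")
# -> 24% and 20%: roughly AMD's figure. Where 60 nits falls on a given
# laptop's brightness slider varies by panel, which presumably explains
# Intel's higher estimate.
```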

The MobileMark test doesn’t specify whether battery-eating features like Wi-Fi and Bluetooth are turned on during testing. That decision is left up to the manufacturers when they test their own laptops.

Finally, there’s the MobileMark test itself. Actually, there are three of them.

In the DVD test, you play a DVD movie over and over until the battery’s dead — a worst-case, shortest-life situation.

In the Productivity test, an automated software robot performs business tasks like crunching numbers in Excel, manipulating graphics in Photoshop and sending e-mail. This ought to be the most realistic test—except that it doesn’t include any use of Web browsers, iTunes, Windows Media Player, online TV shows or games. Oops.

In the final test, called Reading, an automated script pretends to read a PDF document, pausing two minutes on each page. This, clearly, is the best case; it’s not wildly different, in fact, from leaving the laptop unattended.
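To make the Reading scenario concrete, here is a minimal sketch of what such a rundown script could look like. MobileMark’s actual harness is not public in this form, so the page-turn step below is a placeholder, and the battery gauge is read with the third-party psutil library.

```python
# A sketch of a Reading-style battery rundown, NOT MobileMark's code.
import time
import psutil  # third-party battery gauge: pip install psutil

SECONDS_PER_PAGE = 120  # the Reading test pauses two minutes per page

def turn_page() -> None:
    """Placeholder: a real harness would script the PDF viewer here."""
    pass

def reading_rundown() -> float:
    """Loop until the OS reports an empty battery; return hours elapsed."""
    start = time.monotonic()
    battery = psutil.sensors_battery()
    while battery is not None and battery.percent > 1:
        turn_page()
        time.sleep(SECONDS_PER_PAGE)  # "read" the current page
        battery = psutil.sensors_battery()
    return (time.monotonic() - start) / 3600.0

if __name__ == "__main__":
    print(f"Reading-style result: {reading_rundown():.2f} hours")
```

Notice how little the machine is actually doing in this loop; that idleness is exactly why Reading produces the flattering, best-case number.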

So which of those tests gets reported in the laptop ads?

Intel says it’s the Productivity test, but why aren’t we allowed to see all three results?

All of this brings us to Advanced Micro Devices, which has spent several weeks blogging about this silliness and bringing it to the attention of tech writers like me.

AMD thinks the industry should adopt a much more realistic benchmark for laptops—and then represent the results in a style that matches cellphones, iPods and cars. It’s proposing a new logo that clearly shows the best-case/worst-case numbers. Your laptop’s box might say, “2:30 Active Time/4:00 Resting Time.”
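Turning measured minutes into that kind of label is trivial; here is a toy formatter, with the function names mine and the minute values taken from the example above.

```python
# Toy formatter for the proposed worst-case/best-case battery label.
def _hmm(minutes: int) -> str:
    """Render a minute count as H:MM."""
    return f"{minutes // 60}:{minutes % 60:02d}"

def battery_label(active_min: int, resting_min: int) -> str:
    """Format both measured figures in the proposed dual-number style."""
    return f"{_hmm(active_min)} Active Time/{_hmm(resting_min)} Resting Time"

print(battery_label(150, 240))  # -> 2:30 Active Time/4:00 Resting Time
```
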
And, predictably, AMD reports it is meeting with “considerable resistance” from the big industry players.

Intel, AMD’s archrival, seems especially annoyed by all this muckraking. A spokesman, Bill Kircos, says MobileMark is “a well thought, well debated and very sound benchmark.” Besides, if a shopper doesn’t like it, “there are a wealth of independent tests, reviews, magazine articles and company information to see what people are getting on battery life, in addition to the three-faced MobileMark benchmark.”

Wait — consumers are supposed to make up for MobileMark’s failings by spending hours hunting online for realistic battery tests?

Wouldn’t it save effort all around to have a realistic, reliable test? That’s how the cellphone, auto and music-player industries do it; why not computer makers?

That one’s easy: because there are big dollars at stake. People pay more when they think they’re getting better battery life. By misleading the public with bogus battery statistics, stores and computer and chip makers make more money. No wonder cynics call it “benchmarketing.”

(Intel’s spokesman also told me AMD has yet to propose a better battery-testing regimen to Bapco, of which AMD is also a member. AMD retorts that’s not necessarily true: “All Bapco discussions are confidential. If Bapco is willing to waive these confidentiality obligations or make its meeting minutes public, AMD will be happy to discuss what it has or hasn’t presented to Bapco.”)

It’s pretty obvious why Intel wants to keep the status quo. But what’s AMD’s motive in stirring up this hornet’s nest, anyway? According to tests by Laptop magazine and others, AMD laptops in general have shorter battery life than Intel laptops.

But in more realistic battery-life tests, the gap between AMD and Intel laptops closes somewhat. So yes, everybody’s got an agenda on this one. But yours should be to support AMD’s campaign. It’s logical, it’s fair — and it’s long overdue.
