
War beneath the web

Last Updated 17 November 2009, 16:40 IST

The email from Google in June was the first sign: it warned that the Free Our Data site seemed to be host to a set of hidden spam links – or as Google put it, “techniques that are outside our quality guidelines.” It took more than two months to discover the true extent of the hacking, which had planted links all over the website to an “online pharmacy” selling dubious products.

More surprising, on digging into the problems, was the realisation that Free Our Data was only one of a network of sites that had been hit in a similar way by exploiting a subtle, hidden flaw. Others with similar spam links included the Montserrat Volcano Observatory site, a European research site, a Minneapolis-based artist, an Australian website for singers, a recruiting company in California, the personal webspace of a maths professor at the University of Texas in San Antonio, and a medical devices website run by a large healthcare company.

A search for “online/canadian” will certainly turn up hundreds more sites that have been compromised in the same way, such as the Imperial Ice Stars website. Nor was this some Windows server exploit; the hacker seemed to have found holes in the open source content management systems (CMS) of each of the blogs, exploiting them to alter the sites at will.

I found two separate “control panels” inserted into Free Our Data, their names disguised to make them seem like innocuous pieces of site code; instead, they gave the hacker complete control to add any file to the site, and insert any content into its related databases. The code carries text claiming to be by a Chinese hacker called “4ngel”, though it's most likely that the hacker responsible simply bought or copied it. The password – “yahoo” – also gives a clue to its owner's likely email address.

That so many apparently diverse sites could each be attacked by the same method gives one pause for thought. While PCs running Windows are increasingly the target of better-designed security exploits – as we explained last week (Enemy of the state, 5 November) – what about the millions of sites on the web that are either hosted by individuals or run by companies for whom staying ahead of server and CMS security issues is not top priority? What can we say about the state of web security?

New tricks

The web seems a different place than in September 2001, when the “Nimda” worm ravaged it – automatically infecting Windows servers, seeking out more to infect and planting an infected file on webpages so that any machine viewing them with Internet Explorer 5 would also be infected. But that doesn’t mean security has become tighter.

The addition of spam links to a webpage is a comparatively low-key problem. The bigger risk now is from “drive-by” downloads – malware (malicious software) that will try to infect Windows machines that visit a particular website by exploiting vulnerabilities in the browser.

Experts agree that the change is due to one critical factor: money. Hackers generally don’t now aim to make a mess; they do it to get cash. “The difference is that in about 2003 people realised they could use these weaknesses to make money,” explains Richard Clayton, a security researcher at Cambridge University. “There are three ways they do it: drive-by downloads, which enlarge a botnet [which can be hired to send spam, assist in the theft of personal details, or attack websites to extort their owners]; hosting a phishing site, so they can collect login details; and putting spam links on the site to raise the spam's search engine ranking.” The hacking of Free Our Data and the other sites had the latter purpose.

Part of what’s changed is the point at which a site’s vulnerabilities are exploited. Lloyd Brough, a managing consultant at NCC Group Secure Test, has been in web security for about 10 years. “Nowadays, it’s application-based,” he explains. Exploits such as those used for Nimda targeted the web server software itself. Generally, that has now been hardened.

So instead the target is the databases or associated software through which sites’ content, user requests and contributions are managed. These are frequently attacked through a method called “SQL injection”. If the code that handles a submitted form, for example, doesn’t escape or reject dangerous strings, a crafted submission can be used to subvert the site. “We first noticed that about six years ago,” says Brough, “and people are still writing code that isn't properly escaped.”
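The attack Brough describes fits in a few lines. A minimal sketch (the database, table and names here are illustrative, not taken from any of the hacked sites): when a form value is glued directly into a SQL string, the attacker can rewrite the query itself; a parameterised query treats the same value purely as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")
conn.execute("INSERT INTO users VALUES ('bob', 'bob@example.com')")

# A crafted form value: the quote closes the string literal early,
# and OR '1'='1' makes the WHERE clause true for every row.
user_input = "alice' OR '1'='1"

# UNSAFE: string concatenation lets the input become part of the SQL.
unsafe_query = "SELECT email FROM users WHERE name = '%s'" % user_input
leaked = conn.execute(unsafe_query).fetchall()

# SAFE: a parameterised query binds the input as a plain value,
# so the literal string matches no user at all.
safe = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(leaked))  # 2 -- the injected clause matched every row
print(len(safe))    # 0 -- the literal string matches no name
```

The same principle applies whatever the database or language: user input should never be spliced into a query string.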

Search and destroy

Nowadays, attacks at that application layer – on databases, the web scripting languages such as PHP and ASP, or even on cookies (items of data stored on users’ machines) issued by the website – are commonplace. But what might be surprising is the methods used to identify sites to break into.

Clayton and his team have done extensive research into phishing sites hosted on cracked web servers. “We found the same sites would get hacked. Our insight was that people were using Google to find websites to break into, by doing specific searches for particular versions of software that they knew had particular vulnerabilities – Wordpress 1.3.1 or Drupal or whatever. So they’d do a Google search, find those sites and then hack all 50 sites using the same method.”

Clayton’s team could demonstrate that this was how it was done by studying the sites’ logs. And that wasn’t the end of it: sometimes the same site would be hit by more than one team of hackers, who would each put their own exploit onto it. And the worst of it was that the Google search method meant that, if the site wasn’t cleaned, updated and hardened extensively after the break-in was discovered, says Clayton, the chance of being compromised again in the next six months was 50%. “It’s like cleaning up after a burglary but not fixing the open window downstairs,” he says.

Bigger game

The targets are getting bigger, too. In the past couple of months, both the New York Times and the gadget site Gizmodo have seen their online advertising compromised to try to create “drive-by” infections; and the growing use by criminals of “iframes” – invisible or tiny webpages-within-webpages which may take their content from anywhere on the net – has increased the risk to the casual browser.

But is there an endpoint? Might it level off? The consensus is no.

“It’s a big problem and getting worse,” says Dave Jevans, chief executive of IronKey and chair of the Anti-Phishing Working Group. “When I have tracked website attacks, I’ve found it convenient to look at the Zone-H statistics. Zone-H.org reports on website breach defacements, as reported by bragging hackers. The exact same attack methodologies are used to make a website host malware or a phishing site.

“Today they reported 1,110 defacements so far. For the month of October 2009 they reported 47,560. So that's about half a million defaced websites per year. Now keep in mind that this is reporting by hackers themselves. Imagine the number of sites that are attacked and breached that are not reported to Zone-H.”

It’s a scary thought: can we trust the web? Bruce Schneier, a security consultant and columnist for the Guardian, thinks the important thing for the web user is to stay aware. “You need to have a good bullshit detector when you’re out there,” he says. “I lock down my browser. I don’t have stuff that I haven’t asked to be running – audio, video, whatever.” But as to when it will end, Schneier is not hopeful. “It’s an arms race,” he says simply.

(Published 17 November 2009, 16:40 IST)
