Which system manager, while under attack from hackers or botnets, hasn’t fantasised about hacking back? Using the bad guys’ own techniques and technologies against them — so-called ‘active defence’ — is tempting. But whether for purposes of research, prosecution or revenge, this kind of activity is fraught with dangers, legal and technical.
Back in April 2008, a group of researchers from France and Germany published the results of their ‘poisoning’ of the infamous Storm botnet.1 The researchers fed the botnet false information in an effort to disrupt it, with some success. They also attempted to separate part of the botnet from the rest. This failed.
The publication of the research raised another kind of storm: the ethics and legality of the work were hotly debated because, inevitably, the research involved innocent people — those who owned the trojan-infected machines that comprised the botnet.
Taking over Torpig
The controversy has been revived following the hijacking of the Torpig, aka Sinowal or Anserin, botnet by another group of researchers at the University of California, Santa Barbara.2 This time the approach was more passive, closer in spirit to a honeypot.
The researchers decompiled the trojan code to discover how it communicates with Command & Control (C&C) servers. Torpig deploys a domain flux approach: the botnet operators constantly move the domains of the C&C servers. The bots know where to look because they use an algorithm to create a list of domain names, seeded in part by the current date. The number of domain names generated this way is very large, but because the bots will try each in turn, until they find one that sends back the correct response, the operators don’t need to own many of the domains, just enough to establish regular connections.
It also means that the list of potential domains is predictable for any given week. The researchers simply needed to register one of the names ahead of the bad guys, and set up a server configured to send the right responses.
It worked, and they had control of the botnet for 10 days before the operators were able to regain control. During that period, the researchers’ server was contacted by an estimated 182,800 infected machines which delivered more than 70Gb of data. This included credentials of 8,310 accounts at 410 institutions, such as PayPal and banks, and 1,660 credit and debit card numbers.
Not long after, another botnet controversy was sparked by the BBC’s TV programme Click. It bought the use of a botnet with 22,000 infected machines in order to demonstrate how bots are used to send spam and launch Denial of Service (DoS) attacks.3 The spam was sent to two email accounts set up for the purpose. And the DoS attack was launched against a domain owned by security company Prevx, with its permission.
The BBC said it then “destroyed” the botnet — but not before changing the desktop wallpaper of the infected machines to warn their users that their PCs were infected.
Dangers of interaction
So where’s the problem? An example of the dangers was, perhaps, provided recently when a botnet, based on the Zeus toolkit, apparently auto-destructed. The cause is obscure, but it’s possible that the botnet’s own operators took it down to avoid detection or, more intriguingly, killed it simply by mistake.
It was destroyed by a C&C server issuing a kill command to all bots — the so-called ‘nuclear option’. Not all trojans include this facility, but many do. The actions taken vary: the trojan, in an attempt to cover its tracks, might erase areas of memory, delete registry entries and disk files, and generally cause havoc. All too often, this renders the machine unbootable.
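The danger can be made concrete with a sketch of a bot-side command dispatcher. This is illustrative pseudocode-style Python, not real trojan code, and the command names are invented; the point is that a 'kill' branch sits alongside routine commands, and an unrecognised input falls through to behaviour nobody outside the operators' circle has tested.

```python
# Illustrative sketch (NOT real bot code) of why sending the wrong
# response to a botnet is risky: destructive behaviour can be wired to
# commands, and malformed input falls through to an untested default.

def dispatch(command: str) -> str:
    """Map a C&C command string to an action description."""
    handlers = {
        "ping":   "send heartbeat",
        "update": "fetch new module",
        "spam":   "run spam campaign",
        "kill":   "wipe traces: erase memory, delete registry entries and files",
    }
    # Nothing guarantees that the fallback path in real trojan code is
    # benign -- this is exactly the unpredictability researchers fear.
    return handlers.get(command, "undefined behaviour")
```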
What worries many security specialists about interacting with botnets is the possibility of unintended and unpredictable consequences as a result of issuing the wrong response or hitting a bug in the code.
Anti-virus software vendors have been known to use kill commands to force trojans to remove themselves. But this is a solution that runs locally, on individual machines on which the bot code has been carefully identified. Even so, it’s done rarely.
“The bots are clever now and it’s extremely risky, because passing such a command nowadays might kill the operating system,” explains Raimund Genes, CTO of Trend Micro. “Bots might have safeguards because the bad guys have learned what we are doing; so they build in failsafes and these might be ‘kill the computer’.”
Decompiling the threat
The Torpig researchers went to great pains to understand the trojan code — they decompiled it to understand what it did and how it responded. One of the researchers, Brett Stone-Gross, told us: “In order to make sure our responses were safe, we went through the following testing process on our own local honeypot. We set up a virtual environment with an infected machine and performed DNS poisoning to make the bot connect to our test web server. On the web server we served the two responses that we observed from the real C&C server. During the entire process, we logged all network traffic and monitored file system activity.”
The researchers observed that the standard response from the server did not result in any system modification by the bot. “We also reverse engineered the binary and did not see any type of ‘kill switch’ command,” they added.
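The cautious stance the researchers describe, replaying only responses they had verified in a sandbox, amounts to a sinkhole that never improvises. The sketch below is hypothetical: the reply bytes and log fields are placeholders, not the real Torpig protocol.

```python
# A minimal sinkhole sketch in the spirit of the Torpig takeover: answer
# every bot with a previously observed, sandbox-verified reply and record
# everything received. CANNED_OK is an invented placeholder, not the
# actual bytes the Torpig C&C servers sent.

CANNED_OK = b"ok\n"

def handle_bot_request(path: str, body: bytes, log: list) -> bytes:
    """Record what the bot sent, then return only the known-safe reply."""
    log.append({"path": path, "size": len(body)})
    # Never improvise: anything other than the replayed, sandbox-tested
    # response risks triggering untested code paths in the bot.
    return CANNED_OK
```

The design choice matters more than the code: because the server only ever echoes a response already observed to cause no system modification, the interaction stays as close to passive as a live takeover can be.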
Torpig is built on top of the Mebroot rootkit, which is installed as the result of a drive-by infection. It provides a framework for trojan code of various types. It also provides mechanisms for updating exploit code and adding new modules.
This raises a key issue. While researchers may carefully analyse trojan code on their own infected machines, it is impossible to guarantee that this code is identical to that infecting other bots. Even if the trojan code reports its version number — as is the case with Torpig — how certain can you be that this data is reliable?
“The advance in skill among malware writers means that the code you’re analysing at any one time is not guaranteed to be the same code a day or two later,” says Dave Hartley, a security consultant at Activity. “You might think you have control of a botnet, that you know everything that it does, and you’re operating it within safe realms. But the bad guys have control of the code base. They’re the ones in control.”
The BBC’s explanation that it “destroyed” the botnet it used has also been treated with some incredulity among the security community, with many questioning whether they could actually achieve that.
“They may purchase access to a botnet, or certain services from it,” says Hartley. “But the bad guys that actually compromise those machines and have ultimate control sell certain portions off. These guys are not idiots. They’re not going to hand over full control: they want to maintain control themselves so they can sell it on.”
Even if the BBC did manage to shut the botnet down, questions remain about how that was achieved while assuring the safety of the infected machines.
Of course, the problems aren’t just technical. In all these cases, some use was made of the infected machines, which raises ethical and legal issues.
“You can only do these sorts of experiments on your own equipment,” says Neil O’Neil, principal digital forensics investigator at the Logic Group. “You can’t involve any third party’s machine without explicit permission — legally or ethically. The moment you write to anyone’s hard drive, that’s unauthorised access.”
Even the basic communication between a C&C server and its bots, such as in the Torpig research, could represent unauthorised use of the infected machines. The BBC experiment involved actually using the machines to perform spamming and a DoS attack, not to mention changing the machines’ wallpaper.
In the UK, this activity would likely breach the Computer Misuse Act (CMA) 1990 and Police and Justice Act (PJA) 2006. There are other legal dangers, too. In the case of the Torpig study: “They captured people’s login details, emails and so on, and they actually searched through the content of their emails, which are private communications,” says Hartley. “In the UK, this would have contravened the Human Rights Act, the Data Protection Act and the Regulation of Investigatory Powers Act (RIPA), because they had no authority to intercept those communications.”
(It should be pointed out that the Torpig researchers co-operated closely with law enforcement agencies and other authorities and ensured that the personal data they captured was encrypted and kept safe.)
So, if you had the chance to take control of a botnet that was attacking you and shut it down, should you?
“If we see a command and control centre, we immediately talk with the authorities,” says Trend Micro’s Genes. “We are equally careful making a data dump: sometimes we make it, but we immediately say to the authorities that we’ve done it. Just the possession of that data is extremely risky. I wouldn’t recommend a company to try it.”
Whether it’s a DoS attack or a hacker penetrating your firewall, it’s natural to undertake at least some forensic activity. Simply tracing and identifying the sources of attack using basic tools like traceroute and whois won’t put you beyond the law. But that’s about as far as you can safely go. Yet this is unlikely to yield much in the way of useful information.
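That passive first step, identifying where an attack is coming from, can be done entirely offline before any lookups are made. A sketch, assuming a netfilter-style log format with `SRC=` fields (the format here is an invented example):

```python
import re
from collections import Counter

# Sketch of the passive first step described above: extract source
# addresses from a firewall log and rank them by frequency, leaving
# `whois` and `traceroute` lookups as a manual follow-up step.
# The SRC= log format is an assumed example, not a universal standard.

IP_RE = re.compile(r"SRC=(\d{1,3}(?:\.\d{1,3}){3})")

def top_sources(log_lines: list, n: int = 5) -> list:
    """Return the n most frequent source IPs seen in the log lines."""
    hits = Counter(m.group(1)
                   for line in log_lines
                   for m in IP_RE.finditer(line))
    return hits.most_common(n)
```

Each resulting address can then be checked by hand with `whois` and `traceroute` — the basic, lawful tracing the article describes — though, as O’Neil notes below, the trail rarely leads to the attacker’s own machine.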
“Hackers never use their own machines,” says O’Neil, “so you’re unlikely ever to get to the hacker’s PC.”
Going any further than basic tracing could put you beyond the law. And any kind of forensic investigation, if not done properly, could be counter-productive.
“My advice would be not to even try to trace it back because you might show the attacker that you’ve spotted them,” says Genes. “We’ve seen customers panic — unplugging network cables and reinstalling machines. The best thing is to keep your system alive as a decoy, isolate it from the rest of the network environment, and seek professional help.”
Worse, you could compromise the work of the professionals, including law enforcement.
“It is extremely difficult to piece together a trail of evidence if the systems are not correctly configured to log the relevant information and protect that information from manipulation or modification,” says Hartley. “Architecting systems so that they can provide sufficient evidence that can be used in a court of law is also a difficult task to accomplish without professional advice and assistance from consultants versed in the intricacies of the law and digital forensics techniques.”
So when it comes to fighting back, there’s a simple rule: don’t try this at home.
1. ‘Measurements and Mitigation of Peer-to-Peer-based Botnets: A Case Study on Storm Worm’ by Thorsten Holz, Moritz Steiner, Frederic Dahl, Ernst Biersack, Felix Freiling — University of Mannheim, Institut Eurécom, Sophia Antipolis, 2008. http://www.usenix.org/events/leet08/tech/full_papers/holz/holz.pdf
2. ‘Your Botnet is My Botnet: Analysis of a Botnet Takeover’ by Brett Stone-Gross, Marco Cova, Lorenzo Cavallaro, Bob Gilbert, Martin Szydlowski, Richard Kemmerer, Chris Kruegel and Giovanni Vigna — UCSB Technical Report, Santa Barbara, CA, April 2009. http://www.cs.ucsb.edu/~seclab/projects/torpig/index.html
3. BBC team exposes cyber crime risk – http://news.bbc.co.uk/2/hi/programmes/click_online/7932816.stm