For almost a year now, I have had the opportunity to work with a small network of machines that are part of a high-speed network and publicly exposed to the Internet – and thus reachable by everyone. During the last quarter of 2014 (around November) and the first quarter of 2015 (around February), these machines were targeted by Chinese attackers. They gained access to some of the machines, probably through SSH brute-force attacks, and placed a couple of binaries that infected each machine to – I assume – send continuous traffic to a set of targeted locations. At the end of the day, this is nothing new: a machine connected to the Internet that is not properly maintained or secured (weak passwords, default SSH port, root login allowed, no firewall, and so on) is music to an attacker's ears.

This comes after I first noticed strange behaviour on a number of machines: top showed a strange process called IptabLex (or, in later attacks, a random sequence of characters) consuming tons of CPU. Apparently, this process forced the machine to take part in a DDoS botnet. The first wave of the attack is discussed on the CSO blog and analysed at Malware Must Die!. As for the second wave, a detailed explanation is given in this post, along with a description of its variants and the rootkit (XOR.DDoS) used by the attackers.

In this post I'll summarise what a colleague and I found on the matter. If you fear your machine is infected, you can get a glimpse of the attack here, but be aware that the attack may have been further refined by the time you read this.

First wave of the attack

During the first attack I noticed outrageous CPU consumption on one of the machines, caused by a strange file called IptabLex located under the /root/ folder. At the time, I just killed its process, removed the file, and checked that consumption was back to normal. I did not investigate further, as I am not the sysadmin for these machines and the consumption seemed fine again.

Second wave of the attack

During February, however, the problem with CPU consumption had spread to other machines; a considerable part of our experimental network had been compromised by that point. The attack also seemed more subtle this time: the consuming process respawned under a different name after its previous instance was killed, even when some of its files had been removed. Let's assume there's an infected machine and go through this step by step.

What to do now?

The safest and cleanest option is to install from scratch or restore a backup. See this and this. This is sort of an embuggerance, in Pratchett's terms.

However, this post is not focused on installing from scratch or restoring a previous version. If you have a complex environment and, for whatever reason, you have not documented, automated or backed up your deployment, then you may prefer to try sanitising the environment so you can continue working on the machine as soon as possible. This is not a thorough guide, but here are some hints that may help you identify the problem, isolate it and, hopefully, destroy it. As already said, this is a workaround to fix a compromised environment so you can resume work on your machine at full capacity; you should still do a clean install of the OS on the machine as soon as possible.

First: identify if your VM is compromised

It usually seems hard to tell whether your VM is part of a DDoS botnet, as the first thing the attacker does is cover their footprints.

In this case, while the malicious binaries attempt to hide themselves, it is actually (still) possible to spot them. Provided a given machine reports strange CPU consumption, you should first look for the files IptabLes and IptabLex, but now also for the files aiziwen and 2862ashui8u.

These are the paths of the infected files in my machines, corresponding to the first and second attack, respectively. Note that their locations may vary for future attacks and maybe in different machines.
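A quick way to sweep the likely drop locations for those names is a small find loop. The file names and directories below are the ones from the two waves described here; newer variants will almost certainly use different names and paths:

```shell
# Search the locations used by this attack for the known dropper names.
# Names and directories come from the waves described in this post; adapt them.
for f in IptabLes IptabLex aiziwen 2862ashui8u; do
    find /usr/bin /etc/init.d /root /boot -name "$f" 2>/dev/null || true  # ignore permission errors
done
```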

Besides, you should also check top:

See that process with random characters? It is consuming a lot of CPU and does not look like a typical Unix process. If your machine presents any of these symptoms, you can assume it is infected.
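The same check works from the command line if you cannot watch top interactively; this simply lists processes sorted by CPU usage so oddly named entries stand out (GNU procps syntax):

```shell
# Show the top CPU consumers. A high-CPU process with a random-looking
# name (e.g. "djaafnvlvv") that is not a daemon you recognise is a red flag.
ps aux --sort=-%cpu | head -n 10
```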

Second: isolate your machine

First things first: you have to get the machine off the public network. Do you have physical access to it? Great – work locally only. If you don't, you can either inform the sysadmin and ask for help, or try to fix it yourself first. If you want to proceed alone at this first stage, though, you should be confident that you can stop and start the machine at any time, because you shouldn't leave it connected to a public network any longer than necessary. Remember, it is being used to attack other machines.

Third: understanding how the attack works

Look first for the process with the random name that is consuming most of the machine's CPU (djaafnvlvv in this iteration). Run lsof -p $process_pid; this will give you the location on disk of the files used to run the process. Alternatively, you can scan the filesystem for it directly:
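As a sketch – the name djaafnvlvv is this round's example, and the use of pgrep is my own shorthand for finding the PID – locating the binary behind a suspicious process could look like this:

```shell
# Resolve the on-disk files behind a suspicious process
# ("djaafnvlvv" is the name from this round; substitute yours).
pid=$(pgrep -o djaafnvlvv)              # oldest matching PID, if any
if [ -n "$pid" ]; then
    lsof -p "$pid"                      # open files, including the executable
    readlink "/proc/$pid/exe"           # procfs alternative if lsof is unavailable
fi
find /usr/bin /etc/init.d -name djaafnvlvv 2>/dev/null || true  # or scan likely paths
```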

Then, have a look at the content of the files, for instance, the file under /etc/init.d/:

The file /etc/init.d/djaafnvlvv is an init script that starts the infected ELF binary (/usr/bin/djaafnvlvv). You can try strings $binary_file to see the strings – the human-readable text – inside it. As an example, here are the last 6 lines of one of the original infected binaries, /root/2862ashui8u – which, if I remember correctly, copied itself under /lib/

The first IP matches an open connection opened by one of the attacker's processes. A few lines before that, there's a dynamic library that replaces one in the system and is executed afterwards. That behaviour is really fishy, and alarms should be blaring by now. You can find an interesting and more complete analysis of this attack here. You may notice in that post that the XOR key BB2FA36AAA9541F0 also appears. This is another indicator of an infected machine, in this case corresponding to variant no. 2 of the attack.

Coming back to the analysis of the files, checking the strings in /root/aiziwen (strings /root/aiziwen | grep -E -o "([0-9]{1,3}\.){3}[0-9]{1,3}") returned tons of IPs; several of them belong to CHINANET, and another is located in California.
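The same extraction, with duplicates collapsed, can be sketched as follows. The /root/aiziwen path is the sample from this attack; point it at your own suspicious binary:

```shell
# Extract the printable text from the binary and keep only IPv4-looking
# tokens, deduplicated, to see which hosts the sample knows about.
if [ -f /root/aiziwen ]; then
    strings /root/aiziwen \
        | grep -E -o '([0-9]{1,3}\.){3}[0-9]{1,3}' \
        | sort -u
fi
```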

So far, it seems that the two initial files were downloaded into the machine and executed; then an infected version of is placed under /lib and executed as well. After sending a SIGKILL (kill -9) to the active process (e.g. djaafnvlvv in this round), another process with a similarly random name starts running after a while. Something in the background is watching for this kind of signal. Likewise, after removing the /lib/ library, it reappears within minutes.

Now, open the /etc/crontab file with your preferred editor. We saw the following entry at the end of the file:

Although the name of the file may vary between attacks, the behaviour is the same: every 3 minutes, it calls that script. I do not recall its contents, as it has already been removed, but it ran the infected process with the random name. At the same time, this crontab entry is generated by the copy of the virus (presumably in /lib/), which makes the whole thing cyclic: one spawns the other and vice versa.
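For reference, the entry had this general shape. The script name below is purely hypothetical – reconstructed from memory – and will differ on your machine, but the every-3-minutes schedule is the telltale part:

```
# /etc/crontab – malicious entry (hypothetical script name, runs every 3 minutes)
*/3 * * * * root /etc/cron.hourly/cron.sh
```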

Fourth: identifying open connections

This step is not that useful for cleaning your system, but it is interesting to know which location your infected machine is trying to flood, or from where it is downloading the infected files. The next step deals with cleaning the system, so you may skip straight there.

Looking at the open connections (lsof -i tcp) related to the infected files, we found the server from which the two infected files mentioned in the first step were being downloaded:
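A sketch of that check – lsof is what we used; ss (from iproute2) is a common alternative on modern distributions, and the final echo just keeps the sketch from erroring out where neither tool is installed:

```shell
# List TCP connections with their owning processes. Look for sockets
# opened by the randomly named process or connections to unfamiliar hosts.
lsof -i tcp -n -P 2>/dev/null \
    || ss -tnp 2>/dev/null \
    || echo "install lsof or ss"
```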

That IP belongs to CHINANET and seems to be blacklisted for spam and botnet activity.

Update on March 22nd, 2015: the server seems to have been taken down already.

Fifth: disinfecting your system

The logical step now is to break the aforementioned vicious circle and remove the infected files. But it's not that easy: besides the replication loop, there is another inconvenience – some of the files cannot be removed, even by root.

Someone who is not a sysadmin, or at least not fully devoted to it :), may be bewildered at this point. How can root be unable to do something? Well… it turns out the infected files' attributes had been changed to make them immutable.

You must change the files' attributes back to mutable before you can delete them.
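In practice that means clearing the immutable flag with chattr before running rm. The path below is this round's example binary; this needs root and an ext-family filesystem (the attribute is a filesystem feature, see lsattr/chattr from e2fsprogs):

```shell
# The 'i' (immutable) attribute blocks deletion and modification, even as root.
if [ -f /usr/bin/djaafnvlvv ]; then
    lsattr /usr/bin/djaafnvlvv      # an 'i' among the flags confirms immutability
    chattr -i /usr/bin/djaafnvlvv   # clear the immutable flag
    rm /usr/bin/djaafnvlvv          # now the file can be removed
fi
```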

Keep the previous point in mind during the clean-up. I copy and adapt here the cleaning process described by Serhii in the SuperUser thread "DDoS Virus infection (as a unix service) on a Debian 8 VM Webserver":

  1. Remove the line in /etc/crontab that calls the infected script every 3 minutes.
  2. Identify the parent process of the virus (top, then f, then b). Stop it – do not kill it, as that signal triggers a respawn – e.g. with kill -STOP 1632.
  3. Check that only the parent infected process lives (e.g. ps aux). The children should die quickly.
  4. Delete the infected files under /usr/bin/, /etc/init.d/, /root/, /boot/ and so on. Leave /lib/ alone for the moment! To identify any recently modified file (such as the binaries for kill, top, ps, etc.), list the folder with ls -latr and the most recently modified files will appear at the bottom.
  5. Remove the infected cron script in /etc/cron.hourly/ (name may vary) and the /lib/ files.
  6. Finally, kill the infected process for good.
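Put together, the sequence looks roughly like this. Every name below is this round's example rather than something to run verbatim – inspect each file before deleting anything, and edit /etc/crontab by hand first:

```shell
# Clean-up sketch; "djaafnvlvv" and the paths are from this infection.
# Step 1 (remove the malicious /etc/crontab line) is done by hand beforehand.
pid=$(pgrep -o djaafnvlvv)                 # parent (oldest) instance, if any
if [ -n "$pid" ]; then
    kill -STOP "$pid"                      # 2. freeze it; SIGKILL would trigger a respawn
    ps aux | grep '[d]jaafnvlvv'           # 3. confirm only the stopped parent remains
    chattr -i /usr/bin/djaafnvlvv /etc/init.d/djaafnvlvv 2>/dev/null
    rm -f /usr/bin/djaafnvlvv /etc/init.d/djaafnvlvv   # 4. the droppers; leave /lib for now
    # 5. now remove the cron script in /etc/cron.hourly/ and the /lib files
    kill -9 "$pid"                         # 6. finally kill the frozen process
fi
```

The crucial design point is the order: freeze the parent with SIGSTOP before touching any files, so nothing is left running to notice the deletions and respawn.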

After cleaning, I recommend you spend some time watching top (for the most CPU-consuming processes) and uptime (for the system load average over the last 1, 5 and 15 minutes). Check that these numbers correspond to your usage, not the virus's. Look thoroughly for any other modified files that may be distorting your view of the overall system status, such as the aforementioned system binaries for top or ps.

Sixth: update environment, patch and add proper security

After everything you've been through to clean the environment, it's advisable to add proper security. This means updating and upgrading your system to patch any security holes, such as Shellshock (CVE-2014-6277, -6278, -7169, -7186 and -7187).
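A quick way to check whether your bash still carries the original Shellshock bug (CVE-2014-6271, the first of that family) is the well-known one-liner; a patched shell prints only "ok":

```shell
# On a vulnerable bash, the exported function definition is executed on
# startup and "vulnerable" is printed before "ok"; a patched bash prints only "ok".
env x='() { :;}; echo vulnerable' bash -c 'echo ok'
```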

It is very important to identify the attack vector: do you use a weak password, or run a poorly maintained server that may expose a vulnerability? All of that (and more) can be used to break into your system. The CSO blog suggests the attackers may be exploiting vulnerable versions of Apache and other servers to gain access. This was not our case, as those servers are not present on our machines; weak passwords, however, may very well have been the cause. The first measure we took was to increase password complexity and install a program that locks accounts after a given number of failed login attempts.

I believe the following are reasonable improvements to make the next attack a little harder:

  • Configure IPtables: if possible, dropping or rejecting every packet except those from a preferred network is the ideal situation. Otherwise, try to block any traffic coming from or going to the IPs found in the previous steps.
  • Disable root access: if an attacker gets in as root, your system is done for. If, on the other hand, they only get access as a normal user, they won't be able to modify system files or run certain services. A good way to combine security and practicality is to protect your own user (e.g. allowing access only via public keys) and add it to the sudoers file.
  • Change your password: needless to say, the password should be strong enough. Combine letters with numbers, symbols, mixed case, etc.
  • Add extra log-in controls: for instance, fail2ban locks out any user who exceeds a number of log-in attempts. You may even add an IPtables rule to allow connections only from a whitelisted range of IPs.
  • Reconfigure the SSH daemon: another way to protect your server, afaik, is to properly configure /etc/ssh/sshd_config. There you can add some security by obscurity, changing the default SSH port to a random port of your choice. It may not be the most brilliant solution, but it makes attacking slightly more difficult.
  • Restrict access to some locations: going further, it may be good practice to restrict access to a subset of public keys, for instance those of your personal and work computers. Brute-force password attacks become simply non-viable with this approach.
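Several of the SSH-related points above map to a handful of lines in /etc/ssh/sshd_config. This is an illustrative fragment, not a drop-in file – the port and user name are placeholders; restart sshd after editing, and test a new session before closing the one you are in:

```
# /etc/ssh/sshd_config – hardening fragment (illustrative values)
Port 2222                     # non-default port: mild security by obscurity
PermitRootLogin no            # disable direct root access
PasswordAuthentication no     # keys only; password brute-forcing becomes non-viable
AllowUsers alice              # hypothetical user; restrict who may log in
MaxAuthTries 3                # limit authentication attempts per connection
```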