
Basic security steps for Linux

Hello all,

Below you will find a list of basic steps you can and should take to harden your servers immediately after provisioning. In my humble opinion, a lot of these should be done by Rackspace during the provisioning process. Some of these will be no-brainers, but some are things that I think a lot of people haven't thought of.

First, a quick note. The examples I'm providing are for RedHat and derivatives like CentOS and Scientific Linux. If you are using another distribution such as Ubuntu, Debian, Arch, Gentoo, SuSE, etc, you will need to modify some of the commands to match your distribution's specifics.

This guide will be broken up into several posts to make it easier for me to edit and maintain.

  • First things first: the root account.

    It's important to understand that when it comes to root account security, you can never be too careful. For this reason, I recommend going almost as far as Ubuntu has.

    For those who don't know, Ubuntu disables the root account by default, so that you can only reach it through the command "sudo -i".
    They accomplished this while still allowing single user mode to work by patching their sulogin program to allow for a disabled root account.

    We aren't going to disable the account, but we are going to change the password to a string that humans cannot type in.

    If you are on Ubuntu, you can probably skip this step. I will highlight below where to pick back up.
    If you use anything other than Ubuntu, keep reading.

    Before we make the root password ineffective, we need to establish a new user account to be pseudo-root (pun not intended):

    # groupadd sudoers
    # useradd -m -G sudoers myuser
    # passwd myuser

    Now we have a new user with a password only we know. And that new user is a member of a group not provided by the system.

    I won't get into the reasons I don't recommend using the "wheel" group for sudo, as they are outside the scope of this thread; suffice it to say there are good security-related reasons.

    Now that we have a user who can sudo to root, we need to change the root password to an extremely long password with characters that cannot be entered by a human.

    # dd if=/dev/urandom bs=256 count=1 | passwd --stdin root
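    A hedged alternative sketch: raw urandom output can contain an early newline byte, which truncates what `passwd --stdin` actually reads, so filtering down to printable characters guarantees the full password length.

```shell
# Sketch: build a 64-character printable random password instead of piping
# raw bytes. tr keeps only the listed characters; head caps the length.
randpw="$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 64)"
echo "password length: ${#randpw}"
# As root, you would then apply it with:
#   echo "$randpw" | passwd --stdin root
```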

    Finally, we need to add our new user to sudoers. If you are using RHEL7 or a derivative of it, such as CentOS 7 or Scientific Linux 7 (or if you are using Fedora 23+), run visudo as follows:

    # visudo -f /etc/sudoers.d/local.conf

    If you are using any distribution other than the ones listed above, use visudo without any extra command line parameters:

    # visudo

    Regardless of distribution, you are going to add the below line to the end of the file:

    %sudoers ALL=(ALL) ALL

    This line says that any member of the group sudoers (the % is mandatory; it denotes a group rather than a user) can use the root account to run any command. If you are not familiar with the sudo command, note that it prompts for your own user password (not the root password).
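    If you want to sanity-check the syntax before the entry goes live, visudo can parse a scratch copy first; a sketch (the /tmp path is just for illustration, and ALL in the host field makes the rule apply regardless of the machine's hostname):

```shell
# Write the sudoers entry to a scratch file and let visudo parse-check it
# (-c) before installing the real thing.
cat > /tmp/local.conf <<'EOF'
%sudoers ALL=(ALL) ALL
EOF
# Quiet no-op if visudo isn't on PATH (e.g. on a non-sudo build box):
command -v visudo >/dev/null && visudo -c -f /tmp/local.conf || true
```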

    If you are on Ubuntu, start reading again here.

    Additionally, we need to tell SSH not to permit root logins. Each distribution's SSHd config file is slightly different in this respect, as some default to having the directive commented out, while others have it uncommented and set to "yes".

    Generally, what you need to do is as follows:

    1) Edit the file /etc/ssh/sshd_config using your favorite editor
    2) Look for the line that contains "PermitRootLogin"
    3) Change that line so it is uncommented and says "PermitRootLogin no"

    This prevents any direct root login access over SSH whatsoever. Not that they would be able to anyway, but again -- this is all in the name of protecting your system. The more you can harden it, the better off you are.
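    The edit in steps 1-3 can be sketched as a sed one-liner. Here it runs against a stand-in copy so the effect is visible; on a real server you would target /etc/ssh/sshd_config, then check the result with `sshd -t` before restarting sshd.

```shell
# CONFIG is a stand-in for /etc/ssh/sshd_config in this sketch.
CONFIG=/tmp/sshd_config.demo
printf '#PermitRootLogin yes\n' > "$CONFIG"     # simulate a commented-out default
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' "$CONFIG"
grep PermitRootLogin "$CONFIG"                  # prints: PermitRootLogin no
```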

    Next, we need to make it so that if someone gets into your MyCloud account, they cannot use your emergency console to reboot into single user mode.
    You'll want to use a different password here than what you used for your user account.
    The steps for doing this vary depending on whether your distribution uses legacy GRUB or GRUB2 bootloader. Additionally, even within GRUB2, there are 2 ways, depending on whether your distribution is based on RHEL7.2 or an earlier version of RHEL7.
    I will detail all 3 methods below.

    For RHEL7.2 and higher based distributions:

    # grub2-setpassword

    That's all you need to do.

    When you want to enter single user mode, the username is root, and the password is whatever you entered above.
    One caveat is documented in the RHEL 7.2 System Administrator's guide... (link)
    Manual changes to the /boot/grub2/grub.cfg persist when new kernel versions are installed, but are lost when re-generating grub.cfg using the grub2-mkconfig command. Therefore, to retain password protection, use the above procedure after every use of grub2-mkconfig.

    In order to set a password for other distributions with GRUB2, (including RHEL7.0 and RHEL7.1) the process is like this:

    # grub2-mkpasswd-pbkdf2
    --> Enter the password you want
    --> Enter the password you want, again
    --> Copy the resulting encrypted password, starting at "grub.pbkdf2"
    # vi /etc/grub.d/40_custom
    --> Enter the below lines into this file (do not modify any existing lines)
    set superusers="root"
    password_pbkdf2 root (paste copied password here)
    --> Save and exit, then reboot to test.

    You should be able to boot the system without any issue or human intervention, but it should prompt you for a password before allowing you to edit any existing entry.

    Finally, in order to set a password for old distributions using legacy GRUB:

    # grub-md5-crypt
    --> Enter the password you want
    --> Enter the password you want, again
    --> Copy the resulting encrypted password
    # vi /boot/grub/menu.lst
    --> Find the line that starts with "timeout="
    --> Create a new line after that which sets "password --md5 (password)"
    --> Paste the copied encrypted password on the same line as above, in place of "(password)".

    An update from Russell T of Rackspace Cloud Support

    Due to how the cloud performs virtualization, Russell was kind enough to point out that the above steps to secure the GRUB and GRUB2 bootloaders may not be enough. When you enter the emergency console, a new VM is spun up which accesses the disk of the original VM. This causes some unexpected behavior when GRUB loads: changes made to the GRUB and GRUB2 config don't actually take effect in the remote console by default. That means several things. For one, if you change the boot timeout and then reboot the cloud server, the old timeout will still be in effect on the emergency console, and if you set a password, the password won't actually be present in the emergency console. Also, if you try to blacklist some modules on the kernel command line, those blacklists won't take effect as they should in the emergency console.

    What you need to do is this: after you spin up a new VM, make your GRUB config customizations first, before you do anything else. Then power off the VM and make an image of it. Then delete the VM and spin up another VM from the image you just made. The reason is that when a VM is spun up from your image, the GRUB options are also applied in the new VM's emergency console, whereas when you use a VM built from Rackspace's template, the emergency console uses the GRUB options from their template instead.

    There you have it. Your root account is now more secure than it was when you started.

    If you have other ideas, please feel free to share. I will edit this post to provide credit to you and share the new information.

  • Next up. VPN Tunneling.

    This post is going to go over why you should use a VPN and provide a general overview of how I have mine set up, rather than getting into technical details.

    First, some definitions.

    When I say VPN, I'm not talking about a commercial service that routes your traffic through its tunnel to anonymize your internet connection. Instead, what I'm talking about is setting up an actual Virtual Private Network between your desktop/laptop (or a server in your home or office) and your server at Rackspace.

    A VPN client is the software that runs either on your desktop or laptop, or on a server at your local site. The client connects to the VPN server over the public internet to establish an encrypted tunnel that allows you to reach your servers via a back-end private connection.

    A VPN Server is the software that runs on your Rackspace Cloud servers in order to allow clients to connect. It is responsible for negotiating the encryption and assigning private IP addresses to the clients.

    I personally like the software OpenVPN which I run on both the client and on the server, as I can use NetworkManager to manage the client side OpenVPN connection, and systemd to manage the server side OpenVPN daemon which ensures the connection is always available. Additionally, if you want to put the OpenVPN client on a laptop running Windows, you can, and setup is fairly straightforward.

    Note that there is a bug as of this writing (RHEL7.2) that prevents NetworkManager from bringing up the tunnel automatically on the client after a reboot, even when autostart is enabled. You can still use the GUI or the command line to bring it up manually, but you'll have to do that every reboot.

    You might wonder why you need a VPN when SSH already encrypts everything. There are a couple of reasons.

    For one, when I get to firewalling, I'm going to recommend that you firewall off your servers from the public internet. Aside from the ports that need to be open for the server to do its job (80/443 for a web server, in this example), everything else will be firewalled. This includes port 22.

    Another reason for having a VPN is that if your website comes under a DDoS attack and Rackspace has to "nullroute" your public IP address, your server becomes completely inaccessible from the internet. If you have a jumphost that you can connect to over your VPN, you can still get into your server over the back end.

    Finally, I'll give you a little backstory on why I do this. I've worked in IT for the last 10 years, including a short while at Rackspace. Most companies these days are switching to an indirect access model. That is, the important servers (app servers, web servers, database servers, etc.) are all kept behind an internal firewall (blocking the company offices from accessing servers at the datacenter through the internal network), even for SSH access, and anyone who needs to reach them has to do so through a so-called jump host. It's essentially a server that serves one purpose: sit on both networks (office and datacenter) and protect all the other servers from direct SSH access. The reason is that if the office network gets hacked, the attackers can't get into the datacenter. Similarly, if a server in the datacenter gets hacked, say through an exploit in Wordpress or some other app, the attackers can't go any farther than that one server (without additional effort), because no server can connect to another server over the internal datacenter network, period -- except for SSH to the jumphost, and specific services that you allow, such as your web server talking to your db server. Usually the jumphost has some sort of multi-factor authentication enabled to further restrict access in both directions.

    Now that we've defined what a VPN is and why you might need one, let's go ahead and get into more details about the actual setup.

    In my case, MFA for the jump host is overkill. I have one server at Rackspace which runs your basic LAMP stack, and one physical server at my house which is protected by my home router. Yes, I know there are flaws in most home routers -- that's why I have the server. They would have to break into my home network, then break into the server here, then use that to get across the VPN tunnel, then break into the jumphost, and then finally break into the actual server with the data. Remember, security is only as strong as the weakest link, and this guide is intended to eliminate as many of the weakest links as possible. This is called a multi-tiered configuration.

    Since I manage my sites from home, I have a dynamic IP address assigned by my cable company. That prevents me from whitelisting my home IP in the firewall at Rackspace, so I use the VPN to give myself a private way to connect to my server. What I did was set up a second cloud server (nothing fancy -- I'm using a Standard 512MB instance) to act as my jumphost and VPN endpoint.

    So you set up this second host so that it has access to the public internet for now. Then you set up a private network (NOT ServiceNet) between the two cloud servers, and set up the VPN server software, making sure it works and starts at boot. Then you firewall off all inbound protocols and ports except the VPN port on the public internet side, everything but SSH on the VPN side, and everything on the private side (this prevents any access if someone gets into your web server by exploiting your application through the internet, thus protecting the rest of your network).
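    As a rough illustration, that gateway1 policy could be sketched with iptables as below (run as root). The interface names (eth0 public, tun0 VPN tunnel, eth1 private) and the OpenVPN port (1194/udp) are assumptions; adjust them to your environment.

```shell
# Hedged sketch only -- interface names and the VPN port are assumptions.
iptables -P INPUT DROP                                      # default deny inbound
iptables -A INPUT -i lo -j ACCEPT                           # allow loopback
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i eth0 -p udp --dport 1194 -j ACCEPT     # VPN in from the internet
iptables -A INPUT -i tun0 -p tcp --dport 22 -j ACCEPT       # SSH only over the VPN
# eth1 (private side) gets no ACCEPT rules, so the DROP policy blocks it entirely
```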

    On both of my cloud hosts, I turned off the ServiceNet network in the MyCloud control panel. There are caveats to doing this; they are listed elsewhere in the Rackspace cloud documentation if you are curious. I don't use any other cloud services except monitoring, which does not require ServiceNet, so I was able to disable it without any issue. If you use other services from Rackspace, my suggestion is to leave it enabled, but find out which ports you need open on ServiceNet for your cloud server to reach those services. Definitely firewall off port 22 on ServiceNet, if nothing else. That way, any other customer's compromised cloud server with ServiceNet enabled can't connect to your servers.

    Finally, on my main cloud server, I firewalled off everything except for ports 80 and 443 from the public internet, and firewalled off everything except for port 22 on the internal network. If you need other services, such as FTP or email, you can also open up those ports.
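    A hedged iptables sketch of that web1 policy (run as root; eth0 as the public interface and eth1 as the private network are assumed names, adjust to your server):

```shell
# Hedged sketch -- interface names are assumptions.
iptables -P INPUT DROP                                             # default deny inbound
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i eth0 -p tcp -m multiport --dports 80,443 -j ACCEPT  # web traffic
iptables -A INPUT -i eth1 -p tcp --dport 22 -j ACCEPT                    # SSH, internal only
```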

    The next post will go into the details about the firewall config.

  • I'm going to assume by now that you have already setup your VPN and gotten it working. If not, please google for walkthroughs on how to setup (and harden) your VPN. These next steps are going to be technical again, but I will try to address any questions as I go along.

    For the purposes of this exercise, I'm going to also assume you have 2 cloud servers: one running your main website(s), which we'll call web1, and one which is your VPN endpoint, which we'll call gateway1. You can also use your VPN endpoint as your jump host, like I do, but if your operation can afford a third cloud server specifically for SSH, I would highly recommend going that route, so as to silo off the VPN in case an exploit is later discovered that lets someone attack the server running your chosen VPN software without authenticating to the VPN.

    I'm also going to assume that you disabled ServiceNet. If you did NOT, you will need to make changes to the commands in this post so that they apply to the correct interface. Generally, if you did NOT disable ServiceNet, you will need to change any reference of eth1 to eth2. If your server uses other names instead of ethX then change that as well.

    So, let's first test that your Emergency Console works on both servers. Login to MyCloud and check that. If you have any problems, open up a support ticket.

    Working fine? Great. If you lock yourself out, you can be sure that you're able to get back in.

    Now, fire up your VPN connection. Once you are connected, then ssh through the VPN to gateway1, and obtain root.

    The first thing I'm going to recommend is that you download a script I wrote. This script will install iptables, ip6tables, and a tool called ipset, and use them to set up a list of IP addresses that will be blocked from communicating with your servers. These IP addresses come from what's known as an RBL, or Real-time Block List. This RBL is generated from logs sent to the RBL provider by the tool fail2ban, which logs failed login attempts and other malicious traffic, and can optionally send those logs to places like an RBL provider.

    You can download this script at the link below. This script has been tested on RedHat-based OSes. Due to missing functionality in the ipset RPM for both RHEL6 and RHEL7, I wrote the script so that your /etc/sysconfig/iptables file does not get touched (assuming that you didn't modify your /etc/sysconfig/iptables-config file to save rules on restart -- if you did set that preference, please do not use this script).


    Place the above file in /root/bin (you will need to make that directory), then change the ownership to root:root and permissions to 0700, then create a cron job to run it every night at midnight, and finally run it for the first time to ensure there are no problems with it.

    # mkdir -p /root/bin
    # wget -O /root/bin/
    # chown root:root /root/bin/
    # chmod 700 /root/bin/
    # crontab -e
    --> Insert the below line, then save and exit.
    0 0 * * * /root/bin/
    # /root/bin/

    When you run it, you'll see introductory text telling you about the script: how it works, what it does, and some of the assumptions it makes. If all goes smoothly, you should not get any error messages after my email address appears at the bottom.

    If you do get an error, please feel free to email me and I will work with you to make an update that allows it to work properly.

    You can double-check that everything is in proper shape by running the below commands:

    # iptables -t raw -L
    # ip6tables -t raw -L

    When running those commands, you should get output like the below:

    # iptables -t raw -L
    Chain PREROUTING (policy ACCEPT)
    target prot opt source destination
    LOG all -- anywhere anywhere match-set ip4-block-net src LOG level warning prefix "IPv4 NETWORK BLOCK SET: "
    LOG all -- anywhere anywhere match-set ip4-block-ip src LOG level warning prefix "IPv4 IP BLOCK SET: "
    DROP all -- anywhere anywhere match-set ip4-block-net src
    DROP all -- anywhere anywhere match-set ip4-block-ip src
    Chain OUTPUT (policy ACCEPT)
    target prot opt source destination
    # ip6tables -t raw -L
    Chain PREROUTING (policy ACCEPT)
    target prot opt source destination
    LOG all anywhere anywhere match-set ip6-block-net src LOG level warning prefix "IPv6 NETWORK BLOCK SET: "
    LOG all anywhere anywhere match-set ip6-block-ip src LOG level warning prefix "IPv6 IP BLOCK SET: "
    DROP all anywhere anywhere match-set ip6-block-net src
    DROP all anywhere anywhere match-set ip6-block-ip src
    Chain OUTPUT (policy ACCEPT)
    target prot opt source destination

    What this does: as soon as a packet hits the netfilter framework (inbound or outbound, it does not matter), the packet is checked against the 4 ipset lists ip4-block-ip, ip4-block-net, ip6-block-ip, and ip6-block-net. If the source or destination IP address matches a host covered by any of the 4 lists, that packet is logged and dropped. Note: as of right now, the script does not take any command line options; however, I am considering adding an option to disable the 2 LOG entries from each of the 2 iptables commands above, so that your logs don't get filled up with iptables block messages.

    The reason we do it this way, instead of manually adding each rule to the INPUT filter, is that each rule we add takes up a little bit of memory, and each packet must be checked against every rule in the chain, which can cause packet loss if the rule set grows too large. When using a match rule with an IP set, the data in the set is hashed, and the packet is checked against a much smaller set of rules that includes the hashed ipset data, significantly reducing how many CPU cycles are spent on packet header inspection.
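    For reference, the pattern the script sets up can be sketched for one of the four lists like this (run as root; the set and chain names mirror the iptables output shown earlier). The single match-set rule stands in for what would otherwise be one iptables rule per blocked address.

```shell
# Hedged sketch of the ipset + raw-table pattern for the ip4-block-ip list.
ipset create ip4-block-ip hash:ip
ipset add ip4-block-ip 203.0.113.7        # example address from TEST-NET-3
iptables -t raw -A PREROUTING -m set --match-set ip4-block-ip src \
    -j LOG --log-prefix "IPv4 IP BLOCK SET: "
iptables -t raw -A PREROUTING -m set --match-set ip4-block-ip src -j DROP
```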

    As always, more updates are coming. Hope this helps get you started.

  • I've decided that the rest of this series will be better posted as a blog on my site. I haven't started writing it yet, but in going through and securing my own servers, I've realized there is still quite a bit I was missing (sulogin vs sushell in single user mode, for example).

    So, what I will do is start documenting this on my site, and I will provide a single link (in the next post) to it. Feel free to begin following the posts there. Apologies for all of the build-up leading to this. I just found that it will be easier to break this up into smaller, more manageable chunks, and given that I have a limited number of reserved posts here, I didn't feel I could reasonably do this series the justice it deserves by keeping it here.

    I definitely appreciate Russell's input with respect to the single user mode quirks, and I will be sure to note things like that in the blog posts, albeit in a much more general format, such as "Contact your host if you are running a cloud server to ensure this step has no quirks. I know Rackspace for sure does have some additional steps you should take."