Archive for the ‘sysadmin’ Category


Various databases and how they scale

By chance I stumbled upon an article about databases and how they scale. It's a great read and does an excellent job describing the various strengths and weaknesses of the different kinds of scaling for databases. The images especially capture the essence.

Test a pull / merge request before accepting on Bitbucket

Git is a great tool, but its documentation leaves much to be desired at times, and Bitbucket's documentation doesn't fare much better: commands mentioned in its wiki often don't work as advertised. The fact that git's commands are often incredibly counter-intuitive doesn't help either, and neither does documentation that is incomplete or at times simply wrong. So while you can get far with git by just copying random (and, again, more often than not wrong) commands from Stack Overflow, there comes a time when you actually have to learn how it works.

In my case, I had a very simple request. Somebody opened a pull request from a fork of one of my projects and I simply wanted to test that change before I merged it. Bitbucket's wiki was unhelpful as the commands it listed simply didn't work. Here's how I got it to work in a way that actually makes sense.

What we'll be doing

We'll be working with two repositories here:

  • The main repository called "test", owned by user "fboender"
  • The forked repository called "test-fork", also owned by user "fboender". Normally the forked repository wouldn't be owned by the same person, but in Bitbucket you can fork your own repositories, and this makes for a good test. 

We have:

  • A pull request containing two commits on a branch called 'bugfix' on the forked 'test-fork' repository.

We want to review these commits, test them, and either merge them into our main repository 'test' or reject them.

We'll perform the following steps:

  1. Prepare the working directory
  2. Retrieve the remote changes (commits) for the pull request to our local clone
  3. Review the changes
  4. Either reject or accept (merge) the changes
  5. Push the accepted changes (merge / pull request) back to Bitbucket.

Prepare the working directory

Make sure you have no uncommitted changes in your working dir:

$ git status
# On branch master
nothing to commit, working directory clean

If there are uncommitted changes, either commit them or get rid of them.
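
If you'd rather hold on to uncommitted work without committing it, git stash is one way to temporarily set it aside and bring it back once you're done reviewing:

$ git stash
(... review and test the pull request ...)
$ git stash pop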

Retrieve remote changes

Next we'll retrieve the remote changes introduced by the pull request. We don't want those changes to affect our local repository in any way; we just want to retrieve them so we can review the changes. To do so, we'll need to know the source from which we want to fetch the changes. This is what the pull request looks like in the Bitbucket interface:

 

[Screenshot: the pull request in the Bitbucket interface]

If you hover your mouse over the branch, you can see the URL for the source of the merge request. In this case: https://bitbucket.org/fboender/test-fork/branch/bugfix.

We'll use git's fetch command to fetch the objects (commits) in the pull request. The format for the fetch command in this case is:

git fetch <repository> <refspec>

For our pull request the repository is https://bitbucket.org/fboender/test-fork. The refspec can be almost anything really. A commit, a branch name, whatever. In this case we'll use the branch name, which is bugfix, as you can infer from the URL above.

$ git fetch https://bitbucket.org/fboender/test-fork bugfix
remote: Counting objects: 13, done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 8 (delta 2), reused 0 (delta 0)
Unpacking objects: 100% (8/8), done.
From https://bitbucket.org/fboender/test-fork
 * branch            bugfix     -> FETCH_HEAD

Git has now retrieved the commits and stored them in our local repository. If you didn't know better though, you wouldn't be able to find them! The commits are there, but they're not part of any branch or anything. To get to the changes, we'll need to use the last commit id (ad38a9b) or the (temporary) FETCH_HEAD ref, which git has created for us.
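
As an aside: if you'd rather have a regular local branch to work on instead of the temporary FETCH_HEAD ref, you can tell git fetch to store the fetched branch under a local name of your choosing (pr-bugfix is just an arbitrary example name):

$ git fetch https://bitbucket.org/fboender/test-fork bugfix:pr-bugfix
$ git checkout pr-bugfix

The rest of this article sticks with FETCH_HEAD.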

Review the changes

Okay, so we want to do various reviews of the changes. First let's look at the changes between our current branch (master) and the new changes:

$ git diff master FETCH_HEAD
diff --git a/README.md b/README.md
index 08cdfb4..cc11a2f 100644
--- a/README.md
+++ b/README.md
@@ -6,4 +6,3 @@ About
 
 This is a test repository, for testing.
 
-TEST in sub
diff --git a/TEST b/TEST
index 3fb25cd..bc440ae 100644
--- a/TEST
+++ b/TEST
@@ -1 +1,2 @@
 BLA
+FOO

Looks good. Two commits that amend the TEST file and update the README.md file. Now perhaps we want to run some tests on the new changes before accepting and merging the pull request. To get to the actual changes, we can check out a refspec. In this case either the FETCH_HEAD refspec that git helpfully created for us, or the last commit in the pull request: ad38a9b. We'll be using FETCH_HEAD. Remember that this refspec is temporary! If you do a new fetch (which includes git pull and various other commands), FETCH_HEAD may have changed.

$ git checkout FETCH_HEAD
Note: checking out 'FETCH_HEAD'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at ad38a9b... Updated README

Okay, seems to have worked. The git log shows the commits:

$ git log
commit ad38a9b402eac93995902560292697245418a192
Author: Ferry Boender <ferry.boender@electricmonk.nl>
Date:   Mon Mar 31 19:37:26 2014 +0200

    Updated README

commit c538608863dd9dda276edf5adcad9e0f2ef9f9ed
Author: Ferry Boender <ferry.boender@electricmonk.nl>
Date:   Mon Mar 31 19:37:11 2014 +0200

    Ammended TEST file

commit f8d3d31ea1195e2cb1c0631d95c2b33c313b60b8
Author: Ferry Boender <ferry.boender@gmail.com>
Date:   Mon Mar 31 17:36:23 2014 +0000

    Created new branch bugfix

We can now run some tests on the working directory, further inspect the changes, etc.

Reject the changes

If you're not satisfied with the changes, you can just check out the original branch:

$ git checkout master
Warning: you are leaving 3 commits behind, not connected to
any of your branches:

  ad38a9b Updated README
  c538608 Ammended TEST file
  f8d3d31 Created new branch bugfix

If you want to keep them by creating a new branch, this may be a good time
to do so with:

 git branch new_branch_name ad38a9b

Switched to branch 'master'

Note that the commits are still in your local repository. You've just chosen not to do anything with them. You can now reject the pull request in Bitbucket's interface.

Accept the changes

If you're satisfied that the changes made work properly and want to keep the changes from the pull request, you can merge them into a branch. In this case, we'll merge them directly into the master branch.

First, we reattach our HEAD to the correct branch:

$ git checkout master

Next, we merge in the changes:

$ git merge FETCH_HEAD
Updating 2f6ecbf..ad38a9b
Fast-forward
 README.md | 1 -
 TEST      | 1 +
 2 files changed, 1 insertion(+), 1 deletion(-)

Finally, we push the changes to Bitbucket. This will automatically accept the pull request on Bitbucket.

$ git push
Counting objects: 13, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (8/8), 873 bytes | 0 bytes/s, done.
Total 8 (delta 2), reused 1 (delta 1)
To git@bitbucket.org:fboender/test.git
   2f6ecbf..ad38a9b  master -> master

The changes have now been pushed to the origin repository and the pull request has been accepted automatically by Bitbucket.

That's it, we're done!

Make "changes" to the pull request

Suppose you find the pull request is largely okay, but it's missing some small things. Maybe the developer who forked didn't update the documentation to reflect the changes he made? Such a hypothetical situation would of course never happen in real life, but let's suppose it does! How do you go about fixing the pull request while still properly accepting it? Here's how:

We've fetched the remote changes and are now inspecting them in our working dir:

$ git fetch https://bitbucket.org/fboender/test-fork bugfix
...
$ git checkout FETCH_HEAD
...

Everything is almost correct, but we need to make some minor changes before accepting the merge request. We can now simply create a new commit in the detached head state:

$ echo "QUUX" >> TEST
$ git add TEST
$ git commit -m "Fixed missing ammending"
[detached HEAD ab11d9a] Fixed missing ammending
 1 file changed, 1 insertion(+)

This gives us a new commit id "ab11d9a" which we can use to merge both the commits from the pull request as well as the new commit:

$ git checkout master
Warning: you are leaving 2 commits behind, not connected to
any of your branches:

  ab11d9a Fixed missing ammending
  e761f43 Ammended TEST some more

$ git merge ab11d9a
Updating ad38a9b..ab11d9a
Fast-forward
 TEST | 2 ++
 1 file changed, 2 insertions(+)

$ git push

The merge request will now automatically be accepted and our additional commit will also be pushed to the main repository. Note that this doesn't actually make any changes to the pull request itself; it's just an additional commit that you're merging in at the same time as the commits from the pull request. This is useful to prevent broken code on branches.

my_indexr: Script to drop and recreate MySQL indexes

As can be read in this article, I was in need of a method to quickly drop all indexes (except primary keys) from a MySQL database. After googling around a bit and being astonished that apparently no-one had written such a thing yet, I wrote the script that can be seen in that article.

Unfortunately, that script wasn't very good, so I decided to do a cleaner, better implementation of it. The result is my_indexr, which spits out SQL commands to drop and recreate indexes on a database. Other features include:

  • Process only certain tables
  • Process non-primary or both normal and primary indexes
  • Correctly handles:
    • Primary key indexes
    • Compound key / multi-column indexes
    • Index types (BTREE, etc)
    • Prefix lengths
    • Auto_increment columns, which MUST be a key (my_indexr skips indexes with these columns in them)

There may still be some edge cases that are not properly handled by my_indexr. If you encounter one, please let me know.

You can download my_indexr from its Bitbucket page.

This is what its output looks like:

$ ./indexr.py -u root -p mydb
DROP INDEX `location` ON `idx_tst_innodb_basic`;
DROP INDEX `name_age` ON `idx_tst_innodb_basic`;
DROP INDEX `email` ON `idx_tst_innodb_basic`;
DROP INDEX `PRIMARY` ON `idx_tst_innodb_compkey`;
CREATE  INDEX `location` USING BTREE ON `idx_tst_innodb_basic` (`location_id`);
CREATE  INDEX `name_age` USING BTREE ON `idx_tst_innodb_basic` (`name`(40),`age`);
CREATE UNIQUE INDEX `email` USING BTREE ON `idx_tst_innodb_basic` (`email`);
ALTER TABLE `idx_tst_innodb_compkey` ADD PRIMARY KEY (`last_name`,`first_name`);

Increasing performance of bulk updates of large tables in MySQL

I recently had to perform some bulk updates on semi-large tables (3 to 7 million rows) in MySQL. I ran into various problems that negatively affected the performance of these updates. In this blog post I will outline the problems I ran into and how I solved them. Eventually I managed to increase the performance from about 30 rows/sec to around 7000 rows/sec. The final performance was mostly CPU bound. Since this was on a VPS with only limited CPU power, I expect you can get better performance on a decently outfitted machine/VPS.

The situation I was dealing with was as follows:

  • About 20 tables, 7 of which were between 3 and 7 million rows.

  • Both MyISAM and InnoDB tables.

  • Updates were required to values on every row of those tables.

  • The updates were too complicated to do in SQL, so they required a script.

  • All updates were done on rows that were selected on just their primary key. I.e. WHERE id = …

Here are some of the problems I ran into.

Python’s MySQLdb is slow

I implemented the script in Python, and the first problem I ran into is that the MySQLdb module is slow. It's especially slow if you're going to use cursors. MySQL natively doesn't support cursors, so these are emulated in Python code. One of the trickiest things is that a simple SELECT * FROM tbl will retrieve all the results and put them in memory on the client. For 7 million rows, this quickly exhausts your memory. Real cursors would fetch the results one by one from the database so that you don't exhaust memory.

The solution here is to not use MySQLdb, but to use the native client bindings available with import _mysql.
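
As a rough sketch of what that looks like (these are the same low-level calls used by the index script further down in this post; the table and column names are just examples):

import _mysql

db = _mysql.connect(user='root', passwd='passwd', db='mydb')
db.query('SELECT id, name FROM tbl LIMIT 1000')
res = db.store_result()                 # result set is buffered on the client
for row in res.fetch_row(maxrows=0):    # maxrows=0 fetches all buffered rows
    print row                           # values come back as tuples of strings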

LIMIT n,m is slow

Since MySQL doesn’t support cursors, we can’t mix SELECT and UPDATE queries in a loop. Thus we need to read in a bunch of rows into memory and update in a loop afterwards. Since we can’t keep all the rows in memory, we must read in batches. An obvious solution for this would be a loop such as (pseudo-code):

offset = 0
size = 1000
while True:
    rows = query('SELECT * FROM tbl LIMIT :offset, :size')
    for row in rows:
        # do some updates
    if len(rows) < size:
        break
    offset += size

This would use the LIMIT to read the first 1000 rows on the first iteration of the loop, the next 1000 on the second iteration of the loop. The problem is: in MySQL this becomes linearly slower for higher values of the offset. I was already aware of this, but somehow it slipped my mind.

The problem is that the database has to advance an internal pointer forward in the record set, and the further in the table you get, the longer that takes. I saw performance drop from about 5000 rows/sec to about 100 rows/sec, just for selecting the data. I aborted the script after noticing this, but we can assume performance would have crawled to a halt if we kept going.

The solution is to order by the primary key and then select everything we haven't processed yet:

size = 1000
last_id = 0
while True:
    rows = query('SELECT * FROM tbl WHERE id > :last_id ORDER BY id LIMIT :size')
    if not rows:
        break
    for row in rows:
        # do some updates
        last_id = row['id']

This requires that you have an index on the id field, or performance will greatly suffer again. More on that later.

At this point, SELECTs were pretty speedy. Including my row data manipulation, I was getting about 40,000 rows/sec. Not bad. But I was not updating the rows yet.

Connection settings

The next thing I did was apply some standard tricks to speed up bulk updates/inserts: disabling some foreign key checks and running batches in a transaction. Since I was working with both MyISAM and InnoDB tables, I just mixed optimizations for both table types:

db.query('SET autocommit=0;')
db.query('SET unique_checks=0; ')
db.query('SET foreign_key_checks=0;')
db.query('LOCK TABLES %s WRITE;' % (tablename))
db.query('START TRANSACTION;')
# SELECT and UPDATE in batches
db.query('COMMIT;')
db.query('UNLOCK TABLES')
db.query('SET foreign_key_checks=1;')
db.query('SET unique_checks=1; ')
db.query('SET autocommit=1;')

I must admit that I’m not sure if this actually increased performance at all. It is entirely possible that this actually hurts performance instead. Please test this for yourselves if you’re going to use it. You should also be aware that some of these options bypass MySQL’s data integrity checks. You may end up with invalid data such as invalid foreign key references, etc.

One mistake I did make was that I accidentally included the following in an early version of the script:

db.query('ALTER TABLE %s DISABLE KEYS;' % (tablename))

Such is the deviousness of copy-paste. This option disables the updating of non-unique indexes while it's active. It is an optimization for MyISAM tables that massively improves the performance of mass INSERTs, since the database won't have to update the index on each inserted row (which is very slow). The problem is that this also disables the use of indexes for data retrieval, as noted in the MySQL manual:

While the nonunique indexes are disabled, they are ignored for statements such as SELECT and EXPLAIN that otherwise would use them.

That means update queries such as UPDATE tbl SET key=value WHERE id=1020033 will become incredibly slow, since they can no longer use indexes.
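
For completeness: if you do use DISABLE KEYS deliberately for a bulk INSERT into a MyISAM table, don't forget to re-enable (and thereby rebuild) the indexes afterwards:

db.query('ALTER TABLE %s ENABLE KEYS;' % (tablename))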

MySQL server tuning

I was running this on a stock Ubuntu 12.04 installation. MySQL is basically completely unconfigured out of the box on Debian and Ubuntu. This means that those 16 GBs of memory in your machine will go completely unused unless you tune some parameters. I modified /etc/mysql/my.cnf and added the following settings to improve the speed of queries:

[mysqld]
key_buffer         = 128M
innodb_buffer_pool_size = 3G

The key_buffer setting is a setting for MyISAM tables that determines how much memory may be used to keep indexes in memory. The equivalent setting for InnoDB is innodb_buffer_pool_size, except that the InnoDB setting also includes table data.

In my case the machine had 4 GB of memory. You can read more about these settings in the MySQL documentation.

Don’t forget to restart your MySQL server.
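
Afterwards you can verify from the MySQL client which values the running server picked up (the key_buffer option corresponds to the key_buffer_size variable):

mysql> SHOW VARIABLES LIKE 'key_buffer_size';
mysql> SHOW VARIABLES LIKE 'innodb_buffer_pool_size';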

Dropping all indexes except primary keys

One of the biggest performance boosts was to drop all indexes from all the tables that needed to be updated, except for the primary key indexes (those on the id fields). It is much faster to just drop the indexes and recreate them when you're done. This is basically the manual way to accomplish what we hoped the ALTER TABLE %s DISABLE KEYS would do, but didn't.

UPDATE: I wrote a better script which is available here.

Here’s a script that dumps SQL commands to drop and recreate indexes for all tables:

#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# DANGER WILL ROBINSON, READ THE important notes BELOW
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
import _mysql
import sys

mysql_username = 'root'
mysql_passwd = 'passwd'
mysql_host = '127.0.0.1'
dbname = 'mydb'

tables = sys.argv[1:]
indexes = []

db = _mysql.connect(host=mysql_host, user=mysql_username, passwd=mysql_passwd, db=dbname)
db.query('SHOW TABLES')
res = db.store_result()
for row in res.fetch_row(maxrows=0):
    tablename = row[0]
    if not tables or tablename in tables:
        db.query('SHOW INDEXES FROM %s WHERE Key_name != "PRIMARY"' % (tablename))
        res = db.store_result()
        for row_index in res.fetch_row(maxrows=0):
            table, non_unique, key_name, seq_in_index, column_name, \
            collation, cardinality, sub_part, packed, null, index_type, \
            comment, index_comment = row_index
            indexes.append( (key_name, table, column_name) )

for index in indexes:
    key_name, table, column_name = index
    print "DROP INDEX %s ON %s;" % (key_name, table)

for index in indexes:
    key_name, table, column_name = index
    print "CREATE INDEX %s ON %s (%s);" % (key_name, table, column_name)

Output looks like this:

$ ./drop_indexes.py
DROP INDEX idx_username ON users;
DROP INDEX idx_rights ON rights;
CREATE INDEX idx_username ON users (username);
CREATE INDEX idx_perm ON rights (perm);

Some important notes about the above script:

  • The script is not foolproof! If you have non-BTREE indexes, if you have indexes spanning multiple columns, if you have any kind of index that goes beyond a BTREE single column index, please be careful about using this script.

  • You must manually copy-paste the statements into the MySQL client.

  • It does NOT drop the PRIMARY KEY indexes.

Conclusions

In the end, I went from about 30 rows per second to around 8000 rows per second. The key to getting decent performance is to start simple and slowly expand your script while keeping a close eye on performance. If you see a dip, investigate immediately to mitigate the problem.

A useful way of investigating slow performance is to use tools to unearth evidence of the root of the problem.

  • top can tell you if a process is mostly CPU bound. If you’re seeing high amounts of CPU, check if your queries are using indexes to get the results they need.

  • iostat can tell you if a process is mostly IO bound. If you’re seeing high amounts of I/O on your disk, tune MySQL to make better use of memory to buffer indexes and table data.

  • Use the EXPLAIN function of MySQL to see if, and which, indexes are being used. If not, create new indexes.

  • Avoid doing useless work such as updating indexes after every update. This is mostly a matter of knowing what to avoid and what not, but that’s what this post was about in the first place.

  • Baby steps! It took me entirely too long to figure out that I was initially seeing bad performance because my SELECT LIMIT n,m was being so slow. I was completely convinced my UPDATE statements were the cause of the initial slowdowns I saw. Only when I started commenting out the major parts of the code did I see that it was actually the simple SELECT query that was causing problems initially.

That’s it! I hope this was helpful in some way!

HP Lights-Out 100i (LO100i) "Invalid username / password" when trying to connect to KVM

If you're trying to connect to the Virtual KVM (console) on an HP Lights-Out 100i (LO100i) using the Remote Console Client Java applet, you might be getting an error along the lines of:

Username / Password invalid

Or:

com.serverengines.r.rdr.EndOfStream: EndOfStream

This is a known problem with firmware version 4.24 (or earlier):

The Virtual Keyboard/Video/Mouse (KVM )will not be accessible
on HP ProLiant 100-series servers with Lights-Out 100 Base 
Management Card Firmware Version 4.24 (or earlier), if the server
has been running without interruption for 248 days (or more). When
this occurs, when attempting to access Virtual KVM/Media as shown
below, the browser will generate the following message[...]

As a solution, HP recommends:

As a workaround, shut down the server and unplug the power cable.
After a few seconds, reconnect the power cable and restart the server.

I've found that it isn't required to actually unplug the power cable. For me, remotely cold-restarting the iLoM card got rid of the problem. You can remotely cold-reset the iLoM with ipmitool:

$ ipmitool -H <ILOM_IP> -U <USERNAME> mc
Password:
MC Commands:
  reset <warm|cold>
  guid
  info
  watchdog <get|reset|off>
  selftest
$ ipmitool -H <ILOM_IP> -U <USERNAME> mc reset cold
Password: 
Sent cold reset command to MC

Now we wait until the iLoM comes back up and we can successfully connect to the console via the KVM Java applet.

Getting started with Juju: "no public ssh keys found" error.

I'm trying out Juju with the 'local' environment, and ran into the following error:

$ sudo juju bootstrap
error: error parsing environment "local": no public ssh keys found

The Getting Started Guide mentions nothing about this error, and I couldn't find a solution on the web. After a bit of reading, it seems Juju requires that a passwordless SSH key be available in your ~/.ssh dir. So to get rid of this error, just generate a new key with no password:

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/fboender/.ssh/id_rsa):
Enter passphrase (empty for no passphrase): <EMPTY>
Enter same passphrase again: <EMPTY>

Now you can bootstrap Juju:

$ sudo juju bootstrap -e local
$

Quick-n-dirty HAR (HTTP Archive) viewer

HAR, HTTP Archive, is a JSON-encoded dump of a list of requests and their associated headers, bodies, etc. Here's a partial example containing a single request:

{
  "startedDateTime": "2013-09-16T18:02:04.741Z",
  "time": 51,
  "request": {
    "method": "GET",
    "url": "http://electricmonk.nl/",
    "httpVersion": "HTTP/1.1",
    "headers": [],
    "queryString": [],
    "cookies": [],
    "headersSize": 38,
    "bodySize": 0
  },
  "response": {
    "status": 301,
    "statusText": "Moved Permanently",
    "httpVersion": "HTTP/1.1",
    "headers": [],
    "cookies": [],
    "content": {
      "size": 0,
      "mimeType": "text/html"
    },
    "redirectURL": "",
    "headersSize": 32,
    "bodySize": 0
  },
  "cache": {},
  "timings": {
    "blocked": 0,
  }
},

HAR files can be exported from Chrome's Network analyser developer tool (Ctrl-Shift-I → Network tab → capture some requests → right-click and select "Save as HAR with contents"). Additional tip: check the "Preserve Log on Navigation" option – which looks like a recording button – to capture multi-level redirects and such.

As human-readable as JSON is, it's still difficult to get a good overview of the requests. So I wrote a quick Python script that turns the JSON into something that's a little easier on our poor sysadmin's eyes:

[Screenshot: harview output]

It supports colored output, dumping of request and response headers, and dumping of the bodies of POSTs and responses (although this will be very slow). You can filter out uninteresting requests such as images or CSS/JS with the --filter-X options.
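
If you just want an idea of how little code it takes to get a basic overview, here's a minimal sketch (not the actual harview code) that prints one line per request. It assumes the standard HAR layout, where the entries live under log.entries:

import json
import sys

# Print a one-line summary (method, status, URL) for every request in a HAR file.
with open(sys.argv[1]) as f:
    har = json.load(f)

for entry in har["log"]["entries"]:
    req = entry["request"]
    resp = entry["response"]
    print "%-6s %3d %s" % (req["method"], resp["status"], req["url"])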

You can get it by cloning the Git repository from Bitbucket.

Cheers!

16 things you should absolutely configure on any new server

It seems even professional sysadmins occasionally forget the bare minimum configuration that should be done on a new machine. As a developer and part-time system administrator, I can't count the number of times I've had to waste significantly more time than necessary tracking down a problem simply because one of these basics wasn't in place. Here's a, by no means exhaustive, list of things you should configure on any new machine you deploy.

1. Pick a good hostname

Set a sane hostname on your machine. Something that describes what the machine is or does. Something that uniquely identifies it from any other machines on, at least, the same network. For instance, machine for a client called Megacorp might be called "mc-tst-www-1" to identify the first test WWW server for Megacorp. The primary production loadbalancer might be called "mc-prod-lb-1". Never have your junior sysadmin bring down the master database backend because he thought he was on a different machine.
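
On Debian/Ubuntu, for example, setting the hostname comes down to something like this (using the hypothetical Megacorp test web server from above):

$ echo "mc-tst-www-1" | sudo tee /etc/hostname
$ sudo hostname mc-tst-www-1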

2. Put all hostnames in /etc/hosts

Put all hostnames your machine uses in the /etc/hosts file to avoid annoying DNS lookup delays and other problems.
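
For example (hostnames and addresses are obviously made up):

127.0.0.1    localhost
10.0.0.21    mc-tst-www-1.megacorp.example    mc-tst-www-1
10.0.0.22    mc-tst-db-1.megacorp.example     mc-tst-db-1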

3. Install ntpd

Running into problems related to clock drift on your server is not a matter of "if", but a matter of "when". And with clock drift it will be sooner rather than later, depending on which direction your clock is drifting in. Install NTPd, and synchronize it to the same servers as all your other machines. Don't use a default pool if you can avoid it, because they might use Round Robin DNS and give you different servers. Theoretically this shouldn't pose a problem. Theoretically…

If you're running virtual machines, turn off Virtualbox/VMWare/whatever's time synchronization. They've historically been proven to be very unreliable[1]. Install ntpd anyway. And I swear, as a developer, I will kick you in the face if I ever have to diagnose another problem caused by a lack of ntpd.

4. Make sure email can be delivered

This one is simple. Make sure email can be delivered to the outside world. Many programs and scripts will need to be able to send email. Make sure they can. Ideally, you should have a dedicated SMTP server set up on your network that hosts can relay email through. A gateway firewall should prevent all other outgoing traffic for port 25, unless you want your server to be turned into a zombified spam node (which will happen).

5. Cron email

Configure Cron such that output is emailed to an actual person. You want to know about that "No space left on device" error that crashed your cobbled-together backups script. You can specify the email address with the MAILTO directive in the crontab file. Don't forget about user crontabs! Since it's hard to ensure every user crontab has a MAILTO setting, you may want to configure your SMTP server to automatically forward all email to a special email address.
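
For example, at the top of a crontab (the address and script are placeholders):

MAILTO=sysadmin@megacorp.example
0 3 * * * /usr/local/bin/backup.sh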

6. Protect the SSH port

Unauthorized probing of the SSH port will happen, unless you prevent it. Weak passwords can be easily guessed in a few hundred tries. Timing attacks can be used to guess which accounts exist on the system, even if the attacker can't guess the password. There are several options for securing your SSH port:

  • Listen on a different port. This is the least secure option, as it can usually be easily probed using a port scanner. It will fool some of the botnets out in the wild blindly scanning port 22, but it won't keep out the more advanced attackers. If you go for this option, don't go for port 2222, but pick something arbitrarily high, such as 58245 (see the sshd_config example after this list).
  • Install Fail2ban. It scans your logs and blocks any IPs that show malicious signs. This is a good idea regardless of whether you want to secure SSH or not.
  • Firewall off the port completely. Only open access from a few select IPs, such as your management network. Use a port knocker to open SSH ports on demand in case you absolutely need access from unpredictable IPs.
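
For the alternative-port option, the relevant setting in /etc/ssh/sshd_config is simply (58245 being the arbitrary example port from above):

Port 58245

Don't forget to restart sshd and adjust your firewall rules afterwards.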

7. Configure a firewall

This should go without saying: install and configure a firewall. Firewall everything. Incoming traffic, outgoing traffic, all of it. Only open up what you need to open. Don't rely on your gateway's firewall to do its job; you will regret it when other machines on your network get compromised.

8. Monitor your system

Monitor your system, even if it's just a simple shell script that emails you about problems. Disks will fill up, services will mysteriously shut down and your CPU load will go to 300. I highly recommend also monitoring important services from a remote location.

9. Configure resource usage

Running Apache, a database or some Java stack? Configure it properly so it utilizes the resources your system has, but doesn't overload it. Configure the minimum and maximum connections Apache will accept, tune the memory your database is allowed to use, etc.

10. Keep your software up-to-date

Install something like apt-dater to keep your software up-to-date. Many server compromises are directly linked to outdated software. Don't trust yourself to keep a machine up to date. You will forget. If you're running third-party software not installed from your package repository, subscribe to their security announcement mailing list and keep a list of all third-party software installed on every server. A tool such as Puppet, Chef or Ansible can help keep your system not only up to date, but uniform.

11. Log rotation

Make sure all logs are automatically rotated, or your disks will fill up. Take a look at /etc/logrotate.d/ to see how. For instance, for Apache vhosts that each have their own log directory, you can add an entry such as:

/var/www/*/logs/*.log {
        weekly
        missingok
        rotate 52
        compress
        delaycompress
        notifempty
        # create 640 root adm # Disabled so old logfile's properties are used.
        sharedscripts
        postrotate
                if [ -f /var/run/apache2.pid ]; then
                        /etc/init.d/apache2 restart > /dev/null
                fi
        endscript
}

12. Prevent users from adding SSH keys

Remove the ability for users to add new authorized keys to their account. Which keys are allowed to connect should be in the admin's hands, not the user's. Having authorized_keys files scattered all over your system also makes maintenance harder. To do this, change the AuthorizedKeysFile setting in /etc/ssh/sshd_config:

#AuthorizedKeysFile     %h/.ssh/authorized_keys
AuthorizedKeysFile      /etc/ssh/authorized_keys/%u

13. Limit user crontabs

Limit which users can create personal crontab entries by placing only allowed usernames in /etc/cron.allow. This prevents users from creating CPU/IO heavy cronjobs that interfere with your nightly backups.
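
For example, to allow personal crontabs only for root and a hypothetical 'deploy' user:

$ cat /etc/cron.allow
root
deploy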

14. Backups, backups and more backups

Make backups! Keep local backups for easy restoring of corrupt files, databases and other disasters. Databases should be backed up locally each night, if at all possible. Rotate backups on a daily, weekly and monthly cycle. Keep off-site backups too. For small servers I can highly recommend Boxbackup. It keeps remote encrypted backups, does full and incremental backups, keeps a history and does snapshotting as well as continuous syncing. Only deltas (changes in files) are transferred and stored, so it is light on resources. I wrote an article on setting it up which might prove useful.

15. Install basic tools

Make sure basic tools for daily admin tasks are pre-installed. There's nothing more annoying than having to track down problems and not having the means to do so, especially when your network refuses to come up. Some essential tools:

  • vi
  • iotop
  • strace
  • Whatever else you need.

16. Install fail2ban

I've already mentioned this in the "Protect your SSH port", but it bears mentioning again: install Fail2ban to automatically block offending IPs.

Conclusion

That's it. These are the things I would consider the bare minimum that should be properly configured when you deploy a new machine. It will take a little bit more time up front to configure machines properly, but it will save you time in the end. I can highly recommend using Puppet, Chef or Ansible to help you automate these tasks.

Notes

[1]This was the case a few years ago. I'm not sure it still is for VMWare. For VirtualBox, it most certainly is, but you wouldn't run that in a production environment probably. At the very least, install NTPd on your host.

bbcloner: create mirrors of your public and private Bitbucket Git repositories

 

I wrote a small tool that assists in creating mirrors of your public and private Bitbucket Git repositories and wikis. It also synchronizes already existing mirrors. Initial mirror setup requires that you manually enter your username/password. Subsequent synchronization of mirrors is done using Deployment Keys.

You can download a tar.gz, a Debian/Ubuntu package or clone it from the Bitbucket page.

Features

  • Clone / mirror / backup public and private repositories and wikis.
  • No need to store your username and password to update clones.
  • Exclude repositories.
  • No need to run an SSH agent. Uses passwordless private Deployment Keys (thus without write access to your repositories).

Usage

Here's how it works in short. Generate a passwordless SSH key:

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key: /home/fboender/.ssh/bbcloner_rsa<ENTER>
Enter passphrase (empty for no passphrase):<ENTER>
Enter same passphrase again: <ENTER>

You should add the generated public key to your repositories as a Deployment Key. The first time you use bbcloner, or whenever you've added new public or private repositories, you have to specify your username/password. BBcloner will retrieve a list of your repositories and create mirrors for any new repositories not yet mirrored:

$ bbcloner -n -u fboender /home/fboender/gitclones/
Password: 
Cloning new repositories
Cloning project_a
Cloning project_a wiki
Cloning project_b

Now you can update the mirrors without using a username/password:

$ bbcloner /home/fboender/gitclones/
Updating existing mirrors
Updating /home/fboender/gitclones/project_a.git
Updating /home/fboender/gitclones/project_a-wiki.git
Updating /home/fboender/gitclones/project_b.git

You can run the above from a cronjob. Specify the -s argument to prevent bbcloner from showing normal output.
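
For example, a crontab entry to synchronize the mirrors every night at 04:00 might look like this:

0 4 * * * bbcloner -s /home/fboender/gitclones/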

The mirrors are full remote git repositories, which means you can clone them:

$ git clone /home/fboender/gitclones/project_a.git/
Cloning into project_a...
done.

Don't push changes to it, or the mirror won't be able to sync. Instead, point the remote origin to your Bitbucket repository:

$ git remote rm origin
$ git remote add origin git@bitbucket.org:fboender/project_a.git
$ git push
remote: bb/acl: fboender is allowed. accepted payload.

Get it

You can get bbcloner as a tar.gz, as a Debian/Ubuntu package, or by cloning it from the Bitbucket page mentioned above.

More information

For more information, please see the Bitbucket repository.

Quick Introduction to LDAP Basics

Every now and then I have to work on something that involves LDAP, and every time I seem to have completely forgotten how it works. So I'm putting this here for future me: a quick introduction to LDAP basics. Remember, future me (and anyone else reading this), at the time of writing you are by no means an LDAP expert, so take that into consideration! Also, this will be very terse. There are enough books on LDAP on the internet. I don't think we need another.

What is LDAP?

  • LDAP stands for Lightweight Directory Access Protocol.
  • It is a standard for storing and accessing "Directory" information. Directory as in the yellow pages, not the filesystem kind.
  • OpenLDAP (unix) and Active Directory (Microsoft) implement LDAP.
  • Commonly used to store organisational information such as employee information.
  • Queried for access control definitions (logging in, checking access), addressbook information, etcetera.

How is information stored?

  • LDAP is a hierarchical (tree-based) database.
  • Information is stored as key-value pairs.
  • The tree structure is basically free-form. Every organisation can choose how to arrange the tree for themselves, although there are some commonly used patterns.

The tree

An example of an LDAP tree structure (some otherwise required attributes are left out for clarity!):

dc=com
    dc=megacorp
        ou=people
            uid=jjohnson
                objectClass=inetOrgPerson,posixAccount
                cn=John Johnson
                uid=jjohnson
                mail=j.johnson@megacorp.com
            uid=ppeterson
                objectClass=inetOrgPerson,posixAccount
                cn=Peter Peterson
                uid=ppeterson
                mail=p.peterson@megacorp.com

  • Each leaf in the tree has a specific unique path called the Distinguished Name (DN). For example: uid=ppeterson,ou=people,dc=megacorp,dc=com
  • Unlike file paths and most other tree-based paths which have their roots on the left, the Distinguished Name has the root of the tree on the right.
  • Instead of the conventional path separators such as the dot ( . ) or forward-slash ( / ), the DN uses the comma ( , ) to separate path elements.
  • Unlike conventional paths (e.g. /com/megacorp/people/ppeterson), the DN path includes an attribute type for each element in the path. For instance: dc=, ou= and uid=. These are abbreviations that specify the type of the attribute. More on attribute types in the Entry chapter.
  • It is common to arrange the tree in a globally unique way, using dc=com,dc=megacorp to specify the organisation.
  • Entries are parts of the tree that actually store information. In this case: uid=jjohnson and uid=ppeterson.

Entries

An example entry for DN uid=jjohnson,ou=people,dc=megacorp,dc=com (some otherwise required attributes are left out for clarity!):

objectClass=inetOrgPerson,posixAccount
cn=John Johnson
uid=jjohnson
mail=j.johnson@megacorp.com

  • An entry has a Relative Distinguished Name (RDN). The RDN is a unique identifier for the entry in that part of the tree. For the entry with Distinguished Name (DN) uid=jjohnson,ou=people,dc=megacorp,dc=com, the RDN is uid=jjohnson.
  • An entry stores key/value pairs. In LDAP lingo, these are called attribute types and attribute values. Attribute types are sometimes abbreviations. In this case, the attribute types are cn= (CommonName), uid= (UserID) and mail=.
  • Keys may appear multiple times, in which case they are considered a list of values.
  • An entry has one or more objectClasses.
  • Object classes are defined by schemas, and they determine which attributes must and may appear in an entry. For instance, the posixAccount object class is defined in the nis.schema and must include cn, uid, etc.
  • Different object classes may define the same attribute types.
  • A reference of common object classes can be found in Appendix E of the excellent Zytrax LDAP Guide.
  • A reference of common attribute types can also be found in Appendix E.

Connecting and searching LDAP servers

The most common action to perform on LDAP servers is to search for information in the directory. For instance, you may want to search for a username to verify if they entered their password correctly, or you may want to search for Common Names (CNs) to auto-complete names and email addresses in your email client. In order to search an LDAP server, we must perform the following:

  1. Connect to the LDAP server
  2. Authenticate against the LDAP server so we are allowed to search. This is called binding. Basically it's just logging in. We bind against an LDAP server by specifying a user's DN and password. This can be confusing because there can be DNs/passwords with which you can bind to the LDAP server, but also users/passwords which are merely stored so that other systems can authenticate users using the LDAP server.
  3. Specify which sub-part of the tree we wish to search. This is called the Base DN (Base Distinguished Name). For example: ou=people,dc=megacorp,dc=com, so search only people. Different bind DN's may search different parts of the tree.
  4. Specify how deep we want to search in the tree. This is called the level. The level can be: BaseObject (search just the named entry, typically used to read one entry), singleLevel (entries immediately below the base DN), or wholeSubtree (the entire subtree starting at the base DN).
  5. Specify what kind of entries we'd like to search for. This is called the filter. For example, (objectClass=*) will search for ANY kind of object class. (objectClass=posixAccount) will only search for entries of the posixAccount object class.

Here's an example of connecting, binding and searching an LDAP server using the ldapsearch commandline util:

$ ldapsearch -W -h ldap.megacorp.com -D "uid=ldapreader,dc=megacorp,dc=com"
  -b ou=people,dc=megacorp,dc=com "(objectclass=*)"
password: ********

  • -W tells ldapsearch to prompt for a password.
  • -h is the hostname of the LDAP server to connect to.
  • -D is the Distinguished Name (DN), a.k.a. the username, with which to connect. In this case, a special ldapreader account.
  • -b is the Base DN, a.k.a. the subtree, we want to search.

Finally, we specify a search filter: "(objectclass=*)". This means we want to search for all object classes.
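
To narrow the search down, the filter can match specific attributes, e.g. "(uid=jjohnson)" to find a single account, or combine criteria: "(&(objectClass=posixAccount)(uid=jjohnson))".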

The previous example, but this time in the Python programming language:

import ldap
l = ldap.initialize('ldap://ldap.megacorp.com:389')

l.bind('uid=ldapreader,dc=megacorp,dc=com', 'Myp4ssw0rD')
l.search_s('ou=people,dc=megacorp,dc=com', ldap.SCOPE_SUBTREE, 
           filterstr="(objectclass=*)")

Further Reading

That's it! Like I said, it's terse! If you need to know more about LDAP, here are some good resources on it: