Although modern Linux desktops generally mount external USB disks automatically when they are plugged in, servers usually don't. When I replaced my desktop-model home server with a Raspberry Pi 2 (running Raspbian), I wanted it to automatically mount USB drives and, more importantly, make the same USB drive available at the same path at all times.
The USBmount Debian package automatically mounts USB mass storage devices (typically USB pens) when they are plugged in, and unmounts them when they are removed. The mountpoints (/media/usb[0-7] by default), filesystem types to consider, and mount options are configurable. When multiple devices are plugged in, the first available mountpoint is automatically selected. If the device provides a model name, a symlink /var/run/usbmount/MODELNAME pointing to the mountpoint is automatically created.
Just what I needed.
root@rasp# sudo apt-get install usbmount
# Plug in USB drive
root@rasp# ls -la /var/run/usbmount/
lrwxrwxrwx 1 root root 11 Oct 4 10:30 Seagate_Expansion_1 -> /media/usb0
lrwxrwxrwx 1 root root 11 Oct 4 10:30 ST4000DM_000-1F2168_1 -> /media/usb1
Great. Now I wanted the "Seagate_Expansion_1" disk to always become available at /storage. I could have symlinked /storage to /var/run/usbmount/Seagate_Expansion_1, but I ran into a problem with SSHfs when trying to mount a server-side symlink on my client machine:
user@client$ sshfs -o transform_symlinks -o follow_symlinks 192.168.0.16:/storage Shares/timmy-storage/
192.168.0.16:/storage: Not a directory
So a symlink was out of the question. The --bind option of 'mount', however, worked just fine:
# On the server
root@rasp# rm /storage
root@rasp# mkdir /storage
root@rasp# mount --bind /var/run/usbmount/Seagate_Expansion_1 /storage
# On the client
user@client$ sshfs 192.168.0.16:/storage Shares/timmy-storage/
user@client$ ls -l Shares/timmy-storage
drwxr-xr-x 1 1002 1003 4096 Sep 17 13:58 apps
drwxr-xr-x 1 root root 4096 Aug 24 09:15 backup
So I modified /etc/usbmount/mount.d/00_create_model_symlink and added the following code:
if [ "$name" = "Seagate_Expansion_1" ]; then
    mount --bind "/var/run/usbmount/$name" /storage
fi
This is not a very clean solution, but it serves its purpose just fine. A nicer implementation would create a new file "01_mount_bind" which reads a config file to determine which model names to bind-mount where. That implementation is left as an exercise for the reader ;-)
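As a rough sketch of such a hook (the config file /etc/usbmount/mount_bind.conf, its format, and the exact use of usbmount's UM_MOUNTPOINT variable are assumptions of mine, so verify them against your usbmount version):

```shell
#!/bin/sh
# Sketch for /etc/usbmount/mount.d/01_mount_bind. Config file format,
# one entry per line: "<model name> <bind target>", for example:
#   Seagate_Expansion_1 /storage
CONF="/etc/usbmount/mount_bind.conf"

if [ -r "$CONF" ]; then
    for link in /var/run/usbmount/*; do
        [ -h "$link" ] || continue
        # Only act on the symlink that points at the device usbmount
        # just mounted (usbmount exposes that path in $UM_MOUNTPOINT).
        [ "$(readlink -f "$link")" = "$UM_MOUNTPOINT" ] || continue
        name=$(basename "$link")
        # Look up the bind target for this model name in the config file.
        target=$(awk -v n="$name" '$1 == n { print $2 }' "$CONF")
        if [ -n "$target" ]; then
            mkdir -p "$target"
            mount --bind "$link" "$target"
        fi
    done
fi
```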
With this setup the /storage path will automatically become available at boot-time or when the correct USB drive is plugged in. I can use SSHfs to mount the remote /storage on my Linux machine. Samba takes care of the Windows users.
I've just released ansible-cmdb v1.6. This is a feature release, including the following changes:
- The -i switch now supports reading dynamic inventory scripts.
- host_vars directory now supported (by Malcolm Mallardi)
- Support for multiple inventory sources as per Ansible's documentation.
- Improved error handling prevents ansible-cmdb from stopping if it encounters non-critical errors (malformed host definitions, etc).
- Improved error reporting.
- html_fancy template column headers are now visually identifiable as being sortable.
Get the new release from the Github releases page.
Ansible-cmdb takes the output of Ansible's setup module and converts it into a static HTML overview page containing system configuration information.
While the previous generated overview page was functional, it didn't look very good. So for the v1.5 release (which is now available), I gave it an overhaul. I decided on Material design because it gives a modern, clean look and feel. The host overview page now looks like this:
The column toggle buttons are more recognisable as actually being toggles, and the table of hosts feels a lot cleaner. The bar at the top stays in view even when scrolling. When viewing a host's detailed information, the header text changes to the host name, making it easier to recognise which host's information you're looking at:
The header bar also includes a link back to the top of the page. This is a big improvement over the previous design, which lacked such a feature. The new design also works better on smaller screens such as tablets or mobiles, although it could still do better.
Other than the new design, the v1.5 release also works when viewing it locally in the browser, without the need to specify the -p local_js option.
You can view a live example or download the new release from the Github releases page.
More information on ansible-cmdb can be found in the README.
Ansible-cmdb takes the output of Ansible's setup module and converts it into a static HTML overview page containing system configuration information. It supports multiple templates and extending information gathered by Ansible with custom data.
You can visit the Github repo, or view an example output here.
This is the v1.4 release of ansible-cmdb, which brings a bunch of bug fixes and some new features:
- Support for host inventory patterns (e.g. foo[01:04].bar.com)
- Support for 'vars' and 'children' groups.
- Support for passing a directory to the -i param, in which case all the files in that directory are interpreted as one big hosts file.
- Support for using local jquery files instead of loading them from a CDN. This allows you to view the hosts overview in your browser using file://. See README.md for info on how to enable it (hint: ansible-cmdb -p local_js=1).
- Added the -f/--fact-caching flag for compatibility with fact_caching=jsonfile fact dirs (Rowin Andruscavage).
- The search box in the html_fancy template is now automatically focussed.
- Show memory to one decimal to avoid "0g" in low-mem hosts.
- Templates can now receive parameters via the -p option.
- Strip ports from hostnames scanned from the host inventory file.
- Various fixes in the documentation.
- Fixes for Solaris output (memory and disk).
I would like to extend my gratitude to the following contributors:
- Sebastian Gumprich
- Rowin Andruscavage
- Cory Wagner
- Jeff Palmer
- Sven Schliesing
If you've got any questions, bug reports or whatever, be sure to open a new issue on Github!
A few days ago I released ansible-cmdb. Ansible-cmdb takes the output of Ansible's setup module and converts it into a static HTML overview page containing system configuration information. It supports multiple templates and extending information gathered by Ansible with custom data.
The tool was positively received and I got lots of good feedback. This has resulted in v1.3 of ansible-cmdb, which you can download from the releases page.
This is a maintenance release that fixes the following issues:
- Generated RPM now installs on operating systems with strict Yum (Fedora 22, Amazon AMI).
- The default templates (html_fancy, txt_table) no longer crash on missing information.
- Python3 compatibility. (by Sven Schliesing).
- Disk total and available columns have been deprecated in favour of adding the information to the Disk Usage columns. (by Sven Schliesing).
- No longer ignore disks smaller than 1Gb, but still ignore disks of total size 0.
- Minor fixes in the documentation (by Sebastian Gumprich, et al).
- Better error reporting.
For more information, see the Github page. Many thanks to the bug reporters and contributors!
For those of you using Ansible to manage hosts, you may have noticed that you can use the setup module to gather facts about the hosts in your inventory:
$ ansible -m setup --tree out/ all
$ ls out
centos.dev.local eek.electricmonk.nl zoltar.electricmonk.nl
$ head out/centos.dev.local
The setup module in combination with the --tree option produces a directory of JSON files containing facts about ansible-managed hosts, such as hostnames, IP addresses, total available and free memory, and much more.
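Each file in the output directory holds the raw setup output for one host. Abridged, its contents look something like this (the exact facts and values vary per system):

```json
{
    "ansible_facts": {
        "ansible_hostname": "centos",
        "ansible_distribution": "CentOS",
        "ansible_default_ipv4": {
            "address": "192.168.0.10"
        },
        "ansible_memtotal_mb": 1024
    },
    "changed": false
}
```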
I wrote ansible-cmdb to take that output and generate a user-friendly host overview / CMDB (Configuration Management Database) HTML page. Usage is simple:
$ ansible -m setup --tree out/ all # generate JSON output facts
$ ansible-cmdb out/ > cmdb.html # generate host-overview page
Here's an example of what it produces.
And here's a screenshot:
It can read your hosts inventory and gather variable values from it, which can be used in the templates that produce the output. You can also extend the gathered facts easily with your own facts by manually creating or generating additional output directories containing JSON files. This even allows you to manually define hosts which are not managed by Ansible.
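As an illustration (the directory name, host name and JSON keys here are made up for the example; check the README for the exact format ansible-cmdb expects), adding a custom fact to a host could look like this:

```shell
# Create an extra facts directory with a file named after the host.
mkdir -p out_custom
cat > out_custom/centos.dev.local << 'EOF'
{
    "software": [
        "Apache2",
        "MySQL 5.5"
    ]
}
EOF
# Then pass both directories to ansible-cmdb:
#   ansible-cmdb out/ out_custom/ > cmdb.html
```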
Ansible-cmdb is template-driven, which means it's rather easy to modify the output. The output is generated using Mako templates.
I've just released v1.2. Packages are available in source, Debian/Ubuntu and Redhat/Centos formats.
For more information, see the Github page. I hope you like it!
When creating new credentials in OpenVAS (versions 6, 7 and 8), it takes a very long time to store the credentials.
The problem here is that the credentials are stored encrypted, and OpenVAS (probably) has to generate a PGP key. This requires lots of random entropy, which is generally not abundantly available on a virtual machine. The solution is to install haveged:
sudo apt-get install haveged
Haveged will securely seed the random pool which will make a lot of random entropy available, even if you have no keyboard, mouse and soundcard attached. Ideal for VPSes.
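You can check the effect by looking at the kernel's entropy counter before and after installing haveged (exact numbers and behaviour depend on your kernel version):

```shell
# Number of bits of entropy the kernel estimates it has available.
cat /proc/sys/kernel/random/entropy_avail
```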
I was trying to set up a jail for SSH on Ubuntu 14.04, but it didn't seem to work. The user I was trying to jail using ChrootDirectory could log in with SFTP, but could still see everything. It turns out there were a few issues causing this. The summary is:
- All directories to the ChrootDirectory path must be owned by root and must not have world or group writability permissions.
- Ubuntu 14.04 sysv init and upstart scripts don't actually restart SSH, so changing the config file doesn't take effect.
- The "Match User XXXX" or "Match Group XXXX" configuration section must be placed at the end of the sshd_config file.
- Also don't forget to make your user a member of the sftponly group if you're using "Match Group sftponly".
All paths to the jail must have correct ownerships and permissions
All directories in the path to the jail must be owned by root. So if you configure the jail as:
ChrootDirectory /home/backup/jail
then /home, /home/backup and /home/backup/jail must be owned by root:<usergroup>:
chown root:root /home
chown root:backup /home/backup
chown root:backup /home/backup/jail
Permissions on at least the home directory and the jail directory must not include world-writability or group-writability:
chmod 750 /home/backup
chmod 750 /home/backup/jail
Ubuntu's SSH init script sucks
Ubuntu's SSH init scripts (both sysv init and upstart) suck. They don't actually even restart SSH (notice the PID):
# netstat -pant | grep LISTEN | grep sshd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 13838/sshd
# /etc/init.d/ssh restart
[root@eek]~# netstat -pant | grep LISTEN | grep sshd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 13838/sshd
The PID never changes! SSH isn't actually being restarted! The bug has been reported here: https://bugs.launchpad.net/ubuntu/+source/openssh/+bug/1390012.
To restart it you should use the "service" command, but even then it might not actually restart:
# service ssh restart
[root@eek]~# netstat -pant | grep LISTEN | grep sshd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 13838/sshd
This generally happens because you've got an error in your SSH configuration file. Naturally, it doesn't bother telling you as much, and the log file shows nothing either.
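Fortunately you can check the configuration for errors yourself: sshd's -t ("test") flag validates the config without actually starting the daemon:

```shell
# Prints the offending line and exits non-zero when the config is broken;
# silent when everything is fine. (You may need root, since -t also tries
# to read the host keys.)
/usr/sbin/sshd -t || echo "sshd configuration contains errors"
```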
The Match section in the SSHd configuration must be placed at the end of the file
When I finally figured out that SSH wasn't being restarted, I tried starting it by hand. You might run into the following error:
# sshd -d
sshd re-exec requires execution with an absolute path
You should execute it with the full path, because SSHd starts a new sshd process for each connection and so needs to know where it lives:
# /usr/sbin/sshd -d
Now I finally found out the real problem:
/etc/ssh/sshd_config line 94: Directive 'UsePAM' is not allowed within a Match block
My config looked something like this (abridged):
Match User obnam
    ChrootDirectory /home/backup/jail
UsePAM yes
Apparently SSH is too stupid to realize the Match section is indented and thinks it runs until the end of the file. The answer here is to move the section to the end of the file:
Match User obnam
This will fix the problem and sftponly should work now.
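Putting the pieces together, a working jail setup looks something like this (the group name and path are the examples from this post, not mandatory values), with the Match block as the very last thing in /etc/ssh/sshd_config:

```
Subsystem sftp internal-sftp

# ... rest of the main configuration ...

Match Group sftponly
    ChrootDirectory /home/backup/jail
    ForceCommand internal-sftp
    AllowTcpForwarding no
```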
Many posts have been written on putting your homedir in git. Nearly everyone uses a different method of doing so. I've found the method I'm about to describe in this blog post to work the best for me. I've been using it for more than a year now, and it hasn't failed me yet. My method was put together from different sources all over the web; long since gone or untraceable. So I'm documenting my setup here.
So, what makes my method better than the rest? What makes it better than the multitude of pre-made tools out there? The answer is: it depends. I've simply found that this method suits me personally because:
- It's simple to implement, simple to understand and simple to use.
- It gets out of your way. It doesn't mess with repositories deeper in your home directory, or with any tools looking for a .git directory. In fact, your home directory won't be a git repository at all.
- It's simple to see what's changed since you last committed. It's a little harder to see new files not yet in your repository. This is because, by default, everything is ignored unless you specifically add it.
- No special tools required, other than Git itself. A tiny alias in your .profile takes care of all of it.
- No fiddling with symlinks and other nonsense.
How does it work?
It's simple. We create what is called a "detached working tree". In a normal git repository, you've got your .git dir, which is basically your repository database. When you perform a checkout, the directory containing this .git dir is populated with files from the git database. This is problematic when you want to keep your home directory in Git, since many tools (including git itself) will scan upwards in the directory tree to find a .git dir. This creates crazy scenarios, such as Vim's CtrlP plugin trying to scan your entire home directory for file completions. Not cool. A detached working tree means your .git dir lives somewhere else entirely; only the actual checkout lives in your home dir. This means no more nasty side effects from a .git dir sitting in your home directory.
An alias 'dgit' is added to your .profile that wraps around the git command. It understands this detached working directory and lets you use git like you would normally. The dgit alias looks like this:
alias dgit='git --git-dir ~/.dotfiles/.git --work-tree=$HOME'
Simple enough, isn't it? We simply tell git that our working tree doesn't reside in the same directory as the .git dir (~/.dotfiles), but rather in our home directory. We set the git-dir so git will always know where our actual git repository resides; otherwise it would scan up from the current directory you're in and never find the .git dir, since that's the whole point of this exercise.
Setting it up
Create a directory to hold your git database (the .git dir):
$ mkdir ~/.dotfiles/
$ cd ~/.dotfiles/
~/.dotfiles$ git init .
Create a .gitignore file that will ignore everything. You can be more conservative here and only ignore things you don't want in git. I like to pick and choose exactly which things I'll add, so I ignore everything by default and add things later.
~/.dotfiles$ echo "*" > .gitignore
~/.dotfiles$ git add -f .gitignore
~/.dotfiles$ git commit -m "gitignore"
Now we've got a repository set up for our files. It's out of the way of our home directory, so the .git directory won't cause any conflicts with other repositories in your home directory. Here comes the magic part that lets us use this repository to keep our home directory in. Add the dgit alias to your .bashrc or .profile, whichever you prefer:
~/.dotfiles$ echo "alias dgit='git --git-dir ~/.dotfiles/.git --work-tree=\$HOME'" >> ~/.bashrc
You'll have to log out and in again, or just copy-paste the alias definition into your current shell. We can now check the repository out in our home directory with the dgit reset --hard command:
~/.dotfiles$ cd ~
$ dgit reset --hard
HEAD is now at 642d86f gitignore
Now the repository is checked out in our home directory, and it's ready to have stuff added to it. The dgit reset --hard command might seem spooky (and I do suggest you make a backup before running it), but since we're ignoring everything, it'll work just fine.
Everything we do now, we do with the dgit command instead of normal git. In case you forget to use dgit, it simply won't work, so don't worry about that. dgit status shows nothing, since we've gitignored everything:
$ dgit status
On branch master
nothing to commit, working directory clean
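If you do want a list of files that aren't in the repository yet, git can show untracked files even when they're ignored. Spelled out without the alias (the guard is only there so it does nothing before ~/.dotfiles exists):

```shell
# List untracked files in the detached working tree. "--others" lists
# untracked files and does not apply ignore rules unless
# --exclude-standard is also given, so the catch-all .gitignore
# doesn't hide anything here.
if [ -d ~/.dotfiles/.git ]; then
    git --git-dir ~/.dotfiles/.git --work-tree="$HOME" ls-files --others
fi
```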
We add things by overriding the ignore with the -f flag:
$ dgit add -f .profile
$ dgit commit -m "Added .profile"
[master f437f9f] Added .profile
1 file changed, 22 insertions(+)
create mode 100644 .profile
We can push our configuration files to a remote repository:
$ dgit remote add origin ssh://firstname.lastname@example.org:dotfiles
$ dgit push origin master
* [new branch] master -> master
And easily deploy them to a new machine:
$ ssh someothermachine
$ git clone ssh://email@example.com:dotfiles ./.dotfiles
$ alias dgit='git --git-dir ~/.dotfiles/.git --work-tree=$HOME'
$ dgit reset --hard
HEAD is now at f437f9f Added .profile
Please note that any files in your home directory that also exist in the repository will be overwritten by the repository's versions.
This DIY method of keeping your homedir in git should be easy to understand. Although there are tools out there that are easier to use, this method requires no installation other than Git itself. As I've stated in the introduction, I've been using this method for more than a year, and have found it to be the best way of keeping my home directory in git.
SquashFS is generally used for LiveCDs or embedded devices to store a compressed read-only version of a file system. This saves space at the expense of slightly slower access times from the media. There's another use for SquashFS: keeping an easily accessible compressed mounted image available. This is particularly useful for archival purposes such as keeping a full copy of an old server or directory around.
Usage is quite easy under Debian-derived systems. First we install the squashfs-tools package
$ sudo apt-get install squashfs-tools
Create a compressed version of a directory:
$ sudo mksquashfs /home/fboender/old-server_20150608/ old-server_20150608.sqsh
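Before removing the original it's a good idea to verify the image. unsquashfs can list an image's contents without mounting it (guarded here so it only runs where squashfs-tools is installed):

```shell
# Quick sanity check: list the first entries in the image.
if command -v unsquashfs >/dev/null 2>&1; then
    unsquashfs -ll old-server_20150608.sqsh | head -n 20 || true
fi
```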
Remove the original directory:
$ sudo rm -rf /home/fboender/old-server_20150608
Finally, mount the compressed archive:
$ sudo mkdir /home/fboender/old-server_20150608
$ sudo mount -t squashfs -o loop old-server_20150608.sqsh /home/fboender/old-server_20150608
Now you can directly access files in the compressed archive:
$ sudo ls /home/fboender/old-server_20150608
The space savings are considerable too.
$ sudo du -b -s /home/fboender/old-server_20150608
$ sudo ls -l old-server_20150608.sqsh
-rw-r--r-- 1 root root 1530535936 Jun 8 12:45
17 Gb for the full uncompressed archive versus only 1.5 Gb for the compressed archive. We just saved 15.5 Gb of disk space.
Optionally, you may want to have it mounted automatically at boot time:
$ sudo vi /etc/fstab
/home/fboender/old-server_20150608.sqsh /home/fboender/old-server_20150608 squashfs ro,loop 0 0
When the server starts up, the archive directory will be mounted automatically.