
python-libtorrent program doesn't exit

(TL;DR: To the solution)

I was mucking about with the Python bindings for libtorrent, and made something like this:

import time
import libtorrent

fname = 'test.torrent'
session_dir = '.'  # directory to save the download in
ses = libtorrent.session()
ses.listen_on(6881, 6891)
info = libtorrent.torrent_info(fname)
h = ses.add_torrent({'ti': info, 'save_path': session_dir})
prev_progress = -1
while not h.is_seed():
    status = h.status()
    progress = int(round(status.progress * 100))
    if progress != prev_progress:
        print 'Torrenting %s: %i%% done' % (fname, progress)
        prev_progress = progress
    time.sleep(1)
print 'Done torrenting %s' % fname
# ... more code

After running it a few times, I noticed the program would not always terminate. You'd immediately suspect a problem in the while loop condition, but in all cases "Done torrenting Foo" would be printed and then the program would hang.

In celebration of one of the rare occasions that I don't spot a hanging problem in such a simple piece of code right away, I fired up PDB, the Python debugger, which told me:

$ pdb ./tvt 
> /home/fboender/Development/tvtgrab/trunk/src/tvt(9)()
-> import sys
(Pdb) cont
Torrenting Example Torrent v1.0: 100% done
Done torrenting Example Torrent v1.0
The program finished and will be restarted

after which it promptly hung. That last line, "The program finished and will be restarted", that's PDB telling us execution of the program finished. Yet it still hung.

At this point, I was suspecting threads. libtorrent is a C++ library, and since the main loop in my code doesn't actually do anything, libtorrent is apparently doing its thing in background threads and not always shutting down properly. (Although it's more likely I just don't understand what it's doing.) It's quite normal for torrent clients to take a while to close down, especially if there are still peers connected. Most of the time, if I waited long enough, the program would terminate normally. Sometimes, however, it wouldn't terminate even after an hour, even if no peers had at any point been connected to any torrents (the original code does not always load torrents into a session).

Digging through the documentation, I couldn't easily find a method of shutting down the session. I did notice the following:


The destructor of session will notify all trackers that our torrents have been shut down. If some trackers are down, they will time out. All this before the destructor of session returns. So, it's advised that any kind of interface (such as windows) are closed before destructing the session object. Because it can take a few second for it to finish. The timeout can be set with set_settings().

Seems like libtorrent uses destructors to shut down the session. Adding the following to the end of the code fixed the problem of the script not exiting:

del ses

The del statement in Python removes the name binding; when that was the last reference to the object, CPython immediately calls its destructor. Having nearly zero C++ knowledge, I suspect C++ calls destructors automatically at program exit. Python doesn't reliably do that for objects from C++ bindings, so we have to trigger it manually.
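As a minimal, pure-Python sketch of the mechanism (the Session class below is a toy stand-in for libtorrent's session, not its real API): del removes the name, and in CPython the destructor runs as soon as the last reference disappears.

```python
class Session(object):
    """Toy stand-in for an object backed by C++ with a heavyweight destructor."""
    shut_down = False

    def __del__(self):
        # In libtorrent, this is where trackers would be notified (and block).
        Session.shut_down = True

ses = Session()
del ses                   # drop the last reference; CPython runs __del__ now
print(Session.shut_down)  # True
```

Note that this eager behavior is a CPython detail: other implementations (PyPy, Jython) may delay `__del__` until a garbage collection pass.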

Update: Calling the destructor does not always solve the problem. I am still experiencing hangs when calling the session destructor. I will investigate further and update this post when a solution has been found.

Update II: Well, I've not been able to solve the problem any other way than upgrading to the latest version of libtorrent. So I guess that'll have to do.

MobaXterm – (Free) All-in-one Xserver/SSH/Linux environment for Windows

I recently stumbled on MobaXterm. It's a complete unix environment including X server, SSH, Telnet, SCP and FTP clients, all in one. The list of features is impressive, to say the least. This is an excellent replacement for PuTTY.

A small selection of the most useful features:

  • Free. What more is there to say?
  • Tabs and Horizontal / Vertical split panels finally bring the full power of native Unix/Linux terminal emulators to Windows
  • Integrated X server. MobaXterm comes with an integrated X Server. Everything is set up correctly out-of-the-box. X11 forwarding means you can simply SSH to a remote machine and start X11 programs. It supports displaying remote X11 windows as native windows, or you can run the X Server in a separate tab/window.
  • Session Management makes it easy to quickly connect to the machine you want.
  • Integrated SFTP when SSHing to a remote machine means you don't have to start a separate SFTP/SCP session. Just browse, upload and download remote files from the left side of the SSH session.
  • Many supported services, such as SSH, Telnet, local Linux/Cygwin terminal, local Windows command prompt, RSH, XDMCP, RDP, VNC, FTP, etc.
  • Session multiplexing provides a quick method of running commands on multiple machines at the same time.
  • SSH bouncing through a gateway SSH server means no more SSHing from machine to machine.
  • Cygwin environment so you can actually get some work done natively on Windows. Batteries, bells, whistles and kitchen sinks (as well as games) included: full unix environment with tools like grep, find, vim, etc, etc, etc.

There are countless more features. This is the terminal emulator app I always hoped PuTTY would become. Of all the PuTTY wrappers, separate SSH connection managers and terminals I've tried, this is by far the best one.

Companies, why don't you want my feedback?

I'm one of those people who think everything can always be a little bit better. Apparently companies aren't interested in hearing about customer experience, since it's basically always completely impossible to find a working, decent customer feedback point on any commercial website.

How sad is it that the only way to properly get into contact with a company is via Twitter (which is, of course, limited to 140 characters, making it basically impossible to tell them about your issues)? How sad is it that some companies actually artificially limit the number of characters you can enter in a feedback form on their website? Hello! Interwebtube bytes are free! There's no reason to limit a feedback form to a thousand characters, guys. What's that? Your time is too valuable to read through entire essays from frustrated consumers? Oh, that's just fine! I'll take my business somewhere else, thank you!

If any Quality Assurance Managers are reading this, I'll make it real easy for you:

  • An easy to find, CHEAP/FREE phone number on your site. One I can call for questions, feedback, etc. DO NOT try to sell me shit when I call you with a question or complaint. Just.. don't. I will take my business somewhere else.
  • An easy to find question/feedback email address on your website.
  • If you absolutely must have a form, make sure it doesn't ask for my phone number, doesn't limit the type of question I can ask (try including an "other reason" option?) and doesn't make me jump through hoops to validate my information. I don't want you to have my address, phone number, email address, or anything else. You don't ask that information from customers who call you with a question, do you? Then allow – don't force – me to fill it out on your forms. I just want to let you know that there's a problem with your website! Today I had to fill out an online form and I had to provide my land-line phone number! "Hello?! 1999 called using their land-line! They want their ancient technology back!" Who still has a land-line, seriously?!

Companies, seriously… why do you make it so exceptionally hard for me to provide you with feedback? I'm trying to help! I want to let you know about broken restitution forms on your website; I want to let you know why I went to the competition, so you can improve your products. I really do! So stop with the bullshit "Please participate in our questionnaire!" pop-ups that appear when I least expect or want them – that's not why I'm on your site!

Stop wasting money on crappy Quality Assurance Managers. If your website doesn't have email contact information, someone in your company needs to be fired.


Setting I/O priorities on Linux

All us system admins know about nice, which lets you set the CPU priorities of a process. Every now and then I need to run an I/O-heavy process. This inevitably makes the system intermittently unresponsive or slow, which can be a real annoyance.

If your system is slow to respond, you can check to see if I/O is the problem (which it usually will be) using a program called iotop, which is similar to the normal top program except it doesn't show CPU/Memory but disk reads/writes. You may need to install it first:

# aptitude install iotop

The output looks like this:

Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND                                       
12404 be/4 fboender  124.52 K/s  124.52 K/s  0.00 % 99.99 % cp /home/fboender
    1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % init
    2 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
    3 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/0]

As you can see, the copy process with PID 12404 is taking up 99.99% of my I/O, leaving little for the rest of the system.

Recent Linux kernels (2.6.13 and later, with the CFQ I/O scheduler) support renicing the I/O priority of a process. The ionice tool allows you to renice processes from userland. It comes pre-installed on Debian/Ubuntu machines as part of the util-linux package. To use it, you specify a priority scheduling class with the -c option:

  • -c0 is an old deprecated value of "None", which is now the same as Best-Effort (-c2)
  • -c1 is Real Time priority, which will give the process the highest I/O priority
  • -c2 is Best-Effort priority, which puts the process in a round-robin queue where it gets a slice of I/O every so often. How big a slice can be specified using the -n option, which takes a value from 0 (highest priority) to 7 (lowest)
  • -c3 is Idle, which means the process will only get I/O when no other process requires it.

For example, I want a certain process (PID 12404) to only use I/O when no other process requires it, because the task is I/O-heavy, but is not high priority:

# ionice -c3 -p 12404

The effects are noticeable immediately. My system responds faster, and there is less jitter on the desktop and the command line.
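If you launch I/O-heavy jobs from scripts, the same idea can be wrapped in a small helper that prefixes any command with the desired I/O class. This is a sketch under two assumptions: util-linux's ionice is on the PATH, and the helper name ionice_argv is my own, not part of any library.

```python
import subprocess

def ionice_argv(cmd, scheduling_class=3, level=None):
    """Build an argv that runs `cmd` under the given I/O scheduling class.

    scheduling_class: 1 = Real Time, 2 = Best-Effort, 3 = Idle.
    level: 0 (highest) to 7 (lowest); only meaningful for Best-Effort.
    """
    argv = ['ionice', '-c', str(scheduling_class)]
    if level is not None:
        argv += ['-n', str(level)]
    return argv + list(cmd)

# Copy a large tree with Idle I/O priority (hypothetical paths):
# subprocess.call(ionice_argv(['cp', '-r', '/home/fboender', '/backup']))
print(ionice_argv(['cp', '-r', 'src', 'dst']))
# ['ionice', '-c', '3', 'cp', '-r', 'src', 'dst']
```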


Persistent undo history in Vim

Once you quit Vim, the undo history for that file is gone. This sometimes gives me problems if I accidentally made a change in a file without knowing it. This usually happens due to a bad Vim command which, for instance, capitalized a single letter.

There's an option which allows you to make the undo history persistent between Vim sessions. That means you can still undo changes you made in a file, even if you've quit Vim in the meantime.

You can add the following to your .vimrc to enable it:

set undofile   " Maintain undo history between sessions

This will create undo files all over the place, which look like this:

-rw-r--r-- 1 fboender fboender 320 2012-07-26 10:23 bad_gateway.txt
-rw-r--r-- 1 fboender fboender 523 2012-07-24 14:51 .bad_gateway.txt.un~

You can remedy this by including the following option in your configuration:

set undodir=~/.vim/undodir

Make sure to create the undodir:

$ mkdir ~/.vim/undodir

The undo files will now be saved in the undodir:

$ ls -la .vim/undodir/
total 12
drwxr-xr-x  2 fboender fboender 4096 2012-07-26 10:32 .
drwxr-xr-x 12 fboender fboender 4096 2012-07-26 10:24 ..
-rw-r--r--  1 fboender fboender  519 2012-07-26 10:32 %home%fboender%bad_gateway.txt

Pocket: consume all content offline on your mobile device

My smartphone doesn't have a data plan because the last thing I want is to be able to check my email and facebook while I'm not behind my PC. I do like to read though, so I want to use my smartphone to read content I've previously somehow flagged as interesting.

I've tried many apps. Instapaper, Diigo, Readability and a few others. All of them suck.

Some suck because they don't include inline images from articles in the offline version. Some suck because they're not free. Some suck because they don't extract just the article text but make the entire webpage available offline, which doesn't really work on my tiny screen. Others suck because they don't sync properly.

And then there was Pocket. It includes inline images in the offline version, but it doesn't include the rest of a webpage. If it can't reliably detect where the article starts or ends, it makes the entire page available offline. Even on my tiny screen, it still manages to make offline webpages very readable. It's free, and it has no limits on how many articles you can make available offline. It can also make images and videos available for offline viewing.

Pocket is by far the best option for reading offline content on your smartphone / tablet. Get it here, you won't be disappointed.

Conque: Terminal emulators in Vim buffers

For the longest time, I've searched for a way to run terminal emulators in Vim buffers.

As a kind of work-around, I created Bexec, which allows you to run the current contents of a buffer through an external program. It then captures the output and inserts/appends it to another buffer.

Although Bexec works reasonably well, and still has its uses, it's not a true terminal emulator in Vim. Today I finally found a Vim plugin that lets you actually run interactive commands / terminals in Vim buffers: Conque.

It requires Vim with Python support built in. Installation is straightforward if you've got the requirements.

Download the .vmb file, open it in Vim, and issue:

:so %

It will then be installed. Quit Vim, restart it, and you can start using it:

:ConqueTerm bash

Very awesome.

The All-Web paradigm is a long way away

Google, with their Google Chrome OS, are betting on our computing experience moving to the Cloud in the future. Some people agree with that prediction. As Hacker News user Wavephorm puts it:

The "All-Web" paradigm is coming, folks. And it really doesn't matter how much you love your iPhone, or your Android, or Windows phone. Native apps are toast, in the long run. Your data is moving to the cloud — your pictures, your music, your movies, and every document you write. It's all going up there, and local hard drives will be history within 3 years. And what that means is ALL software is heading there too. Native apps running locally on your computer are going to be thing of the past, and it simply blows my mind that even people here on HackerNews completely fail to understand this fact.

Although I believe many things will be moving to the cloud in the (near) future, I also believe there are still major barriers to be overcome before we can move our entire computing into the cloud. An 'All-web' paradigm, where there are NO local apps – where there is NO local persistent storage – is a long, long way off, if not entirely impossible.

The Cloud lacks interoperability

One major thing currently missing from the Cloud is interoperability between Web applications. As mentioned on Hacker News: "local hard drives will be history". I believe we are greatly underestimating the level of interoperability local storage offers. Can you name a single native application that can't load and save files from and to your hard drive? Local storage ties all applications together and allows them to work with each other's data. I can just as easily open a JPEG in a picture viewer as in a photo editing package, or set it as my desktop background, etcetera.

If the All-web paradigm is to succeed, Web apps will need a way to talk to each other, or at the very least to some unified storage in the Cloud, without the user needing to download and re-upload files each time. Right now, if I want to edit a photo stored in Picasa in a decent image editor, I have to download it from Picasa, upload it to an online image editor, download it from there and upload it again to Picasa (removing the old photo). I have a pretty decent internet connection, but most of my time will be spent waiting 80 seconds for a 3.5 MB picture to download, upload, download again, etc.

Perhaps cloud storage providers will start publishing APIs so that other web apps can access your files directly, but given that the Web has historically been about being as incompatible as possible with everything else, I believe this will be a very large, if not insurmountable, problem.

User control will be gone

When Google launched the new version of its Gmail interface, many people were annoyed. Many people are annoyed with Facebook's TimeLine interface. Many of my friends still run ancient versions of WinAmp to play their music, simply because it's the best music player out there. With the All-web paradigm, choice over which programs you use, and which version you want to use will be gone. The big men in the Cloud will determine what your interface will look like. There will be no running of older versions of programs. Unless web applications find some way to unify storage, (as I mentioned earlier), there will be no way to migrate to another application. At the very least it will be painful.

Cloud storage is expensive

I'm sure we all enjoy our cheap local storage. If I need to temporarily store a few hundred gigabytes of data, I don't even have to think about where or how to store it. My home computer has installs of twelve different Operating Systems through VirtualBox. They take up about 100 GB. My collection of rare and local artists' music is around 15 GB. Backups of my entire computing history take up about 150 GB. Where in the cloud am I going to store all of that? Dropbox? It doesn't even list a price for that much Cloud storage! Extrapolating from the prices they do list, replicating my local storage in the Cloud would cost me about $200. A month.

Internet connections are not up to par

We may think our internet connections are fast, and compared to a few years ago they are, but they're not fast enough by a long shot to do our daily computing in the Cloud. First of all, upstream bandwidth is generally much more limited than downstream. If the All-Web paradigm is going to work, that has to change. But home internet connections aren't really the problem, I think. The real problem is mobile networks. The All-web paradigm requires being online all the time, everywhere. Lately there's been a trend (at least in my country) of reducing mobile internet subscriptions from unlimited data plans to very limited plans. A 500 MB limit per month is not uncommon now. Telcos' reasoning is that they need to recuperate the costs of operating the network. Some still offer "unlimited" data plans where, after exceeding your monthly quota, you're throttled to 64 kb/s. That's enough to check my email (barely), but it surely isn't enough to do anyone's day-to-day computing from the Cloud.

And that's the situation here, in one of the most well-connected countries in the world. Think of the number of countries that aren't so fortunate. If nothing else, those countries will keep local computing alive.

Subscription costs

Most web apps require a monthly subscription to do anything meaningful with them. It could be just me, but I'd much rather pay a single price up front, after which I can use my purchase for as long as I like. With the All-web paradigm, I'd have to pay monthly fees to Google (documents/storage), Dropbox, Netflix, some music streaming service, a VPS for development, and a lot more.

With the current prices, the monthly costs to me would be unacceptable. It's a lot cheaper to get a simple $400 desktop computer, which can take care of all those needs. Say I use it for 4 years. That comes down to about $8.50 a month. The cheapest Dropbox account is more expensive than that.

But the high price isn't really the problem. The problem is continuous payments. Say I lose my job, and I have to cut costs. With local computing, I could say "well, this PC is old, and should be replaced, but since I'm low on money, I'll keep using it for another year". Cancelling my subscription to some/all my services means I lose some/all my data. Remember, we're talking about an All-web environment here. No local storage large enough to store my data. The risks are simply too big.

Privacy

There's no such thing as privacy in the Cloud. Your personal information and data will be mined, abused and sold. You have no control over it. The more data that is stored, the larger the temptation for companies and criminals to monetize that data. Right now, most people don't care too much about privacy. We still have a choice about what we put in the cloud and what we keep to ourselves. That picture of your girlfriend in lingerie won't be ending up on Facebook any time soon, right? With an All-web environment, you'll have no choice. Want to store or edit a picture? It has to move to the cloud. Even those most unconcerned with privacy won't accept that.

The best we can hope for would be that web companies will treat our data confidentially. Hope. We have no control. Arguments that companies who abuse our data will soon lose all their users are not relevant. Your data will already be abused by that time. We only need a single incident for people to start distrusting the All-web paradigm. In fact, I think that has already happened.

Conclusion

In the future, many local applications will move to the cloud. In fact, many already have. Music and movie streaming, word processing, image editing, storage; they will move more and more to the Cloud. The All-web paradigm though, will never fly. It would be a huge step back in terms of convenience, cost, privacy and abilities. Local computing is here to stay. It may become more and more of a niche market, but it won't disappear.

Re-use existing SSH agent (cygwin et al)

(Please note that this post is not specific to Windows nor Cygwin; it'll work on a remote unix machine just as well)

On my netbook, I use Windows XP in combination with Cygwin (a unix environment for Windows) and Mintty for my Unixy needs. From there, I usually SSH to some unix-like machine somewhere, so I can do systems administration or development.

Unfortunately, using an SSH agent under Cygwin is awkward by default, since there's no parent process that can run it and put the required information (SSH_AUTH_SOCK) in the environment. On most Linux distributions, the SSH agent is started after you log in to an X11 session, so every child process (terminals you open, etc.) inherits the SSH_AUTH_SOCK environment variable and SSH can contact the ssh-agent to get your keys. The result? You have to start a new SSH agent, load your key and enter your password for every Mintty terminal you open. Quite annoying.

The upside is, it's not very hard to configure your system properly so that you need only one SSH agent running on your system, and thus only have to enter your password once.

The key lies in how ssh-agent creates the environment. When we start ssh-agent in the traditional manner, we do:

$ eval `ssh-agent`
Agent pid 1784

The command starts the SSH agent and sets a bunch of environment variables:

$ set | grep SSH_
SSH_AGENT_PID=1784
SSH_AUTH_SOCK=/tmp/ssh-qnsNAf1784/agent.1784

The SSH_AUTH_SOCK variable is how the ssh command knows where to contact the agent. As you can see, the socket filename is generated randomly, so you can't re-use the socket from another terminal: there's no way to guess its name.

Good thing ssh-agent allows us to specify the socket filename, so we can easily re-use it.

Put the following in your ~/.bashrc:

# If no SSH agent is already running, start one now. Re-use sockets so we never
# have to start more than one session.

export SSH_AUTH_SOCK=/home/fboender/.ssh-socket

ssh-add -l >/dev/null 2>&1
if [ $? = 2 ]; then
   # No ssh-agent running
   rm -rf $SSH_AUTH_SOCK
   # >| allows output redirection to over-write files if no clobber is set
   ssh-agent -a $SSH_AUTH_SOCK >| /tmp/.ssh-script
   source /tmp/.ssh-script
   echo $SSH_AGENT_PID >| ~/.ssh-agent-pid
   rm /tmp/.ssh-script
fi

The script above sets the socket filename manually to /home/yourusername/.ssh-socket. It then runs ssh-add, which attempts to contact the ssh-agent through that socket. If that fails with exit code 2, no ssh-agent is running, so we do some cleanup and start one.
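The exit-code-2 probe the snippet relies on translates directly to other languages. Here's a sketch in Python (the function name agent_running is my own, and it assumes OpenSSH's ssh-add is installed):

```python
import os
import subprocess

def agent_running(sock):
    """Return True if an ssh-agent answers on the given socket.

    `ssh-add -l` exits with status 2 when it cannot contact an agent
    through SSH_AUTH_SOCK, which is exactly what the bashrc snippet tests.
    """
    env = dict(os.environ, SSH_AUTH_SOCK=sock)
    with open(os.devnull, 'w') as devnull:
        rc = subprocess.call(['ssh-add', '-l'],
                             env=env, stdout=devnull, stderr=devnull)
    return rc != 2

# A socket path that certainly has no agent behind it:
print(agent_running('/nonexistent/.ssh-socket'))
```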

Now, all you have to do is start a single terminal, and load your keys once:

$ ssh-add ~/.ssh/fboender\@electricmonk.rsa
Enter passphrase for .ssh/fboender@electricmonk.rsa: [PASSWORD]
Identity added: .ssh/fboender@electricmonk.rsa (.ssh/fboender@electricmonk.rsa)

Now you can start as many new terminals as you'd like, and they'll all use the same ssh-agent, never requiring you to enter your password for that key more than once per boot.


Update: I've updated the script with suggestions from Anthony Geoghegan. It now also works if noclobber is set.

monit alerts not emailed: Resource temporarily unavailable

While setting up Monit, a tool for easy monitoring of hosts and services, I ran into a problem. I had configured Monit to email alerts to my email address, using my personal mail server (IP/email addresses obfuscated to protect the innocence of my email inbox):

set mailserver
set alert
set httpd port 2812

check file test path /tmp/monittest
  if failed permission 644 then alert

After starting it with monit -c ./monitrc, I could reach the webserver at port 2812. I also saw that the check was failing:

# monit -c ./monitrc status
File 'test'
  status                            Permission failed
  monitoring status                 monitored
  permission                        600

However, it was not sending me any emails with startup notifications or status error reports. My server's mail log showed no incoming mail, making it seem like Monit wasn't even trying to send email. Turning on Monit's logging feature, I noticed:

# monit -c ./monitrc quit
# monit -c ./monitrc -l ./monit.log
# tail monit.log
error : 'test' permission test failed for /tmp/monittest -- current permission is 0600
error : Sendmail: error receiving data from the mailserver '' -- Resource temporarily unavailable

I tried a manual connection to the mail server from the host where Monit was running, and it worked just fine.

The problem turned out to be a connection timeout to the mail server. Most mail servers nowadays wait a number of seconds before accepting connections; this reduces the rate at which spam can be delivered. Monit wasn't waiting long enough before deciding the mail server wasn't working, and bailed out of reporting errors with 'Resource temporarily unavailable'.
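This failure mode is easy to reproduce outside Monit. The hypothetical helper below (smtp_greeting is my own name, not a Monit API) connects and waits for the server's SMTP banner; give it a timeout shorter than the server's deliberate greeting delay and the read fails, just like Monit's did:

```python
import socket

def smtp_greeting(host, port=25, timeout=30):
    """Connect to a mail server and wait up to `timeout` seconds for its
    SMTP greeting banner (the '220 ...' line). Raises socket.timeout if
    the server delays its banner longer than the timeout."""
    with socket.create_connection((host, port), timeout=timeout) as conn:
        return conn.recv(512).decode('ascii', 'replace').strip()
```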

The solution is easy. The set mailserver configuration allows you to specify a timeout:

set mailserver with timeout 30 seconds

I'm happy to report that Monit is now sending email alerts just fine.