Archive for the ‘python’ Category


bbcloner v1.3 released. BitBucket backup tool.

I've just released bbcloner v1.3. bbcloner creates mirrors of your public and private Bitbucket Git repositories and wikis. It also synchronizes already existing mirrors. Initial mirror setup requires that you manually enter your username/password. Subsequent synchronization of mirrors is done using Deployment Keys.

This release features the following changes:

  • Support for updating over https (patch by msboom)

 

my_indexr: Script to drop and recreate MySQL indexes

As can be read in this article, I was in need of a method to quickly drop all indexes (except primary keys) from a MySQL database. After googling around a bit and being astonished that apparently no-one had written such a thing yet, I wrote the script that can be seen in that article.

Unfortunately, that script wasn't very good, so I decided to do a cleaner, better implementation of it. The result is my_indexr, which spits out SQL commands to drop and recreate indexes on a database. Other features include:

  • Process only certain tables
  • Process non-primary or both normal and primary indexes
  • Correctly handles:
    • Primary key indexes
    • Compound key / multi-column indexes
    • Index types (BTREE, etc)
    • Prefix lengths
    • Auto_increment columns, which MUST be part of a key (my_indexr skips indexes containing these columns)

There may still be some edge cases that are not properly handled by my_indexr. If you encounter one, please let me know.

You can download my_indexr from its Bitbucket page.

This is what its output looks like:

$ ./indexr.py -u root -p mydb
DROP INDEX `location` ON `idx_tst_innodb_basic`;
DROP INDEX `name_age` ON `idx_tst_innodb_basic`;
DROP INDEX `email` ON `idx_tst_innodb_basic`;
DROP INDEX `PRIMARY` ON `idx_tst_innodb_compkey`;
CREATE  INDEX `location` USING BTREE ON `idx_tst_innodb_basic` (`location_id`);
CREATE  INDEX `name_age` USING BTREE ON `idx_tst_innodb_basic` (`name`(40),`age`);
CREATE UNIQUE INDEX `email` USING BTREE ON `idx_tst_innodb_basic` (`email`);
ALTER TABLE `idx_tst_innodb_compkey` ADD PRIMARY KEY (`last_name`,`first_name`);

Think Bayes: Bayesian Statistics in Python

There's a free download available of "Think Bayes – Bayesian Statistics in Python" over at it-ebooks.info. A small excerpt from the book:

The premise of this book, and the other books in the Think X series, is that if you know
how to program, you can use that skill to learn other topics.

Most books on Bayesian statistics use mathematical notation and present ideas in terms
of mathematical concepts like calculus. This book uses Python code instead of math,
and discrete approximations instead of continuous mathematics. As a result, what
would be an integral in a math book becomes a summation, and most operations on
probability distributions are simple loops.

Increasing performance of bulk updates of large tables in MySQL

I recently had to perform some bulk updates on semi-large tables (3 to 7 million rows) in MySQL. I ran into various problems that negatively affected the performance of these updates. In this blog post I will outline the problems I ran into and how I solved them. Eventually I managed to increase the performance from about 30 rows/sec to around 7000 rows/sec. The final performance was mostly CPU bound. Since this was on a VPS with only limited CPU power, I expect you can get better performance on a decently outfitted machine/VPS.

The situation I was dealing with was as follows:

  • About 20 tables, 7 of which were between 3 and 7 million rows.

  • Both MyISAM and InnoDB tables.

  • Updates required on values on every row of those tables.

  • The updates were too complicated to do in SQL, so they required a script.

  • All updates were done on rows that were selected on just their primary key. I.e. WHERE id = …

Here are some of the problems I ran into.

Python’s MySQLdb is slow

I implemented the script in Python, and the first problem I ran into is that the MySQLdb module is slow. It's especially slow if you're going to use cursors. MySQL doesn't natively support cursors, so these are emulated in Python code. One of the trickiest things is that a simple SELECT * FROM tbl will retrieve all the results and put them in memory on the client. For 7 million rows, this quickly exhausts your memory. Real cursors would fetch results one by one from the database so that you don't exhaust memory.

The solution here is to not use MySQLdb, but use the native client bindings available with import _mysql.
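
For reference, here's roughly what the low-level _mysql API looks like. This is just a minimal sketch; the table, columns and connection details are made up. Note that use_result() streams rows from the server instead of buffering them all client-side, but the connection then can't run other queries until every row has been read, which is part of why the batched approach described below is still needed.

import _mysql

# Example connection values; substitute your own.
db = _mysql.connect(user='root', passwd='passwd', db='mydb')

db.query('SELECT id, name FROM tbl')
res = db.use_result()       # stream rows instead of buffering the whole result set
while True:
    rows = res.fetch_row()  # a tuple with at most one row; empty when exhausted
    if not rows:
        break
    row_id, name = rows[0]
    # process the row here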

LIMIT n,m is slow

Since MySQL doesn’t support cursors, we can’t mix SELECT and UPDATE queries in a loop. Thus we need to read in a bunch of rows into memory and update in a loop afterwards. Since we can’t keep all the rows in memory, we must read in batches. An obvious solution for this would be a loop such as (pseudo-code):

offset = 0
size = 1000
while True:
    rows = query('SELECT * FROM tbl LIMIT :offset, :size')
    for row in rows:
        # do some updates
    if len(rows) < size:
        break
    offset += size

This uses LIMIT to read the first 1000 rows on the first iteration of the loop, the next 1000 on the second iteration, and so on. The problem is that in MySQL this becomes linearly slower for higher values of the offset. I was already aware of this, but somehow it slipped my mind.

The problem is that the database has to advance an internal pointer forward in the record set, and the further in the table you get, the longer that takes. I saw performance drop from about 5000 rows/sec to about 100 rows/sec, just for selecting the data. I aborted the script after noticing this, but we can assume performance would have crawled to a halt if we kept going.

The solution is to order by the primary key and then select everything we haven't processed yet:

size = 1000
last_id = 0
while True:
    rows = query('SELECT * FROM tbl WHERE id > :last_id ORDER BY id LIMIT :size')
    if not rows:
        break
    for row in rows:
        # do some updates
        last_id = row['id']

This requires that you have an index on the id field, or performance will greatly suffer again. More on that later.

At this point, SELECTs were pretty speedy. Including my row data manipulation, I was getting about 40,000 rows/sec. Not bad. But I was not updating the rows yet.

Connection settings

The next thing I did was apply some standard tricks to speed up bulk updates/inserts: disabling foreign key checks and running batches in a transaction. Since I was working with both MyISAM and InnoDB tables, I just mixed optimizations for both table types:

db.query('SET autocommit=0;')
db.query('SET unique_checks=0; ')
db.query('SET foreign_key_checks=0;')
db.query('LOCK TABLES %s WRITE;' % (tablename))
db.query('START TRANSACTION;')
# SELECT and UPDATE in batches
db.query('COMMIT;')
db.query('UNLOCK TABLES')
db.query('SET foreign_key_checks=1;')
db.query('SET unique_checks=1; ')
db.query('SET autocommit=1;')

I must admit that I’m not sure if this actually increased performance at all. It is entirely possible that this actually hurts performance instead. Please test this for yourselves if you’re going to use it. You should also be aware that some of these options bypass MySQL’s data integrity checks. You may end up with invalid data such as invalid foreign key references, etc.

One mistake I did make was that I accidentally included the following in an early version of the script:

db.query('ALTER TABLE %s DISABLE KEYS;' % (tablename))

Such is the deviousness of copy-paste. This option disables the updating of non-unique indexes while it's active. It is an optimization for MyISAM tables that massively improves the performance of mass INSERTs, since the database won't have to update the index on each inserted row (which is very slow). The problem is that it also disables the use of those indexes for data retrieval, as noted in the MySQL manual:

While the nonunique indexes are disabled, they are ignored for statements such as SELECT and EXPLAIN that otherwise would use them.

That means update queries such as UPDATE tbl SET key=value WHERE id=1020033 will become incredibly slow, since they can no longer use indexes.

MySQL server tuning

I was running this on a stock Ubuntu 12.04 installation. MySQL is basically completely unconfigured out of the box on Debian and Ubuntu. This means that those 16 GBs of memory in your machine will go completely unused unless you tune some parameters. I modified /etc/mysql/my.cnf and added the following settings to improve the speed of queries:

[mysqld]
key_buffer         = 128M
innodb_buffer_pool_size = 3G

The key_buffer setting is a setting for MyISAM tables that determines how much memory may be used to keep indexes in memory. The equivalent setting for InnoDB is innodb_buffer_pool_size, except that the InnoDB setting also includes table data.

In my case the machine had 4 GB of memory. You can read more about the settings on these pages:

Don’t forget to restart your MySQL server.

Dropping all indexes except primary keys

One of the biggest performance boosts was to drop all indexes from all the tables that needed to be updated, except for the primary key indexes (those on the id fields). It is much faster to just drop the indexes and recreate them when you're done. This is basically the manual way to accomplish what we hoped the ALTER TABLE %s DISABLE KEYS would do, but didn't.

UPDATE: I wrote a better script which is available here.

Here’s a script that dumps SQL commands to drop and recreate indexes for all tables:

#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# DANGER WILL ROBINSON, READ THE important notes BELOW
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
import _mysql
import sys

mysql_username = 'root'
mysql_passwd = 'passwd'
mysql_host = '127.0.0.1'
dbname = 'mydb'

tables = sys.argv[1:]
indexes = []

db = _mysql.connect(host=mysql_host, user=mysql_username, passwd=mysql_passwd, db=dbname)
db.query('SHOW TABLES')
res = db.store_result()
for row in res.fetch_row(maxrows=0):
    tablename = row[0]
    if not tables or tablename in tables:
        db.query('SHOW INDEXES FROM %s WHERE Key_name != "PRIMARY"' % (tablename))
        res = db.store_result()
        for row_index in res.fetch_row(maxrows=0):
            table, non_unique, key_name, seq_in_index, column_name, \
            collation, cardinality, sub_part, packed, null, index_type, \
            comment, index_comment = row_index
            indexes.append( (key_name, table, column_name) )

for index in indexes:
    key_name, table, column_name = index
    print "DROP INDEX %s ON %s;" % (key_name, table)

for index in indexes:
    key_name, table, column_name = index
    print "CREATE INDEX %s ON %s (%s);" % (key_name, table, column_name)

Output looks like this:

$ ./drop_indexes.py
DROP INDEX idx_username ON users;
DROP INDEX idx_perm ON rights;
CREATE INDEX idx_username ON users (username);
CREATE INDEX idx_perm ON rights (perm);

Some important notes about the above script:

  • The script is not foolproof! If you have non-BTREE indexes, if you have indexes spanning multiple columns, if you have any kind of index that goes beyond a BTREE single column index, please be careful about using this script.

  • You must manually copy-paste the statements into the MySQL client.

  • It does NOT drop the PRIMARY KEY indexes.

Conclusions

In the end, I went from about 30 rows per second to around 8000 rows per second. The key to getting decent performance is to start simple and slowly expand your script while keeping a close eye on performance. If you see a dip, investigate immediately and mitigate the problem.

A useful way of investigating slow performance is to use tools to unearth evidence of the root of the problem.

  • top can tell you if a process is mostly CPU bound. If you’re seeing high amounts of CPU, check if your queries are using indexes to get the results they need.

  • iostat can tell you if a process is mostly IO bound. If you’re seeing high amounts of I/O on your disk, tune MySQL to make better use of memory to buffer indexes and table data.

  • Use MySQL's EXPLAIN statement to see if, and which, indexes are being used. If not, create new indexes.

  • Avoid doing useless work such as updating indexes after every update. This is mostly a matter of knowing what to avoid and what not, but that’s what this post was about in the first place.

  • Baby steps! It took me entirely too long to figure out that I was initially seeing bad performance because my SELECT LIMIT n,m was being so slow. I was completely convinced my UPDATE statements were the cause of the initial slowdowns I saw. Only when I started commenting out the major parts of the code did I see that it was actually the simple SELECT query that was causing problems initially.

That’s it! I hope this was helpful in some way!

Quick-n-dirty HAR (HTTP Archive) viewer

HAR, HTTP Archive, is a JSON-encoded dump of a list of requests and their associated headers, bodies, etc. Here's a partial example containing a single request:

{
  "startedDateTime": "2013-09-16T18:02:04.741Z",
  "time": 51,
  "request": {
    "method": "GET",
    "url": "http://electricmonk.nl/",
    "httpVersion": "HTTP/1.1",
    "headers": [],
    "queryString": [],
    "cookies": [],
    "headersSize": 38,
    "bodySize": 0
  },
  "response": {
    "status": 301,
    "statusText": "Moved Permanently",
    "httpVersion": "HTTP/1.1",
    "headers": [],
    "cookies": [],
    "content": {
      "size": 0,
      "mimeType": "text/html"
    },
    "redirectURL": "",
    "headersSize": 32,
    "bodySize": 0
  },
  "cache": {},
  "timings": {
    "blocked": 0,
  }
},

HAR files can be exported from Chrome's Network analyser developer tool (ctrl-shift-i → Network tab → capture some requests → right-click and select "Save as HAR with contents"). Additional tip: check the "Preserve Log on Navigation" option – which looks like a recording button – to capture multi-level redirects and such.

As human-readable as JSON is, it's still difficult to get a good overview of the requests. So I wrote a quick Python script that turns the JSON into something that's a little easier on our poor sysadmin's eyes:

[Screenshot: harview output]

It supports colored output, dumping of request and response headers, and dumping of the bodies of POSTs and responses (although this will be very slow). You can filter out uninteresting requests such as images or CSS/JS with the --filter-X options.
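
If you just want a quick one-off summary instead of the full script, a few lines of Python already get you most of the way. This is only a sketch, not the actual harview code; it assumes a standard HAR file with the usual top-level log.entries list (the example above shows a single entry from such a list):

import json
import sys

# Usage: python har_summary.py some_dump.har
with open(sys.argv[1]) as f:
    har = json.load(f)

for entry in har['log']['entries']:
    req = entry['request']
    resp = entry['response']
    print '%6s ms  %3d  %-4s %s' % (entry['time'], resp['status'], req['method'], req['url'])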

You can get it by cloning the Git repository from Bitbucket.

Cheers!

bbcloner: create mirrors of your public and private Bitbucket Git repositories

 

I wrote a small tool that assists in creating mirrors of your public and private Bitbucket Git repositories and wikis. It also synchronizes already existing mirrors. Initial mirror setup requires that you manually enter your username/password. Subsequent synchronization of mirrors is done using Deployment Keys.

You can download a tar.gz, a Debian/Ubuntu package or clone it from the Bitbucket page.

Features

  • Clone / mirror / backup public and private repositories and wikis.
  • No need to store your username and password to update clones.
  • Exclude repositories.
  • No need to run an SSH agent. Uses passwordless private Deployment Keys (which have no write access to your repositories).

Usage

Here's how it works in short. Generate a passwordless SSH key:

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key: /home/fboender/.ssh/bbcloner_rsa<ENTER>
Enter passphrase (empty for no passphrase):<ENTER>
Enter same passphrase again: <ENTER>

You should add the generated public key to your repositories as a Deployment Key. The first time you use bbcloner, or whenever you've added new public or private repositories, you have to specify your username/password. BBcloner will retrieve a list of your repositories and create mirrors for any new repositories not yet mirrored:

$ bbcloner -n -u fboender /home/fboender/gitclones/
Password: 
Cloning new repositories
Cloning project_a
Cloning project_a wiki
Cloning project_b

Now you can update the mirrors without using a username/password:

$ bbcloner /home/fboender/gitclones/
Updating existing mirrors
Updating /home/fboender/gitclones/project_a.git
Updating /home/fboender/gitclones/project_a-wiki.git
Updating /home/fboender/gitclones/project_b.git

You can run the above from a cronjob. Specify the -s argument to prevent bbcloner from showing normal output.

The mirrors are full remote git repositories, which means you can clone them:

$ git clone /home/fboender/gitclones/project_a.git/
Cloning into project_a...
done.

Don't push changes to it, or the mirror won't be able to sync. Instead, point the remote origin to your Bitbucket repository:

$ git remote rm origin
$ git remote add origin git@bitbucket.org:fboender/project_a.git
$ git push
remote: bb/acl: fboender is allowed. accepted payload.

Get it

Here are ways of getting bbcloner:

More information

For more information, please see the Bitbucket repository.

python-libtorrent program doesn't exit

(TL;DR: To the solution)

I was mucking about with the Python bindings for libtorrent, and made something like this:

import time
import libtorrent

fname = 'test.torrent'
session_dir = '.'   # directory to save the downloaded data in
ses = libtorrent.session()
ses.listen_on(6881, 6891)

info = libtorrent.torrent_info(fname)
h = ses.add_torrent({'ti': info, 'save_path': session_dir})
prev_progress = -1
while (not h.is_seed()):
    status = h.status()
    progress = int(round(status.progress * 100))
    if progress != prev_progress:
        print 'Torrenting %s: %i%% done' % (h.name(), progress)
        prev_progress = progress
    time.sleep(1)

print "Done torrenting %s" % (h.name())
# ... more code

After running it a few times, I noticed the program would not always terminate. You'd immediately suspect a problem in the while loop condition, but in all cases "Done torrenting Foo" would be printed and then the program would hang.

In celebration of one of the rare occasions that I don't spot a hanging problem in such a simple piece of code right away, I fired up PDB, the Python debugger, which told me:

$ pdb ./tvt 
> /home/fboender/Development/tvtgrab/trunk/src/tvt(9)<module>()
-> import sys
(Pdb) cont
Torrenting Example Torrent v1.0: 100% done
Done torrenting Example Torrent v1.0
The program finished and will be restarted

after which it promptly hung. That last line, "The program finished and will be restarted", that's PDB telling us execution of the program finished. Yet it still hung.

At this point, I was suspecting threads. Since libtorrent is a C++ library, and the main loop in my code doesn't actually do anything, it seems libtorrent is doing its thing in the background and not always shutting down properly. (Although it's more likely I just don't understand what it's doing.) It's quite normal for torrent clients to take a while before closing down, especially if there are still peers connected. Most of the time, if I waited long enough, the program would terminate normally. However, sometimes it wouldn't terminate even after an hour, even if no peers were ever connected to any torrents (the original code does not always load torrents into a session).

Digging through the documentation, I couldn't easily find a method of shutting down the session. I did notice the following:

~session()

The destructor of session will notify all trackers that our torrents have been shut down. If some trackers are down, they will time out. All this before the destructor of session returns. So, it's advised that any kind of interface (such as windows) are closed before destructing the session object. Because it can take a few seconds for it to finish. The timeout can be set with set_settings().


Seems like libtorrent uses destructors to shut down the session. Adding the following to the end of the code fixed the problem of the script not exiting:

del ses

The del statement in Python calls any destructors (if you're lucky) on that object. Having nearly zero C++ knowledge, I suspect C++ calls destructors automatically at program exit. Python doesn't do that though, so we have to call it manually.
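
Concretely, the end of the script above then looks something like this. The try/finally is my own addition (not something libtorrent requires) to make sure the destructor still runs if the loop raises an exception:

try:
    while (not h.is_seed()):
        # ... the progress-printing loop from above ...
        time.sleep(1)
    print "Done torrenting %s" % (h.name())
finally:
    # Explicitly destroy the session so libtorrent can notify trackers and
    # shut down its background threads before the interpreter exits.
    del ses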

Update: Calling the destructor does not definitively solve the problem. I am still experiencing problems with hangs when calling the session destructor. I will investigate further and update when a solution has been found.

Update II: Well, I've not been able to solve the problem any other way than upgrading to the latest version of libtorrent. So I guess that'll have to do.

Python UnitTest: AssertRaises pitfall

I ran into a little pitfall with Python's UnitTest module. I was trying to unit test some failure cases where the code I called should raise an exception.

Here's what I did:

def test_file_error(self):
    self.assertRaises(IOError, file('/foo', 'r'))

I mistakenly thought this would work, in that assertRaises would notice the IOError exception and mark the test as passed. Naturally, it doesn't:

ERROR: test_file_error (__main__.SomeTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "./test.py", line 10, in test_file_error
    self.assertRaises(IOError, file('/foo', 'r'))
IOError: [Errno 2] No such file or directory: '/foo'

The problem is that I'm a dumbass and I didn't read the documentation carefully enough:


assertRaises(exception, callable, *args, **kwds)
Test that an exception is raised when callable is called with any positional or keyword arguments that are also passed to assertRaises().

If you look carefully, you'll notice that I did not pass in a callable. Instead, I passed in the result of a callable! Here's the correct code:

def test_file_error(self):
    self.assertRaises(IOError, file, '/foo', 'r')

The difference is that this time I pass a callable (file) and the arguments ('/foo' and 'r') that the test case should pass to that callable. self.assertRaises will then call it for me with the specified arguments and catch the IOError. In the first scenario (the wrong code), the call is made before the unit test is actually watching out for it.
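
As an aside: since Python 2.7, assertRaises can also be used as a context manager, which sidesteps this pitfall entirely because the call happens inside the with block instead of being evaluated up front:

def test_file_error(self):
    with self.assertRaises(IOError):
        file('/foo', 'r')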

Evolutionary Algorithm: Evolving "Hello, World!"

Note: The latest version of this article is always available from the Writings page in HTML, PDF, ePub and AsciiDoc (source) format.

My interest in Evolutionary Algorithms started when I read On the Origin of Circuits over at DamnInteresting.com. I always wanted to try something like that out for myself, but never really found the time. Now I have, and I think I've found some interesting results.

Disclaimer: I know next to nothing about Evolutionary Algorithms. Everything you read in here is the product of my own imagination and tests. I may use the wrong algorithms, nomenclature, methodology and might just be getting very bad results. They are, however, interesting to me, and I do know something about evolution, so here it is anyway.

How Evolution Works

So, how does an Evolutionary Algorithm work? Why, the same as normal biological evolution, mostly! Very (very) simply said, organisms consist of DNA, which determines their characteristics. When organisms reproduce, there is a chance their offspring's DNA contains a mutation, which can lead to differences in characteristics. Sufficiently negative changes in offspring make that offspring less fit to survive, causing it, and the mutation, to die out eventually. Positive changes are passed on to future offspring. So through evolution a set of DNA naturally tends to grow towards its "goal", which is ultimate fitness for its environment. Now this is not an entirely correct description, but for our purposes it is good enough.

A simple evolutionary algorithm

There is nothing stopping us from using the same technique to evolve things towards goals set by a programmer. As can be seen from the Antenna example in the DamnInteresting article, this can sometimes even produce better things than engineers can come up with. In this post, I'm going to evolve the string "Hello, World!" from random garbage. The first example won't be very interesting, but it demonstrates the concept rather well.

First, let's define our starting point and end goal:

source = "jiKnp4bqpmAbp"
target = "Hello, World!"

Our evolutionary algorithm will start with "jiKnp4bqpmAbp", which we can view as the DNA of our "organism". It will then randomly mutate some of the DNA, and judge the new mutated string's fitness. But how do we determine fitness? This is probably the most difficult part of any evolutionary algorithm.

Lucky for us, there's an easy way to do this with strings. All we have to do is take the value of each character in the mutated string, and see how much it differs from the same character in the target string. This is called the distance between two characters. We then add all those differences, which leads us to a single value which is the fitness of that string. A fitness of 0 is perfect, and means that both strings are exactly the same. A fitness of 1 means one of the characters is off by one. For instance, the strings "Hfllo" and "Hdllo" both have a fitness of one. The higher the fitness number, the less fit it actually is!

Here's the fitness function.

def fitness(source, target):
   fitval = 0
   for i in range(0, len(source)):
      fitval += (ord(target[i]) - ord(source[i])) ** 2
   return(fitval)

If you look closely, you'll notice that for each character, I square the difference. This is to convert any negative numbers to positive ones, and to put extra emphasis on larger differences. If we don't do this, the string "Hanno" would have a fitness of 0. You see, the difference between 'e' and 'a' is -4, between 'l' and 'n' it's +2 (which we have twice) and between 'o' and 'o' it's 0. Adding these up yields a fitness of 0, but it's not the string we want at all. If we square the differences, they become 16, 4, 4 and 0, which yields a fitness of 24. Effectively, we square each difference so that they can't cancel each other out.
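
To double-check that arithmetic, here's a tiny snippet that runs the example through the fitness function from above, plus the unsquared sum for comparison:

def fitness(source, target):
   fitval = 0
   for i in range(0, len(source)):
      fitval += (ord(target[i]) - ord(source[i])) ** 2
   return(fitval)

print fitness("Hanno", "Hello")   # 24: the squared differences can't cancel out
print sum(ord("Hello"[i]) - ord("Hanno"[i]) for i in range(5))   # 0: unsquared, they do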

Edit: In the mutation algorithm below, I only mutate one character by one value at a time. It has been pointed out that, unless I actually allow for larger mutations, squaring the distance is largely pointless, since new mutations will always only differ by one value. At the time I wrote this fitness function, I had no idea how the rest of the algorithm would look like. It seemed like a good idea.

Now we need to introduce mutations into our string. This is rather easy. We simply pick a random character in the string, and either increment or decrement it by one, or leave it alone:

def mutate(source):
   charpos = random.randint(0, len(source) - 1)
   parts = list(source)
   parts[charpos] = chr(ord(parts[charpos]) + random.randint(-1,1))
   return(''.join(parts))

Time to tie the whole shebang together!

fitval = fitness(source, target)
i = 0
while True:
   i += 1
   m = mutate(source)
   fitval_m = fitness(m, target)
   if fitval_m < fitval:
      fitval = fitval_m
      source = m
      print "%5i %5i %14s" % (i, fitval_m, m)
   if fitval == 0:
      break

This should be easy enough to understand. For each iteration of the While-loop, we mutate the string and then calculate its fitness. If it is fitter than the original string (the parent), we make the child the new string. Otherwise, we throw it away. If the fitness is 0, we're done!

Let's look at some output. I'm snipping out some intermediary output because it's not terribly interesting.

At generation 1, we have a fitness of 15491, and the string looks nothing like "Hello, World!". The same for generation 20, 40, 60, etc.

    1 15491  jjKnp4bqpmAbp
   20 15400  jiKnp3bppoAbp
   40 15377  jiKlo2bpooAdp
   60 15130  iiKlo2aoooAdp

Not much progress so far. At generation 500 it's still a load of nonsense:

  500  9986  \eTlo,YaorNdf

Generation 1200, we start to see something that looks like "Hello, World!":

 1200  4186  Heglo,LWorhdP

Generation 1500, we're getting very close!

 1500  3370  Hello,GWorldL

It still takes a good 1500 generations more before we're finally there:

 3078     2  Hello, Vorld"
 3079     2  Hfllo, World"
 3080     2  Hfllo, World"
 3081     0  Hello, World!

There it is!

A better, more interesting, algorithm

Okay, so that worked. But... it was kinda lame. Nothing interesting to see, really, was there? That's because our algorithm was a little too simplistic. Only one "organism" in the gene pool, only one character mutated at any time. We can do better than that, so let's modify the program to make it more interesting.

We're not going to touch our fitness function, since that works rather well. Instead, let's introduce a gene pool. Instead of having only one string, why not have a whole bunch of randomly generated strings and let them duke it out among themselves? That sounds a bit more real-life, doesn't it?

GENSIZE = 20
genepool = []
# calc_fitness() is the fitness function from before (renamed here); the DNA
# is now a list of characters instead of a string.
for i in range(0, GENSIZE):
   dna = [random.choice(string.printable[:-5]) for j in range(0, len(target))]
   fitness = calc_fitness(dna, target)
   candidate = {'dna': dna, 'fitness': fitness }
   genepool.append(candidate)

This little snippet generates a gene pool with 20 random strings and their fitnesses. In standard terminology, the gene pool would be called the population. (Thanks, reddit!)

Now, let's modify our mutation function. Instead of mutating one single character, we feed it two parents, picked at random from the genepool, and it will mix their DNA together a bit. This is called crossover. It will also randomly mutate one character in the resulting DNA. It then returns the newly fabricated child, including its fitness.

def mutate(parent1, parent2):
   child_dna = parent1['dna'][:]

   # Mix both DNAs
   start = random.randint(0, len(parent2['dna']) - 1)
   stop = random.randint(0, len(parent2['dna']) - 1)
   if start > stop:
      stop, start = start, stop
   child_dna[start:stop] = parent2['dna'][start:stop]

   # Mutate one position
   charpos = random.randint(0, len(child_dna) - 1)
   child_dna[charpos] = chr(ord(child_dna[charpos]) + random.randint(-1,1))
   child_fitness = calc_fitness(child_dna, target)
   return({'dna': child_dna, 'fitness': child_fitness})

We also need a routine to pick two random parents from the genepool. Now, we could just pick them completely at random, but what you really want is for parents with a good fitness to have a better chance of producing offspring. This is called elitism. If we sort the genepool list by fitness, we can use a uniform product distribution to make sure that parents with better fitness get chosen more often.

Now you might ask, what the hell is a uniform product distribution? When you randomly pick a number between, say, one and ten, each number has the same chance of being picked. This is called a "uniform distribution". But when you pick two random numbers between 0 and 1 and multiply them, the result is much more likely to be a small number than a big one. Hence the name "uniform product distribution".

So our random parent picker will do just that. We pick two random real numbers between 0 and 1, multiply them, and then scale the result up to our pool size by multiplying it with the size of the pool. Since the genepool is sorted by fitness, the low indexes (the fitter candidates) get picked far more often than the high ones. We return that parent from the pool.

def random_parent(genepool):
   wRndNr = random.random() * random.random() * (GENSIZE - 1)
   wRndNr = int(wRndNr)
   return(genepool[wRndNr])
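
If you want to see that bias for yourself, here's a quick sanity check that samples the picker a million times and counts how often each slot in a sorted pool of 20 gets chosen. The exact counts differ per run, but slot 0 (the fittest candidate) comes out far ahead, the counts drop off steadily towards the end of the list, and slot 19 never gets picked at all because of the int() truncation:

import random

GENSIZE = 20
counts = [0] * GENSIZE
for i in range(1000000):
   wRndNr = int(random.random() * random.random() * (GENSIZE - 1))
   counts[wRndNr] += 1

for pos, count in enumerate(counts):
   print "%2d %7d" % (pos, count)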

There! Now it's time for our main loop:

while True:
   genepool.sort(key=lambda candidate: candidate['fitness'])

   if genepool[0]['fitness'] == 0:
      # Target reached
      break

   parent1 = random_parent(genepool)
   parent2 = random_parent(genepool)

   child = mutate(parent1, parent2)
   if child['fitness'] < genepool[-1]['fitness']:
      genepool[-1] = child

For each iteration of the While True loop, we first sort the genepool by fitness so that the most fit parents are at the top. We check to see if the fittest happens to be the target string we're looking for. If so, we stop the loop.

Then we select two parents from the genepool using the uniform product distribution so that fitter parents are chosen more often. We create a bastard mutated child that will mix both parents' DNA together and introduce a little mutation. If the new child is more fit than the worst in the genepool, it will replace that degenerate one in the genepool. In the next iteration, the pool is sorted again on fitness so that the new child takes its rightful place.

Results

Now it's time to run this puppy and see what it does. Again, I snip out some of the less interesting stuff.

Here's the genepool in the beginning. The first number is the generation (the number of times the While-loop has run), the second number the fitness and the third column is the DNA for that entry in the genepool.

     1   7617   'iSx{$,K`u~(B
     1   9284   SQf`1N#UdrPlT
     1  12837   sYIu<E"Fq'^_.
     1  15531   DC8Dg1I$*mUs-
     1  16064   L~*}JBVdF7bu2
     1  16533   1,XU%)5$q[YuO
     1  16588   ff],ceW<0fud&
     1  17316   [V3@2'VgY\{KV
     1  17356   kWw#v/P<#apG9
     1  17581   <Lrh(1hN_Bd)3
     1  18777   TM]_]TbtxFY:q
     1  19656   $zS+EI?BS>%z(
     1  19841   =S;B~((W8 D,6
     1  20398   P_A$D|NPJPio/
     1  21957   J&f=O:g\8'{S2
     1  22543   5*T2c"pMZ80L'
     1  24954   A&lZ#A_}MxI"P
     1  25186   &9MrI|0&x)q,N
     1  28110   OlXT/Q{y3{"LR
     1  29656   8WB99hx%0]}h[

One big random jumbled mess. Note the two entries with fitness 9284 and 16588: these are the parents that were selected for the new child in the next generation. Let's see how it looks after one generation:

     2   7617   'iSx{$,K`u~(B
     2   8742   SQf`1N#UdfumT
     2   9284   SQf`1N#UdrPlT
     2  12837   sYIu<E"Fq'^_.
     2  15531   DC8Dg1I$*mUs-
     2  16064   L~*}JBVdF7bu2
     2  16533   1,XU%)5$q[YuO
     2  16588   ff],ceW<0fud&
     2  17316   [V3@2'VgY\{KV
     2  17356   kWw#v/P>#apG9
     2  17581   <Lrh(1hN_Bd)3
     2  18777   TM]_]TbtxFY:q
     2  19656   $zS+EI?BS>%z(
     2  19841   =S;B~((W8 D,6
     2  20398   P_A$D|NPJPio/
     2  21957   J&f=O:g\8'{S2
     2  22543   5*T2c"pMZ80L'
     2  24954   A&lZ#A_}MxI"P
     2  25186   &9MrI|0&x)q,N
     2  28110   OlXT/Q{y3{"LR

Two random parents from the previous generation have had their DNA mixed, and have generated an offspring that is better than both of them: the new entry in second place, with a fitness of 8742, while its parents only had fitnesses of 9284 and 16588. Let's skip ahead a bit and look at the 6th generation:

     6   7617   'iSx{$,K`u~(B
     6   8742   SQf`1N#UdfumT
     6   9284   SQf`1N#UdrPlT
     6  10198   SQfD1N#UdfumT
     6  12837   sYIu<E"Fq'^_.
     6  15531   DC8Dg1I$*mUs-
     6  16064   L~*}JBVdF7bu2
     6  16387   SQf`1N"MZ80LT
     6  16533   1,XU%)5$q[YuO
     6  16588   ff],ceW<0fud&
     6  17316   [V3@2'VgY\{KV
     6  17356   kWw#v/P>#apG9
     6  17356   kWw#v/P>#apG9
     6  17581   <Lrh(1hN_Bd)3
     6  18777   TM]_]TbtxFY:q
     6  19656   $zS+EI?BS>%z(
     6  19841   =S;B~((W8 D,6
     6  20287   fe],1eW<0fud&
     6  20398   P_A$D|NPJPio/
     6  21957   J&f=O:g\8'{S2

As you can see, the "SQf" has reproduced again with success, and there are now four variants of it in the genepool. We also note "kWw#", of which there are two identical copies. This can happen when the entire DNA of one parent is copied and no mutation occurs. In our mutate function, we use the first parent's DNA as a base and then randomly overlay some of the second parent's DNA. This can be anything from the entire second parent's DNA to nothing at all. But generally, the chance is higher that the first parent's DNA survives largely intact.

The next interesting generation is 13:

    13   4204   RQf`{$,KdfumT
    13   7617   'iSx{$,K`u~(B
    13   7617   'iSx{$,K`u~(B
    13   8742   SQf`1N#UdfumT
    13   8742   SQf`1N#UdfumT
    13   9284   SQf`1N#UdrPlT
    13   9284   SQf`1N#UdrPlT
    13  10198   SQfD1N#UdfumT
    13  12837   sYIu<E"Fq'^_.
    13  15531   DC8Dg1I$*mUs-
    13  15838   L~*xJBVdG7bu2
    13  15856   $zS+<E"Fq(^_(
    13  15883   L~*xJCVdG7bu2
    13  16064   L~*}JBVdF7bu2
    13  16387   SQf`1N"MZ80LT
    13  16533   1,XU%)5$q[YuO
    13  16588   ff],ceW<0fud&
    13  17316   [V3@2'VgY\{KV
    13  17356   kWw#v/P>#apG9
    13  17356   kWw#v/P>#apG9

Wow! "SQf" has been really busy and now almost rules the genepool. "iSx" is second and third, but has lost its number one position to the "RQf" variant of "SQf". "RQf" was introduced in the 12th generation as a child of an "iSx" and "SQf" variant. We see that "kWw#" has been knocked almost to the end of the list by more fit candidates. It is very obvious that this pool is no longer random. Patterns are starting to emerge all over it.

By the time we reach generation 40:

    40   3306   RQSw{$-KcfumB
    40   4204   RQf`{$,KdfumT
    40   4229   RQf`|$,KdfumT
    40   4242   RQe`|$,KdfumT
    40   4795   RQSw{$-KdfumT
    40   4971   RQSwz$*K`uSnT
    40   4973   RQSwz$+K`uSmT
    40   4992   RQSwz$+K`uSnT
    40   5017   SQSxz$+K`uSmT
    40   5017   SQSxz$+K`uSmT
    40   5951   (QSxz$+KdfSmT
    40   5985   'QSxz$+K`uSmT
    40   6421   SQfx{$+K`u~(B
    40   6444   TQf`{$+K`u~(B
    40   6489   SQfx{$+KdfS(B
    40   6492   TQf`{$-K`u~(B
    40   7034   SQSxy$+KdfS(B
    40   7617   'iSx{$,K`u~(B
    40   7617   'iSx{$,K`u~(B
    40   7625   'iS`{$,Kdg~(B

The genepool is now almost entirely dominated by the "RQf" variants. Forms of its original parents "SQf" and "iSx" can still be found here and there, although "iSx" is almost entirely gone from the pool. An interesting thing is that we can see combinations of letters that keep reappearing. These are almost like actual genes! Combinations of DNA that work well together and therefore stay in the genepool in that combination. It takes lots of generations to make variants of these genes that are more fit than previous versions.

The next milestone is found in the 67th generation:

    67   3138   RQSw{$+KdfukA
    67   3161   RQSw{$+KcfukA
    67   3176   RQSw{$,KdfulA
    67   3176   RQSw{$+KcfulA
    67   3218   RQSw{$-LcfumA
    67   3222   RQSw{%,KefumB
    67   3237   RQSw{$-LcfvmA
    67   3241   RQSw{$-KcfumA
    67   3241   RQSw{$-KcfumA
    67   3266   RQSw{$-KceumA
    67   3266   RQSw{$-KceumA
    67   3267   RRSw{$-KcfumB
    67   3289   RQSw{%,KefumC
    67   3306   RQSw{$-KcfumB
    67   3306   RQSw{$-KcfumB
    67   3323   RQSw{#-KcfumB
    67   3324   RPSw{$-KdfumB
    67   3331   RQSw{$-KbfumB
    67   3348   RQSw{#-KbfumB
    67   3489   RQSw{$+KdfumA

This marks the first generation where there are no other variations than the RQS one. But immediately, we see the next generation in which a new number one is found:

    68   3119   QQSw{$+KdfukA
    68   3138   RQSw{$+KdfukA
    68   3161   RQSw{$+KcfukA

By the 96th generation, QQS has taken over the top:

    96   3060   QQSw{%+KdhukA
    96   3065   QRSw{%+KdfukA
    96   3081   QQSw{%+KdgukA
    96   3081   QQSw{%+KdgukA
    96   3081   QQSw{%+KdgukA
    96   3096   QQSw{$+KdgukA
    96   3104   QQSw{%+KdfukA
    96   3119   QQSw{$+KdfukA
    96   3119   QQSw{$+KdfukA
    96   3119   QQSw{$+KdfukA
    96   3137   RRSw{$,KdfulA
    96   3137   RRSw{$,KdfulA
    96   3138   RQSw{$+KdfukA
    96   3138   RQSw{$+KdfukA
    96   3138   RQSw{$+KdfukA
    96   3138   RQSw{$+KdfukA
    96   3138   RQSw{$+KdfukA
    96   3142   QQSw{$,KdfukA
    96   3142   QQSw{$+KcfukA
    96   3144   QQSw|$+KdfukA

This is where the race gets boring. Every now and then a new, better mutation will arise and take over the genepool. Change is slow though, and no big surprises are left. The candidates slowly but surely mutate until they reach something resembling the "Hello, World!" we are looking for, in generation 1600:

  1600     19   Hdllo+ Worle%
  1600     20   Hdklo+ Worle%
  1600     20   Hdklo+ Worle%
  1600     20   Hdklo+ Worle%
  1600     20   Hdklo+ Worle%
  1600     20   Hdklo+ Workd%

It takes another three hundred generations or so to get to the final target:

  1904      0   Hello, World!
  1904      1   Hello, World"
  1904      1   Hello, World"
  1904      2   Hello, Wprld"
  1904      2   Helmo, World"
  1904      2   Helmo, World"
  1904      2   Hdllo, World"
  1904      2   Hello, Worle"

Here are the programs so you can download them and play with them a bit (ignore the SSL warning; it's a self-signed certificate):

Interesting (if you're boring like me and you like this kind of stuff) facts:

  • It usually takes anywhere between 2500 and 4000 generations to evolve the target.
  • On average, it takes approximately 3100 generations to evolve the target.
  • If we remove the parent DNA mixing and rely solely on mutations, it takes on average 3650 generations to evolve the target.
  • The parent DNA mixing is only really useful in the beginning. In the first generations, it can quickly propel a new mix of DNA to the top of the list, but later on random mutations, instead of mixing DNA, become the main driving force behind the evolution. (This doesn't have to be the case in real-life evolution, naturally.)
  • Sometimes "beneficial" mutations disappear. For instance, the word "World" already appeared in mutation 1469, but was overtaken by other mutations quickly. It was pushed out of the genepool at generation 1486, only to reappear in generation 1659. From then on, it quickly rose to the top and dominated the top 5 positions of the genepool within 10 generations.

Update: It has rightly been pointed out that there are much more efficient methods of this algorithm. Please keep in mind that I had absolutely no idea what I was doing. :-D I'm surprised I got so close to how one would properly implement an Evolutionary Algorithm.

Also, here are some more interesting statistics. I modified the mutation function a number of times, and these are the results:

  • One char, -1, 0 or +1 ascii-value: 3100 generations
  • Two chars, -1, 0 or +1 ascii-value: 1924 generations
  • Three chars, -1, 0 or +1 ascii-values: 1734 generations
  • Four chars, -1, 0 or +1 ascii-values: 1706 generations
  • One char, between -4 and +4 ascii-values: 1459 generations
  • Two chars, between -4 and +4 ascii-values: 2122 generations
  • Three chars, between -4 and +4 ascii-values: 4490 generations

You can also read the Reddit discussion and the Hacker News discussion for some nice insights. One of the most interesting comments mentions:

FWIW, for this problem, at least the way the OP set it up, the "naive" algorithm is actually a very good way to go - when I increase the population size to 20, and set the mutation/selection/crossover policies OP used, I find that the average number of fitness checks required to hit "Hello, World" (about 3510) is actually higher than the number in the naive version (in the neighborhood of 3k, usually a bit under). Also, the real time taken is larger. Which means that adding "genetic" to the algorithm has actually hurt us...
In fact, even with my full GA codebase in hand (not a substantial one, I wrote it in response to this post, but it's more flexible than the OP's), I couldn't find any situation where having a population size more than a few members helped - single member mutation (which is accepted/rejected if better/worse) always won. This is a good indication that this type of problem is vastly better suited to gradient descent than it is to a genetic algorithm.

Cool stuff.

PyWebkitGTK: 'module' object has no attribute 'WebView'

If you're working with PyWebkitGTK, and you get the following error:

Traceback (most recent call last):
  File "./webkit.py", line 7, in 
    import webkit
  File "/home/todsah/webkit.py", line 18, in 
    class BrowserPage(webkit.WebView):
AttributeError: 'module' object has no attribute 'WebView'

… make sure you haven't named your script 'webkit.py', and that there is no other script with the same name in that directory. Also delete any webkit.pyc files. D'oh!
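
A quick way to check which file Python is actually importing (a generic debugging trick, nothing PyWebkitGTK-specific):

import webkit
print webkit.__file__   # should point at the PyWebkitGTK module, not at your own webkit.py(c)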