There's a free download available of "Think Bayes – Bayesian Statistics in Python" over at it-ebooks.info. A small excerpt from the book:
The premise of this book, and the other books in the Think X series, is that if you know
how to program, you can use that skill to learn other topics.
Most books on Bayesian statistics use mathematical notation and present ideas in terms
of mathematical concepts like calculus. This book uses Python code instead of math,
and discrete approximations instead of continuous mathematics. As a result, what
would be an integral in a math book becomes a summation, and most operations on
probability distributions are simple loops.
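That premise fits in a few lines of code. Here's a small example of my own (not from the book): a discrete Bayesian update, where the normalizing "integral" is just a sum computed in a loop. We roll a 5 and ask which of two dice (6- or 12-sided) produced it:

```python
# Discrete Bayesian update as a simple loop. What would be an integral
# in a math book is the sum used to normalize the posterior.
hypotheses = {6: 0.5, 12: 0.5}   # prior: either die is equally likely
data = 5                         # the observed roll

posterior = {}
for sides, prior in hypotheses.items():
    # Likelihood of rolling `data` on a die with `sides` sides
    likelihood = 1.0 / sides if data <= sides else 0.0
    posterior[sides] = prior * likelihood

total = sum(posterior.values())  # the normalizing constant
for sides in posterior:
    posterior[sides] /= total

print(posterior)  # the 6-sided die is now twice as likely as the 12-sided one
```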
I recently had to perform some bulk updates on semi-large tables (3 to 7 million rows) in MySQL. I ran into various problems that negatively affected the performance on these updates. In this blog post I will outline the problems I ran into and how I solved them. Eventually I managed to increase the performance from about 30 rows/sec to around 7000 rows/sec. The final performance was mostly CPU bound. Since this was on a VPS with only limited CPU power, I expect you can get better performance on some decently outfitted machine/VPS.
The situation I was dealing with was as follows:
About 20 tables, 7 of which were between 3 and 7 million rows.
Both MyISAM and InnoDB tables.
Updates required on values on every row of those tables.
The updates were too complicated to do in SQL, so they required a script.
All updates were done on rows that were selected on just their primary key, i.e. WHERE id = …
Here are some of the problems I ran into.
Python’s MySQLdb is slow
I implemented the script in Python, and the first problem I ran into is that the MySQLdb module is slow. It’s especially slow if you’re going to use the cursors. MySQL natively doesn’t support cursors, so these are emulated in Python code. One of the trickiest things is that a simple SELECT * FROM tbl will retrieve all the results and put them in memory on the client. For 7 million rows, this quickly exhausts your memory. Real cursors would fetch the result one-by-one from the database so that you don’t exhaust memory.
The solution here is to not use MySQLdb, but the native client bindings available through import _mysql.
LIMIT n,m is slow
Since MySQL doesn’t support cursors, we can’t mix SELECT and UPDATE queries in a loop. Thus we need to read in a bunch of rows into memory and update in a loop afterwards. Since we can’t keep all the rows in memory, we must read in batches. An obvious solution for this would be a loop such as (pseudo-code):
offset = 0
size = 1000
while True:
    rows = query('SELECT * FROM tbl LIMIT :offset, :size')
    for row in rows:
        # do some updates
    if len(rows) < size:
        break
    offset += size
This would use the LIMIT to read the first 1000 rows on the first iteration of the loop, the next 1000 on the second iteration of the loop. The problem is: in MySQL this becomes linearly slower for higher values of the offset. I was already aware of this, but somehow it slipped my mind.
The problem is that the database has to advance an internal pointer forward in the record set, and the further in the table you get, the longer that takes. I saw performance drop from about 5000 rows/sec to about 100 rows/sec, just for selecting the data. I aborted the script after noticing this, but we can assume performance would have crawled to a halt if we kept going.
The solution is to order by the primary key and then select everything we haven't processed yet:
size = 1000
last_id = 0
while True:
    rows = query('SELECT * FROM tbl WHERE id > :last_id ORDER BY id LIMIT :size')
    if not rows:
        break
    for row in rows:
        # do some updates
        last_id = row['id']
This requires that you have an index on the id field, or performance will greatly suffer again. More on that later.
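The keyset pattern above can be sketched as a self-contained generator. This is my own illustration against SQLite purely so it runs anywhere; with MySQL the logic is the same idea, only the placeholder syntax differs:

```python
import sqlite3

def iter_rows(conn, size=1000):
    """Yield all rows in primary-key order, fetching `size` at a time
    using keyset pagination (WHERE id > last_id) instead of LIMIT offset."""
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, name FROM tbl WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, size)).fetchall()
        if not rows:
            break
        for row in rows:
            yield row
        last_id = rows[-1][0]  # remember where we got to

# Tiny demonstration with an in-memory database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO tbl (name) VALUES (?)",
                 [("row%d" % i,) for i in range(2500)])
count = sum(1 for _ in iter_rows(conn, size=1000))
print(count)  # 2500
```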
At this point, SELECTs were pretty speedy. Including my row data manipulation, I was getting about 40,000 rows/sec. Not bad. But I was not updating the rows yet.
The next thing I did was apply some standard tricks to speed up bulk updates/inserts, such as disabling foreign key and uniqueness checks and running batches in a transaction. Since I was working with both MyISAM and InnoDB tables, I just mixed optimizations for both table types:
db.query('SET foreign_key_checks=0;')
db.query('SET unique_checks=0;')
db.query('LOCK TABLES %s WRITE;' % (tablename,))
# SELECT and UPDATE in batches
db.query('UNLOCK TABLES;')
db.query('SET unique_checks=1;')
db.query('SET foreign_key_checks=1;')
I must admit that I’m not sure if this actually increased performance at all. It is entirely possible that this actually hurts performance instead. Please test this for yourselves if you’re going to use it. You should also be aware that some of these options bypass MySQL’s data integrity checks. You may end up with invalid data such as invalid foreign key references, etc.
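For what it's worth, the batches-in-a-transaction part looks roughly like this. Again a sketch of my own against SQLite so it's self-contained; the commit-per-batch pattern is what matters here, not the library:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (id INTEGER PRIMARY KEY, val INTEGER)")
conn.executemany("INSERT INTO tbl (val) VALUES (?)", [(0,)] * 5000)
conn.commit()

batch_size = 1000
ids = [row[0] for row in conn.execute("SELECT id FROM tbl ORDER BY id")]
for i in range(0, len(ids), batch_size):
    # One transaction per batch: far fewer flushes to disk than
    # autocommitting every single UPDATE.
    for row_id in ids[i:i + batch_size]:
        conn.execute("UPDATE tbl SET val = 1 WHERE id = ?", (row_id,))
    conn.commit()

updated = conn.execute("SELECT COUNT(*) FROM tbl WHERE val = 1").fetchone()[0]
print(updated)  # 5000
```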
One mistake I did make was that I accidentally included the following in an early version of the script:
db.query('ALTER TABLE %s DISABLE KEYS;' % (tablename))
Such is the deviousness of copy-paste. This option will disable the updating of non-unique indexes while it’s active. This is an optimization for MyISAM tables to massively improve performance of mass INSERTs, since the database won’t have to update the index on each inserted row (which is very slow). The problem is that this also disables the use of indexes for data retrieving, as noted in the MySQL manual:
While the nonunique indexes are disabled, they are ignored for statements such as SELECT and EXPLAIN that otherwise would use them.
That means update queries such as UPDATE tbl SET key=value WHERE id=1020033 will become incredibly slow, since they can no longer use indexes.
MySQL server tuning
I was running this on a stock Ubuntu 12.04 installation. MySQL is basically completely unconfigured out of the box on Debian and Ubuntu. This means that those 16 GBs of memory in your machine will go completely unused unless you tune some parameters. I modified /etc/mysql/my.cnf and added the following settings to improve the speed of queries:
key_buffer = 128M
innodb_buffer_pool_size = 3G
The key_buffer setting is a setting for MyISAM tables that determines how much memory may be used to keep indexes in memory. The equivalent setting for InnoDB is innodb_buffer_pool_size, except that the InnoDB setting also includes table data.
In my case the machine had 4 GB of memory. You can read more about these settings in the MySQL documentation.
Don’t forget to restart your MySQL server.
Dropping all indexes except primary keys
One of the biggest performance boosts was to drop all indexes from all the tables that needed to be updated, except for the primary key indexes (those on the id fields). It is much faster to just drop the indexes and recreate them when you're done. This is basically the manual way to accomplish what we hoped ALTER TABLE %s DISABLE KEYS would do, but didn't.
UPDATE: I wrote a better script which is available here.
Here’s a script that dumps SQL commands to drop and recreate indexes for all tables:
# DANGER WILL ROBINSON, READ THE IMPORTANT NOTES BELOW
import sys
import _mysql

mysql_username = 'root'
mysql_passwd = 'passwd'
mysql_host = '127.0.0.1'
dbname = 'mydb'
tables = sys.argv[1:]
indexes = []

db = _mysql.connect(host=mysql_host, user=mysql_username, passwd=mysql_passwd, db=dbname)
db.query('SHOW TABLES')
res = db.store_result()
for row in res.fetch_row(maxrows=0):
    tablename = row[0]
    if not tables or tablename in tables:
        db.query('SHOW INDEXES FROM %s WHERE Key_name != "PRIMARY"' % (tablename,))
        res = db.store_result()
        for row_index in res.fetch_row(maxrows=0):
            table, non_unique, key_name, seq_in_index, column_name, \
                collation, cardinality, sub_part, packed, null, index_type, \
                comment, index_comment = row_index
            indexes.append((key_name, table, column_name))

for index in indexes:
    key_name, table, column_name = index
    print "DROP INDEX %s ON %s;" % (key_name, table)

for index in indexes:
    key_name, table, column_name = index
    print "CREATE INDEX %s ON %s (%s);" % (key_name, table, column_name)
Example output:
DROP INDEX idx_username ON users;
DROP INDEX idx_perm ON rights;
CREATE INDEX idx_username ON users (username);
CREATE INDEX idx_perm ON rights (perm);
Some important notes about the above script:
The script is not foolproof! If you have non-BTREE indexes, if you have indexes spanning multiple columns, if you have any kind of index that goes beyond a BTREE single column index, please be careful about using this script.
You must manually copy-paste the statements into the MySQL client.
It does NOT drop the PRIMARY KEY indexes.
In the end, I went from about 30 rows per second to around 8000 rows per second. The key to getting decent performance is to start simple, and slowly expand your script while keeping a close eye on performance. If you see a dip, investigate immediately to mitigate the problem.
A useful way of investigating slow performance is to use tools that unearth evidence of the root of the problem:
top can tell you if a process is mostly CPU bound. If you’re seeing high amounts of CPU, check if your queries are using indexes to get the results they need.
iostat can tell you if a process is mostly IO bound. If you’re seeing high amounts of I/O on your disk, tune MySQL to make better use of memory to buffer indexes and table data.
Use MySQL's EXPLAIN statement to see if, and which, indexes are being used. If not, create new indexes.
Avoid doing useless work such as updating indexes after every update. This is mostly a matter of knowing what to avoid and what not, but that’s what this post was about in the first place.
Baby steps! It took me entirely too long to figure out that I was initially seeing bad performance because my SELECT LIMIT n,m was being so slow. I was completely convinced my UPDATE statements were the cause of the initial slowdowns I saw. Only when I started commenting out the major parts of the code did I see that it was actually the simple SELECT query that was causing problems initially.
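One thing that helped with the baby steps was printing a running rows/sec figure, so a dip shows up the moment it happens. A minimal sketch of that idea (the process() function is a hypothetical stand-in for the real SELECT/UPDATE work):

```python
import time

def process(batch):
    pass  # stand-in for the real SELECT/UPDATE work on one batch

start = time.time()
processed = 0
for batch_nr in range(50):
    batch = range(1000)  # pretend we fetched 1000 rows
    process(batch)
    processed += len(batch)
    elapsed = time.time() - start
    # Guard against a zero elapsed time on very fast iterations
    rate = processed / elapsed if elapsed > 0 else float("inf")
    if batch_nr % 10 == 0:
        print("%d rows, %.0f rows/sec" % (processed, rate))
```

If the printed rate suddenly drops between batches, you know exactly which part of the run to investigate.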
That’s it! I hope this was helpful in some way!
If you're trying to connect to the Virtual KVM (console) on an HP Lights-Out 100i (LO100i) using the Remote Console Client Java applet, you might be getting an error along the lines of:
Username / Password invalid
This is a known problem with firmware version 4.24 (or earlier):
The Virtual Keyboard/Video/Mouse (KVM) will not be accessible
on HP ProLiant 100-series servers with Lights-Out 100 Base
Management Card Firmware Version 4.24 (or earlier), if the server
has been running without interruption for 248 days (or more). When
this occurs, when attempting to access Virtual KVM/Media as shown
below, the browser will generate the following message[...]
As a solution, HP recommends:
As a workaround, shut down the server and unplug the power cable.
After a few seconds, reconnect the power cable and restart the server.
I've found that it isn't required to actually unplug the power cable. For me, remotely cold-resetting the iLoM card got rid of the problem. You can do this with ipmitool:
$ ipmitool -H <ILOM_IP> -U <USERNAME> mc reset cold
Sent cold reset command to MC
Now we wait until the iLoM comes back up and we can successfully connect to the console via the KVM Java applet.
I'm trying out Juju with the 'local' environment, and ran into the following error:
$ sudo juju bootstrap
error: error parsing environment "local": no public ssh keys found
The Getting Started Guide mentions nothing of this error, and I couldn't find a solution on the web. After a bit of reading, it seems Juju requires a passwordless SSH key to be available in your ~/.ssh dir. So to get rid of this error, just generate a new key with no password:
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/fboender/.ssh/id_rsa):
Enter passphrase (empty for no passphrase): <EMPTY>
Enter same passphrase again: <EMPTY>
Now you bootstrap Juju:
$ sudo juju bootstrap -e local
HAR, short for HTTP Archive, is a JSON-encoded dump of a list of requests and their associated headers, bodies, etc. Here's a tiny fragment from a single request:
"statusText": "Moved Permanently",
HAR files can be exported from Chrome's Network analyser developer tool: ctrl-shift-i → Network tab → capture some requests → right-click and select "Save as HAR with contents". (Additional tip: check the "Preserve Log on Navigation" option – which looks like a recording button – to capture multi-level redirects and such.)
As human-readable as JSON is, it's still difficult to get a good overview of the requests. So I wrote a quick Python script that turns the JSON into something that's a little easier on our poor sysadmin's eyes:
It supports colored output, dumping request and response headers, and the bodies of POSTs and responses (although this will be very slow). You can also filter out uninteresting requests such as images or CSS/JS.
You can get it by cloning the Git repository from the Bitbucket repository.
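For the curious, the core idea is small, since HAR is just JSON with the requests under log.entries. A stripped-down sketch of my own (not the actual script):

```python
import json

def summarize_har(har_text):
    """Return one 'STATUS METHOD URL' line per request in a HAR dump."""
    har = json.loads(har_text)
    lines = []
    for entry in har["log"]["entries"]:
        req, resp = entry["request"], entry["response"]
        lines.append("%s %s %s" % (resp["status"], req["method"], req["url"]))
    return lines

# Minimal HAR fragment for demonstration:
har_text = json.dumps({"log": {"entries": [{
    "request": {"method": "GET", "url": "http://example.com/"},
    "response": {"status": 301, "statusText": "Moved Permanently"},
}]}})
print("\n".join(summarize_har(har_text)))  # 301 GET http://example.com/
```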
It seems even professional sysadmins occasionally forget the bare minimum configuration that should be done on a new machine. As a developer and part-time system administrator, I can't count the number of times I've had to waste significantly more time than necessary because of it. Here's a, by no means exhaustive, list of things you should configure on any new machine you deploy.
1. Pick a good hostname
Set a sane hostname on your machine. Something that describes what the machine is or does. Something that uniquely identifies it from any other machines on, at least, the same network. For instance, machine for a client called Megacorp might be called "mc-tst-www-1" to identify the first test WWW server for Megacorp. The primary production loadbalancer might be called "mc-prod-lb-1". Never have your junior sysadmin bring down the master database backend because he thought he was on a different machine.
2. Put all hostnames in /etc/hosts
Put all hostnames your machine uses in the /etc/hosts file to avoid annoying DNS lookup delays and other problems.
3. Install ntpd
Running into problems related to clock drift on your server is not a matter of "if", but a matter of "when". And with clock drift it will be sooner rather than later, depending on which direction your clock is drifting in. Install NTPd, and synchronize it to the same servers as all your other machines. Don't use a default pool if you can avoid it, because they might use Round Robin DNS and give you different servers. Theoretically this shouldn't pose a problem. Theoretically…
If you're running virtual machines, turn off Virtualbox/VMWare/whatever's time synchronization. They've historically been proven to be very unreliable. Install ntpd anyway. And I swear, as a developer, I will kick you in the face if I ever have to diagnose another problem caused by a lack of ntpd.
4. Make sure email can be delivered
This one is simple. Make sure email can be delivered to the outside world. Many programs and scripts will need to be able to send email. Make sure they can. Ideally, you should have a dedicated SMTP server set up on your network that hosts can relay email through. A gateway firewall should prevent all other outgoing traffic for port 25, unless you want your server to be turned into a zombified spam node (which will happen).
5. Cron email
Configure Cron such that output is emailed to an actual person. You want to know about that "No space left on device" error that crashed your cobbled-together backups script. You can specify the email address with the MAILTO directive in the crontab file. Don't forget about user crontabs! Since it's hard to ensure every user crontab has a MAILTO setting, you may want to configure your SMTP server to automatically forward all email to a special email address.
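For example, at the top of /etc/crontab or a user crontab (the address and job below are placeholders):

```
MAILTO=sysadmin@example.com
# Any output from jobs below is now mailed to the address above.
0 3 * * * root /usr/local/bin/backup.sh
```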
6. Protect the SSH port
Unauthorized probing of the SSH port will happen, unless you prevent it. Weak passwords can be easily guessed in a few hundred tries. Timing attacks can be used to guess which accounts exist on the system, even if the attacker can't guess the password. There are several options for securing your SSH port:
- Listen on a different port. This is the least secure option, as it can usually be easily probed using a port scanner. It will fool some of the botnets out in the wild blindly scanning on port 22, but it won't keep out the more advanced attackers. If you go for this option, don't go for port 2222, but pick something arbitrarily high, such as 58245.
- Install Fail2ban. It scans your logs and blocks any IPs that show malicious signs. This is a good idea, regardless of whether you want to secure SSH or not.
- Firewall off the port completely. Only open access from a few select IPs, such as your management network. Use a port knocker to open SSH ports on demand in case you absolutely need access from unpredictable IPs.
7. Configure a firewall
This should go without saying.. install and configure a firewall. Firewall everything. Incoming traffic, outgoing traffic, all of it. Only open up what you need to open. Don't rely on your gateway's firewall to do its job; you will regret it when other machines on your network get compromised.
8. Monitor your system
Monitor your system, even if it's just a simple shell script that emails you about problems. Disks will fill up, services will mysteriously shut down and your CPU load will go to 300. I highly recommend also monitoring important services from a remote location.
9. Configure resource usage
Running Apache, a database or some Java stack? Configure it properly so it utilizes the resources your system has, but doesn't overload it. Configure the minimum and maximum connections Apache will accept, tune the memory your database is allowed to use, etc.
10. Keep your software up-to-date
Install something like apt-dater to keep your software up-to-date. Many server compromises are directly linked to outdated software. Don't trust yourself to keep a machine up to date. You will forget. If you're running third-party software not installed from your package repository, subscribe to their security announcement mailing list and keep a list of all third-party software installed on every server. A tool such as Puppet, Chef or Ansible can help keep your system not only up to date, but uniform.
11. Log rotation
Make sure all logs are automatically rotated, or your disks will fill up. Take a look at /etc/logrotate.d/ to see how. For instance, for Apache vhosts that each have their own log directory, you can add an entry such as the following (adjust the path glob to your layout):
/var/www/*/logs/*.log {
    # create 640 root adm # Disabled so old logfile's properties are used.
    postrotate
        if [ -f /var/run/apache2.pid ]; then /etc/init.d/apache2 restart > /dev/null; fi
    endscript
}
12. Prevent users from adding SSH keys
Remove the ability for users to add new authorized keys to their account. Which keys are allowed to connect should be in the admin's hands, not the users'. Having the authorized keys files scattered all over your system also makes maintenance harder. To do this, change the AuthorizedKeysFile setting in /etc/ssh/sshd_config to point to a root-owned location, for example:
AuthorizedKeysFile /etc/ssh/authorized_keys/%u
13. Limit user crontabs
Limit which users can create personal crontab entries by placing only allowed usernames in /etc/cron.allow. This prevents users from creating CPU/IO heavy cronjobs that interfere with your nightly backups.
14. Backups, backups and more backups
Make backups! Keep local backups for easy restoring of corrupt files, databases and other disasters. Databases should be backed up locally each night, if at all possible. Rotate backups on a daily, weekly and monthly cycle. Keep off-site backups too. For small servers I can highly recommend Boxbackup. It keeps remote encrypted backups, does full and incremental backups, keeps a history and does snapshotting as well as continuous syncing. Only deltas (changes in files) are transferred and stored, so it is light on resources. I wrote an article on setting it up which might prove useful.
15. Install basic tools
Make sure basic tools for daily admin tasks are pre-installed. There's nothing more annoying than having to track down problems and not having the means to do so, especially when your network refuses to come up. Some essential tools:
- Whatever more you need..
16. Install fail2ban
I've already mentioned this in the "Protect your SSH port", but it bears mentioning again: install Fail2ban to automatically block offending IPs.
That's it. These are the things I would consider the bare minimum that should be properly configured when you deploy a new machine. It will take a little bit more time up front to configure machines properly, but it will save you time in the end. I can highly recommend using Puppet, Chef or Ansible to help you automate these tasks.
This was the case a few years ago. I'm not sure it still is for VMWare. For VirtualBox, it most certainly is, but you wouldn't run that in a production environment probably. At the very least, install NTPd on your host.
I wrote a small tool that assists in creating mirrors of your public and private Bitbucket Git repositories and wikis. It also synchronizes already existing mirrors. Initial mirror setup requires that you manually enter your username/password. Subsequent synchronization of mirrors is done using Deployment Keys.
You can download a tar.gz, a Debian/Ubuntu package or clone it from the Bitbucket page.
Clone / mirror / backup public and private repositories and wikis.
No need to store your username and password to update clones.
No need to run an SSH agent. Uses passwordless private Deployment Keys (which have no write access to your repositories).
Here's how it works in short. Generate a passwordless SSH key:
Generating public/private rsa key pair.
Enter file in which to save the key: /home/fboender/.ssh/bbcloner_rsa<ENTER>
Enter passphrase (empty for no passphrase):<ENTER>
Enter same passphrase again: <ENTER>
You should add the generated public key to your repositories as a Deployment Key. The first time you use bbcloner, or whenever you've added new public or private repositories, you have to specify your username/password. BBcloner will retrieve a list of your repositories and create mirrors for any new repositories not yet mirrored:
$ bbcloner -n -u fboender /home/fboender/gitclones/
Cloning new repositories
Cloning project_a wiki
Now you can update the mirrors without using a username/password:
$ bbcloner /home/fboender/gitclones/
Updating existing mirrors
You can run the above from a cronjob. Specify the -s argument to prevent bbcloner from showing normal output.
The mirrors are full remote git repositories, which means you can clone them:
$ git clone /home/fboender/gitclones/project_a.git/
Cloning into project_a...
Don't push changes to it, or the mirror won't be able to sync. Instead, point the remote origin to your Bitbucket repository:
$ git remote rm origin
$ git remote add origin git@bitbucket.org:fboender/project_a.git
$ git push
remote: bb/acl: fboender is allowed. accepted payload.
Here are ways of getting bbcloner:
For more information, please see the Bitbucket repository.
Every now and then I have to work on something that involves LDAP, and every time I seem to have completely forgotten how it works. So I'm putting this here for future me: a quick introduction to LDAP basics. Remember, future me (and anyone else reading this), at the time of writing you are by no means an LDAP expert, so take that into consideration! Also, this will be very terse. There are enough books on LDAP on the internet. I don't think we need another.
What is LDAP?
- LDAP stands for Lightweight Directory Access Protocol.
- It is a standard for storing and accessing "Directory" information. Directory as in the yellow pages, not the filesystem kind.
- OpenLDAP (unix) and Active Directory (Microsoft) implement LDAP.
- Commonly used to store organisational information such as employee information.
- Queried for access control definitions (logging in, checking access), addressbook information, etcetera.
How is information stored?
- LDAP is a hierarchical (tree-based) database.
- Information is stored as key-value pairs.
- The tree structure is basically free-form. Every organisation can choose how to arrange the tree for themselves, although there are some commonly used patterns.
An example of an LDAP tree structure (some otherwise required attributes are left out for clarity!):
- Each leaf in the tree has a specific unique path called the Distinguished Name (DN). For example: uid=ppeterson,ou=people,dc=megacorp,dc=com
- Unlike file paths and most other tree-based paths which have their roots on the left, the Distinguished Name has the root of the tree on the right.
- Instead of the conventional path separators such as the dot ( . ) or forward-slash ( / ), the DN uses the comma ( , ) to separate path elements.
- Unlike conventional paths (e.g. /com/megacorp/people/ppeterson), the DN path includes an attribute type for each element in the path. For instance: dc=, ou= and uid=. These are abbreviations that specify the type of the attribute. More on attribute types in the Entry chapter.
- It is common to arrange the tree in a globally unique way, using dc=com,dc=megacorp to specify the organisation.
- Entries are parts of the tree that actually store information. In this case: uid=jjohnson and uid=ppeterson.
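Since a DN is just comma-separated attribute=value pairs with the root on the right, taking one apart is easy. A simplified sketch of my own that ignores escaped commas and multi-valued RDNs:

```python
def parse_dn(dn):
    """Split a DN into (attribute_type, value) pairs, leaf first.
    Simplified: does not handle escaped commas or multi-valued RDNs."""
    parts = []
    for rdn in dn.split(","):
        attr_type, _, value = rdn.strip().partition("=")
        parts.append((attr_type, value))
    return parts

dn = "uid=ppeterson,ou=people,dc=megacorp,dc=com"
print(parse_dn(dn))
# [('uid', 'ppeterson'), ('ou', 'people'), ('dc', 'megacorp'), ('dc', 'com')]
```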
An example entry for DN uid=jjohnson,ou=people,dc=megacorp,dc=com (some otherwise required attributes are left out for clarity!):
- An entry has an Relative Distinguished Name (RDN). The RDN is a unique identifier for the entry in that part of the tree. For the entry with Distinguished Name (DN) uid=jjohnson,ou=people,dc=megacorp,dc=com, the RDN is uid=jjohnson.
- An entry stores key/value pairs. In LDAP lingo, these are called attribute types and attribute values. Attribute types are sometimes abbreviations. In this case, the attribute types are cn= (CommonName), uid= (UserID) and mail=.
- Keys may appear multiple times, in which case they are considered a list of values.
- An entry has one or more objectClasses.
- Object classes are defined by schemas, and they determine which attributes must and may appear in an entry. For instance, the posixAccount object class is defined in the nis.schema and must include cn, uid, etc.
- Different object classes may define the same attribute types.
- A reference of common object classes can be found in Appendix E of the excellent Zytrax LDAP Guide.
- A reference of common attribute types can also be found in Appendix E.
Connecting and searching LDAP servers
The most common action to perform on LDAP servers is to search for information in the directory. For instance, you may want to search for a username to verify if they entered their password correctly, or you may want to search for Common Names (CNs) to auto-complete names and email addresses in your email client. In order to search an LDAP server, we must perform the following:
- Connect to the LDAP server
- Authenticate against the LDAP server so we are allowed to search. This is called binding. Basically it's just logging in. We bind against an LDAP server by specifying a user's DN and password. This can be confusing because there can be DNs/password with which you can bind in the LDAP, but also user/passwords which are merely stored so that other systems can authenticate users using the LDAP server.
- Specify which sub-part of the tree we wish to search. This is called the Base DN (Base Distinguished Name). For example: ou=people,dc=megacorp,dc=com, so search only people. Different bind DN's may search different parts of the tree.
- Specify how deep we want to search in the tree. This is called the level. The level can be: BaseObject (search just the named entry, typically used to read one entry), singleLevel (entries immediately below the base DN), or wholeSubtree (the entire subtree starting at the base DN).
- Specify what kind of entries we'd like to search for. This is called the filter. For example, (objectClass=*) will search for ANY kind of object class. (objectClass=posixAccount) will only search for entries of the posixAccount object class.
Here's an example of connecting, binding and searching an LDAP server using the ldapsearch commandline util:
$ ldapsearch -W -h ldap.megacorp.com -D "uid=ldapreader,dc=megacorp,dc=com" \
    -b ou=people,dc=megacorp,dc=com "(objectclass=*)"
- -W tells ldapsearch to prompt for a password.
- -h is the hostname of the LDAP server to connect to.
- -D is the Distinguished Name (DN), a.k.a. the username, with which to connect. In this case, a special ldapreader account.
- -b is the Base DN, a.k.a the subtree, we want to search.
Finally, we specify a search filter: "(objectclass=*)". This means we want to search for all object classes.
The previous example, but this time in the Python programming language (using the python-ldap module; the bind password is a placeholder):
import ldap

l = ldap.initialize('ldap://ldap.megacorp.com:389')
l.simple_bind_s('uid=ldapreader,dc=megacorp,dc=com', 'password')
results = l.search_s('ou=people,dc=megacorp,dc=com', ldap.SCOPE_SUBTREE, '(objectclass=*)')
That's it! Like I said, it's terse! If you need to know more about LDAP, here are some good resources on it:
Say you're trying to set the "ignore" property on something in a subversion checkout like this:
svn propset svn:ignore "foo.pyc" .
Next you do a commit, and it seems it isn't working: the file still shows up. In order to fix this, you must remember to:
- Remove the file from subversion and commit.
- svn update all the checkouts of that repository so that the file is gone everywhere!
- Set the svn:ignore property.
- Now commit the property change, or svn status will still show it (even in the local checkout)!
- svn update all the checkouts of the repository.
The full sequence, on two checkouts (host1 and host2):
host1$ svn rm foo.pyc && svn commit -m "Remove compiled python code"
host2$ svn update
host1$ svn propset svn:ignore "foo.pyc" .
host1$ svn commit -m "Ignore compiled python code"
host2$ svn update
If you get conflicts because you didn't follow these steps exactly:
host2$ svn update
host2$ svn resolve --accept working foo.pyc
host2$ svn rm foo.pyc
host2$ svn update
At revision 123
That should solve it.
If you want all your subversion problems solved, try this.
The Joomla v2.5 backend administrator interface by default logs you out after you've been inactive for 24 minutes (some on the internet claim it's 15, others 30 minutes; for me, it seems it was 24). This is quite annoying, and in most PHP applications it's easily fixed by changing the session timeout. Joomla, however, also requires that you modify some other parts. Here's how I got it to work:
Summary for the lazy technical people. These are the steps to modify the session timeout:
- In php.ini, find the session.gc_maxlifetime setting, and change it.
- In the Joomla Admin interface, go to Site → Global Configuration → System and change the Session Lifetime value.
- In Joomla's root directory, open configuration.php and change public $lifetime = '1440'; to the desired number of seconds.
If this wasn't enough information for you, read the following which explains more in-depth:
Step 1: Modify php.ini
Figure out which php.ini Joomla uses by creating the following "info.php" file in your Joomla directory:
<?php phpinfo(); ?>
Direct your browser to the file, for instance: http://mysite.example.com/info.php. You should see the purple/blue PHP info page. Locate the "Loaded Configuration File" setting. This is the php.ini file that will be used. Make sure to delete the info.php file when you're done!
Edit the file (for me it's /etc/php5/apache2/php.ini) and find the following setting:
session.gc_maxlifetime = ....
Change the setting to however many seconds you want to remain logged in without activity, before being logged out automatically. I set mine to 8 hours (28800 seconds):
session.gc_maxlifetime = 28800
Step 2: Timeout in the Joomla interface
I'm not sure this step is required, but I changed it, so you may have to as well.
Open the Joomla Administrator backend (http://mysite.example.com/administrator/), log in as a Super User ('admin' usually), and open Site → Global Configuration → System. On the right side, change Session Lifetime to the number of seconds you want to keep the session alive. For me, that's 28800 seconds again.
Step 3: Joomla's configuration.php
Final step. In the Joomla top directory, you'll find a file called configuration.php. Open this file with your editor, and search for:
public $lifetime = '1440';
Change the number (1440) to the number of seconds you want the session to stay alive:
public $lifetime = '28800';
Save the file.
Step 4: Restart your webserver
As a final step, you may have to restart your webserver. How to do this depends on your installation.
Now your session should remain alive for the number of seconds specified, even if you're not active.