Archive for the ‘linux’ Category


Script to start a Chrome browser with an SSH Socks5 proxy

Socks5 proxies are great. They allow you to tunnel all traffic for applications that support Socks proxies through the proxy. One example I frequently use is starting a Chrome window that does everything as if it were running on a remote machine. This is especially useful to bypass firewalls so you can test websites that are only available on localhost on a remote machine, or sites that can only be accessed if you're on a remote network. Basically it's a poor man's application-specific VPN over SSH.

Normally I run the following:

ssh -D 8000 -N remote.example.com &
chromium-browser --temp-profile --proxy-server="socks5://localhost:8000"

However that quickly becomes tedious to type, so I wrote a script:

#!/bin/bash
HOST=$1
SITE=$2

if [ -z "$HOST" ]; then
    echo "Usage; $0 <HOST> [SITE]"
    exit 1
fi

while true;
do
    PORT=$(expr 8000 + $RANDOM / 32) # random port in the range 8000 - 9023
    if [ -z "$(netstat -lnt | awk -v port=":$PORT$" '$6 == "LISTEN" && $4 ~ port')" ]; then
        # Port not in use
        ssh -D $PORT -N $HOST &
        PID=$!
        chromium-browser --temp-profile --proxy-server="socks5://localhost:$PORT" $SITE
        kill $PID
        exit 0
    fi
done

The script finds a random unused port in the range 8000 – 9023, starts an SSH Socks5 proxy on it and then starts Chromium configured to use that proxy.
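
Assuming you save the script as, say, chrome-socks (the name here is just an example) and make it executable, starting a proxied browser looks like this:

$ chmod +x chrome-socks
$ ./chrome-socks remote.example.com
$ ./chrome-socks remote.example.com http://localhost:8080

The second, optional argument is the site the browser should open right away.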

Together with the excellent Scripts panel plugin for Cinnamon, this makes for a nice menu to easily launch a browser to access remote sites otherwise unreachable:

[Screenshot: the Scripts panel menu in Cinnamon]

Update: Added a second optional parameter to specify the site you want the browser to connect to, for added convenience.

BBCloner v1.4: Bitbucket backup tool

I've released v1.4 of BBCloner.

BBCloner (Bitbucket Cloner) creates mirrors of your public and private Bitbucket Git repositories. It also synchronizes already existing mirrors. Initial mirror setup requires you to manually enter your username/password. Subsequent synchronization of mirrors is done using Deployment Keys.

This release features a new flag: --tolerant (-t). It prevents bbcloner from complaining when synchronization of a repository fails the first time. Only on the second failure to synchronize the same repository does bbcloner send out an email when using the -t switch. This should save a lot of unwarranted emails, given that Bitbucket quite regularly seems to bork while syncing repositories.

Get the new release from the Downloads page.

Host inventory overview using Ansible's Facts

Ansible is a multiplexing configuration orchestrator. It allows you to run commands and configure servers from a central system. For example, to run the uname -a command on servers in the group "intranet":

[fboender@jib]~/Projects/ansible$ ansible -m shell -a "uname -a" intranet
host001.example.com | success | rc=0 >>
Linux host001.example.com 2.6.32-45-server #102-Ubuntu SMP Wed Jan 2 22:53:00 UTC 2013 x86_64 GNU/Linux

host004.example.com | success | rc=0 >>
Linux vps004c.example.com 2.6.32-55-server #117-Ubuntu SMP Tue Dec 3 17:45:11 UTC 2013 x86_64 GNU/Linux

Ansible can also gather system information using the 'setup' module. It returns the information as a JSON structure:

[fboender@jib]~/Projects/ansible$ ansible -m setup intranet
host001.example.com | success >> {
    "ansible_facts": {
        "ansible_all_ipv4_addresses": [
            "182.78.44.33", 
            "10.0.0.1"
        ], 
        "ansible_architecture": "x86_64", 
        "ansible_bios_date": "NA", 
        "ansible_bios_version": "NA", 
        ... etc

We can use this to display a short tabulated overview of important system information such as the FQDN, configured IPs, and disk and memory information. I wrote a quick script to do this. The result looks like this:

[fboender@jib]~/Projects/ansible$ ./hostinfo intranet
Name                     FQDN                     Datetime                    OS            Arch    Mem              Disk                 Diskfree          IPs
-----------------------  -----------------------  --------------------------  ------------  ------  ---------------  -------------------  ----------------  -------------------------------------------------------------------------
host001                  host001.example.com      2015-01-20 14:37 CET +0100  Ubuntu 12.04  x86_64  4g (free 0.16g)  80g                  40g               182.78.44.33, 10.0.0.1
host002                  host002.example.com      2015-01-20 14:37 CET +0100  Ubuntu 14.04  x86_64  2g (free 1.21g)  40g                  18g               182.78.44.34, 10.0.0.2
xxxxxx.xxxxxx.xx         xxxxxxx.example.com      2015-01-20 13:37 CET +0000  Ubuntu 10.04  x86_64  2g (free 0.04g)  241g                 20g               192.168.0.2, 10.000.0.4
xxxx.xxxxxx.xxx          xxxx.otherdom.com        2015-01-20 14:37 CET +0100  Ubuntu 13.04  x86_64  8g (free 0.14g)  292g, 1877g, 1877g   237g, 583g, 785g  192.168.1.9, 192.168.1.10, 10.0.0.6
xxxxxxxx.xxxxxx.xx       xxxx.otherdom.com        2015-01-20 14:36 CET +0100  Ubuntu 14.04  i386    6g (free 0.25g)  1860g, 1877g, 1877g  960g, 292g, 360g  10.0.0.5, 10.0.0.14, 192.168.1.12
xxxxx.xxxxx.xxx          test.otherdom.com        2015-01-20 14:37 CET +0100  Ubuntu 9.10   x86_64  2g (free 0.28g)  40g                  16g               10.0.0.15, 10.0.0.9

The script:

#!/usr/bin/python
# MIT license

import os
import sys
import shutil
import json
import tabulate

host = sys.argv[1]
tmp_dir = 'tmp_fact_col'

# Start with a clean temporary directory to collect the facts in
try:
    shutil.rmtree(tmp_dir)
except OSError:
    pass
os.mkdir(tmp_dir)

# Gather facts with the 'setup' module; -t (--tree) writes one JSON file
# per host into tmp_dir.
cmd = "ansible -t {} -m setup {} >/dev/null".format(tmp_dir, host)
os.system(cmd)

headers = [
    'Name', 'FQDN', 'Datetime', 'OS', 'Arch', 'Mem', 'Disk', 'Diskfree', 'IPs',
]
d = []

# Every file in tmp_dir is the JSON fact dump for one host
for fname in os.listdir(tmp_dir):
    path = os.path.join(tmp_dir, fname)
    j = json.load(open(path, 'r'))
    if 'failed' in j:
        continue
    d.append(
        (
            fname,
            j['ansible_facts']['ansible_fqdn'],
            "%s %s:%s %s %s" % (
                j['ansible_facts']['ansible_date_time']['date'],
                j['ansible_facts']['ansible_date_time']['hour'],
                j['ansible_facts']['ansible_date_time']['minute'],
                j['ansible_facts']['ansible_date_time']['tz'],
                j['ansible_facts']['ansible_date_time']['tz_offset'],
            ),
            "%s %s" % (
                j['ansible_facts']['ansible_distribution'],
                j['ansible_facts']['ansible_distribution_version'],
            ),
            j['ansible_facts']['ansible_architecture'],
            '%0.fg (free %0.2fg)' % (
                (j['ansible_facts']['ansible_memtotal_mb'] / 1000.0),
                (j['ansible_facts']['ansible_memfree_mb'] / 1000.0)
                ),
            ', '.join([str(i['size_total']/1048576000) + 'g' for i in j['ansible_facts']['ansible_mounts']]),
            ', '.join([str(i['size_available']/1048576000) + 'g' for i in j['ansible_facts']['ansible_mounts']]),
            ', '.join(j['ansible_facts']['ansible_all_ipv4_addresses']),
        )
    )
    os.unlink(path)
shutil.rmtree(tmp_dir)
print tabulate.tabulate(d, headers=headers)

The script requires the tabulate Python library. Put the script in the directory containing your ansible hosts file, and run it.
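
The tabulate library is available from PyPI, so installing it with pip should do the trick:

$ pip install tabulate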

 

Can't save imported OpenVPN configuration in Network Manager

I ran into an issue where I couldn't save an imported OpenVPN (.ovpn) configuration in Network Manager. The "Save" button remains disabled:

[Screenshot: the imported OpenVPN configuration in Network Manager]

It turns out I needed to enter a password for the Private Key. Of course, this particular private key doesn't have a password, but you can simply enter a single space as your password. After that, the "Save" button becomes active.

Bexec v0.8: Execute a vim buffer and capture output in split window

I released v0.8 of my Bexec vim plugin. The Bexec plugin allows the user to execute the current buffer if it contains a script with a shebang (#!/path/to/interpreter) on the first line or if the default interpreter for the script's type is known by Bexec. The output of the script will be grabbed and displayed in a separate buffer. 
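
For example, with a buffer containing a small shell script such as the one below, running :Bexec should show the script's output in a split window:

#!/bin/sh
echo "Hello from Bexec"
date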

New in this release:

  • Honor splitbelow and splitright vim settings (patch by Christopher Pease).

[Screenshot: Bexec showing a script's output in a split window]

Installation instructions:

  1. Download the Vimball
  2. Start vim with: vim bexec-v0.8.vmb
  3. In Vim, type: :source %
  4. Bexec is now installed. Type :Bexec to run it, or use <MapLeader>bx

 

 

Work around insufficient remote permissions when SCPing

Here's a problem I often run into:

  • I need to copy files from a remote system to my local system.
  • I have root access to the remote system via sudo or su, but not directly via SSH.
  • I don't have enough permissions to read the remote files as a normal user; I need to be root.
  • There isn't enough space to copy the files to a temp dir and change their ownership.

One solution is to use sudo tar remotely and output the tar file on stdout:

fboender@local$ ssh fboender@example.com "sudo tar -vczf - /root/foo" > foo.tar.gz

This relies on the remote host allowing X11 forwarding though, and you have to have an SSH askpass program installed. Half of the time, I can't get this to work properly.

An easier solution is to build a reverse remote tunnel:

fboender@local$ ssh -R 19999:localhost:22 fboender@example.com

This maps the remote port 19999 on example.com to my local port 22. That means I can now access the SSH server running locally from the remote server by SSHing to port 19999. For example:

fboender@example.com$ sudo scp -P 19999 -r /root/foo fboender@127.0.0.1:
Password: 

There you go. Easy as pie.
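
If you use this trick a lot, you could set up the reverse tunnel in your ~/.ssh/config so it's created on every login (the port number is just an example):

Host example.com
    RemoteForward 19999 localhost:22

With that in place, a plain ssh fboender@example.com automatically makes your local SSH server reachable on the remote machine's port 19999.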

Scripting a Cisco switch with Python and Expect

In the spirit of "Automate Everything" I was tasked with scripting some oft-needed tasks on Cisco switches. It's been a while since I've had to do anything even remotely related to switches, so I thought I'd start by googling for some ways to automate tasks on switches. What I found were two candidates: sw_script and Trigger.

Both seemed to be able to get the job done quite well. Unfortunately it turns out that the source for sw_script is actually nowhere to be found and Trigger wouldn't even install properly, giving me a whole plethora of compiler errors. Since I was rather time constrained, I decided to fall back to good old Expect.

Expect

Expect is a framework for automating interactive applications. Basically, it lets you feed text into a program's input and watch the program's output for specific occurrences of text, hence the name "Expect". For example, consider a program that requires the user to enter a username and password. It lets the user know this by showing prompts:

$ ftp host.local
Username: 
Password:

We can use Expect to scan the output of the program and respond with the username and password when appropriate:

spawn ftp host.local
expect "Username:"
send "fboender\r"
expect "password:"
send "sUp3rs3creT\r"

It's a wonderful tool, but error handling can be somewhat tricky, as you'll see further in this article.

Scripting a Cisco switch

There is an excellent Expect library for Python called Pexpect. Installation on Debian-derived systems is as easy as "aptitude install python-pexpect".

Here's an example session on a Cisco switch we'll automate with Expect in a bit:

$ ssh user@10.0.0.1
Password:
Switch>enable
Password:
Switch#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Switch(config)#interface Gi2/0/2 
Switch(config-if)#switchport access vlan 300
Switch(config-if)#no shutdown
Switch(config-if)#end
Switch#wr mem
Building configuration...
[OK]
Switch#quit

This is a simple manual session that changes the VLAN of switch port "Gi2/0/2" to VLAN 300. So how do we go about automating this with Pexpect?

Logging in

The first step is to log in. This is fairly easy:

import sys
import pexpect

switch_ip = "10.0.0.1"
switch_un = "user"
switch_pw = "s3cr3t"
switch_port = "Gi2/0/2"
switch_vlan = 300

child = pexpect.spawn('ssh %s@%s' % (switch_un, switch_ip))
child.logfile = sys.stdout
child.timeout = 4
child.expect('Password:')
child.sendline(switch_pw)
child.expect('>')

First we import the required modules. Then we spawn a new process: "ssh user@10.0.0.1". We set the process' logfile to sys.stdout. This is merely for debugging purposes; it tells Pexpect to show all the output it receives on our terminal. The timeout is set to 4 seconds.

Then comes the first juicy bit. We let Pexpect know that we expect to see a 'Password:' prompt. If something goes wrong, for instance if the switch at 10.0.0.1 is down, Pexpect will wait for 4 seconds, looking for the text 'Password:' in SSH's output. Of course, it won't get that prompt since the switch is down, so it will raise a pexpect.TIMEOUT exception after 4 seconds. If it does detect the 'Password:' prompt, we send the switch password and wait for the '>' prompt.

Catching errors

If we want to catch errors and show the user somewhat helpful error messages, we can use try/except clauses:

try:
  child.expect('Password:')
except pexpect.TIMEOUT:
  raise OurException("Login prompt not received")

After the password prompt, we send the password. If all goes well, we'll receive the '>' prompt. Otherwise the switch will ask for the password again. We don't "expect" this, so Pexpect will time out once again while waiting for the '>' prompt.

try:
  child.sendline(switch_pw)
  child.expect('>')
except pexpect.TIMEOUT:
  raise OurException("Login failed")

Let's jump ahead a bit and look at the next interesting problem. What if we supply the wrong port? The switch will respond like so:

Switch(config)#interface Gi2/0/2 
                         ^
% Invalid input detected at '^' marker.

If, on the other hand, our port is correct, we'll simply get a prompt:

Switch(config-if)#

So here we have two possible scenarios: something goes wrong, or it goes right. How do we detect this? We can tell Expect that we expect two different scenarios:

o = child.expect(['\(config-if\)#', '% Invalid'])
if o != 0:
  raise OurException("Unknown switch port '%s'" % (port))

The first scenario, '\(config-if\)#', is our successful one. The second occurs when an error happened. expect() returns the index of the pattern that matched, so we simply check that we got the successful one (index 0) and otherwise raise an error.

The rest of the script is just straightforward expects and sendlines.

The full script

Here's the full script:

import sys
import pexpect

class OurException(Exception):
    pass

verbose = True
switch_ip = "10.0.0.1"
switch_un = "user"
switch_pw = "s3cr3t"
switch_enable_pw = "m0r3s3cr3t"
port = "Gi2/0/2"
vlan = 300

try:
  try:
    child = pexpect.spawn('ssh %s@%s' % (switch_un, switch_ip))
    if verbose:
        child.logfile = sys.stdout
    child.timeout = 4
    child.expect('Password:')
  except pexpect.TIMEOUT:
    raise OurException("Couldn't log on to the switch")

  child.sendline(switch_pw)
  child.expect('>')
  child.sendline('terminal length 0')
  child.expect('>')
  child.sendline('enable')
  child.expect('Password:')
  child.sendline(switch_enable_pw)
  child.expect('#')
  child.sendline('conf t')
  child.expect('\(config\)#')
  child.sendline('interface %s' % (port))
  o = child.expect(['\(config-if\)#', '% Invalid'])
  if o != 0:
      raise Exception("Unknown switch port '%s'" % (port))
  child.sendline('switchport access vlan %s' % (vlan))
  child.expect('\(config-if\)#')
  child.sendline('no shutdown')
  child.expect('#')
  child.sendline('end')
  child.expect('#')
  child.sendline('wr mem')
  child.expect('\[OK\]')  # expect() takes a regex, so escape the brackets
  child.expect('#')
  child.sendline('quit')
except (pexpect.EOF, pexpect.TIMEOUT):
    sys.stderr.write("Error while trying to move the vlan on the switch.\n")
    raise

Conclusion

It's too bad that I couldn't use any of the existing frameworks. I could have tried getting Trigger to compile, but I was time-constrained so I didn't bother. There are other ways of configuring switches too. SNMP is one way, but it is complex and prone to errors. I believe it's also possible to retrieve the entire configuration from a switch, modify it and put it back. This is partly what RANCID does. However, that would require even more time.

Expect was a good fit in this case. Although it too is rather error prone, it's fairly easy to catch errors as long as you're expecting (no pun intended) them. I strongly suggest you give Trigger a try before falling back to Expect. It seems like a very decent tool.

Upload a file by command line via sftp.

If you want to upload a file by command line via SFTP, you may end up on this StackOverflow page. The answers there are WRONG. They don't use the SFTP subsystem; they use SSH and process output redirection. Using scp will result in an error if the server only allows the SFTP subsystem:

This service allows sftp connections only.

Instead, use this:

$ echo "put system.log" | sftp upload@sftp.example.com:logs/
Connected to example.com.
Changing to: /home/upload/logs/
sftp> put system.log
Uploading system.log to /home/upload/logs/system.log
system.log                100%   61     0.1KB/s   00:00    

to upload a local file "system.log" to the remote SFTP host in the logs/ directory. You can also use wildcards.
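
For example, to upload all local .log files in one go:

$ echo "put *.log" | sftp upload@sftp.example.com:logs/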

The above command generates some output you may not want. In that case you can use the -q switch and redirect output to /dev/null:

$ echo "put system.log" | sftp -q upload@sftp.example.com:logs/ > /dev/null
Connected to example.com

As you can see, this still generates a little output. To fix this, we have to use batch mode:

$ cat batch.txt 
put *.log
$ sftp -q -b batch.txt upload@sftp.example.com:logs/  > /dev/null
$

Batch mode requires the use of a passwordless private key. If you don't want to load it into a key agent (for automated scripts, etc), you can use the "-o IdentityFile" option:

$ sftp -q -b batch.txt -o IdentityFile=key.rsa upload@sftp.example.com:logs/  > /dev/null

 

 

Test a pull / merge request before accepting on Bitbucket

Git is a great tool, but its documentation leaves much to be desired at times. Bitbucket's documentation doesn't fare much better. Commands mentioned in its wiki often don't work as advertised. The fact that git's commands are often incredibly counter-intuitive, incomplete and at times simply wrong doesn't help either. So while you can get far with git by just copying random (and again, wrong more often than not) commands from Stack Overflow, there comes a time when you actually have to learn how it works.

In my case, I had a very simple request. Somebody opened a pull request from a fork of one of my projects and I simply wanted to test that change before I merged it. Bitbucket's wiki was unhelpful as the commands it listed simply didn't work. Here's how I got it to work in a way that actually makes sense.

What we'll be doing

We'll be working with two repositories here:

  • The main repository called "test", owned by user "fboender"
  • The forked repository called "test-fork", also owned by user "fboender". Normally the forked repository wouldn't be owned by the same person, but in Bitbucket you can fork your own repositories, and this makes for a good test. 

We have:

  • A pull request containing two commits on a branch called 'bugfix' on the forked 'test-fork' repository.
  • We want to review these commits, test them and either merge them into our main repository 'test' or reject them.

We'll perform the following steps:

  1. Prepare the working directory
  2. Retrieve the remote changes (commits) for the pull request to our local clone
  3. Review the changes
  4. Either reject or accept (merge) the changes
  5. Push the accepted changes (merge / pull request) back to Bitbucket.

Prepare the working directory

Make sure you have no uncommitted changes in your working dir:

$ git status
# On branch master
nothing to commit, working directory clean

If there are uncommitted changes, either commit them or get rid of them.
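
If you'd rather keep the uncommitted changes around without committing them, stashing them is one option:

$ git stash        # set local changes aside
$ git stash pop    # restore them when you're done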

Retrieve remote changes

Next we'll retrieve the remote changes introduced by the pull request. We don't want those changes to affect our local repository in any way. We just want to retrieve them so we can review the changes. To do so, we'll need to know the source from where we want to fetch the changes. This is what the pull request looks like in the Bitbucket interface:

 

[Screenshot: the pull request in the Bitbucket interface]

If you hover your mouse over the branch, you can see the URL for the source of the merge request. In this case: https://bitbucket.org/fboender/test-fork/branch/bugfix.

We'll use git's fetch command to fetch the objects (commits) in the pull request. The format for the fetch command in this case is:

git fetch <repository> <refspec>

For our pull request the repository is https://bitbucket.org/fboender/test-fork. The refspec can be almost anything really. A commit, a branch name, whatever. In this case we'll use the branch name, which is bugfix, as you can infer from the URL above.

$ git fetch https://bitbucket.org/fboender/test-fork bugfix
remote: Counting objects: 13, done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 8 (delta 2), reused 0 (delta 0)
Unpacking objects: 100% (8/8), done.
From https://bitbucket.org/fboender/test-fork
 * branch            bugfix     -> FETCH_HEAD

Git has now retrieved the commits and put them in our local object database. If you didn't know better, you wouldn't be able to find them though! The commits are there, but they're not part of any branch or anything. To get to the changes, we'll need to use the last commit id (ad38a9b) or the (temporary) FETCH_HEAD ref, which git has created for us.

Review the changes

Okay, so we want to do various reviews of the changes. First let's look at the changes between our current branch (master) and the new changes:

$ git diff master FETCH_HEAD
diff --git a/README.md b/README.md
index 08cdfb4..cc11a2f 100644
--- a/README.md
+++ b/README.md
@@ -6,4 +6,3 @@ About
 
 This is a test repository, for testing.
 
-TEST in sub
diff --git a/TEST b/TEST
index 3fb25cd..bc440ae 100644
--- a/TEST
+++ b/TEST
@@ -1 +1,2 @@
 BLA
+FOO

Looks good. Two commits that amend the TEST file and update the README.md file. Now perhaps we want to run some tests on the new changes before accepting and merging the pull request. To get to the actual changes, we can check out a refspec. In this case either the FETCH_HEAD refspec that git helpfully created for us, or we can use the last commit in the pull request: ad38a9b. We'll be using FETCH_HEAD. Remember that this refspec is temporary! If you do a new fetch (that includes git pull and various other commands), the FETCH_HEAD could have changed.
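
If you'd rather not rely on the temporary FETCH_HEAD ref, you could also give the fetched commits a stable name first by creating a local branch from them (the branch name here is just an example). Below, though, we'll check out FETCH_HEAD directly:

$ git branch pr-bugfix FETCH_HEAD
$ git checkout pr-bugfix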

$ git checkout FETCH_HEAD
Note: checking out 'FETCH_HEAD'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at ad38a9b... Updated README

Okay, seems to have worked. The git log shows the commits:

$ git log
commit ad38a9b402eac93995902560292697245418a192
Author: Ferry Boender <ferry.boender@electricmonk.nl>
Date:   Mon Mar 31 19:37:26 2014 +0200

    Updated README

commit c538608863dd9dda276edf5adcad9e0f2ef9f9ed
Author: Ferry Boender <ferry.boender@electricmonk.nl>
Date:   Mon Mar 31 19:37:11 2014 +0200

    Ammended TEST file

commit f8d3d31ea1195e2cb1c0631d95c2b33c313b60b8
Author: Ferry Boender <ferry.boender@gmail.com>
Date:   Mon Mar 31 17:36:23 2014 +0000

    Created new branch bugfix

We can now run some tests on the working directory, further inspect the changes, etc.
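
To see only the commits that the pull request would add, rather than the full history, you can limit the log to commits reachable from FETCH_HEAD but not from master:

$ git log --oneline master..FETCH_HEAD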

Reject the changes

If you're not satisfied with the changes, you can just check out the original branch:

$ git checkout master
Warning: you are leaving 3 commits behind, not connected to
any of your branches:

  ad38a9b Updated README
  c538608 Ammended TEST file
  f8d3d31 Created new branch bugfix

If you want to keep them by creating a new branch, this may be a good time
to do so with:

 git branch new_branch_name ad38a9b

Switched to branch 'master'

Note that the commits are still in your local repository. You've just chosen not to do anything with them. You can now reject the pull request in Bitbucket's interface.

Accept the changes

If you're satisfied that the changes made work properly and want to keep the changes from the pull request, you can merge them into a branch. In this case, we'll merge them directly into the master branch.

First, we reattach our HEAD to the correct branch:

$ git checkout master

Next, we merge in the changes:

$ git merge FETCH_HEAD
Updating 2f6ecbf..ad38a9b
Fast-forward
 README.md | 1 -
 TEST      | 1 +
 2 files changed, 1 insertion(+), 1 deletion(-)

Finally, we push the changes to Bitbucket. This will automatically accept the pull request on Bitbucket.

$ git push
Counting objects: 13, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (8/8), 873 bytes | 0 bytes/s, done.
Total 8 (delta 2), reused 1 (delta 1)
To git@bitbucket.org:fboender/test.git
   2f6ecbf..ad38a9b  master -> master

The changes have now been pushed to the origin repository and the pull request has been accepted automatically by Bitbucket.

That's it, we're done!

Make "changes" to the pull request

Suppose you find the pull request is largely okay, but it's missing some small things. Maybe the developer who forked didn't update the documentation to reflect the changes he made? Such a hypothetical situation would of course never happen in real life, but let's suppose it does! How do you go about fixing the pull request while still properly accepting it? Here's how:

We've fetched the remote changes and are now inspecting them in our working dir:

$ git fetch https://bitbucket.org/fboender/test-fork bugfix
...
$ git checkout FETCH_HEAD
...

Everything is almost correct, but we need to make some minor changes before accepting the merge request. We can now simply create a new commit in the detached head state:

$ echo "QUUX" >> TEST
$ git add TEST
$ git commit -m "Fixed missing ammend"
[detached HEAD ab11d9a] Fixed missing ammending
 1 file changed, 1 insertion(+)

This gives us a new commit id "ab11d9a" which we can use to merge both the commits from the pull request as well as the new commit:

$ git checkout master
Warning: you are leaving 2 commits behind, not connected to
any of your branches:

  ab11d9a Fixed missing ammending
  e761f43 Ammended TEST some more

$ git merge ab11d9a
Updating ad38a9b..ab11d9a
Fast-forward
 TEST | 2 ++
 1 file changed, 2 insertions(+)

$ git push

The merge request will now automatically be accepted and our additional commit will also be pushed to the main repository. Note that this doesn't actually make any changes to the pull request. It's just an additional commit that you're merging in at the same time as the commits from the pull request. This is useful to prevent broken code on branches.

HP Lights-Out 100i (LO100i) "Invalid username / password" when trying to connect to KVM

If you're trying to connect to the Virtual KVM (console) on an HP Lights-Out 100i (LO100i) using the Remote Console Client Java applet, you might be getting an error along the lines of:

Username / Password invalid

Or:

com.serverengines.r.rdr.EndOfStream: EndOfStream

This is a known problem with firmware version 4.24 (or earlier):

The Virtual Keyboard/Video/Mouse (KVM) will not be accessible
on HP ProLiant 100-series servers with Lights-Out 100 Base 
Management Card Firmware Version 4.24 (or earlier), if the server
has been running without interruption for 248 days (or more). When
this occurs, when attempting to access Virtual KVM/Media as shown
below, the browser will generate the following message[...]

As a solution, HP recommends:

As a workaround, shut down the server and unplug the power cable.
After a few seconds, reconnect the power cable and restart the server.

I've found that it isn't required to actually unplug the power cable. For me, remotely cold-restarting the iLoM card got rid of the problem. You can remotely cold-start the iLoM with ipmitool:

$ ipmitool -H <ILOM_IP> -U <USERNAME> mc
Password:
MC Commands:
  reset <warm|cold>
  guid
  info
  watchdog <get|reset|off>
  selftest
$ ipmitool -H <ILOM_IP> -U <USERNAME> mc reset cold
Password: 
Sent cold reset command to MC

Now we wait until the iLoM comes back up and we can successfully connect to the console via the KVM Java applet.