Archive for the ‘linux’ Category

Test a pull / merge request before accepting on Bitbucket

Git is a great tool, but its documentation leaves much to be desired at times. Bitbucket's documentation doesn't fare much better: commands mentioned in its wiki often don't work as advertised. The fact that git's commands are often incredibly counter-intuitive, incomplete and at times simply wrong doesn't help either. So while you can get far with git by just copying random (and, again, more often than not wrong) commands from Stack Overflow, there comes a time when you actually have to learn how it works.

In my case, I had a very simple request. Somebody opened a pull request from a fork of one of my projects and I simply wanted to test that change before I merged it. Bitbucket's wiki was unhelpful as the commands it listed simply didn't work. Here's how I got it to work in a way that actually makes sense.

What we'll be doing

We'll be working with two repositories here:

  • The main repository called "test", owned by user "fboender"
  • The forked repository called "test-fork", also owned by user "fboender". Normally the forked repository wouldn't be owned by the same person, but in Bitbucket you can fork your own repositories, and this makes for a good test. 

We have:

  • A pull request containing two commits on a branch called 'bugfix' on the forked 'test-fork' repository.
  • We want to review these commits, test them and either merge them into our main repository 'test' or reject them.

We'll perform the following steps:

  1. Prepare the working directory
  2. Retrieve the remote changes (commits) for the pull request to our local clone
  3. Review the changes
  4. Either reject or accept (merge) the changes
  5. Push the accepted changes (merge / pull request) back to Bitbucket.

Prepare the working directory

Make sure you have no uncommitted changes in your working directory:

$ git status
# On branch master
nothing to commit, working directory clean

If there are uncommitted changes, either commit them or get rid of them.
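If you want to keep those changes around without committing them, git stash is one way to set them aside temporarily and bring them back when you're done reviewing:

$ git stash
$ # ... review the pull request ...
$ git stash pop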

Retrieve remote changes

Next we'll retrieve the remote changes introduced by the pull request. We don't want those changes to affect our local repository in any way; we just want to retrieve them so we can review them. To do so, we'll need to know the source from which we want to fetch the changes. This is what the pull request looks like in the Bitbucket interface:

(Screenshot: the pull request in the Bitbucket interface)

If you hover your mouse over the branch, you can see the URL for the source of the merge request. In this case: https://bitbucket.org/fboender/test-fork/branch/bugfix.

We'll use git's fetch command to fetch the objects (commits) in the pull request. The format for the fetch command in this case is:

git fetch <repository> <refspec>

For our pull request the repository is https://bitbucket.org/fboender/test-fork. The refspec can be almost anything really. A commit, a branch name, whatever. In this case we'll use the branch name, which is bugfix, as you can infer from the URL above.

$ git fetch https://bitbucket.org/fboender/test-fork bugfix
remote: Counting objects: 13, done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 8 (delta 2), reused 0 (delta 0)
Unpacking objects: 100% (8/8), done.
From https://bitbucket.org/fboender/test-fork
 * branch            bugfix     -> FETCH_HEAD

Git has now retrieved the commits and put them in our local repository. If you didn't know better, though, you wouldn't be able to find them! The commits are there, but they're not part of any branch or anything. To get to the changes, we'll need to use either the last commit id (ad38a9b) or the (temporary) FETCH_HEAD ref, which git has created for us.
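For example, to list just the commits that were fetched and aren't on our master branch yet, we can ask git for the difference between master and FETCH_HEAD:

$ git log --oneline master..FETCH_HEAD
ad38a9b Updated README
c538608 Ammended TEST file
f8d3d31 Created new branch bugfix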

Review the changes

Okay, so we want to do various reviews of the changes. First let's look at the changes between our current branch (master) and the new changes:

$ git diff master FETCH_HEAD
diff --git a/README.md b/README.md
index 08cdfb4..cc11a2f 100644
--- a/README.md
+++ b/README.md
@@ -6,4 +6,3 @@ About
 
 This is a test repository, for testing.
 
-TEST in sub
diff --git a/TEST b/TEST
index 3fb25cd..bc440ae 100644
--- a/TEST
+++ b/TEST
@@ -1 +1,2 @@
 BLA
+FOO

Looks good. Two commits that amend the TEST file and update the README.md file. Now perhaps we want to run some tests on the new changes before accepting and merging the pull request. To get to the actual changes, we can check out a ref: either the FETCH_HEAD ref that git helpfully created for us, or the last commit in the pull request, ad38a9b. We'll be using FETCH_HEAD. Remember that this ref is temporary! If you do a new fetch (and that includes git pull and various other commands), FETCH_HEAD may have changed.

$ git checkout FETCH_HEAD
Note: checking out 'FETCH_HEAD'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at ad38a9b... Updated README

Okay, seems to have worked. The git log shows the commits:

$ git log
commit ad38a9b402eac93995902560292697245418a192
Author: Ferry Boender <ferry.boender@electricmonk.nl>
Date:   Mon Mar 31 19:37:26 2014 +0200

    Updated README

commit c538608863dd9dda276edf5adcad9e0f2ef9f9ed
Author: Ferry Boender <ferry.boender@electricmonk.nl>
Date:   Mon Mar 31 19:37:11 2014 +0200

    Ammended TEST file

commit f8d3d31ea1195e2cb1c0631d95c2b33c313b60b8
Author: Ferry Boender <ferry.boender@gmail.com>
Date:   Mon Mar 31 17:36:23 2014 +0000

    Created new branch bugfix

We can now run some tests on the working directory, further inspect the changes, etc.
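For instance, to inspect the individual commits rather than the combined diff, you can show them by commit id (using the ids from the log above):

$ git show c538608
$ git show ad38a9b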

Reject the changes

If you're not satisfied with the changes, you can just check out the original branch:

$ git checkout master
Warning: you are leaving 3 commits behind, not connected to
any of your branches:

  ad38a9b Updated README
  c538608 Ammended TEST file
  f8d3d31 Created new branch bugfix

If you want to keep them by creating a new branch, this may be a good time
to do so with:

 git branch new_branch_name ad38a9b

Switched to branch 'master'

Note that the commits are still in your local repository; you've just chosen not to do anything with them. You can now reject the pull request in Bitbucket's interface.

Accept the changes

If you're satisfied that the changes made work properly and want to keep the changes from the pull request, you can merge them into a branch. In this case, we'll merge them directly into the master branch.

First, we reattach our HEAD to the correct branch:

$ git checkout master

Next, we merge in the changes:

$ git merge FETCH_HEAD
Updating 2f6ecbf..ad38a9b
Fast-forward
 README.md | 1 -
 TEST      | 1 +
 2 files changed, 1 insertion(+), 1 deletion(-)

Finally, we push the changes to Bitbucket. This will automatically accept the pull request on Bitbucket.

$ git push
Counting objects: 13, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (8/8), 873 bytes | 0 bytes/s, done.
Total 8 (delta 2), reused 1 (delta 1)
To git@bitbucket.org:fboender/test.git
   2f6ecbf..ad38a9b  master -> master

The changes have now been pushed to the origin repository and the pull request has been accepted automatically by Bitbucket.

That's it, we're done!

Make "changes" to the pull request

Suppose you find the pull request is largely okay, but it's missing some small things. Maybe the developer who forked didn't update the documentation to reflect the changes he made? Such a hypothetical situation would of course never happen in real life, but let's suppose it does! How do you go about fixing the pull request while still properly accepting it? Here's how:

We've fetched the remote changes and are now inspecting them in our working dir:

$ git fetch https://bitbucket.org/fboender/test-fork bugfix
...
$ git checkout FETCH_HEAD
...

Everything is almost correct, but we need to make some minor changes before accepting the merge request. We can now simply create a new commit in the detached HEAD state:

$ echo "QUUX" >> TEST
$ git add TEST
$ git commit -m "Fixed missing ammending"
[detached HEAD ab11d9a] Fixed missing ammending
 1 file changed, 1 insertion(+)

This gives us a new commit id "ab11d9a", which we can use to merge both the commits from the pull request and the new commit:

$ git checkout master
Warning: you are leaving 2 commits behind, not connected to
any of your branches:

  ab11d9a Fixed missing ammending
  e761f43 Ammended TEST some more

$ git merge ab11d9a
Updating ad38a9b..ab11d9a
Fast-forward
 TEST | 2 ++
 1 file changed, 2 insertions(+)

$ git push

The merge request will now automatically be accepted and our additional commit will also be pushed to the main repository. Note that this doesn't actually make any changes to the pull request itself; it's just an additional commit that you're merging in at the same time as the commits from the pull request. This is useful to prevent broken code on branches.

HP Lights-Out 100i (LO100i) "Invalid username / password" when trying to connect to KVM

If you're trying to connect to the Virtual KVM (console) on an HP Lights-Out 100i (LO100i) using the Remote Console Client Java applet, you might get an error along the lines of:

Username / Password invalid

Or:

com.serverengines.r.rdr.EndOfStream: EndOfStream

This is a known problem with firmware version 4.24 (or earlier):

The Virtual Keyboard/Video/Mouse (KVM) will not be accessible
on HP ProLiant 100-series servers with Lights-Out 100 Base 
Management Card Firmware Version 4.24 (or earlier), if the server
has been running without interruption for 248 days (or more). When
this occurs, when attempting to access Virtual KVM/Media as shown
below, the browser will generate the following message[...]

As a solution, HP recommends:

As a workaround, shut down the server and unplug the power cable.
After a few seconds, reconnect the power cable and restart the server.

I've found that it isn't actually required to unplug the power cable. For me, remotely cold-resetting the iLoM card got rid of the problem. You can remotely cold-reset the iLoM with ipmitool:

$ ipmitool -H <ILOM_IP> -U <USERNAME> mc
Password:
MC Commands:
  reset <warm|cold>
  guid
  info
  watchdog <get|reset|off>
  selftest
$ ipmitool -H <ILOM_IP> -U <USERNAME> mc reset cold
Password: 
Sent cold reset command to MC

Now we wait until the iLoM comes back up and we can successfully connect to the console via the KVM Java applet.
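If you don't feel like manually checking when the iLoM is reachable again, a simple shell loop can poll it for you. This is just a convenience sketch (note that it puts the password on the command line):

$ until ipmitool -H <ILOM_IP> -U <USERNAME> -P <PASSWORD> mc info >/dev/null 2>&1; do sleep 10; done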

Quick-n-dirty HAR (HTTP Archive) viewer

HAR, HTTP Archive, is a JSON-encoded dump of a list of requests and their associated headers, bodies, etc. Here's a partial example containing a single request:

{
  "startedDateTime": "2013-09-16T18:02:04.741Z",
  "time": 51,
  "request": {
    "method": "GET",
    "url": "http://electricmonk.nl/",
    "httpVersion": "HTTP/1.1",
    "headers": [],
    "queryString": [],
    "cookies": [],
    "headersSize": 38,
    "bodySize": 0
  },
  "response": {
    "status": 301,
    "statusText": "Moved Permanently",
    "httpVersion": "HTTP/1.1",
    "headers": [],
    "cookies": [],
    "content": {
      "size": 0,
      "mimeType": "text/html"
    },
    "redirectURL": "",
    "headersSize": 32,
    "bodySize": 0
  },
  "cache": {},
  "timings": {
    "blocked": 0,
  }
},

HAR files can be exported from Chrome's Network analyser developer tool (Ctrl-Shift-I → Network tab → capture some requests → right-click and select "Save as HAR with contents"). Additional tip: check the "Preserve Log on Navigation" option (which looks like a recording button) to capture multi-level redirects and such.

As human-readable as JSON is, it's still difficult to get a good overview of the requests. So I wrote a quick Python script that turns the JSON into something that's a little easier on our poor sysadmin's eyes:

(Screenshot: harview output)

It supports colored output, dumping of request and response headers, and dumping of POST and response bodies (although this will be very slow). You can filter out uninteresting requests such as images or CSS/JS with the --filter-X options.
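If you just want a quick peek at a HAR file without installing anything, a one-liner already gets you a rough overview. This is only a sketch (not the actual harview script); it assumes the standard HAR layout where the requests live under log.entries, and requests.har is just an example filename:

$ python3 -c 'import json,sys; [print(e["request"]["method"], e["response"]["status"], e["request"]["url"]) for e in json.load(open(sys.argv[1]))["log"]["entries"]]' requests.har
GET 301 http://electricmonk.nl/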

You can get it by cloning the Git repository from Bitbucket.

Cheers!

bbcloner: create mirrors of your public and private Bitbucket Git repositories

I wrote a small tool that assists in creating mirrors of your public and private Bitbucket Git repositories and wikis. It also synchronizes already existing mirrors. Initial mirror setup requires that you manually enter your username/password. Subsequent synchronization of mirrors is done using Deployment Keys.

You can download a tar.gz, a Debian/Ubuntu package or clone it from the Bitbucket page.

Features

  • Clone / mirror / backup public and private repositories and wikis.
  • No need to store your username and password to update clones.
  • Exclude repositories.
  • No need to run an SSH agent. Uses passwordless private Deployment Keys (which have no write access to your repositories).

Usage

Here's how it works in short. Generate a passwordless SSH key:

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key: /home/fboender/.ssh/bbcloner_rsa<ENTER>
Enter passphrase (empty for no passphrase):<ENTER>
Enter same passphrase again: <ENTER>

You should add the generated public key to your repositories as a Deployment Key. The first time you use bbcloner, or whenever you've added new public or private repositories, you have to specify your username/password. BBcloner will retrieve a list of your repositories and create mirrors for any new repositories not yet mirrored:

$ bbcloner -n -u fboender /home/fboender/gitclones/
Password: 
Cloning new repositories
Cloning project_a
Cloning project_a wiki
Cloning project_b

Now you can update the mirrors without using a username/password:

$ bbcloner /home/fboender/gitclones/
Updating existing mirrors
Updating /home/fboender/gitclones/project_a.git
Updating /home/fboender/gitclones/project_a-wiki.git
Updating /home/fboender/gitclones/project_b.git

You can run the above from a cronjob. Specify the -s argument to prevent bbcloner from showing normal output.
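For example, a nightly update from cron could look something like this (the installation path is just an example):

# Update the Bitbucket mirrors every night at 03:15
15 3 * * * /usr/local/bin/bbcloner -s /home/fboender/gitclones/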

The mirrors are full remote git repositories, which means you can clone them:

$ git clone /home/fboender/gitclones/project_a.git/
Cloning into project_a...
done.

Don't push changes to it, or the mirror won't be able to sync. Instead, point the remote origin to your Bitbucket repository:

$ git remote rm origin
$ git remote add origin git@bitbucket.org:fboender/project_a.git
$ git push
remote: bb/acl: fboender is allowed. accepted payload.

Get it

You can get bbcloner as a tar.gz download, as a Debian/Ubuntu package, or by cloning the Git repository from the Bitbucket page.

More information

For more information, please see the Bitbucket repository.

Setting I/O priorities on Linux

All of us system admins know about nice, which lets you set the CPU priority of a process. Every now and then I need to run an I/O-heavy process. This inevitably makes the system intermittently unresponsive or slow, which can be a real annoyance.

If your system is slow to respond, you can check whether I/O is the problem (which it usually will be) using a program called iotop, which is similar to the normal top program except that it shows disk reads/writes instead of CPU/memory usage. You may need to install it first:

# aptitude install iotop

The output looks like this:

Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND                                       
12404 be/4 fboender  124.52 K/s  124.52 K/s  0.00 % 99.99 % cp winxp.dev.local.vdi /home/fboender
    1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % init
    2 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
    3 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/0]

As you can see, the copy process with PID 12404 is taking up 99.99% of my I/O, leaving little for the rest of the system.

Recent Linux kernels (2.6.13 and later, with the CFQ I/O scheduler) have an option to renice the I/O priority of a process. The ionice tool allows you to renice processes from userland. It comes pre-installed on Debian/Ubuntu machines in the util-linux package. To use it, you must specify a priority scheduling class using the -c option.

  • -c0 is an old, deprecated value of "None", which is now the same as Best-Effort (-c2).
  • -c1 is Real Time priority, which gives the process the highest I/O priority.
  • -c2 is Best-Effort priority, which puts the process in a round-robin queue where it gets a slice of I/O every so often. How much it gets can be specified using the -n option, which takes a value from 0 (highest) to 7 (lowest).
  • -c3 is Idle, which means the process will only get I/O when no other process requires it.

For example, I want a certain process (PID 12404) to only use I/O when no other process requires it, because the task is I/O-heavy, but is not high priority:

# ionice -c3 -p 12404

The effects are noticeable immediately: my system responds faster, and there is less jitter on the desktop and the command line.
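If you know in advance that a job is going to be I/O heavy, you can also start it under ionice right away instead of renicing it afterwards. Using the copy from the example above:

$ ionice -c3 cp winxp.dev.local.vdi /home/fboender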

Nice.

Persistent undo history in Vim

Once you quit Vim, the undo history for that file is gone. This sometimes gives me problems if I accidentally made a change in a file without noticing it, usually due to a stray Vim command which, for instance, capitalized a single letter.

There's an option which allows you to make the undo history persistent between Vim sessions. That means you can still undo changes you made in a file, even if you've quit Vim in the meantime.

You can add the following to your .vimrc to enable it:

set undofile   " Maintain undo history between sessions

This will create undo files all over the place, which look like this:

-rw-r--r-- 1 fboender fboender 320 2012-07-26 10:23 bad_gateway.txt
-rw-r--r-- 1 fboender fboender 523 2012-07-24 14:51 .bad_gateway.txt.un~

You can remedy this by including the following option in your configuration:

set undodir=~/.vim/undodir

Make sure to create the undodir:

$ mkdir ~/.vim/undodir

The undo files will now be saved in the undodir:

$ ls -la .vim/undodir/
total 12
drwxr-xr-x  2 fboender fboender 4096 2012-07-26 10:32 .
drwxr-xr-x 12 fboender fboender 4096 2012-07-26 10:24 ..
-rw-r--r--  1 fboender fboender  519 2012-07-26 10:32 %home%fboender%bad_gateway.txt

Conque: Terminal emulators in Vim buffers

For the longest time, I've searched for a way to run terminal emulators in Vim buffers.

As a kind of work-around, I created Bexec, which allows you to run the current contents of a buffer through an external program. It then captures the output and inserts/appends it to another buffer.

Although Bexec works reasonably well, and still has its uses, it's not a true terminal emulator in Vim. Today I finally found a Vim plugin that lets you actually run interactive commands / terminals in Vim buffers: Conque.

It requires Vim with Python support built in. Installation is straight-forward if you've got the requirements.

Download the .vmb file, edit it in vim, and issue:

:so %

It will then be installed. Quit vim, restart it, and you can start using it:

:ConqueTerm bash

Very awesome.

Re-use existing SSH agent (cygwin et al)

(Please note that this post is not specific to Windows or Cygwin; it'll work on a remote unix machine just as well.)

On my netbook, I use Windows XP in combination with Cygwin (a unix environment for Windows) and Mintty for my Unixy needs. From there, I usually SSH to some unix-like machine somewhere, so I can do systems administration or development.

Unfortunately, the default use of an SSH agent under Cygwin is difficult, since there's no parent process that can run it and put the required information (SSH_AUTH_SOCK) in the environment. On most Linux distributions, the SSH agent is started after you log in to an X11 session, so that every child process (terminals you open, etc.) inherits the SSH_AUTH_SOCK environment setting and SSH can contact the ssh-agent to get your keys. The result? You have to start a new SSH agent, load your key and enter your password for each Mintty terminal you open. Quite annoying.

The upside is, it's not very hard to configure your system properly so that you need only one SSH agent running on your system, and thus only have to enter your password once.

The key lies in how ssh-agent creates the environment. When we start ssh-agent in the traditional manner, we do:

$ eval `ssh-agent`
Agent pid 1784

The command starts the SSH agent and sets a bunch of environment variables:

$ set | grep SSH_
SSH_AGENT_PID=1784
SSH_AUTH_SOCK=/tmp/ssh-QzfPveH696/agent.696

The SSH_AUTH_SOCK is how the ssh command knows how to contact the agent. As you can see, the socket filename is generated randomly. That means you can't reuse the socket, since you can't guess the socket filename.

Good thing ssh-agent allows us to specify the socket filename, so we can easily re-use it.

Put the following in your ~/.bashrc:

# If no SSH agent is already running, start one now. Re-use sockets so we never
# have to start more than one session.

export SSH_AUTH_SOCK=/home/fboender/.ssh-socket

# ssh-add exits with status 2 if it can't contact an agent
ssh-add -l >/dev/null 2>&1
if [ $? = 2 ]; then
   # No ssh-agent running
   rm -rf $SSH_AUTH_SOCK
   # >| allows output redirection to over-write files if no clobber is set
   ssh-agent -a $SSH_AUTH_SOCK >| /tmp/.ssh-script
   source /tmp/.ssh-script
   echo $SSH_AGENT_PID >| ~/.ssh-agent-pid
   rm /tmp/.ssh-script
fi

The script above sets the socket filename manually to /home/yourusername/.ssh-socket. It then runs ssh-add, which will attempt to connect to the ssh-agent through that socket. If that fails, no ssh-agent is running, so we do some cleanup and start one.

Now, all you have to do is start a single terminal, and load your keys once:

$ ssh-add ~/.ssh/fboender\@electricmonk.rsa
Enter passphrase for .ssh/fboender@electricmonk.rsa: [PASSWORD]
Identity added: .ssh/fboender@electricmonk.rsa (.ssh/fboender@electricmonk.rsa)

Now you can start as many new terminals as you'd like, and they'll all use the same ssh-agent, never requiring you to enter your password for that key more than once per boot.
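You can verify that a new terminal is indeed talking to the same agent by listing the loaded keys; it should show the key you added earlier without asking for a passphrase:

$ ssh-add -l
2048 ... .ssh/fboender@electricmonk.rsa (RSA)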

Update:

I've updated the script with suggestions from Anthony Geoghegan. It now also works if noclobber is set.

monit alerts not emailed: Resource temporarily unavailable

While setting up Monit, a tool for easy monitoring of hosts and services, I ran into a problem. I had configured Monit to email alerts to my email address, using my personal mail server (IP/email addresses obfuscated to protect the innocence of my email inbox):

set mailserver 211.211.211.0
set alert ferry.boender@example.com
set httpd port 2812
  allow 0.0.0.0/0.0.0.0

check file test path /tmp/monittest
  if failed permission 644 then alert

After starting it with monit -c ./monitrc, I could reach the webserver at port 2812. I also saw that the check was failing:

# monit -c ./monitrc status
File 'test'
  status                            Permission failed
  monitoring status                 monitored
  permission                        600

However, it was not sending me any emails with startup notifications or status error reports. My server's mail log showed no incoming mail, making it seem like Monit wasn't even trying to send email. Turning on Monit's logging feature, I noticed:

# monit -c ./monitrc quit
# monit -c ./monitrc -l ./monit.log
# tail monit.log
error : 'test' permission test failed for /tmp/monittest -- current permission is 0600
error : Sendmail: error receiving data from the mailserver '211.211.211.0' -- Resource temporarily unavailable

I tried a manual connection to the mail server from the host where Monit was running, and it worked just fine.

The problem turned out to be a connection timeout to the mail server. Most mail servers nowadays wait a certain number of seconds before accepting connections. This reduces the rate at which spam can be delivered. Monit wasn't waiting long enough before determining that the mail server wasn't working, and bailed out of reporting errors with a 'Resource temporarily unavailable'.

The solution is easy. The set mailserver configuration allows you to specify a timeout:

set mailserver 211.211.211.0 with timeout 30 seconds

I'm happy to report that Monit is now sending email alerts just fine.

The user isn't always wrong

Some time ago, my mother bought a new laptop. It came preinstalled with Windows Vista, which proved to be quite the disaster. The laptop was nowhere near fast enough to run it, so I installed Ubuntu on it. This allowed my mom to do everything she needed to do with the laptop, while at the same time making it easy for me to administer the beast.

One day my mom phoned me, and explained about a problem she was having:

"Whenever I move the laptop into the kitchen, it stops working!"

Now my mom is no computer expert, but she picked up Ubuntu quickly and has never needed much hand-holding when it comes to using the laptop. This one, however, sounded to me like one of those situations where the user couldn't possibly be correct. We went through the basic telephone support routine, but she persisted in her observation that somehow the kitchen was responsible for her laptop misery.

Eventually, after deciding the problem couldn't be fixed over the phone, I agreed to come over to my parents' house the next evening to take a look at it. With my general moody "a family member's PC needs fixing" attitude and a healthy dose of skepticism ("this is going to be one of those typical the-cable-isn't-plugged-in problems"), I arrived at my parents'.

"Okay, let's see if we can't fix this problem", I said, as I powered up the laptop upstairs. Everything worked fine. Picking up the laptop, I moved it downstairs into the living room. No problems whatsoever. Next, the kitchen. And lo and behold:

The laptop crashed almost immediately.

"Coincidence", I thought, and tried it again. And again, as soon as I entered the kitchen, the laptop crashed. I… was… Stunned! I had never encountered a problem like this before. What could possibly make it behave like that?

After pondering this strange problem for a while, I thought "what's the only location-dependent thing in a laptop?", and it dawned on me that it might just be related to the WiFi. I powered up the laptop once again in the living room, completely turned off the WiFi by rmmod-ing the relevant kernel modules, and entered the kitchen. No crash. It kept on working perfectly. Until I turned on the WiFi again.

With the aid of some log files (which I should have checked in the first place, I admit), I quickly found the culprit. The very last thing I saw in the log files just before the computer crashed… an attempt to discover the neighbors' WiFi! A wonky WiFi router in combination with buggy drivers caused the laptop to crash, but only when it came in range of said WiFi router. And that happened only in the kitchen!

In the end I disabled automatic WiFi discovery on the laptop, since my mom didn't really take it out of the house anyway, and the problems disappeared. I never encountered a problem like that again, but I did learn one thing:

No matter how impossible the problem may seem… The user isn't always wrong.