Archive for the ‘linux’ Category

Scripting a Cisco switch with Python and Expect

In the spirit of "Automate Everything" I was tasked with scripting some oft-needed tasks on Cisco switches. It's been a while since I've had to do anything even remotely related to switches, so I thought I'd start by googling for ways to automate tasks on them. What I found were two candidates: sw_script and Trigger.

Both seemed to be able to get the job done quite well. Unfortunately, it turned out that the source for sw_script is nowhere to be found, and Trigger wouldn't even install properly, giving me a whole plethora of compiler errors. Since I was rather time-constrained, I decided to fall back to good old Expect.

Expect

Expect is a framework to automate interactive applications. It lets you feed text into a program's input and watch the program's output for specific occurrences of text, hence the name "Expect". For example, consider a program that requires the user to enter a username and password. It lets the user know this by giving us prompts:

$ ftp host.local
Username: 
Password:

We can use Expect to scan the output of the program and respond with the username and password when appropriate:

spawn ftp host.local
expect "Username:"
send "fboender\r"
expect "password:"
send "sUp3rs3creT\r"

It's a wonderful tool, but error handling can be somewhat tricky, as you'll see later in this article.

Scripting a Cisco switch

There is an excellent Expect library for Python called Pexpect. Installation on Debian-derived systems is as easy as "aptitude install python-pexpect".
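
To check that it's available (assuming the Python 2 package, which is what Debian ships):

$ python -c 'import pexpect; print pexpect.__version__'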

Here's an example session on a Cisco switch we'll automate with Expect in a bit:

$ ssh user@10.0.0.1
Password:
Switch>enable
Password:
Switch#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Switch(config)#interface Gi2/0/2 
Switch(config-if)#switchport access vlan 300
Switch(config-if)#no shutdown
Switch(config-if)#end
Switch#wr mem
Building configuration...
[OK]
Switch#quit

This is a simple manual session that changes the VLAN of switch port "Gi2/0/2" to VLAN 300. So how do we go about automating this with Pexpect?

Logging in

The first step is to log in. This is fairly easy:

import sys
import pexpect

switch_ip = "10.0.0.1"
switch_un = "user"
switch_pw = "s3cr3t"
switch_port = "Gi2/0/2"
switch_vlan = 300

child = pexpect.spawn('ssh %s@%s' % (switch_un, switch_ip))
child.logfile = sys.stdout
child.timeout = 4
child.expect('Password:')
child.sendline(switch_pw)
child.expect('>')

First we import the sys and pexpect modules. Then we spawn a new process "ssh user@10.0.0.1". We set the process' logfile to sys.stdout. This is merely for debugging purposes: it tells Pexpect to show all the output it's receiving on our terminal. We also lower the timeout from the default 30 seconds to 4 seconds.

Then comes the first juicy bit: we let Pexpect know that we expect to see a 'Password:' prompt. If something goes wrong, for instance if the switch at 10.0.0.1 is down, Pexpect will spend 4 seconds looking for the text 'Password:' in SSH's output. Since the switch is down, the prompt never appears, and Pexpect raises a pexpect.TIMEOUT exception after those 4 seconds. If it does detect the 'Password:' prompt, we send the switch password and wait until we detect the '>' prompt.

Catching errors

If we want to catch errors and show the user somewhat helpful error messages, we can use try/except clauses:

try:
  child.expect('Password:')
except pexpect.TIMEOUT:
  raise OurException("Login prompt not received")

After the password prompt, we send the password. If all goes well, we'll receive the '>' prompt. Otherwise the switch will ask for the password again. We don't "expect" this, so Pexpect will time out once again while waiting for the '>' prompt.

try:
  child.sendline(switch_pw)
  child.expect('>')
except pexpect.TIMEOUT:
  raise OurException("Login failed")

Let's jump ahead a bit and look at the next interesting problem. What if we supply the wrong port? The switch will respond like so:

Switch(config)#interface Gi2/0/2 
                         ^
% Invalid input detected at '^' marker.

If, on the other hand, our port is correct, we'll simply get a prompt:

Switch(config-if)#

So here we have two possible scenarios: something goes wrong, or it goes right. How do we detect which one happened? We can tell Pexpect that we expect two different scenarios:

o = child.expect(['\(config-if\)#', '% Invalid'])
if o != 0:
  raise OurException("Unknown switch port '%s'" % (port))

The first pattern, '\(config-if\)#', is our successful scenario; the second matches the error message. We then simply check that we got the successful one, and raise an error otherwise.
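
Pexpect can even match on the timeout itself: if you include pexpect.TIMEOUT in the pattern list, a timeout is returned as a normal match instead of raising an exception. A variant of the check above:

o = child.expect([r'\(config-if\)#', '% Invalid', pexpect.TIMEOUT])
if o == 1:
    raise OurException("Unknown switch port '%s'" % (port))
elif o == 2:
    raise OurException("Timeout waiting for interface prompt")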

The rest of the script is just straightforward expects and sendlines.

The full script

Here's the full script:

import sys
import pexpect

class OurException(Exception):
    pass

verbose = True  # Set to False to hide the session output

switch_ip = "10.0.0.1"
switch_un = "user"
switch_pw = "s3cr3t"
switch_enable_pw = "m0r3s3cr3t"
port = "Gi2/0/2"
vlan = 300

try:
    try:
        child = pexpect.spawn('ssh %s@%s' % (switch_un, switch_ip))
        if verbose:
            child.logfile = sys.stdout
        child.timeout = 4
        child.expect('Password:')
    except pexpect.TIMEOUT:
        raise OurException("Couldn't log on to the switch")

    child.sendline(switch_pw)
    child.expect('>')
    child.sendline('terminal length 0')
    child.expect('>')
    child.sendline('enable')
    child.expect('Password:')
    child.sendline(switch_enable_pw)
    child.expect('#')
    child.sendline('conf t')
    child.expect(r'\(config\)#')
    child.sendline('interface %s' % (port))
    o = child.expect([r'\(config-if\)#', '% Invalid'])
    if o != 0:
        raise OurException("Unknown switch port '%s'" % (port))
    child.sendline('switchport access vlan %s' % (vlan))
    child.expect(r'\(config-if\)#')
    child.sendline('no shutdown')
    child.expect('#')
    child.sendline('end')
    child.expect('#')
    child.sendline('wr mem')
    child.expect(r'\[OK\]')  # Escape the brackets: '[OK]' would be a regex character class
    child.expect('#')
    child.sendline('quit')
except (pexpect.EOF, pexpect.TIMEOUT):
    sys.stderr.write("Error while trying to move the vlan on the switch.\n")
    raise

Conclusion

It's too bad that I couldn't use any of the existing frameworks. I could have tried getting Trigger to compile, but I was time-constrained, so I didn't bother. There are other ways of configuring switches, too. SNMP is one, but it is complex and prone to errors. I believe it's also possible to retrieve the entire configuration from a switch, modify it and put it back, which is partly what Rancid does. However, that would have required even more time.

Expect was a good fit in this case. Although it too is rather error prone, it's fairly easy to catch errors as long as you're expecting (no pun intended) them. I strongly suggest you give Trigger a try before falling back to Expect. It seems like a very decent tool.

Upload a file by command line via SFTP

If you want to upload a file by command line via SFTP, you may end up on this StackOverflow page. The answer there is WRONG: those commands don't use the SFTP subsystem, they use SSH and process output redirection. Using scp will result in an error if the server only allows the SFTP subsystem:

This service allows sftp connections only.

Instead, use this:

$ echo "put system.log" | sftp upload@sftp.example.com:logs/
Connected to example.com.
Changing to: /home/upload/logs/
sftp> put system.log
Uploading system.log to /home/upload/logs/system.log
system.log                100%   61     0.1KB/s   00:00    

to upload a local file "system.log" to the remote SFTP host in the logs/ directory. You can also use wildcards.
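
Wildcards work the same way. For example, to upload all local log files in one go:

$ echo "put *.log" | sftp upload@sftp.example.com:logs/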

The above command generates some output you may not want. In that case you can use the -q switch and redirect output to /dev/null:

$ echo "put system.log" | sftp -q upload@sftp.example.com:logs/ > /dev/null
Connected to example.com

As you can see, this still generates a little output. To fix this, we have to use batch mode:

$ cat batch.txt 
put *.log
$ sftp -q -b batch.txt upload@sftp.example.com:logs/  > /dev/null
$

Batch mode requires the use of a passwordless private key. If you don't want to load it into a key agent (for automated scripts, etc), you can use the "-o IdentityFile" option:

$ sftp -q -b batch.txt -o IdentityFile=key.rsa upload@sftp.example.com:logs/  > /dev/null
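
If you're driving this from Python anyway (like the switch script earlier in this post), a minimal sketch using the standard library could look like this; the host, key and batch file are just the example names from above:

import subprocess

# Run sftp in batch mode; check_call raises CalledProcessError on failure.
subprocess.check_call([
    'sftp', '-q', '-b', 'batch.txt',
    '-o', 'IdentityFile=key.rsa',
    'upload@sftp.example.com:logs/',
])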


Test a pull / merge request before accepting on Bitbucket

Git is a great tool, but its documentation leaves much to be desired at times. Bitbucket's documentation doesn't fare much better: commands mentioned in its wiki often don't work as advertised. The fact that git's commands are often incredibly counter-intuitive, incomplete and at times simply wrong doesn't help either. So while you can get far with git by just copying random (and again, wrong more often than not) commands from Stack Overflow, there comes a time when you actually have to learn how it works.

In my case, I had a very simple request. Somebody opened a pull request from a fork of one of my projects and I simply wanted to test that change before I merged it. Bitbucket's wiki was unhelpful as the commands it listed simply didn't work. Here's how I got it to work in a way that actually makes sense.

What we'll be doing

We'll be working with two repositories here:

  • The main repository called "test", owned by user "fboender"
  • The forked repository called "test-fork", also owned by user "fboender". Normally the forked repository wouldn't be owned by the same person, but in Bitbucket you can fork your own repositories, and this makes for a good test. 

We have:

  • A pull request containing two commits on a branch called 'bugfix' on the forked 'test-fork' repository.
  • We want to review these commits, test them and either merge them into our main repository 'test' or reject them.

We'll perform the following steps:

  1. Prepare the working directory
  2. Retrieve the remote changes (commits) for the pull request to our local clone
  3. Review the changes
  4. Either reject or accept (merge) the changes
  5. Push the accepted changes (merge / pull request) back to Bitbucket.

Prepare the working directory

Make sure you have no uncommitted changes in your working directory:

$ git status
# On branch master
nothing to commit, working directory clean

If there are uncommitted changes, either commit them or get rid of them.

Retrieve remote changes

Next we'll retrieve the remote changes introduced by the pull request. We don't want those changes to affect our local repository in any way; we just want to retrieve them so we can review them. To do so, we'll need to know the source from which we want to fetch the changes. This is what the pull request looks like in the Bitbucket interface:


[Image: Pull request]

If you hover your mouse over the branch, you can see the URL for the source of the merge request. In this case: https://bitbucket.org/fboender/test-fork/branch/bugfix.

We'll use git's fetch command to fetch the objects (commits) in the pull request. The format for the fetch command in this case is:

git fetch <repository> <refspec>

For our pull request the repository is https://bitbucket.org/fboender/test-fork. The refspec can be almost anything really. A commit, a branch name, whatever. In this case we'll use the branch name, which is bugfix, as you can infer from the URL above.

$ git fetch https://bitbucket.org/fboender/test-fork bugfix
remote: Counting objects: 13, done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 8 (delta 2), reused 0 (delta 0)
Unpacking objects: 100% (8/8), done.
From https://bitbucket.org/fboender/test-fork
 * branch            bugfix     -> FETCH_HEAD

Git has now retrieved the commits and put them in our local repository. If you didn't know better, though, you wouldn't be able to find them! The commits are there, but they're not part of any branch or anything. To get to the changes, we'll need to use the last commit id (ad38a9b) or the (temporary) FETCH_HEAD ref which git has created for us.
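
If you first want to see what you've fetched without checking anything out, you can point git's normal inspection commands at FETCH_HEAD. For example:

$ git log --oneline master..FETCH_HEAD
$ git show FETCH_HEAD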

Review the changes

Okay, so we want to do various reviews of the changes. First let's look at the changes between our current branch (master) and the new changes:

$ git diff master FETCH_HEAD
diff --git a/README.md b/README.md
index 08cdfb4..cc11a2f 100644
--- a/README.md
+++ b/README.md
@@ -6,4 +6,3 @@ About
 
 This is a test repository, for testing.
 
-TEST in sub
diff --git a/TEST b/TEST
index 3fb25cd..bc440ae 100644
--- a/TEST
+++ b/TEST
@@ -1 +1,2 @@
 BLA
+FOO

Looks good: two commits that amend the TEST file and update the README.md file. Now perhaps we want to run some tests on the new changes before accepting and merging the pull request. To get to the actual changes, we can check out a refspec: either the FETCH_HEAD ref that git helpfully created for us, or the last commit in the pull request, ad38a9b. We'll be using FETCH_HEAD. Remember that this ref is temporary! If you do a new fetch (and that includes git pull and various other commands), FETCH_HEAD may have changed.

$ git checkout FETCH_HEAD
Note: checking out 'FETCH_HEAD'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at ad38a9b... Updated README

Okay, seems to have worked. The git log shows the commits:

$ git log
commit ad38a9b402eac93995902560292697245418a192
Author: Ferry Boender <ferry.boender@electricmonk.nl>
Date:   Mon Mar 31 19:37:26 2014 +0200

    Updated README

commit c538608863dd9dda276edf5adcad9e0f2ef9f9ed
Author: Ferry Boender <ferry.boender@electricmonk.nl>
Date:   Mon Mar 31 19:37:11 2014 +0200

    Ammended TEST file

commit f8d3d31ea1195e2cb1c0631d95c2b33c313b60b8
Author: Ferry Boender <ferry.boender@gmail.com>
Date:   Mon Mar 31 17:36:23 2014 +0000

    Created new branch bugfix

We can now run some tests on the working directory, further inspect the changes, etc.

Reject the changes

If you're not satisfied with the changes, you can just check out the original branch:

$ git checkout master
Warning: you are leaving 3 commits behind, not connected to
any of your branches:

  ad38a9b Updated README
  c538608 Ammended TEST file
  f8d3d31 Created new branch bugfix

If you want to keep them by creating a new branch, this may be a good time
to do so with:

 git branch new_branch_name ad38a9b

Switched to branch 'master'

Note that the commits are still in your local repository; you've just chosen not to do anything with them. You can now reject the pull request in Bitbucket's interface.

Accept the changes

If you're satisfied that the changes made work properly and want to keep the changes from the pull request, you can merge them into a branch. In this case, we'll merge them directly into the master branch.

First, we reattach our HEAD to the correct branch:

$ git checkout master

Next, we merge in the changes:

$ git merge FETCH_HEAD
Updating 2f6ecbf..ad38a9b
Fast-forward
 README.md | 1 -
 TEST      | 1 +
 2 files changed, 1 insertion(+), 1 deletion(-)

Finally, we push the changes to Bitbucket. This will automatically accept the pull request on Bitbucket.

$ git push
Counting objects: 13, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (8/8), 873 bytes | 0 bytes/s, done.
Total 8 (delta 2), reused 1 (delta 1)
To git@bitbucket.org:fboender/test.git
   2f6ecbf..ad38a9b  master -> master

The changes have now been pushed to the origin repository and the pull request has been accepted automatically by Bitbucket.

That's it, we're done!

Make "changes" to the pull request

Suppose you find the pull request is largely okay, but it's missing some small things. Maybe the developer who forked didn't update the documentation to reflect the changes he made? Such a hypothetical situation would of course never happen in real life, but let's suppose it does! How do you go about fixing the pull request while still properly accepting it? Here's how.

We've fetched the remote changes and are now inspecting them in our working dir:

$ git fetch https://bitbucket.org/fboender/test-fork bugfix
...
$ git checkout FETCH_HEAD
...

Everything is almost correct, but we need to make some minor changes before accepting the merge request. We can now simply create a new commit in the detached head state:

$ echo "QUUX" >> TEST
$ git add TEST
$ git commit -m "Fixed missing ammend"
[detached HEAD ab11d9a] Fixed missing ammending
 1 file changed, 1 insertion(+)

This gives us a new commit id "ab11d9a", which we can use to merge both the commits from the pull request and the new commit:

$ git checkout master
Warning: you are leaving 2 commits behind, not connected to
any of your branches:

  ab11d9a Fixed missing ammending
  e761f43 Ammended TEST some more

$ git merge ab11d9a
Updating ad38a9b..ab11d9a
Fast-forward
 TEST | 2 ++
 1 file changed, 2 insertions(+)

$ git push

The merge request will now automatically be accepted and our additional commit will also be pushed to the main repository. Note that this doesn't actually make any changes to the pull request; it's just an additional commit that you're merging in at the same time as the commits from the pull request. This is useful to prevent broken code on branches.

HP Lights-Out 100i (LO100i) "Invalid username / password" when trying to connect to KVM

If you're trying to connect to the Virtual KVM (console) on an HP Lights-Out 100i (LO100i) using the Remote Console Client Java applet, you might be getting an error along the lines of:

Username / Password invalid

Or:

com.serverengines.r.rdr.EndOfStream: EndOfStream

This is a known problem with firmware version 4.24 (or earlier):

The Virtual Keyboard/Video/Mouse (KVM) will not be accessible
on HP ProLiant 100-series servers with Lights-Out 100 Base 
Management Card Firmware Version 4.24 (or earlier), if the server
has been running without interruption for 248 days (or more). When
this occurs, when attempting to access Virtual KVM/Media as shown
below, the browser will generate the following message[...]

As a solution, HP recommends:

As a workaround, shut down the server and unplug the power cable.
After a few seconds, reconnect the power cable and restart the server.

I've found that it isn't required to actually unplug the power cable. For me, remotely cold-restarting the iLoM card got rid of the problem. You can remotely cold-restart the iLoM with ipmitool:

$ ipmitool -H <ILOM_IP> -U <USERNAME> mc
Password:
MC Commands:
  reset <warm|cold>
  guid
  info
  watchdog <get|reset|off>
  selftest
$ ipmitool -H <ILOM_IP> -U <USERNAME> mc reset cold
Password: 
Sent cold reset command to MC

Now we wait until the iLoM comes back up and we can successfully connect to the console via the KVM Java applet.
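
If you want to script that wait, a simple poll loop will do; a sketch (the -P option passes the password non-interactively, so treat it with care):

$ while ! ipmitool -H <ILOM_IP> -U <USERNAME> -P <PASSWORD> mc info >/dev/null 2>&1; do
>     sleep 10
> done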

Quick-n-dirty HAR (HTTP Archive) viewer

HAR, HTTP Archive, is a JSON-encoded dump of a list of requests and their associated headers, bodies, etc. Here's a partial example containing a single request:

{
  "startedDateTime": "2013-09-16T18:02:04.741Z",
  "time": 51,
  "request": {
    "method": "GET",
    "url": "http://electricmonk.nl/",
    "httpVersion": "HTTP/1.1",
    "headers": [],
    "queryString": [],
    "cookies": [],
    "headersSize": 38,
    "bodySize": 0
  },
  "response": {
    "status": 301,
    "statusText": "Moved Permanently",
    "httpVersion": "HTTP/1.1",
    "headers": [],
    "cookies": [],
    "content": {
      "size": 0,
      "mimeType": "text/html"
    },
    "redirectURL": "",
    "headersSize": 32,
    "bodySize": 0
  },
  "cache": {},
  "timings": {
    "blocked": 0,
  }
},

HAR files can be exported from Chrome's Network analyser developer tool: Ctrl-Shift-I → Network tab → capture some requests → right-click and select "Save as HAR with contents". (Additional tip: check the "Preserve Log on Navigation" option – it looks like a recording button – to capture multi-level redirects and such.)

As human-readable as JSON is, it's still difficult to get a good overview of the requests. So I wrote a quick Python script that turns the JSON into something that's a little easier on a poor sysadmin's eyes:

[Screenshot: harview output]

It supports colored output, dumping of request and response headers, and dumping of POST bodies and response bodies (although this will be very slow). You can filter out uninteresting requests such as images or CSS/JS with the --filter-X options.
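
The core of it is small. A minimal sketch of the idea (note that a full HAR file wraps the entries shown above in a "log" object with an "entries" list):

import json
import sys

har = json.load(open(sys.argv[1]))
for entry in har['log']['entries']:
    req = entry['request']
    resp = entry['response']
    # One line per request: status, method, URL
    print('%s %s %s' % (resp['status'], req['method'], req['url']))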

You can get it by cloning the Git repository from Bitbucket.

Cheers!

bbcloner: create mirrors of your public and private Bitbucket Git repositories


I wrote a small tool that assists in creating mirrors of your public and private Bitbucket Git repositories and wikis. It also synchronizes already existing mirrors. Initial mirror setup requires that you manually enter your username/password. Subsequent synchronization of mirrors is done using Deployment Keys.

You can download a tar.gz, a Debian/Ubuntu package or clone it from the Bitbucket page.

Features

  • Clone / mirror / backup public and private repositories and wikis.
  • No need to store your username and password to update clones.
  • Exclude repositories.
  • No need to run an SSH agent: updates use passwordless private Deployment Keys, which have no write access to your repositories.

Usage

Here's how it works in short. Generate a passwordless SSH key:

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key: /home/fboender/.ssh/bbcloner_rsa<ENTER>
Enter passphrase (empty for no passphrase):<ENTER>
Enter same passphrase again: <ENTER>

You should add the generated public key to your repositories as a Deployment Key. The first time you use bbcloner, or whenever you've added new public or private repositories, you have to specify your username/password. BBcloner will retrieve a list of your repositories and create mirrors for any new repositories not yet mirrored:

$ bbcloner -n -u fboender /home/fboender/gitclones/
Password: 
Cloning new repositories
Cloning project_a
Cloning project_a wiki
Cloning project_b

Now you can update the mirrors without using a username/password:

$ bbcloner /home/fboender/gitclones/
Updating existing mirrors
Updating /home/fboender/gitclones/project_a.git
Updating /home/fboender/gitclones/project_a-wiki.git
Updating /home/fboender/gitclones/project_b.git

You can run the above from a cronjob. Specify the -s argument to prevent bbcloner from showing normal output.
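
For example, to synchronize the mirrors nightly at 03:00, a crontab entry could look like this (the path is the example from above):

0 3 * * * bbcloner -s /home/fboender/gitclones/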

The mirrors are full remote git repositories, which means you can clone them:

$ git clone /home/fboender/gitclones/project_a.git/
Cloning into project_a...
done.

Don't push changes to it, or the mirror won't be able to sync. Instead, point the remote origin to your Bitbucket repository:

$ git remote rm origin
$ git remote add origin git@bitbucket.org:fboender/project_a.git
$ git push
remote: bb/acl: fboender is allowed. accepted payload.

Get it

Here are ways of getting bbcloner:

  • Download the tar.gz or the Debian/Ubuntu package.
  • Clone the Git repository from the Bitbucket page.

More information

For more information, please see the Bitbucket repository.

Setting I/O priorities on Linux

All of us system admins know about nice, which lets you set the CPU priority of a process. Every now and then I need to run an I/O-heavy process. This inevitably makes the system intermittently unresponsive or slow, which can be a real annoyance.

If your system is slow to respond, you can check to see if I/O is the problem (which it usually will be) using a program called iotop, which is similar to the normal top program except it doesn't show CPU/Memory but disk reads/writes. You may need to install it first:

# aptitude install iotop

The output looks like this:

Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND                                       
12404 be/4 fboender  124.52 K/s  124.52 K/s  0.00 % 99.99 % cp winxp.dev.local.vdi /home/fboender
    1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % init
    2 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
    3 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/0]

As you can see, the copy process with PID 12404 is taking up 99.99% of my I/O, leaving little for the rest of the system.

Recent Linux kernels (2.6.13 and later, with the CFQ I/O scheduler) have an option to renice the I/O of a process. The ionice tool lets you renice processes from userland. It comes pre-installed on Debian/Ubuntu machines in the util-linux package. To use it, you specify a priority scheduling class using the -c option:

  • -c0 is an old, deprecated value meaning "none", which is now the same as Best-Effort (-c2).
  • -c1 is Real Time priority, which gives the process the highest I/O priority.
  • -c2 is Best-Effort priority, which puts the process in a round-robin queue where it gets a slice of I/O every so often. How much it gets can be specified using the -n option, which takes a value from 0 to 7.
  • -c3 is Idle, which means the process only gets I/O time when no other process requires it.

For example, I want a certain process (PID 12404) to only use I/O when no other process requires it, because the task is I/O-heavy, but is not high priority:

# ionice -c3 -p 12404
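
ionice can also launch a command under a given scheduling class directly, instead of renicing an already running process. For example, to run the copy from the iotop example above at Idle priority from the start:

# ionice -c3 cp winxp.dev.local.vdi /home/fboender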

The effects are noticeable immediately: my system responds faster, and there is less jitter on the desktop and the command line.

Nice.

Persistent undo history in Vim

Once you quit Vim, the undo history for that file is gone. This sometimes gives me problems if I accidentally made a change in a file without knowing it. This usually happens due to a bad Vim command which, for instance, capitalized a single letter.

There's an option which allows you to make the undo history persistent between Vim sessions. That means you can still undo changes you made in a file, even if you've quit Vim in the meantime.

You can add the following to your .vimrc to enable it:

set undofile   " Maintain undo history between sessions

This will create undo files all over the place, which look like this:

-rw-r--r-- 1 fboender fboender 320 2012-07-26 10:23 bad_gateway.txt
-rw-r--r-- 1 fboender fboender 523 2012-07-24 14:51 .bad_gateway.txt.un~

You can remedy this by including the following option in your configuration:

set undodir=~/.vim/undodir

Make sure to create the undodir:

$ mkdir ~/.vim/undodir
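
If you don't want to create the directory by hand on every machine, Vim can do it for you from your .vimrc; something along these lines should work:

if !isdirectory($HOME . "/.vim/undodir")
    call mkdir($HOME . "/.vim/undodir", "p")
endif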

The undo files will now be saved in the undodir:

$ ls -la .vim/undodir/
total 12
drwxr-xr-x  2 fboender fboender 4096 2012-07-26 10:32 .
drwxr-xr-x 12 fboender fboender 4096 2012-07-26 10:24 ..
-rw-r--r--  1 fboender fboender  519 2012-07-26 10:32 %home%fboender%bad_gateway.txt

Conque: Terminal emulators in Vim buffers

For the longest time, I've searched for a way to run terminal emulators in Vim buffers.

As a kind of work-around, I created Bexec, which allows you to run the current contents of a buffer through an external program. It then captures the output and inserts/appends it to another buffer.

Although Bexec works reasonably well, and still has its uses, it's not a true terminal emulator in Vim. Today I finally found a Vim plugin that lets you actually run interactive commands / terminals in Vim buffers: Conque.

It requires Vim with Python support built in. Installation is straightforward if you've got the requirements.

Download the .vmb file, open it in Vim, and issue:

:so %

It will then be installed. Quit Vim, restart it, and you can start using it:

:ConqueTerm bash

Very awesome.

Re-use existing SSH agent (cygwin et al)

(Please note that this post is not specific to Windows or Cygwin; it'll work on a remote Unix machine just as well.)

On my netbook, I use Windows XP in combination with Cygwin (a Unix environment for Windows) and Mintty for my Unixy needs. From there, I usually SSH to some Unix-like machine somewhere so I can do systems administration or development.

Unfortunately, the default use of an SSH agent under Cygwin is difficult, since there's no parent process that can run it and put the required information (SSH_AUTH_SOCK) in the environment. On most Linux distributions, the SSH agent is started after you log in to an X11 session, so that every child process (terminals you open, etc.) inherits the SSH_AUTH_SOCK environment variable and ssh can contact the agent to get your keys. The result? Under Cygwin you have to start a new SSH agent, load your key and enter your password for each Mintty terminal you open. Quite annoying.

The upside is, it's not very hard to configure your system properly so that you need only one SSH agent running on your system, and thus only have to enter your password once.

The key lies in how ssh-agent creates the environment. When we start ssh-agent in the traditional manner, we do:

$ eval `ssh-agent`
Agent pid 1784

The command starts the SSH agent and sets a bunch of environment variables:

$ set | grep SSH_
SSH_AGENT_PID=1784
SSH_AUTH_SOCK=/tmp/ssh-QzfPveH696/agent.696

The SSH_AUTH_SOCK is how the ssh command knows how to contact the agent. As you can see, the socket filename is generated randomly. That means you can't reuse the socket, since you can't guess the socket filename.

Good thing ssh-agent allows us to specify the socket filename, so we can easily re-use it.

Put the following in your ~/.bashrc:

# If no SSH agent is already running, start one now. Re-use sockets so we never
# have to start more than one session.

export SSH_AUTH_SOCK=/home/fboender/.ssh-socket

ssh-add -l >/dev/null 2>&1
if [ $? = 2 ]; then
   # No ssh-agent running
   rm -rf $SSH_AUTH_SOCK
   # >| allows output redirection to over-write files even if noclobber is set
   ssh-agent -a $SSH_AUTH_SOCK >| /tmp/.ssh-script
   source /tmp/.ssh-script
   echo $SSH_AGENT_PID >| ~/.ssh-agent-pid
   rm /tmp/.ssh-script
fi

What the script above does is set the socket filename manually to /home/yourusername/.ssh-socket. It then runs ssh-add, which will attempt to contact the ssh-agent through that socket. If it fails, no ssh-agent is running, so we do some cleanup and start one.
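
The exit code of ssh-add -l is what the check hinges on: 0 means the agent is reachable and has keys loaded, 1 means it's reachable but holds no keys, and 2 means no agent could be contacted. You can verify this by hand on a system without a running agent:

$ ssh-add -l >/dev/null 2>&1; echo $?
2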

Now, all you have to do is start a single terminal, and load your keys once:

$ ssh-add ~/.ssh/fboender\@electricmonk.rsa
Enter passphrase for .ssh/fboender@electricmonk.rsa: [PASSWORD]
Identity added: .ssh/fboender@electricmonk.rsa (.ssh/fboender@electricmonk.rsa)

Now you can start as many new terminals as you'd like, and they'll all use the same ssh-agent, never requiring you to enter your password for that key more than once per boot.

Update:

I've updated the script with suggestions from Anthony Geoghegan. It now also works if noclobber is set.