
Archive for the ‘shell scripting’ Category


Script to start a Chrome browser with an SSH Socks5 proxy

Socks5 proxies are great. They allow you to tunnel all traffic for applications that support Socks proxies through the proxy. One example I frequently use is starting a Chrome window that will do everything as if it was running on a remote machine. This is especially useful to bypass firewalls, so you can test websites that are only available on localhost on a remote machine, or sites that can only be accessed from a remote network. Basically, it's a poor man's application-specific VPN over SSH.

Normally I run the following:

ssh -D 8000 -N <host> &
chromium-browser --temp-profile --proxy-server="socks5://localhost:8000"

However that quickly becomes tedious to type, so I wrote a script:


#!/bin/bash

HOST=$1
SITE=$2

if [ -z "$HOST" ]; then
    echo "Usage: $0 <HOST> [SITE]"
    exit 1
fi

while true; do
    PORT=$(expr 8000 + $RANDOM / 32) # random port in range 8000 - 9000
    if [ -z "$(netstat -lnt | awk -v port=":$PORT" '$6 == "LISTEN" && $4 ~ port "$"')" ]; then
        # Port not in use; start the SSH tunnel and remember its PID
        ssh -D $PORT -N $HOST &
        PID=$!
        chromium-browser --temp-profile --proxy-server="socks5://localhost:$PORT" $SITE
        # Browser closed; tear down the tunnel
        kill $PID
        exit 0
    fi
done
The script finds a random unused port in the range 8000 – 9000, starts an SSH Socks5 proxy on it and then starts Chromium pointed at that proxy.

Together with the excellent Scripts panel plugin for Cinnamon, this makes for a nice menu to easily launch a browser to access remote sites otherwise unreachable:


Update: Added a second, optional parameter to specify the site you want the browser to connect to, for added convenience.

Read the POSIX standard

Stop reading your local manual pages when programming/scripting stuff, and use the POSIX standard instead:

Online POSIX 2008 (IEEE Std 1003.1-2008) standard

There are four main parts: the Base Definitions (XBD), the System Interfaces (XSH), the Shell & Utilities (XCU) and the Rationale (XRAT).

Some Do's and Don'ts:

Finally, read Bash Pitfalls to learn why your shell scripting sucks.
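To give a flavour of the kind of thing the standard settles, here's a small sketch of some common bashisms next to their POSIX-compliant equivalents (these run in dash, ash and other minimal /bin/sh implementations):

```shell
#!/bin/sh

# Bashism: [[ $name == foo* ]] -- POSIX sh does pattern matching with 'case'
name="foobar"
case "$name" in
    foo*) echo "matches" ;;
esac

# Bashism: ${name^^} for uppercasing -- POSIX sh pipes through 'tr' instead
echo "$name" | tr '[:lower:]' '[:upper:]'

# Bashism: 'function greet() {' -- POSIX sh uses plain 'greet() {'
greet() {
    printf '%s\n' "hello $1"
}
greet world
```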

Excluding results of a 'find' command (inverting tests)

As a kind of follow-up to my previous post on using find and sed to search and replace in multiple files, I found out something else.

I needed to find and replace something in every file, except for any files which had ".svn" in them. After struggling for a few fruitless minutes with -regex, I stumbled upon this example in the manual page:

find /sbin /usr/sbin -executable \! -readable -print

   Search for files which are executable but not readable.

The \! allows us to invert the tests after it. Perfect! All we need to do is use -regex to do our excluding:

find . -type f \! -regex ".*\.svn.*"

And we can now search and replace in all files except those that have ".svn" in them:

find . -type f \! -regex ".*\.svn.*" -print0 | xargs -0 sed -i "s/foo/bar/"

Neat. Note that, again, -regex is a GNU find-only construct.
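On systems without GNU find, the POSIX -path test will usually do the same job; a sketch:

```shell
# Exclude anything with ".svn" in its path, using the portable -path test
find . -type f \! -path "*.svn*"

# Or prune the .svn directories so find never descends into them at all,
# which is also faster on large trees
find . -name .svn -prune -o -type f -print
```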

Linux search and replace

I always kept a small Python script around for searching and replacing in Linux. Turns out that GNU sed has an inline edit mode which I didn't know about:

       -i[SUFFIX], --in-place[=SUFFIX]

              edit files in place (makes backup if extension supplied)

This makes searching and replacing in files as simple as:

find . -name "*.txt" -print0 | xargs -0 sed -i "s/foo/bar/"

This replaces all occurrences of "foo" with "bar" in all the .txt files in or below the current directory.

Unfortunately, -i appears to be a GNU extension, so it probably won't work on *BSD or Solaris.
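If you're stuck on a sed without -i, the classic workaround is a temporary file (newer BSD seds do ship -i, but there it requires an explicit suffix argument, e.g. sed -i '' ...). A sketch:

```shell
# Portable stand-in for "sed -i": write to a temp file, then move it back
for f in *.txt; do
    sed "s/foo/bar/" "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
```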

chkrootkit false positives filtering

Chkrootkit is a tool that searches for rootkits, trojans and other signs of break-ins on your system. Like most security scanners, it sometimes generates false positives. Chkrootkit doesn't have a native way to filter those out. From the FAQ:

[Q:] chkrootkit is reporting some files and dirs as suspicious: `.packlist', `.cvsignore', etc. These are clearly false positives. Can't you ignore these?

[A:] Ignoring some files and dirs could impair chkrootkit's accuracy. An attacker might use this, since he knows that chkrootkit will ignore certain files and dirs.

This is true, but getting an email every day is simply too annoying, and makes me skip chkrootkit-generated emails on occasion because "it's probably a false positive anyway". So here's a small guide to filtering chkrootkit's output.

First, we create a file /etc/chkrootkit.ignore which will hold a bunch of regular expressions that will match everything we don't want to be warned about. For instance, I've got a machine that needs to have a dhcp client installed. Chkrootkit keeps on generating emails with these lines:

eth0: PACKET SNIFFER(/sbin/dhclient[346])
eth1: PACKET SNIFFER(/usr/sbin/dhcpd3[1008])

So what we do is create the file /etc/chkrootkit.ignore and put the following in it:


^eth0: PACKET SNIFFER\(/sbin/dhclient\[[0-9]*\]\)$
^eth1: PACKET SNIFFER\(/usr/sbin/dhcpd3\[[0-9]*\]\)$

In order to test if the rules we created are correct, we put the two lines with false positives in a separate file (/tmp/chkrk-fp.txt) and run the following:


[root@sharky]/etc# cat /tmp/chkrk-fp.txt | grep -f /etc/chkrootkit.ignore
eth0: PACKET SNIFFER(/sbin/dhclient[346])
eth1: PACKET SNIFFER(/usr/sbin/dhcpd3[1008])

The lines that should be filtered out of the chkrootkit output should appear here. If nothing appears, or if not all of the lines that you want to filter appear, there's a problem. Refine your regular expressions in /etc/chkrootkit.ignore until it works.

Now we need to modify the chkrootkit cronjob so that the false positives are filtered. To do this, we edit /etc/cron.daily/chkrootkit. Below is a patch that shows what should be changed. You can apply the patch with the 'patch' command, or you can manually add the lines that start with a '+', replacing the lines with a '-'.

--- /home/root/foo      2007-11-21 11:53:58.532769984 +0100
+++ /etc/cron.daily/chkrootkit  2007-11-21 11:54:00.689442120 +0100
@@ -1,27 +1,28 @@
 #!/bin/sh -e
 
 CHKROOTKIT=/usr/sbin/chkrootkit
 CF=/etc/chkrootkit.conf
 LOG_DIR=/var/cache/chkrootkit
+IGNOREF=/etc/chkrootkit.ignore
 
 if [ ! -x $CHKROOTKIT ]; then
   exit 0
 fi
 
 if [ -f $CF ]; then
     . $CF
 fi
 
 if [ "$RUN_DAILY" = "true" ]; then
     if [ "$DIFF_MODE" = "true" ]; then
-        $CHKROOTKIT $RUN_DAILY_OPTS > $LOG_DIR/log.today 2>&1 || true
+        $CHKROOTKIT $RUN_DAILY_OPTS | grep -v -f $IGNOREF > $LOG_DIR/log.today 2>&1 || true
         if [ ! -f $LOG_DIR/log.old ] \
            || ! diff -q $LOG_DIR/log.old $LOG_DIR/log.today > /dev/null 2>&1; then
             cat $LOG_DIR/log.today
         fi
         mv $LOG_DIR/log.today $LOG_DIR/log.old
     else
-        $CHKROOTKIT $RUN_DAILY_OPTS || true
+        $CHKROOTKIT $RUN_DAILY_OPTS | grep -v -f $IGNOREF || true
     fi
 fi

Next, we try running chkrootkit, to see if anything shows up:

[root@sharky]/etc/cron.daily# ./chkrootkit

There is no output, so our false positives are now being ignored.

Floating point stuff in Bash

Shell scripting is powerful, but unfortunately it becomes harder if you want to perform floating point calculations in it. There's expr, but it only handles integers:

[todsah@jib]~$ echo `expr 0.1 + 0.1`
expr: non-numeric argument

If you wish to perform floating point calculations in shell scripts, you can use the bc tool: "bc – An arbitrary precision calculator language".

Some examples:

[todsah@jib]~$ A=`echo "0.1 + 0.1" | bc`; echo $A;
.2

However, dividing two numbers in bc, which would result in a fractional (floating point) number, doesn't work out-of-the-box. So you'll need to set the scale variable:

[todsah@jib]~$ A=`echo "10 / 3" | bc`; echo $A;
3
[todsah@jib]~$ A=`echo "scale = 2; 10 / 3" | bc`; echo $A;
3.33

You can also evaluate boolean expressions using bc:

[todsah@jib]~$ if [ `echo "10.1 < 10" | bc` -eq 1 ]; then echo "10.1 < 10"; fi
[todsah@jib]~$ if [ `echo "10.1 > 10" | bc` -eq 1 ]; then echo "10.1 > 10"; fi
10.1 > 10

This function might prove helpful:

function fpexpr() {
    scale=$1
    expr=$2
    echo `echo "scale = $scale; $expr" | bc`
}

For example:

[todsah@jib]~$ A=`fpexpr 5 "10 / 3"`; echo $A;
3.33333

bc can do a lot more than this. Consult the manual page for other handy stuff.
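As an aside, awk does all of its arithmetic in floating point, so for quick one-off calculations it can stand in for bc; a sketch:

```shell
# awk computes in floating point by default; printf controls the precision
awk 'BEGIN { printf "%.2f\n", 10 / 3 }'    # prints 3.33
awk 'BEGIN { printf "%.1f\n", 0.1 + 0.1 }' # prints 0.2
```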

More XML Commandline unix tools

A little while ago I reported on a little XML toolset called XMLStarlet. XMLStarlet provides a bunch of commandline tools for reading and converting XML files from the commandline. Useful in scripts. However, it has a pretty complex interface. For instance, you'll have to know XPath to easily select a particular piece of XML to show.

The XMLCliTools toolset is more in the spirit of traditional Unix tools like grep. It can grep, read and format XML files quite easily. Some examples:

Look for node sequence "top->a" (levelwise). Display from level 1.
jensl:~/c/xmlclitools> xmlgrep -f test.xml 1 top.a
<a><b u="kalle">B1</b></a>

Above with formatting.
jensl:~/c/xmlclitools> xmlgrep -gf test.xml 0 b:u=kalle|xmlfmt b:u

XML commandline toolset

Those who are familiar with Unix commandline tools like grep, sed and cut will know about the enormous power they provide. They make it a breeze to mangle, transform and retrieve information in and from text files. Unfortunately, they're mostly dependent on row- and column-based information. That is, they expect each line in a file to contain one row, and each column to be separated by a certain character (usually a space or a tab). Take, for instance, some lines from a simple Apache logfile:

 - - [26/Jun/2005:09:57:55 +0200] "GET /st..
 - - [26/Jun/2005:10:03:56 +0200] "GET / HTTP..
 - - [26/Jun/2005:10:14:27 +0200] "GET /imag..
 - - [26/Jun/2005:10:21:36 +0200] "GET / HTTP..
 - - [26/Jun/2005:10:23:53 +0200] "GET /ima..

If I wanted to list every unique IP in that logfile, I'd simply issue the following command at the shell:

[todsah@jib]~$ cut -d" " -f1 access.log | sort | uniq

'Cut' strips away every column except the first. 'Sort' sorts the list of IP's so that all duplicates appear under each other. 'Uniq' then removes all the duplicate IP's, and I'm left with a list of all unique IP's in the log. Writing this small 'script' took about 15 seconds. Now, that's a pretty strong method for statistical analysis.
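The same pipeline extends naturally to counting. For instance, to see which IP's made the most requests, add uniq's -c flag and sort numerically:

```shell
# Count occurrences of each IP and list the busiest ones first
cut -d" " -f1 access.log | sort | uniq -c | sort -rn | head
```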

Unfortunately, XML took that power completely away. It doesn't work on a row/column basis, its syntax is loose (for example, you can spread a single element with attributes over multiple lines) and you can nest elements inside other elements.

There is hope, however. A toolset called XMLStarlet offers a powerful XML commandline tool which can do Xpath selects, transformations and more.

Take the following example XML file:

<?xml version='1.0' encoding='UTF-8'?>
<dataq port="50000" daemon="false" verbose="true">

	<queue name='backup' />
	<queue name='mp3' type='fifo' size='1' overflow='pop' />
	<queue name='restricted' type='fifo' size='5' overflow='deny'>
		<access sense="deny">
			<username>john</username>
		</access>
	</queue>
</dataq>

Suppose I'd want to get all the usernames in this XML file. Using the traditional Unix commandline utilities, I'd have to do this:

[todsah@jib]~$ grep "<username>" dataq.xml | cut -d'>' -f2 | cut -d'<' -f1
john

As you can see, this works. But what if we changed the last queue element to be completely on one line?:

<queue name='restricted' type='fifo' size='5' overflow='deny'><access sense="deny"><username>john</username></access></queue>

It's the exact same, valid XML and should yield the same result, but this is what we get instead:

[todsah@jib]~$ grep "<username>" dataq.xml | cut -d'>' -f2 | cut -d'<' -f1 

The problem is that you can't assume anything to be the same from one XML file to the next. It's simply not part of the XML specifications.

Using the XMLStarlet commandline tool, we can work around these problems. For instance, selecting all usernames from the XML file works like this:

[todsah@jib]~$ xmlstarlet sel -t -m "//username" -v 'node()' -n dataq.xml
john

This commandline basically says to use Select mode (sel) with a commandline template (-t) to match all <username> tags (-m "//username") and to show the Value (-v) of each match and to append each value with a newline (-n).

Its use is quite simple, but you do have to know XML and XPath. Some XSLT will also come in handy, because underneath, every option to XMLStarlet is translated to an XSLT stylesheet.

XMLStarlet also allows you to completely transform (using XSLT) XML files, translate, validate, format and edit XML files. You can, for instance, use XMLStarlet to delete or insert certain parts of an XML file that match an XPath expression. It can also convert XML to the PYX format, which can then be more easily used with traditional Unix commandline tools.