Electricmonk

Ferry Boender

Programmer, DevOpper, Open Source enthusiast.

An Audio / Video profile switcher (and app launcher) script for Linux

Tuesday, April 13th, 2021

Since the start of Corona, my company has been mostly working remotely. That means more video conference meetings. A lot more. So much, in fact, that I decided to automate the process of setting everything up correctly for that and other audio/video profiles.

For example, my webcam is in my laptop, whose screen I don’t normally use (because virtual desktops == infinite screens right in front of you). Since I’d like people to see me looking at them when we’re conferencing, I want Google Meet to be on the laptop screen. Also, I need to activate my headset.

Switching the internal screen on, switching to the headset, opening Google Meet, dragging it to the laptop’s screen and making the window sticky so it doesn’t disappear when I switch virtual desktops on my main screen… is a bit of a hassle. And since I’m lazy, I decided I need an Audio Video profile switcher.

No such thing existed, so I cobbled together a script, which you can find in a Gist on Github.

The script is reasonably documented, I think. It relies heavily on xdotool, wmctrl and pacmd. At the top are some profile definitions for screen layouts, screen coordinates (to move a window to a different screen), and some sound output profiles:

# Use arandr to setup your displays and then export the profile
SCREEN_LAYOUTS[external_only]="
    --output HDMI-2 --off 
    --output DP-1 --off
    --output DP-2 --off
    --output eDP-1 --off
    --output HDMI-1 --primary --mode 1920x1080 --pos 0x0 --rotate normal
"
SCREEN_LAYOUTS[external_left]="
    --output HDMI-2 --off
    --output DP-1 --off
    --output DP-2 --off
    --output HDMI-1 --primary --mode 1920x1080 --pos 0x0 --rotate normal
    --output eDP-1 --mode 1920x1080 --pos 1920x0 --rotate normal
"

# Screen coordinates for moving windows to a different screen. These are
# coordinates on a virtual desktop that is stretched over all monitors; either
# vertically or horizontally, depending on the screen layout.
SCREEN_COORDS[laptop]="0,1920,0,500,500"
SCREEN_COORDS[external]="0,0,0,500,500"

# This requires a bit of a PhD in pulse audio. There has to be a better way
# Mike!
SOUND_OUTPUTS[headphones]="
    pacmd set-card-profile alsa_card.usb-Logitech_Logitech_USB_Headset-00 output:analog-stereo+input:analog-mono;
    pacmd set-card-profile 0 output:analog-stereo+input:analog-stereo;
    pacmd set-default-sink alsa_output.usb-Logitech_Logitech_USB_Headset-00.analog-stereo;
"
SOUND_OUTPUTS[external_monitor]="
    pacmd set-card-profile alsa_card.usb-Logitech_Logitech_USB_Headset-00 off;
    pacmd set-card-profile 0 output:hdmi-stereo;
    pacmd set-default-sink alsa_output.pci-0000_00_1f.3.hdmi-stereo;
"

Then there’s a bunch of helper functions (sketched below) to do various things, such as:

  • Change the monitor layout
  • Change the audio output profile
  • Manipulate windows such as moving them around
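The real implementations are in the Gist, but to give an idea, here’s a minimal sketch of what such helpers could look like. The function names match the ones used in the profile example below; the bodies are my illustration, not the Gist’s actual code:

# Illustrative sketches of the helper functions; see the Gist for the real thing.
screen_layout () {
    # Apply one of the xrandr layouts defined in SCREEN_LAYOUTS
    xrandr ${SCREEN_LAYOUTS[$1]}
}
sound_output () {
    # Run the pacmd commands defined in SOUND_OUTPUTS
    eval "${SOUND_OUTPUTS[$1]}"
}
win_to_screen () {
    # Move the window whose title matches $1 to the coordinates of screen $2
    wmctrl -r "$1" -e "${SCREEN_COORDS[$2]}"
}
win_sticky () {
    # Make the window visible on all virtual desktops
    wmctrl -r "$1" -b add,sticky
}
win_focus () {
    # Raise and focus the window
    wmctrl -a "$1"
}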

Finally, the actual audio/video profiles are defined in a big “if, then” statement. I also throw in the launching of some applications, just to make things easier. For example, the aforementioned “conference” profile:

elif [ "$PROFILE" = "conference" ]; then
   sound_output "headphones"
   screen_layout "external_left"
   firefox --new-window https://meet.google.com/landing?authuser=1
   sleep 1 # give firefox a moment
   win_to_screen "Google Meet" "laptop"
   win_sticky "Google Meet"
   win_focus "Google Meet"

I hook all of this up in my Lurch launcher, so that I can just hit Ctrl-alt-semicolon and type the partial name of a profile to switch to it.

I thought I’d share it, since it might be useful for other people. Note that the script may require tweaking to suit your needs, as it’s written for XFCE; other window managers might work slightly differently.

Finding and removing packages installed from non-standard repos in Ubuntu

Saturday, April 10th, 2021

Update: Oh, look, right in the nick of time: “Valve Steam through 2021-04-10, when a Source engine game is installed, allows remote authenticated users to execute arbitrary code because of a buffer overflow that occurs for a Steam invite after one click.”

As part of my big spring cleaning, as well as given all the recent supply chain attacks, I’ve decided that I will no longer run any software from third-party repositories directly on my Linux desktop. The most pressing issue is with packages from PyPI, NPM, Docker Hub and other repositories that don’t support cryptographically signed packages. I now run those in Virtual Machines, but that’s a topic for another blog post.

I also wanted to get rid of all the cruft I’ve installed on my Linux desktop over the last few years from third-party Ubuntu repositories. I often tend to try things out, but then forget to clean up after myself, which leaves quite a bit of software lingering around that I never use anyway:

root @ jib /etc/apt/sources.list.d $ ls
000-mailpile.list slack.list
000-mailpile.list.save slack.list.save
crystal.list spotify.list
crystal.list.save spotify.list.save
google-chrome.list steam.list
google-chrome.list.save steam.list.save
google-cloud-sdk.list taskcoach-developers-ubuntu-ppa-bionic.list
google-cloud-sdk.list.save taskcoach-developers-ubuntu-ppa-bionic.list.save
gregory-hainaut-ubuntu-pcsx2_official_ppa-bionic.list teams.list
gregory-hainaut-ubuntu-pcsx2_official_ppa-bionic.list.save teams.list.save
nodesource.list teamviewer.list.save
nodesource.list.save ultradvorka-ubuntu-productivity-bionic.list
peek-developers-ubuntu-stable-bionic.list ultradvorka-ubuntu-productivity-bionic.list.save
peek-developers-ubuntu-stable-bionic.list.save vscode.list
signal-xenial.list vscode.list.save

I mean, I don’t even know what some of that stuff is anymore. Time to clean things up!

First, how do I figure out which packages are in those repositories? The web gives us plenty of tips, but they seem to revolve mostly around aptitude, which I don’t have installed. And the whole idea is to clean things up, not install additional cruft!

Let’s look at /var/lib/apt/lists:

$ cd /var/lib/apt/lists
$ ls | head -n5
deb.nodesource.com_node%5f12.x_dists_bionic_InRelease
deb.nodesource.com_node%5f12.x_dists_bionic_main_binary-amd64_Packages
dist.crystal-lang.org_apt_dists_crystal_InRelease
dist.crystal-lang.org_apt_dists_crystal_main_binary-amd64_Packages
dist.crystal-lang.org_apt_dists_crystal_main_binary-i386_Packages

Okay, that looks promising...

$ cat deb.nodesource.com_node%5f12.x_dists_bionic_main_binary-amd64_Packages | head -n5
Package: nodejs
Version: 12.22.1-1nodesource1
Architecture: amd64
Maintainer: Ivan Iguaran <ivan@nodesource.com>
Installed-Size: 91389

Ah, just what we need. So we can get a list of all the packages in a repo using some grep magic. Note that these are not necessarily packages that have actually been installed; rather, they’re all the packages that are available in the repository.

$ grep '^Package:' deb.nodesource.com*
lists/deb.nodesource.com_node%5f12.x_dists_bionic_main_binary-amd64_Packages:Package: nodejs

For a repo with multiple packages, the output looks like this:

$ grep '^Package:' repository.spotify.com*
lists/repository.spotify.com_dists_stable_non-free_binary-amd64_Packages:Package: spotify-client
lists/repository.spotify.com_dists_stable_non-free_binary-amd64_Packages:Package: spotify-client-0.9.17
lists/repository.spotify.com_dists_stable_non-free_binary-amd64_Packages:Package: spotify-client-gnome-support
lists/repository.spotify.com_dists_stable_non-free_binary-amd64_Packages:Package: spotify-client-qt
lists/repository.spotify.com_dists_stable_non-free_binary-i386_Packages:Package: spotify-client
lists/repository.spotify.com_dists_stable_non-free_binary-i386_Packages:Package: spotify-client-gnome-support
lists/repository.spotify.com_dists_stable_non-free_binary-i386_Packages:Package: spotify-client-qt

Fix that output up a little bit so we only get the package name:

$ grep '^Package:' repository.spotify.com* | sed "s/.*Package: //" | sort | uniq
spotify-client
spotify-client-0.9.17
spotify-client-gnome-support
spotify-client-qt

There we go. We can now use apt to see if any of those packages are installed:

$ apt -qq list $(grep '^Package:' repository.spotify.com* | sed "s/.*Package: //" | sort | uniq) | grep installed
spotify-client/stable,now 1:1.1.55.498.gf9a83c60 amd64 [installed]

Okay, so Spotify has been installed with the spotify-client package. Now, we could purge that package manually, but for some of the repositories there are many installed packages. An easier (but slightly more dangerous) method is to just purge all of the packages mentioned in the repo, whether they’re installed or not:

$ apt purge $(grep '^Package:' repository.spotify.com* | sed "s/.*Package: //" | sort | uniq)
Package 'spotify-client-0.9.17' is not installed, so not removed
Package 'spotify-client-gnome-support' is not installed, so not removed
Package 'spotify-client-qt' is not installed, so not removed
The following packages will be REMOVED:
spotify-client*
0 upgraded, 0 newly installed, 1 to remove and 13 not upgraded.
After this operation, 305 MB disk space will be freed.
Do you want to continue? [Y/n]

Finally, we can remove the source list from our system:

$ rm /etc/apt/sources.list.d/spotify.list*

Rinse and repeat for the other repositories, and soon we’ll have rid our system of not just a bunch of cruft that increases our attack surface, but also of a bunch of closed source, proprietary garbage that I never used in the first place.
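To speed up the rinse-and-repeat part, the purge step can be wrapped in a small loop. A hypothetical sketch (the repo names are examples; review what apt wants to purge before confirming, and you’d still remove the matching files in /etc/apt/sources.list.d by hand):

for repo in repository.spotify.com deb.nodesource.com; do
    apt purge $(grep -h '^Package:' "/var/lib/apt/lists/$repo"* | sed "s/.*Package: //" | sort | uniq)
done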

Update: Don’t forget to also remove any lingering configuration or data from your home directory or the system in general. How to go about doing that differs per application, so I can’t give any instructions for it. I just did a “find -type d” in my home dir, grepped out a bunch of irrelevant stuff and then went through the entire list, doing an “rm -rf” on anything I didn’t think was worth keeping around. Freed up about 90 GB of disk space too (mostly due to Steam)! Make backups before you do this!

Also, when you’re done removing the source lists, you can just wipe the entire contents of /var/lib/apt/lists. It’ll get rebuilt when you do an apt update:

$ rm /var/lib/apt/lists/*
$ apt update

Now, I’m pretty sure that there is some arcane apt, dpkg, apt-get or add-apt-repository command to make this easier. The thing is, finding out which command does exactly what I wanted was taking up more time than just going ahead and cobbling together some shell one-liners myself.

Stay tuned for a blog post on how I use VirtualBox with linked clones and a little shell script wrapper to super easily spin up a sandboxed virtual machine for each of my development projects!

Shared folder on Virtualbox Ubuntu 20.04 guest fails with “No such device or address”

Friday, February 26th, 2021

In an Ubuntu 18.04 virtual machine in VirtualBox, I could define a Shared Folder named “Projects” in the VM’s settings, and then mount it like this:

mount -t vboxsf Projects -o uid=1000,gid=1000 /home/fboender/Projects/

Or with the equivalent fstab entry like this:

Projects /home/fboender/Projects/ vboxsf defaults,uid=1000,gid=1000 0 0

This fails on an Ubuntu 20.04 guest with the following error:

/sbin/mount.vboxsf: mounting failed with the error: No such device or address

Some combinations I tried:

mount -t vboxsf /media/sf_Projects -o uid=1000,gid=1000 /home/fboender/Projects/
mount -t vboxsf Projects -o uid=1000,gid=1000 /home/fboender/Projects/
mount -t vboxsf sf_Projects -o uid=1000,gid=1000 /home/fboender/Projects/

None of it worked. It turns out that somehow, things got case-(in?)sensitive, and you need to specify the lower-case version of the Shared Folder name:

mount -t vboxsf projects -o uid=1000,gid=1000 /home/fboender/Projects/
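Presumably the equivalent fstab entry needs the same lower-casing of the share name. I haven’t verified this, but it follows from the mount behavior above:

projects /home/fboender/Projects/ vboxsf defaults,uid=1000,gid=1000 0 0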

Hope this saves someone somewhere some headaches, cause I couldn’t find anything about it on the Googles.

Sla (Simple Little Automator 🥗) v1.1 now supports long rule descriptions

Saturday, September 26th, 2020

Version 1.1 of the Simple Little Automator adds the ability to have long descriptions for build rules. For example:

install () {
    # Install sla
    # Install sla to $PREFIX (/usr/local by default).
    #
    # You can specify the prefix with an environment variable:
    # 
    # $ PREFIX=/usr sla install
    
    # Set the prefix
    PREFIX=${PREFIX:-/usr/local}
    DEST="$PREFIX/bin/sla"
    env install -m 755 ./sla "$DEST"
    echo "sla installed in $DEST"
}

This documentation can then be accessed using sla <rule> --help. E.g.:

$ sla install --help
install: Install sla

    Install sla to $PREFIX (/usr/local by default).
    
    You can specify the prefix with an environment variable:

        $ PREFIX=/usr sla install

Get the release from the Github releases page.

Sec-tools v0.3: HTTP Security Headers

Wednesday, July 24th, 2019

The latest version of my sec-tools project includes a new tool, “sec-gather-http-headers“. It scans one or more URLs for HTTP security headers. As usual, you can use sec-diff to generate alerts about changes in the output and sec-report to generate a matrix overview of the headers for each URL.
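For example, a cron job along these lines would keep a state file and only produce output when a header changes (the exact sec-diff invocation may differ; check the sec-tools documentation):

$ sec-gather-http-headers https://github.com/ https://gitlab.com/ | sec-diff /var/cache/sec-tools/http_headers.state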

The JSON output looks like this:

$ sec-gather-http-headers https://github.com/ https://gitlab.com/
{
    "http_headers": {
        "https://github.com/": {
            "Expect-CT": "max-age=2592000, report-uri=\"https://api.github.com/_private/browser/errors\"",
            "Feature-Policy": null,
            "Access-Control-Allow-Origin": null,
            "X-Frame-Options": "deny",
            "Referrer-Policy": "origin-when-cross-origin, strict-origin-when-cross-origin",
            "Access-Control-Allow-Headers": null,
            "X-XSS-Protection": "1; mode=block",
            "Strict-Transport-Security": "max-age=31536000; includeSubdomains; preload",
            "Public-key-pins": null,
            "Content-Security-Policy": "default-src 'none'; base-uri 'self'; block-all-mixed-content; connect-src 'self' uploads.github.com www.githubstatus.com collector.githubapp.com api.github.com www.google-analytics.com github-cloud.s3.amazonaws.com github-production-repository-file-5c1aeb.s3.amazonaws.com github-production-upload-manifest-file-7fdce7.s3.amazonaws.com github-production-user-asset-6210df.s3.amazonaws.com wss://live.github.com; font-src github.githubassets.com; form-action 'self' github.com gist.github.com; frame-ancestors 'none'; frame-src render.githubusercontent.com; img-src 'self' data: github.githubassets.com identicons.github.com collector.githubapp.com github-cloud.s3.amazonaws.com *.githubusercontent.com customer-stories-feed.github.com; manifest-src 'self'; media-src 'none'; script-src github.githubassets.com; style-src 'unsafe-inline' github.githubassets.com",
            "X-Content-Type-Options": "nosniff",
            "Access-Control-Allow-Methods": null
        },
        "https://gitlab.com/": {
            "Expect-CT": null,
            "Feature-Policy": null,
            "Access-Control-Allow-Origin": null,
            "X-Frame-Options": null,
            "Referrer-Policy": null,
            "Access-Control-Allow-Headers": null,
            "X-XSS-Protection": "1; mode=block",
            "Strict-Transport-Security": "max-age=31536000; includeSubdomains",
            "Public-key-pins": null,
            "Content-Security-Policy": "frame-ancestors 'self' https://gitlab.lookbookhq.com https://learn.gitlab.com;",
            "X-Content-Type-Options": "nosniff",
            "Access-Control-Allow-Methods": null
        }
    }
}

An example PDF output with a matrix overview:

(figure: http_headers matrix overview)

WordPress update hangs after “Unpacking the update”

Monday, March 11th, 2019

If your WordPress update just stops after showing the message “Unpacking the update”, try increasing the memory limit of PHP. Unzipping the update takes quite a bit of memory. Newer versions of WordPress keep getting larger and larger, requiring more memory to unpack. So it can suddenly break, as it did for me.
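How you raise the limit depends on your hosting setup. Two common places to do it (the 256M value is just an example; pick something your host allows) are php.ini:

memory_limit = 256M

or wp-config.php, via the constant WordPress reads for this:

define( 'WP_MEMORY_LIMIT', '256M' );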

You may want to check the actual limit PHP is using by creating a small “php info” PHP page in your webroot and opening that in your browser. For example:

<?php
phpinfo();
?>

Name it something like “phpinfo_52349602384.php”. The random name is so that if you forget to remove the file, automated vulnerability scanners won’t find it. Open that file in the browser and the memory limit should be mentioned somewhere under “memory_limit”.
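If you have shell access, you can also check from the command line. Note that the CLI often uses a different php.ini than the web server, so treat this as an approximation:

$ php -i | grep memory_limit
memory_limit => 256M => 256M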

sla: The Simple Little Automator

Monday, November 19th, 2018

I’m tired of using Make and its arcane syntax. 90% of the projects I write or deal with don’t require any kind of incremental compilation, but that’s all any build system talks about. That, and how insanely fast they are. The drawback is usually that you need to install several terabytes of dependencies, and then you need a 6-day course to learn how to actually write targets in it. I just want to convert some file formats, run a linter or execute some tests.

So I took an hour or two and wrote sla: the Simple Little Automator. (the name was chosen because ‘sla’ is easy to type; sorry).

sla is simple. It’s just shell functions in a file called build.sla. The sla script searches your project for this file, and runs the requested function. Simple, elegant, powerful, extensible and portable. Here’s an example build.sla:

#
# This is a script containing functions that are used as build rules. You can
# use the Simple Little Automator (https://github.com/fboender/sla) to run
# these rules, or you can run them directly in your shell:
#
#   $ bash -c ". build.sla && test"
#

clean () {
    # Clean artifacts and trash from repo
    find ./ -name "*.pyc" -delete
    find ./ -name "*.tmp" -delete
}

test () {
    # Run some code tests
    clean  # Depend on 'clean' rule
    flake8 --exclude src/llt --ignore=E501 src/*.py
}

You can run rules with:

$ sla test
./src/tools.py:25:80: E501 line too long (111 > 79 characters)
Exection of rule 'test' failed with exitcode 2

The best thing is, since it’s simple shell functions, you don’t even need sla installed in order to run them! Sourcing the build.sla file in your shell is all you need:

$ bash -c ". build.sla && test"
./src/tools.py:25:80: E501 line too long (111 > 79 characters)
Exection of rule 'test' failed with exitcode 2

More info and installation instructions can be found on the Github project page.

multi-git-status can now do a “git fetch” for each repo.

Saturday, October 27th, 2018

Just a quick note:

My multi-git-status project can now do a “git fetch” for each repo before showing the status. This fetches the latest changes in the remote repository (without changing anything in your locally checked-out branch), so that mgitstatus will also show any “git pull”s you’d have to do.
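If you just want the fetch-all-repos part without installing anything, a rough shell equivalent (assuming all your repos sit one level below the current directory) would be:

for d in */.git; do
    (cd "${d%/.git}" && git fetch --quiet && git status --short --branch)
done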

direnv: Directory-specific environments

Sunday, June 3rd, 2018

Over the course of a single day I might work on a dozen different admin or development projects. In the morning I could be hacking on some Zabbix monitoring scripts, in the afternoon on auto-generated documentation and in the evening on a Python or C project.

I try to keep my system clean and my projects as compartmentalized as possible, to avoid library version conflicts and such. When jumping from one project to another, the requirements of my shell environment can change significantly. One project may require /opt/nim/bin to be in my PATH. Another project might require a Python VirtualEnv to be active, or to have GOPATH set to the correct value. All in all, switching from one project to another incurs some overhead, especially if I haven’t worked on it for a while.

Wouldn’t it be nice if we could have our environment automatically set up simply by changing to the project’s directory? With direnv we can.

direnv is an environment switcher for the shell. It knows how to hook into bash, zsh, tcsh, fish shell and elvish to load or unload environment variables depending on the current directory. This allows project-specific environment variables without cluttering the ~/.profile file.

Before each prompt, direnv checks for the existence of a “.envrc” file in the current and parent directories. If the file exists (and is authorized), it is loaded into a bash sub-shell and all exported variables are then captured by direnv and then made available to the current shell.

It’s easy to use. Here’s a quick guide:

Install direnv (I’m using Ubuntu, but direnv is available for many Unix-like systems):

fboender @ jib ~ $ sudo apt install direnv

You’ll have to add direnv to your .bashrc in order for it to work:

fboender @ jib ~ $ tail -n1 ~/.bashrc
eval "$(direnv hook bash)"

In the base directory of your project, create a .envrc file. For example:

fboender @ jib ~ $ cat ~/Projects/fboender/foobar/.envrc 
#!/bin/bash

# Settings
PROJ_DIR="$PWD"
PROJ_NAME="foobar"
VENV_DIR="/home/fboender/.pyenvs"
PROJ_VENV="$VENV_DIR/$PROJ_NAME"

# Create Python virtualenv if it doesn't exist yet
if [ \! -d "$PROJ_VENV" ]; then
    echo "Creating new environment"
    virtualenv -p python3 $PROJ_VENV
    echo "Installing requirements"
    $PROJ_VENV/bin/pip3 install -r ./requirements.txt
fi

# Emulate the virtualenv's activate, because we can't source things in direnv
export VIRTUAL_ENV="$PROJ_VENV"
export PATH="$PROJ_VENV/bin:$PATH:$PWD"
export PS1="(`basename \"$VIRTUAL_ENV\"`) $PS1"
export PYTHONPATH="$PWD/src"

This example automatically creates a Python3 virtualenv for the project if it doesn’t exist yet, and installs the dependencies. Since we can only export environment variables directly, I’m emulating the virtualenv’s bin/activate script by setting some Python-specific variables and exporting a new prompt.

Now when we change to the project’s directory, or any underlying directory, direnv tries to activate the environment:

fboender @ jib ~ $ cd ~/Projects/fboender/foobar/
direnv: error .envrc is blocked. Run `direnv allow` to approve its content.

This warning is to be expected. Running random code when you switch to a directory can be dangerous, so direnv wants you to explicitly confirm that it’s okay. When you see this message, you should always verify the contents of the .envrc file!

We allow the .envrc, and direnv starts executing the contents. Since the python virtualenv is missing, it automatically creates it and installs the required dependencies. It then sets some paths in the environment and changes the prompt:

fboender @ jib ~ $ direnv allow
direnv: loading .envrc
Creating new environment
Already using interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in /home/fboender/.pyenvs/foobar/bin/python3
Also creating executable in /home/fboender/.pyenvs/foobar/bin/python
Installing setuptools, pkg_resources, pip, wheel...done.
Installing requirements
Collecting jsonxs (from -r ./requirements.txt (line 1))
Collecting requests (from -r ./requirements.txt (line 2))
  Using cached https://files.pythonhosted.org/packages/49/df/50aa1999ab9bde74656c2919d9c0c085fd2b3775fd3eca826012bef76d8c/requests-2.18.4-py2.py3-none-any.whl
Collecting tempita (from -r ./requirements.txt (line 3))
Collecting urllib3<1.23,>=1.21.1 (from requests->-r ./requirements.txt (line 2))
  Using cached https://files.pythonhosted.org/packages/63/cb/6965947c13a94236f6d4b8223e21beb4d576dc72e8130bd7880f600839b8/urllib3-1.22-py2.py3-none-any.whl
Collecting chardet<3.1.0,>=3.0.2 (from requests->-r ./requirements.txt (line 2))
  Using cached https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl
Collecting certifi>=2017.4.17 (from requests->-r ./requirements.txt (line 2))
  Using cached https://files.pythonhosted.org/packages/7c/e6/92ad559b7192d846975fc916b65f667c7b8c3a32bea7372340bfe9a15fa5/certifi-2018.4.16-py2.py3-none-any.whl
Collecting idna<2.7,>=2.5 (from requests->-r ./requirements.txt (line 2))
  Using cached https://files.pythonhosted.org/packages/27/cc/6dd9a3869f15c2edfab863b992838277279ce92663d334df9ecf5106f5c6/idna-2.6-py2.py3-none-any.whl
Installing collected packages: jsonxs, urllib3, chardet, certifi, idna, requests, tempita
Successfully installed certifi-2018.4.16 chardet-3.0.4 idna-2.6 jsonxs-0.6 requests-2.18.4 tempita-0.5.2 urllib3-1.22
direnv: export +PYTHONPATH +VIRTUAL_ENV ~PATH
(foobar) fboender @ jib ~/Projects/fboender/foobar (master) $

I can now work on the project without having to manually switch anything. When I’m done with the project and change to a different dir, it automatically unloads:

(foobar) fboender @ jib ~/Projects/fboender/foobar (master) $ cd ~
direnv: unloading
fboender @ jib ~ $

And that’s about it! You can read more about direnv on its homepage.

SSL/TLS client certificate verification with Python v3.4+ SSLContext

Saturday, June 2nd, 2018

Normally, an SSL/TLS client verifies the server’s certificate. It’s also possible for the server to require a signed certificate from the client. These are called Client Certificates. This ensures that not only can the client trust the server, but the server can also trust the client.

Traditionally in Python, you’d pass the ca_certs parameter to the ssl.wrap_socket() function on the server to enable client certificates:

# Client
ssl.wrap_socket(s, ca_certs="ssl/server.crt", cert_reqs=ssl.CERT_REQUIRED,
                certfile="ssl/client.crt", keyfile="ssl/client.key")

# Server
ssl.wrap_socket(connection, server_side=True, certfile="ssl/server.crt",
                keyfile="ssl/server.key", ca_certs="ssl/client.crt")

Since Python v3.4, the more secure, and thus preferred, method of wrapping a socket in the SSL/TLS layer is to create an SSLContext instance and call SSLContext.wrap_socket(). However, the SSLContext.wrap_socket() method does not have the ca_certs parameter, nor is it directly obvious how to require client certificates on the server side.

The documentation for SSLContext.load_default_certs() does mention client certificates:

Purpose.CLIENT_AUTH loads CA certificates for client certificate verification on the server side.

But SSLContext.load_default_certs() loads the system’s default trusted Certificate Authority chains so that the client can verify the server‘s certificates. You generally don’t want to use these for client certificates.

In the Verifying Certificates section, it mentions that you need to specify CERT_REQUIRED:

In server mode, if you want to authenticate your clients using the SSL layer (rather than using a higher-level authentication mechanism), you’ll also have to specify CERT_REQUIRED and similarly check the client certificate.

I didn’t spot how to specify CERT_REQUIRED in either the SSLContext constructor or the wrap_socket() method. Turns out you have to manually set a property on the SSLContext on the server to enable client certificate verification, like this:

context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
context.verify_mode = ssl.CERT_REQUIRED
context.load_cert_chain(certfile=server_cert, keyfile=server_key)
context.load_verify_locations(cafile=client_certs)

Here’s a full example of a client and server who both validate each other’s certificates:

For this example, we’ll create self-signed server and client certificates. Normally you’d use a server certificate from a Certificate Authority such as Let’s Encrypt, and you’d set up your own Certificate Authority so you can sign and revoke client certificates.

Create server certificate:

openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout server.key -out server.crt

Make sure to enter ‘example.com’ for the Common Name.

Next, generate a client certificate:

openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout client.key -out client.crt

The Common Name for the client certificate doesn’t really matter.

Client code:

#!/usr/bin/python3

import socket
import ssl

host_addr = '127.0.0.1'
host_port = 8082
server_sni_hostname = 'example.com'
server_cert = 'server.crt'
client_cert = 'client.crt'
client_key = 'client.key'

context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=server_cert)
context.load_cert_chain(certfile=client_cert, keyfile=client_key)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
conn = context.wrap_socket(s, server_side=False, server_hostname=server_sni_hostname)
conn.connect((host_addr, host_port))
print("SSL established. Peer: {}".format(conn.getpeercert()))
print("Sending: 'Hello, world!")
conn.send(b"Hello, world!")
print("Closing connection")
conn.close()

Server code:

#!/usr/bin/python3

import socket
from socket import AF_INET, SOCK_STREAM, SO_REUSEADDR, SOL_SOCKET, SHUT_RDWR
import ssl

listen_addr = '127.0.0.1'
listen_port = 8082
server_cert = 'server.crt'
server_key = 'server.key'
client_certs = 'client.crt'

context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
context.verify_mode = ssl.CERT_REQUIRED
context.load_cert_chain(certfile=server_cert, keyfile=server_key)
context.load_verify_locations(cafile=client_certs)

bindsocket = socket.socket()
bindsocket.bind((listen_addr, listen_port))
bindsocket.listen(5)

while True:
    print("Waiting for client")
    newsocket, fromaddr = bindsocket.accept()
    print("Client connected: {}:{}".format(fromaddr[0], fromaddr[1]))
    conn = context.wrap_socket(newsocket, server_side=True)
    print("SSL established. Peer: {}".format(conn.getpeercert()))
    buf = b''  # Buffer to hold received client data
    try:
        while True:
            data = conn.recv(4096)
            if data:
                # Client sent us data. Append to buffer
                buf += data
            else:
                # No more data from client. Show buffer and close connection.
                print("Received:", buf)
                break
    finally:
        print("Closing connection")
        conn.shutdown(socket.SHUT_RDWR)
        conn.close()

Output from the server looks like this:

$ python3 ./server.py 
Waiting for client
Client connected: 127.0.0.1:51372
SSL established. Peer: {'subject': ((('countryName', 'AU'),),
(('stateOrProvinceName', 'Some-State'),), (('organizationName', 'Internet
Widgits Pty Ltd'),), (('commonName', 'someclient'),)), 'issuer':
((('countryName', 'AU'),), (('stateOrProvinceName', 'Some-State'),),
(('organizationName', 'Internet Widgits Pty Ltd'),), (('commonName',
'someclient'),)), 'notBefore': 'Jun  1 08:05:39 2018 GMT', 'version': 3,
'serialNumber': 'A564F9767931F3BC', 'notAfter': 'Jun  1 08:05:39 2019 GMT'}
Received: b'Hello, world!'
Closing connection
Waiting for client

Output from the client:

$ python3 ./client.py 
SSL established. Peer: {'notBefore': 'May 30 20:47:38 2018 GMT', 'notAfter':
'May 30 20:47:38 2019 GMT', 'subject': ((('countryName', 'NL'),),
(('stateOrProvinceName', 'GLD'),), (('localityName', 'Ede'),),
(('organizationName', 'Electricmonk'),), (('commonName', 'example.com'),)),
'issuer': ((('countryName', 'NL'),), (('stateOrProvinceName', 'GLD'),),
(('localityName', 'Ede'),), (('organizationName', 'Electricmonk'),),
(('commonName', 'example.com'),)), 'version': 3, 'serialNumber':
'CAEC89334941FD9F'}
Sending: 'Hello, world!'
Closing connection

A few notes:

  • You can concatenate multiple client certificates into a single PEM file to authenticate different clients (see the example after this list).
  • You can re-use the same cert and key on both the server and client. This way, you don’t need to generate a specific client certificate. However, any clients using that certificate will require the key, and will be able to impersonate the server. There’s also no way to distinguish between clients anymore.
  • You don’t need to set up your own Certificate Authority and sign client certificates. You can just generate them with the above-mentioned openssl command and add them to the trusted certificates file. If you no longer trust the client, just remove the certificate from the file.
  • I’m not sure if the server verifies the client certificate’s expiration date.
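For instance, building the trusted client certificates file from two hypothetical client certs is just concatenation:

cat client1.crt client2.crt > client_certs.pem

The server’s load_verify_locations(cafile=...) will then accept either client.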

The text of all posts on this blog, unless specifically mentioned otherwise, is licensed under this license.