Prettier Code

If you care about code formatting, you might want to take a look at Prettier. It changed the way I think about coding styles quite a bit.

I used to spend a lot of time fiddling with code styles: debating spaces vs. tabs, comparing Symfony’s Coding Standards with Google’s Styleguides. With JavaScript becoming the language of choice for most new projects, I settled on the Airbnb JS Style Guide, and with the matching linter rules the topic was settled for quite some time.

But half a year ago, we decided at work to use Prettier for a new project. And this has changed how I think about code styleguides in a pretty fundamental way: I just don’t care anymore.

What Prettier does: Instead of looking at the code style as it was written and applying rules to it, Prettier parses the code and prints it in its own format. So the leeway classic styleguides give every developer isn’t a topic to ponder anymore.

Like many linters, it automatically reformats files on save. At first this felt like a heavy intrusion into my work. After all, I put – or at least pretended to put – some effort and pride into the styling of the code I wrote. But a few days later I had almost completely stopped thinking about code formatting. Months later I’m on the other end: I write code and am thoroughly confused if it doesn’t automatically get reformatted into the now familiar Prettier style.

So if you happen to start a fresh project, just give Prettier a try for a couple of days.
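Part of the appeal is how little there is left to decide: a project-wide .prettierrc covers the handful of options Prettier exposes at all. A minimal sketch (the option values here are just my assumptions, not a recommendation):

```json
{
  "singleQuote": true,
  "trailingComma": "es5",
  "printWidth": 80
}
```

Everything not listed falls back to Prettier’s defaults, which is exactly the point.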

DNS Resolution in Docker Containers

Networks in Docker are powerful, and only recently did I learn about the embedded DNS server. So if you maintain containers for which DNS resolution is important, but which might not have the most reliable connection to the common DNS servers (e.g. Google’s and Cloudflare’s), this might be a feature to look into.

In my situation an OpenResty / nginx container runs in multiple regions (EU, US, China) and its main purpose is to distribute requests to other upstream services. To do so it’s necessary to set the resolver directive and tell nginx which DNS servers to use. First decision: Google’s and Cloudflare’s public DNS servers. This worked fine until the container in China started to get timeouts while attempting to connect to those DNS servers, essentially bringing down the whole service.
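In nginx, runtime DNS lookups only happen when the upstream is referenced through a variable; combined with the resolver directive, the setup looked roughly like this (the hostname and timing values are illustrative):

```nginx
# Use public DNS servers and re-resolve every 30 seconds
resolver 8.8.8.8 1.1.1.1 valid=30s;
resolver_timeout 5s;

location / {
    # Using a variable forces nginx to resolve the hostname at
    # request time instead of once at startup
    set $upstream https://api.example.com;
    proxy_pass $upstream;
}
```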

To get around this I toyed with different approaches:

  • Switching from hostnames to IP addresses for routing — didn’t work directly because of SSL certificates.
  • Adding a local DNS service (dnsmasq) inside the container — I didn’t really want to add any more complexity to the container itself.
  • Adding a separate container to handle DNS resolution.

Only then did I stumble across the embedded DNS server. If the container runs in a custom network, it’s always available at 127.0.0.11 and adheres to the host’s DNS resolution configuration. While all other host machines already had a robust enough DNS config, I manually added the most crucial IP addresses to the /etc/hosts file on the Chinese host. Bingo, no more DNS issues ever since.
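For reference, a user-defined network is all it takes to get the embedded DNS server; a docker-compose sketch (service and network names are assumptions):

```yaml
# docker-compose.yml -- attaching a service to any custom network
# enables Docker's embedded DNS server at 127.0.0.11 inside it
version: "3"
services:
  proxy:
    image: openresty/openresty
    networks:
      - web
networks:
  web: {}
```

Inside the container, nginx’s resolver directive can then simply point at 127.0.0.11.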

I guess the lesson here for me is to dig a bit deeper into the tools already at hand before going down the rabbit hole and constructing overly complex systems.

iA Writer Quattro


Recently iA released a new font: iA Writer Quattro. It looks very monospace-y but has some wider and narrower characters. A few days ago I set it as the default font for Markdown files and really like its feel. (Using it for code didn’t work out for me.)

The fonts are available for free on GitHub.

Makefiles to Rule Them All

In my last blogpost you might have already stumbled over me using a Makefile to simplify a project task. My enthusiasm for Makefiles goes a bit further, and by now I add one to most of my projects. Here is why I do this and how they make my everyday developer life a bit easier.

None of my projects is written in C or C++, which traditionally rely on Makefiles for compilation. Instead my projects are written in JavaScript (Node), Ruby and PHP. I use the make command as a wrapper for the individual tools that come with each ecosystem to create a common interface. This way I can just run make test no matter whether the tests are written in JavaScript using Mocha or in PHP using PHPUnit.
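A sketch of what such a Makefile looks like (the targets and the underlying commands are examples, not a fixed convention):

```makefile
.PHONY: install test lint

install:
	npm install      # or: composer install / bundle install

test:
	npx mocha        # or: ./vendor/bin/phpunit / bundle exec rspec

lint:
	npx eslint .     # or: ./vendor/bin/phpcs / bundle exec rubocop
```

New contributors (or future me) only need to remember make install and make test, regardless of the ecosystem underneath.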

Adding a Microblog to Jekyll

Since I learned about micro.blog back in January I wanted to give it a try on my blog. Over the last few days I made a few attempts at it, and as you can see, I was successful. Now I want to share how I implemented it.

A hat tip to the Creating a Microblog with Jekyll post, which covers a lot of the groundwork. I picked up a lot of its ideas and will focus a bit more on the integration with my existing blog in this post.

Update: As you might have noticed, I removed the microblog from this site. My interest in micro.blog tailed off rather quickly. Nonetheless the implementation below still works as expected.
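For context, the heart of the integration was a dedicated Jekyll collection for the short posts; a minimal _config.yml sketch (the collection name and permalink are assumptions):

```yaml
# _config.yml excerpt -- a separate collection keeps micro posts
# out of the regular blog post listing
collections:
  micros:
    output: true
    permalink: /micro/:title/
```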

Goodbye Google Analytics

I just removed Google Analytics from this blog. I use Firefox’s Tracking Protection, and thus Google Analytics is blocked on all websites I visit anyway (including this blog) — time to quit this double standard.

I ended up not bothering with a permanent setup. Instead I run goaccess with the latest access logs whenever I want to look at the numbers:

zcat -f access.log* | goaccess

Deploying Jekyll with Bitbucket Pipelines

The technology behind this blog is in permanent flux. It’s my primary playground to try out new stuff. As you might know, Jekyll generates this blog. It is a static site generator which takes my posts (written as Markdown files) and generates the HTML pages you’re looking at right now. To be more specific: the source files are stored on Bitbucket, and a server at Hetzner serves the HTML files. When changes were made in Bitbucket, it would trigger the Jekyll setup on the server to publish everything straight away.

Some time ago, Bitbucket introduced Pipelines, a feature which gives you the ability to run build and deployment scripts on Bitbucket itself. Curious how much of a continuous deployment pipeline I could create, I decided to give it a try and move Jekyll from running on my server to letting Bitbucket take care of it. This post details the process, what I came up with, and some general thoughts on this feature-set of Bitbucket.
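The whole thing boils down to a bitbucket-pipelines.yml in the repository root; a sketch of the shape such a file takes (the Docker image, branch name, and deploy target are assumptions):

```yaml
# bitbucket-pipelines.yml
image: ruby:2.5

pipelines:
  branches:
    master:
      - step:
          script:
            - bundle install
            - bundle exec jekyll build
            # push the generated site to the web server (target assumed)
            - rsync -az _site/ deploy@example.com:/var/www/blog/
```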

PHP Integration Tests Using the Built-in Server

Microservices have popped up everywhere and I have written a few of them at work by now. Given their limited scope I find microservices a pleasure to write tests for, compared to most (legacy) monoliths. In this post I want to share an approach to testing microservices written in PHP using its built-in server, and some pitfalls we encountered so far.

Let’s assume we have a nice little microservice built with Slim, Lumen or whichever framework you prefer. We have some routes that return JSON responses and accept various JSON payloads.

We’re able to run this application using PHP’s built-in server by running for example:

ENVIRONMENT=production php -S localhost:8080 -t public public/index.php

In this case public/index.php is the entry file for any HTTP request, and the server is now reachable at http://localhost:8080.

Let’s also assume you already have a PHPUnit setup available which covers unit tests. For the sake of brevity we will skip the whole PHPUnit setup and jump directly into writing tests against the built-in server.

Moving Clouds: Hello Hetzner

This page and most of my other online stuff are now served from a small machine in the Hetzner Cloud. Previously I had a small droplet over at DigitalOcean. Using the smallest instances on both, I get about the same performance for half the cost. Since most stuff I run consists of static pages, there is currently no need for a bigger instance.

The most tedious part of moving everything over were the DNS entries. But all records propagated surprisingly quickly, so that I had to wait less than an hour in most cases. By now all caches should be updated and I have shut down the old instance – of course with my fingers crossed that I didn’t forget anything.

For anybody curious, this is my current setup:

The Comeback of Feeds

A year ago I wrote about how newsletters became my main way to stay up to date and how RSS died a little for me when Google Reader was sunsetted. Now it’s 2018 and things have changed a bit: I’m back on the “feeds-are-awesome” team.

As I tend to only keep mails in my inbox which are still “to-do”, the newsletters felt more and more like a task I had to complete. Especially whenever a few piled up, this became annoying. Recently I saw two things that rekindled my interest in feeds. On the one hand, Brent Simmons is currently working on a new open-source feed reader for macOS called Evergreen. On the other hand, I stumbled upon micro.blog, a sort of RSS-based Twitter.

So I reactivated my Feedbin account – a web-based feed reader. For now I mostly subscribe to the personal blogs of people I know and/or admire. And while I initially unsubscribed from a few newsletters, I noticed a few days ago that you can subscribe to newsletters via Feedbin, so I re-subscribed to a few there. For now I use the web interface only and look forward to Evergreen gaining the ability to sync with Feedbin.

On the second topic, micro.blog: The idea is that you can write Twitter-like posts on your own blog and syndicate them through RSS. All publishing happens decentralized, under my own control. The micro.blog platform then consolidates all those feeds into a neat interface. I have yet to fully set mine up though.

And as a general benefit, this blog has now a favicon.