EOY 2019
I'm somewhat surprised my last year-in-review post is about 2014. I write these for myself every year but apparently didn't get around to publishing a public version in the last few years. So here's an unsorted list of things I did in 2019, topics I found interesting and stuff that changed.
Most memorable were the four weeks I spent road-tripping Australia in October and November, driving from Sydney to Cairns in a campervan. Likely the best vacation of my life. I still have a ton of photos and videos to sort through. As we left just when the fires started, I am really sad about the current situation over there.
Some media consumption stats: Read 17 books. Listened to some 36,000 minutes on Spotify. Went to 7 concerts (fav: Muse in Berlin).
Upgraded to the iPhone 11 for the trip to Australia — worth it for the camera alone. Also decided to give the Apple Watch a try — not yet convinced.
Bought a Synology DS218+ as the centerpiece of our home network. It currently acts as VPN, Time Machine backup destination, PiHole, Unifi Controller and Plex server (and as data store for everything else).
At work I got to do two AWS certifications: Developer Associate and DevOps Professional. Worked on a lot of interesting projects there, mostly centred around the services for Home Connect.
Also got to attend the We Are Developers Congress in Berlin. Favourite talk: Rasmus Lerdorf: 25 Years of PHP.
New favorite coding tools: TypeScript; Dank Mono as coding font; a color scheme that toggles between Monokai Pro and Tomorrow Night.
Went to the gym more often than in the years before, but regularity only set in around November / December.
Onto a great 2020.
dapp
alias dapp='docker-compose exec app'
Going through things I changed in my workflow this year, I saw this alias. I use it very frequently to run commands in the Docker container named `app`. Most projects I work on have their main component in a container named `app`, so this makes a nice shortcut for anything development-related in the container. (And it replaced my `dssh` script for the most part.)
Some examples:
# Install dependencies in a node project
dapp npm install
# Update dependencies in a PHP project
dapp composer update
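If the main service isn't called app in every project, the same idea also works as a small shell function. This variant is a hypothetical extension, not part of my actual setup (the DAPP_SERVICE variable is made up for this example):

# Same as the alias, but the service name can be overridden
dapp() {
  docker-compose exec "${DAPP_SERVICE:-app}" "$@"
}

# Runs in the "worker" service instead of "app"
DAPP_SERVICE=worker dapp npm run jobs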
5 Lessons Learned From Developing on the Fitbit Platform
Earlier this year our customer BSH tasked us with an exciting project: Let's create a Home Connect app for Fitbit smartwatches. The goal: Home Connect users can monitor and control their home appliances like coffee machines, washers or ovens from their wrists. Six weeks later we were proud to see it launch.
A few days ago I published a piece on developing applications for Fitbit watches on the Scandio blog.
Passing the AWS Certified Developer Exam
Two weeks ago I passed the AWS Certified Developer exam. As it was a huge help for me to read about other people's experiences, I also want to share my story. So this is a short retro on the what, how and why.
The AWS Developer Certification
Amazon Web Services is one of those clouds everybody keeps talking about: a multitude of different services for all kinds of use cases. I've only been using the small fraction that fit our projects at work.
AWS offers different tracks of certifications:
- Solutions Architect — Focusing on the infrastructure to run your application on AWS
- SysOps — Concentrating on the automation and the operations part
- Developer — Using AWS from a software developer perspective
The chance to do the certification came from work. My employer, Scandio, is an AWS partner, and in this role we're required to have certified staff. As one of the more experienced AWS users in our company I volunteered to give the Developer path a try. Why the developer and not the solutions architect (which is the most common one)? The developer path overlapped the most with what I've been doing with AWS so far.
How I Tackled the Exam
This certification was the first exam I took since leaving university and I actually enjoyed "studying" more than I anticipated.
I used the mornings and afternoons of my workdays for studying. In total I put in a bit over 30 hours over the course of two weeks.
As a starting point I went through the A Cloud Guru course in about a week. My personal take on the topics covered in this course ranged from "I know this already, let's quickly skim through it" (Elastic Beanstalk, EC2, S3) to "used once but let's see how it's actually meant to be used" (SNS, SQS, Lambda, API Gateway) to "never used, let's see what's behind this" (Kinesis, Cognito, CodeBuild, CodePipeline).
The second half was more focused on working with the services I didn't have much experience with so far: reading up on the FAQs and actually setting up examples. I also read a lot of other people's posts about their experiences with the exam. This helped me quite a bit in deciding what to look into.
As a last preparation step I took the test exam provided by AWS and the exam simulator offered by Cloud Guru. This allowed me to get a feeling for the type of questions.
The whitepapers that are also recommended for preparation I mostly skimmed during my daily commute. For me there wasn't much new in them that I hadn't already picked up during the last few years as a software developer.
The Exam Itself
The questions in the exam were mostly scenario-based — like "You are asked to set up an automated deployment for X. How can you achieve this while always having a capacity of Y%?". Sometimes all answers would solve the problem at hand, but only one or two would actually fulfill the specific criteria asked for. So I took the time to read every question twice.
Per the NDA I'm not allowed to share any specific questions from the exam. But I want to share at least the topics that were part of my exam, as I also benefited from others doing so while preparing.
Deeply covered
- CI / CD with AWS: CodeBuild, CodeDeploy, CodePipeline — How can you override configurations; what options are offered by the different services; which service is the right one for specific scenarios
- SAM, Lambda, API Gateway: How are they used effectively together; Some more specific questions on the services themselves
- Cognito: When to use which feature
- Elastic Container Service / Docker: How to set it up properly and use it effectively with other services
- CloudWatch: Mostly in relation to other services — how CloudWatch could help in specific scenarios
Superficially covered
- EC2 / VPC / Security Groups
- RDS
- SNS
- X-Ray
- CloudFormation
Not covered
- Kinesis
- Details from the AWS Whitepapers unrelated to the services
Closing Thoughts
Having gone through the process, I definitely got a better understanding of many AWS services. In some cases I was already able to use some of my learnings at work. The result of the exam (954 / 1000) was better than I expected before starting the certification. So, would I do it again? Yes.
If I were to do this again (or as a personal learning for other certifications), I would put more time into actually using the services I'm not familiar with. In this case that would have been SAM, API Gateway and the CodePipeline-related services.
But I would again try to fit it into a few weeks at most, because this allowed me to stay concentrated and not get carried away by everyday business.
Prettier Code
If you care about code formatting, you might want to take a look at Prettier. It changed the way I think about coding styles quite a bit.
I used to spend a lot of time fiddling with code styles: from debating spaces vs. tabs to comparing Symfony's Coding Standards and Google's Styleguides. With JavaScript becoming the language of choice for most new projects, I settled on the Airbnb JS Style Guide, and with the matching linter rules the topic was settled for quite some time.
But half a year ago, we decided at work to use Prettier for a new project. And this has changed how I think about code styleguides in a pretty fundamental way: I just don't care anymore.
What Prettier does: Instead of looking at the code style as it was written and applying rules to it, Prettier parses the code and prints it in its own format. So the leeway classic styleguides give every developer isn't a topic to ponder anymore.
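To make that concrete, here is a small example. The input style is invented, and the output is roughly what Prettier produces with its default settings:

// Before: the style as the author happened to type it
const greet=function(name){return 'Hello, '+name+'!'}

// After: Prettier's own print of the same code
const greet = function(name) {
  return "Hello, " + name + "!";
};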
Like many linters, it automatically reformats files on saving. At first this felt like a heavy intrusion into my work. After all, I at least pretended to put some effort and pride into the styling of the code I wrote. But a few days later I had almost completely stopped thinking about code formatting. Months later I'm on the other end: I write code and am thoroughly confused if it doesn't automatically get reformatted into the now-familiar Prettier style.
So if you happen to start a fresh project, just give Prettier a try for a couple of days.
DNS Resolution in Docker Containers
Networks in Docker are powerful, and only recently did I learn about the embedded DNS server. So if you maintain containers for which DNS resolution is important but which might not have the most reliable connection to the common DNS servers (e.g. Google's `8.8.8.8` and Cloudflare's `1.1.1.1`), this might be a feature to look into.
In my situation an OpenResty / nginx container runs in multiple regions (EU, US, China) and its main purpose is to distribute requests to other upstream services. To do so it's necessary to set the `resolver` directive and tell nginx which DNS server to use. First decision: `8.8.8.8` and `1.1.1.1`. This worked fine until the container in China started to get timeouts while attempting to connect to those DNS servers, essentially bringing down the whole service.
To get around this I toyed with different approaches:
- Switching from hostnames to IP addresses for routing — didn't work directly because of SSL certificates.
- Adding a local DNS service in the container (dnsmasq) — didn't really want to add any more complexity to the container itself.
- Adding a separate container to handle DNS resolution.
Only then did I stumble across the embedded DNS server. If the container runs in a custom network, it is always available at `127.0.0.11` and will adhere to the host's DNS resolution configuration. While all the other host machines already had a robust enough DNS config, I manually added the most crucial IP addresses to the `/etc/hosts` file on the Chinese host. Bingo, no more DNS issues ever since.
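For illustration, a minimal sketch of what this looks like in the nginx configuration. The upstream hostname and the cache time are made up; the only real requirement is that the container is attached to a user-defined Docker network:

# Docker's embedded DNS server, available inside containers
# that run in a custom (user-defined) network
resolver 127.0.0.11 valid=30s;

server {
    listen 80;

    location / {
        # Using a variable makes nginx re-resolve the hostname at
        # request time instead of only once at startup
        set $upstream https://api.example.com;
        proxy_pass $upstream;
    }
}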
I guess the lesson for me is to dig a bit deeper into the tools already at hand before going down the rabbit hole and constructing overly complex systems.
iA Writer Quattro
Recently iA released a new font: iA Writer Quattro. It looks very monospace-y but has some wider and some narrower characters. A few days ago I set it as the default font for Markdown files and really like its feel. (Using it for code didn't work out for me.)
The fonts are available for free on GitHub.
Makefiles to Rule Them All
In my last blog post you might have already stumbled over me using a Makefile to simplify a project task. My enthusiasm for Makefiles goes a bit further, and by now I add one to most of my projects. Here is why I do this and how they make my everyday developer life a bit easier.
None of my projects is written in C or C++, the languages that traditionally rely on a Makefile for compilation. Instead my projects are written in JavaScript (Node), Ruby and PHP. I use the make command as a wrapper for the individual tools that come with each ecosystem, creating a common interface. This way I can just run `make test` no matter whether the tests are written in JavaScript using Mocha or in PHP using PHPUnit.
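As a minimal sketch (targets and commands are illustrative, not copied from a real project), such a wrapper Makefile for a Node project can look like this:

.PHONY: install test

# Install dependencies (a PHP project would run "composer install" here)
install:
	npm install

# Run the test suite (a PHP project would run "vendor/bin/phpunit" here)
test:
	npm test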
Goodbye Google Analytics
I just removed Google Analytics from this blog. I use the Firefox Tracking Protection and thus Google Analytics is blocked on all websites I visit anyway (including this blog) — time to quit this double standard.
Dear interwebs, let's say I want to ditch Google Analytics and client-side user analytics in general. Whats the best way to get some stats out of the server logs? Good ol' AWStats or is there a more modern tool?
— Max (@maxlmator) February 10, 2018
I ended up not bothering to have a permanent setup. Instead I run goaccess with the latest access logs whenever I want to look at the numbers:
zcat -f access.log* | goaccess
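goaccess can also write a static HTML report instead of the terminal UI; this variant is an assumption about a typical invocation, not my exact setup:

# Produce a self-contained HTML report from the rotated logs
zcat -f access.log* | goaccess --log-format=COMBINED -o report.html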
Deploying Jekyll with Bitbucket Pipelines
The technology behind this blog is in permanent flux. It's my primary playground for trying out new stuff. As you might know, Jekyll generates this blog. It is a static site generator which takes my posts (written as Markdown files) and generates the HTML pages you're looking at right now. To be more specific: the source files are stored on Bitbucket.org and a server at Hetzner serves the HTML files. When changes were pushed to Bitbucket, they would trigger the Jekyll setup on the server to publish everything straight away.
Some time ago, Bitbucket.org introduced Pipelines, a feature which gives you the ability to run build and deployment scripts on Bitbucket itself. Curious how much of a continuous deployment pipeline I could create, I decided to give it a try and move Jekyll from running on my server to letting Bitbucket take care of it. This post details the process, what I came up with, and some general thoughts on this feature set of Bitbucket.
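To give an idea of the direction, here is a minimal sketch of a bitbucket-pipelines.yml that builds a Jekyll site. The image tag and the deployment step are assumptions for illustration, not the final configuration:

image: ruby:2.5

pipelines:
  default:
    - step:
        script:
          - bundle install
          - bundle exec jekyll build
          # Deployment, e.g. an rsync of _site/ to the web server,
          # would go here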