Passing the AWS Certified Developer Exam

Two weeks ago I passed the AWS Certified Developer Exam. As it was a huge help for me to read about other people’s experiences, I also want to share my story. So this is a short retro on the what, how and why.

The AWS Developer Certification

Amazon Web Services is one of those clouds everybody keeps talking about. It’s a multitude of different services for all kinds of use cases. I’ve only been using a small fraction that fitted our projects at work.

AWS offers different tracks of certifications:

  • Solutions Architect — Focusing on the infrastructure to run your application on AWS
  • SysOps — Concentrating on the automation and the operations part
  • Developer — Using AWS from a software developer perspective

The chance to do the certification came from work. My employer, Scandio, is an AWS partner, and in this role we’re required to have certified staff. As one of the more experienced AWS users in our company, I volunteered to give the Developer path a try. Why the developer and not the solutions architect (which is the most common one)? The developer path overlapped the most with what I’ve been doing with AWS so far.

How I Tackled the Exam

This certification was the first exam I took since leaving university and I actually enjoyed “studying” more than I anticipated.

I used the mornings and afternoons of my workdays for studying. In total I put in a bit over 30 hours over the course of two weeks.

As a starting point I went through the Cloud Guru course in about a week. My personal take on the topics covered in this course ranged from “I know this already, let’s quickly skim through it” (ElasticBeanstalk, EC2, S3) to “used once but let’s see how it’s actually meant to be used” (SNS, SQS, Lambda, API Gateway) to “never used, let’s see what’s behind this” (Kinesis, Cognito, CodeBuild, CodePipeline).

The second half was more focused on working with the services I didn’t have much experience with so far: reading up on the FAQs and actually setting up examples. I also read a lot of other people’s posts about their experiences with the exam, which helped me quite a bit in deciding what to look into.

As a last preparation step I took the test exam provided by AWS and the exam simulator offered by Cloud Guru. This allowed me to get a feeling for the type of questions.

The whitepapers that are also recommended for preparation I mostly skimmed during my daily commute. Personally, there wasn’t much new information in them that I hadn’t already picked up during the last few years as a software developer.

The Exam Itself

The questions in the exam were mostly scenario based — like “You are asked to set up an automated deployment for X. How can you achieve this while always having a capacity of Y%?”. Sometimes all answers would solve the problem at hand, but only one or two would actually fulfill the specific criteria asked for. So I took the time to read every question twice.

Per the NDA I’m not allowed to share any specific questions that were asked in the exam. But I want to share at least the topics which were part of my exam, as I also benefited from others doing so while preparing.

Deeply covered

  • CI / CD with AWS: CodeBuild, CodeDeploy, CodePipeline — How can you override configurations; what options are offered by the different services; which service is the right one for specific scenarios
  • SAM, Lambda, API Gateway: How are they used effectively together; Some more specific questions on the services themselves
  • Cognito: When to use which feature
  • Elastic Container Service / Docker: How to set it up properly and use effectively with other services
  • CloudWatch: Mostly in relation to other services how CloudWatch could help in specific scenarios

Superficially covered

  • EC2 / VPC / Security Groups
  • RDS
  • SNS
  • X-Ray
  • CloudFormation

Not covered

  • Kinesis
  • Details from the AWS Whitepapers unrelated to the services

Closing Thoughts

Having gone through the process I definitely got a better understanding of many AWS services. In some cases I was already able to use some of my learnings at work. The result of the exam (954 / 1000) was better than I expected before starting with the certification. So, would I do it again? Yes.

If I were to do this again (or as a personal learning for other certifications), I would put more time into actually using the services I’m not familiar with. In this case that would have been SAM, API Gateway and the CodePipeline-related services.

But I would again try to fit it into a few weeks at most, because this allowed me to stay focused and not get carried away by everyday business.

Prettier Code

If you care about code formatting, you might want to take a look at Prettier. It changed the way I think about coding styles quite a bit.

I used to spend a lot of time fiddling with code styles: from debating spaces vs. tabs to comparing Symfony’s Coding Standards and Google’s Styleguides. With JavaScript becoming the language of choice for most new projects, I settled on the Airbnb JS Style Guide, and with the matching linter rules the topic was off the table for quite some time.

But half a year ago, we decided at work to use Prettier for a new project. And this has changed how I think about code styleguides in a pretty fundamental way: I just don’t care anymore.

What Prettier does: instead of looking at the code style as it was written and applying rules to it, Prettier parses the code and prints it in its own format. So the leeway classic styleguides give every developer isn’t a topic to ponder anymore.
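A quick illustration with a made-up snippet, formatted with Prettier’s default settings:

// Before: however it was typed
const user={name:'Ada',  role:"admin"}

// After: Prettier's output
const user = { name: "Ada", role: "admin" };

However the input is written, the output always looks the same.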

Like many linters, it automatically reformats the files on saving. At first it felt like a heavy intrusion into my work. After all, I had – or at least pretended to have – put some effort and pride into the styling of the code I wrote. But a few days later I had almost completely stopped thinking about code formatting. Months later I’m on the other end: I write code and am heavily confused if it doesn’t automatically get reformatted into the now familiar Prettier style.

So if you happen to start a fresh project, just give Prettier a try for a couple of days.

DNS Resolution in Docker Containers

Networks in Docker are powerful, and only recently did I learn about the embedded DNS server. So if you maintain containers for which DNS resolution is important but which might not have the most reliable connection to the common public DNS servers (e.g. Google’s 8.8.8.8 and Cloudflare’s 1.1.1.1), this might be a feature to look into.

In my situation an OpenResty / nginx container runs in multiple regions (EU, US, China) and its main purpose is to distribute requests to other upstream services. To do so it’s necessary to set the resolver directive and tell nginx which DNS server to use. First decision: 8.8.8.8 and 1.1.1.1. This worked fine until the container in China started to get timeouts while attempting to connect to those DNS servers, essentially bringing down the whole service.

To get around this I toyed with different approaches:

  • Switch from hostnames to IP addresses for routing — didn’t work directly because of SSL certificates.
  • Adding a local DNS service in the container (dnsmasq) — didn’t really want to add any more complexity to the container itself.
  • Adding a separate container to handle DNS resolution.

Only then did I stumble across the embedded DNS server. If the container runs in a custom network, it’s always available at 127.0.0.11 and will adhere to the host’s DNS resolution configuration. While all other host machines already had a robust enough DNS config, I manually added the most crucial IP addresses to the /etc/hosts file on the Chinese host. Bingo, no more DNS issues ever since.
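If you want to try this yourself, here is a minimal sketch of the two pieces involved (the network and image names are only examples, not my actual setup):

# Create a user-defined network; containers attached to it get Docker's embedded DNS
docker network create proxy-net
docker run -d --name proxy --network proxy-net openresty/openresty

# In the nginx configuration, point the resolver at the embedded DNS server
# instead of an external one like 8.8.8.8:
resolver 127.0.0.11 valid=30s;

With that in place, nginx resolves upstream hostnames through Docker instead of an external DNS server.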

I guess the lesson here for me is to dig a bit deeper into the tools already at hand before going down the rabbit hole and constructing overly complex systems.

iA Writer Quattro

Recently iA released a new font: iA Writer Quattro. It looks very monospace-y but has some wider and smaller characters. A few days ago I set it as the default font for Markdown files and really like its feel. (Using it for code didn’t work out for me.)

The fonts are available for free on GitHub.

Makefiles to Rule Them All

In my last blogpost you might have already stumbled over me using a Makefile to simplify a project task. My enthusiasm for Makefiles goes a bit further, and by now I add one to most of my projects. Here is why I do this and how they make my everyday developer life a bit easier.

None of my projects are written in C or C++, which traditionally rely on a Makefile for compilation. Instead my projects are written in JavaScript (Node), Ruby and PHP. I use the make command as a wrapper around the individual tools that come with each ecosystem to create a common interface. This way I can just run make test no matter whether the tests are written in JavaScript and use Mocha or are written in PHP and use PHPUnit. A minimal sketch of such a Makefile is shown below.
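This is roughly what such a Makefile looks like in a JavaScript project (the targets and commands are illustrative, not a copy of my actual files; remember that recipe lines must be indented with a tab):

.PHONY: install test lint

install:
	npm install

test:
	npm test

lint:
	npx eslint src/

A PHP project would keep the same target names but call composer install and vendor/bin/phpunit instead, so make test works everywhere.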

Adding a Microblog to Jekyll

Since I learned about micro.blog back in January I wanted to give it a try on my blog. Over the last few days I made a few attempts at it, and as you can see at xam.io/microblog and on micro.blog, I was successful. Now I want to share how I implemented it.

A hat tip to the Creating a Microblog with Jekyll post which covers a lot of the groundwork. I picked up a lot of his ideas and will focus a bit more on the integration with my existing blog in this post.

Update: As you might have noticed I removed the microblog from this site. My interest in micro.blog tailed off rather quickly. Nonetheless the implementation below still works as expected.

Goodbye Google Analytics

I just removed Google Analytics from this blog. I use the Firefox Tracking Protection and thus Google Analytics is blocked on all websites I visit anyway (including this blog) — time to quit this double standard.

I ended up not bothering with a permanent setup. Instead I run goaccess with the latest access logs whenever I want to look at the numbers:

zcat -f access.log* | goaccess
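For the occasional shareable report instead of the terminal UI, goaccess can also write an HTML file directly (the output file name here is just an example):

zcat -f access.log* | goaccess --log-format=COMBINED -o report.html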

Deploying Jekyll with Bitbucket Pipelines

The technology behind this blog is in permanent flux. It’s my primary playground for trying out new stuff. As you might know, Jekyll generates this blog. It is a static site generator which takes my posts (written as Markdown files) and generates the HTML pages you’re looking at right now. To be more specific: the source files are stored on Bitbucket.org and a server at Hetzner serves the HTML files. Whenever changes were pushed to Bitbucket, the Jekyll setup on the server would be triggered to publish everything straight away.

Some time ago, Bitbucket.org introduced Pipelines, a feature which gives you the ability to run build and deployment scripts on Bitbucket itself. Curious how much of a continuous deployment pipeline I could create, I decided to give it a try and move the Jekyll build from my server to Bitbucket. This post details the process, what I came up with, and some general thoughts on this feature set of Bitbucket.
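To give an idea of where this ends up, here is a stripped-down sketch of a bitbucket-pipelines.yml that builds the site and copies it to a server (the Ruby version, branch name and deploy target are placeholders, not my exact configuration):

image: ruby:2.6

pipelines:
  branches:
    master:
      - step:
          script:
            - bundle install
            - bundle exec jekyll build
            # assumes an SSH key for the target server is configured in the repository settings
            - scp -r _site/* deploy@example.com:/var/www/blog/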

PHP Integration Tests Using the Built-in Server

Microservices have popped up everywhere and I have written a few of them at work by now. Given the limited scope, I find microservices a pleasure to write tests for compared to most (legacy) monoliths. In this post I want to share an approach to testing microservices written in PHP using PHP’s built-in server, and some pitfalls we encountered so far.

Let’s assume we have a nice little microservice built with Slim, Lumen or whichever framework you prefer. We have some routes that return JSON responses and accept various JSON payloads.

We’re able to run this application using PHP’s built-in server by running for example:

ENVIRONMENT=production php -S 0.0.0.0:8080 -t public public/index.php

In this case public/index.php is the entry file for any HTTP request, and the server is now reachable at http://localhost:8080.

Let’s also assume you already have a PHPUnit setup available which covers unit tests. For the sake of brevity we will skip the whole PHPUnit setup and jump directly into writing tests against the built-in server.
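As a rough sketch of where this is heading (the /health route, the test class name and the expected response body are made up for illustration), such an integration test boils down to firing real HTTP requests at the running server:

<?php

use PHPUnit\Framework\TestCase;

// Assumes the built-in server was started beforehand, e.g.:
//   ENVIRONMENT=testing php -S 0.0.0.0:8080 -t public public/index.php
class HealthEndpointTest extends TestCase
{
    private const BASE_URL = 'http://localhost:8080';

    public function testHealthRouteReturnsOk(): void
    {
        // Fire a real HTTP request against the built-in server
        $ch = curl_init(self::BASE_URL . '/health');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

        $body = curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        // Assert on status code and the decoded JSON payload
        $this->assertSame(200, $status);
        $this->assertSame('ok', json_decode($body, true)['status'] ?? null);
    }
}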

Moving Clouds: Hello Hetzner

This page and most of my other online stuff are now served from a small machine in the Hetzner Cloud. Previously I had a small droplet over at DigitalOcean. Using the smallest instances on both, I get about the same performance for half the cost. Since most of the stuff I run is based on static pages, there is currently not much need for a bigger instance.

The most tedious part of moving everything over was updating the DNS entries. But all records propagated surprisingly quickly, so that I had to wait less than an hour in most cases. By now all caches should be updated and I have shut down the old instance – of course with my fingers crossed that I didn’t forget anything.

For anybody curious, this is my current setup: